ChatGPT may help you find information faster, but you learn less
- Richard Kunst

- Dec 6, 2025
- 5 min read
Updated: Dec 14, 2025
Chatbots such as ChatGPT can provide direct answers to your questions that seem complete and comprehensive. But a new study shows that people gain less knowledge from them than from a regular Google search.
We have become accustomed to looking up information on Google when working on projects, or when we want to verify an argument in casual conversation. Google usually responds by referring you to a number of websites on the topic, which you then have to go through to pick out the information you are looking for.

On the other hand, chatbots, which have access to huge data sets and use large language models, can sift through all that material for you, and spit out an answer in a very conversational, almost personal way. Because the chatbot is programmed to sound like a human and give you a thorough result, the tendency is to accept it at face value.
A study at the University of Pennsylvania tested more than 10,000 students over seven different experiments by asking them to learn about a topic — such as how to plant a vegetable garden — then write advice on that topic to a friend. Half of the group used a regular Google search; the other half used the latest version of ChatGPT.
The results were then analyzed for how much the participants had actually learned and how likely they were to follow their own advice. Across all seven experiments, researchers found that those who used a regular Google search provided longer, more thoughtful presentations with richer, more varied language than those who used ChatGPT. The ChatGPT users also gave less informative advice overall and were less confident in it.
The scientists concluded that because a Google search presents a number of links on the topic, it demands more brainpower to understand the subject before writing about it. Different sites might offer alternative views or suggestions, so users had to weigh those sources and come to their own conclusions.
Even in the experiment that used a version of ChatGPT with plenty of links and references, only a quarter of the participants clicked even a single link. The rest simply accepted whatever answers they were given.
This study is just one of many similar projects looking at the cognitive costs of chatbots. A group at MIT used EEGs to peer into the brains of subjects as they wrote SAT essays. In a pre-print study, they found that those using chatbots all wrote similar essays and showed the lowest level of neural activity throughout the process. They also found that the effect was cumulative: brain engagement declined further with each subsequent essay.
Meanwhile, the participants who used no technology at all to write their essays showed the highest levels of neural activity, and were overall more creative, curious, engaged, and satisfied with the end results.
This demonstrates the difference between information and knowledge. We are surrounded by an ocean of information on our computers and devices, but much of it can easily go in one ear and out the other. If it doesn’t sink in, you haven’t learned anything.
Whereas if you have to actually do some thinking, and assemble information through research to gather it into a body of knowledge, there is a much better chance it will stick. You have really learned something.
Clearly, AI programs are here to stay. They are proving to be powerful research tools, gathering details or seeing patterns in data that humans miss. But these results suggest that critical thinking, fact checking, and source verification are still important parts of the process.
I tried this myself on the topic of how to stay healthy after the age of 70. ChatGPT gave me a simple list of daily exercise and eating habits to stay fit. Google gave me links to pages from medical institutions with far more in-depth reading material that explained the process of aging, how the body changes as we grow older, and how to adapt exercise and diet appropriately for a senior.
The end results were the same, but the Google results came from a much richer set of sources that I browsed through myself to judge which approach works best. And I felt more informed after the Google search than I did using the chatbot.
Whatever search engine you use, remember to take the time to think about the results you’re getting, and you just may really learn something.
*******************************************************************************************
FEEDBACK and Discussion regarding A.I.
This is how it started … with our Blog Posts
Then this response ….
Hi Richard,
One point about AI is its segmentation. ChatGPT and other chatbots are capturing the consumer space, but the enterprise space is another topic.
The goal there, so far anyway, is to automate as much as possible — with the ultimate objective being cost reduction as productivity goes up.
But DeepSeek is offering enterprises rock-bottom pricing to compete with Gemini, Anthropic, etc., and at least one article I’ve read puts them at close to 50% market share in the U.S.
The crazy valuations placed on OpenAI and the others are going to be hard to justify if the projected revenues don’t show up.
I had one of my programming project teams develop a custom AI agent this term. Agentic AI is definitely going to make lazy brains even lazier — that’s pretty clear.
Miles McDonald
Complemented by Nik with this ....
The smarter and more general AI is, the less the division between consumer and enterprise will hold. You can already see that with the example Miles gives — DeepSeek, an open-source AI that can be, and has been, adapted to both. Once we hit AGI, it will be good for everything — that’s the whole point of it.
Nikola Danaylov
Futurist | Keynote Speaker | Tech Raconteur
🎤 Speaker Reel: youtu.be/yCObQw5e_J4
Me ...
So what is AGI ??
Nik politely educating stupid me ....
It’s a longer conversation, but here’s the bottom line: AI comes in two flavors — Artificial Narrow Intelligence and Artificial General Intelligence. Narrow AI is great at one thing at a time: beating you at chess, welding a car frame, classifying images, and so on. General AI is a different beast entirely: it can learn anything a human can, adapt to any domain, and eventually match or exceed human capability across the board.
Once intelligence becomes truly “general,” the whole idea of “consumer AI” versus “enterprise AI” collapses. That line is already fading fast. Today, the underlying model is often the same — what changes is simply the data and context it’s trained or fine-tuned on. Same brain, different diet.
As capabilities expand, AI stops being a tool for a single task and starts becoming a universal problem-solver — not just chess or welding, but dating advice, legal strategy, investment analysis, medical triage, and everything in between.
That’s why AGI, by definition, is applicable everywhere. Context is the only variable. The intelligence is the same.
Nikola Danaylov
Futurist | Keynote Speaker | Tech Raconteur
🎤 Speaker Reel: youtu.be/yCObQw5e_J4
THIS IS THE BLOG THAT NIK POSTED EARLIER THIS WEEK.