Here's an uncomfortable thought: every time we ask an LLM to summarize a document, draft an email, or write our code, we might be making ourselves a little less capable, to put it mildly.
I'm not here to vilify these tools - I use them often. However, both the neuroscience and the HCI research suggest this claim is worth taking seriously.
What the Research Found
A team from Carnegie Mellon University (shoutout to CMU, the HCI department is so close to my heart!) and Microsoft Research surveyed 319 knowledge workers who use AI tools like ChatGPT and Copilot at least weekly. They collected 936 real-world examples of AI-assisted tasks — writing emails, analyzing data, generating code, summarizing documents.
The core finding: higher confidence in AI correlates with less critical thinking. Higher confidence in one's own abilities correlates with more critical thinking.
Across all six categories of cognitive activity they measured — recall, comprehension, application, analysis, synthesis, evaluation — workers reported less effort when using AI. The biggest reductions were in the lower-order activities like recall and comprehension. But even evaluation and analysis reportedly took less effort.
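To make the shape of that core finding concrete: it is a correlational result over self-reported survey scores. Here is a minimal sketch in Python, using entirely invented numbers rather than the study's data (the variable names and the 1-5 scale are my assumptions, not the paper's), of what a negative correlation of that kind looks like:

```python
# Illustrative only: invented survey-style scores, NOT the study's dataset.
# Each position is one imaginary worker: their confidence in AI (1-5) and
# their self-reported critical-thinking effort (1-5).
from statistics import correlation  # Pearson's r, Python 3.10+

ai_confidence   = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
critical_effort = [5, 5, 4, 4, 3, 3, 2, 2, 2, 1]

r = correlation(ai_confidence, critical_effort)
print(f"r = {r:.2f}")  # negative: more trust in AI, less reported effort
```

Even a strongly negative r on data like this says nothing about which way the causal arrow points, which matters for the limitations discussed below.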
Participants cited three barriers to critical thinking:
Awareness. Many assumed AI was competent for "simple" tasks. One participant put it this way: "With straightforward factual information, ChatGPT usually gives good answers."
Motivation. Time pressure. A sales rep shared: "I must reach a certain quota daily or risk losing my job. Ergo, I use AI to save time and don't have much room to ponder over the result."
Ability. Even when participants spotted problems, they struggled to improve the output: "I'm not sure how I could have improved the text that ChatGPT wrote."
What the Psychology and Neuroscience Say
Bringing my alma mater back into the picture. The brain adapts to the demands we place on it, and when we stop placing demands, it adapts to that too.
This isn't a metaphor — new neurons in the hippocampus (the region critical for learning and memory) survive only if the brain engages in effortful tasks, and without that challenge, most of them die within weeks! Extended periods of cognitive ease don't just feel easy; they physically reduce the brain's capacity for complex thought.
We have seen this pattern before with other technologies: GPS weakens spatial memory, calculators erode mental arithmetic, and search engines reduce our retention of facts. The mechanism is consistent: when we outsource a cognitive function to an external tool, the internal capacity for that function diminishes over time. Just like taking the elevator every day — the convenience is real, but so is the atrophy. Nothing wrong with it, until you realize you're winded walking up two flights.
Generative AI is this pattern at scale. We're not just outsourcing memory or calculation anymore — we're outsourcing analysis, evaluation, and synthesis, the activities that constitute thinking itself. This one's worth thinking about (while we still can; pun intended).
How AI Changes the Nature of Thinking
The research identified three shifts in how people think when they use AI, and none of them are simply "less work."
The first is a shift from gathering to verifying. AI fetches information instantly, but the effort doesn't disappear — it moves downstream. People now spend their time checking whether what AI told them is actually true. One participant noted that AI "tends to make up information to agree with whatever points you are trying to make," which means the work of critical evaluation becomes more important, not less.
The second is a shift from solving to integrating. AI generates solutions, but they rarely fit the context perfectly. You're no longer working through the problem yourself; you're taking the model's best guess and reshaping it to fit your requirements. The cognitive task changes from construction to editing.
The third is a shift from doing to supervising. The researchers call this "task stewardship" — you're still accountable for the outcome, but you've delegated the actual work to a machine. You've become quality control. The problem is that quality control requires expertise, and expertise comes from doing the work, which you're no longer doing. Paradox.
Limitations
A few caveats worth noting, because no study is perfect.
The data is self-reported. Participants described their own critical thinking, and self-report is notoriously unreliable — people aren't great at observing their own cognition. The sample also skews young and tech-savvy, so generalizability is an open question.
More importantly, the study did not establish causation. It found that higher confidence in AI correlates with less critical thinking, but it can't tell us which way the arrow points. Maybe trusting AI makes people think less. Or maybe people who already think less critically are just more likely to trust AI. The research design wasn't built to distinguish between the two.
And "critical thinking" itself is a fuzzy concept. The researchers used an established framework (Bloom's taxonomy) to measure it, but there's no objective meter for how much thinking someone actually did. We're relying on people's perceptions of their own effort, back to the limitation mentioned prior.
Still — almost a thousand examples across diverse professions is substantial, and the qualitative data is hard to dismiss. When workers describe in their own words why they skip reflection, those explanations ring true.
What To Do About It
The research doesn't suggest we stop using AI — that ship has sailed. And don't get me wrong - these tools are genuinely useful! But the study made me rethink how I use them, and maybe you'll find my takeaways worth considering.
My first aim is to keep strengthening confidence in my own abilities, not just the prompting skills most social media posts focus on. The data shows that self-confidence predicts critical engagement with AI output: people who trust themselves think more carefully about what AI gives them. For me, this means regularly doing cognitively demanding tasks without AI, just to remember that I can. It's almost embarrassing to admit, but if a small panic sets in when you start a task and realize you won't have access to an LLM, that panic is probably a sign worth paying attention to.
The second goal is to treat routine tasks as practice, not just productivity. Participants in the study skipped critical thinking for tasks they considered trivial. I'm tempted to do this too - why spend ten minutes writing an email when AI can draft it in seconds? But I've started to wonder if those "trivial" tasks are actually low-stakes reps for my brain. Writing an email myself isn't inefficient. It's maintenance. Plus, it keeps my voice authentic.
The third aim is to actively notice the moment when I've stopped thinking and started just smashing that "enter" key. There's a difference between using AI to enhance your thinking and using it to skip your thinking, and the line is blurrier than I'd like it to be. I sometimes catch myself nodding along to AI output because it sounds reasonable, not because I've actually evaluated it. That's the trap. That's the sign to step back and reflect for a second.
Closing Thoughts
I started writing this post because I watched Advait Sarkar's TED talk at breakfast today and it stuck with me. He describes knowledge workers becoming "middle managers for their own thoughts" — overseeing ideas we never actually formed in the first place. That image hasn't left me.
The research doesn't tell us to stop using AI. Neither does the neuroscience. (And neither do I — God forbid!) But both suggest that how we use these tools matters more than we might think. The question isn't whether AI makes us more productive — it clearly can. The question is what we're trading for that productivity, and whether we're making that trade consciously.
I don't have a clean answer. I'm still figuring out where my own line is. But here's what I keep coming back to: in five years, will we look back and wish we had been more intentional about this? Or will we not even remember what it felt like to think differently?

What is clear to me is that I'm glad I didn't grow up with these tools. I often catch myself saying things like "I am so glad I didn't have AI in school", and I truly mean it. Appreciating the convenience these tools bring, and working at the forefront of new solutions to every problem there is, doesn't make me wish I'd had them earlier. Maybe they would have made my educational journey somewhat easier, but they would also have made it far less effective. And to finish off, I'm getting my Master's the old-school way: me and my physical notebook (and matcha) against the world, still grinding! :)
Where do you draw the line?
References:
- Lee, M., Sarkar, A., Tankelevitch, L., et al. (2025). "The Impact of Generative AI on Critical Thinking." CHI 2025.
- Sarkar, A. (2025). "How to stop AI from killing your critical thinking." TED Talk.
- Shors, T. J., et al. (2012). "Use it or lose it: How neurogenesis keeps the brain fit for learning." Behavioural Brain Research.
- Risko, E. F., & Gilbert, S. J. (2016). "Cognitive Offloading." Trends in Cognitive Sciences.