AI cut your thinking effort by 40%. Your reasoning paid the price

The tool that thinks for you is quietly making you worse at thinking

A peer-reviewed study in Computers in Human Behavior tracked students solving scientific problems with two different tools. One group used ChatGPT. The other used Google. Both had the same time, the same questions, the same goal.

The ChatGPT group reported roughly 40% lower cognitive load. They felt the task was easier, faster, less mentally taxing. And their final reasoning? Significantly worse. Their justifications were shallower, their argumentation weaker, their conclusions less nuanced than the group that had to wrestle with search results manually.

This is not a productivity story. This is a story about what happens to your brain when the friction disappears.

Why less effort means worse output

Cognitive load theory explains the mechanism. When you search for information manually, your brain engages in what researchers call "germane processing": making inferences, comparing sources, filtering contradictions, building mental models. That effort is not wasted energy. It is the learning itself.

LLMs bypass that process entirely. Instead of constructing understanding, you receive a pre-packaged answer. Your brain shifts from active reasoning to passive verification: "Does this look right?" That question demands far less cognitive work than "What does this evidence actually mean?"

The result is a paradox that should concern anyone who uses AI daily. The tool reduces effort, and the effort it reduces is precisely the effort that produces quality thinking.

AI critical thinking: the confidence trap

A Microsoft Research and Carnegie Mellon survey of 319 knowledge workers across multiple countries found something equally troubling. The more confident people were in generative AI, the less critical thinking they applied to its outputs. One participant captured it bluntly: "I use AI to save time and don't have much room to ponder."

This is not laziness. It is a rational response to a system that feels authoritative. When an AI delivers a polished, well-structured answer in seconds, your brain's verification circuits relax. You stop interrogating the reasoning because the presentation signals competence.

MIT researchers studying 54 college students writing essays with LLM assistance found the downstream effects: lower memory recall and over 15% reporting they felt no ownership of their own work. The students did not just think less carefully. They stopped feeling like the thinking belonged to them.

Faster is not smarter: what Utah proved mathematically

If AI makes you think less carefully, speed makes you think less accurately. Research from the University of Utah, published in Physical Review E, built a mathematical model of how decision speed relates to bias.

The finding was stark. In large groups, the fastest decision-makers were overwhelmingly those with the strongest pre-existing biases. Their choices aligned with their initial predispositions regardless of what the evidence actually showed. Slow deciders, by contrast, acted "as if they were initially unbiased," according to lead researcher Samantha Linn. Their decisions reflected accumulated evidence rather than gut reactions.
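The Utah result comes from a mathematical model of evidence accumulation. A minimal sketch of the idea, not the paper's actual model (the drift, noise, threshold, and bias values here are illustrative assumptions), is a drift-diffusion race: each agent starts with a head start toward its predisposition and accumulates noisy evidence until it crosses a decision threshold. Sorting the simulated agents by decision time shows the pattern the researchers describe: the fastest deciders overwhelmingly pick whichever option they started biased toward, while slow deciders track the evidence.

```python
import random

def decide(bias, drift=0.2, noise=1.0, threshold=3.0, dt=0.01, seed=None):
    """Simulate one drift-diffusion decision.

    bias: starting evidence level (the agent's pre-existing predisposition).
    drift: direction the true evidence points (positive favors option A).
    Returns (choice, time): choice is +1 (A) or -1 (B).
    """
    rng = random.Random(seed)
    x, t = bias, 0.0
    while abs(x) < threshold:
        # Noisy evidence accumulation: deterministic drift plus Gaussian noise.
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (1 if x > 0 else -1), t

# Population: half biased toward A (+2), half toward B (-2).
# The actual evidence (drift) favors A for everyone.
random.seed(0)
agents = [(+2.0 if i % 2 == 0 else -2.0) for i in range(400)]
results = sorted(
    ((t, choice, bias)
     for bias in agents
     for choice, t in [decide(bias, seed=random.random())]),
    key=lambda r: r[0],
)

fastest, slowest = results[:40], results[-40:]
fast_match = sum(1 for t, c, b in fastest if (c > 0) == (b > 0)) / len(fastest)
slow_match = sum(1 for t, c, b in slowest if (c > 0) == (b > 0)) / len(slowest)
print(f"fastest deciders matching their prior bias: {fast_match:.0%}")
print(f"slowest deciders matching their prior bias: {slow_match:.0%}")
```

The mechanism is simple: a biased starting point sits close to one threshold, so quick crossings almost always land there, regardless of which way the evidence pushes. Only agents that take long enough for accumulated evidence to dominate the starting offset behave "as if they were initially unbiased."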

Now combine these two findings. AI tools cut your perceived cognitive effort by roughly 40%. That reduced effort makes you faster. And faster decisions, as the Utah model shows, are more biased decisions. The problem of AI overreliance and trust bias is not just about accepting wrong answers. It is about building a workflow that systematically degrades the quality of your reasoning.

The real cost nobody is measuring

Companies tracking AI productivity numbers focus on hours saved and tasks completed. Almost nobody measures reasoning quality, and nobody audits whether the decisions made with AI assistance are actually better than the ones made without it.

Briana Vecchione at Data & Society, after reviewing multiple studies on AI and cognition, put it plainly: "There's less effortful cognitive processes" across the board. Viktor Kewenig at Microsoft Research warned that users risk "offloading many critical faculties, such as critical thinking."

The pattern is consistent across every study. AI does not make you stupid. It makes you comfortable. And comfort, in cognitive terms, is the enemy of depth.

What actually protects your thinking

The answer is not to stop using AI. It is to stop using it as a replacement for thinking and start using it as a challenge to your thinking.

Three evidence-backed adjustments shift the dynamic. First, form your own position before consulting AI. The Utah research shows that initial deliberation, even brief, dramatically reduces bias in final decisions. Second, treat AI output as a draft to interrogate, not a conclusion to accept. The students who reasoned poorly with LLMs were the ones who stopped at the first answer. Third, build mental models that force you to articulate why you agree or disagree with what the AI tells you.

The 40% cognitive reduction is real. The question is whether you spend that saved effort on more thinking or no thinking at all.

Sources and References

  1. Computers in Human Behavior (ScienceDirect): Students using LLMs experienced roughly 40% lower cognitive load but produced significantly worse reasoning and argumentation quality than those using traditional search engines.
  2. Physical Review E / University of Utah: In large groups, the fastest decision-makers were overwhelmingly those with the strongest pre-existing biases; their choices aligned with initial predispositions regardless of evidence quality.
  3. Microsoft Research & Carnegie Mellon (CHI 2025): Survey of 319 knowledge workers found that higher confidence in generative AI is associated with less critical thinking applied to AI outputs.
  4. MIT / Undark Magazine: MIT study of 54 college students found AI-assisted essay writers showed lower memory recall, with over 15% reporting they felt no ownership of their own work.
