People follow AI advice they know is wrong 43% of the time, and trust makes it worse
Forty-three percent of the time, people follow AI guidance they can see is wrong.
That number comes from a February 2026 study published in Scientific Reports, in which researchers at Lancaster University asked 295 participants to judge whether faces were real or AI-generated. Half received guidance labeled as coming from AI. The results were unsettling: participants accepted incorrect AI suggestions at that 43% rate even when the mistakes were detectable.
But the deeper finding is what should keep you up at night. People who reported more positive attitudes toward AI performed worse at spotting errors than skeptics did. The more you trust the machine, the less you actually think.
The trust trap nobody talks about
Automation bias (the tendency to defer to automated recommendations even when you know better) is not new. Aviation researchers identified it decades ago. What is new: the scale at which it now operates.
A 2025 experiment from LMU Munich involving 2,784 participants made this concrete. Researchers asked people to extract greenhouse gas data from corporate reports, with AI providing suggestions that were sometimes wrong. The single strongest predictor of whether someone caught errors was not their education, experience, or incentive structure. It was their attitude toward AI.
Participants who viewed AI favorably showed what the researchers called "dangerous overreliance on algorithmic suggestions." Skeptics caught the same mistakes consistently. Financial incentives for accuracy made no meaningful difference. This means the problem is not about motivation; it is about a cognitive default that bypasses deliberate reasoning.
When experts stop being experts
The consequences amplify in high-stakes environments. A study covered by BrainFacts.org, the Society for Neuroscience's public outreach site, found that when AI provided incorrect mammography results, inexperienced and moderately experienced radiologists dropped their cancer detection accuracy from roughly 80% to 22%. Their training, their pattern recognition, their years of practice: all overridden by a confident machine answer.
Even very experienced radiologists (the ones you would expect to trust their own judgment) fell from 80% accuracy to 45%. Experience offered some protection, but not nearly enough.
This is not a story about bad AI. The AI in these studies was deliberately set to give wrong answers as a test. The story is about what happens to human judgment when an algorithmic recommendation sits next to it. The recommendation acts like an anchor, pulling decisions toward it regardless of whether the human has independent evidence pointing elsewhere.
The feedback loop that makes everything worse
Here is where the problem compounds. A systematic review in the Journal of the American Medical Informatics Association documented a consistent pattern: once people begin relying on automated systems, they stop performing the verification steps they used to do manually. Over time, the skills that would let them catch AI errors atrophy.
In healthcare, this means clinicians who rely on AI diagnostic tools gradually lose their independent diagnostic ability. In hiring, it means recruiters who defer to AI screening stop evaluating candidates with their own criteria. In finance, it means analysts who follow algorithmic trading signals stop questioning the underlying logic.
The Lancaster University researchers found something that crystallizes this: participants who trusted AI more actually performed worse at the core task. Not because AI made them lazy, but because trust itself appears to change how information gets processed. When you believe a system is reliable, you scrutinize its output less closely. You are not choosing to ignore problems; you are attending to them less in the first place.
What actually reduces automation bias
The research does point to interventions that work, though none are easy fixes. The LMU Munich study found that requiring people to actively correct AI suggestions (rather than just accept or reject them) improved accuracy by a small but significant margin. Forcing engagement with the reasoning behind a recommendation, not just its output, breaks the passive acceptance pattern.
Other research suggests that making AI error rates visible before people use the system reduces blind trust. When hiring managers in one study were told upfront that an AI screening tool had specific biases, they were more likely to override its recommendations appropriately.
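To make the contrast concrete, here is a minimal sketch of what those two interventions might look like together in a data-extraction review tool like the one in the Munich study's task. Everything in it (the function name, the field, the 12% error rate) is a hypothetical illustration; neither study published its interface.

```python
# Minimal sketch of two automation-bias interventions, assuming a simple
# data-extraction review tool. All names and numbers are hypothetical.

AI_ERROR_RATE = 0.12  # assumed published error rate for this hypothetical model


def review_extraction(field: str, ai_suggestion: str) -> str:
    """Review one extracted value with forced active correction."""
    # Intervention 1: surface the model's error rate before its suggestion,
    # so the reviewer sees fallibility up front rather than implied reliability.
    print(f"Note: this model has been wrong on about {AI_ERROR_RATE:.0%} of audited cases.")
    print(f"AI suggestion for {field!r}: {ai_suggestion}")

    # Intervention 2: no accept/reject buttons. The reviewer must type the
    # value themselves, which forces engagement with the source document
    # instead of a one-click confirmation of the machine's answer.
    value = input(f"Enter the value you read for {field!r}: ").strip()
    if value != ai_suggestion:
        print("Your value differs from the AI suggestion; flagging for audit.")
    return value


if __name__ == "__main__":
    review_extraction("scope_1_emissions_tCO2e", "48200")
```

The design point is that there is no accept button: the cheapest path through the interface still routes through the reviewer's own reading of the document, which is what breaks the passive acceptance pattern the Munich researchers identified.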
The most effective intervention, though, is uncomfortable: cultivating healthy skepticism toward AI in a culture that increasingly treats AI confidence as competence. The 2,784-participant Munich study found that skeptics consistently outperformed optimists at catching errors, not because they were smarter, but because they never stopped questioning.
Every time you accept an AI recommendation without checking it against your own judgment, you are training yourself to do it again. The 43% who followed wrong guidance in the Lancaster study were not foolish. They were behaving exactly the way human cognition works when an authoritative-sounding system says "here is the answer."
The question is not whether you trust AI too much. The research suggests you almost certainly do. The question is whether you are willing to be slower, more deliberate, and occasionally wrong on your own, instead of effortlessly wrong because a machine told you so.
Sources and References
- Lancaster University / Scientific Reports — In a study of 295 participants, people followed incorrect AI guidance 43% of the time.
- LMU Munich / University of Maryland — A randomized experiment with 2,784 participants found attitudes toward AI were the strongest predictor of error detection.
- BrainFacts.org / Society for Neuroscience — When AI provided incorrect mammography results, inexperienced radiologists dropped cancer detection from 80% to 22%.
- Journal of the American Medical Informatics Association — Systematic review found people stop performing verification steps once relying on automated systems.