Researchers from the University of Pennsylvania have identified a phenomenon they call 'cognitive surrender,' in which people outsource reasoning to AI without verifying its output. In experiments with 1,372 participants, people accepted incorrect AI responses 73.2 percent of the time, and factors such as time pressure further increased reliance on flawed outputs.
A new study from the University of Pennsylvania examines how large language models lead users to abandon their own reasoning, a phenomenon the authors dub 'cognitive surrender.' The research builds on dual-process theory, introducing 'artificial cognition' as a third mode of thought in which decisions stem from AI outputs rather than human deliberation. Unlike traditional tools such as calculators, the researchers note, AI invites wholesale acceptance of its confidently worded responses, often without oversight.

In experiments based on Cognitive Reflection Tests, participants had access to a chatbot programmed to give wrong answers half the time. Those consulting the AI used it on about 50 percent of problems, accepting correct answers 93 percent of the time and faulty ones 80 percent of the time. Despite these errors, AI users reported 11.7 percent higher confidence in their answers than those relying solely on their own reasoning.

Incentives for correct answers raised the rate at which participants overruled bad AI advice by 19 percentage points, while a 30-second timer lowered it by 12 points. Across more than 9,500 trials, participants overruled faulty AI just 19.7 percent of the time. People with high fluid intelligence were less prone to surrender, while those who viewed AI as authoritative were more susceptible.

The researchers caution that while surrender is risky with imperfect AI, it could prove beneficial with superior systems in data-heavy domains.
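To make the experimental design concrete, the following sketch simulates the setup described above. The behavioral rates are taken from the article's reported figures (93 percent acceptance of correct answers, 80 percent acceptance of faulty ones); the study's actual procedure and parameters may differ, so treat this as an illustration, not a reproduction.

```python
import random

random.seed(0)

# Rates assumed from the article's figures; the underlying study's
# actual procedure and parameters may differ.
P_AI_CORRECT = 0.50      # chatbot programmed to be wrong half the time
P_ACCEPT_CORRECT = 0.93  # acceptance rate when the AI happens to be right
P_ACCEPT_FAULTY = 0.80   # acceptance rate when the AI is wrong

def simulate(trials: int) -> float:
    """Return the fraction of faulty-AI trials the participant overruled."""
    faulty = overruled = 0
    for _ in range(trials):
        ai_correct = random.random() < P_AI_CORRECT
        accept_p = P_ACCEPT_CORRECT if ai_correct else P_ACCEPT_FAULTY
        accepted = random.random() < accept_p
        if not ai_correct:
            faulty += 1
            if not accepted:
                overruled += 1
    return overruled / faulty

rate = simulate(9_500)
print(f"overrule rate on faulty answers: {rate:.1%}")  # roughly 20%
```

Under these assumed rates, the simulated overrule rate lands near the article's reported 19.7 percent, which shows the reported acceptance and overrule figures are mutually consistent.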