Khaberni - A recent study conducted at the Wharton School of the University of Pennsylvania warned that excessive trust in artificial intelligence may make individuals more susceptible to what is known as "cognitive surrender": accepting a smart system's responses without sufficient independent thought.
While this behavior can improve users' accuracy when the system's responses are correct, it leads to a significant decline in performance when the algorithms err, according to the PsyPost website.
From two systems to three cognitive systems
Traditionally, cognitive psychologists have divided human cognition into two systems: one fast, instinctive, and emotional; the other deliberate, reserved for thoughtful consideration and complex problem-solving.
However, the researchers point out that the spread of generative artificial intelligence has added a new dimension that does not fit neatly within this model, as many individuals now delegate thinking to external software across a range of tasks, from writing messages to making complex decisions.
Researcher Stephen Shaw of the Wharton School said artificial intelligence has become a "continuously available cognitive partner," noting that public discussion tends to focus on the accuracy of these systems while overlooking an important question: how they affect the way humans think when relying on them.
To address this, the researchers proposed what they call the "three-system theory," which adds artificial intelligence as a third system alongside the traditional two, encompassing generative algorithms that operate externally and dynamically on data.
According to the new model, integrating artificial intelligence into the cognitive process creates what the researchers describe as a "triple cognitive environment," in which the artificial system takes part in shaping human thinking.
Between assistance and surrender
The researchers distinguished between "strategic aid," which supports human thinking, and "cognitive surrender," which occurs when an individual relinquishes their own judgment entirely in favor of the algorithm's.
Three studies conducted by the team showed that participants with greater confidence in technology were more likely to adopt incorrect answers, while individuals inclined toward deep thinking were better able to detect and reject errors. Those with higher intelligence also proved more resistant to this type of surrender.
The results further indicated that time pressure reduced performance accuracy but did not curb reliance on the algorithms, while financial incentives and immediate feedback somewhat reduced cognitive surrender without eliminating it entirely.
The allure of artificial intelligence
Shaw clarified that cognitive surrender is not inherently negative, as it can improve the speed and accuracy of performance in some situations, but it becomes problematic when it deprives the user of the ability to make independent decisions.
He added that users may slip into this pattern unconsciously, owing to the allure of modern language models and their tendency to tell users what they want to hear.
The study concluded that optimal use of artificial intelligence depends on "calibration": knowing when to use it as a support tool and when thinking should not be delegated to it entirely.
The researchers emphasized the importance of preserving critical thinking in some contexts by formulating ideas personally first, then using artificial intelligence tools to expand, test, or refine them, rather than letting them replace one's own thinking altogether.