AI use makes us overestimate our cognitive performance

New research warns we shouldn’t blindly trust Large Language Models with logical reasoning –– stopping at one prompt limits ChatGPT’s usefulness more than users realise.
Daniela da Silva Fernandes on the left and Robin Welsch on the right. Photo by Matti Ahlgren.

When it comes to estimating how good we are at something, research consistently shows that we tend to rate ourselves as slightly better than average. This tendency is strongest in people who score low on cognitive tests. It's known as the Dunning-Kruger Effect (DKE): the worse people are at something, the more they tend to overestimate their abilities, while the more skilled people are, the more they tend to underestimate theirs.

However, a study led by Aalto University reveals that when it comes to AI, specifically Large Language Models (LLMs), the DKE doesn't hold: the researchers found that all users, regardless of skill, were significantly unable to assess their performance accurately when using ChatGPT. In fact, across the board, people overestimated their performance. On top of this, the researchers identified a reversal of the Dunning-Kruger Effect: users who considered themselves more AI literate tended to overestimate their abilities the most.

‘We found that when it comes to AI, the DKE vanishes. In fact, what’s really surprising is that higher AI literacy brings more overconfidence,’ says Professor Robin Welsch. ‘We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems – but this was not the case.’  

The finding adds to a rapidly growing body of research indicating that blindly trusting AI output carries risks, from eroding people's ability to source reliable information to de-skilling the workforce. While people did perform better when using ChatGPT, it is concerning that all of them overestimated that performance.

‘AI literacy is truly important nowadays, and therefore this is a very striking effect. AI literacy might be very technical, and it’s not really helping people actually interact fruitfully with AI systems’, says Welsch.  

‘Current AI tools are not enough. They are not fostering metacognition [awareness of one’s own thought processes] and we are not learning about our mistakes,’ adds doctoral researcher Daniela da Silva Fernandes. ‘We need to create platforms that encourage our reflection process.’ 

The article was published on October 27th in the journal Computers in Human Behavior. 

Why a single prompt is not enough 

The researchers designed two experiments in which some 500 participants completed logical reasoning tasks from the Law School Admission Test (LSAT), the standardised test used for law school admissions in the United States. Half of the participants used AI and half didn't. After each task, subjects were asked to assess how well they had performed, and were promised extra compensation for doing so accurately.

‘These tasks take a lot of cognitive effort. Now that people use AI daily, it’s typical that you would give something like this to AI to solve, because it’s so challenging’, Welsch says. 

The data revealed that most users rarely prompted ChatGPT more than once per question. Often, they simply copied the question, put it in the AI system, and were happy with the AI’s solution without checking or second-guessing. 

‘We looked at whether they truly reflected with the AI system and found that people just thought the AI would solve things for them. Usually there was just one single interaction to get the results, which means that users blindly trusted the system. It’s what we call cognitive offloading, when all the processing is done by AI’, Welsch explains.  

This shallow level of engagement may have deprived users of the cues needed to calibrate their confidence and monitor themselves accurately. It's therefore plausible that encouraging, or experimentally requiring, multiple prompts could provide better feedback loops and enhance users' metacognition, he says.

So what’s the practical solution for everyday AI users? 

‘AI could ask the users if they can explain their reasoning further. This would force the user to engage more with AI, to face their illusion of knowledge, and to promote critical thinking,’ Fernandes says.

This news item was originally published on the Aalto University website on 29 October 2025.
