AI can make us more efficient, but research warns of a price tag. When working with AI, we overestimate our abilities, our brain activity weakens, and our critical thinking diminishes. The solution is hardly to avoid new technologies, but rather to use them more wisely.
Text: Martti Asikainen, 22.1.2026 | Photo: Adobe Stock Photos
Many of us use generative AI to write emails, build presentations, schedule meetings, and take notes. What’s more, I’ve even heard of people who get AI agents to create their shopping lists and choose their meals from food delivery services when they can’t decide themselves. As the saying goes, everything has its price – whether that price is temporal, mental, financial, or even ethical. And so does AI.
Recently, numerous studies have emerged examining AI’s effects on our cognitive abilities. According to research published by Aalto University earlier this year, AI can cause people to overestimate their own intellectual capabilities and skills. Researchers found that all users in the study significantly overestimated their own performance. It was initially assumed that people with good AI literacy would also be good at assessing their performance, but this assumption proved wrong. Most test participants blindly trusted not only their own abilities but also the AI. (Fernandes et al. 2025.)
Similar results have emerged from other studies showing that users are inclined to believe AI responses, particularly when the answer is formulated fluently and convincingly (Zou et al. 2023; Ovide 2025).
Research published this year by MIT suggests that using generative language models may, over time, damage an individual’s capacity for critical thinking (Kosmyna et al. 2025). According to the researchers, brain activity in people who wrote using their own mental effort was significantly more extensive and stronger, whilst in the group using ChatGPT, brain activity dropped by as much as 55%.
The results support claims about modern leadership and decision-making made in a column for Harvard Business Review (HBR) by Cheryl Einhorn, CEO of the AREA Method (familiar from TED talks). According to Einhorn, whilst promotional speeches claim AI creates space for deeper thinking, in reality it may tempt us to outsource our thinking entirely to AI (Einhorn 2025). We may not question at all whether the content it produces is reliable, or whether we’re outsourcing thinking and decision-making that should belong to us and our job description.
On the other hand, determining the “right” level of trust when it comes to AI is difficult, because the question of what’s too much and what’s appropriate is always case-specific. Research shows that people who find AI useful are more likely to trust its results (see Gillespie et al. 2025; Noh et al. 2025). At the same time, however, it’s reasonable to assume that experienced active users understand AI functionality and its limitations better than others.
The aforementioned MIT study also provided evidence of AI’s impact on our memory. According to the results, a staggering 83.3% of people who used ChatGPT in their writing process couldn’t remember a single quote from the text they’d produced just moments earlier. Users essentially wrote something, pressed the save button, and then their brains wiped the slate clean of the work they’d done. (Kosmyna et al. 2025)
According to the researchers, this is because in reality, AI was responsible for the thinking rather than the human (Kosmyna et al. 2025). Other studies have also suggested that AI takes a toll on an individual’s cognitive abilities. An international research group reported earlier this year on an extensive survey revealing that the more experts rely on AI, the less they report using critical thinking – particularly evaluative and verifying reasoning. (Lee et al. 2025)
According to the researchers, this may be because AI changes the nature of critical thinking by shifting the focus from broad information gathering to interpretation and synthesis. AI lightens a person’s cognitive load and makes work more efficient, but at the same time, excessive trust can easily lead to superficial evaluation, because critical thinking is best preserved when the user keeps their own judgement active throughout the work process. (Lee et al. 2025)
A German study conducted this year lends further support to the theory about AI’s effects on cognitive abilities. The research found that shifting thinking work to AI can reduce the user’s own analysis and reflection. The study found a clear negative correlation between frequent AI use and critical thinking scores. Younger participants in particular, who were especially dependent on AI, achieved weaker results than older participants. (Gerlich 2025)
As an AI trainer and workplace development professional, I can’t help but wonder whether I, along with others like me, am one of the Four Horsemen of the Apocalypse, bringing the end times upon humanity by destroying our capacity for critical thinking. Are we unknowingly turning thinking apes into snot-eating idiots who can’t tie their own shoelaces without someone else guiding them through the process?
Yet I want to believe otherwise. The calculator didn’t make us worse mathematicians. On the contrary, it enabled us to perform more complex calculations. Nor did writing weaken our memory or make our learning more superficial, even though Socrates, loitering on the street corners of ancient Athens, expressed deep concern about the impact of written text on human memory. Admittedly, it permanently changed the nature of our learning, but it didn’t destroy it.
Still, I wouldn’t dispute that someone could end up in a spiral where they need constant validation from AI for their decisions. Which of us wouldn’t use AI to create a summary rather than wade through an entire hundred-page report? In the same way, we might build strategies solely on AI-produced trend reports without questioning the quality of the underlying data, or ask AI to evaluate things on our behalf, even when that evaluation is precisely our job.
According to Einhorn, writing in Harvard Business Review, these aren’t individual failures but signs of how quickly the human elements at the heart of decision-making, such as values and judgement, can slip from our grasp entirely unnoticed (Einhorn 2025). When technology promises more speed, efficiency, and ease, it’s entirely natural to take the offer. We want to stay in the race, and preferably ski past our competitors whilst simultaneously scrolling through our smartphones.
So how do we avoid becoming cognitive couch potatoes? The answer isn’t to abandon AI but to use it mindfully. This means maintaining our own thinking as an active part of the process – not just accepting what AI produces at face value.
We need to remember that AI is a tool, not a substitute for our judgement. Just as a calculator performs calculations but doesn’t understand mathematics, AI can process information but doesn’t truly comprehend context, ethics, or strategic implications the way we do.
The key is to use AI for what it does well – handling routine tasks, processing large amounts of data, generating initial drafts – whilst we focus on what humans do best: critical evaluation, creative synthesis, ethical reasoning, and strategic decision-making.
Perhaps the real skill we need to develop isn’t just AI literacy, but AI wisdom – knowing when to use it, how to use it, and crucially, when not to rely on it.
Einhorn, C. S. 2025. When Working With AI, Act Like a Decision-Maker—Not a Tool-User. Harvard Business Review, 31 October 2025. Harvard Business Publishing. Referenced 29 November 2025.
Fernandes, D., Villa, S., Nicholls, S., Haavisto, O., Buschek, D., Schmidt, A., Kosch, T., Shen, C. & Welsch, R. 2025. AI makes you smarter but none the wiser: The disconnect between performance and metacognition. Computers in Human Behavior, 175, 108779. Elsevier.
Gerlich, M. 2025. AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. MDPI.
Gillespie, N., Lockey, S., Ward, T., Macdade, A. & Hassed, G. 2025. Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne & KPMG International.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X-H., Beresnitzky, A. V., Braunstein, I. & Maes, P. 2025. Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv. https://arxiv.org/abs/2506.08872
Lee, H-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R. & Wilson, N. 2025. The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25) (Art. 1121, pp. 1–22). ACM.
Microsoft. 2024. Work Trend Index: AI at work is here. Now comes the hard part. Microsoft Corporation. Referenced 29 November 2025.
Ovide, S. 2025. You are hardwired to blindly trust AI. Here’s how to fight it. Published on the Washington Post website 3 June 2025. Referenced 29 November 2025.
Zou, A., Wang, Z., Carlini, N., Nasr, M., Kolter, J. Z. & Fredrikson, M. 2023. Universal and Transferable Adversarial Attacks on Aligned Language Models. arXiv.
Communications Lead
Finnish AI Region
+358 44 920 7374
martti.asikainen@haaga-helia.fi
This article was originally published in Haaga-Helia University of Applied Sciences’ eSignals Pro online magazine and was produced as part of Haaga-Helia and Estonian Business School’s FinEstAI project (Equipping knowledge workers in support functions with AI skills in Finland and Estonia). The project provides training in AI application use for women over 50 in Finland and Estonia. It is funded by the EU’s Interreg Central Baltic programme.