
A Bit of a Stand
Author: Joachim Nicolodi
Published: July 24, 2025
Tags: Cognitive Offloading, Cognitive Debt, Critical Thinking
My last blog post ended with a note on conferences; this one will begin that way. Conferences are great, arguably one of the nicest things about being an underpaid and overworked researcher. Sipping a negroni at 3:00 a.m. at the after-after-after party, overlooking a Greek harbour in the pale moonlight, discussing whether particles and fields really exist, and why Hemingway decided to name his most famous character “Santiago” (a Spaniard in Cuba?) – I cannot help but think: it’s all worth it. A nerd among nerds, peaceful and happy.
Conferences are great for another reason, too. They are the perfect place to gauge the current vibes within a field, vibes that will eventually (so the researcher hopes) reverberate throughout society and influence public opinion, policymakers, and industry leaders alike. It’s fair to say that the vibes within the AI field have changed, as swiftly and dramatically as the tides of the Aegean Sea. Where two years ago hope and curiosity reigned supreme, scepticism and disenchantment have taken hold. When I tell fellow researchers that a substantial part of my work is concerned with studying LLMs, I’m usually greeted with a slight scoff. “The party is over,” they seem to think, “winter is coming”: yet another decade of dried-up funding and failed companies, fuelled by shattered dreams and broken promises.
Boiled down to a single sentence, and only slightly exaggerated, the current vibes would look something like this: “LLMs are perfectly unintelligent systems, based on unsophisticated mathematics, which, at best, are silly little gadgets that write bad jokes and, at worst, turn generations of users into brainless, drooling zombies – all while polluting our environment unlike anything since the invention of the internal combustion engine.” This blog post is a stand, however small, that things are more complicated than that – if only for the sake of a healthy Hegelian dialectic. The “brain rot” charge deserves particular attention, since it has not only dominated recent conference chatter but also news and social media feeds.
The two papers I’ve heard mentioned most often in support of this charge both came out this year – one by the MIT Media Lab as recently as June (Kosmyna et al., 2025), the other in April by Microsoft Research Cambridge, just around the corner (Lee et al., 2025). Let’s start with the longer, more recent piece.
In “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task”, Kosmyna and colleagues studied the cognitive and neural impact of LLMs across three groups: one using LLMs to assist with essay writing, one relying solely on traditional search engines, and a “Brain-only” group using no tools at all. The study included 54 university students and was conducted over a four-month period. In a nutshell, the Brain-only group exhibited the strongest and widest-ranging neural activity, reflected in stronger signals across all four EEG frequency bands. The LLM group exhibited the lowest activity, with search engine users falling somewhere in between. Furthermore, the LLM group: (1) produced more homogeneous essays, (2) felt less ownership over them, and (3) could recall fewer pieces of information about the essays immediately afterwards. Again, the Brain-only group fared best, with search engine users in the middle. Four months later, participants were retested after being reassigned to different groups. The LLM-to-Brain cohort struggled with the essay-writing task, even when they picked the same topic they had written about earlier. In contrast, the Brain-to-LLM group could easily quote from their original work and use the tools more strategically.
Now, one immediate point is a boring one about statistical power. Only 18 of the original 54 participants completed the final session, since it wasn’t compulsory, meaning the LLM-to-Brain and Brain-to-LLM groups contained just 9 participants each. That’s not exactly much to base grand claims about long-term brain development on, though the authors certainly insinuate as much in their abstract and public commentary (Chow, 2025). A more substantial issue is this: the main findings amount to something like, “If you let ChatGPT write your essays for you, your brain will not be particularly challenged, and you might not learn that much.” Who would have thought? By the end of the study, most users in the LLM group were simply copy-pasting the generated text; only a few used the tool for grammar checks and research support. Demonstrating that this approach is neither cognitively nor pedagogically useful feels like low-hanging fruit, and one wonders why it took over 200 (!) pages to make the point.
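To put that small-sample worry in numbers, here is a minimal power-analysis sketch in Python. It assumes a simple independent-samples t-test with the conventional α = 0.05 and 80% power thresholds (my assumptions, not parameters taken from Kosmyna et al.), and asks what effect size two groups of nine could reliably detect.

```python
# Minimal power-analysis sketch: what can two groups of n = 9 detect?
# Assumes an independent-samples t-test, two-sided alpha = 0.05, power = 0.80.
# These are conventional defaults, not values drawn from the study itself.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the minimum detectable standardized effect size (Cohen's d)
# given 9 participants per group.
d = analysis.solve_power(effect_size=None, nobs1=9, alpha=0.05,
                         power=0.80, ratio=1.0, alternative='two-sided')
print(f"Minimum detectable effect size with n = 9 per group: d ≈ {d:.2f}")
```

Under these assumptions the answer comes out around d ≈ 1.4, i.e. only very large group differences would reliably show up – which is exactly why nine participants per cohort is thin ground for claims about long-term brain development.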
Granted, solid behavioural and neuroimaging data are always welcome, even when they support slightly self-evident claims. The study also neatly highlights the ever-present human tendency to take the path of least resistance, even when it comes at a cost. The problem, however, is that these findings are being spun – by the media and by fellow conference-goers – into a sweeping narrative about how AI use is making us “stupid” (Sellman, 2025) or “dumb” (Brooks, 2025), and the researchers do little to counter this interpretation. But that’s not what the study shows. It shows that a very specific way of using AI is problematic: namely, outsourcing entire tasks to the system with little thought or external guidance (from teachers, researchers, etc.). Importantly, no serious proponent of LLMs advocates for that kind of use. Instead, they support applications such as using LLMs as tutors for students from diverse backgrounds, or as collaborative partners that guide learners toward further reading (Grassucci et al., 2025). A recent six-week study in Nigerian secondary schools followed exactly this model, testing personalized, LLM-powered tutoring. The result: significant improvements across the board, and at lower cost than conventional interventions (De Simone et al., 2025). So: copy-pasting was never a good idea – not for my high school Latin translations, and not now with AI. But that’s not all the technology has to offer.
Alright, a quick word on “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers” by Microsoft Research Cambridge. (Quick also because I don’t want to ruin my chances with a potential future employer. Let a philosophy grad dream.) Lee and colleagues studied the impact of generative AI on critical thinking in a large sample of 319 participants. They wanted to know: (1) where in GenAI-assisted workflows critical thinking occurred, and (2) how much effort that critical thinking required once these tools were present. The methodology is questionable in places, since it relies entirely on participants’ subjective reports – many of whom, as the authors themselves note, may not be accustomed to reflecting on their own work processes. Moreover, it’s notoriously hard to avoid researcher bias in studies like this: even a slightly awkward phrasing when introducing a task can nudge responses in a particular direction. But again, the more important issue is elsewhere: even if we grant that the methodology is sound, the actual results are weaker than the conclusions drawn by the researchers, and certainly weaker than those drawn by some media outlets (Turner, 2025).
The researchers summarise their findings as follows: “[…] While GenAI can improve worker efficiency, it can inhibit critical engagement with work and potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.” This conclusion isn’t borne out by the data. Critical thinking didn’t decrease – it shifted. Participants reported engaging in critical thinking when drafting prompts, evaluating outputs, and integrating content into specific contexts. Overall, self-reported critical thinking remained high, particularly for high-stakes tasks. Where it decreased was in low-stakes tasks – but that alone doesn’t justify claims about general cognitive decline. Yes, cognitive offloading can weaken the skill in question (think navigation in the age of Google Maps), but whether such “decline” generalises to other functions is unclear and, frankly, implausible. A researcher using AI to write trivial emails doesn’t automatically become worse at writing academic papers. If any kind of cognitive offloading caused such general decline, then today’s researchers should be far worse at their job than earlier generations – simply because we’re worse at navigating library aisles in the age of online databases.
The point here isn’t to discredit any specific researchers or articles (Microsoft Cambridge: please remember this when my application lands on your desk). Rather, it’s a defence – yet again – of the oh-so-boring but oh-so-necessary middle ground. Just as it was a mistake to believe, a couple of years ago, that ChatGPT would solve all the world’s problems, we should be equally wary of swinging too far in the other direction and treating this technology as omnium malorum origo (see? good thing I didn’t copy-paste back in the day). This might sound like a platitude, but platitudes often convey deep and important truths. If we discard this entire technology because of a few preliminary studies raising potential issues, we risk missing out on monumental benefits.
One such benefit was already mentioned: using ChatGPT as a personal tutor in underfunded schools. The population of sub-Saharan Africa is projected to double by 2050, and widely available, high-quality education is arguably the single most important factor in securing the region’s future prosperity (Fornino & Tiffin, 2024). Another example is Morley et al.’s (2025) recent article in Minds & Machines, which explores the potential of AI to improve global healthcare – through more targeted treatments, remote access to care, and automation of administrative tasks to reduce burnout and cognitive overload. The article is commendable precisely because it outlines what a balanced approach might look like: emphasising benefits while also addressing real implementation challenges.
In brief: let’s think critically about CO₂ emissions; let’s educate users that generating silly poems isn’t the best use of energy; let’s caution against overreliance and intellectual laziness – but let’s not throw the baby out with the bathwater. This technology still holds too much promise to be casually and carelessly cast aside, even by the usually careful researchers attending my conferences.
Bibliography
- Brooks, D. (2025, July 3). Are We Really Willing to Become Dumber? The New York Times. https://www.nytimes.com/2025/07/03/opinion/aritificial-intelligence-education.html
- Chow, A. (2025, June 23). ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study. Time Magazine. https://time.com/7295195/ai-chatgpt-google-learning-school/
- De Simone, M., Tiberti, F., Barron Rodriguez, M., Manolio, F., Mosuro, W., & Dikoru, E. J. (2025). From Chalkboards to Chatbots: Evaluating the Impact of Generative AI on Learning Outcomes in Nigeria. Washington, DC: World Bank. https://doi.org/10.1596/1813-9450-11125
- Fornino, M. & Tiffin, A. (2024, April 25). Sub-Saharan Africa’s Growth Requires Quality Education for Growing Population. International Monetary Fund Blog. https://www.imf.org/en/Blogs/Articles/2024/04/25/sub-saharan-africas-growth-requires-quality-education-for-growing-population
- Grassucci, E., Grassucci, G., Uncini, A., & Comminiello, D. (2025). Beyond Answers: How LLMs Can Pursue Strategic Thinking in Education (arXiv:2504.04815). arXiv. https://doi.org/10.48550/arXiv.2504.04815
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (arXiv:2506.08872). arXiv. https://doi.org/10.48550/arXiv.2506.08872
- Lee, H.-P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–22. https://doi.org/10.1145/3706598.3713778
- Morley, J., Hine, E., Roberts, H., Sirbu, R., Ashrafian, H., Blease, C., Boyd, M., Chen, J. L., Filho, A. C., Coiera, E., Cohen, G. I., Fiske, A., Jayakumar, N., Kerasidou, A., Mandreoli, F., McCradden, M. D., Namuganza, S., Nsoesie, E. O., Parikh, R. B., … Floridi, L. (2025). Global Health in the Age of AI: Charting a Course for Ethical Implementation and Societal Benefit. Minds and Machines, 35(3). https://doi.org/10.1007/s11023-025-09730-3
- Sellman, M. (2025, June 18). Using ChatGPT for work? It might make you stupid. The Times. https://www.thetimes.com/uk/technology-uk/article/using-chatgpt-for-work-it-might-make-you-more-stupid-dtvntprtk
- Turner, B. (2025, April 3). Using AI reduces your critical thinking skills, Microsoft study warns. Live Science. https://www.livescience.com/technology/artificial-intelligence/using-ai-reduces-your-critical-thinking-skills-microsoft-study-warns
Image credit: Julieta Longo & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Links:
https://iai.tv/articles/there-are-no-particles-or-fields-only-structure-auid-3243
https://www.theguardian.com/media/2024/dec/02/brain-rot-oxford-word-of-the-year-2024
