GPT → Dementia

My new prediction: over-reliance on GPT and other LLMs will lead to meaningful increases in early cognitive decline and dementia.

Nearly 20 years ago I offered the rather modest prediction that over-reliance on automated navigation (e.g., Google Maps, Apple Maps, Waze) would lead to increased dementia, and about a decade ago the research began confirming my prediction. Well, I guess GPT is the new GPS, and the neuroscience of our new cognitive crutch is only just beginning.

In a new study, some students used LLMs for essay writing while others had only search engines or no digital tools at all. Students in the “brain-only” condition “exhibited the strongest, most distributed [functional cortical] networks”. In contrast, those students relying on LLMs showed significantly less brain connectivity.

Consistent with these tools being used for efficiency, “cognitive activity scaled down in relation to external tool use”. Further, the LLM students “showed reduced alpha and beta connectivity, indicating under-engagement”, while the control students “exhibited higher memory recall and activation of occipito-parietal and prefrontal areas”.

There have already been some fair criticisms of this paper for having a bit of an apples-to-oranges final comparison, in which brain-only and LLM students switch tools for a final essay. Also, the idea that this specific instance of essay writing is leading to cognitive decline is wrong: the revealed processing differences are dynamic changes in functional connectivity, not long-term changes in connectivity. But my argument isn’t based on the final crossover phase. In the first three essay phases, the brain-only group is showing much greater functional connectivity, cognitive engagement, and creativity. If this pattern persists over time, then cognitive decline is accelerated.

LLM over-reliance raises serious concerns about diminished learning and critical thinking skills. Even more concerning, consistent under-engagement of these cognitive networks could have long-term implications for vulnerability to conditions like dementia by reducing the brain's resilience to age-related changes. While direct, long-term studies on LLM impact are still nascent (given the recency of widespread LLM use), we can extrapolate from existing knowledge about brain plasticity, cognitive reserve, and the "use it or lose it" principle.

Cognitive reserve, the brain's ability to withstand neurological damage (e.g., from aging, injury, or disease) while maintaining function, builds up through mentally stimulating activities, education, and complex cognitive engagement throughout life. If LLMs consistently perform tasks that previously required deep thinking, critical analysis, complex problem-solving, information retrieval and synthesis, and creative ideation, individuals will almost certainly experience a reduction in the functional connectivity and glial activity that builds these reserves. Regularly outsourcing mental "heavy lifts" to AI in favor of “efficiency” means fewer opportunities to strengthen neural pathways and build robust cognitive networks. Compounded over decades, this will lead to increases in age-related cognitive decline or neurodegenerative diseases.

Similarly, more specific brain regions and neural circuits associated with targeted skills also benefit from regular, deep engagement. If individuals consistently rely on LLMs for tasks like writing and structuring arguments then the ability to organize thoughts, construct coherent arguments, and articulate complex ideas might decline, skills in searching, evaluating, and integrating information from multiple sources could weaken, and the core human capacity for divergent thinking and novel problem-solving might diminish.

There are also implications for metacognition. If AI consistently provides "answers" without requiring the user to engage in the struggle of learning or problem-solving, metacognitive skills related to self-monitoring, strategy selection, and error detection might not develop fully. Interestingly, rather than reducing individuals’ sense of self-efficacy, most people engaging with AI, or even just simple search, tend to dramatically overestimate their self-efficacy, internalizing the AI’s capabilities as though they were their own even when told they wouldn’t have access to the AI in the future.

The LLM-trained students in the study above experienced a decrease in their sense of ownership and agency, even when no longer using the tools. More speculative (for me) is whether a deep diminishment of agency and accomplishment will emerge. Further, if AI consistently provides seemingly "perfect" outputs, it might distort individuals' perception of normal human error and learning processes, potentially impacting intellectual humility and fostering unhealthy social comparisons.

All of these predictions are just that. LLMs are complex; human brains are even more complex; the two together…what a mess! And, humans are highly heterogeneous. Susceptibility to these negative effects will likely vary substantially by individual cognitive ability, genetic endowment, age, and other independent factors.

Also, we don’t all engage with AI in the same way. The crucial issue here—shallow engagement robbing the brain of deep processing—may be a dominant motif for human-AI interaction but it hardly describes everyone all of the time. Yet, the way LLMs are used matters immensely. Using them as brainstorming partners, as critics for idea evaluation, or for automating truly tedious, cognitively shallow sub-tasks is very different from passively accepting LLM outputs. But in the end, this is the point: we don’t have to automate away ourselves.

As always, the question for me is how to gain the real benefits of LLMs without losing the productive friction that drives learning and cognitive health. For 10 years I have posed this challenge about GPS navigation apps to classrooms of engineering & entrepreneurship students; now I’ll pose the same challenge for GPT “thinking” apps: build AI that forces us to be better, not only when we’re using it but better than where we started when we turn it off.
