Is AI Making You Forget How to Think?

AI cognitive offloading trades short-term convenience for long-term memory loss. The science explains why, and what tools designed for understanding do differently.

Elliott Tong


March 14, 2026

16 min read


AI tools reduce the cognitive effort required to get an answer. Research consistently shows this also reduces what you remember and what you can think through independently. But AI didn't create this problem. The same trade-off has appeared with every convenience tool since Google. The real question: were any of your tools designed to build knowledge, not just deliver it?


In 2020, researchers published a study in Nature Scientific Reports that's easy to dismiss as obvious until you look at the numbers. The finding: habitual GPS use causes measurable spatial memory decline. Not metaphorical decline. Dose-dependent, longitudinally tracked, r = -0.68 correlation between GPS reliance and the brain's ability to find its own way without assistance.

The more you outsource navigation to a device, the worse your internal navigation becomes. At scale. Consistently.

People who heard about this study mostly nodded and moved on. We all knew GPS was changing how we get around. The interesting part isn't that GPS affected navigation. It's what the study tells us about every other convenience tool we use without measuring what we lose.

What Is Cognitive Offloading, and Why Does It Matter?

Cognitive offloading is using external tools to handle mental tasks. A calculator for maths. GPS for spatial orientation. A calendar for scheduling. An AI assistant for writing, summarising, and answering questions you'd otherwise have to think through yourself.

This isn't new. Humans have always used external tools to extend mental capacity. The question isn't whether offloading happens. It's what you lose when it does.

A 2016 review by Risko and Gilbert in Trends in Cognitive Sciences synthesized evidence across dozens of studies and found a consistent pattern: cognitive offloading improves immediate task performance while reducing memory for the offloaded content. The correlation between offloading and subsequent memory loss is r ≥ .51 across replicated measurements. That's not a small effect.

The mechanism makes sense once you know it. Memory is built through retrieval, not exposure. When you look something up, you get the answer. But because you didn't have to retrieve it from your own memory, you didn't strengthen any memory trace. You have the answer. You don't have the knowledge.

The pattern showed up with Google in 2011. It showed up with GPS in 2020. It's showing up with AI now. The tool changes. The mechanism stays the same.

The Google Effect Was a Warning Nobody Took Seriously

In 2011, Sparrow, Liu, and Wegner published a study in Science documenting what they called the "Google Effect." When people knew information was stored somewhere accessible, they didn't bother encoding it. They remembered where to find it, not the information itself. They remembered the folder, not the fact.

A follow-up by Storm et al. (2016) found something that should have been treated as an alarm: 30% of participants had stopped attempting to answer simple questions from memory at all. The default had shifted. Before reaching into their own knowledge, people reached for their phone. The behaviour had automated.

The crueler finding was the illusion. People reported feeling significantly more knowledgeable after searching online, even when they'd absorbed nothing. The internet created a false confidence in one's own knowledge. People blurred the line between "I know this" and "I can find this." These are not the same thing. One lets you think with it. The other requires a connection and a search engine.

That was 2011. Before smartphones were everywhere. Before social media became the primary reading context. Before AI assistants could summarise anything you pointed them at.

If the Google Effect didn't generate the design response it warranted, the next decade was going to be much harder.


Did AI Create the Cognitive Offloading Crisis?

No. And the distinction matters, because the framing "AI is making us dumber" leads to the wrong conclusions.

Herbert Simon identified the core design error in 1971, before personal computers existed: "Many designers of information systems incorrectly represented their design problem as information scarcity rather than attention scarcity." They kept building systems that gave people more information, when what was needed were systems that helped people understand it.

That same error runs through every decade of tool design since:

| Year | Tool | Optimised For | Didn't Address |
|------|------|---------------|----------------|
| 2007 | Pocket | Saving articles | Whether you'd read or remember them |
| 2008 | Early RSS readers | Subscribing to more | Retaining anything |
| 2010s | TTS tools (Speechify et al.) | Getting through content faster | Comprehension or retention |
| 2015+ | Read-it-later apps | Managing the backlog | The backlog was the wrong problem |
| 2020+ | AI summaries | Getting answers faster | Whether you'd built knowledge you could use |

Nicholas Carr documented the deep reading crisis in 2010, in The Shallows, without any ChatGPT to blame. Maryanne Wolf spent years afterward watching digital habits erode the ability to read deeply, and wrote Reader, Come Home in 2018, still before large language models were in consumer hands.

AI didn't create the passive consumption problem. It inherited the infrastructure of a passive consumption culture and made the consequences impossible to ignore.

The reframe matters because it changes what you should do about it. If the problem is AI specifically, the answer is to use less AI. If the problem is tool design across six decades of building for access instead of understanding, the answer is different: you need tools built for the right goal.

What the AI-Specific Research Actually Shows

This doesn't mean AI has no distinct effects. The research since 2024 has uncovered some patterns worth knowing.

A Microsoft and Carnegie Mellon study presented at CHI 2025 surveyed 319 knowledge workers and found a significant negative correlation between confidence in AI and critical thinking engagement. The more someone trusted AI to handle thinking, the less independent critical thinking they applied. This wasn't about competence. People who used AI most confidently were often doing less evaluative work, not more.

A Corvinus University randomized controlled trial (2025) found students with unrestricted AI access showed 20 to 40 percentage point knowledge declines on offline tests compared to students without it. The catch: AI-permitted assessments showed the opposite. Students with AI access scored higher. The contradiction resolves when you understand what's being measured. AI-permitted tests measure access to information. Offline tests measure whether the knowledge is actually yours.

The Anthropic Education Report (2025) analyzed 574,740 student-AI interactions and found 47% were direct answer-seeking. Students offloaded creating (39.8%) and analyzing (30.2%) to AI. Those are two of the highest-order cognitive tasks in Bloom's taxonomy. The exact tasks that build the deepest understanding. They're also the ones that feel hardest, which makes them the most tempting to bypass.

One pre-print from MIT Media Lab (Kosmyna et al., 2025) made headlines with a finding that participants who used ChatGPT for essay writing showed lower neural connectivity than those who wrote without AI assistance. The study is N=54 and not yet peer-reviewed, so treat it as suggestive rather than conclusive. But the direction is consistent with everything else: when tools do the thinking, the thinking atrophies.

The more durable finding is the expertise split.

The Expertise Duality: Why AI Doesn't Affect Everyone the Same Way

The most replicated and theoretically grounded finding in the AI-and-cognition research isn't that AI makes everyone worse. It's that it affects experts and novices in opposite ways.

Experts use AI as an amplifier. They bring prior knowledge to the interaction, evaluate what the AI produces against what they already know, identify errors and gaps, and synthesize toward something better than either they or the AI would produce alone. Their cognitive effort stays high. Their output improves.

Novices use AI as a bypass. They come without foundational knowledge to evaluate the output against. They accept answers that seem plausible. They skip the reading, the working-through, the confusion that would have built understanding. Their cognitive effort drops. Their learning stops.

The variable isn't the tool. It's whether the user has enough prior knowledge to know if the output is any good.

This creates a compounding problem. To use AI well, you need foundational knowledge. To build foundational knowledge, you need to actually learn things. Learning things requires the cognitive effort AI is most tempting to replace. The novice who uses AI as a bypass isn't just getting less from AI than the expert. They're also preventing themselves from building the knowledge that would let them use AI productively.

The people who get the most from AI are the people who already know the most. Everything else follows from this.


What Happened to Reading Itself

The individual effects of AI on cognition are one half of the picture. The other half is what AI summaries have done to the act of reading at scale.

When Google's AI Overviews appear in search results, only 1% of users click through to the original source (Pew Research, 2025). Traffic from Google to news sites fell 26% in a single year (Nieman Lab/WAN-IFRA). 60% of Google searches now end without any click at all.

These numbers describe a world where AI summaries aren't changing how people read. They're routing around the reading entirely. The crisis has shifted. It's no longer only about whether reading produces lasting understanding. It's about whether reading survives as something people do at all.

Reading for fun has declined sharply too. In 1984, 35% of 8th graders read for enjoyment. By 2023, that figure was 14%. One documented pattern in student populations is a shift in language around reading itself: "Reading is a time waste that makes things harder rather than more understandable." That's not students being lazy. That's students who've learned they can get the answer without reading, concluding the reading has no value. The tool shaped the belief.

The concern, at a public level, is real and rising. A Pew Research survey of 5,023 U.S. adults in September 2025 found 50% were more concerned than excited about AI, up from 37% in 2021. 53% believe AI will erode creativity. The public is noticing something the research is still trying to measure.


The Research on What Actually Works

The cognitive offloading research, taken seriously, leads to a clear set of principles for tools designed to build knowledge rather than replace it. These principles have been studied for decades. They don't require AI to implement. But they explain why most reading tools fail and what a different design would look like.

Retrieval practice: Across more than 1,215 studies with an effect size of g = 0.50, testing yourself on material outperforms re-reading it. Every study replicated the core finding: the act of retrieving information from memory strengthens the memory trace. Passively re-reading does not. The implication for reading: finishing an article and closing the tab produces much weaker retention than finishing it and immediately trying to recall the main points from memory before checking.
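Here's what that looks like as a minimal sketch in Python, assuming a hypothetical notes.json file holding the key points you pulled from an article. The ordering is the whole point: the recall attempt comes before the reveal.

```python
import json

def retrieval_check(notes_path: str) -> None:
    """Prompt free recall of an article's key points, then reveal them.

    Retrieval-first ordering matters: the stored points are shown
    only after the reader has attempted recall from memory.
    """
    with open(notes_path) as f:
        article = json.load(f)  # e.g. {"title": ..., "key_points": [...]}

    print(f"Article: {article['title']}")
    print("Write the main points from memory, one per line (blank line to finish):")

    recalled = []
    while (line := input("> ").strip()):
        recalled.append(line)

    print("\nStored key points — compare against what you recalled:")
    for point in article["key_points"]:
        print(f"  - {point}")
    print(f"\nYou recalled {len(recalled)} point(s); the notes list "
          f"{len(article['key_points'])}. The gaps are the useful data.")

if __name__ == "__main__":
    retrieval_check("notes.json")
```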

The generation effect: Across 86 studies with an effect size of 0.40, information you actively produce is substantially more memorable than information you passively receive. When you write something in your own words, explain it, answer a question about it, your brain encodes it more durably than when you read someone else's summary. Brain imaging confirms this: generation activates the hippocampus and prefrontal cortex more strongly than passive reading.

This is why AI summaries are particularly problematic from a retention standpoint. A summary you read is passively received information. The generation effect would require you to produce the summary yourself, or at minimum to produce your own response to it. Reading a summary and remembering it are very different things.

Desirable difficulties: Bjork's research (1994, replicated across decades) shows that conditions that slow immediate performance typically improve long-term retention. Spaced practice. Interleaving. Reduced feedback. Retrieval with less cueing. These interventions feel harder. They produce worse performance during learning and better performance on long-term tests.

The design implication is uncomfortable: tools that make reading feel easiest often produce the weakest retention. Ease during learning and learning that sticks are frequently in tension. Every tool optimised for removing friction is potentially working against comprehension.
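Spacing, the most studied of these difficulties, reduces to a small scheduling rule. A rough sketch, with illustrative intervals rather than values from any particular study:

```python
from datetime import date, timedelta

# Illustrative expanding intervals (days); real systems tune these.
INTERVALS = [1, 3, 7, 14, 30, 90]

def next_review(successes: int, last_review: date) -> date:
    """Schedule the next retrieval attempt for an item.

    Each successful recall pushes the item further out, so every
    review happens near the edge of forgetting — the desirable
    difficulty. A failed recall resets successes to 0 upstream.
    """
    idx = min(successes, len(INTERVALS) - 1)
    return last_review + timedelta(days=INTERVALS[idx])

# Example: an item recalled correctly twice, last reviewed today,
# comes back in a week rather than tomorrow.
print(next_review(successes=2, last_review=date.today()))
```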

Scaffolding that fades: Static support creates dependency. When tools always surface key points, always produce summaries, always answer questions, with no reduction in support as the reader develops skill, they prevent the reader from developing independent competence. Support that fades as competence grows builds capability. Support that stays constant builds reliance.

The Pearson study (Fall 2025, approximately 400,000 students, 80 million interactions) found that a single AI tool interaction increased active reading threefold. But the critical condition was design: the AI was built to prompt engagement and questions rather than deliver answers. The same technology, oriented differently, produced the opposite effect.
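"Support that fades" has a concrete shape in code: a mapping from demonstrated competence to scaffolding that only ever decreases. A minimal sketch; the tiers and cutoffs below are illustrative assumptions, not studied values:

```python
def support_level(recent_accuracy: float) -> str:
    """Map a reader's recent recall accuracy to a scaffolding tier.

    The design rule is monotonic: support only decreases as
    demonstrated competence increases, so the tool never becomes
    a permanent crutch.
    """
    if recent_accuracy < 0.5:
        return "full"     # summaries, highlighted key points, guided questions
    if recent_accuracy < 0.75:
        return "partial"  # questions only; no pre-made summaries
    if recent_accuracy < 0.9:
        return "minimal"  # prompts to self-summarise, nothing pre-extracted
    return "none"         # reader works unassisted; tool only checks recall
```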

The 85% rule: Research from Princeton (Wilson and Shenhav, Nature Communications, 2019) derived mathematically that the optimal error rate for learning is approximately 15%, meaning success rates around 85%. Below roughly 70% accuracy, learners are in frustration territory with no memory benefit. Above 95%, the task is too easy to produce meaningful encoding. Productive difficulty is not random difficulty. It's calibrated difficulty.
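That calibration is implementable as a simple controller that nudges difficulty toward the 85% target. A sketch under stated assumptions: the 0-to-1 difficulty knob and the step sizes are illustrative, not values from the study.

```python
TARGET_SUCCESS = 0.85  # Wilson & Shenhav's derived optimum

def adjust_difficulty(difficulty: float, recent_success: float,
                      step: float = 0.05) -> float:
    """Nudge task difficulty toward an ~85% success rate.

    If the learner succeeds too often, the task is too easy to
    encode anything; below ~70%, they're in frustration territory,
    so the controller backs off faster than it ramps up.
    """
    if recent_success > TARGET_SUCCESS:
        difficulty += step      # too easy: harder questions, less cueing
    elif recent_success < 0.70:
        difficulty -= 2 * step  # frustration zone: back off quickly
    else:
        difficulty -= step      # slightly too hard: ease off gently
    return min(max(difficulty, 0.0), 1.0)
```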


What Tools Designed for Understanding Look Like

The difference between a tool built for consumption and a tool built for understanding isn't always visible from the outside. Both can involve audio. Both can involve highlighting. Both can involve AI. The distinction is in what the tool is asking your brain to do.

A consumption tool gives you the output and asks nothing in return. You listen, you skim, you get the summary. Cognitive effort stays low. Memory formation stays minimal. The tool has done the work. You haven't.

A tool built for understanding keeps you in the loop. It might read with you rather than read to you, keeping your visual and auditory processing engaged simultaneously. It might extract knowledge in a way that prompts you to verify what you understood, not just receive what the system generated. It might bring material back before you've forgotten it, asking you to retrieve rather than just review.

This is what engaged reading, built on the principles above, looks like in practice. Not more friction for its own sake. Friction in the right places. The kind that builds memory rather than bypassing it.

Mayer's modality principle (17 experiments, effect size d = 1.02) provides some of the clearest evidence for why listening while reading with synchronized highlighting produces better comprehension than reading alone. Processing the same content through two simultaneous channels, visual and auditory, uses more of the brain's processing capacity in parallel. The dual-channel processing also reduces the likelihood of mind-wandering, which is the point at which passive reading loses most of its retention.
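The mechanics behind read-along highlighting are simple enough to sketch. Assuming word-level timestamps (hard-coded here; a real system would take them from a TTS engine), the core operation is finding the word being spoken at the current playback time:

```python
import bisect

# Hypothetical word-level timestamps (seconds) from a TTS engine.
words = ["Memory", "is", "built", "through", "retrieval"]
starts = [0.00, 0.42, 0.61, 0.95, 1.30]  # start time of each word

def active_word(t: float) -> str:
    """Return the word being spoken at playback time t.

    bisect finds the last start time <= t — the word the reader's
    eye should be anchored to while the audio plays.
    """
    i = bisect.bisect_right(starts, t) - 1
    return words[max(i, 0)]

print(active_word(1.0))  # -> "through"
```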

For people who've been using AI summaries to get through content faster, the gap isn't just missing information from a particular article. It's a gradually accumulating deficit in the foundational knowledge that makes all future reading and all future AI use more productive. The choice to bypass reading today is a choice to make yourself less capable of benefiting from reading tomorrow.

Alexandria is built on this principle: that the tools worth using for reading aren't the ones that get you through content fastest. They're the ones that keep your brain engaged in the act of understanding, extract and structure what you've learned, and bring it back before it fades. If that sounds like more work than asking for a summary, it's because it is. But it produces something a summary doesn't: knowledge you can actually draw on.

Try Alexandria free


The Practical Question: What Should You Do Differently?

The research doesn't argue for avoiding AI. It argues for using it in ways that preserve rather than replace cognition.

A few specific changes the evidence supports:

Read the original before you read the summary. Form your own view first. Then use AI to pressure-test it, find gaps, or go deeper. This sequence keeps your cognition active. Reversing it makes you a consumer of someone else's thinking.

Practice retrieval after reading. Close the tab after finishing an article. Write down the three most important points from memory. Then check. The gap between what you remembered and what was actually there is the most useful data you'll get about whether something actually landed.

Use AI to generate questions, not answers. Asking AI to summarise material keeps you passive. Asking AI to generate hard questions about what you just read, then trying to answer them, keeps you active (a sketch of this pattern appears below). One builds comprehension. The other mimics it.

Build before you augment. The expertise duality is the central finding. You cannot use AI well without foundational knowledge to evaluate it against. Building that foundation means reading, thinking, struggling with hard material, and retaining what you learn. That work cannot be outsourced without also outsourcing the competence.
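To make the "questions, not answers" pattern concrete, here is a minimal sketch assuming the OpenAI Python SDK; the model name is an arbitrary choice, and any chat-completion API would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def quiz_me(article_text: str, n: int = 3) -> str:
    """Ask the model for hard questions about the text — no answers.

    You answer from memory first; only then, in a separate turn,
    does the model check your answers. The retrieval attempt
    comes before any feedback, which is what makes it practice.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works
        messages=[{
            "role": "user",
            "content": (
                f"Here is an article I just read:\n\n{article_text}\n\n"
                f"Write {n} hard questions that test whether I understood "
                "it. Do NOT include the answers."
            ),
        }],
    )
    return response.choices[0].message.content
```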

The passive consumption problem is older than AI and larger than any single tool. But the solution isn't screen time restrictions or digital detox. It's tools and habits designed for understanding rather than consumption. The distinction is available to anyone who looks for it.


Frequently Asked Questions

What is cognitive offloading?

Cognitive offloading is using external tools to handle mental tasks: calculators for maths, GPS for spatial navigation, AI for writing and research. A consistent finding across decades of research: offloading improves immediate performance while reducing memory for the offloaded content. You get the answer. You don't build the knowledge.

Does AI actually make you dumber?

It depends on how you use it. Experts who bring prior knowledge to AI interactions, evaluate the output, and do their own thinking often get better results. Novices who bypass the reading and accept answers at face value learn less and build no foundation. The tool doesn't determine the outcome. Your relationship to the underlying work does.

What is the Google Effect on memory?

The Google Effect (Sparrow, Liu, and Wegner, 2011, Science) is the finding that when people know information is searchable, they don't encode it. They remember where to find it, not the fact itself. A follow-up found 30% of people had stopped attempting to answer simple questions from memory. The reflex to search had automated completely.

Is AI replacing reading at scale?

Yes. When Google's AI Overviews appear, only 1% of users click through to the source. Traffic from Google to news sites fell 26% in one year. 60% of searches end without any click. AI summaries aren't just changing how people read. They're routing around the reading entirely.

What is the expertise duality with AI?

Experts use AI as an amplifier: bringing foundational knowledge, evaluating output, doing their own thinking. Novices use it as a bypass: skipping foundational work, accepting outputs uncritically, building nothing they can draw on. The variable isn't the tool. It's whether the user has enough knowledge to evaluate what the tool produces.

How does GPS use relate to cognitive offloading?

A 2020 Nature Scientific Reports study (Dahmani et al.) tracked GPS reliance and spatial memory over time and found a longitudinal correlation of r = -0.68. The more reliably people used GPS, the weaker their internal navigation became. It's the clearest longitudinal evidence for how outsourcing a cognitive task degrades the underlying capacity over time.

What did the MIT brain connectivity study on ChatGPT find?

A pre-print from MIT Media Lab (Kosmyna et al., 2025) found that participants who used ChatGPT showed lower neural connectivity compared to those who wrote without AI assistance. The study is N=54 and not yet peer-reviewed. Treat it as suggestive rather than conclusive. But the direction is consistent with the broader cognitive offloading literature: when tools do the thinking, the brain does less of it.

What is the illusion of knowing created by AI and search engines?

People report feeling significantly more knowledgeable after getting an AI answer, even when they haven't absorbed anything. They blur the line between "I know this" and "I can find this." One is knowledge you can reason with. The other requires a connection and a tool. Storm et al. (2016) found 30% of participants had stopped attempting memory retrieval from their own minds entirely because reaching for a device had become automatic.

Why do students who use AI score higher on tests but know less?

AI-permitted assessments measure access to information, not ownership of it. When students can reference their tools, they score higher. When tests go offline, the knowledge isn't there. The Corvinus University RCT (2025) documented 20-40 percentage point declines on offline tests for students with unrestricted AI access compared to students without it.

What is the difference between a consumption tool and a tool designed for understanding?

A consumption tool delivers processed output and asks nothing of your cognition. A tool designed for understanding keeps you in the cognitive loop: it prompts retrieval, requires active engagement with material, structures knowledge for review, and builds the foundational knowledge you need to evaluate further input. The distinction is whether the tool does the thinking or helps you do it.


Related reading: How to Actually Remember What You Read | Why You Forget Articles Within a Week | The Science of Reading Retention | Why Your Brain Gives Up After 3 Paragraphs | Screen Time and The Aim Problem