The Enflattening
This week we explore the mathematical and behavioral proof that making information "free" and using AI to synthesize it actually destroys cognitive diversity and traps us in local optima. AI can be amazing; let's not ruin it by letting it ruin us.
Research Roundup
The Cost of Free Information
“Information must be free!” Free to compete in “the Marketplace of Ideas”! [1] Did anyone ever check if these founding philosophies of the internet truly increase our knowledge and wellbeing?
A new PNAS article has done just that. Strip out the trolls, the bots, the bounded rationality, the ad-driven algorithms. Give the crowd every epistemic advantage you can invent—it turns out that unconstrained information exchange reduces the accuracy of group beliefs…even among perfectly cooperative, perfectly Bayesian agents. As information flows faster, even perfectly rational agents get dumb.
This is a new insight into the Information-Exploration Paradox I've been documenting for years. Whether you model it as rising information rates or falling information costs, frictionless information flow degrades collective intelligence unless something pushes back. Homophilous networks—the default on every platform—make it worse.
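The paper's model is far more careful than this, but the core dynamic can be sketched in a toy simulation of my own construction (all names and parameters here are illustrative, not the authors'): when copying peers is free, agents stop drawing independent signals, and the crowd's estimate rests on less independent evidence.

```python
import random

TRUE_VALUE = 0.0   # the quantity the crowd is trying to estimate
N_AGENTS = 50
N_ROUNDS = 60
N_TRIALS = 200     # average many trials so the comparison is stable

def group_error(p_explore, seed):
    """Toy model (not the paper's): each round an agent either draws a
    fresh noisy private signal (prob. p_explore) or freely copies a
    random peer's belief. Cheap copying crowds out independent evidence."""
    rng = random.Random(seed)
    beliefs = [rng.gauss(TRUE_VALUE, 1.0) for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        new = []
        for i in range(N_AGENTS):
            if rng.random() < p_explore:
                # blend current belief with an independent observation
                new.append(0.5 * beliefs[i] + 0.5 * rng.gauss(TRUE_VALUE, 1.0))
            else:
                # information is "free": just adopt a random peer's belief
                new.append(beliefs[rng.randrange(N_AGENTS)])
        beliefs = new
    return abs(sum(beliefs) / N_AGENTS - TRUE_VALUE)

def mean_error(p_explore):
    return sum(group_error(p_explore, s) for s in range(N_TRIALS)) / N_TRIALS

frictionless = mean_error(p_explore=0.1)  # mostly free copying
frictionful = mean_error(p_explore=0.9)   # mostly costly exploration
```

In this sketch the copy-heavy crowd ends up less accurate on average than the explore-heavy one, even though every agent is cooperative and the copying itself is error-free: the group simply stops generating the independent evidence that averaging needs.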
The founding dogma of the internet is that information should be free and frictionless. This paper shows that the foundation is fundamentally flawed. It's also why our MOOC work found that breaking massive courses into small cohorts improved learning.
If a crowd of perfect AI agents can't survive zero-cost information sharing, your Twitter feed doesn't stand a chance. Your LLM-summarized-everything stack doesn't either.
Build productive friction back in.
[1] We probably should have been suspicious of “thought leaders” pitching a marketplace of free information. Next up are hugs of infinite sadness.
AI in the Loop
AI learns from us. We learn from AI. Then AI learns from AI, and we learn…from…and then AI… I wonder if anything might go wrong here?
Provably so. A new paper shows that a fast, global AI aggregator (i.e. an agent that reads everything and quickly updates its model) is structurally incapable of reliably improving collective learning. Not poorly designed—incapable by theorem.
The setup extends a standard model of social learning in which agents learn by averaging their neighbors' beliefs. It inserts an AI that trains on the population and feeds a synthesized signal back. If the aggregator updates too quickly, it amplifies transient noise before it can wash out.
In contrast, local agents trained on topic-specific or proximate data robustly improve learning everywhere. Replacing these specialist agents with a single global aggregator guarantees worse learning in at least one dimension.
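The paper's result is a theorem, not a simulation, but the timing intuition can be sketched in a DeGroot-style toy model of my own (a caricature, not the authors' setup): an aggregator that instantly rebroadcasts the global mean injects everyone's transient noise back into everyone's beliefs, while a slow, smoothed aggregator lets that noise wash out first.

```python
import random

TRUE_VALUE = 0.0
N_AGENTS, N_ROUNDS, N_TRIALS = 40, 300, 150

def consensus_error(ai_memory, seed):
    """DeGroot-style ring: each agent averages itself, its two neighbors,
    an AI broadcast, and a fresh noisy private signal. The AI reads every
    belief each round; ai_memory weights its previous broadcast
    (0.0 = updates instantly, near 1.0 = slow, smoothed aggregator)."""
    rng = random.Random(seed)
    beliefs = [rng.gauss(TRUE_VALUE, 1.0) for _ in range(N_AGENTS)]
    ai = sum(beliefs) / N_AGENTS
    for _ in range(N_ROUNDS):
        ai = ai_memory * ai + (1 - ai_memory) * (sum(beliefs) / N_AGENTS)
        new = []
        for i in range(N_AGENTS):
            local = (beliefs[i - 1] + beliefs[i] + beliefs[(i + 1) % N_AGENTS]) / 3
            private = rng.gauss(TRUE_VALUE, 1.0)  # fresh independent evidence
            new.append(0.4 * local + 0.4 * ai + 0.2 * private)
        beliefs = new
    return abs(sum(beliefs) / N_AGENTS - TRUE_VALUE)

def avg_error(ai_memory):
    return sum(consensus_error(ai_memory, s) for s in range(N_TRIALS)) / N_TRIALS

fast_ai = avg_error(ai_memory=0.0)    # rebroadcasts this round's noise
slow_ai = avg_error(ai_memory=0.95)   # smooths before broadcasting
```

On average the fast aggregator leaves the group further from the truth than the slow one: by echoing the current mean back immediately, it correlates everyone's errors, and correlated errors are exactly what averaging can't cancel.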
This is another example in a growing list of cases where less information paradoxically leads to better learning. It deserves some additional thought, both because massive LLMs may not solve all our problems and because those local experts sharing information look much more like human collective intelligence than like AI.
One caveat the authors don't address: their agents are passive. Real humans, properly trained, are not. The architecture can be broken and humans can still resist the damage, but only if we design for that resistance instead of assuming it.
Build slow. Build local. Keep humans weird.
A Standardized Mind
What happens to human cognition when hundreds of millions of people route their language and reasoning through the same handful of models?
A new review across linguistics, psychology, cognitive science, and computer science argues that LLMs reflect and reinforce dominant styles while eroding alternative voices and reasoning strategies. The mechanism is straightforward: by sheer weight of examples, models are pulled toward the dominant patterns in their training data.
When users rely on the same models across increasingly diverse contexts, convergence compounds and the cognitive landscape flattens.
Cognitive diversity is the substrate of innovation and collective intelligence. Different framings, different reasoning strategies, different linguistic structures are what let groups solve problems no individual can. Flatten the inputs to human thought and you lose the property of the group that made the group valuable.
Outsource language and reasoning, and what comes back is the centroid of the training distribution. The model does not produce your thought; it produces the average thought that resembled yours. Scale that across every writing task, coding task, and analytical task, and variance in the human population itself begins to collapse. [1]
Falling information costs produce less exploration in absolute terms and compound the winners' advantage. The new winners are the modal outputs of a few foundation models, reinforced every time a user accepts the suggestion instead of writing the sentence.
Defend diversity of thought. A collective intelligence is only as smart as the variety it can still draw on.
[1] I’ve written about this in previous newsletters. Look up “Slowed canonical progress in large fields of science”, for example.
Media Mentions
The wonderfulness continues! Though I've done my best to read nothing about 𝑹𝒐𝒃𝒐𝒕-𝑷𝒓𝒐𝒐𝒇 online, I'm told the review in the Financial Times was very positive. And an interview with me by my undergraduate alma mater, UCSD, topped their news releases :)
But perhaps most important to me is how many people have told me that 𝑹𝒐𝒃𝒐𝒕-𝑷𝒓𝒐𝒐𝒇 is funny! They felt like they were in the audience of one of my talks: a firehose of big ideas, realist hopefulness, and more than a little absurd snarkiness.
Here’s the interview: https://today.ucsd.edu/story/what-skills-do-humans-need-to-become-robot-proof-in-the-age-of-ai
Also, I have a podcast out with Rita McGrath. We first connected back in my Gild days studying what makes great employees. (Hint: it isn't being a pliant rule-follower.) Here's the tease:
Every productivity gain from AI carries a hidden tax: the friction you removed was doing cognitive work you didn't know about. The struggle to find the right word is how you learned to think. The second draft is where the argument got sharper.
I joined Rita McGrath on Thought Sparks to get into why productive friction is the feature that makes humans robot-proof — and why the data on Automators vs. Cyborgs should worry anyone betting on frictionless AI.
SciFi, Fantasy, & Me
𝗦𝗰𝗶𝗙𝗿𝗶𝗱𝗮𝘆: I previously recommended the book “There Is No Antimemetics Division”, which grew out of the SCP collaborative writing website. I recently stumbled across this cool video on YouTube from DUST: Sci-Fi Short Film "There Is No Antimemetics Division" | DUST | Starring Jasika Nicole
Check it out!
Stage & Screen
- April 28-30, Paraguay: More fun with Singularity University.
- May 12, Online: I'll be reading from Robot-Proof for The Library Speakers Consortium.
- May 12, SF: We'll talk about collective intelligence, the neuroscience of trust, and how dumb I have to be to be launching my 13th company.
- May 14, Miami: TEDxMiami
- June 9-10, London: London Tech Week!
- June 11, Luxembourg: How Europe (and even some of its smallest states) can compete and grow in a trade environment dominated by zero-sum leaders
- June 12, Denver: GlobalMindEd
- June 18, Stockholm: The Smartest Thing on the Planet: Hybrid Intelligence
- October, Toronto: The Future of Work...in the Future