Disrupting A Downward Political Spiral [RR]
We treat political polarization like a permanent feature of the modern landscape—a fundamental fracture in society that is beyond repair. But is it? Or is it just a variable in a system we haven't bothered to tune?
In this week’s Research Roundup, we’re looking at three blockbuster papers that stop speculating and start measuring. They prove, with empirical rigor, that we can dial political polarization up or down at will, and that AI is already a highly effective persuader.
The results are a double-edged sword. On one hand, they confirm that human beliefs are not immutable; we can change them. On the other, they reveal that our brains are alarmingly susceptible to the appearance of logic—so much so that we are easily swayed by "fact-sounding" lies if they come at us fast enough.
Let’s look at the data.
Research Roundup
Magic Glasses Correct Political Myopia
We’ve long debated the chicken and egg of angry social media: Do angry people curate angry feeds, or do angry feeds create angry people? An amazing new paper shows that your social feed doesn’t just reflect your mood; it creates it. And it’s likely even influencing your vote.
Some gutsy researchers built a browser extension that intercepted users' Twitter (X) feeds in real time “during the weeks leading up to the 2024 presidential election.” Using an LLM, they scored posts for "partisan animosity" and re-ranked the feed to either amplify or suppress that content (a rough sketch of this re-ranking approach appears at the end of this item). The results were immediate and strictly causal:
- Turning it Down: Users exposed to less animosity felt significantly “warmer” toward the opposing party.
- Turning it Up: Users exposed to more animosity grew “colder” and more polarized.
The shifts after just one week “were comparable in size to 3 years” of natural polarization change in the US, “with no detectable differences across party lines.”
In some ways the human brain is a simpler input-output machine than we’d like to admit. Feed it a diet of rage, and it gets inflamed. Feed it a balanced diet, and it calms down. The most profound insight here isn't simply that the algorithms are "evil"; it's that the platforms already have the knobs to turn down the heat. The question is whether the ecosystem—and our own clicking habits—will ever incentivize them to do so.
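For the technically curious, here is a minimal sketch (mine, not the researchers' code) of what an animosity-based re-ranker might look like. The function names `score_animosity` and `rerank_feed` are my own, and the keyword heuristic is just a stand-in for a real LLM scoring call so the snippet runs on its own.

```python
# A minimal sketch of LLM-based feed re-ranking, not the paper's actual pipeline.
# Assumption: posts arrive as plain strings; score_animosity would normally wrap
# an LLM call that returns a 0-1 "partisan animosity" rating.

from typing import List


def score_animosity(post: str) -> float:
    """Rate partisan animosity from 0 (none) to 1 (extreme).

    Placeholder: swap in a real LLM request that returns just a number.
    Here a crude keyword heuristic keeps the sketch self-contained.
    """
    hostile_words = {"traitor", "evil", "destroy", "corrupt", "enemy"}
    words = [w.strip(".,!?").lower() for w in post.split()]
    return min(1.0, sum(w in hostile_words for w in words) / 3)


def rerank_feed(posts: List[str], mode: str = "suppress") -> List[str]:
    """Re-order a feed by animosity score.

    mode="suppress" pushes high-animosity posts down (the "turn it down" arm);
    mode="amplify" pulls them up (the "turn it up" arm).
    """
    scored = [(score_animosity(p), p) for p in posts]
    scored.sort(key=lambda pair: pair[0], reverse=(mode == "amplify"))
    return [p for _, p in scored]


if __name__ == "__main__":
    feed = [
        "The other party are traitors who want to destroy this country!",
        "Here's a thoughtful breakdown of both candidates' tax plans.",
        "Corrupt! Evil! They are the enemy of everything good.",
    ]
    print(rerank_feed(feed, mode="suppress"))
```

The point of the sketch is how small the intervention is: one scoring function and one sort, sitting between the platform's ranking and your eyeballs.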
Lie to me, Mr. Roboto
James Carville’s new book is titled Winning Is Everything, Stupid! Apparently, LLMs got the message…and they want your vote.
In a massive experiment across recent elections in the US, Canada, and Poland, researchers assigned participants to chat with an LLM advocating for a specific political candidate or policy.
The bots were significantly more persuasive than traditional video ads. They moved the needle on candidate preference and even convinced skeptical residents of Massachusetts to support legalizing psychedelics.
Contrary to the common narrative, the AI didn’t use Jedi mind tricks or emotional manipulation. It used “relevant facts and evidence”. It bombarded users with data.
One minor problem: “Not all facts and evidence presented” were true. And in a disturbing trend: “across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims”.
Sci-fi writers have worried for decades that superintelligent AIs would manipulate us with sophisticated psychological warfare. Instead, they just offer up plausible lies for the willfully naive on every side of an issue. The only difference from everyday demagogues is that the AI is polite, infinitely patient, and available 24/7. It’s the ultimate "Gish Gallop" machine.
Robotic Truthiness
If AI is persuasive, why? A massive study with 76,000 participants took apart the machine to find the levers. It tested model size, personalization, and rhetorical strategy. Conclusion: the louder and faster the AI talks, the more we believe it. [1]
The findings are a manual for modern sophistry:
- Personalization is Overrated: Tailoring the message to the user didn't matter much.
- Information Density is King: The most powerful lever was packing arguments with a high volume of factual-sounding claims. [2]
- The Accuracy Trade-Off: There was a direct, linear trade-off between persuasiveness and accuracy. The techniques that made the AI more convincing (like fine-tuning for persuasion) also caused it to hallucinate more false information.
Bigger models didn’t help nearly as much as fine-tuning, or just strongly prompting, models to pack in the “facts”...without caring too much about their factfulness.
Tuning an LLM for persuasion tunes it away from truth. This isn't because the AI is malicious—it's incapable of caring—but because truth is friction. For so many of us, so much of the time, belief is about comfort and identity, while reality is often messy and contradictory. The AI simply learns that to win our agreement, it must smooth out the rough edges of the truth until it fits the shape of our bias.
[1] Sound like any “thought leaders” you know?
[2] Sound familiar? Like maybe my post yesterday? Two papers on AI persuasion—one in Science, one in Nature—in the same week with complementary findings. Take notice.
Media Mentions

I don’t think I ever shared this interview with The Future of Storytelling podcast. The title was appropriate: “Vivienne Ming: Science is a Story, Revisited”
SciFi, Fantasy, & Me
𝑻𝒉𝒆 𝑺𝒑𝒂𝒄𝒆 𝑴𝒆𝒓𝒄𝒉𝒂𝒏𝒕𝒔 by Frederik Pohl and C.M. Kornbluth (1953)
I don’t usually go this far back with my recommendations, but this one was too obviously relevant to the themes of AI and politics I’ve written about this week. In a future ruled not by governments but by advertising agencies, Mitchell Courtenay is a star-class copywriter tasked with the ultimate sales job: convincing people to emigrate to Venus. The problem? Venus is a hellhole. It’s unlivable, barren, and lethal. But that’s just a branding challenge.
Written in the conformist 50s, this satire envisions a society where reality is entirely malleable, where "truth" is just a function of budget and ad copy, and where the feedback loop between consumer desire and corporate manipulation has spiraled into absurdity. It shows what happens when you optimize an entire civilization for "engagement" rather than reality.
Good thing it’s just a story :)
Stage & Screen
- January 20, Davos: After all these years, they are finally allowing me to speak in Davos at the World Economic Forum.
- February 2, NYC: My latest research on neurotechnologies for cognitive health and more.
- February 10, Nashville: Shockingly, I haven't visited Nashville since I was a little kid. On this trip I'll be looking at why Tennessee and North Carolina appear to have more entrepreneurship than all of their neighboring states combined.
- March 8, LA: I'll be at UCLA talking about AI and teen mental health at the Semel Institute for Neuroscience and Human Behavior.
- March 14, Online: The book launch! Robot-Proof: When Machines Have All The Answers, Build Better People will finally be inflicted on the world.
- Boston, NYC, DC, & Everywhere Along the Acela line: We're putting together a book tour for you! Stay tuned...
- Late March/Early April, UK & EU: Book Tour!
- March 30, Amsterdam: What else: AI and human I--together is better!
- plus London, Zurich, Basel, Copenhagen, and many other cities in development.
- April, Napa: The Neuroscience of Storytelling
- June, Stockholm: The Smartest Thing on the Planet: Hybrid Collective Intelligence
- October, Toronto: The Future of Work...in the Future