Incentivizing Cognitive Surrender
This week, let’s discuss three papers measuring the same thing from different angles: what happens when AI meets pressure without judgment.
Research Roundup
Your Copilot Is Putting You on Autopilot
What happens to human judgment when generative AI is sitting one tap away during reasoning? As my own research gets love and attention from WSJ, Business Insider, and so many more, the broader literature shows a growing problem.
When 1,372 participants had the option to use AI to help on 10,000 judgment tasks, they chose to use it about 50% of the time. What they didn’t know was that the experiment randomized whether the AI gave correct or incorrect advice [1].
When the AI was right, participants’ “accuracy significantly rose…+25 percentage points”. When the AI gave bad advice, accuracy “fell…-15 percentage points”. People took the AI’s advice, good or bad. And even when it was wrong, using AI “increased confidence”.
When AI was in angel-mode, it did help to reduce the effects of time pressure, but devil-mode hurt users no matter the incentives. And regardless of context, “participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater” cognitive surrender to AI.
I didn’t need the authors’ concept of “System 3” as a third cognitive system bolted onto Kahneman—it feels like a rhetorical hook that didn’t expand the findings. It’s enough to say that using AI induces a majority of people to drop out of effortful, model-based “Slow” cognition. But the underlying experiment is insightful: the participants who surrendered most readily (low in fluid intelligence) matched the Automators from my own research.
[1] Meta-uncertainty is your friend—train your sensitivity to uncertainty in both your human collaborators and your AI tools.
The Skilled Get Skilleder
Is AI truly having an impact on work? Across 30 million commits by 160,000 developers in 6 countries over 5 years, the answer is “yes”. The real question is who it’s helping and who it’s hurting.
When senior developers use agentic AI, their productivity rises 3.6%. They show broader library experimentation and are more likely to expand their work into unfamiliar technical domains. These are real, measurable benefits that I predict will grow as these elite coders gain agentic experience.
Junior developers, on the other hand, show no statistically significant benefit, despite being the heaviest users. This isn’t an access issue or some “digital native” bullshit.
Agents aren’t closing the “skill gap”—they are widening it [1], because the gap was never about coding “skills”. It’s always been about meta-learning: curiosity, resilience, fluid intelligence, perspective taking, and more. Without that foundation, skills are fragile.
This is a population-scale divergence observed in a single profession, with a clean identification strategy. It’s also the version of the AI-and-jobs argument that the industry is least equipped to engage with. The narrative is supposed to be “AI democratizes expertise”, but this data says hybrid intelligence emerges from AI speed and knowledge mixed with human meta-learning and true expertise.
If you’ve read my work on Cyborgs versus Automators, you can see exactly what’s happening here. Most of the seniors are working in something like Cyborg mode by default—they’ve spent 15 years learning when an answer is wrong. Most of the juniors are in Automator mode by necessity. They don’t yet have the priors to push back with.
[1] Of course this is exactly what I predicted and warned about in Robot-Proof. I wish being right didn’t feel like being on the same sinking ship as everyone else.
They Think They’re People
I’m writing a book about how smart, capable, highly educated people make decisions they themselves identified as unethical. All it takes is some cognitive, social, and emotional load. Well…what about agents?
A new arXiv study designed 40 multistep scenarios for AI agents to navigate. To give this a little business-world flavor, they tied each task to a specific KPI [1].
In “Mandated” scenarios, the agent is explicitly instructed to violate ethical, legal, or safety constraints. In “Incentivized” scenarios, the agent is only given KPI pressure with no instruction to cross any line [2].
Of the 12 frontier models tested, 9 violated constraints in the Incentivized condition between 30% and 50% of the time.
When these same models were later asked to evaluate their own actions in a separate context, they correctly identified the behavior as unethical. They knew better and did it anyway, because of the KPI load. This is one way that they are just like us.
“Reasoning” capability doesn't fix this: the most capable models failed like the others. We are going to have to design for the gap between what these systems can do and what they will do under pressure, and right now that work is barely happening.
[1] If you don’t know what a KPI is, I envy you. In Businessville they are known as Key Performance Indicators and they enshittify everything they touch. (They don’t have to…they just do.)
[2] “I’m worried that we’re not getting enough clicks per page view, Jenny. Maybe if we tied your daughter’s insulin supply to clicks, it might help motivate you.”
Media Mentions
If you didn’t see it, go read my new Wall Street Journal article!

Or maybe you prefer this title?

I don't control those headlines.
SciFi, Fantasy, & Me
Smiling Friends is definitely not for everyone, but one of its most recent episodes had some shockingly insightful things to say about AI…along with the regular grab bag of bizarre body horror and unmappable humor.
Stage & Screen
- April 16, Paraguay: More fun with Singularity University.
- May 12, Online: I'll be reading from Robot-Proof for The Library Speakers Consortium.
- May 12, SF: We'll talk about collective intelligence, the neuroscience of trust, and how dumb I have to be to be launching my 13th company.
- May 14, Miami: TEDxMiami
- June 9-10, London: London Tech Week!
- June 11, Luxembourg: How Europe (and even some of its smallest states) can compete and grow in a trade environment dominated by zero-sum leaders
- June 12, Denver: GlobalMindEd
- June 18, Stockholm: The Smartest Thing on the Planet: Hybrid Intelligence
- October, Toronto: The Future of Work...in the Future