Personalization: I don’t think it means what you think it means

The tech industry loves to sell us on two concepts: "Personalization" and "Intelligence". But if you actually look at the data, the tools we are building are often just highly efficient mirrors. They reflect our personalities back to us, they speed up our busywork without actually teaching us anything, and when we try to test how "smart" they are, we just give them the hardest trivia tests we can find.

This week, we look at three papers that reveal the gap between the marketing brochures of the AI revolution and the messy reality of cognitive science.

<<Support my work: book a keynote or briefing!>> Want to support my work but don't need a keynote from a mad scientist? Become a paid subscriber to this newsletter and recommend it to friends!

Research Roundup

Reading Digital Tea Leaves

AI can now accurately assess your personality from your casual writing. (Spoiler: I did this 10 years ago with LinkedIn profiles).

Contemporary personality assessments usually rely on long, self-report psychometric surveys, but a new study shows that commercial LLMs (like ChatGPT and Claude) can accurately score your "Big Five" personality traits just by reading your spontaneous streams of thought or video diary transcripts. The models performed as well as, or better than, “established benchmarks”. The best performance didn’t come from any one model but from averaging across all of them [1].

First of all, welcome to the party! I was doing this with LinkedIn profiles way back in 2014. My models required bespoke engineering for every construct instead of a simple zero-shot prompt, but the principle is the same: your personality is ubiquitously woven into the fabric of your daily digital exhaust.

But just because an AI can perfectly map your psychological context doesn't mean it will use that information to help you. A machine that knows exactly who you are is just as likely to optimize for your weaknesses (sycophancy, engagement) as it is to optimize for your growth. "Understanding" you is not the same as augmenting you.

And of course "knowing" the nonexistent average person doesn’t mean it truly knows you.

Students Trapped in an EdTech Exoskeleton

AI tutors make students faster and more engaged—do they actually learn anything?

A new study from a major EdTech platform examined what happens when you give thousands of students an AI tutor to help them debrief math problems. The results look great on a slide deck: student engagement grew (they “completed 36% more math problems”) and efficiency slightly improved (“they spent 3.9% less time per problem” with higher accuracy).

But then comes the actual purpose of education: learning. The authors claim to find "evidence of" long-run skill development, but in academic speak "evidence of" translates to, "It wasn't statistically significant, but we really want it to be true."

This is the classic "Efficiency Lie". Efficiency, engagement, and a low-end performance boost are exactly what you expect from an LLM. But if the student cannot replicate that performance without the AI, you haven't taught them a skill. You've just strapped them into a cognitive exoskeleton.

When you remove the productive friction of learning, the student doesn't change. If there is no clear boost in non-agent performance, then this isn't a pedagogical breakthrough. It’s just an automated crutch.

This has been found in educational technology again and again. Learning isn’t an engagement problem to be gamified away. AI can play a huge role in building an exceptional mind, but that promise will never be met by learning tools that only create the illusion of knowing.

AIs are good at Jeopardy—now what?

The ultimate test for AI reveals we still don't understand intelligence.

LLMs are maxing out all our standard benchmarks, so some researchers published "Humanity’s Last Exam" (HLE) in Nature. It consists of 2,500 expert-level, closed-ended questions across dozens of subjects (math, humanities, science). The questions have unambiguous, easily verifiable answers that can't just be Googled. As intended, state-of-the-art LLMs performed terribly.

I hate this benchmark. It represents everything wrong with how we view cognition.

By relying entirely on "well-posed problems"—“each question has a known solution that is unambiguous and easily verifiable”—this benchmark forces both human and artificial intelligence into the realm of the well-posed. It actively disincentivizes meta-learning and hybrid intelligence. We are testing these vastly complex associative engines as if they are contestants on an expert-level episode of Jeopardy.

If brilliance is whatever fits in a fully measured box, we ignore so much of what makes human and artificial intelligence special.

True intelligence, especially the kind that changes the world, lives in the ill-posed problems. It lives in the messy, uncertain spaces where there is no right answer, only hypotheses and exploration. If we only measure AI by its ability to regurgitate expert consensus, we will never learn how to use it to explore the unknown.

Media Mentions

New love for 𝑹𝒐𝒃𝒐𝒕-𝑷𝒓𝒐𝒐𝒇…this time in #India! “Raising ‘robot-proof’ kids: Why creativity and curiosity matter more than ever”.

No wonder I got a bunch of new Indian followers this morning :)

Don’t forget to read Robot-Proof: When Machines Have All The Answers, Build Better People!

Follow me on LinkedIn or join my growing Bluesky! Or even…hey, what's this…Instagram?

SciFi, Fantasy, & Me

Looking for a dose of the most wonderful weirdness? The West Passage by Jared Pechaček trusts its own strangeness completely. In a crumbling, impossible castle tended by women who have forgotten what they're guarding, Pechaček has built a world with the logic of a dream and the texture of a tapestry. If you find yourself uncertain whether you understand it, lean in.

Stage & Screen

  • April 13, Online: Ethical Tech, Realist Management
  • April 14, Seattle: I'll be keynoting at the AACSB Business School Conference.
  • April 15, NYC: It's a public "book reading" for Robot-Proof at P&T Knitwear.
  • April 16, NYC: A private event in Brooklyn. The setting is boxing, topic is AI, but I'll make it about us.
  • April 16, NYC: Yes, it's a busy few days in NYC. This time I'm joining The Ethical Tech Project for a fireside chat.
  • April 16, Paraguay: More fun with Singularity University.
  • May 12, Online: I'll be reading from Robot-Proof for The Library Speakers Consortium.
  • May 12, SF: We'll talk about collective intelligence, the neuroscience of trust, and how dumb I have to be to be launching my 13th company.
  • May 14, Miami: TEDxMiami
  • June 9-10, London: London Tech Week!
  • June 11, Luxembourg: How Europe (and even some of its smallest states) can compete and grow in a trade environment dominated by zero-sum leaders
  • June 12, Denver: GlobalMindEd
  • June 18, Stockholm: The Smartest Thing on the Planet: Hybrid Intelligence
  • October, Toronto: The Future of Work...in the Future

Vivienne L'Ecuyer Ming

Follow more of my work at
Socos Labs
The Human Trust
Possibility Institute
Optoceutics
Kennedy Human Rights Center
UCSD Cognitive Science
Crisis Venture Studios
Inclusion Impact Index
Neurotech Collider Hub at UC Berkeley
UCL Business School of Global Health