Welfare & Revealed Preference [RR]
Standard economics assumes your "preferences" are stable. They aren’t. You are a messy, path-dependent trajectory.
This week: why Facebook is an addiction (not a "social good"), the death of the cover letter, and why your "soul" is a wave function. 🔬🧪
Research Roundup
Not very revealing
Does social media erode or lift wellbeing? It depends on who you ask…and by “who” I mean “when”. Let’s talk about “revealed preference”.
An influential paper from 2019 used “willingness to pay” to reveal that “the median user needed…about $48 [per month] to forgo Facebook”. For most economists, (rational) people paying for something is a “revealed preference” that measures that thing’s welfare value. So Facebook must be a social good, because people’s behavior reveals its positive value.
But is willingness-to-pay stable, or are we creatures of context and history? Recent papers show that people’s histories matter, often reversing willingness-to-pay from positive to negative. For example, “deactivating Facebook for the four weeks before the 2018 US midterm election…reduced…valuations of Facebook” and “increased subjective well-being”.
People are not simply their behaviors in the absence of context. As I’ve said before, we are a superposition of selves, with our behavior a far more malleable function of history than many economists, psychologists, or market researchers appreciate.
It’s worth noting that the Facebook-fasting experiment had other findings. Less social media led to more TV and more time with friends. It also reduced “factual news knowledge and political polarization”, a rather complex trade-off.
The Substitute & The Cyborg
Will AI help you land a job? A better question might be: what happens when everyone uses AI to get a job? Chaos!
Unsurprisingly, access to an AI tool for tailoring cover letters to job posts increased callbacks, particularly for those with “weaker pre-AI writing skills”. But with everyone writing solid (and solidly similar) letters, “correlation between cover letter tailoring and callbacks fell by 51%”.
Back in my days using early deep neural networks to analyze LinkedIn profiles (2012-2014-ish), I found that the most freeform, user-generated parts (e.g., “Bio” or posts) were the best predictors of future work performance, out-predicting “skills” and often even education and experience.
The new research found a similar effect: as cover letters lost their value, “workers’ past reviews…became more predictive of hiring”. Still, “greater time spent editing AI drafts was associated with higher hiring success”, but now it was less that the AI was doing the work than that the cyborg was making it their own.
You Are a Wave Function
Recently I've written that people are like a wave function, a superposition of many different personality "traits" and behaviors that make us different people under different contexts. It turns out we're even different in our differences.
Context and psychological traits appear to have “independent and multiplicative effects on decision-making”, but one group was the most sensitive to context: “individuals with higher trait apathy”.
While apathetic people were the most sensitive to context, highly (trait) motivated people were “more willing to exert effort in future” in response to experience with rewards. Different people are different people in different ways.
Even "moral preferences" (like altruism) are contingent on which version of us collapses out of the waveform. A study found that if you numb your own pain with medication, you numb your "preference" to help others…even when the pain medication was a placebo.
Here’s the meta-metacognitive answer: create the contexts that in turn create the best version of you (and that is likely different for different yous).
Media Mentions
I'm speaking at Davos today. Stay tuned for more!
SciFi, Fantasy, & Me
I very much enjoyed Catherine Webb’s The First Fifteen Lives of Harry August and The Sudden Appearance of Hope: fascinating modern fantasies, each a big, specific idea injected into a world we all recognize. Go enjoy them.
Her latest, Slow Gods, is none of that. It is undeniably far-future science fiction, though with a healthy dose of metaphysics that more than flirts with fantasy. Structurally, it shares that kind of protagonist with a big “specific idea”, but here he’s injected into space opera (though one with some clear metaphors for today). I’m still percolating on it, but I certainly recommend the read.
Stage & Screen
- February 2, NYC: My latest research on neurotechnologies for cognitive health and more.
- February 10, Nashville: Shockingly, I haven't visited Nashville since I was a little kid. On this trip I'll be looking at why Tennessee and North Carolina appear to have more entrepreneurship than all of their neighboring states combined.
- March 8, Basel: I'll be giving a keynote at the Health.Tech Global Conference 2026: "Robot-Proof: How Human Agency Drives Hybrid Intelligence & Discovery"
- March 8, LA: I'll be at UCLA talking about AI and teen mental health at the Semel Institute for Neuroscience and Human Behavior.
- March 14, Online: The book launch! Robot-Proof: When Machines Have All The Answers, Build Better People will finally be inflicted on the world.
- Boston, NYC, DC, & Everywhere Along the Acela line: We're putting together a book tour for you! Stay tuned...
- Late March/Early April, UK & EU: Book Tour!
- March 30, Amsterdam: What else? AI and human I, together is better!
- plus London, Zurich, Basel, Copenhagen, and many other cities in development.
- April, Napa: The Neuroscience of Storytelling
- June, Stockholm: The Smartest Thing on the Planet: Hybrid Collective Intelligence
- October, Toronto: The Future of Work...in the Future