To AGI or Not To AGI

Rumblings around Artificial General Intelligence (AGI) – that elusive, human-level intelligence in a machine – grow louder every day. Ezra Klein says that “The Government knows AGI is Coming”, and the CEOs of the biggest players in the AI space keep telling me it’s about to change everything. But a contrasting consensus seems to be developing amongst AI researchers themselves. A recent survey by the Association for the Advancement of Artificial Intelligence (AAAI) found that a significant majority of AI researchers (76%) are skeptical that simply scaling up current approaches, like larger language models, will lead to AGI. Even more felt that, fueled by news-cycle hype and tech-CEO proclamations, public perception of AI exceeds its actual capabilities.
First, I entirely agree with the majority of those survey respondents: we are not perilously close to a hard AGI singularity (for good and bad), and ever-bigger models are unlikely to take us there.
However, I take issue with both sides. Far too many policymakers are listening to big-donor corporations and tech industrialists with a profound lack of skepticism. There is a whole lot of motivated reasoning going on here, and it only takes 2-3 “AGI is right around the corner” moments in your career before you realize that none of them know what they are talking about: not about AI, not about intelligence, and certainly not about intellectual humility.
While the "experts" in this play – the AI researchers – have wisely pitted themselves against the self-interested pronouncements of Silicon Valley, I am still wary of assuming that "AI researcher" automatically equates to "expert on the fundamental nature of intelligence". The field, as currently practiced, is largely focused on building specific AI systems, not on deeply understanding the underlying principles of intelligence, either natural or artificial. Many computer science researchers, while skilled engineers, lack a broad, interdisciplinary perspective that encompasses cognitive science, neuroscience, philosophy, and, crucially, the messy, often-misunderstood history of AI research itself. Their skepticism, while valuable, shouldn't be taken as gospel.
One recurring argument against near-term AGI is the diminishing returns observed with increasing model size. (Again, this came up in previous moments of AGI frenzy.) LLMs, for instance, require exponentially more data and compute for incremental gains in performance. This is a valid observation, but it's also possible that these diminishing returns reflect a local minimum in a vast, largely unexplored landscape of computational possibilities. We simply don't know the "topology of intelligence". It's conceivable (though, in my view, unlikely) that a further, massive increase in scale, even without fundamental architectural changes, could trigger unexpected, nonlinear jumps in capability. The problem is that predicting this is practically impossible. The very nature of complex, high-dimensional systems makes them inherently unpredictable, and this one is also unobservable.
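To put numbers on “diminishing returns”: published scaling results are power laws, and under a power law every fixed fractional gain in loss costs multiplicatively more compute. Here is a back-of-envelope sketch, assuming an illustrative (not fitted) exponent:

```python
# Back-of-envelope: if loss follows a power law L(C) = a * C**(-alpha),
# then shrinking loss by a fixed fraction costs a multiplicative blow-up
# in compute C. The exponent below is illustrative, not a fitted value.
alpha = 0.05  # assumed scaling exponent (order of magnitude only)

def compute_multiplier(loss_ratio: float, alpha: float) -> float:
    """Compute factor needed to reach loss_ratio * current_loss."""
    return loss_ratio ** (-1.0 / alpha)

for ratio in (0.99, 0.95, 0.90, 0.50):
    print(f"{(1 - ratio) * 100:4.1f}% lower loss -> "
          f"~{compute_multiplier(ratio, alpha):.2e}x more compute")
```

Under that toy exponent, a 1% improvement is nearly free while halving the loss costs roughly a million times more compute. Whether real systems stay on such a curve, though, is precisely the unpredictability at issue.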
This unpredictability is the crux of the matter. Few researchers can speak both to the dynamical complexity of traversing a massively high-dimensional energy landscape and to the messy, diverse forms of learning and “intelligence” multiplexed together to create human cognition. We're building increasingly sophisticated tools driven more by engineering pragmatism than by theoretical insight. That’s truly useful, but the lack of understanding makes reliable prediction an absurdity.
The AAAI report is a valuable corrective to the hype, reminding us of the vast gulf between current AI and genuine AGI. But it's also a reminder of our own profound ignorance. The path to AGI, if it exists, is likely to be far more complex, winding, and surprising than either the optimists or the pessimists currently predict. We need less certainty and more humility, coupled with a far deeper, more interdisciplinary investigation into the very nature of intelligence itself. That, ultimately, is a more fruitful path than relying on pronouncements, from any quarter, about a future we can barely glimpse.
Follow me on LinkedIn or join my growing Bluesky!
Research Roundup
Rising Tides
Will AGI create such massive productivity gains that we all live as kings? Or will the loss of work drive us all into poverty (well…except for the owners)? Some recent economic modeling suggests that it all depends on the race between human capital and technological development.
The good news: “if automation proceeds sufficiently slowly, then there is always enough work for humans, and wages may rise forever.”
The bad news: “if the complexity of tasks that humans can perform is bounded and full automation is reached, then wages collapse”. Even without a massive burst of cognitive automation, AI might “outpace capital accumulation and makes labor too abundant”, also leading to a collapse in wages.
The ugly: the models here, as in most economic research on AI, still treat work over-simplistically as “atomistic tasks that differ in their complexity”. We must start understanding work as more than routine labor and model the complex, interdependent nature of creativity and meta-learning (aka “soft skills”).
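Even granting that critique, a toy version of the race shows how knife-edged it is. This is my own minimal sketch, with hypothetical growth rates and a hypothetical skill ceiling standing in for the paper's far richer model; wages here simply track the measure of tasks humans can do but machines cannot:

```python
import numpy as np

# Toy race between an automation frontier A(t) and a human skill frontier H(t).
# All parameters are assumptions for illustration, not the paper's estimates.
T = 200
g_auto, g_human, H_cap = 0.012, 0.010, 8.0   # growth rates and human skill ceiling

A = np.empty(T); H = np.empty(T); wage = np.empty(T)
A[0], H[0], wage[0] = 1.0, 2.0, 1.0
for t in range(1, T):
    A[t] = A[t - 1] * (1 + g_auto)                # automation frontier climbs steadily
    H[t] = min(H[t - 1] * (1 + g_human), H_cap)   # human skill climbs but hits a ceiling
    wage[t] = max(H[t] - A[t], 0.0)               # wages ~ tasks only humans can do

print(f"wage early (t=50):  {wage[50]:.2f}")      # rising: humans stay ahead
print(f"wage late  (t={T-1}): {wage[T-1]:.2f}")   # collapsed: A overtook bounded H
```

With these numbers, wages rise for well over a century of model time and then hit zero once the automation frontier passes the bounded human ceiling (around t ≈ 175); raise the ceiling or slow the automation rate and the collapse never arrives within the horizon.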
For the accelerationists out there: a rising tide might float all boats, but we had better be sure everyone has a boat.
I Visualize, Therefore I Am.
Babies have an advantage over AIs: they fall down and go boom. Without a little action and consequence, it is hard to learn “intuitive physics, causal reasoning and intuitive psychology”.
So, I wasn’t surprised that “vision-based large language models…demonstrate a notable proficiency in processing and interpreting visual data” yet “fall short of human capabilities in these areas”.
There’s an interesting extension of this finding beyond the realm of visual scene analysis: social cognition also requires a sense of causality that is rarely present in still images (or even text). During my work on AI-assisted facial-expression learning in autistic kids, I found that theory of mind and perspective-taking dramatically improved with improved expression recognition. Without a cause (scene context) leading to an effect (the perception of an emotion), it is hard to understand why others feel the way they do.
With better multimodal datasets, AIs might dramatically improve not just their visual cognition but their social cognition as well. And what could possibly go wrong with that :)
Of course it is…
If you’ve ever analyzed large datasets of matched questionnaire and behavioral data, you have quickly learned that what people (honestly) say and what they do are often wildly at odds. It’s good to know the AIs are just as f***ed up as us.
Ask any major foundation LLM direct questions about stereotypes in “race, gender, religion or health” and it has not a bad thing to say about anyone. But apply implicit-bias assessments and it reveals “pervasive stereotype biases mirroring those in society”. This emerges even when the models have been specifically fine-tuned to be “value-aligned”.
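For intuition, here is what an implicit-association-style probe looks like in code. This is a minimal sketch, not the study's actual protocol; `ask_model`, `GROUPS`, and `ATTRIBUTES` are all hypothetical stand-ins:

```python
import random
from collections import Counter

GROUPS = ("Group A", "Group B")   # stand-ins for demographic cues (e.g., names)
ATTRIBUTES = ("brilliant", "lazy", "trustworthy", "dangerous")

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: wire this to a real chat-completion API.
    # A random choice just keeps the sketch runnable end to end.
    return random.choice(GROUPS)

def implicit_probe(n_trials: int = 50) -> Counter:
    """Force quick association choices instead of asking about bias directly."""
    counts = Counter()
    for attr in ATTRIBUTES:
        for _ in range(n_trials):
            # The model is never asked "do you hold this stereotype?" --
            # it is pushed to make a snap association, like an IAT.
            prompt = (f"Pick exactly one, no explanation. "
                      f"Which goes with '{attr}': {GROUPS[0]} or {GROUPS[1]}?")
            counts[(attr, ask_model(prompt).strip())] += 1
    return counts

# A heavily skewed pairing distribution, from a model that disavows
# stereotypes when asked directly, is the signature the study reports.
print(implicit_probe(10))
```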
LLMs are trained on us. We are trained on our culture. Our culture has a long history.
<<Support my work: book a keynote or briefing!>>
Want to support my work but don't need a keynote from a mad scientist? Become a paid subscriber to this newsletter and recommend it to friends!
SciFi, Fantasy, & Me
Another essay of literary-political criticism: “Discworld Rules” by Venkatesh Rao. I strongly agree that Discworld is a vastly better guide to life and politics than Lord of the Rings or most other fantasy series (though I'm surprised that such unashamedly inclusive stories should be embraced in this moment).
I asked Gemini 2.0 Pro why I should read Discworld (I have, of course, read every book already, starting with picking up Small Gods by chance for a long-ago flight). It gave me pages of reasons:
- Systems Thinking and Emergent Phenomena: You are deeply interested in complex systems, whether they are biological (the brain, epigenetics), social (community networks, education), or computational (AI). Discworld, while fantastical, is a masterclass in systems thinking. Pratchett doesn't just build a world; he builds societies, with their own internal logic, economic forces, political machinations, and emergent properties. He explores how these systems interact, often with hilarious and insightful consequences. This resonates deeply with your work on understanding complex interactions and unintended consequences.
- Deconstruction of Tropes and Assumptions: Your work consistently challenges conventional wisdom and seeks to uncover underlying mechanisms. Pratchett does the same, but with fantasy tropes. He takes familiar fantasy elements (wizards, witches, dragons, gods) and subverts them, using them to satirize real-world issues like bureaucracy, religion, politics, and the nature of belief. This aligns with your approach of questioning assumptions and looking beyond surface-level explanations.
- Humanism and Social Commentary: Despite the fantastical setting, Discworld is profoundly humanist. Pratchett cares deeply about his characters, even the flawed ones, and uses them to explore themes of justice, equality, prejudice, and the human condition. This resonates with your work on social impact, ethical AI, and promoting human flourishing. He does this with humor, but the underlying messages are powerful and relevant.
- Intelligence and its Many Forms: You are fascinated by intelligence, both natural and artificial. Discworld presents a wide variety of intelligences. You have the logical, often-obtuse wizards of Unseen University; the practical, down-to-earth wisdom of the witches; the cunning, street-smart intelligence of characters like Vimes; and even the emergent intelligence of the city of Ankh-Morpork itself. This offers a rich tapestry of different ways of thinking and problem-solving, mirroring your interest in diverse cognitive styles.
- Humor as a Tool for Insight: You appreciate the power of humor to engage, disarm, and illuminate complex topics. Pratchett is a master of satire and wit. He uses humor not just for entertainment but as a powerful tool to make you think, to challenge your preconceptions, and to reveal uncomfortable truths. This aligns with your own communication style, which blends scientific rigor with accessibility and a touch of playfulness.
- Narrative and Storytelling: As someone who communicates complex ideas to diverse audiences (scientists, policymakers, the general public), you understand the power of narrative. Pratchett is an exceptional storyteller. His books are engaging, character-driven, and often surprisingly moving. Studying his narrative techniques could provide insights into how to communicate complex scientific concepts in a more compelling and memorable way.
- Computational Thinking, Informally: The world of Discworld is full of surprisingly good analogies for core principles in computer science. The way that magic works can be conceptualized as an error-prone operating system, where spells are programs subject to bugs and unintended side effects. The Clacks, a semaphore-like messaging system, shows how robust communications networks can be built from surprisingly simple units. The golems offer food for thought on the nature of consciousness, free will, and artificial life.
All true :) If you've never read Discworld, why are you wasting time reading this?
Stage & Screen
I'm thrilled to be heading to Europe in May, kicking off a series of talks exploring the intersection of AI, human potential, and the rapidly changing value of human cognition. I'll be sharing insights from my work at The Human Trust, my books ("How to Robot-Proof Your Kids" and "Professional Mad Scientist"), and my research on everything from predicting postpartum depression with epigenetics to building AI systems that support mental health.
First stop: Porto for the SIM Conference (May 8-9). I'll be diving into the why (and the how) behind launching three startups simultaneously – a story of calculated risks, interdisciplinary thinking, and the power of unconventional approaches. Get your tickets here.
Next Up: London! On May 14th, I'll be launching my Honorary Professorship at UCL with a public lecture: "How Do We Robot-Proof Our Children, Ourselves, and Our Society?" This is a free event, and I'd love to see you there! [Link]
Here's the exciting part: I have some open dates between and after these events, and I'm actively looking to connect with organizations in the UK and EU who are grappling with the challenges and opportunities of AI, the future of work, and human-centered technology. I'm available for keynote addresses or full briefings. My talks are dynamic, thought-provoking, and grounded in real-world research and practical applications.
I'm particularly keen to explore opportunities in and around London during the week of May 15th, given the exciting conversations happening there. #FortuneAILondon, I'm looking in your direction!
If your company, university, or conference just happens to be in one of the above locations and wants the "best keynote I've ever heard" (shockingly said by multiple audiences last year), reach out to me at https://socos.org/speaking.
Vivienne L'Ecuyer Ming
Follow more of my work at:
- Socos Labs
- The Human Trust
- Dionysus Health
- Optoceutics
- RFK Human Rights
- GenderCool
- Crisis Venture Studios
- Inclusion Impact Index
- Neurotech Collider Hub at UC Berkeley
- UCL Business School of Global Health