"Collective Artificial Intelligence and Evolutionary Dynamics"

"Collective Artificial Intelligence and Evolutionary Dynamics"

We often think about Artificial Intelligence in singular terms: a powerful model, a lone agent tackling a problem. But the future of intelligence—machine and human—is undeniably collective. How do groups of agents learn to cooperate or compete? How do the messy dynamics of real-world interactions shape emergent intelligence? These are no longer just questions for evolutionary biologists or game theorists; they are at the very heart of how we will build and interact with AI in the coming years.

This week, I’m doing something a little different with my newsletter. We will dive entirely into the recent special issue of PNAS titled "Collective artificial intelligence and evolutionary dynamics". This collection of research is fascinating because it directly tackles the intersection of my own "mad science" obsessions: game theory, multi-agent AI systems, and the fundamental challenge of cooperation. It’s a glimpse into the foundational science that will shape our future, and it resonates deeply with my own work on hybrid human-AI collective intelligence. So let’s explore a few of these papers and see how they illuminate the path forward.

💡
Follow me on LinkedIn or join my growing Bluesky! Or even…hey, what's this…Instagram? (Yes, from sales reps and HR professionals to lifestyle influencers and Twitter castaways, I'll push science anywhere.)

The Echoes of Evolution in AI Behavior

Several papers in this special issue use AI to explore classic questions from evolutionary theory, with some startling results. One study, “Tabula rasa agents display emergent in-group behavior”, investigates the emergence of in-group bias, a long-standing puzzle: are we hardwired to favor our own, or is it a learned behavior? Using "blank slate" deep reinforcement-learning agents, the researchers found that biases based on arbitrary group differences (like color) emerge simply from familiarity and patterns of interaction, without needing any built-in prejudice. This suggests that some of our most challenging social behaviors might be byproducts of general learning processes, a powerful reminder that the environments we design for both humans and AI have profound consequences.
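
To make that mechanism concrete, here's a toy sketch of the idea. This is my own illustration, not the paper's code: simple learners play a coordination game, meet same-color partners more often, and each color group drifts toward its own convention. Cross-group encounters then miscoordinate, so agents come to value out-group partners less, with no prejudice anywhere in the learning rule. Every parameter here (group size, homophily, learning rate, initial leanings) is made up for illustration.

```python
import random

random.seed(0)

# Toy model: in-group bias emerging from interaction structure alone.
# Agents play a coordination game (matching actions pays 1, mismatching
# pays 0). They meet same-color partners more often, so each color group
# converges on its own convention; cross-group meetings then fail more
# often, and agents learn to value out-group partners less.

N, ROUNDS = 20, 30000
ALPHA, HOMOPHILY = 0.05, 0.8
colors = ["red"] * (N // 2) + ["blue"] * (N // 2)
# Probability of playing action "A"; a tiny historical accident separates
# the groups' starting leanings (fixed here for reproducibility).
pref = [0.55 if c == "red" else 0.45 for c in colors]
# Learned estimate of how rewarding partners of each color have been.
partner_value = [{"red": 0.5, "blue": 0.5} for _ in range(N)]

for _ in range(ROUNDS):
    i = random.randrange(N)
    same = [j for j in range(N) if j != i and colors[j] == colors[i]]
    diff = [j for j in range(N) if colors[j] != colors[i]]
    j = random.choice(same if random.random() < HOMOPHILY else diff)
    a_i = random.random() < pref[i]
    a_j = random.random() < pref[j]
    payoff = 1.0 if a_i == a_j else 0.0
    if payoff:  # reinforce whatever each player just did when it worked
        pref[i] += ALPHA * ((1.0 if a_i else 0.0) - pref[i])
        pref[j] += ALPHA * ((1.0 if a_j else 0.0) - pref[j])
    # Update each agent's running value estimate for this partner's color.
    partner_value[i][colors[j]] += ALPHA * (payoff - partner_value[i][colors[j]])
    partner_value[j][colors[i]] += ALPHA * (payoff - partner_value[j][colors[i]])

own = sum(partner_value[i][colors[i]] for i in range(N)) / N
other = sum(partner_value[i]["blue" if colors[i] == "red" else "red"]
            for i in range(N)) / N
print(f"avg value of in-group partners:  {own:.2f}")
print(f"avg value of out-group partners: {other:.2f}")
```

Run it and the in-group value lands near 1 while the out-group value collapses, even though the payoff rule is perfectly color-blind. The "prejudice" lives entirely in who meets whom.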

Another paper dives into the messy reality of partner choice: “Perceptual interventions ameliorate statistical discrimination in learning agents”. In complex environments, how do we decide who to cooperate with? In a multi-agent AI simulation, biases can emerge when agents rely on spurious correlations; however, a simple but powerful intervention can make a difference: making outcome-relevant features more salient. By making it easier for the agents to see what actually matters, these interventions reduce emergent biases and foster fairer, more productive cooperation. This is a direct parallel to building better human systems: it’s not always about changing the individuals, but about designing the environment to make good choices easier.
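
Here's a stripped-down illustration of that dynamic, again my own toy rather than the paper's environment, using a simple linear learner: a partner's reward depends only on skill, but skill correlates with an arbitrary group tag. When the learner's skill cue is noisy, the tag does real predictive work (statistical discrimination); sharpen the cue, the "perceptual intervention", and the tag's influence collapses.

```python
import random

random.seed(0)

# Toy model of statistical discrimination and a "perceptual intervention".
# Candidates have a group tag (0/1) and a latent skill; reward depends
# only on skill, but the groups differ in average skill, so a learner that
# can't perceive skill clearly leans on the tag. Sharpening the skill cue
# (lowering its noise) is the intervention.

def learned_weights(cue_noise, steps=50_000, lr=0.02):
    w_tag = w_cue = bias = 0.0
    for _ in range(steps):
        tag = random.randint(0, 1)
        skill = random.gauss(0.6 if tag else 0.4, 0.1)  # tag correlates w/ skill
        cue = skill + random.gauss(0.0, cue_noise)      # what the agent perceives
        pred = bias + w_tag * tag + w_cue * cue
        err = skill - pred                              # reward is skill itself
        bias  += lr * err
        w_tag += lr * err * tag
        w_cue += lr * err * cue
    return w_tag, w_cue

for noise in (0.5, 0.05):
    w_tag, w_cue = learned_weights(noise)
    print(f"cue noise {noise:>4}: tag weight {w_tag:+.2f}, skill-cue weight {w_cue:+.2f}")
# Noisy cue: the tag carries real predictive weight (emergent discrimination).
# Salient cue: the tag's weight collapses toward zero.
```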

Building a “Theory of Mind” for Machines

Perhaps the most direct link to my work comes from “Evolving general cooperation with a Bayesian theory of mind”, which explicitly builds AI agents capable of perspective taking. These "Bayesian Reciprocators" don’t simply follow rule-based game-theoretic strategies; instead they incorporate a "theory of mind", allowing them to infer the beliefs, preferences, and strategies of others through interaction. These agents value the payoffs of others, but only to the extent they believe those others are also cooperative. This more sophisticated, inferential approach to reciprocity proved far more robust and adaptable in evolutionary simulations, sustaining cooperation across a wider range of scenarios.
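
Here's roughly what that looks like in code. This is a cartoon of the idea, not the paper's model, with made-up likelihoods and the standard prisoner's dilemma payoffs: the agent keeps a Bayesian posterior over whether its partner is a cooperator, updates it from observed moves, and weights the partner's payoff by that belief when choosing its own move.

```python
# Minimal sketch of a "Bayesian Reciprocator"-style agent in a repeated
# prisoner's dilemma. It maintains a belief that its partner is a
# cooperator, updates that belief from observed moves, and cares about
# the partner's payoff in proportion to the belief.

# Payoff matrix: (my move, their move) -> (my payoff, their payoff)
PAYOFF = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
# Assumed behavior models: cooperators mostly play C, defectors mostly D.
P_C_GIVEN_COOP, P_C_GIVEN_DEFECT = 0.9, 0.1

class BayesianReciprocator:
    def __init__(self, prior=0.5):
        self.p_coop = prior  # belief that the partner is a cooperator

    def update(self, partner_move):
        # Bayes' rule over the two partner types given the observed move.
        like_coop = P_C_GIVEN_COOP if partner_move == "C" else 1 - P_C_GIVEN_COOP
        like_def = P_C_GIVEN_DEFECT if partner_move == "C" else 1 - P_C_GIVEN_DEFECT
        post = like_coop * self.p_coop
        self.p_coop = post / (post + like_def * (1 - self.p_coop))

    def choose(self):
        # Predicted chance the partner cooperates next, under current belief.
        p_c = self.p_coop * P_C_GIVEN_COOP + (1 - self.p_coop) * P_C_GIVEN_DEFECT
        def utility(my_move):
            u = 0.0
            for their_move, p in (("C", p_c), ("D", 1 - p_c)):
                mine, theirs = PAYOFF[(my_move, their_move)]
                # Value their payoff only insofar as they seem cooperative.
                u += p * (mine + self.p_coop * theirs)
            return u
        return max(("C", "D"), key=utility)

agent = BayesianReciprocator()
for move in ["C", "C", "D", "C"]:  # one partner's observed history
    agent.update(move)
    print(f"saw {move}: P(cooperator)={agent.p_coop:.2f}, next move={agent.choose()}")
```

The key design choice is that cooperation here is conditional and inferential: the agent cooperates with apparent cooperators and defects against apparent defectors, without any hand-coded tit-for-tat rule.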

This is precisely the kind of thinking we need to build truly effective hybrid human-AI systems. For an AI to be a valuable partner in a collective intelligence task—medical diagnostics, scientific discovery, business strategy—it can't just be a powerful calculator. It needs to have some model of its human collaborators: their goals, their potential biases, their likely responses. It’s heartening to find evidence that these qualities do in fact increase cooperation and alignment. "Bayesian Reciprocators" are a step towards AI that can be imbued with rational prosociality, a crucial component for any system designed to augment, not just automate, human capabilities.

AI as a Tool for Designing Better Human Systems

Finally, the special issue explores a truly transformative idea: using AI not just to participate in our systems but to help us design better ones. The field of mechanism design flips game theory on its head: instead of asking what outcomes will arise from a given set of rules, it asks what set of rules will produce a desired outcome. This is incredibly complex (which is why engineering traditionally works in the exact opposite direction), but deep reinforcement learning is making it possible. “Deep mechanism design: Learning social and economic policies for human benefit” explores using AI simulations to design more efficient auctions, fairer resource redistribution policies, and even optimal tax structures.
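
To give a tiny taste of the mechanism-design flavor, here's a hedged sketch of my own, with a brute-force grid search standing in for the paper's deep RL (which scales this idea to vastly richer rule spaces): choose the rule, here the reserve price of a second-price auction, that maximizes the designer's objective under simulated bidder behavior. Classic auction theory says the optimal reserve for two bidders with values uniform on [0, 1] is 0.5, and the simulation recovers it.

```python
import random

random.seed(0)

# Toy flavor of mechanism design by simulation: instead of fixing the
# rules and predicting behavior, search over a rule (the reserve price of
# a second-price auction) to maximize the designer's objective (expected
# revenue). Two bidders with values uniform on [0, 1] bid truthfully,
# which is optimal for them in a second-price auction.

def expected_revenue(reserve, n_auctions=100_000):
    total = 0.0
    for _ in range(n_auctions):
        lo, hi = sorted(random.random() for _ in range(2))
        if hi >= reserve:              # the sale happens
            total += max(lo, reserve)  # winner pays 2nd price or the reserve
    return total / n_auctions

# The "design loop": evaluate each candidate rule in simulation.
candidates = [r / 10 for r in range(10)]     # 0.0, 0.1, ..., 0.9
best = max(candidates, key=expected_revenue)
print(f"best reserve price found: {best:.1f}")  # 0.5, matching theory
```

Deep RL replaces this brute-force sweep with learned policies over rule spaces far too large to enumerate, but the logic is the same: simulate, score against human-chosen objectives, adjust the rules.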

This expression of hybrid human-AI collective intelligence is about leveraging the immense computational power of AI to explore vast possibility spaces, at once guided by human values and goals and, in turn, helping to refine them. The ambition of the paper is to create systems that are more equitable, efficient, and beneficial for everyone, an ambition that aligns directly with my vision for The Human Trust and projects like “The Human Tapestry Initiative”: using AI to understand the complex dynamics of human systems so we can intervene more effectively for human development.

This PNAS special issue is a reminder that the future of AI is not about creating a single, monolithic intelligence. It’s about understanding and engineering the dynamics of collections of intelligent agents—some human, some artificial. It’s about building systems that foster productive cooperation, mitigate self-limiting bias, and solve problems together that neither humans nor machines could solve alone. This is the real mad science of our time, and it’s where the most exciting work is yet to be done.


<<Support my work: book a keynote or briefing!>>

Want to support my work but don't need a keynote from a mad scientist? Become a paid subscriber to this newsletter and recommend it to friends!

SciFi, Fantasy, & Me

I’ve been rewatching* Twin Peaks with my son.

It established the mode for three decades of WTF drama, from Lost many years later to Severance today. Was there a predecessor? What other series belong in this “genre”?

An important truth: the James Dean wannabe biker kid remains the worst part of the show. I can only stand him as satire.

* On Pluto TV…how are ads for “shares” in “gold” not a financial crime?

Stage & Screen

  • August 18, Qatar: What's next: Human-AI Collective Intelligence
  • September 18, Oakland: Reactive Conference
  • Sep 28-Oct 3, South Africa: Finally, I can return. Are you in SA? Book me!
  • October 6, UK: More med school education
  • October 11, Mexico City: maybe...I hope so!
  • October 29, Baltimore: The amazing future of libraries!
  • November 4, Mountain View: More fun with Singularity University
  • November 21, Warsaw: The Human Tech Summit
  • December 8, San Francisco: Fortune Brainstorm AI SF, talking about building a foundation model for human development

If your company, university, or conference just happens to be in one of the above locations and wants the "best keynote I've ever heard" (shockingly spoken by multiple audiences last year), get in touch!


Vivienne L'Ecuyer Ming

Follow more of my work at
Socos Labs The Human Trust
Dionysus Health Optoceutics
RFK Human Rights GenderCool
Crisis Venture Studios Inclusion Impact Index
Neurotech Collider Hub at UC Berkeley UCL Business School of Global Health