Context, Context, Context

Everyone's thinking about AI & Work entirely wrong. When I say "everyone", of course, I'm just being lazy. And that's a hint about the nature of this problem.

I read two articles this morning that perfectly encapsulated the public's current, fractured understanding of AI. The first warned of a coming recession driven by AI displacing tech workers, particularly software developers. Yes, it spoke about tariffs and economic uncertainty, but it also explicitly cited jobs numbers tied to AI displacing tech workers, a cascading hollowing-out of demand for university education, and eventual social unrest. It was dark.

The second, looking at essentially the same data, celebrated. "It's an amazing age! A handful of brilliant people can now do what massive organizations once did, all thanks to AI as a co-pilot!" It envisioned an elite core of creatives, unburdened by drudgery, while everyone else lived off Universal Basic Income. Both perspectives, of course, spoke about "the Chasm"—my term for the growing divide between the creative elite and the rest—but they saw it as either an unavoidable tragedy or a glorious outcome.

Both of these perspectives are wrong, though not because they're wrong about AI's potential to substitute for people. Despite the underwhelming launch of GPT-5, LLMs and other commercial AI are only going to grow in impact. The technology is already capable of completely transforming work today. We're just not using it right.

The central flaw in both narratives is that they fundamentally misunderstand hybrid collective intelligence—what emerges from human-machine collaboration. All AI benchmarks measure models operating autonomously. And while everyone understands that people differ from one another, current AI implementations are overwhelmingly substitutive and low-impact, so human differences don't matter to them. It's the idea that AI "does the boring stuff so humans can do the creative stuff," except most of the current workforce doesn't have the training, discretion, or motivation to do the creative stuff. They're marketing a wand of wonder to users who just want a charm against boredom, and buyers who want Fantasia brooms.

We're witnessing the rise of the Jiffy Lube Economy, where work that once required years of expertise is deprofessionalized, performed by a lower-skilled workforce augmented by AI. This isn't just about jobs; it's about the erosion of human capital itself. When AI only does the "boring work" more efficiently, it doesn't free us for creativity—it just creates more boring work. Imagine AIs flinging emails back and forth: soon, we'll all be drowning in email summaries.

Hybrid collective intelligence is an emergent quality of the interaction of people and AIs. It is context. If organizations building and implementing AI aren't thinking in terms of this emergent capacity, then they are missing the true transformative use case. It’s not in automating the well-posed problems—the tasks with explicit right or wrong answers that machines now effortlessly master. The true transformation lies in augmenting our capacity to explore ill-posed problems—the messy, intractable unknowns where the questions themselves are undefined. AI knows everything, but it understands nothing. That's our job.

My terrible pitch to leaders requires a massive leap of faith: instead of laying off workers and automating routine tasks, they should eat the cost to retrain and reorient them towards elite-level creative work. We're talking about reconceptualizing human talent not as a static skill set, but as a dynamic force of explorers. This is a messy, complicated transformation. It's not about producing deliverable code or marketing resources on a schedule, but about fostering continuous exploration, turning entire teams into "mini-startups" searching for product-market fit or simply ideating. A few of these internal startups go live; most are just part of discovery.

Let’s nerd out and call this cognitive alchemy: the synergy that emerges from hybridizing creative, ill-posed human work with well-posed machine capabilities. Rather than simply automating routine, well-posed tasks, humans imbued with meta-learning and a comfort with uncertainty work collectively with AI to explore the intractable. We don't want AI to do our work for us; we want it to make us better at our work. We want it to make our work harder in ways that make us better.

A Concrete Example: The InnovateCo 2.0 Collective

Let's imagine a company, InnovateCo, a software firm that has embraced this vision. Their goal is not just to build products but to continuously explore the frontier of what's possible, driving what I will call Augmented Collective Intelligence.

Their R&D teams don't operate like traditional product lines. They're structured as a Cyborg Collective, a fluid organism of human and AI intelligences orchestrated by a central AI Matchmaker.

Phase 1: The Spark Forest (Ill-Posed Human Exploration)

The process begins with a deliberately ill-posed problem, playing to human strengths. Not "Design a new feature," but "How might we combat developer loneliness and foster a sense of creative play?" Human teams—engineers, designers, marketers—contribute raw, unfiltered, and anonymous thoughts into the system: personal stories, metaphors, fragments of ideas, "what if" questions. The LLM is strictly forbidden from offering solutions. Instead, it acts as a "Librarian of Weird Sparks," injecting orthogonal concepts—an ethnographic study of 19th-century Parisian salons, a technical paper on ant colony optimization, a poem about connection—actively preventing premature convergence and pushing human thought into genuinely novel territory.

Phase 2: The Greenhouse (AI-Mediated Matchmaking & Incubation)

This is where the Matchmaker AI goes to work. It continuously updates a model of the organization's implicit social network, inferring personality traits, communication styles, and measures of trust (e.g., "Who constructively critiques whom?"). The Matchmaker identifies nascent ideas and promising human contributors from Phase 1. It then intelligently gates connections, forming high-potential "incubation pods". It might create a temporary, private channel for Sarah, a junior designer whose ideas were abstract but highly novel, and Ben, a quiet senior engineer known for his patient, constructive feedback. Simultaneously, it prevents Sarah's raw idea from being immediately exposed to David, a lead engineer known to be skeptical of unproven concepts, predicting a high-friction, low-success interaction at this delicate stage.
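To make the gating idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the names, trait scores, trust values, threshold, and scoring heuristic are invented for illustration. A real Matchmaker would infer these from interaction data rather than hand-code them.

```python
from itertools import combinations

# Hypothetical sketch of the Matchmaker's gating step: score candidate
# pairs on inferred trust and complementary traits, then open private
# "incubation pods" only for high-potential, low-friction matches.

# Inferred profiles: novelty of recent ideas, patience as a feedback style.
profiles = {
    "Sarah": {"novelty": 0.9, "patience": 0.4},
    "Ben":   {"novelty": 0.3, "patience": 0.9},
    "David": {"novelty": 0.2, "patience": 0.2},
}

# Directed trust inferred from past interactions ("who constructively
# critiques whom?"), on a 0-1 scale.
trust = {
    ("Ben", "Sarah"): 0.8, ("Sarah", "Ben"): 0.7,
    ("David", "Sarah"): 0.2, ("Sarah", "David"): 0.3,
    ("David", "Ben"): 0.6, ("Ben", "David"): 0.5,
}

def pod_score(a, b):
    """High when traits complement each other and mutual trust is high."""
    complement = abs(profiles[a]["novelty"] - profiles[b]["novelty"])
    mutual_trust = min(trust[(a, b)], trust[(b, a)])
    return complement * mutual_trust

def gate_pods(threshold=0.3):
    """Open a private channel only for pairs above the threshold."""
    return [(a, b) for a, b in combinations(profiles, 2)
            if pod_score(a, b) >= threshold]

print(gate_pods())  # → [('Sarah', 'Ben')]
```

Under these toy numbers the Matchmaker pairs Sarah with Ben (complementary traits, mutual trust) while holding back the low-trust Sarah-David connection, mirroring the scenario above.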

Phase 3: The Forge (Human-AI Co-Refinement of Well-Posed Problems)

Once an idea within an incubation pod solidifies, the problem shifts from ill-posed to well-posed. The LLM's role flips. It is now tasked with convergent, normative work. The human team gives it a prompt like: "Take our incubated concept of a 'daily code-pairing lottery' and generate three detailed feature specifications, a draft project timeline, and a risk analysis report." The AI excels at this structured, optimization-focused task. The humans then act as expert editors and critics, evaluating the AI's well-structured output, using their domain expertise and intuition to select the best path forward, blending the AI's logical rigor with their own strategic judgment.

Phase 4: The Launchpad (Strategic Hub Engagement)

Only now are the human "hubs" (the VP, Lead Engineer) brought in. They are presented not with a single vision to approve, but with 3-5 battle-tested, diverse, and resilient concepts that have already demonstrated their value within the collective. The sentiment analysis AI provides data on which concepts generated the most excitement and constructive engagement during the exploration phase. The hubs' role has been transformed: they are no longer the source of the initial idea, but the strategic selectors and champions who decide which proven concept to pour resources into for exploitation. The final product is far more innovative and has buy-in from the entire team, because they co-created it.

This model is a symphony of human and machine intelligence, orchestrated in real-time. It moves beyond the simple "co-pilot" to a system where AI is the very medium of collaboration, constantly reshaping the network to maximize collective intelligence. The result? Not just a company that survives the age of AI, but one that actively invents it.

I’m only scratching the surface of the full vision of the cyborg collective I’ve been piecing together—hybrid RL-LLM systems for hypothesis exploration, concept-evolution tracing and dynamic incentive matching for productive failure, realtime collaborative brainstorming and prototyping with variant superposition, dynamic collective-intelligence optimization—but the nerd salad isn’t required to launch. It just requires the courage to try something completely different.

💡
Follow me on LinkedIn or join my growing Bluesky! Or even...hey, what's this...Instagram? (Yes, from sales reps and HR professionals to lifestyle influencers and Twitter castaways, I'll push science anywhere.)

Research Roundup

The Context That Launched 1000 Ships

We believe we read faces, seeing fear, joy, or sincerity in our friends, family, coworkers, and strangers. Perhaps we’re just reading the room.

In a series of 12 experiments, participants were shown authentic videos of people in intensely frightening situations. When they saw only the person's isolated face, they could not reliably identify the expression as fear. However, when they saw the context without the face—the body language, the environment—they “clearly and robustly perceived” fear.

It’s not just that context matters; it's that context is doing almost all the work. We believe we are reading faces, when in fact our brains construct a holistic perception of "fear" from myriad cues. More importantly, we then retroactively attribute these judgments to the most obvious focal point: the face. We are not aware of how we are making our own choices.

Our "gut feelings" about a candidate's confidence in an interview or a colleague's sincerity in a meeting are likely being driven by a host of contextual factors we aren't even aware of.

It fundamentally challenges any system that relies on an individual's ability to "read" a situation accurately, from law enforcement to judicial review.

And, it exposes the deep flaw in any model trained on isolated faces to infer job candidate quality. Such a system is not just inaccurate; it is attempting to replicate a fundamentally flawed human assumption.

We are a black box to ourselves. The stories we tell ourselves about how we make judgments are often just convenient fictions. True intelligence, both human and artificial, begins with the humility to question the very nature of our own perception.

Billboards fuck with your head

We believe our judgments are our own—when we make a choice or form an opinion, it's the result of our own internal logic and stable preferences. In reality, the very categories in our minds are being continuously and invisibly recalibrated by the world around us.

When young women were exposed to a “higher prevalence of thin bodies”, their concept of overweight “expanded to include bodies that would otherwise be judged as ‘normal’”, including their own.

Prevalence-induced concept shift does more than ruin the lives of many young women. Our brains constantly, and unconsciously, update their internal prototype of what is "normal" based on the statistical regularities of our environment.
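A toy simulation makes that mechanism concrete. This is not the study's method; it is a sketch under the assumption that a judge's prototype of "normal" is simply the running mean of recently seen examples, with the sizes, margin, and distributions all invented for illustration.

```python
import random

# Toy sketch of prevalence-induced concept shift: a judge labels a body
# "overweight" relative to the mean of what it has recently seen, so the
# category boundary drifts with the environment's statistics.

random.seed(0)

def judge_after(stream, probe, margin=5.0):
    """Label `probe` after adapting to the sizes seen in `stream`."""
    prototype = sum(stream) / len(stream)  # current notion of "normal"
    return "overweight" if probe > prototype + margin else "normal"

probe = 62.0  # the same body size, judged in two environments

balanced = [random.gauss(62, 4) for _ in range(1000)]     # mixed sizes
thin_skewed = [random.gauss(54, 4) for _ in range(1000)]  # thin-heavy media diet

print(judge_after(balanced, probe))     # judged against a balanced norm
print(judge_after(thin_skewed, probe))  # same body, shifted norm
```

The same probe flips from "normal" to "overweight" once the environment skews thin; nothing about the body changed, only the statistics it was judged against.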

Our perception of a "risky" investment, a "qualified" job candidate, or even a "fair" political argument is being subtly warped by the media diet we consume. We make choices within a cognitive landscape that is being continuously terraformed by the information to which we are exposed.

This is a direct challenge to the notion of the purely rational, autonomous self. So much of what we believe to be conscious choice is, in fact, an echo of the context that has shaped our perception.

There were “significant individual differences in sensitivity” to the effect of context on body judgments. Understanding those who were more resilient can point to means of inoculating others against these effects. That understanding is a step toward reclaiming agency—not by ignoring the context but by consciously choosing the inputs that build the kind of mind we want to have.

Emotion Without Emotion

Our aesthetic judgments, our sense of beauty, our emotional response to a landscape—these come from an ineffable place beyond mere computation. ...and if you believe that, I have 180 soul-imbued AIs to sell you.

Those “180 state-of-the-art deep neural network models”—AIs trained only on standard computer vision tasks, with no knowledge of emotion, beauty, or human experience—"looked" at a variety of images. Then, researchers decoded the network's internal activity, much like a neuroscientist reading brainwaves.

Without any additional training, the internal activity in these “purely perceptual models” could predict a majority of the variance in shared* human ratings of “arousal, valence, and beauty”. 

* The models didn’t predict variation between individuals, but I’m confident they could if each person's perceptual history could be used to fine-tune separate models.
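For the curious, that decoding step resembles fitting a linear readout (a "probe") on frozen features. The sketch below uses synthetic data, not the paper's models or ratings; it is only meant to show how "variance explained" by such a readout is measured on held-out images.

```python
import numpy as np

# Toy sketch of linear decoding: fit a readout from frozen "perceptual"
# features to ratings, then measure variance explained on held-out data.

rng = np.random.default_rng(0)

n_images, n_features = 200, 50
features = rng.normal(size=(n_images, n_features))  # frozen network activity

# Pretend shared ratings are mostly a linear function of those features,
# plus noise (a stand-in for unexplained individual variation).
true_readout = rng.normal(size=n_features)
ratings = features @ true_readout + rng.normal(scale=2.0, size=n_images)

train, test = slice(0, 150), slice(150, 200)
w, *_ = np.linalg.lstsq(features[train], ratings[train], rcond=None)

pred = features[test] @ w
r2 = 1 - (ratings[test] - pred).var() / ratings[test].var()
print(f"variance explained on held-out images: {r2:.2f}")
```

The point of the held-out split is that a readout can only explain test-set variance if the frozen features already encode something rating-relevant, which is exactly the study's claim about purely perceptual models.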

A significant portion of what we feel about an image—beauty, peace, or excitement—may be a direct, almost automatic byproduct of our visual system's effort to efficiently represent the world. The features our brains evolved to care about for survival (e.g., patterns that are complex but not chaotic) are the very same features that we find aesthetically pleasing.

This doesn't diminish the human experience of art or beauty. Instead, it suggests that the roots of our deepest feelings may be elegantly intertwined with the fundamental mathematics of perception.

Emotion isn't just an overlay on top of the senses; it may be an emergent property of sensing itself.

💡
<<Support my work: book a keynote or briefing!>> Want to support my work but don't need a keynote from a mad scientist? Become a paid subscriber to this newsletter and recommend to friends!

SciFi, Fantasy, & Me

?

Stage & Screen

💡
If your company, university, or conference just happens to be in one of the above locations and wants the "best keynote I've ever heard" (shockingly, spoken by multiple audiences last year)?

Vivienne L'Ecuyer Ming

Follow more of my work at
Socos Labs The Human Trust
Dionysus Health Optoceutics
RFK Human Rights GenderCool
Crisis Venture Studios Inclusion Impact Index
Neurotech Collider Hub at UC Berkeley UCL Business School of Global Health