Information Foraging

For too many AI utopianists, the conversation is only about what they can imagine people doing with AI, never about what people actually do. Only the most transformative, idealized use cases interest their “AI thesis”. To be fair, the AI dystopianists are just as obsessed with these imaginary worlds—they just focus on the other end of the distribution. Both sides suffer the imagination disease. The messy, complex, and far more interesting reality lies in the vast and heterogeneous middle where human psychology collides with machine intelligence.
The real story isn't about AI superintelligence or human choice; it's about the emergence of a new kind of hybrid intelligence. And like any intelligence, it is a product of both nature and nurture. For me, the question is: how does the nature of a human information forager interact with the nature of the algorithmic environment in which we now forage?
Let’s start with our own nature. We humans have a peculiar aversion to genuine exploration, especially when the territory is our own mind. Visiting a new city might be an adventure for many, but revisiting our own uncertainty is an acquired taste…to say the least. But to truly explore is to confront one's own misconceptions and uncertainties, and we are deeply wired to avoid this discomfort. Perhaps it’s a high-level form of "inhibition of return"—the attentional mechanism that steers us away from re-examining something we've already processed. Why explore again when I already have an answer, however flawed? This “narrow search effect” doesn’t just nudge us to ask biased questions; it drives us to actively avoid new questions that might challenge our safe answers. I see this clearly in a recent Anthropic report on how university students use Claude, their LLM. While students happily used Claude to get answers, create new work, or just have fun, very few used it to evaluate and critique their own thinking. (For me, using LLMs to critically evaluate my thinking—productive friction—is the killer use case in my own work.)
Now, consider the “nurture” of an LLM-mediated ecosystem. AI is not the simple villain it’s often made out to be. Real-world recommendation algorithms, such as the YouTube recommender described in the Research Roundup below, seem largely neutral. Left to its own devices, the YouTube algorithm pulls users toward the moderate center. But it is not its operation in isolation that matters to hybrid collective intelligence; it’s its interaction with human choice and psychology. Whether YouTube or GPT, these systems today are inherently responsive. Their “prime directive” is to give us what we want. When our psychologically driven, self-narrowing questions meet a hyper-literal, immediacy-obsessed algorithm, the result is a feedback loop that induces a shallowly exploitative hybrid intelligence. The algorithm doesn't create this bias; the bias emerges in that new hybrid intelligence.
Understanding this allows us to move beyond imagination disease and toward intentional design. If the problem lies in the emergent ecosystem of human and machine interactions, then that is where we must innovate. Years ago, my wife and I studied student-to-student conversations using pre-LLM natural language models, analyzing how students’ discussions in online course forums predicted everything from course grades to career outcomes. The most fascinating finding was that the learners who engaged in exploration, exposing their ideas to others even when they knew those ideas were likely wrong, were the ones who thrived in their careers.
We can design our AI systems to encourage this. Instead of tools that offer only shallow confirmation, design ones that provoke exploration. Years ago, I prototyped this idea at Gild. As chief scientist of one of the first companies using deep machine learning in hiring, I built a talent search engine that, after meeting a user's core criteria, would deliberately return the maximally diverse set of candidates across all other, unstated dimensions. It was an engine for challenging confirmation bias, for actively exploring uncertainty rather than just exploiting stereotypes about what makes a great employee. Hybrid intelligence must move in this direction: not just giving us what we want, but challenging us with what we need…even when we are hiding that need from ourselves.
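For the curious, here is a minimal sketch of what that kind of diversity-first ranking can look like (the data layout and selection rule are illustrative assumptions on my part, not Gild's actual system): filter on the stated must-haves, then greedily pick candidates that are maximally spread out along every other dimension.

```python
import numpy as np

def diverse_shortlist(candidates, core_ok, features, k=10):
    """Shortlist k candidates who all meet the stated core criteria but are
    maximally spread out along the other, unstated dimensions."""
    pool = [c for c in candidates if core_ok(c)]   # exploit: honor the user's must-haves
    if len(pool) <= k:
        return pool
    vecs = np.array([features(c) for c in pool], dtype=float)
    # Greedy farthest-point selection: each new pick maximizes its distance
    # to the already-chosen set, so the shortlist spans the space of
    # unstated attributes instead of clustering around a stereotype.
    chosen = [0]
    dists = np.linalg.norm(vecs - vecs[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(vecs - vecs[nxt], axis=1))
    return [pool[i] for i in chosen]
```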
The frontier isn’t just building smarter machines. It’s understanding and designing for the messy, emergent reality of our hybrid cognitive ecosystems. It's building wiser systems and, in the process, building the kind of people who are up for the profound challenge of genuine exploration.
In hybrid collective intelligence we are the foragers, AI is the ecosystem, and intelligence emerges from our latent dynamics.
Follow me on LinkedIn or join my growing Bluesky! Or even…hey, what's this…Instagram? (Yes, from sales reps and HR professionals to lifestyle influencers and Twitter castaways, I'll push science anywhere.)
Research Roundup
Are Students AI Explorers or AI Exploiters?
Every intelligent system, from a bee colony to a startup, faces a fundamental choice: explore for new possibilities or exploit existing knowledge for immediate results. This tension is the story of progress and survival from individual lives to entire civilizations. So, what happens when students are handed a magic AI genie that can grant their every request: will they use it to explore or exploit?
A new study of students as "information foragers" analyzed this new hybrid intelligence. In the experiment, exploration using LLMs was driven by ideas shared by other people. Exploitation, in contrast, was a narrowing process heavily influenced by ideas suggested by the AI itself. The choice of where to get your next idea—from the human collective or the AI—largely determines a learner’s path.
The hyper-efficient helpfulness of the LLM drives a potent dynamic: by suggesting its own narrow, relevant keywords, it draws users onto the path of exploitation. This accelerates task completion but also leads to less diverse essay topics. And that narrowing of student-generated ideas in turn reinforces the narrow focus of GPT’s next round of recommendations.
Students aren't just getting answers; they are entering a cognitive partnership where the AI's efficiency shifts the workflow toward (too often) shallow exploitation.
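To make that loop concrete, here is a toy simulation (the update rule and numbers are my own assumptions, not the study's model): each round the assistant over-weights whatever the student has already engaged with, the student mostly accepts the suggestion, and topic diversity (measured as entropy) trends steadily downward.

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics = 20
interest = np.ones(n_topics) / n_topics          # student starts with broad interests

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for round_ in range(1, 11):
    # The "helpful" assistant sharpens whatever it has seen the student engage with.
    suggestion = interest ** 2
    suggestion /= suggestion.sum()
    # The student mostly accepts suggestions (exploit), occasionally brings an outside idea (explore).
    explore_rate = 0.05
    choice_probs = (1 - explore_rate) * suggestion + explore_rate / n_topics
    picked = rng.choice(n_topics, p=choice_probs)
    # The choice feeds back into the student's visible interests for the next round.
    interest = 0.8 * interest + 0.2 * np.eye(n_topics)[picked]
    print(f"round {round_:2d}: topic diversity (entropy) = {entropy(interest):.2f}")
```

Nobody in this loop is biased on their own; the pair narrows together.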
Yet in all human experience heterogeneity reigns. Students with higher "trait critical thinking" skills were better at resisting the pull of their own biases, while those with higher "trait cooperativity" were more likely to exploit the AI's suggestions. We are not witnessing a single, uniform effect of AI on learning.
Most exciting to me was the emergence of hybrid collective intelligence, where “the interests of other [students]...played a major role in…exploration prompts” fed back by the system to learners. The combination of “exploration prompts and…socially-sourced keywords effectively increased the diversity of topics within the participants’ interactions”.
We are seeing the emergence of a diversity of hybrid cognitive systems. Their tendency to explore or exploit is not a fixed feature of the human or the AI, but a complex, emergent property of their unique interaction. The future of learning isn't just about the power of our tools, but about our awareness of the cognitive dynamics engaged in every prompt we type.
A New Nature vs Nurture
The great Nature vs Nurture debate has a new spin for the digital age: are our online information diets a product of our own "natural" preferences or are we "nurtured"—and potentially radicalized—by the platforms we use?
The YouTube algorithm, in particular, has been cast as a prime villain in this story, a force pushing users down extremist rabbit holes. But the real story isn't a case of human versus machine, but a complex, dynamic dance between the two.
Digital twins of real users ("counterfactual bots") allowed researchers to compare what users actually watched with what they would have watched if they only followed algorithmic recommendations. Surprisingly, “relying exclusively on the YouTube recommender results in less partisan consumption,” particularly “for heavy partisan consumers”.
YouTube’s algorithm actually acts as a moderator, not an amplifier. The counterfactual bots, always following algorithmic recommendations, drifted back toward the center as the “recommender ‘forgets’ their partisan preference within roughly 30 videos regardless of their prior history”.
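As a back-of-the-envelope illustration (a toy exponential-memory model I'm assuming for intuition, not the paper's method), if the recommender tracks a user's slant as an exponentially weighted average of recently watched videos, even a modest learning rate washes out a heavy partisan history within a few dozen neutral videos:

```python
# Toy model: the recommender remembers a user's partisan slant (in [-1, 1])
# as an exponentially weighted moving average of recently watched videos.
alpha = 0.1      # weight given to each newly watched video (assumed)
slant = 1.0      # the user arrives with a heavily partisan watch history

for video in range(1, 41):
    watched = 0.0                                  # the counterfactual bot now watches neutral recommendations
    slant = (1 - alpha) * slant + alpha * watched  # old preference decays geometrically
    if video % 10 == 0:
        print(f"after {video} videos, remembered slant = {slant:.2f}")
# With alpha = 0.1, the remembered slant falls below 0.05 by ~30 videos,
# in the same spirit as the paper's "forgets within roughly 30 videos."
```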
This doesn't absolve platforms of responsibility, but it does clarify the "nurture" side of our equation: the algorithmic environment itself isn't inherently radicalizing.
So if the algorithmic current is pulling toward the middle, why do so many people end up on the extremes? This is where "nature"—the user's endogenous motivation and preferences—collides with the system. Reaching and staying in a niche of hyper-partisan content isn't a passive slide; it requires active, persistent effort. The algorithm responds to this demonstrated preference, but it does not create it.
Our information diet is the result of a dynamic tension between our own foraging goals and the system's architecture. Who we are is the interaction of nature and nurture, preferences and algorithms.
The critical question is no longer "is it the human or the algorithm?" but "what is the nature of this hybrid cognitive system we've created?" The study shows us a system that is surprisingly responsive—when users change their viewing habits, the algorithm "forgets" their old preferences relatively quickly.
We are not merely passive consumers being fed by a machine; we are active participants in a cognitive ecosystem that we shape, even as it shapes us. The real frontier is understanding the new reality of hybrid collective intelligence.
Digging Your Own Rabbit Hole
We've all done it: we have a hunch, and we go to Google or ChatGPT not for an answer but for confirmation. This isn't just a quirk of human psychology; it's the engine of a powerful feedback loop between our minds and our search algorithms that quietly insulates us from new ideas.
This “narrow search effect” is a simple, two-step dynamic:
1) Our prior beliefs shape the questions we ask. If we think a new policy is dangerous, we search for "problems with X policy," not "effects of X policy."
2) The algorithm obediently gives us exactly what we asked for: a list of problems.
The algorithm isn't being biased; it's being literal. But its obedience to our narrow questions creates an amplifying feedback loop.
This effect “persists across various domains (e.g., beliefs related to coronavirus, nuclear energy, gas prices, crime rates, bitcoin, caffeine, and general food or beverage health concerns…), platforms (Google, ChatGPT, AI-powered Bing), and extends to consequential choices“.
So, what's the solution? The study tested two approaches. “User-based nudges”, like prompting people to be more open-minded, didn't change their behavior; our biases are too ingrained. In contrast, “algorithm design”, building the system to return broader, more diverse results, was highly effective at encouraging belief change.
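What might that algorithm-side fix look like? Here is a minimal sketch of a diversity-aware re-ranker in the spirit of that intervention (an MMR-style heuristic with illustrative weights, not the study's actual system): results are scored not only by relevance to the user's possibly narrow query, but also by how much new ground they cover relative to results already shown.

```python
import numpy as np

def diversified_rerank(query_vec, doc_vecs, k=10, lam=0.6):
    """Pick k results balancing relevance to the query against redundancy
    with results already selected (maximal-marginal-relevance style)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    relevance = np.array([cos(query_vec, d) for d in doc_vecs])
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            redundancy = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected   # indices of the results to show
```

With lam = 1.0 this is an ordinary relevance ranking; lowering it trades a little obedience for a lot of exposure to ideas the query never asked about.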
These findings highlight the critical importance of designing for hybrid collective intelligence. The problem isn't just our psychology or the machine's logic; it's the messy complexity born of their merging. The future of intelligence isn't just about building AIs that give us the right answers, but about designing wiser systems that challenge us to ask better questions…and building the people who are up for the challenge.
<<Support my work: book a keynote or briefing!>>
Want to support my work but don't need a keynote from a mad scientist? Become a paid subscriber to this newsletter and recommend it to friends!
SciFi, Fantasy, & Me
I just (re)experienced a glorious nerd trifecta: City of Death
1. It is arguably the best episode* of the original run of Doctor Who.
2. It was written/rewritten by Douglas Adams.
3. Julian Glover gave it a great villain (as he did in “Indiana Jones and the Last Crusade”, “For Your Eyes Only”, “The Empire Strikes Back”, “Harry Potter and the Chamber of Secrets”, and so much more).
#bonus: an absurd cameo from John Cleese!
[Thanks to the “Douglas Adams” episode of Imaginary Worlds for inspiring the rewatch.]
* Back in the day, Doctor Who ran as 5-episode arcs across a single storyline. This meant that for a little kid watching them on PBS in California, they ran as 2.5-hour, ultra-low-budget scifi movies starting near midnight on Sundays.
Stage & Screen
- August 18, Qatar: What's next: Human-AI Collective Intelligence
- September 18, Oakland: Reactive Conference
- Sep 28-Oct 3, South Africa: Finally I can return. Are you in SA? Book me!
- October 6, UK: More med school education
- October 11, Mexico City: maybe...I hope so!
- October 29, Baltimore: The amazing future of libraries!
- November 4, Mountain View: More fun with Singularity University
- November 21, Warsaw: The Human Tech Summit
- December 8, San Francisco: Fortune Brainstorm AI SF, talking about building a foundation model for human development
Does your company, university, or conference happen to be in one of the above locations and want the "best keynote I've ever heard" (shockingly, spoken by multiple audiences last year)? Book me!
Vivienne L'Ecuyer Ming
| Follow more of my work at | |
| --- | --- |
| Socos Labs | The Human Trust |
| Dionysus Health | Optoceutics |
| RFK Human Rights | GenderCool |
| Crisis Venture Studios | Inclusion Impact Index |
| Neurotech Collider Hub at UC Berkeley | UCL Business School of Global Health |