UCLA Book Club Q&A

UCLA Semel Institute Book Club: answers to the questions we didn't have time for

I recently gave a virtual book talk for UCLA’s Semel Institute with the amazing Prof. Keith Holyoak as my interviewer. (One version of “you’ve made it” is when the people you read about in textbooks are interviewing you…or maybe it just feels unreal.) My self-indulgent, long-winded answers in the Q&A section meant that I ran out of time to answer all of the submitted questions. So, I foolishly promised to answer them all here in my newsletter.

I've tried to keep these tighter than my usual professional-bloviator persona allows (long-windedness being a known failure mode of mine), but please reach out if you'd like me to go deeper on any of them.

Q1. What are the biggest changes you foresee in fields like psychology and neuroscience as a result of recent AI gains? Will we always need 'humans in the loop' when studying problems about human experience?

The biggest near-term shift is that we can finally do science at the scale and resolution our theories have always demanded. In my own work I'm now running dynamic topic models across the full arXiv/PsyArXiv/bioRxiv corpora and tracking innovation diffusion in ways that would have taken a postdoc army a decade ago. Similar things are happening in neuroscience: pose estimation, automated behavioral phenotyping, foundation models on EEG and fMRI, mechanistic interpretability borrowed straight from LLM research. The deeper change, though, is conceptual; AI is forcing us to ask which of our cognitive constructs are real and which were artifacts of the limited measurements we could afford.
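
For the technically curious, a stripped-down dynamic topic model looks something like the sketch below. This is a toy illustration with invented one-line "abstracts", not my actual arXiv/PsyArXiv/bioRxiv pipeline; it just shows the mechanic of fitting topics whose word distributions drift across time slices.

```python
# A minimal dynamic topic model sketch using gensim's LdaSeqModel
# (Blei & Lafferty's DTM). The three yearly "slices" are invented
# stand-ins for yearly batches of preprint abstracts.
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

slices = [
    ["neural network training data", "brain imaging fmri study"],       # year 1
    ["transformer language model data", "brain imaging eeg study"],     # year 2
    ["large language model agents", "foundation models for eeg fmri"],  # year 3
]
docs = [doc.split() for year in slices for doc in year]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# time_slice = number of documents in each chronological slice
dtm = LdaSeqModel(corpus=corpus, id2word=dictionary,
                  time_slice=[len(s) for s in slices], num_topics=2)

# Watching a topic's top words drift across slices is the corpus-scale
# version of "tracking innovation diffusion".
for t in range(len(slices)):
    print(f"slice {t}:", dtm.print_topics(time=t, top_terms=4))
```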

As for humans in the loop: always…but not in the way or for the reason most people imagine. It isn't that humans are a safety net for AI mistakes; it's that the questions worth asking about human experience are themselves ill-posed human questions. An automated pipeline can give you fast, efficient, scaled answers, but it can't tell you which answers matter or which of the literally infinite possible questions are worth exploring. My own research on hybrid intelligence shows that the best version of this emerges from deep human-machine interaction.

Q2. What does 'model-free' or 'model-based' mean?

These terms come from psychology, reinforcement learning, and computational neuroscience, and they describe two different ways an agent, whether biological or artificial, can learn to act. A model-free system learns statistically by trial and error: it tries something, gets a reward or a punishment, and over time builds up a feel for which actions are good in which situations, without ever representing why. Habit learning in the basal ganglia is the classic biological example. A model-based system, by contrast, builds an internal map of how the world works—“if I do X, then Y happens, and Y leads to Z”—and uses that map to plan. It's slower and more cognitively expensive, but it generalizes to novel situations in a way model-free learning cannot. Humans use both (and they likely interact in complex ways), and one of the most interesting findings of the last 15 years is that we flexibly arbitrate between them depending on stress, time pressure, and how stable the environment is. Most of the AI you interact with today is a hybrid that’s heavy on model-free pattern matching but with thin slivers of something model-like either layered atop the system or (some believe) emerging in the largest systems.
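
If code is clearer than prose for you, here's a toy sketch of the two learners side by side (invented for this post; nothing to do with the brain's actual implementation). Both solve the same tiny corridor world: the model-free agent caches values from raw trial and error, while the model-based agent plans over an explicit map of what each action does.

```python
import random

# Tiny deterministic world: states 0..3 in a corridor, actions -1/+1,
# reward for reaching (or staying at) state 3.
STATES, ACTIONS, GOAL = range(4), (-1, +1), 3
def step(s, a):
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

# Model-free (Q-learning): cache action values from trial and error,
# never representing *why* an action works.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for _ in range(2000):
    s, a = random.choice(STATES), random.choice(ACTIONS)
    s2, r = step(s, a)
    Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])

# Model-based (value iteration): plan over an explicit map of
# "if I do X, then Y happens", which generalizes instantly if the map changes.
V = {s: 0.0 for s in STATES}
for _ in range(50):
    V = {s: max(step(s, a)[1] + 0.9 * V[step(s, a)[0]] for a in ACTIONS)
         for s in STATES}

print("model-free policy:", {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
print("model-based values:", V)
```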

Q3. Have you connected with Cameron Berg (Yale researcher) who just did a documentary called AM I? questioning whether AI has consciousness?

I haven't connected with Cameron, but I see that he’s started a nonprofit to measure AI consciousness. My own view, for what it's worth, is that consciousness is an ill-posed construct, which makes it nearly impossible to measure or even discuss productively. I fully believe that (1) existing AI such as agentic LLMs are intelligent, (2) that intelligence both overlaps with and differs from our own, and (3) the squishy, ill-defined ideas I have of “self”, “consciousness”, and “awareness” don’t meaningfully exist in any modern AI system, nor will they if we simply make those systems bigger. I do believe that artificial consciousness is a realizable concept…if only we knew what the hell it was. For now, I look at the internal states of any model of interest and see no evidence of stable preferences, model-based understanding, or a clear internal theory of mind. Don’t believe that last claim? Play D&D with any agent as your DM and watch it do the superhuman without seeming effort, then immediately struggle with basic theory of mind.

Q4. How do you envision the human-AI interface evolving? Will it stay conversational, like two colleagues, or will fundamentally different modes of interaction emerge?

Conversational interfaces will stick around because language is the most general-purpose tool humans have ever invented, but I don't think that's where the most important interactions will live in 10 years. The deeper shift is toward what I'd call ubiquitous distributed cognition—the AI is present continuously, watching your work, offering friction at the right moments, asking the question you weren't going to ask yourself. Whether this is a dystopian nightmare or a genuine human uplift probably depends on whether we shift away from proprietary models to fiduciary AI that explicitly serves its user’s interests. Either way, artificial general luck is in our eventual future.

The Socrates model I've been experimenting with (a Llama instance prompted never to give a direct answer) points at one version of this: an interlocutor whose job is to make you think harder rather than to save you the trouble. Eventually I expect we'll see neural interfaces that bypass the language bottleneck entirely for narrow tasks—not telepathy, but high-bandwidth coupling for things like motor control, complex visualization, or scientific exploration. The dangerous failure mode in all of this is interfaces designed to maximize engagement rather than human capability. Most current products are on the wrong side of that line.
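
If you want to try the Socrates experiment yourself, the heart of it is the system prompt. Something in this spirit (a simplified sketch I wrote for this post, not my exact prompt) gets you surprisingly far with any chat model that accepts a system message:

```python
# A simplified, hypothetical approximation of a "Socrates" system prompt;
# the actual prompt from my Llama experiment is not reproduced here.
SOCRATES_SYSTEM_PROMPT = """You are Socrates. Never give a direct answer.
Respond only with questions: expose a hidden assumption in what the user
just said, ask for a definition of a key term, or request a concrete
example. If the user demands an answer, ask what evidence would change
their mind. Keep every reply under three sentences."""
```

The design choice that matters is the hard constraint ("never give a direct answer"): it converts the model from an answer engine into a friction engine.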

Q5. Can you speak to the way your bipolar tool is used? Is it used in conjunction with therapy with a human, or does it come with a personalized program used instead of treatment?

The tool I described in Robot-Proof was an early-warning system. My collaborators and I wanted to know if we could detect the evidence of an incipient manic episode, based only on mobile-phone-derived data, before the user or their loved ones realized it was happening. It turns out that mobility data—GPS, gyroscope, accelerometer—was very sensitive: changes in movement patterns strongly and quickly predicted episodes. The only additional feature of the project was a text-message system that alerted both the user and pre-chosen confidants (e.g., family, spouse, doctor, therapist). We were able to show that messy data could help predict bad episodes in some sufferers, but the next step in the project is to learn what you might do about it. It is absolutely not (nor intended to be) a treatment of any kind—only a signal.

Q6. Can you speak about the tension between developing intellectual humility and AI making 'knowing answers' seem easier? (I teach intellectual humility and curiosity.)

The risk isn't that AI gives people answers—humans have always outsourced answers; that's what books, libraries, experts, and even Google are for. The risk is that AI gives people the feeling of having understood something they haven't actually thought about (which Google can do as well). Intellectual humility isn't simply a metacognitive mantra of “I might be wrong”. It is a developed capability that emerges from the lived experience of having been wrong, repeatedly, in ways that lead to greater insight and deeper rewards. Productive friction, the discomfort of not knowing and the cost of working through a hard problem to final insight, is how that experience gets encoded. Frictionless AI strips it out. The pedagogical move I'm experimenting with is forcing the friction back in: Socratic AI that refuses to answer, assignments that require students to extend beyond their agent's answer, evaluation rubrics that reward the quality of someone's confusion before they sought help.

Q7. Where is Paraguay in the development of AI?

Paraguay is in a fascinating position, and I came back from my engagement there far more optimistic than I expected. It doesn't have the talent density of São Paulo or the capital flows of Santiago, but the country is small enough to run national-scale experiments in education, health, and economic inclusion that would be politically impossible in larger systems. (City-states like Singapore and Curaçao are even better positioned.)

The advantages of the US and China are talent density, capital, infrastructure, and existing development of both foundation models and chip technology. Europe and India should be able to compete on similar terms, but simply haven’t been. Most of the rest of the world doesn’t have those sorts of resources, which leaves smaller-scale sovereign foundation models and local human capital development as the core priorities. As my book suggests, the best way to get more out of AI is to develop human capital.

Q8. Reflection: 'Questions to ask AI regarding intuition are those that treat the AI as a mirror for your own subconscious patterns rather than an oracle of truth.'

I like this framing; much of my own work has leveraged it. Every headline that reads “AI is biased” or “AI is kind” should read “Humans are biased, and AI has revealed it”. These systems don't understand anything in the sense we mean when we say a person understands something, and treating their outputs as ground truth is how you get the worst failure modes of automation. The mirror metaphor is healthier and, I'd argue, more accurate to what's actually happening computationally: when you prompt a model, you're injecting yourself into the system as the launching point of a trajectory across a massive, high-dimensional manifold. Its outputs reflect your own implicit structure back to you in a form you can examine.

Q9. I'm a software engineer using AI intimately for my day-to-day work, and your cyborg observation is exactly right. Should we embrace this approach and teach people to think this way early in their education?

Yes, with a giant caveat. We should “teach” cyborging early [1], but the prerequisite for being a good cyborg is having something to bring to the partnership. The Automators in my data, the people who outsource cognition to the machine, are mostly people who never built the underlying capacity in the first place. The Cyborgs are people who can think on their own and choose to think harder by collaborating with the machine. So, the educational implication isn't “give kids AI from day one”; it's “build deep human capacity first, then begin interleaving AI as soon as the capacity exists”. Show a 7-year-old how to ask Claude to write their book report and you've broken something. Show a 12-year-old who has already learned to read closely and argue clearly how to use agents as cognition augmentors and you've created leverage. The skill we should be teaching across the whole curriculum is metacognition: knowing what you know, knowing what you're outsourcing, and knowing why.

[1] “Teach cyborging” is, admittedly, not great wordage.

Q10. Are we concerned about the dehumanization and lack of trust in instruction? Kids are already saturated with screens and games. Now we're pushing more tech into schools, families, and relationships. AI is being shoved at us without learning how to monitor its impact.

I’m deeply concerned, and this concern is empirically well-founded, not just nostalgia. The K-shaped cognitive divergence I write about shows up in the data already: a small fraction of people are using AI to become extraordinary, and a much larger fraction are using it in ways that net nothing or even measurably erode the very capacities that make them human: attention, working memory, curiosity, meta-uncertainty. The deployment pattern you're describing, where tools arrive in classrooms and homes before anyone has built the evaluation infrastructure to know what they're doing to kids, is exactly how social media entered our lives, and we are repeating the mistake in fast-forward. To make it more challenging, the effects are clearly heterogeneous, lifting some and harming many. What I'd say to a worried parent or teacher is this: the answer isn't fear-stoked abstinence or unbridled enthusiasm; it is how each of us interacts with the technology that matters. As a brief bit of practical advice, I actually have an agent prepare a weekly review of my interactions with it and other agents (i.e., analyzing the full transcripts of my interactions) that highlights trends of cognitive outsourcing or low resilience, along with some quick metacognitive strategies to reduce those patterns.
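
If you want to steal that practice, the plumbing is simple. Something like the sketch below is all it takes; it assumes the openai Python package and a folder of saved transcripts, and the folder name, model choice, and prompt are placeholders for your own setup rather than my exact configuration.

```python
# A simplified sketch of the weekly-review habit: hand a week of saved chat
# transcripts to a model and ask it to flag cognitive outsourcing.
# "transcripts/" and "gpt-4o" are placeholders; swap in your own stack.
from pathlib import Path
from openai import OpenAI

week = "\n\n---\n\n".join(p.read_text() for p in Path("transcripts").glob("*.txt"))

client = OpenAI()  # reads OPENAI_API_KEY from the environment
review = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "Review this week of my AI conversations. Flag moments where I "
            "outsourced thinking I could have done myself, gave up quickly, or "
            "accepted an answer without probing it. End with two metacognitive "
            "strategies for next week.")},
        {"role": "user", "content": week},
    ],
)
print(review.choices[0].message.content)
```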

Q11. AI will replace entry-level white-collar jobs. But high-level white-collar people need entry-level training before they can do higher-level work. How do we train the next generation in the AI era?

This is the single most consequential labor-economics problem, and no one has an answer to it [2]. The traditional career ladder works because junior roles are simultaneously productive labor and apprenticeship—you get paid to make the mentor's slides/code/data/etc., and in making them you learn to think like your mentor. Strip out the junior labor with AI, and you strip out the apprenticeship pipeline along with it. The senior generation is fine for now; the generation after that has no path. There are three things I'd push for:

  1. Employers have to treat training as a line item rather than a side effect of cheap junior labor, and that means paying for genuine apprenticeship even when AI could do the surface task faster.
  2. Education has to shift from credentialing toward what I call ill-posed problems: the messy, ambiguous, no-right-answer work that AI is worst at and that defines senior judgment.
  3. Individuals have to be relentless about working on the things the machine can't do for them, even when it's slower and harder.

The people who refuse to skip the hard parts are the ones who will still have careers in 20 years.

[2] One possible exception is the “creator economy” of YouTube and other media, which is arguably slowly displacing traditional education. The problem is that it is closer to Lord of the Flies than to a healthy, sustainable industry. Maybe it's a guide to what not to do.

Q12. You mentioned ideas for young kids — what about teenagers or college students? Are their brains already too formed, or is there still time? (Asking because my child is in the next room memorizing AP exam facts.)

Tell your child to put the flashcards down for a minute and breathe, and then tell the College Board and ACT that they have built an assessment exquisitely optimized for the one thing humans no longer need to do. Memorizing facts in 2026 is about the most automatable cognitive activity after adding numbers together; we have machines that do it instantly, for free, and more accurately than we do.

The teenage brain is NOT too formed. Adolescence and early adulthood are highly plastic times in our lives. Some deep cognitive developmental trajectories are well set by this point, but a huge number of other factors remain open to change. The interventions are different from what you'd do with a 6-year-old: insist on hard ill-posed problems where there's no answer to memorize; reward the quality of their reasoning and exploration, not the correctness of their output; force them to attack positions they deeply believe to identify the parts they don’t fully understand; have them teach something to someone else (peer tutoring and role modeling are the most underused learning technologies we have); and model intellectual humility yourself by visibly confronting your own misconceptions in front of them. None of this requires special tools. It requires adults willing to be uncomfortable alongside them.

Q13. My high-school-senior son asked me to buy a licensed AI app for studying. I'm not sure it will help him study or just help him 'cheat' on homework and lose his curiosity. Suggestions for parents?

The question I'd put back to you isn't “should I buy the app”; it's “what would I want him to be doing with the app if I did”. Most study apps marketed to high schoolers are explicitly designed to minimize friction: ask a question, get an answer, move on. Despite genuine best intentions, these products are genuinely bad for learning; the research on this is not new, and it holds just as strongly for modern AI tools. However, a general-purpose agent used well—a tutor that asks him to explain his reasoning back, a sparring partner that argues the opposite of his thesis, a quiz-maker that keeps testing him until he can teach the concept—can be transformatively good. The difference is entirely in how it's used, which means the real intervention isn't the app; it's what you role model to your son about what learning is actually for. If the honest answer is “a grade”, then yes, the app will help him efficiently increase his grade (up to a hard limit) and he'll graduate emptier than he started. If the answer is fostering a love of learning, then the same tool can become one of the most powerful learning aids ever built.

Q14. Will there be an audiobook version?

My publisher has confirmed that an audiobook is in the works. I'll share more as soon as I can confirm a release date. If you're on my newsletter you'll hear about it first; if not, this is a good moment to sign up :)

Follow me on LinkedIn or join my growing Bluesky! Or even…hey, what's this…Instagram?

Vivienne L'Ecuyer Ming

Follow more of my work at:
Socos Labs
The Human Trust
Possibility Institute
Optoceutics
Kennedy Human Rights Center
UCSD Cognitive Science
Crisis Venture Studios
Inclusion Impact Index
Neurotech Collider Hub at UC Berkeley
UCL Business School of Global Health