Augmented or Diminished?

Why do I call myself radically moderate? Not because I believe in moderate policies—many of which are terrible. I’m a moderate because too much of any one principle or policy is invariably destructive, and I’m radical because I embrace tension. There is never one magical solution to our problems, and so I embrace the “radical” notion that we should celebrate dissonance and find comfort in discomfort. This brings me to the NIH.

Anyone who's been foolish enough to read my newsletter, posts, and research knows that I am disgusted with the risk-averse policies of the NIH…and the EU, and NSF…and for that matter most VCs. The long-held and well-researched tendency of the NIH to fund only “sure thing” research has been holding back innovation and the broader growth in human capital. The devastatingly wrong interpretation of my beliefs is that the NIH should be defunded and its support of basic science turned into a patronage system for either Elon Musk or Donald Trump.

In fact, the changes at the NIH and other funding agencies will have exactly the opposite effect on innovation and risk-taking from what I’ve advocated for so long. My research, and that of others, shows that funding agencies and private investors prioritize established researchers and less risky projects during times of budget austerity. So I built a model of the likely economic and human capital effects of the cuts to the NIH and other agencies. The results are devastating.

Looking first only at existing policies (and assuming that they are not reversed), the estimated cumulative 10-year GDP losses from even a moderate 10% cut range from at least $30 billion to $100 billion. Once we factor in the reduction in human capital development—STEM training, startup and patent production, and slowed progress in health and education—the numbers become staggering: $330 billion to $1.1 trillion or more when considering more realistic ROI and multiplier effects and longer-term impacts.
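To make the arithmetic behind those ranges concrete, here is a back-of-the-envelope sketch—NOT my actual model, which is far richer. The annual NIH budget figure is a rough public approximation, and the return-per-dollar (ROI) multipliers are illustrative assumptions chosen only to span the ranges quoted above:

```python
def cumulative_gdp_loss(annual_budget, cut_fraction, years, roi):
    """Cumulative GDP loss from forgone research funding.

    Hypothetical back-of-the-envelope model: assumes a constant
    annual cut and a constant economic return per forgone dollar.
    """
    return annual_budget * cut_fraction * years * roi

NIH_BUDGET = 47e9  # rough annual NIH budget in dollars (assumption)

# ROI values are illustrative, chosen to reproduce the quoted ranges
scenarios = [
    ("direct output only, low ROI", 0.7),
    ("direct output only, high ROI", 2.1),
    ("with human-capital & multiplier effects, low", 7.0),
    ("with human-capital & multiplier effects, high", 23.5),
]

for label, roi in scenarios:
    loss = cumulative_gdp_loss(NIH_BUDGET, 0.10, 10, roi)
    print(f"{label}: ~${loss / 1e9:,.0f}B over 10 years")
```

A 10% cut to a ~$47B budget removes ~$47B of funding over a decade; everything else is what multiplier you believe research dollars carry. Even the most conservative assumption here leaves tens of billions on the table.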

And all of this concerns only the cuts to science and innovation in the US. I most certainly have other policy disagreements with the current administration (and let’s be honest, disagreements that go far deeper than “policy”—disagreements about what it means to be a moral person), but here is a point of hard-numbers economic projection. If you hoped to see the US or world economy grow and to reap the benefits, it is time to wake up and become a “radical” moderate.

Follow me on LinkedIn or join my growing Bluesky!

Research Roundup

MEGaMind

I’ve been underwhelmed by so many applications of AI. One lazy “time-saving” LLM job after another. But there is one domain where I’m truly excited: AIs specifically designed to accelerate exploration and innovation.

Here’s one just for brains. “BrainGPT, an LLM…tuned on the neuroscience literature” can “forecast novel results” for neuroscience experiments better than human experts. Importantly, the model preserved a very desirable quality of human experts: “when [they] indicated high confidence in their predictions, their responses were more likely to be correct”.

This has been the (unmet) promise of AI for so long, but it has required setting aside shallow, efficiency-obsessed tools meant to make work easier. These new tools challenge us to go deeper.

The Statistics of Tea Leaves

I was just about to share a paper claiming to show that off-the-shelf LLMs outperform financial analysts in predicting earnings changes. But I just discovered that the paper has been pulled from both SSRN and arXiv.

I don’t know the circumstances behind the withdrawal, but whether or not the original findings are valid (I found them quite plausible), this is a perfect illustration of how unreviewed manuscripts can spread quickly online while the withdrawal only surfaces if you go looking for it.

My original conclusion for this post would have been, “The question now is, how are you using these AI-generated insights? Are you simply doing whatever it tells you to do (which, btw, it is telling every other investor on the planet) or is this just the launching point for your human capacity to explore the possibilities?”

I suppose the more general question is “are you simply doing whatever AI/the internet/friends tell you to do?” Go deeper!

Rationalists trained on Reacher

My two other posts this week show AI outperforming neuroscientists and financial analysts in certain tasks. What about the ultimate task: global thermonuclear war? Shall we play a game?

A new study designed “a novel wargame simulation and scoring framework to assess the escalation risks of actions taken by [AIs] in different scenarios”. Across all 5 tested models, “LLMs show forms of escalation and difficult-to-predict escalation patterns”. The “models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons”. When asked to justify themselves, the models provide “worrying justifications based on deterrence and first-strike tactics”.

When cartoonish machismo and choreographed violence play such a large role in popular and political culture, we shouldn’t be surprised that it bleeds into the semantic dynamics of LLMs trained on our fictional wish fulfillment.

<<Support my work: book a keynote or briefing!>>

Want to support my work but don't need a keynote from a mad scientist? Become a paid subscriber to this newsletter and recommend to friends!

SciFi, Fantasy, & Me

The second series of Severance makes me happy.

Here’s what I have downloaded on my phone right now: Hench, Exordia, Those Beyond the Walls (sequel to the enjoyable The Space Between Worlds), The Flight of Silvers, and Shadow Captain (sequel to Revenger). And that list doesn’t include pre-orders for Queen Demon, When the Moon Hits Your Eye, and The Book That Held Her Heart. Fingers crossed that they all appear here again in the form of heartfelt recommendations. (I mean…how could a Scalzi novel let me down?)

Stage & Screen

  • March 21, Diablo Valley: I'm talking entrepreneurship right in my own backyard.
  • March 27, Lawrence Berkeley Labs: Scientific innovation and the value of thinking differently.
  • May 7, Chicago: Innovation, Collective Intelligence, and the Information-Exploration Paradox
  • May 8, Porto: Talking about entrepreneurship at the SIM conference in Portugal
  • May 14, London: it's time for my semi-annual lecture at UCL.
  • June 9, Philadelphia: "How to Robot-Proof Your Kids"
  • June 12, SF: Golden Angels
  • June 18, Cannes: Cannes Lions
  • Late June, South Africa: Finally I can return. Are you in SA? Book me!
  • October, UK: More med school education

Does your company, university, or conference just happen to be in one of the above locations? Want the "best keynote I've ever heard" (shockingly spoken by multiple audiences last year)? Reach out!


Vivienne L'Ecuyer Ming

Follow more of my work at
Socos Labs The Human Trust
Dionysus Health Optoceutics
RFK Human Rights GenderCool
Crisis Venture Studios Inclusion Impact Index
Neurotech Collider Hub at UC Berkeley UCL Business School of Global Health