Robo-Research, Human Heart
This week:
- Will AI accelerate science?
- Can it foster altruism and win-win compromise?
Research Roundup
Robo-Research
AI isn’t “better” than humans, but in an increasing number of highly complex yet still routine roles, LLMs and augmented RL systems are forcing us to confront where human labor truly shines.
Ask GPT-4 to analyze corporate financial statements and it “outperforms financial analysts in its ability to predict earnings changes”, performing particularly well “in situations when the analysts tend to struggle”. Trading strategies based on its analyses show higher (alpha) returns than alternative strategies. (The authors tout not needing any “industry-specific information”, but they also standardize the information; this sort of “zero-shot/zero-parameter…except for all of the transformative preprocessing” is always worrying.)
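For the curious, here is roughly what that setup looks like in code. This is a minimal sketch of my reading of the paper, not the authors’ pipeline; `call_llm`, the prompt wording, and the naive long/short rule are hypothetical stand-ins for whatever model and strategy you would actually use.

```python
# Minimal sketch (not the authors' code): feed a *standardized* financial
# statement to an LLM and ask only for the direction of next-period earnings.
# `call_llm` is a hypothetical stand-in for your chat-completion client.

def standardize(statement: dict) -> str:
    """Flatten line items into a fixed, anonymized template.
    This is exactly the 'zero-shot...except for the preprocessing' caveat above."""
    return "\n".join(f"{item}: {value}" for item, value in sorted(statement.items()))

def predict_earnings_direction(statement: dict, call_llm) -> str:
    prompt = (
        "Acting as a financial analyst, read the standardized, anonymized "
        "financial statement below and predict whether earnings will increase "
        "or decrease next period. Answer 'increase' or 'decrease' plus a "
        "one-sentence rationale.\n\n" + standardize(statement)
    )
    return call_llm(prompt).strip().lower()

def toy_long_short(predictions: dict) -> dict:
    """Naive 'trading strategy' on the predictions: equal-weight long the
    predicted increases, short the predicted decreases."""
    longs = [t for t, p in predictions.items() if p.startswith("increase")]
    shorts = [t for t, p in predictions.items() if p.startswith("decrease")]
    weights = {t: 1 / len(longs) for t in longs} if longs else {}
    weights.update({t: -1 / len(shorts) for t in shorts})
    return weights
```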
A wildly different domain is research chemistry, but here too the “routine” tasks might soon be automated. A system for “closed-loop experiments with physics-based feature selection and supervised learning” accelerated hypothesis-space exploration, combining “interpretable supervised learning models and physics-based features with closed-loop discovery processes [to] rapidly provide fundamental chemical insights”.
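If “closed-loop” sounds abstract, here is a toy version of the general pattern (my own illustration under simple assumptions, not the authors’ system): an interpretable model over physics-based features proposes the next experiment, the result is added to the data, and the loop repeats. `run_experiment` is a hypothetical stand-in for the automated lab.

```python
# Toy closed-loop discovery sketch: sparse, interpretable model + greedy acquisition.
import numpy as np
from sklearn.linear_model import Lasso

def closed_loop(candidates: np.ndarray, run_experiment, n_rounds: int = 20):
    """candidates: rows of physics-based features for untested conditions."""
    rng = np.random.default_rng(0)
    tested = list(rng.choice(len(candidates), size=5, replace=False))  # seed experiments
    y = [run_experiment(candidates[i]) for i in tested]

    for _ in range(n_rounds):
        model = Lasso(alpha=0.1).fit(candidates[tested], y)    # sparse => inspectable
        untested = [i for i in range(len(candidates)) if i not in tested]
        scores = model.predict(candidates[untested])
        nxt = untested[int(np.argmax(scores))]                 # greedy: most promising next
        tested.append(nxt)
        y.append(run_experiment(candidates[nxt]))

    return model, tested, y   # the sparse coefficients double as "chemical insight"
```

The design choice worth noticing: the interpretable model is not just a predictor, it is the product. Its surviving coefficients are the “fundamental chemical insights” the loop is supposed to surface.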
These tools that directly augment discovery, rather than simply automating the mundane, must be the future of AI. Don’t let them sell you a lazy substitute!
Altruistic Cocktails & Democratic AI
What happens when altruism meets computational science? Discoveries about ourselves!
For example, if we take data from 100 different situations in which third parties had the chance to help or punish participants in an economic game, a combination of factor analysis and clustering reveals the broad forms of “altruism”:
- “justice warriors” (35%): “high probability to intervene, especially when the transgressor–victim inequality was high…cost was relatively low”
- “pragmatic helpers” (18%): “high probability to intervene, but were insensitive to inequality or cost, and preferred helping over punishment”
- “rational moralists” (47%): “barely intervened unless their intervention cost was minimal”
Given other research findings, I strongly suspect the pragmatic helpers may not punish often, but when they do…well, just don’t piss off a pragmatic helper.
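For readers who like to see the machinery, here is a toy version of that analysis pipeline (illustrative only, not the study’s code, and the fake data is just to make it runnable): reduce each participant’s decisions across scenarios to a few latent factors, then cluster participants into types.

```python
# Toy factor-analysis-plus-clustering sketch of the approach described above.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

def altruism_types(decisions: np.ndarray, n_factors: int = 3, n_types: int = 3):
    """decisions: participants x scenarios matrix (e.g., 1 = intervened, 0 = did not)."""
    factors = FactorAnalysis(n_components=n_factors, random_state=0)
    loadings = factors.fit_transform(decisions)        # latent "motives" per participant
    labels = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(loadings)
    return loadings, labels                            # e.g., 3 clusters ~ the groups above

# Hypothetical usage with fake data: 500 participants, 100 scenarios.
rng = np.random.default_rng(0)
fake_decisions = (rng.random((500, 100)) > 0.5).astype(float)
_, type_labels = altruism_types(fake_decisions)
```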
In another domain of human prosocial behavior (yes, it truly can be prosocial) we find “Democratic AI”. A reinforcement learning system (RL, think AlphaFold but for public policy) “is used to design a social mechanism that humans prefer by majority”. People played a “game…keep a monetary endowment or to share it with others”, and the shared money generated revenue for the group. Here’s where the AI came in: the money was shared back out according to plans “designed by the AI and…by humans”, and players voted between them. In the end, the democratic AI’s plan “redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote.” AI helps humans reach consensus.
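And a heavily simplified toy of that loop: the real work used deep RL, but the structure is “parameterize a redistribution plan, simulate players’ votes against a baseline, tune the plan to win the majority.” Everything below (the numbers, the random-search optimizer, the equal-split baseline) is my own illustrative stand-in.

```python
# Toy mechanism-design loop: tune a redistribution rule so a majority of
# simulated players prefer its payouts to an equal-split baseline.
import numpy as np

rng = np.random.default_rng(0)
endowments = np.array([2.0, 2.0, 2.0, 10.0])      # unequal starting wealth
contributions = np.array([1.5, 1.5, 1.5, 1.0])    # the richest player free-rides a bit
GROWTH = 1.6                                      # the shared pot grows before payout

def payouts(weights: np.ndarray) -> np.ndarray:
    """Redistribute the grown pot in proportion to softmax(weights) shares."""
    shares = np.exp(weights) / np.exp(weights).sum()
    return (endowments - contributions) + GROWTH * contributions.sum() * shares

def votes_won(candidate: np.ndarray, baseline: np.ndarray) -> int:
    """Each simulated player votes for whichever plan pays them more."""
    return int((payouts(candidate) > payouts(baseline)).sum())

baseline = np.zeros(4)                            # equal split of the pot
best = rng.normal(size=4)
for _ in range(2000):                             # crude random search in place of RL
    trial = best + 0.1 * rng.normal(size=4)
    if votes_won(trial, baseline) >= votes_won(best, baseline):
        best = trial

print(payouts(best), votes_won(best, baseline))   # does the tuned plan win a majority?
```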
Using machines to understand ourselves is why I became a scientist decades ago, and why I still tell everyone that I build cyborgs.
<<Support my work: book a keynote or briefing!>>
Want to support my work but don't need a keynote from a mad scientist? Become a paid subscriber to this newsletter and recommend it to friends!
Weekly Indulgence
It was quite a wonderful surprise to have my undergraduate Alma Mater invite me to join their advisory board. I am more than happy to say, "Hell Yes!"
Stage & Screen
- December 10, NYC: It's that time of year again: RFK Human Rights' Annual Gala. Another year to support our amazing work in defending journalists and civil rights defenders around the world.
- It's sad that I must say this but...we have nothing to do with RFK Jr.
- January, LA: I wish I could say...some day :)
- January, Minnesota: Women in AI...in frozen places.
- January, Toronto: University of Toronto AI Day!
- February, Dublin: A private event but I know we'll do much more
- February, Athens: Medical school education
If your company, university, or conference just happens to be in one of the above locations and wants the "best keynote I've ever heard" (shockingly spoken by multiple audiences last year), get in touch!
SciFi, Fantasy, & Me
I read the Silo novels (Wool, Shift, & Dust) back in 2011+ when they were self-published on Amazon. Now they are an AppleTV series that just started its second season. If you like your scifi with a big dose of wtf mystery, this one is a fun read/listen/watch (take your pick—I did listen→watch).
Vivienne L'Ecuyer Ming
| Follow more of my work at | |
| --- | --- |
| Socos Labs | The Human Trust |
| Dionysus Health | Optoceutics |
| RFK Human Rights | GenderCool |
| Crisis Venture Studios | Inclusion Impact Index |
| Neurotech Collider Hub at UC Berkeley | UCL Business School of Global Health |