AI in the UK
This week we look at AI policy recommendations for UK Parliament.
Support my work: book a keynote or briefing!
Want to support my work but don't need a keynote from a mad scientist? Become a paid subscriber to this newsletter and recommend it to friends!
Research Roundup
There's no research roundup this week. Instead, you can find a working draft I am preparing on what AI policy should be for the UK's new Labour government. Read the whole draft below. (I'm sure there will be much cutting and polishing before it goes live.)
Weekly Indulgence
Stage & Screen
- August 14-15, Napa: Mandrake Capital Partners
- August 23, virtual: more fun with BCG
- September 8, Athens: ESOMAR
- September 25, SF: BCG Australia
- September 26, Wyoming: Tetons Leadership Counsel
- September 27, NYC: well...the teaser is right above this.
Upcoming this Fall (tentative)
- October 1-4, Manila & Singapore: Hyper Island and more! (Book me!!!)
- October 24, Toronto: Metropolitan University
- October 29, Rome or Rio: We'll all know which soon enough
Find more upcoming talks, interviews, and other events on my Events Page.
If your company, university, or conference just happens to be in one of the above locations and wants the "best keynote I've ever heard" (shockingly spoken by multiple audiences last year), book me!
SciFi, Fantasy, & Me
What can I say, Adrian Tchaikovsky writes a whole bunch of books, and more than a few are bound to show up here. Cage of Souls is very reminiscent of Gene Wolfe's novels: time itself feels worn out in this dying remnant of civilization clinging to a dying world. (As an RPG nerd, it also reminds me of Numenera: fallen civilizations built atop fallen civilizations built atop...) A fascinating and terribly dark bit of world building.
Next up from Adrian: Alien Clay.
Excerpt: UK Policy and Me
When it comes to artificial intelligence (AI) policy, I have one recommendation for both the White House and 10 Downing St: no more lazy AI. Do not allow our astounding strengths in machine learning talent and computing resources to be squandered on efficiency-obsessed products meant only to make life easy but which add limited value to the world. Already, large language models (LLMs) such as GPT, Claude, and Gemini are increasingly applied to the banal work we cannot be bothered to do ourselves. This will achieve only limited, if any, productivity gains for the UK and US economies. Instead of AI that does the easy things for us, invest in AI that makes us better when life is hard.
Many leading companies and investors in the space believe that AI's value will be found in freeing us from all of the emails, tweets, and images we must both consume and produce every day. I recently sat in on an elite company's training session with its executives and top managers. At the top of their list of AI use cases were (1) save time by having GPT read and summarize your emails and (2) save time by having GPT write your emails. I'm fairly sure this is a scene from the movie Brazil, where Jonathan Pryce deals with his overflowing inbox by jamming the incoming pneumatic tube into the outgoing tube. I'm also quite certain we can cut out the middleman here by training employees to stop writing so many useless messages.
AI isn't free. Every training run and API call comes at a substantial energy cost. Given that limited budget, do we truly wish to spend it writing social posts and business memos instead of diagnosing cancers or forecasting hurricanes?
I would never allow an LLM to write for me and rob me of my own voice or the chance to understand my own thinking more deeply. Instead, I tell it, "You are my worst enemy, my nemesis. Find every flaw in my new research and explain to me why I'm wrong." We shouldn't want AI to make our work easier; we need AI to make it harder in the ways that make us better. AI policy should support this under-funded and under-explored domain of augmenting our own capabilities.
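For the curious, this "nemesis" prompt is nothing exotic. Here is a minimal sketch of what it might look like against a chat-style LLM API; it assumes the OpenAI Python SDK, and the model name and draft_text placeholder are illustrative only, not a prescription:

```python
# Minimal sketch of the adversarial "nemesis" prompt described above.
# Assumes the OpenAI Python SDK; any chat-style LLM API works the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft_text = "...your new research argument or draft goes here..."  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are my worst enemy, my nemesis. Find every flaw in my "
                "new research and explain to me why I'm wrong."
            ),
        },
        {"role": "user", "content": draft_text},
    ],
)

# Productive friction: the model critiques the work; it never writes it for you.
print(response.choices[0].message.content)
```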
Beyond this broad policy measure, the UK and US must focus on several specific, immediate issues. For example, the biggest trend I've noticed over the last couple of years is that every startup (BioTech, EdTech, HRtech, AItech) is pitching their unique and proprietary dataset as defensible IP. Because no one else has access to their magic data, no one else can build the AI driving their company. When I hear this argument, I hear, "It doesn't matter how mediocre our product is because we have a monopoly on the data."
The market for AI goods and services has become wildly dysfunctional because we have conflated the value of AI with the value of data. Too many new startups in the UK and US are building value based on monopolistic hoarding of data, protecting themselves from true competition and robbing the world of the full value of AI. I believe in well-regulated markets: let the best tools and services win. The endless squabbling over data ownership (a classic market coordination failure) is slowing both innovation and economic development, all while compromising civil rights by inducing companies to abuse customers' data.
The potential of AI to revolutionize fields like healthcare, education, and scientific research is undeniable. But this frustrating paradox holds us back: the very data needed to fuel these advancements is often locked away, guarded by businesses and academics fearful of losing control and competitive advantage. I support policy initiatives, such as data trusts, that offer a framework for innovation and collaboration while taking arguments and fears over data ownership off the table. The best product, rather than the most aggressive monopoly, wins the market.
In order to support all of that data and computation, we also need to rethink the world's AI infrastructure. Too much of it is currently controlled by only a handful of companies and, largely, two countries. The EU, India, and UK, all possessed of exceptional human capital and engineering resources, should invest in expanding their AI infrastructure. Startups and academic researchers have very few choices of where to build their tools or conduct their research. The UK, for example, could turn the UK Biobank, already one of the great public goods of the entire world, into a resource of profound human and economic development if it were transformed from a repository of biomedical data into an AI-enabled living data trust for research and development.
Of course, any AI policy reaching so deeply into the lives of every UK citizen must take civil rights more seriously than just trusting smart people to be good. As I have previously written, "Every day, across some of the deepest, most private parts of our lives, algorithms are making decisions in milliseconds that no face-to-face human could ever justify." Policymakers must take greater ownership of this profound reality. My research applying AI and epigenetics to diagnose postpartum depression makes it clear that no mother should have to choose between a lifesaving medical test and the privacy of her own body. But civil rights can't exist in a world of hidden calculations. Policymakers must guarantee access to AI working solely on each citizen's behalf.
The other great AI policy issue of our day, and for the foreseeable future, is jobs. The relationship between AI and labor is complicated. I agree with many labor economists that AI will increase demand for labor, but the important question is, "For whom?" Displacement and deprofessionalization are real, and lazy reskilling programs trap workers in short-term jobs without a future. Further complicating this policy domain, the future of work will be very different across society.
Forget the illusion of training everyone into "AI jobs". Most existing workers around the world will simply need a living wage and the dignity of concrete work. No more false promises of transitioning the entire existing workforce into a new AI-driven economy. Like all of the previous promises, we will fail to keep them and drive our betrayed neighbors further into the arms of demagogues. If this means we take some of the huge margins achievable from AI and simply rebuild the aging infrastructure of the UK and US, that would be a vastly better and more effective policy than lying to voters about "new jobs in AI".
In contrast to the existing workforce, we must embrace raising the next generation for a world of augmented intelligence. Inflicting lazy AI on students, such as AI tutors that answer all of their questions, will rob most of them of the opportunity to learn and grow. Empowering AI in education and early childhood development doesn't focus on making learning easy; it challenges students. The UK and US need a generation of students who explore and ask good questions. A generation that is resilient, pushing through failure to find success. A generation that invests hard work in themselves because their lives have taught them that hard work actually pays off. AI today can answer all of our well-posed questions but can give us none of these qualities.
Rather than using AI to make my life easier, I use AI to make myself better, a practice I call productive friction. Good tech policy starts with a simple question: Am I better when I'm using AI? Great tech policy asks an even more important one: Am I better than where I started when I turn it off again? Lazy AI will never pass that second test. Instead, the UK must invest its world-class talent in the astonishing capabilities of humanity. AI is just a paintbrush; we are the artist.
Vivienne L'Ecuyer Ming
| Follow more of my work at | |
|---|---|
| Socos Labs | The Human Trust |
| Dionysus Health | Optoceutics |
| RFK Human Rights | UCL Business School of Global Health |
| Crisis Venture Studios | Inclusion Impact Index |
| Neurotech Collider Hub at UC Berkeley | GenderCool |