The Future of Creativity

You suck, Bing.

Well, that was embarrassing. As a regular reader you know that I was in NYC last week for so many talks, meetings, and bagels. I made the last-minute call to delay the newsletter until this week...but nobody told the automated publishing feature Ghost so kindly provides. So, an empty email came from me to you. That sucks.

I apologize for being both a week late and a newsletter short...of content. Hopefully this week's stories will help make up for it. That said, let's dive into the emerging reality of creativity, centaurs, and cyborgs.

Mad Science Solves...

As an undergrad at UC San Diego in 1999, I programmed a 3-layer backprop neural network that solved XOR. It was my first AI. By the end of that year I had nearly completed my honors thesis project, “A SNoW-based Facial Feature Detector”. That network found pupils and philtrums in face images, feeding into a broader neural network that could distinguish between real and fake smiles. The project not only taught me about multiple approaches to machine learning, but also the psychological research literature on facial expressions (Paul Ekman was a PI on that project) and even how to program—I learned Matlab by building that neural network.
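(A toy aside for the technically curious: a network like that fits in a couple dozen lines of modern Python. The sketch below is a from-scratch NumPy reconstruction, not my original Matlab or the SNoW system; it is only meant to show how little code stood between me and "my first AI".)

import numpy as np

# A toy 3-layer (input -> hidden -> output) backprop net learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # the four XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: gradients of the squared error, chain rule by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # plain gradient descent updates
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]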

From a productivity perspective, my approach was a disaster. I was contributing to a real-time machine vision project within the same year I learned of the existence of machine learning, with no expertise in the domain (facial expressions) or the programming language. All of that ignorance created friction, the bane of productivity acolytes, product designers, and Silicon Valley entrepreneurs. Friction slows everything down and kills engagement. The value-add of nearly every famous startup of the past 3 decades has been to remove friction from people's lives, making it easier to do…well, whatever the VCs gave you money to make easier: cleaning, commuting, connecting, dining, hiring, searching, reading, farting…anything. For those passing quickly or slowly through the world, the enemy is friction.

In 1999, I was nothing but friction, and it was amazing. Because instead of moving shallowly through my project, I went deep. In a year I went from a novice to my first machine learning publication. My work contributed to the earliest development of the science that would go on to become Facebook's facial recognition technology. It also became the foundation for my own projects exploring autism and aiding orphan refugees. In one year I launched an astonishing career by willfully creating friction and using that traction to propel my growth.

Every new wave of AI comes with promises to lift the human experience, but over and over again it is used simply to increase efficiency and reduce friction. The research below from MIT and BCG highlights how even elite workers are seduced into using the cutting edge of AI (LMMs, Large Multimodal Models) simply to reduce friction in their work.

In contrast, I strongly advocate for productive friction: using AI and other tools to intentionally create new barriers and burdens that in turn produce dramatically better work than AI or humans could have produced alone. In productive friction, authors don’t ask GPT to write their essays; they ask it to critique their essays. They ask it to take the role of varied audiences, offering feedback and perspective. “You are a Nobel-prize winning neuroscientist and also my most persistent critic. What flaws do you cite in rejecting my latest work? How would you have corrected them?”
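For the technically inclined, here is a minimal sketch of what critique-instead-of-create prompting can look like. It assumes the OpenAI Python client (v1 style); the model name, file name, and exact wording are placeholders, not a prescription.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

essay = open("my_essay.txt").read()  # a draft you wrote yourself; the model never writes it for you

critic_prompt = (
    "You are a Nobel-prize winning neuroscientist and also my most persistent critic. "
    "What flaws do you cite in rejecting my latest work? How would you have corrected them?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you actually have access to
    messages=[
        {"role": "system", "content": critic_prompt},
        {"role": "user", "content": essay},
    ],
)

# The output is feedback to argue with, not prose to paste.
print(response.choices[0].message.content)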

I don’t ask Bard or GPT to do my work. I use them to explore my work—to help me understand it better than I could by myself. I don’t use generative image models in place of artists; I use them to explore my own internal aesthetic, my daydreams and data fevers, and find a new visual language with which to communicate. Every time I touch an LMM, I intentionally create new frictions in my life. But I’m not interested in GPT’s answers to my questions. I want to understand my own.

I’m actively working on a complete draft of “Robot-Proof” right now. The new chapter, “The Future of Creativity”, dives deep into the practice of productive friction.

Stage & Screen

While speaking has mostly wrapped for the year, some reverberations from Dr. Ming's recent international talks continue to ring! (See below for some info about her recent events in Aberdeen and virtually for the Philippines!)

People-centric approach needed in adoption of AI — experts - BusinessWorld Online
By Miguel Hanz L. Antivola, Reporter Philippine organizations should prioritize a people-centric approach when adopting artificial intelligence (AI) and other emerging technologies for growth, according to experts. “The biggest challenge in [harnessing] new technology is understanding human beings and how humans make choices,” Vivienne L’Ecuyer Ming, neuroscientist and founder of policy think tank Socos Labs, […]
AI expert underscores role of soft skills in utilizing the technology - Back End News
Dr. Vivienne Ming emphasized the necessity of emotional intelligence, resilience, and creativity in the AI era.

We are also beginning to book dates for 2024, with engagements already secured in Paris, Stockholm, Maryland, Toronto, and more!

I would love to give a talk just for your organization on any topic: AI, neurotech, education, the Future of Creativity, the Neuroscience of Trust, The Tax on Being Different...or why I'm such a charming weirdo. If you have events, opportunities, or would be interested in hosting a dinner or other event, please reach out to my team below. - Vivienne

<<If you are interested in pursuing an opportunity with Vivienne in or around these locations, please reach out ASAP!>>

Research Roundup

Centaurs vs Cyborgs

If we treat AI as a cheap productivity boost it will never expand our capacity, much less improve our economy.

Don’t get me wrong, AI boosts productivity; it has certainly boosted mine over the years. And since the release of ChatGPT, study after study has shown LLMs lift productivity in programmers, call center workers, writers, and more. (They could lift productivity in medicine, but most doctors still ignore AI recommendations.)

I’ve written about all of these findings, but I’ve also shared that in every case the boost overwhelmingly benefits inexperienced and lower-skill workers. Some have described this low-skill lift as a feature; however, I see it in the context of current research showing that when AI tutors give students answers, it disrupts learning. These junior employees may never learn their jobs if GPT or Gemini does the work for them.

Do my fears apply to a highly educated, highly motivated workforce? The answer comes from a new study from MIT and BCG. Consultants at the company analyzed data, generated reports, and prepared presentations, all similar to their everyday work, but only some were allowed access to advanced AI. “For each one of a set of 18 realistic tasks…, consultants using AI were significantly more productive (they completed 12.2% more tasks on average, and completed tasks 25.1% more quickly), and produced significantly higher quality results (more than 40% higher quality compared to a control group).”

In keeping with the earlier findings, however, AI boosted the productivity of below-average consultants nearly three times more than that of above-average consultants. And while the AI improved the average quality of ideas generated by consultants, it also reduced the diversity of those ideas.

Even more importantly, while the tasks above required elite skills (e.g., analyzing data sets and preparing professional reports), the skills were still routine. When asked not simply to conduct routine tasks but to draw creative conclusions from the data and advise clients on the future, “consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI.”

AI harmed creative labor even in this elite workforce. Why?

A big hint may come from the appendix of that report. In it, the authors describe two distinctly different approaches to working with AI: “Centaurs & Cyborgs”. The centaurs broke up the work “between human and AI for each of the …sub-tasks”, assigning parts entirely to themselves or their AI tool. For example, centaurs might write their own report and then ask the AI to refine it, accepting or rejecting the output. Centaurs are maximizing their efficiency by assigning the “busy work” to the AI.

In contrast, cyborgs “use AI for each of the sub-tasks throughout the whole workflow.” They “continually question [the] AI and experiment to reach a better output.” For example, cyborgs would “instruct [the] AI to simulate a specific type of personality or character.” This is a strength of LLMs I specifically leverage in my own work. Note how it rejects the idea of AI=efficiency and instead embraces AI for productive friction, generating better performance through increased collaboration.

Also distinctly different from centaurs was cyborgs’ willingness to “push back: disagreeing with the output and asking AI to reconsider”. This behavior is particularly valuable in creative labor, where the machine supplies the knowledge and the human brings the understanding, context, and insight.
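Here is a rough sketch of that cyborg loop in code. Again it assumes the OpenAI Python client, and the persona, prompts, and fixed number of rounds are stand-ins for the push-back a real cyborg would type by hand.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Keep the model in the loop for every sub-task and push back on its output,
# rather than handing off "busy work" and accepting the first answer.
messages = [
    {"role": "system", "content": "You are a skeptical strategy consultant reviewing my analysis."},
    {"role": "user", "content": "Here is my draft recommendation for the client: <your draft goes here>"},
]

for turn in range(3):  # a few rounds of disagreement, not one-shot delegation
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)  # model name is a placeholder
    answer = reply.choices[0].message.content
    print(f"--- round {turn + 1} ---\n{answer}\n")

    # In real use the human reads this, disagrees where warranted, and writes the
    # push-back themselves; the canned line below just stands in for that step.
    messages.append({"role": "assistant", "content": answer})
    messages.append({
        "role": "user",
        "content": "I disagree with your key assumption. Reconsider it and explain what changes.",
    })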

Learn to love the cyborg!

LLMs Deprofessionalize

Will AI free up workers from mundane labor so that they can focus on more creative, value-added tasks? The party-line answer from every AI startup CEO with whom I’ve ever shared a stage is, “Hell yes! Hallelujah.” Unfortunately, they are full of shit (as I told them at the time).

The first labor market impacts of LLMs (GPT or Bard) and generative image models (Stable Diffusion) are in, and they tell a story of deprofessionalization. Rather than empowering freelance writers and artists to create even better work, “generative AI” has caused “freelancers in highly affected occupations [to] suffer…reductions in both employment and earnings.” In fact, rather than augmenting the elite, the evidence suggested “that top freelancers are disproportionately affected by AI” as employers shifted to hiring lower-skilled freelancers. Why pay top dollar when AI makes all routine labor boringly the same?

This is the clearest evidence of the deprofessionalizing effect of AI to date.

Don’t get me wrong. Deprofessionalization isn’t inevitable. AI tools can absolutely augment workers, but the evidence above shows that this isn’t the current trend. A future in which AI makes us better won’t come from lazy promises. We must fight for it.


Follow more of my work at
Socos Labs The Human Trust
Dionysus Health Optoceutics
RFK Human Rights GenderCool
Crisis Venture Studios Inclusion Impact Index
Neurotech Collider Hub at UC Berkeley