I'm Turning My Son Into a Cyborg[1]
How will yours ever keep up?
Imagine if everyone spoke a language that you don’t understand. It’s not a foreign language–it’s been spoken around you since the day you were born–but whereas everyone else understands it immediately, for you it means nothing. Others become frustrated with you. Friendships and jobs, just being “normal,” seem to assume fluency. For many with autism, this is the language of emotion. For those on the spectrum, fluency in facial expressions doesn’t come for free as it does for “neurotypicals”–reading facial expressions is a superpower. So, when my son was diagnosed, I reacted not just as a mom–I reacted as a mad scientist and built him a superpower.
But no mad scientist mother would stop there. When he was diagnosed with type 1 diabetes, I hacked his insulin pump and built an AI that learned to match his insulin to his emotions and activities. I’ve also explored neurotechnologies to augment sight, hearing, memory, creativity, and emotions. Tiger moms might obsess over the “right” prep schools and extracurriculars for their child, but I say, “Why leave their success up to chance?” And if I turn my son into a cyborg and change the definition of what it means to be human, how will your child ever keep up?
Years ago, on my very first machine learning project as an undergrad, I helped build a system for real-time lie detection from raw video. (It was, of course, sponsored by the CIA. Did you think I was kidding about the “mad scientist” thing?) The AI we developed learned to recognize the facial expressions of people on camera and infer their emotions. Before this project, I assumed I’d spend a long neuroscience career sticking electrodes into brains, but watching our algorithms learn such a foundationally human task hooked me on studying natural and artificial intelligence together.
Fast forward through 10 years of my academic career (neural coding and cyborgs) and my first few startups (AI for education and jobs), and I had built a reputation as the crazy lady seeking to “maximize human potential”. When the ill-fated Google Glass, a wearable smartphone masquerading as a pair of glasses, was launched by throwing some guys out of a blimp, I was invited to explore ideas for what could be done with it beyond social media posts and family videos.
For a woman who wanted to build cyborgs, there was so much potential. Along with its computing power, Glass had a live camera, a heads-up display, and a combination of voice and head-motion controls. Drawing from that old CIA project and my years of machine learning research, I built face and expression recognition systems for Glass. (In truth, the crappy little processor would heat up like a bomb, and so the system required an extra computer strapped to the user’s back–not exactly Iron Man.)
I could read people's faces with these augmented reality glasses—and so many more terrible things. I could scan a room reading expressions and flag false smiles (LA and DC, I’m looking at you), access their credit scores, or pull up their Facebook or Grindr accounts (or Ashley Madison for CFOs). The scene could play out like an episode of Black Mirror with Glass cuing my actions to exploit the emotional vulnerabilities of others. But I wasn’t interested in exploring the questionable or downright terrifying applications of Glass. I just wanted to give kids like my son greater insight into the people around them.
In 2013 I built a proof-of-concept system. Based on research from one of my academic labs, our system recognized the expression of the face in front of it and then wrote the emotion on the little heads-up screen, allowing an individual with autism to more easily perceive whether the person before them was happy, sad, angry, and more. Simply wearing Glass while continuing everyday social interactions with others allowed these kids to learn that secret language of facial expressions. The state of the art in emotion recognition training for individuals with autism remains cartoon faces on flashcards. Here’s a smiley face; here’s a frowny face. But our brains process the contextual information of the natural world differently than these static, artificial signals.
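The core loop of that proof of concept can be sketched in a few lines. Everything here is a toy: the two features (mouth curvature, brow raise), their centroid values, and the nearest-centroid stub standing in for a trained expression model are all invented for illustration, not the actual SuperGlass code. The point is only the shape of the pipeline: extract features from the face, classify the expression, and print the label to the heads-up display.

```python
import math

# Hypothetical centroids in a (mouth_curvature, brow_raise) feature space.
# A real system would learn these from labeled video, not hand-pick them.
EMOTION_CENTROIDS = {
    "happy":   (0.8, 0.2),
    "sad":     (-0.6, -0.3),
    "angry":   (-0.4, 0.9),
    "neutral": (0.0, 0.0),
}

def classify_expression(features):
    """Return the emotion whose centroid is nearest to the feature vector."""
    def dist(centroid):
        return math.hypot(features[0] - centroid[0], features[1] - centroid[1])
    return min(EMOTION_CENTROIDS, key=lambda e: dist(EMOTION_CENTROIDS[e]))

def hud_label(features):
    """Format the label that would be drawn on the heads-up screen."""
    return classify_expression(features).upper()

print(hud_label((0.7, 0.1)))  # a broad smile lands nearest "happy"
```

In the real system the features came from live video and the classifier was a trained model, but the display side really was this simple: one word, written where the wearer can see it.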
The research has continued over the years and overcome many of the original limitations. For many kids, these systems are more than a prosthetic—they actually advance their learning of the secret language of emotion. A team at Stanford has shown that these technologies can improve kids’ expression recognition even when they aren’t wearing the devices. And most surprising was my discovery that it helped develop their empathy and perspective taking. Learning that a smile means happiness from a flashcard teaches nothing about why people are happy. Learning the same from natural social interaction actually helps build theory of mind, another secret language thought to be missing in individuals with autism.
My son is rather amazing, and the more I experimented, the more I realized that I didn’t want to “cure” his autism. I didn’t want to lose him and his wonderful differences. SuperGlass became a tool to translate between his experience and ours, a tool to help these kids navigate a sometimes alien world. In an era where jerks like me are building AIs to do an increasing array of complex and sophisticated human tasks, your value to the world is what makes you different. The more different you are, the more valuable. My son is priceless.
I want to build a world where everyone has superpowers[2]. I went to grad school telling fellow students, “I want to build cyborgs.”[3] I explored what it would mean if prosthetics could directly interface with your brain, a field now known as “neuroprosthetics”. Already today many neuroprosthetics transform people’s lives: cochlear implants for deafness, retinal implants for the blind, motor neuroprosthetics for the paralyzed, and deep brain stimulation for a rather extraordinary array of disorders, including depression and Parkinson's.
My first project in neuroprosthetics came during grad school at Carnegie Mellon. My advisor and I developed a machine learning algorithm that learned how to hear just by “listening” to the world around it. I’d stroll through Schenley and Frick Parks in Pittsburgh[4]. As the system “listened” to the sounds of birds, breeze, and babbling brooks, the algorithm slowly learned to hear more and more, subtly adjusting millions of internal calculations to make greater sense of its auditory world.
Inspired by one of my dissertation advisors’ research, I began to wonder if we could build an AI-driven cochlear implant, a neuroprosthetic ear that restores hearing in some forms of deafness. Our experiments showed that the algorithm greatly improved speech perception for those using these implants. I loved both my pure science research and my applied work in facial analysis, but this was the first time I’d built something that could transform someone’s life. That experience, in turn, transformed my own life, because I knew that this was how I wanted to spend the rest of it.
My introduction to neuroprosthetics was also an introduction to the messy complexity of what makes a “better” life. As a naive hearing person, it never occurred to me that some would experience deafness as an identity rather than a disability, with its own languages and culture. In some communities, however, cochlear implants are seen as genocide: of their unique languages, of their way of life, of who they are. And yet, there is also evidence that infants receiving implants not only learn to hear, but that it even increases broader cognitive function. As with autism, I’m often confronted with the dilemma of “curing” people of who they are versus giving them the tools to share those rich differences with the world.
This dilemma takes on an even greater urgency when I confess that my particular area of research and development is cognitive neuroprosthetics: devices that directly interface with the brain to improve our memory, attention, emotion, and much more. For many, the idea of computers being jammed into our brains evokes science fiction nightmares like the Borg from Star Trek or the human-like machines of The Terminator. While my own work takes me in very different directions than these dark stories, it’s true that neuroprosthetics are already beginning to change the definition of what it means to be “human”, and the end result of these explorations of humanity is not at all clear.
In many cases, what it means to be human is tragic. Children are often devastated by traumatic brain injuries (TBIs), suffering long-term mental and physical challenges. Clinical videos of kids and adults tearfully struggling with tasks that used to be trivial are heartbreaking. Many of those with TBIs have trouble with their working memory span, which is roughly how many “chunks” of information a given person can remember in any given moment. Working memory plays a sizable role in educational attainment, lifetime income, and even health and longevity. When we know we can make a difference, allowing a car accident or a fall or even poverty to steal a child’s future is just as morally perilous as augmentation run amok.
At my mad science incubator, Socos Labs, we collaborate with neurotech startups like HUMM, Cognixion, and Optoceutics to make a difference in fighting traumatic brain injury, cerebral palsy, Alzheimer’s, depression, and so much more. HUMM has developed a small, rechargeable patch that uses transcranial alternating current electrical stimulation (tACS) to enhance the connection strength (“coupling”) between areas in the prefrontal cortex (crucial for working memory) and more posterior cortical regions. The stimulation specifically enhances theta-band synchrony, oscillations that help coordinate activity across cortical regions. The enhancement in turn drives increases in multitasking performance, attention, and working memory span.
In one recent experiment, adults playing the old “Simon” game, where you must remember the order of increasingly long patterns of lights and sounds, increased the length of sequence they could regularly remember by 20% compared to a sham stimulation. In another recent experiment, similar stimulation improved working memory in seniors experiencing cognitive decline. Our whole ability to multitask seems to depend on these theta-rhythmic oscillations, and it improves when they are augmented.
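To make “span” concrete, here is a toy simulation of a Simon-style span measurement, with an invented player whose recall has a hard capacity limit. The capacities (10 items at baseline, 12 under stimulation) are picked only to illustrate the 20% arithmetic, not taken from the actual study:

```python
import random

def measure_span(recall, max_len=20, rng_seed=0):
    """Grow the sequence until the player first fails; return the last length recalled."""
    rng = random.Random(rng_seed)
    span = 0
    for n in range(1, max_len + 1):
        seq = [rng.randrange(4) for _ in range(n)]  # four Simon colors
        if recall(seq) != seq:
            break
        span = n
    return span

def capped_player(capacity):
    """Toy player: perfect recall up to a fixed capacity, garbled beyond it."""
    return lambda seq: seq if len(seq) <= capacity else seq[:capacity]

baseline = measure_span(capped_player(10))  # sham stimulation
boosted = measure_span(capped_player(12))   # a 20% longer span
```

Real experiments average over many trials and participants rather than stopping at the first failure, but the measured quantity is the same: the longest sequence a person can reliably reproduce.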
Theta isn’t the only interesting frequency band for cognitive neuroprosthetics. Gamma-band tACS augmentation, increasing synchrony in the range of 30–40 Hz, has also shown surprising impacts on cognitive function and aging. For example, groups at MIT are using synchronized visual or auditory strobing in the gamma range to reduce Alzheimer's symptoms. Optoceutics is pursuing a similar treatment, but using masked light technologies. It is astonishing that a strobing light—much less one whose strobing is hidden by color invariance—could affect the physical and behavioral symptoms of Alzheimer’s, especially given the heterogeneity of our brains. But the findings suggest that with precise matching to individuals’ intrinsic patterns, these lights may someday become a standard treatment, or even a prophylactic, for neurodegenerative diseases. Broader research also shows benefits for major depression and hippocampal communication in basic memory encoding and retrieval.
Emerging technology, like adaptive gamma- or theta-band tACS, could have a tremendous impact on a kid with a TBI and others struggling with working memory challenges. Better yet, non-invasive devices paired with old school intensive cognitive therapies could improve their chances of living longer, richer lives. No loving society could deny them this opportunity. A sustained 20% increase in working memory would change academic and economic life outcomes for so many. As with all of Socos Labs’ projects, we contribute our efforts for free and give our discoveries away—a reasonable investment toward a world in which every child has the chance to write their own life story.
It would be willfully naive to think that neuroprosthetics research ends with children overcoming TBI or those suffering from dementia. If these technologies can augment function in injury or disease, they will inevitably one day do the same for the rest of us neurotypicals[5]. Students in the US already experiment with drugs like Ritalin and Adderall to improve their academic outcomes, and parents are often willing participants, even though the benefits might be an illusion. The pressure to perform is so great, and the fear of losing our privileged place in the socio-economic hierarchy so intense, that it drives hyper-competition in academics and the workplace.
At Socos Labs, we don’t just invent “superpowers” (as fucking cool as that is!), we investigate how these technologies drive inequality and other unintended consequences. The core question for many inventing AI and neuroprosthetics is usually, “What most idealized use for this technology can I imagine…that will also make me rich?” But the real question must be, “What will actually happen when this technology impacts the real world?” Neuroprosthetics will fundamentally change what it means to be human over the next two decades, but for whom? Though anyone might have access to new neurotechnologies in theory, in reality those most able to take advantage of them are the ones who need them the least. Having wealthy parents already dramatically impacts a child’s outcomes, even affecting working memory. By contrast, simply being born into poverty and stress robs children of their cognitive potential. Imagine these advantages not subtly embedded in the life experience of the wealthy, but directly for sale and turned up to 11. Intergenerational social and economic mobility disappears.
Working memory is just the start. Researchers have augmented creativity and emotional control, as well as modulating honesty, pleasure, and numerous other foundations of self. As with autism, those with “abnormalities” in these areas can have profound challenges engaging with the world. I’ve worked on systems to predict manic episodes in bipolar sufferers and to develop perspective taking in kids with autism. Cognixion, another neurotech company I advise, builds mobility and communication neuroprosthetics, combining EEG with augmented reality to create a closed-loop brain-computer interface for day-to-day life. Their headgear allows people trapped by cerebral palsy or stroke to engage with their world under their own control.
Diverse advances in neuroprosthetics beyond HUMM, Optoceutics, or those I developed for my son might allow people to become much more than a tragic diagnosis. In a world that values difference, they might become even more "valuable" than the neurotypicals around them. And if their augmentation lifts them above the crowd, soon everyone will want to be more than human. Or if these augmentations become a norm, who will choose to be less than human?
You probably have an equalizer on your phone that allows you to amplify the bass and treble in music. Sliding the controls around doesn’t fundamentally change the song, but it emphasizes differing elements, from the clarity of voice in an opera to the big bass drop of the best EDM (is there such a thing?).
Now imagine the app equalizes you[6]. Instead of adjusting the power at different sound frequencies, sliding a controller on this app boosts your attention or dampens your creativity. Add in a boost for memory and you are ready to cram for an exam. Hit the “Date Night” preset to stimulate emotion and focus while dampening cognition. If there’s a bad romantic comedy in your near future, why be too smart to enjoy it[7]? We might augment any of the richly diverse aspects of the human brain. I love the idea of a cyborg who’s augmented to explore their emotions. That’s definitely not the Borg.
Though no one has yet combined these many forms of neurostimulation into a prosthetic, we couldn’t resist experimenting with simulated versions that dynamically adapt to an individual's environment. Combining historical EEG ("brainwave") data with eye movements and heart rate, our AI orchestrates its multidimensional neurostimulation to produce a version of you more suited to the task at hand, dynamically adapting you to problem solving or relaxation or public speaking. Though it's only a theoretical simulation, the system follows my most important technology design rule: You should not only be better when you’re using it, you should be better than where you started when you turn it off. Neuroprosthetics shouldn't replace what we can do for ourselves, they should augment who we aspire to be.
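The adaptive loop of that simulation can be caricatured as a simple proportional controller: compare the sensed cognitive state against the preset's target levels and nudge each channel toward its target. Everything in this sketch is invented for illustration (the presets, the channel names, the gain, and the 0-to-1 scale); it is nothing like a real stimulation protocol.

```python
# Hypothetical target levels per cognitive "channel", on a 0..1 scale.
PRESETS = {
    "exam_cram":  {"attention": 0.9, "memory": 0.9, "creativity": 0.4},
    "date_night": {"attention": 0.6, "memory": 0.3, "creativity": 0.7},
}

def control_step(preset, sensed, gain=0.5):
    """One proportional step: the adjustment for each channel is a fraction
    of the gap between the preset's target and the sensed state."""
    target = PRESETS[preset]
    return {ch: round(gain * (target[ch] - sensed.get(ch, 0.5)), 3)
            for ch in target}

# In the simulation, "sensed" would be inferred from EEG, eye movements,
# and heart rate; here it is just a dictionary of made-up readings.
adjust = control_step("exam_cram",
                      {"attention": 0.5, "memory": 0.7, "creativity": 0.4})
```

Each step closes part of the gap rather than all of it, which is what lets a loop like this adapt smoothly as the person's state drifts during a task.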
The capabilities of these neurostimulation systems only scratch the surface...literally. They work via wearable devices on the scalp, but invasive neuroprosthetics act inside the brain. This has traditionally been done with ultra-thin wires implanted chronically through the skull[8], but new technologies go far beyond that. These include injectable self-expanding “neural nets”, optogenetic (light-based) interfaces, and one of my favorite up-and-coming technologies, nanodevices, including neural dust and neurograins.
The “dust” consists of nanodevices the size of a grain of sand and powered by ultrasound to stimulate individual nerve fibers. Spreading thousands of these throughout the brain could allow us to create ultra-precise closed-loop prosthetics, simultaneously decoding neural activity and encoding new signals back into the brain. It may sound rather mundane, but combinations of technologies like those above might someday decode visual and speech center patterns allowing you to think up a multi-sensory email, with “attached” emotional content, all while simply walking down the street.
As science fictional as neuroprosthetic telepathy might sound, profound possibilities are emerging today, from restoring movement to the paralyzed to restoring memory function to injured soldiers. Even autism might be transformed. One of autism’s most salient features in the brain is a lack of connectivity and communication between different cortical regions, such as the frontal-posterior communication in working memory. Unable to communicate with one another, each cortical region develops in a kind of isolation, compromising neurotypical brain function. With more invasive technologies like neural dust we might be able to restore some degree of intracortical communication, bypassing the biology via AI-mediated wireless communication (like the way much of Africa skipped landlines and went straight to mobile phone networks). Projects like SuperGlass can only help the kids “high-performing” enough to engage with the world around them; however, bringing intracortical communication to life in profoundly autistic brains could completely transform lives. Differing patterns of disrupted cortical connectivity also play a role in both psychotic illness and emotional disorders. Invasive neuroprosthetics might transform these lives as well.
[1] Personal note: anyone shocked or offended by the title and subtitle of this piece, please read up on the role of irony in human communication. It is beyond the scope of this article to help you.
[2] According to Pixar, that does make me a supervillain.
[3] They would literally scoot away from me for fear that my crazy was communicable.
[4] It’s one of the strangest and most under-appreciated cities in the US, often by its own residents (whose metro has the highest share of native-born residents in the country). The parks themselves are the remnants of massive Gilded Age estates born out of Pittsburgh’s coal and steel wealth. I felt a bit like I was exploring the ruins of an ancient and more elevated civilization—though one born from the ruthlessness of men like Frick and his boss, Carnegie. It was hard not to hear Tennessee Ernie Ford singing in my head as I walked across the campus at CMU. Still, Pittsburgh has become something much more than the industry town it once was.
[5] A technical term meaning “your brain is boring”.
[6] That sounds like the punchline to a Yakov Smirnoff joke for Futurama, “In the Borg, the app equalizes you!”
[7] Your date likes reality TV? Sorry, our app can only make you so dumb without risking permanent damage.
[8] I need a neuroprosthetic like I need a hole in my head.