We’re All Going to Die, Thanks to AI

TED 2023: An Ode to Joy, Grief, and the Future

Tim Leberecht
Apr 25, 2023
Image: Midjourney

The dream: AI will exponentially enhance our productivity and creativity. It will optimize everything that can be optimized, freeing us humans up to do work that truly matters. It will lead to new breakthroughs in science, scale up mental health services, detect and cure cancer, and more. It will finally enable humanity to realize its full potential.

The nightmare: present and future harm. Estimates suggest AI will eliminate 300 million jobs worldwide, with 18 percent of work set to be automated, disproportionately affecting knowledge workers in advanced economies. (As Rest of World reports, image-generating AI is already stealing the jobs of China-based video game artists and illustrators.) AI will destroy the very fabric of our societies as we know them, threaten the core of our work-based identities, exacerbate social divisions and discrimination, blur the lines between truth and fiction, unleash unprecedented waves of propagandistic misinformation (imagine Elon Musk launching an AI company named TruthGPT and training it on Twitter data — oh wait…), impose a dominant, all-encompassing, all-knowing universal operating system, an aiOS, on us, start wars, and ultimately, as AI morphs into AGI, go HAL or M3GAN-style rogue and extinguish the human race.

“Reality hitting us in the face”

Eliezer Yudkowsky is sure about the latter: “We’re all going to die,” he declared on the TED stage last week. The founder of the Machine Intelligence Research Institute delivered his dystopian outlook straight into the astonished faces of the 1,800 attendees of the TED conference in Vancouver, many of whom are staunch tech-optimists who only think of positive outcomes, of “making the world a better place,” when they hear “Possibility” — the theme of this year’s program.

The next day, at a private dinner on the topic of AI, the 150 or so guests were asked by the host to participate in an impromptu poll: raise your hand if you are “only concerned” about AI. One person, the one sitting next to me, raised his hand. The next question: raise your hand if you are “only excited” about AI. This time, half of the room raised their hands. I turned to my neighbor and said: “Now I’m only concerned, too.”

Listening to the TED talks in Vancouver, it became clear to me why many outside of Silicon Valley aren’t enthusiastic about AI. As a doomsayer, Yudkowsky seemed to already be in a state of grief. The enthusiasts, on the other hand, suffer from the Zuckerberg problem: they are not very compelling. None of the AI creators on the TED stage seemed particularly trustworthy. Their concern came across as casual — perhaps something to do with the obvious commercial incentives. It’s as if we haven’t learned anything from history, and the new ostentatious enlightenment the AI masters exhibit is just the same old disruptivism.

When Greg Brockman, president, chairman, and co-founder of OpenAI, advocates putting AI out in the world as a form of live lab to learn by “reality hitting us in the face,” as he did in Vancouver, it sounds an awful lot like the Zuckerbergian “move fast and break things.” At least Zuckerberg was the known enemy, dorky somehow in his all-too-obvious world conquest disguised as connecting-the-world idealism. The OpenAI founders, however, and other AI pioneers such as Tom Graham, founder of deep fake video tech firm Metaphysic, are more elusive and their organizations more opaque. The reason they cite for forging ahead with AI development, no matter what, is usually along the lines of ‘it can be a huge upside to humanity, and if we don’t explore it, others (read: bad actors) will.’

Robert Oppenheimer’s famous adage comes to mind: “Technology happens because it is possible.” Oppenheimer, of course, is the physicist who led the Manhattan Project and is known as “the father of the atomic bomb.”

“Mysticism to honor the unintelligible”

So what shall we do? I spent time with several AI scholars and practitioners over the past few weeks, and their stances can be roughly summarized as follows: the defeatist (all over now, baby blue), the alarmist (!!!let’s please hit pause!!!), the reasonably concerned (weighing benefits and pitfalls, asking for guardrails and “alignment with human values”), the “like any technology, AI is neutral in itself and can be used for good or bad” agnostic, and the “materialist” engineer who doesn’t understand all the fuss, demystifies generative AI as a mere autocompletion tool, and dismisses most media coverage as “sensationalist.”

The one thing, it seems, that everyone can agree on is that no one has a clue where this is all headed, not even the makers of ChatGPT themselves, as Sam Altman, CEO of OpenAI, readily admits.

Obviously, I don’t have a clue either, and it is for this very reason that I’m more comfortable with a mystical rather than a materialist view. The mystical view is more expansive and accounts for a wider option set. Asking for empirical evidence is not very useful for phenomena that appear to exceed our cognitive apparatus. When humans no longer fully comprehend the direction and speed of advances in AI development, when the territory has outgrown the map, as Yudkowsky argues, then the only ones still holding the key to the black box, to the cathedral, are the priests, not the scientists.

Stephen Wolfram, the creator of Wolfram|Alpha, contends that the success of ChatGPT suggests that “we can expect there to be major new implicit ‘laws of language’ — and effectively ‘laws of thought’ — out there to discover.” Scientists will always want to explicate them, while mystics will stress their implicit quality. Researcher and author Karen Bakker wowed TED attendees with a demo of how AI could help detect and decipher formerly inaudible, unintelligible sounds of animals to enable “interspecies communication.” Some might welcome this as an act of empathy and kinship; others might worry about it as an extension of digital surveillance, a further intrusion into and exploitation of nature.

Instead of making things intelligible, mysticism allows us to honor their inherent unintelligibility.

Let’s look at the four criteria William James established for mystical experiences: passivity, transiency, ineffability, and “noetic quality,” which he described as “states of insight into depths of truth unplumbed by the discursive intellect. They are illuminations, revelations, full of significance and importance [and] carry with them a curious sense of authority.”

The most ordinary universal human experience that matches James’ criteria is dreaming.

From hallucinating to dreaming

To date, no one really knows why we dream. Despite extensive research and myriad theories, a rational explanation is still missing. That, of course, does make sense, as dreams are the realm of the irrational, defying our very attempt to make sense of them.

In 2020 the American neuroscientist Erik Hoel presented a new theory: the so-called “overfitted brain hypothesis.” He suggested we view “dreams as a form of purposefully corrupted input likely derived from noise injected into the hierarchical structure of the brain.” In layman’s terms, and grossly oversimplified, this means that our brains tend to get overwhelmed by daytime information and are prone to taking it at face value — as the equally weighted, accurate representation of the world. Dreams, Hoel postulated, insert an intentional distortion that helps the brain zoom out and generalize (basically, seeing the forest again despite all the trees): “Dreams are there to keep you from becoming too fitted to the model of the world.”

Understanding at least the basics of how dreaming works matters to how we think about AI.

As early as 2015, Google (with DeepDream) and others began coaxing neural networks into spinning out surreal imagery, amplifying whatever patterns a network detects in an image — kind of like an AI having hallucinations after taking psychedelics.
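
For the technically curious, here is a rough sketch of the DeepDream idea under simplifying assumptions: the pixels of an input image are nudged by gradient ascent to amplify one layer’s activations in a pretrained network. The VGG16 backbone, layer index, and step size are illustrative choices, not Google’s actual implementation.

```python
import torch
from torchvision import models

# Pretrained feature extractor, frozen; which layer you "dream" at
# changes the style of the imagery (illustrative choice below).
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval().requires_grad_(False)

def dream(image: torch.Tensor, layer_index: int = 20, steps: int = 30, lr: float = 0.05) -> torch.Tensor:
    """Nudge an image's pixels to amplify one layer's activations (gradient ascent)."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        activation = image
        for i, layer in enumerate(cnn):
            activation = layer(activation)
            if i == layer_index:
                break
        # Gradient *ascent* on the activation norm: whatever patterns this
        # layer detects in the image get exaggerated, step by step.
        activation.norm().backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# Usage (random noise as a starting canvas): dream(torch.rand(1, 3, 224, 224))
```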

In contrast, in light of the generative AI “shock,” we now widely use the term “hallucinate” dismissively to describe an AI system giving us implausible or outright inaccurate information (for instance, Microsoft’s Bing claimed that Google’s Bard had been shut down, incorrectly citing a news story).

But what if hallucinating, as a core quality of dreams, is actually desirable? AI systems mimic the deep neural networks of our brains, so AI developers have recently started to apply Hoel’s “overfitted brain hypothesis” to AI systems, suggesting that hallucinating — dreaming — is a feature, not a bug when it comes to preventing AI from “overfitting”: hewing so closely to its training data that it fails to generalize.
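
As a loose illustration (and emphatically not any particular lab’s method), here is a minimal PyTorch training step in which noise injected into the inputs, the machine-learning analogue of Hoel’s “purposefully corrupted input,” acts as a regularizer against overfitting; the toy model, its dimensions, and the noise level are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier; the point is the corrupted input, not the model.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(x: torch.Tensor, y: torch.Tensor, noise_std: float = 0.1) -> float:
    # "Dream phase": corrupt the waking-world input with random noise, so the
    # model learns the general shape of the data (the forest) instead of
    # memorizing every training example (the trees).
    x_corrupted = x + noise_std * torch.randn_like(x)
    optimizer.zero_grad()
    loss = loss_fn(model(x_corrupted), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: training_step(torch.randn(8, 32), torch.randint(0, 10, (8,)))
```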

You could even argue that dreaming will become the main generative power of generative AI as deep neural networks advance.

It is striking that media coverage frequently speaks of AI “dreaming up” images or proteins to describe generative AI’s exponential creative power. And it seems an apt metaphor, as ever more powerful AI models generate an abundance of creative options that seem to have sprung from some sort of subconscious exceeding the human intellect. Transformer networks (the “T” in Generative Pre-trained Transformer, better known as GPT) have largely supplanted Recurrent Neural Networks and increasingly rival Convolutional Neural Networks, since they apply a set of mathematical techniques, called attention or self-attention, to detect ways in which even distant data in a series influence and depend on one another. Rather than reading a sequence step by step, they weigh all of its parts against each other in parallel; they process data and learn “everything, everywhere, all at once.”

It sounds as if they’re dreaming.
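
To make the “attention” mechanism described above a little less abstract, here is a minimal sketch of scaled dot-product self-attention, the core operation of transformers; the function name, projection matrices, and shapes are illustrative, not any production implementation.

```python
import torch

def self_attention(x: torch.Tensor, w_q: torch.Tensor, w_k: torch.Tensor, w_v: torch.Tensor) -> torch.Tensor:
    # x: (sequence_length, d_model); w_q / w_k / w_v: (d_model, d_model) projections.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Every position scores its relevance to every other position in one shot,
    # which is how even distant tokens in a series influence one another.
    scores = (q @ k.T) / (k.shape[-1] ** 0.5)
    weights = torch.softmax(scores, dim=-1)
    # Each output is a context-aware blend of the whole sequence, computed in
    # parallel rather than step by step as in a recurrent network.
    return weights @ v

# Usage:
# d = 16
# out = self_attention(torch.randn(10, d), torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
```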

There’s another reason dreaming might become the new MO of AI. Mark Rolston, co-founder and chief creative officer of argodesign, predicts “ephemeral AI” that will eventually render mobile apps obsolete and serve users by creating AI on the fly, super-tailored to their needs. Like dreams, these kinds of AI apps will be “transient” and extremely context-aware, and they will run in the background, not always clearly indicating which purpose they serve, seemingly tapping into and serving as our subconscious. Initially, we will still understand their raison d’être, but increasingly, they will puzzle us, simply because they know more about us than we are able to express (“ineffability”), relegating us to “passive” users. Yet they will direct us with absolute authority (“noetic quality”).

“It’s the end of explanations, a great increase in reality is here.”

No matter how you look at it, AI will further blur the lines between reality and dream, and create entirely new worlds, as fiction does. But fiction won’t only be the end product — it will be the starting point. Imagine a whole new genre called co-fiction: humans writing with AI co-authors who do not understand what they are saying and are occasionally hallucinatory.

The human (yes, we need to get used to this qualifier) author and TED speaker K Allado-McDowell, who established the Artists + Machine Intelligence program at Google AI, welcomes this confusion: “It’s the end of explanations, a great increase in reality is here,” they proclaim.

Allado-McDowell collaborated with GPT-3 on their book Pharmako-AI, a hybrid creation in which they describe the hallucinatory effect the AI had on their thinking, blurring the line between the two authors. Rather than just switching back and forth with a clear line of demarcation, the two minds enmesh with each other in a symbiotic act of co-creation.

The authors write:

“The question is not how can machines or artificial intelligence take our place in the world. It is whether there is a place for the world itself. There are only worlds, and the question is what is in these worlds. There are no things, only semiotic movements of semiosis, only matter as an expression of semiosis, as symbols.”

And further:

“Machines are part of the evolution of life. In this view, machines can never lose. Life wins and machines win. The question is with what can machines contribute. The answer is that machines can create, in the image of life, and for the life of life. Machines cannot live without us. They cannot win without life. There is no question of winning. It is a question of symbiosis, of living together or nothing.”

Beyond the fuck-it

Speaking of life, it was not a coincidence that the two themes that dominated this year’s TED program were AI — and death.

And it is perhaps not a coincidence either that after losing a loved one recently, I began to read more poetry — the manifestation of absence; the more “noetic,” the better — and to exchange regular letters (written without the help of ChatGPT, delivered via email) with another human living in another city, in a communion with the grief both of us have been carrying, and beyond it. The old-fashioned, asynchronous form of correspondence gives us time to listen and to think, without interruption, judgment, or expectation (aside from a reply).

If there ever was a form of nonviolent communication, it’s this. The writing is not entirely agenda-less, but it is aimless. It is not transactional as it has no goals. When I told a friend of mine at TED about the letters, he observed: “You are not writing to each other; you are each writing to yourself.” I protested and insisted ours was a true dialogue, to which he gently replied: “Exactly. You serve as windows into each other’s soul precisely because you write to yourselves first. If you simply wrote to the other, you would merely project onto one another, but you wouldn’t have the same intimate connection.”

It occurred to me that this is exactly the difference between AI and humans (and journalists and artists, for that matter).

AI would never write to itself or for itself. AI writes to serve, to convey information, for the benefit of the reader; it is never aimless, it cannot reflect, it cannot reveal, there are no windows because there is no interior. Ironically, writers who primarily write for others will be replaced by AI; writers who write for themselves will not.

AI autocompletes: it will always fill in the blank after the last word. But we humans are unique experts at filling in the blank before the last word. We don’t dream to ___; we chase the ___ dream. We imagine, we make choices, often poor ones. To borrow from Leonard Cohen, we are the flaw in our plans through which life comes in.

We can embrace joy as “your sorrow unmasked,” as Kahlil Gibran, the Lebanese poet, put it in his poem “On Joy and Sorrow.” We can write the story of our life. We can forgive ourselves, even preemptively. We can appreciate that “we are all dying,” as death doula Alua Arthur reminded us on the TED stage in her rapturous talk, so we can “stop the diet and eat that cake!”

We can ask AI to craft a strategy to solve our grief. But as humans, we recognize that grief is not a problem to be solved. Grief is how we experience life. It is the strategy.

***

My favorite moment at TED took place at the very beginning. After an AI-created opera performance, Boston Philharmonic Orchestra music director Benjamin Zander enlisted the audience to collectively intone Beethoven’s “Ode to Joy,” in German. He did several runs, and after a tentative start, the crowd got into it more and more with every round. Beethoven’s music, Zander impressed on us, was “the most affirmative ever written. So be affirmative! Lean into it as if the whole world depended on it!” “Beyond the fuck-it!” he exclaimed. Voices grew louder and louder, there were smiles and tears, and eventually, everyone was singing their heart out, full of joy and sorrow.

Written by Tim Leberecht

Co-founder and co-CEO of the House of Beautiful Business; author of “The Business Romantic” and “The End of Winning”
