Previously: On a war footing
The war on the most vulnerable is ramping up and many voices in the press and wider media are complicit.
If the Bible is true, then I'm Christ.
— David Koresh, on a tape recorded for his followers in the Branch Davidians, as quoted by The New York Times in March 1993, while the Waco Siege was ongoing.
‘David Koresh’ was a product; the creation of a brilliant marketing mind. Until 1990, he was legally known as Vernon Wayne Howell. When Koresh submitted the paperwork, he indicated that he was doing away with ‘Vernon’ “for publicity and business purposes”.
In contrast, he gave his followers a soaring, rhetorical explanation: ‘David’ because he believed he was the head of the biblical House of David, and Koresh, a Hebrew transliteration of Cyrus, the name of the Persian emperor who, according to the Ketuvim, let captive Jews in Babylon return to Israel.
Robyn Bunds — who was 23 at the time she gave this quote to The New York Times, one of Koresh’s 19 ‘wives’, and mother of one of his children — said she was drawn to him because…
He had this amazing ability to recite verse. He just had a good way of interpreting the Scriptures. He is very believable.
An unfinished text by Koresh, his exegesis on the Book of Revelation (‘The Decoded Message of the Seven Seals’), made it into the public domain after his death at 33. He perished alongside 85 other Branch Davidians in the murderous conclusion of the Waco Siege, which ended with tanks storming the compound, a gunfight and a huge fire that totally destroyed the buildings.
It was the work Koresh had told negotiators he wanted to complete before he would surrender. The FBI claimed, before the final assault, that it believed Koresh had not even started the document. The disc containing the dense 13-page text — which was carried by Ruth Riddle, the group member who had taken dictation, when she jumped from a second-storey window as the fire took hold — proved otherwise.
Though peers and teachers claimed Koresh’s school days were “unspectacular”, he could memorise Bible passages from an early age and turn that recall into extemporised interpretations and arguments of his own. In an alternate universe, Koresh might have become a big-money televangelist or the author of bestselling airport books written to provide suits and mediocre management consultants with ‘big’ ideas to pass off as their own.
Equally, when I encounter the pop psychology and cod-profundity of books like Yuval Noah Harari’s Sapiens, Jared Diamond’s Guns, Germs and Steel, Johann Hari’s Chasing the Scream and Stolen Focus, and Malcolm Gladwell’s entire bibliography, I see potential Koreshes. Had they not found their way into the partially-fenced zones of academia and the media, they too might have been tempted to turn their ‘skills’ to the construction of an alternative religion or sect. After all, Charles Manson was nearly a pop star.
I watched Netflix’s Waco: American Apocalypse — one of several new accounts of the siege timed for the 30th anniversary — earlier this week, and when I read You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills, a New York Times guest essay by Yuval Noah Harari, former Googler turned self-promoting ethicist Tristan Harris, and writer/entrepreneur/tech nepo baby Aza Raskin, the messianic tone was obvious.
The column begins with a flashlight-under-the-chin moment designed to make the reader scared from the get-go:
Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?
In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems. Technology companies building today’s large language models are caught in a race to put all of humanity on that plane.
That’s an immediate distortion of the survey data. A.I. Impacts, the project which conducted the survey, contacted 4,271 A.I. researchers, of whom 738 responded — some partially — giving a 17% response rate.
Harari, Harris, and Raskin lean on the stat that half the respondents believe the metaphorical plane has at least a 10% chance of crashing. But they don’t tell the reader that the A.I. Impacts report says:
The median respondent believes the probability that the long-run effect of advanced A.I. on humanity will be “extremely bad (e.g. human extinction)” is 5%.
While 48% of respondents put the chance of an ‘extremely bad’ outcome at 10% or higher, 25% put the risk at 0%. Remember, the majority of the A.I. researchers contacted did not reply at all.
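To see how the same data supports several different headlines, here is a quick sketch using the response figures above plus an invented set of individual estimates. The list of estimates below is illustrative only, chosen to mirror the shape of the reported results, not the actual A.I. Impacts responses:

```python
# Illustrative only: a made-up set of 20 risk estimates (in percent), NOT the
# real survey data. The response-rate figures are the ones reported above.
from statistics import median

contacted, responded = 4271, 738
estimates = [0, 0, 0, 0, 0, 1, 2, 3, 5, 5, 5, 10, 10, 12, 15, 20, 25, 30, 50, 80]

print(f"Response rate: {responded / contacted:.0%}")        # 17%
print(f"Median estimate: {median(estimates)}%")             # 5.0%
share_ten_plus = sum(e >= 10 for e in estimates) / len(estimates)
print(f"Share answering 10% or more: {share_ten_plus:.0%}") # 45%
share_zero = estimates.count(0) / len(estimates)
print(f"Share answering 0%: {share_zero:.0%}")              # 25%
```

The same spread of answers yields a 5% median, a quarter of respondents at zero, and roughly half at 10% or more; the op-ed simply picked the scariest of those framings.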
In her 2022 deconstruction of Yuval Noah Harari’s tactics for Current Affairs, Darshana Narayanan nails his desire to stoke fear:
Using the opportunity to promote a false crisis—another core trait of a science populist—Harari gave dire warnings of “under-the-skin surveillance” (admittedly a worrisome concept)…
… If we let people like Harari convince us that surveillance technologies can “know us far better than we know ourselves,” we are in danger of letting the algorithms gaslight us.
… By echoing the narratives of Silicon Valley, science populist Harari is promoting — yet again — a false crisis. Worse, he is diverting our attention from the real harms of algorithms and the unchecked power of the tech industry.
Tristan Harris, Google’s prodigal son, would probably argue that The Social Dilemma, his 2020 documentary for Netflix, was focused on “the real harms of algorithms” but, in truth, it was more about absolving Harris and other ex-Silicon Valley players of responsibility for the state we’re in and lifting them up as our potential saviours: a gaggle of clickbait Koreshes preaching.
At one point in The Social Dilemma, Justin Rosenstein, who led the team that developed the Facebook Like button, claims they were motivated by a desire to “spread love and positivity in the world.” Netflix itself, which tinkers with its own algorithms endlessly, went unmentioned and uncriticised in the film.
Aza Raskin is the co-founder of the Center for Humane Technology with Harris, and Harari has been a guest on the organisation’s podcast. The trio’s heavily-marketed hyperventilating is summed up by a paragraph early in the New York Times op-ed:
The specter of A.I. has haunted humanity since the mid-20th century, yet until recently it has remained a distant prospect, something that belongs in sci-fi more than in serious scientific and political debates. It is difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing more advanced and powerful capabilities. But most of the key skills boil down to one thing: the ability to manipulate and generate language, whether with words, sounds or images.
It’s unclear whether the first line is a deliberate or accidental echo of Marx and Engels’ opening to The Communist Manifesto (“A spectre is haunting Europe — the spectre of Communism. All the Powers of old Europe have entered into a holy alliance to exorcise this spectre…”). If it’s deliberate it suggests that the writers don’t really understand what Marx and Engels were getting at; if A.I. is a spectre, it is one being welcomed hungrily by power and capital.
Harari, Harris, and Raskin also sketch a very partial history of A.I. for the casual reader. A.I. has not “haunted humanity since the mid-20th century”; in sci-fi, its concepts have been deployed in both utopian and dystopian visions, while in reality there have been long periods when A.I. research has stumbled, stalled and dropped completely out of view.
There’s a name for those dead times — A.I. winter — and there have been several A.I. winters since 1955 when study in the field began in earnest with a proposal authored by John McCarthy of Dartmouth College, Marvin Minsky of Harvard, Nathaniel Rochester of IBM, and Claude Shannon of Bell Labs.
Their document — ‘A Proposal For The Dartmouth Summer Research Project On Artificial Intelligence’ — kicked off a sustained period of research funded by the US Defense Advanced Research Projects Agency (DARPA) into machine translation; ‘thinking machines’ that could play games like checkers; and neural networks. There was initially a lot of hype around these early experiments, but it didn’t last.
In 1969, Minsky, working with fellow A.I. pioneer Seymour Papert, published Perceptrons, which considered the flaws and limitations of neural networks (it was named after the ‘Perceptron’, an artificial neural network¹ built by Frank Rosenblatt in 1958, based on an algorithm created by Walter Pitts and Warren McCulloch). Claims that Rosenblatt made at the time — that neural networks would be able to recognise images and beat humans at chess, for instance — have since come to pass, but Minsky and Papert’s scepticism functioned like a bucket of cold water, taking the heat out of the research; DARPA withdrew its funding.
A further gut punch for A.I. researchers came in 1973 when the Lighthill Report concluded that the field had failed to live up to the grand claims made for it and UK funding for research in the area also ceased. That marked the start of the first A.I. winter which continued until around 1980 and was followed by another A.I. winter from the late-80s until the mid-90s.
The notion that the ethics of A.I. have not been the subject of serious scientific debate is ludicrous, and is suggested by the New York Times op-ed writers merely to bolster their spooooky argument. It’s true that serious debate about A.I. among politicians is still absent, but if you watched the paranoid rantings of members of Congress interrogating TikTok’s CEO, you probably know why.
When Harari, Harris, and Raskin say, “it is difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools,” they’re being disingenuous. In fact, they think they understand A.I. very well indeed but are using false modesty to condescend to you, the New York Times-reading dumb-dumb who might pick up a copy of Harari’s next book.
They continue:
In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.’s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers.
“Language is the operating system of human culture” is the kind of line that is catnip to the Airport Brain Book writer; it sounds profound but does not hold up to even the most cursory scrutiny. It is also an idea that both Harari and Harris have picked over in the past. The latter told Wired, “language shapes reality,” while the former made all sorts of wild claims about language in Sapiens. In her critique for Current Affairs, Darshana Narayanan writes:
Harari claims that “[many] animals, including all ape and monkey species, have vocal languages.”
I have spent a decade studying vocal communication in marmosets, a New World monkey. (Occasionally, their communication with me involved spraying their urine in my direction.) In the Princeton Neuroscience Institute, where I received my doctorate, we studied how vocal behaviour emerges from the interaction of evolutionary, developmental, neuronal, and biomechanical phenomena… we discovered that monkey babies learn to “talk”, with the help of their parents, in a fashion similar to the way human babies learn.
Yet, in spite of all their similarities to humans, monkeys cannot be said to have a “language.” Language is a rule-bound symbolic system in which symbols (words, sentences, images, etc.) refer to people, places, events, and relations in the world — but also evoke and reference other symbols within the same system (e.g. words defining other words). The alarm calls of monkeys, and the songs of birds and whales, can transmit information; but we — as German philosopher Ernst Cassirer has said — live in “a new dimension of reality” made by the acquisition of a symbolic system.
The difference between Harari’s claim and Narayanan’s explanation is the gulf between confident charlatanism and the complexity of expertise. The first can be a lot more compelling because it appeals to our mental sweet tooth — a brightly coloured and easily consumed Skittle versus the layered flavours of a sophisticated salad of fibrous facts.
By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers.
I asked my wife, Dr Kate Devlin — Reader in A.I. & Society at King’s College London and the author of Turned On: Science, Sex and Robots — to read the New York Times piece and her response to the line above was sharp:
If an undergrad student wrote this, I'd tell them to stop making dramatic assertions unless they can back it up with some solid citations.
The red pen demand for citations could be scrawled all over the op-ed. The idea of a “master key to civilisation”, for example, can be filleted in so many ways; for one, it assumes that all civilisations are the same, that all civilisations look like the one in which I’m typing this newsletter.
The undergraduate overexcitement continues:
What would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind — while knowing how to form intimate relationships with human beings? In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?
We already live in a world where “the weaknesses, biases and addictions of the human mind” are exploited endlessly. That’s why the advertising industry exists; why politicians hire spin doctors and other proponents of the dark arts; why the same political parties buy up huge amounts of data to shape election strategy — the Tesco Clubcard is a window into voters’ souls.
A.I. now is not a “nonhuman intelligence”; it is a machine fed on a vast buffet of the products of human intelligence. Its power is scale — the ability to chew through huge corpora of data and refashion what it finds into plausible answers and ‘creations’ — but A.I. does not feel, think, or care; it is a soulless mimic.
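A deliberately tiny sketch makes the point. A bigram chain is nothing like the scale or architecture of GPT-4, and the one-line corpus below is invented, but the principle of recombining human-written text into plausible-sounding output is the same:

```python
# A vastly simpler relative of a large language model: a bigram chain that
# "learns" only which word tends to follow which in its training text, then
# recombines them into plausible-sounding output. It understands nothing.
import random
from collections import defaultdict

def train(text: str) -> dict[str, list[str]]:
    words = text.split()
    following: dict[str, list[str]] = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(following: dict[str, list[str]], start: str, length: int = 12) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])
        output.append(word)
    return " ".join(output)

corpus = "the spectre of a i has haunted humanity and the spectre of language haunts the critics"
print(generate(train(corpus), start="the"))
```

GPT-4 is vastly more sophisticated than a bigram chain, but the raw material is still human work and the output is still recombination, not intention.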
The threat of disinformation from A.I. is, as Kate says, present in two forms: the accidental (A.I. giving out plausible but incorrect answers) and the deliberate (A.I. explicitly tasked by humans to create huge volumes of false information). A.I. is a creation of and a tool for people; it is not an unknown or unknowable alien invader, however much Harari, Harris, and Raskin want it to play that role in their scary story.
The next paragraph reveals how little the New York Times op-ed desk subjected the piece to scrutiny and, you know, editing:
A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts. Not just school essays but also political speeches, ideological manifestos, holy books for new cults. By 2028, the U.S. presidential race might no longer be run by humans.
Here are the thoughts it provoked in my mind, in order:

1) “A.I. could rapidly eat the whole of human culture…” So? And “eat” implies “consume and destroy”, which it doesn’t and won’t.

2) “… holy books for new cults.” Humans have been creating alternative religions forever; is an A.I. god any worse than a bunch of tablets some bloke claimed to have found buried in Bumfuck, New York?

3) “By 2028, the U.S. presidential race might no longer be run by humans.” How? Tell me HOW? Because I know you can’t without composing a very tedious short story that wouldn’t get anywhere near winning a Hugo.
The abject lack of challenge or editing only gets more apparent and egregious with the next paragraph:
Humans often don’t have direct access to reality. We are cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the anecdotes of friends. Our sexual preferences are tweaked by art and religion. That cultural cocoon has hitherto been woven by other humans. What will it be like to experience reality through a prism produced by nonhuman intelligence?
What is objective reality? It doesn’t exist, and tackling that question takes far more words than a New York Times op-ed can offer or than I can fit in a single email. Even a lone human, born and immediately abandoned in the woods, disconnected from culture, would experience reality through a number of lenses.
If A.I. comes to play a major role in art, religion and politics, it will be another factor to consider. Do the writers assume that we will simply agree with the A.I. and be bamboozled by it; marks in an endless confidence trick? Perhaps that says more about how the trio of writers see the rest of us: a bunch of gullible rubes easily taken in by ‘big’ ideas and oversized confidence.
For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.
What the fuck does that all even mean, really? It’s a burlesque of brilliance; a stupid person’s idea of profundity. It’s also an elite Western conception of how people live. Jump back just three generations in my family and you find men doing hard manual labour on the land and women doing hard manual labour in ‘service’. Keep going back and it only gets grimmer.
Those ancestors were not “pursuing ideals of beauty” or “dedicating [their] lives to causes… [from] the imagination of some prophet”; they were surviving. Of course, they were as capable of having rich internal lives as anyone, but they had no time to stroke their chins like Harari et al.
A.I. does not hallucinate; it produces output that seems bizarre to us and, as when we see a face in the shape of a house or the form of a door knocker, we seek to find meaning in it. The output seems like a hallucination because it is reminiscent of the garbled thinking we would experience in one.
The “Terminator” franchise depicted robots running in the streets and shooting people. “The Matrix” assumed that to gain total control of human society, A.I. would have to first gain physical control of our brains and hook them directly to a computer network. However, simply by gaining mastery of language, A.I. would have all it needs to contain us in a Matrix-like world of illusions, without shooting anyone or implanting any chips in our brains. If any shooting is necessary, A.I. could make humans pull the trigger, just by telling us the right story.
Kate says, “Invoking Terminator or The Matrix is the A.I. reporting equivalent of Godwin’s Law.” The writers are excited by the thought of dystopia and the idea that they might be the ones to warn about it; it is the Koresh mindset incarnate. And it also ignores the reality we live in now; my dad joined the Royal Navy aged 16, and my mum joined up when she was 17. How have we persuaded young people to fight and die ‘for their country’ for thousands of years? Propaganda, which needed no A.I. input to persuade.
“A.I. could make humans pull the trigger, just by telling us the right story.” What do they think a drill sergeant does? And why don’t they invoke Full Metal Jacket or Platoon? Those are stories about real horror and brainwashing that don’t need an invented dystopia or cold metal hands.
The radicalisation hotbeds of the internet are not even algorithmically driven. Anders Breivik frequented Stormfront, a bulletin board turned website that has inspired countless murders in its 33-year existence; the Christchurch mosque killer frequented 8chan; the Oregon mass shooter spent his time on 4chan; the Tree of Life Synagogue attacker used Gab. Facebook or YouTube might lead people into darker territory, but the darker depths of the internet are humans posting, with not even the barely hidden hand of A.I. playing a role.
The next paragraph of the op-ed is an example of what I think of as reference chaff; it’s where a writer scatters a series of names or quotes to distract the reader from interrogating the argument, confusing the radar of scepticism.
The specter of being trapped in a world of illusions has haunted humankind much longer than the specter of A.I. Soon we will finally come face to face with Descartes’s demon, with Plato’s cave, with the Buddhist Maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away — or even realize it is there.
It’s Harari, Harris, and Raskin — that’s starting to read like the name of a bad 60s folk rock group to me — trying to rope in Descartes, Plato, and Buddha as co-signatories to their argument. And then they grab for that favourite word of snake oil sellers, marketers, and politicians: Could.
“A curtain of illusions could descend over the whole of humanity…” I could glue a Cornetto to a Shetland pony’s head and persuade credulous people that it’s a unicorn. Illusions swallowing humanity is the writers’ dark fantasy; it has no connection to anything that looks like a “fact”.
Next comes an example of the growing media tendency to point to anything that worries it and ask plaintively: “Is that A.I.?”
Social media was the first contact between A.I. and humanity, and humanity lost. First contact has given us the bitter taste of things to come. In social media, primitive A.I. was used not to create content but to curate user-generated content. The A.I. behind our news feeds is still choosing which words, sounds and images reach our retinas and eardrums, based on selecting those that will get the most virality, the most reaction and the most engagement.
This is terror at automation. Our news feeds are just news, automated and organised. Yes, that is a problem, but it is not an A.I. demon making us dance to its tune. A.I. “choosing words, sounds and images [that] reach our retinas and eardrums” is just what journalists, editors, and producers have done throughout the mass media era. The A.I. does not make decisions; it executes commands as it has been programmed to do. The desire to anthropomorphise algorithms, especially when they have been given a ‘persona’ as a framing mechanism, is understandable, but it is foolish. Alexa is not a tiny woman trapped in your speaker; Siri is not a jinn.
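A toy sketch shows what that amounts to in practice. The post fields and weights below are invented and resemble no particular platform’s real code, but the shape is the point: a ranking function written by humans, optimising for a metric humans chose.

```python
# A toy sketch, not any platform's real code: "the algorithm" is a ranking
# function written by humans, optimising a metric humans chose.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # The weights are editorial judgement, just encoded as numbers (invented here).
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

def build_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    # "Choosing which words, sounds and images reach our retinas" is, at heart, a sort.
    return sorted(posts, key=engagement_score, reverse=True)[:limit]

feed = build_feed([
    Post("calm_explainer", likes=120, shares=4, comments=10),
    Post("outrage_merchant", likes=80, shares=60, comments=90),
])
print([post.author for post in feed])  # ['outrage_merchant', 'calm_explainer']
```

Change the weights and the feed changes; the choices live with whoever wrote the numbers, not with the machine.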
After several more huge steaming dollops of scaremongering, the op-ed ends:
We can still choose which future we want with A.I. When godlike powers are matched with commensurate responsibility and control, we can realize the benefits that A.I. promises.
We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for an A.I. world and to learn to master A.I. before it masters us.
There’s that disingenuous use of “we” again, and it’s wrapped up in the scare-tactic language of the wannabe prophets. A.I. is not “an alien intelligence” and ‘we’ did not summon it in some Lovecraftian ritual; it has been built, and how it continues to be built is open for discussion, debate, and regulation. Because Harari, Harris, and Raskin want to seem like gurus, they have to pretend that there have been no real debates about this, despite an entire field of A.I. ethics filled with people passionately debating it and advocating for care.
David Koresh believed god wanted him to interpret and reveal the secrets of the Book of Revelation, the Bible’s most vivid visions of apocalypse and utopia, which inspired turmoil and revolt in the Middle Ages. In the New York Times op-ed, Harari et al. are selling a Book of A.I. Revelation; they use the language of the guru and the sect leader to talk in sweeping terms. It serves the Silicon Valley operators who control a lot of A.I. development now because it focuses on future fears and not the issues of the urgent now: bias; predictive policing; racist technology; ghost work; hidden labour; the erasure and exploitation of the Global South; inequality; and wild power imbalances.
In an interview for Tyler Cowen’s podcast, Sam Altman of OpenAI, which is behind GPT-4, responded to questions about politics in the Bay Area and his involvement in them by saying:
Well, I will caveat this by saying if you believe what I believe about the timeline to AGI [Artificial General Intelligence, an A.I. capable of doing any task a human can do] and the effect it will have on the world, it is hard to spend a lot of mental cycles thinking about anything else. So I have not thought deeply about what it would take to solve, really, any other problem in the last few years. But I don't feel optimistic, given prior performance, that the Bay Area is going to do the right thing on housing policy.
Effectively, Altman said, yeah, I’m really above worrying about building houses now as I’m pretty sure we will soon have built god. He’s dreaming of KoreshAI; hallucinating in ways that ChatGPT certainly could never dream of doing.
Thanks to JPJH, DKD, TB, SFG, and RD for reading the draft.
And thank you for reading. Please share if you liked it…
… and consider upgrading to help support this newsletter (you’ll get bonus material too):
¹ Artificial neural networks are computing systems inspired by the biological neural networks in human and other animal brains. They are essentially collections of nodes — artificial neurons — modelled on the neurons in a biological brain. Each connection, like a synapse, can send signals to other neurons.
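For anyone curious how small the underlying idea is, here is a minimal sketch of a single artificial neuron, a perceptron, learning the logical AND function. It is an illustration of the node-and-weighted-connection idea described above, not Rosenblatt’s original implementation:

```python
# A toy perceptron: one artificial neuron learning the logical AND function.
# Illustrative only; not Rosenblatt's original code or any production library.

def predict(weights, bias, inputs):
    # The neuron "fires" (outputs 1) if the weighted sum of its inputs clears the threshold.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(examples, epochs=20, learning_rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            # Nudge each connection's weight in proportion to its input and the error.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

and_examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_examples)
print([predict(weights, bias, x) for x, _ in and_examples])  # [0, 0, 0, 1]
```

The ‘learning’ is nothing more than nudging the connection weights whenever the neuron’s output is wrong.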