'A Large Language Model done it and ran away...'
The shaky rise of 'A.I.' is causing the commentariat's dubious doctors to diagnose Britain with entirely the wrong disease.
Previously: KoreshAI
Mini-messiahs and airport book gurus are much more terrifying than A.I.
Calling something A.I. offers a powerful combination of marketing bullshit and ‘frightening’ implications. The coked-up steroidal auto-complete stylings of the ChatGPTs and Midjourneys of this world (the former the source of countless tediously generated first paragraphs in articles about A.I., the latter ‘creator’ of Pope in a Puffy Coat) sound far less impressive when called what they are: Large Language Models (LLMs). They’re electronic parrots[1] with an equally high chance that they will simply repeat abuse and slurs.
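If you want the trick without the fireworks, the underlying principle (guess the next word from the words so far) fits on a page. A toy sketch, mine and hopelessly crude next to the real thing, but the same species of machine:

```python
# A toy "electronic parrot": a bigram model that guesses the next word
# purely from what it has already seen. The corpus and names here are
# invented for illustration; real LLMs do this over billions of documents
# with far subtler statistics, but the principle is the same.
import random
from collections import defaultdict

corpus = ("the parrot repeats what the parrot hears "
          "and the parrot understands nothing").split()

# Record which words have followed which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # nothing ever followed this word in training
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))  # e.g. "the parrot hears and the parrot repeats what"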
A.I. — artificial intelligence — is a phrase that calls to a potentially impossible but mesmerising goal: Building a true thinking machine. The grifters and the credulous share a desire to see the spark of intelligence in the LLM’s regurgitations. They pick up the hairballs vomited out by these compellingly coded cats and comb through them for the genius in the stench. LLMs are impressive in the way fireworks are to a child or Three-Card Monte is to a mark.
The scary story potential of A.I. or things that can be branded as A.I. has the columnists of Britain firing up their powerful flashlights and shoving them under their chins. It’s spooooooky season in the newspapers and A.I. — in collusion with the immigrants, the liberals, and the young — is coming to take your jobs, make a travesty of all you hold dear, and then kill you.
In classic columnist style, it doesn’t matter if the writer doesn’t actually know how the LLMs work; in fact, they simply pretend that nobody understands how ‘A.I.’ works despite a huge amount of literature on the topic and many, many academics and other experts who have been debating and unpicking the ethics of these developments. These columns reveal their writers as little better than an LLM: Regurgitating arguments and frightening lines they’ve heard elsewhere, and grabbing for the most obvious references. As my wife, Dr. Kate Devlin — Reader in A.I. & Society at King’s College London — has put it: Mentioning The Terminator is the A.I. column equivalent of Godwin’s Law.
I’m a human with thoughts and feelings, and subject to repeated bouts of psychic damage, rather than an emotionless, thought-free bot, so I’m not going to go through every awful A.I. column that has entered the discourse recently like raw sewage flooding into Britain’s rivers. Instead, I’ve picked a trio of terrible examples from the past week.
In The Daily Telegraph, Tim Stanley — who I could easily mistake for a wayward A.I. with an algorithmic distaste for builders and obsession with bowties, were it not for the fact that I have met him in person — wrote, under the headline The AI disaster is already here, and we don’t care, that:
If the tech revolution is so great, why has our country become poorer? Why has customer service got worse? Why are citizens with such access to knowledge palpably dumber, and why are our kids – so interconnected and validated – so miserable? The latest figures from Ofcom, the media regulator, show that 97 per cent of 12-year-olds have a mobile phone and 88 per cent have an “online platform profile”. Even more incredible: 21 per cent of three-year-olds have a phone and 13 per cent of them are online.
By no coincidence, Jonathan Haidt, the social psychologist, has published evidence of an epidemic in teenage mental-health problems that took off in the early 2010s, concluding that a critical factor is the prevalence of phone technology. One might argue that anyone who sets up their toddler with an Instagram account isn’t a million miles away from teaching them how to smoke, and that unleashing more AI upon this population of addled junkies only increases the risk to public health (an oddly libertarian move from a Government that won’t even let us enjoy a joint). In my dream world, we’d ban smartphones for the under-18s. Failing that, let’s stick a health warning on the packet.
Stanley is playing cute with that first question; before politics in the UK was hard rebooted by Thatcherism in 1979, the question of why gains in quality of life and productivity are shared so unequally had been well covered for at least a hundred years. In 1867, Marx writes in Capital that the steam engine was “[an antagonist] that enabled the capitalists to tread underfoot the growing demands of workers”, particularly those fighting for limits on the working day. He goes on:
It would be possible to write a whole history of the inventions made… for the sole purpose of providing capital with weapons against working-class revolt.
Many more volumes of that putative history could be (and have been) written since.
Thirteen years later, Henry George — whose philosophy was that people should own the value they produce themselves and that the economic value of land and natural resources should belong to all members of a society — wrote in Progress & Poverty:
The “tramp” comes with the locomotive, and almshouses and prisons are as surely the marks of ‘material progress’ as are costly dwellings, rich warehouses, and magnificent churches. Upon streets, lighted with gas and patrolled by uniformed policemen, beggars wait for the passer-by…
… It is true that wealth has been greatly increased, and that the average comfort, leisure, and refinement has been raised; but these gains are not general. In them, the lowest class do not share. I do not mean that the condition of the lowest class has nowhere nor in anything been improved; but that there is nowhere any improvement which can be credited to increased productive power.
But Stanley’s questions must be rhetorical and entirely answer-free because, if he were to ask them genuinely, he would have to criticise the politics, people, and system which his paper and proprietor have pushed so aggressively and profited from so handsomely. Why has our country become poorer? The wealth is hoarded. Why has customer service got worse? Because corporations — the Telegraph’s truest gods besides Thatcher — have seized every chance to make humans employed to help other humans obsolete.
Stanley’s last question (“Why are our kids — so interconnected and validated — so miserable?”) indicates how thin and surface-level the faith that gains him a regular spot on Radio 4’s religious segment Thought for the Day really is. As the King James Bible has it:
For in much wisdom is much grief: and he that increaseth knowledge increaseth sorrow. (Ecclesiastes 1:18)
My stepdaughter knows more about the world as it is right now than I did when I was 12. My news came daily from TV and radio bulletins at fixed times and from glances at the newspapers; weekly from music magazines and 2000AD (through a fairground mirror of satire); and in delayed fashion from books that were often old and hilariously out of date. The fierce immediacy of now is with her instantly.
Smartphones are a window to how fucked we are but also to community and friendship; to the realisation that you are not so odd and not remotely on your own. It is ridiculous to suggest that children shouldn’t be allowed to look through it. And when Stanley quotes the Ofcom stats, he misquotes them. They actually say:
17 per cent of 3 and 4-year-olds have a phone — not 21 per cent as Stanley says — and it’s highly likely that these are old phones used without a SIM to connect to WiFi at home, with high levels of parental/adult supervision.
39 per cent of that age group use a phone to get online; 78 per cent use a tablet; and 10 per cent use a laptop.
97 per cent of 12 to 15-year-olds have a smartphone (not 97 per cent of 12-year-olds, as Stanley claims) and it’s unsurprising: a lot of homework is delivered online now, and, in the aftermath of the pandemic, a huge amount of socialising takes place or is organised there.
Stanley uses those figures deceptively because he doesn’t care about their real implications or the details behind them. He just needs numbers to put the shits up his readership of the ageing, the angry, and the paranoid.
That need to feed paranoia and grievance explains why Stanley — like so many other columnists — is taken with Jonathan Haidt and his claim that a teenage mental health crisis spiked in 2012 with the explosion of smartphone ownership. It’s exactly the kind of simple answer that op-ed writers adore: It’s neat, shocking, and easily regurgitated by readers at overemotional dinner parties.
In March,
summed up the deception at the heart of Haidt's thesis in his Substack newsletter:
Saying there was no sign of a teen mental health epidemic until around 2012 is the equivalent of looking back to February 2020 when the Diamond Princess cruise ship saw a massive outbreak of coronavirus, ultimately killing more than a dozen people, and declaring there was no sign of an impending viral pandemic.
Haidt has constructed a timeline convenient to his narrative that smart phones/social media are the cause of mental distress among teenagers, but the distress was present long before the ubiquity of social media use…
… Haidt’s claim that there were no signs of an epidemic prior to 2012 - a date which allows him to claim smart phones are a direct cause of the distress, as opposed to a vector through which an existing virus spread - he is simply wrong.
If it is the phones, they opened the gates on a well-documented, pre-existing phenomenon.
Teenagers were depressed, anxious, and self-harming in the 90s when Snake was as sophisticated as phones got. I know because I was there and I was one of them. Back then, it was computer games and rap music that were the folk devils allegedly sending adolescents into a doom spiral.
In Stanley’s imagination — overheated for the benefit of the Telegraph reader — Instagram is a cigarette shoved in a toddler’s mouth; social media is the new heroin and A.I. is a powerful new strain to hook the junkies. As someone who has known a fair number of addicts, I can say with some certainty that the new heroin is the old heroin and that, despite frequent comparisons by right-wing pundits, TikTok has nothing on crack.
It’s almost funny that Stanley is so keen to see a health warning slapped on smartphones; back in January, he was muttering about New Zealand’s policy of banning young people from smoking. But the most telling paragraph comes at the start of his column when he worries:
AI can be surprisingly racist, even nasty. In Belgium, an AI chatbot allegedly encouraged a young father towards taking his own life. The potential for fraud is huge. And Goldman Sachs predicts an enormous productivity gain, which Leonid Brezhnev, even if high on vodka and Quaaludes, would have instantly spotted meant job losses. AI might steal up to 300 million jobs. With its ability to replicate and generate clean prose, it could theoretically replace the entire staff of a newspaper.
Cut the waffle from that paragraph and you get to his true concern:
AI can be surprisingly racist, even nasty… it could theoretically replace the entire staff of a newspaper.
In The Times, William Hague — a man who blazed the trail for unconvincing and disturbing images that seemed computer generated when he wore a baseball cap in the late-90s — chewed his nails and played apocalyptic preacher under the headline World must wake up to speed and scale of AI:
In August 1939, Albert Einstein wrote to President Roosevelt to warn him that “the element uranium may be turned into a new and important source of energy” and that “extremely powerful bombs may thus be constructed”. Given that the first breakthrough towards doing this had been made in Nazi Germany, the United States set out with great urgency to develop atomic bombs, ultimately used against Japan in 1945.
Such was the unsurpassable power of nuclear weapons that, once the science behind them had been discovered, their development could not conceivably be stopped. A race had begun in which it was imperative to be ahead.
Last week, another letter was written about today’s equivalent of the dawn of nuclear science — the rise of artificial intelligence. This was a public letter from 1,100 researchers and experts, including Elon Musk, arguing that “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” In the absence of such planning, they called for a six-month pause in the training of AI systems such as ChatGPT-4, rather than see an “out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”.
Unlike Einstein, who was urging the US to get ahead, these distinguished authors want everyone to slow down, and in a completely rational world that is what we would do. But, very much like the 1940s, that is not going to happen. Is the US, having gone to great trouble to deny China the most advanced semi-conductors necessary for cutting-edge AI, going to voluntarily slow itself down? Is China going to pause in its own urgent effort to compete? Putin observed six years ago that “whoever becomes leader in this sphere will rule the world”. We are now in a race that cannot be stopped.
The ‘Musk’ letter included names that had not actually signed it, notably one Mr William Gates. Meanwhile, critics, including A.I. ethicist Margaret Mitchell, who co-authored the ‘On the Dangers of Stochastic Parrots’ paper cited in the letter, have noted that it’s framed to focus on apocalyptic scenarios rather than immediate concerns like racist and sexist bias. It’s an attempt to make some future Skynet the scary centre of the debate instead of exploitation now. Musk’s foundation is a major donor to the Future of Life Institute, which pushed the letter, and Musk himself was an early backer of OpenAI, which is pushing for dominance.
Mitchell, chief ethics scientist at the A.I. firm Hugging Face, told Reuters:
By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on A.I. that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have.
None of this complexity and self-interest is present in the picture Hague puts to his readers. That’s because he doesn’t know this area. If he did, he might not be leaning so hard on the hyperbole of drawing a straight line from the hydrogen bomb to the ‘Stochastic Parrots’ playing knockoff Pictionary and semi-plausible whisper games.
Hague hasn’t just drunk the Kool-Aid[2], he’s got it pumping straight into his veins via IV. He continues:
Since the advent of Deep Learning by machines about ten years ago, the scale of “training compute” — think of this as the power of AI — has doubled every six months. If that continues, it will take five years, the length of a British parliament, for AI to become a thousand times more powerful. The stately world of making law and policy is about to be overtaken at great speed, as are many other aspects of life, work and what it means to be human when we are no longer the cleverest entity around.
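Credit where it’s due: taken at face value, the arithmetic in that claim does hold up. A back-of-the-envelope check (a sketch of the sum, not an endorsement of the trend):

```python
# Hague's premise: "training compute" doubles every six months.
doublings = 5 * 2      # two doublings a year, over five years
print(2 ** doublings)  # 1024 -- roughly "a thousand times more powerful",
                       # if (and only if) the doubling trend holds
```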
Hague neglects the history. The roots of deep learning stretch back 80 years to Walter Pitts and Warren McCulloch writing the mathematical model for neural networks in 1943. People who know that also know about the A.I. winters and the stop/start nature of development in machine learning. Hague is told that ‘A.I.’ is developing at terrifying speed but what’s missing from that horror story is what A.I. is getting better at. It is becoming a better mimic; a chameleon with access to a wider range of textures and colours. It is not clever because cleverness is a quality that transcends copycatting.
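For a sense of how old and how simple those roots are, the 1943 McCulloch-Pitts neuron fits in a few lines (my rendering in Python; the original paper is pure logic and maths, not code):

```python
# The 1943 McCulloch-Pitts neuron: binary inputs, fixed weights, a threshold.
def mcculloch_pitts(inputs: list[int], weights: list[int], threshold: int) -> int:
    """Fire (output 1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights [1, 1]: a threshold of 2 gives logical AND, 1 gives OR.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1 (both inputs on: AND fires)
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0 (AND does not fire)
print(mcculloch_pitts([1, 0], [1, 1], threshold=1))  # 1 (OR fires)
```

Modern deep learning replaces the hard threshold with trainable, differentiable units stacked in their billions, but the family resemblance is the point: it is arithmetic all the way down, not cognition.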
Hague’s column is a sci-fi fever dream:
The rise of AI is almost certainly one of the two main events of our lifetimes, alongside the acceleration of climate change. It will transform war and geopolitics, change hundreds of millions of jobs beyond recognition, and open up a new age in which the most successful humans will merge their thinking intimately with that of machines. Adapting to this will be an immense challenge for societies and political systems, although it is also an opportunity and — since this is not going to be stopped — an urgent responsibility.
Like the nuclear age heralded by Einstein, the age of AI combines the promise of extraordinary scientific advances with the risk of being an existential threat. It opens the way to medical advances beyond our dreams and might well provide the decisive breakthroughs in new forms of energy. It could be AI that works out how we can save ourselves and the planet from our destructive tendencies, something we are clearly struggling to work out for ourselves.
On the other hand, no one has yet determined how to solve the problem of “alignment” between AI and human values, or which human values those would be. Without that, says the leading US researcher Eliezer Yudkowsky, “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else”.
Columnists love those sweeping sentiments; sentences that are built for a pull quote but which do not survive if you try to pull them apart. Yes, A.I. will change war, but only if we as a species decide not to agree treaties and restrictions on autonomous weaponry. Advanced cyborgs could happen, but only if we choose to take society in that direction. None of these things are inevitable; they can be stopped or constrained, just as chemical weapons and CFCs were.
Yudkowsky is the director of the Machine Intelligence Research Institute, a project founded on the premise that runaway A.I. will destroy humanity. It is in Yudkowsky’s interests, and the interests of his organisation’s funding, to present the most terrifying visions possible and, like the ‘Musk’ letter, to dismiss the prosaic in favour of the apocalyptic.
He told Time:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
“Most” researchers do not think that because most researchers are not sure that “superhumanly smart A.I.” will ever be built.
The third terrible article I want to look at comes from Laura Kuenssberg, who showed a particular BBC kind of arrogance in presenting a Fisher Price-level guide to the coming A.I.-pocalypse. She begins:
What do the Pope's crazy puffa jacket, a student avoiding a parking ticket, a dry government document and Elon Musk warning the robots might come for us have in common?
This is not an April Fool's joke but a genuine question.
The answer is AI - artificial intelligence - two words we are going to hear a lot about in the coming months.
The picture of the Pope in a Michelin-man style white coat was everywhere online but was made using AI by a computer user from Chicago.
In Yorkshire, 22-year-old Millie Houlton asked AI chatbot ChatGPT to "please help me write a letter to the council, they gave me a parking ticket" and sent it off. The computer's version of her appeal successfully got her out of a £60 fine.
Also this week, without much fanfare, the government published draft proposals on how to regulate this emerging technology, while a letter signed by more than 1,000 tech experts including Tesla boss Elon Musk called on the world to press pause on the development of more advanced AI because it poses "profound risks to humanity".
Kuenssberg should be concerned but only because an LLM can already compose a better summary and analysis of the stories she picked.
The Pope In The Puffy Coat was interesting but people were only ‘fooled’ by it to begin with because it was so low stakes. If it had been a more controversial image, they would have been quicker to inspect it carefully. Similarly, the letter that got Millie Houlton out of her parking ticket proves nothing; we don’t know if a letter she wrote herself or one she asked another human to scribble out would have also resulted in a cancelled fine.
Just like Hague, Kuenssberg takes the ‘Musk’ letter at face value; she did less research than an LLM and offered a less compelling conclusion. Then we come to her trying to define terms:
A chatbot is, in its basic form, a computer program that's meant to simulate a conversation you might have with a human on the internet - like when you type a question to ask for help with a booking. The launch, and explosion of a much more advanced one, ChatGPT, has got tongues wagging in recent months
Artificial Intelligence in its most simple form is technology that allows a computer to think or act in a more human way
That includes machine learning when, through experience, computers can learn what to do without being given explicit instructions
A well-trained chatbot would probably have avoided the “tongues wagging” cliché. Her definition of A.I. gives the reader the impression that “a computer [thinks] or [acts]” like a human; ChatGPT frequently reminds users that it is not thinking and is not human. Similarly, her explanation of “machine learning” is reductive.
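To be a little less reductive about it: ‘learning without explicit instructions’ means the programmer supplies examples and a fitting procedure rather than a rule for the task itself. A toy sketch, with numbers invented purely for illustration:

```python
# No rule for the task is written anywhere below; the program is given
# example (input, output) pairs and fits a parameter to them.
examples = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]

# Fit y = w * x by ordinary least squares (closed form for one parameter).
num = sum(x * y for x, y in examples)
den = sum(x * x for x, _ in examples)
w = num / den

print(f"learned weight: {w:.2f}")            # ~2.0, recovered from the data alone
print(f"prediction for x = 5: {w * 5:.1f}")  # ~10: the 'rule' was learned, not written
```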
Kuenssberg goes on to indulge one of her guest’s sci-fi fantasies:
Estonian billionaire Jaan Tallinn is one of them. He was one of the brains behind internet communication app Skype but is now one of the leading voices trying to put the brakes on.
I asked him, in an interview for this Sunday's show, to explain the threat as simply as he could.
"Imagine if you substitute human civilisation with AI civilisation," he told me. "Civilisation that could potentially run millions of times faster than humans... so like, imagine global warming was sped up a million times. One big vector of existential risk is that we are going to lose control over our environment. Once we have AIs that we a) cannot stop and b) are smart enough to do things like geoengineering, build their own structures, build their own AIs, then, what's going to happen to their environment, the environment that we critically need for our survival? It's up in the air."
And if governments don't act? Mr Tallinn thinks it's possible to "apply the existing technology, regulation, knowledge and regulatory frameworks" to the current generation of AI, but says the "big worry" is letting the technology race ahead without society adapting: "Then we are in a lot of trouble."
It's worth noting they are not saying they want to put a stop to the lot but pause the high-end work that is training computers to be ever smarter and more like us.
Tallinn is an investor in A.I. but he is not an expert in it. Questions that were missing from Kuenssberg’s interview: Why would we allow A.I. to build its own civilisation and how exactly would that be possible? Why would we give A.I. the ability to geoengineer without human input and what do you mean by geoengineering? Why wouldn’t you introduce safeguards to prevent the A.I. from building its own A.I.?
Asimov sketched out laws of robotics — however flawed they might be — back in 1942. The first three are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
These questions are not new and neither are humanity’s attempts to answer them. If you read a columnist out to terrify you about some future horror but who is unwilling to talk about the issues right now — those affecting the most vulnerable — ask another question:
Are they trying to distract me from what they and their friends are doing? Did a big language model do it and run away?
[1] Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922
[2] Yes, I know it was actually Flavor-Aid at Jonestown.