AI;DR: Journalism's quislings for AI and why I'll never use AI to write this newsletter
The future of journalism doesn't have to be AI-generated; we can and should resist.
If you enjoy this newsletter, please consider getting a paid subscription. I don’t push this often and tend to make most editions free, but without paid subscribers, it wouldn’t be possible to keep going. Thank you for reading.
We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art, and very often in our art, the art of words.
– Ursula K. Le Guin, accepting the National Book Foundation Medal for Distinguished Contribution to American Letters in 2014.
This is the 1030th edition of this newsletter. In this and the 1029 previous instalments, I have not used AI to compose a single sentence, and I never will. I ask people to pay for my writing and I understand that an intrinsic part of that bargain is that it is my writing, for good and for ill. Writing is thinking. I don’t want to outsource that to an autocorrect machine with notions. Generative AI produces the intellectual equivalent of pink slime: mechanically reclaimed chicken shaped into breaded nuggets.
Why bother reading something that someone else couldn’t be bothered to write? What value is there in the cut-up hostage note creation of an LLM? It tells you nothing about how someone else thinks and feels about a subject. Anyone who calls themselves a writer and then serves up AI output under their name has contempt for their readers and very little respect for themselves. They’re also a de facto collaborator with a Vichy regime chipping away at human creativity in service of an industry hastening the literal enshittification of our world. AI is a thirsty demon drinking up billions of gallons of pure water in the service of curling out slop.
I should have written this particular note some time ago, but I was kicked into action after reading an op-ed by Chris Quinn, the editor of Cleveland.com and The Cleveland Plain Dealer, castigating a journalism grad for rejecting a job when she realised she would not actually be writing anything. He — or possibly an LLM lackey — wrote:
A college student withdrew from consideration for a reporting role in our newsroom this week because of how we use artificial intelligence.
It reminded me again how college journalism programs are failing to prepare students for the workforce…
… Like many students we’ve spoken with in the past year, this one had been told repeatedly by professors that AI is bad. We heard the same thing at the National Association of Black Journalists convention in Cleveland in August. Student after student said it.
That’s backwards — and it seriously handicaps them as they begin their careers. I’ve written extensively about how we use AI to do more and better work. It has quickly become critical to everything we do, and to our success.
That was grim enough — god forbid that young journalists should have principles — but a sentence that followed soon afterwards was what really set me off:
By removing writing from reporters’ workloads, we’ve effectively freed up an extra workday for them each week.
Again, writing is thinking. Taking the writing out of reporting is perverse. Writing is the point at which you knit together what you’ve found out and work out what it means. It’s in the writing that you find the spine of a story.
The Cleveland.com/Plain Dealer newsroom is, as Quinn tells it, putting information into an LLM that produces drafts which are then fact-checked, reviewed by editors, and then looked over by reporters who get “the final say”. I don’t believe him, but even if I did, it would be a crappy way of doing things. “Humans — not AI — control every step,” he writes. But that’s not true. The way stories are shaped, the extent of the vocabulary, and their structure will be defined by the AI that produces the drafts.
Quinn continues:
Artificial intelligence is not bad for newsrooms. It’s the future of them. It already allows us to be faster, more thorough and more comprehensible. It frees time for what matters most: gathering facts and developing stories to serve you.
This is just AI companies’ marketing bullshit regurgitated by an editor with an interest in cutting costs at a company with a long history of union busting. AI is the ultimate union-busting tool. The AI doesn’t ask for a pay rise, need maternity leave, or object to blatant political interference from the proprietor or the advertisers.
We know generative AI lies. Calling those lies “hallucinations” is a bit of fairytale marketing spin on behalf of the AI industry. Telling newspaper readers that AI is the author of your stories is showing that audience outright contempt. It’s the same brand of contempt Quinn shows towards young journalists who actually want to write the stories they report. Don’t they know they should be happy to do data collection and entry for their new LLM overlords instead?
Readers are not remotely as stupid as these news executives think they are. They can tell a good story, well told, by a human being from the generic slop served up by generative AI. LLM content farms suit companies that want to do things on the cheap, but they don’t serve readers.
In my book Breaking, I try to push the analogy of AI journalism as ultra-processed food and human-focused, human-created journalism as organic produce. I still like that distinction, but I don’t think I went far enough. AI journalism is poison; it will lead to an even more diseased public conversation and a further sickening of society. There are small amounts of arsenic in our daily diets, but no one suggests an all-arsenic diet is the future of nutrition.
Thanks for reading. Please think about sharing this edition…
… and, if you haven’t yet, consider upgrading to a paid subscription.
You can also buy a t-shirt if you’d like to make a one-off contribution. My book, Breaking: How the Media Works, When it Doesn’t, and Why it Matters, is out now.


Bravo Mic, well said.
There's another aspect to this which you'd imagine journalists and publishers would want to consider, which is that lobbing stuff into a public LLM is a form of publication. Presumably the Cleveland Plain Dealer is using a business-specific LLM, one confined to that workplace, where nothing fed into it reaches the public model. But if not, all those story ideas, quotes and research materials that its staff (presumably; one hopes they're not insisting freelances work this way too) are putting into it become grist to the mill for the public version of the chatbot. Samsung infamously found this out the hard way a couple of years ago, when employees dropped proprietary semiconductor design info into ChatGPT and the company reckoned the resulting leak of trade secrets cost it over $60 million.

Obviously, someone would need to ask the right questions to get the LLM to republish the relevant information, but if a journalist puts material into a public LLM then they've basically just trashed any hope of their story being in any way exclusive.

Similarly, anyone using generative LLM-type tools to transcribe interview audio needs to make sure that the terms and conditions they've signed up to don't allow the LLM company to use that audio and the transcription output to "train" the public model. The risk there is not just blowing your exclusivity, but also potentially breaching data protection regulations and, if the interview audio includes any moments where you agreed the interviewee was speaking off the record, compromising your own relationship with your source as well as failing to honour any expectation of privacy they thought they'd agreed with you.