On a recent This Machine Kills episode, guest Hagen Blix described the ultimate form of "AI therapy" with a "human in the loop":
soundcloud.com/thismachinekill…
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
pluralistic.net/2025/05/27/ran…
1/
405. AI is the Demon God of Capital (ft. Hagen Blix)
We chat with linguist and cognitive scientist Hagen Blix about his new book *Why We Fear AI* (co-authored with computer scientist Ingeborg Glimmer) about how the technical qualities of AI …
Cory Doctorow
> One actual therapist is just having ten ChatGPT windows open, where they have five seconds to interrupt the ChatGPT. They have to scan them all and see if it says something really inappropriate. That's your job: to stop it.
2/
Blix admits that's not where therapy is at...yet, but he references Laura Preston's 2023 *n+1* essay, "HUMAN_FALLBACK," which describes her stint as a backstop to a real-estate "virtual assistant" that masqueraded as a human, handling the queries that confused it in a bid to keep customers from figuring out that they were engaging with a chatbot:
nplusonemag.com/issue-44/essay…
3/
HUMAN_FALLBACK | Laura Preston
n+1
This is what makes investors and bosses slobber so hard for AI - a "productivity" boost that arises from taking away the bargaining power of workers so that they can be made to labor under worse conditions for less money.
4/
The efficiency gains of automation aren't just about using fewer workers to achieve the same output - they're also about the fact that the workers you fire in the process can be used as a threat against the remaining workers: "Do your job and shut up or I'll fire you and give your job to one of your former colleagues who's now on the breadline."
5/
This has been at the heart of labor fights over automation since the Industrial Revolution, when skilled textile workers took up the Luddite cause because their bosses wanted to fire them and replace them with child workers snatched from Napoleonic War orphanages:
pluralistic.net/2023/09/26/eno…
Textile automation wasn't just about producing *more* cloth - it was about producing *cheaper, worse cloth*.
6/
The new machines were so easy a child could use them, because that's who *was* using them - kidnapped war orphans. The adult textile workers the machines displaced weren't afraid of technology. Far from it! Weavers used the most advanced machinery of the day, and apprenticed for seven years to learn how to operate it. Luddites had the equivalent of a Master's in Engineering from MIT.
7/
Weavers' guilds presented two problems for their bosses: first, they had enormous power, thanks to the extensive training required to operate their looms; and second, they used that power to regulate the quality of the goods they made. Even before the Industrial Revolution, weavers *could* have produced more cloth at lower prices by skimping on quality, but they refused, out of principle, because their work mattered to them.
8/
Now, of course weavers also appreciated the value of their products, and understood that innovations that would allow them to increase their productivity and make more fabric at lower prices would be good for the world. They weren't snobs who thought that only the wealthy should go clothed. Weavers had continuously adopted numerous innovations, each of which increased the productivity *and* the quality of their wares.
9/
Long before the Luddite uprising, weavers had petitioned factory owners and Parliament under the laws that guaranteed the guilds the right to oversee textile automation to ensure that it didn't come at the price of worker power *or* the quality of the textiles the machines produced.
10/
But the factory owners and their investors had captured Parliament, which ignored its own laws and did nothing as the "dark, Satanic mills" proliferated. Luddites only turned to property destruction *after* the system failed them.
Now, it's true that *eventually*, the machines improved and the fabric they turned out matched and exceeded the quality of the fabric that preceded the Industrial Revolution.
11/
But there's nothing about the *way* the Industrial Revolution unfolded - increasing the power of capital to pay workers less and treat them worse while flooding the market with inferior products - that was necessary or beneficial to that progress. Every other innovation in textile production up until that time had been undertaken with the cooperation of the guilds, who'd ensured that "progress" meant better lives for workers, better products for consumers, *and* lower prices.
12/
If the Luddites' demands for co-determination in the Industrial Revolution had been met, we might have gotten to the same world of superior products at lower costs, but *without* the immiseration of generations of workers, mass killings to suppress worker uprisings, and decades of defective products being foisted on the public.
13/
So there are two stories about automation and labor: in the dominant narrative, workers are afraid of the automation that delivers benefits to all of us, stand in the way of progress, and get steamrollered for their own good, as well as ours.
14/
In the other narrative, workers are glad to have the boring and dangerous parts of their work automated away, happy to produce more high-quality goods and services, and ready to assess and plan the rollout of new tools. When workers object to automation, it's because they see it being used to crush them and degrade the outputs they care *about*, at the expense of the customers they care *for*.
15/
In modern automation/labor theory, this debate is framed in terms of "centaurs" (humans who are assisted by technology) and "reverse-centaurs" (humans who are conscripted to assist technology):
pluralistic.net/2023/04/12/alg…
There are plenty of workers who are excited at the thought of using AI tools to relieve them of some drudgework. To the extent that these workers have power over their bosses and their working conditions, that excitement might well be justified.
16/
I hear a lot from programmers who work on their own projects about how nice it is to have a kind of hypertrophied macro system that can generate and tweak little automated tools on the fly so the humans can focus on the real, chewy challenges. Those workers are the centaurs, and it's no wonder that they're excited about improved tooling.
17/
But the reverse-centaur version is a lot darker. The reverse-centaur coder is an assistant *to* the AI, charged with being a "human in the loop" who reviews the material that the AI produces. This is a pretty terrible job to have.
For starters, the kinds of mistakes that AI coders make are the *hardest* mistakes for human reviewers to catch.
18/
That's because LLMs are statistical prediction machines, spicy autocomplete that works by ingesting and analyzing a vast corpus of written materials and then producing outputs that represent a series of plausible guesses about which words should follow one another. To the extent that the reality the AI is participating in is statistically smooth and predictable, AI can often make eerily good guesses at words that turn into sentences or code that slot well into that reality.
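To make the "spicy autocomplete" framing concrete, here's a toy sketch (in Python, over a made-up miniature corpus) of the statistical core of the idea: count which word most often follows each word, then emit the most plausible continuation. Real LLMs are neural networks over tokens at vastly larger scale, but the guess-what-usually-comes-next principle is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most plausible continuation seen in training."""
    return follows[word].most_common(1)[0][0]

# A miniature "training corpus" (entirely made up for illustration):
model = train_bigrams("the cat sat on the mat and the cat sat on the hat")
print(predict_next(model, "cat"))  # "sat" - that's what followed "cat" every time
```

Where the corpus is smooth and regular, this guesser looks eerily competent; where the corpus is lumpy, it confidently produces the *regular* answer whether or not it's the *right* one - which is exactly the failure mode described below.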
19/
But where reality is lumpy and irregular, AI stumbles. AI is intrinsically conservative. As a statistically informed guessing program, it wants the future to be like the past:
reallifemag.com/the-apophenic-…
This means that AI coders stumble wherever the world contains rough patches and snags. Take "slopsquatting." For the most part, software libraries follow regular naming conventions.
20/
The Apophenic Machine — Real Life
For example, there might be a series of text-handling libraries with names like "text.parsing.docx," "text.parsing.xml," and "text.parsing.markdown." But for some reason - maybe two different projects were merged, or maybe someone was just inattentive - there's also a library called "text.txt.parsing" (instead of "text.parsing.txt").
21/
AI coders do inference based on statistical analysis, and anyone inferring the name of the .txt parsing library would guess, based on the other libraries, that it was "text.parsing.txt." That's what the AI guesses, and so it tries to import that library into its software projects.
22/
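A minimal sketch of a defense - in Python, using the text's hypothetical library names (none of these packages are real) - is a dependency-vetting step that flags any suggested import that isn't on the project's allowlist:

```python
# Hypothetical example, reusing the made-up library names from the text.
# The irregularly named "text.txt.parsing" is the genuine library here;
# "text.parsing.txt" is the regular-looking name an AI is likely to guess.

KNOWN_GOOD = {
    "text.parsing.docx",
    "text.parsing.xml",
    "text.parsing.markdown",
    "text.txt.parsing",  # irregular, but real
}

def vet_dependencies(suggested):
    """Flag suggested dependencies that aren't on the project allowlist -
    candidates for hallucinated (and possibly squatted) package names."""
    return [name for name in suggested if name not in KNOWN_GOOD]

print(vet_dependencies(["text.parsing.docx", "text.parsing.txt"]))
# ['text.parsing.txt'] - plausible-looking, but nonexistent
```

The check has to happen *before* installation: an attacker who registers the plausible-but-wrong name on a public package index gets their code pulled in automatically, which is the vulnerability described next.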
This creates a new security vulnerability, "slopsquatting," in which a malicious actor creates a library with the expected name, which replicates the functionality of the real library, but also contains malicious code:
theregister.com/2025/04/12/ai_…
23/
AI can't stop making up software dependencies and sabotaging everything
Thomas Claburn (The Register)
Note that slopsquatting errors are extremely hard to spot. As is typical with AI coding errors, these are errors that are based on continuing a historical pattern, which is the sort of thing our own brains do all the time (think of trying to go up a step that isn't there after climbing to the top of a staircase). Notably, these are very *different* from the errors that a beginning programmer whose work is being reviewed by a more senior coder might make.
24/
These are the very hardest errors for humans to spot, and these are the errors that AIs make the most, and they do so at machine speed:
pluralistic.net/2024/04/23/max…
To be a human in the loop for an AI coder, a programmer must engage in sustained, careful, line-by-line and command-by-command scrutiny of the code. This is the *hardest* kind of code to review, and maintaining robotic vigilance over long periods at high speeds is something humans are very bad at.
25/
Indeed, it's the kind of task we try very hard to automate, since machines are much better at being machinelike than humans are. This is the essence of reverse-centaurism: a human expected to act like a machine in order to help the machine do something it can't do.
26/
Humans routinely fail at spotting these errors, unsurprisingly. If the purpose of automation is to make superior goods at lower prices, then this would be a real concern, since a reverse-centaur coding arrangement is bound to produce code with lurking, pernicious, especially hard-to-spot bugs that present serious risks to users.
27/
But if the purpose of automation is to discipline labor - to force coders to accept worse conditions and pay - irrespective of the impact on quality, then AI is the perfect tool for the job. The point of the human isn't to catch the AI's errors so much as it is to catch *the blame* for the AI's errors - to be what Madeleine Clare Elish calls a "moral crumple zone":
estsjournal.org/index.php/ests…
28/
Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction
As has been the case since the Industrial Revolution, the project of automation isn't just about increasing productivity, it's about weakening labor power as a prelude to lowering quality. Take what's happened to the news industry, where mass layoffs are being offset by AI tools. At Hearst's King Features Syndicate, a *single writer* was charged with producing over *30* summer guides, the entire package:
404media.co/viral-ai-generated…
29/
Viral AI-Generated Summer Guide Printed by Chicago Sun-Times Was Made by Magazine Giant Hearst
Jason Koebler (404 Media)
That is an *impossible* task, which is why the writer turned to AI to do his homework, and then, infamously, published a "summer reading guide" that was full of nonexistent books that were hallucinated by a chatbot:
404media.co/chicago-sun-times-…
Most people reacted to this story as a *consumer* issue: they were outraged that the world was having a defective product foisted upon it.
30/
Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist
Jason Koebler (404 Media)
But the consumer issue here is downstream from the *labor* issue: when the writers at King Features Syndicate are turned into reverse-centaurs, they will *inevitably* produce defective outputs. The point of the worker - the "human in the loop" - isn't to supervise the AI, it's to take the blame for the AI. That's just what happened, as this poor schmuck absorbed an internet-sized rasher of shit flung his way by outraged social media users.
31/
After all, it was his byline on the story, not the chatbot's. He's the moral crumple-zone.
The implication of this is that consumers and workers are class allies in the automation wars. The point of using automation to weaken labor isn't just cheaper products - it's cheaper, *defective* products, inflicted on the unsuspecting and defenseless public who are no longer protected by workers' professionalism and pride in their jobs.
32/
That's what's going on at Duolingo, where CEO Luis von Ahn created a firestorm by announcing mass firings of human language instructors, who would be replaced by AI. The "AI first" announcement pissed off Duolingo's workers, of course, but what caught von Ahn off-guard was how much this pissed off Duolingo's *users*:
tech.slashdot.org/story/25/05/…
33/
Duolingo Faces Massive Social Media Backlash After 'AI-First' Comments - Slashdot
But of course, this makes perfect sense. After all, language-learners are literally incapable of spotting errors in the AI instruction they receive. If you spoke the language well enough to spot the AI's mistakes, you wouldn't need Duolingo!
34/
I don't doubt that there are countless ways in which AIs could benefit both language learners and the Duolingo workers who develop instructional materials, but for that to happen, workers' and learners' needs will have to be the focus of AI integration. Centaurs could produce great language learning materials with AI - but reverse-centaurs can only produce slop.
35/
Unsurprisingly, many of the most successful AI products are "bossware" tools that let employers monitor and discipline workers who've been reverse-centaurized. Both blue-collar and white-collar workplaces have filled up with "electronic whips" that monitor and evaluate performance:
pluralistic.net/2024/08/02/des…
AI can give bosses "dashboards" that tell them which Amazon delivery drivers operate their vehicles with their mouths open (Amazon doesn't let its drivers sing on the job).
36/
Meanwhile, a German company called Celonis will sell your boss a kind of AI phrenology tool that assesses your "emotional quality" by spying on you while you work:
crackedlabs.org/en/data-work/p…
Tech firms were among the first and most aggressive adopters of AI-based electronic whips. But these whips weren't used on coders - they were reserved for tech's vast blue-collar and contractor workforce: clickworkers, gig workers, warehouse workers, AI data-labelers and delivery drivers.
37/
Monitoring, Streamlining and Reorganizing Work with Digital Technology
Cracked Labs
Tech bosses tormented these workers but pampered their coders. That wasn't out of any sentimental attachment to tech workers. Rather, tech bosses were *afraid* of tech workers, because tech workers possess a rare set of skills that can be harnessed by tech firms to produce *gigantic* returns.
38/
Tech workers have historically been princes of labor, able to command high salaries and deferential treatment from their bosses (think of the amazing tech "campus" perks), because their scarcity gave them power.
It's easy to predict how tech bosses would treat tech workers if they could get away with it - just look how they treat workers they aren't afraid of.
39/
Just like the textile mill owners of the Industrial Revolution, the thing that excites tech bosses about AI is the possibility of cutting off a group of powerful workers at the knees. After all, it took more than a century for strong labor unions to match the power that the pre-Industrial Revolution guilds had.
40/
If AI can crush the power of tech workers, it might buy tech bosses a century of free rein to shift value from their workforce to their investors, while also doing away with pesky Tron-pilled workers who believe they have a moral obligation to "fight for the user."
William Gibson famously wrote, "The future is here, it's just not evenly distributed." The workers that tech bosses don't fear are living in the future of the workers that tech bosses can't easily replace.
41/
This week, the *New York Times*'s veteran Amazon labor reporter Noam Scheiber published a deeply reported piece about the experience of coders at Amazon in the age of AI:
nytimes.com/2025/05/25/busines…
Amazon CEO Andy Jassy is palpably horny for AI coders, evidenced by investor memos boasting of AI's returns in "productivity and cost avoidance" and pronouncements about AI saving "the equivalent of 4,500 developer-years":
linkedin.com/posts/andy-jassy-…
42/
How Amazon Q automates software development with GenAI | Andy Jassy posted on the topic | LinkedIn
Amazon is among the most notorious abusers of blue-collar labor, the workplace where everyone who doesn't have a bullshit laptop job is expected to piss in bottles and spend an unpaid hour before and after work going through a bag- and body-search. Amazon's blue-collar workers are under continuous, totalizing, judging AI scrutiny that scores them based on whether their eyeballs are correctly oriented, whether they take too long to pick up an object, whether they pee too often.
43/
Amazon warehouse workers are injured at three times the national average. Amazon AIs scan social media for disgruntled workers talking about unions, and Amazon has another AI tool that predicts which shops and departments are most likely to unionize.
Scheiber's piece describes what it's like to be an Amazon tech worker who's getting the reverse-centaur treatment that has heretofore been reserved for warehouse workers and drivers.
44/
They describe "speedups" in which they are moved from writing code to reviewing AI code, their jobs transformed from solving chewy intellectual puzzles to racing to spot hard-to-find AI coding errors as a clock ticks down. Amazon bosses haven't ordered their tech workers to use AI, just raised their quotas to a level that can't be attained without getting an AI to do most of the work.
45/
Just like the *Chicago Sun-Times* writer who was expected to write all 30 articles in the summer guide package on his own. No one made him use AI, but he wasn't going to produce 30 articles on deadline without a chatbot.
Amazon insists that it is treating AI as an assistant for its coders, but the actual working conditions make it clear that this is a reverse-centaur transformation.
46/
Scheiber discusses a dissident internal group called Amazon Employees for Climate Justice, who link the company's use of AI to its carbon footprint. Beyond those climate concerns, these workers are treating AI as a *labor* issue.
Amazon's coders have been making tentative gestures of solidarity towards its blue-collar workforce since the pandemic broke out, walking out in support of striking warehouse workers (and getting fired for doing so):
pluralistic.net/2020/04/14/abo…
47/
Pluralistic: 14 Apr 2020 – Pluralistic: Daily links from Cory Doctorow
But those firings haven't deterred Amazon's tech workers from making common cause with their comrades on the shop floor:
pluralistic.net/2021/01/19/dea…
When techies describe their experience of AI, it sometimes sounds like they're describing two completely different realities - and that's because they *are*. For workers with power and control, automation turns them into centaurs, who get to use AI tools to improve their work-lives.
48/
Pluralistic: 19 Jan 2021 – Pluralistic: Daily links from Cory Doctorow
For workers whose power is waning, AI is a tool for reverse-centaurism, an electronic whip that pushes them to work at superhuman speeds. And when they fail, these workers become "moral crumple zones," absorbing the blame for the defective products their bosses pushed out in order to goose profits.
As ever, what a technology *does* pales in comparison to who it does it *for* and who it does it *to*.
49/
HEY #SEATTLE! I'm appearing at the Cascade PBS Ideas Festival NEXT SATURDAY (May 31) with the folks from NPR's On The Media!
cascadepbs.org/festival/speake…
50/
Cory Doctorow at Cascade PBS Ideas Festival
Image:
Cryteria (modified)
commons.wikimedia.org/wiki/Fil…
CC BY 3.0
creativecommons.org/licenses/b…
eof/
File:HAL9000.svg - Wikimedia Commons
Cory Doctorow, in reply to H.Lunke & Socke:
@HLunke You have fallen prey to the most enduring piece of misinformation about how Mastodon works. That's not what "not listed" does. More here (including how to handle thread-reading, and where to get my posts if you'd rather not follow me on Mastodon):
pluralistic.net/2023/04/16/how…
How To Make the Least-Worst Mastodon Threads – Pluralistic: Daily links from Cory Doctorow