
Widely cited AI labor research ignores the most important thing AI is doing: Killing the human internet. #AI #AISlop


AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet


Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic’s paper, called “Labor market impacts of AI: A new measure and early evidence,” essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job’s tasks “are theoretically possible with AI,” which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW’s Phillip Bump and discussed in a thread by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.)

In his thread, Mims makes the case that the “theoretical capability” of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily,” the researchers write. This is based in part on the “Anthropic Economic Index,” which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include “Complete humanities and social science academic assignments across multiple disciplines,” “Draft and revise professional workplace correspondence and business communications,” and “Build, debug, and customize web applications and websites.”

Not included in any of Anthropic’s research are extremely popular uses of AI such as “create AI porn” and “create AI slop and spam.” These uses are destroying discoverability on the internet and causing cascading societal and economic harms. Researchers appear to be too squeamish or too embarrassed to grapple with the fact that people love to use AI to make porn, and people love to use AI to spam social media and the internet, inherently causing economic harm to creators, adult performers, journalists, musicians, writers, artists, website owners, small businesses, etc. As Emanuel wrote in our first 404 Media Generative AI Market Analysis, people love to cum, and many of the most popular generative AI websites have an explicit focus on AI porn and the creation of nonconsensual AI porn. Anthropic’s research continues a time-honored tradition among AI companies, which want to highlight the “good” uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for. (It may be the case that people are disproportionately using Claude at a higher rate for more traditional work applications, but any study on the “labor market impacts of AI” should not focus on the uses of one single tool and extrapolate that out to every other tool. For what it’s worth, jailbroken versions of Claude are very popular among sexbot enthusiasts.)

Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth.

Anthropic’s paper does attempt to estimate what the effect of AI will be on “arts and media,” but again, the way the researchers do this is by attempting to decide whether AI can directly do the tasks that AI researchers assume are required by someone with a job in “arts and media.” Other widely-cited papers about AI-related job loss also do not really attempt to consider the potential macro impacts of the ongoing deadening and zombification of the internet, and instead focus on “AI exposure,” which is largely an attempt to predict or measure whether an AI or LLM could directly replace specific tasks that people need to do. Widely-cited papers from the National Bureau of Economic Research and Brookings released over the last several months attempt to determine the adaptability of workers in specific sectors to having many of their tasks automated by AI. The Brookings paper, at least, mentions the possibility of a society-wide shift that is impossible to predict: “the evidence underlying the adaptive capacity estimates here is derived primarily from observed effects in localized displacement events, rather than from large-scale employment shifts across occupations. As a result, the index may be most informative when displacement is relatively isolated—for example, when a worker loses their job but related occupations remain stable. In scenarios in which AI affects clusters of related occupations simultaneously, structural job availability may matter more than individual-level characteristics. Moreover, if AI fundamentally transforms the economy on a scale comparable to the industrial revolution (as some experts have suggested could be possible), it could make entire skill sets redundant across several occupations simultaneously.”

To be clear, AI-driven job loss is a critical thing to study and to consider. But many, many jobs, side hustles, and economic activity more broadly rely on “the internet,” or social media broadly defined. Study after study shows that Google is getting worse, traffic to websites is down, and an increasing amount of both web traffic and web content is being generated by AI and bots. Anecdotally, creators and influencers have told us it’s getting harder to compete with AI slop and harder to justify spending days or weeks making content just to publish it onto platforms where their AI competitors can brute force the recommendation algorithms effortlessly. We have heard from websites that have had to lay people off or shut down because Google’s AI overviews have destroyed their web traffic or because they lose out on search engine rankings to AI slop. Authors are regularly competing with AI plagiarized versions of their books on Amazon, and Spotify is getting overrun with AI-generated music, too.

This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media. We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.




A look at “necromemetics” and the meme economy in the aftermath of violence. #memes #AISlop


How Right Wing Influencers Used AI Slop to Turn Renee Good Into a Meme


After being shot and killed by ICE agent Jonathan Ross in Minneapolis earlier this month, Renee Good instantly became a symbol of anti-ICE sentiments among protestors. Raw bystander footage of her death quickly spread online. A day after her murder, an angle from Ross’s phone camera also spread, sparking even more images, memes, protest signage and art based on her last moments.

However, Good's likeness has also entered a humiliation and harassment campaign involving AI image edits and crude Photoshops of her face, in a process dubbed "Reneeification" by some online. We’ve seen this before: The trend comes soon after the rise of "Kirkification,” where people faceswapped the late right-wing political influencer Charlie Kirk's face onto innumerable images after his assassination in September 2025. Images of George Floyd, who was killed by Minneapolis police in 2020, also underwent a similar bastardization after his death. We could see it soon following the death of Alex Pretti, who ICE agents murdered in Minneapolis this weekend; Pretti is already the target of smear campaigns.

The making of a martyr in the 2020s, regardless of political affiliation, is increasingly tied to this humiliation process, which aims to tarnish the victim's legacy by reducing their likeness to a memetic punchline. It's a process that has only been accelerated by generative AI, and other factors such as "meme coin" cryptocurrencies, which monetize the shock and outrage bait.

In a post-Kirkified world, this impulse to bastardize Good's image after her death emerged immediately on mainstream social media, boosted by influential right-wing influencers, and mutated alongside the rapid spread of misinformation about her. It's an unfortunate tangling that’s likely to be repeated, as political and state-mandated violence becomes more normalized. Even before widespread generative AI and Kirk's death, the "Trayvoning" trend, which mocked the death of Trayvon Martin, a teenager who was shot and killed in 2012 by George Zimmerman, generated outrage and clicks. It involved people posting photos of themselves in Martin's death pose, wearing a black hoodie, with his dropped convenience store snacks splayed on the ground.

ICE’s Facial Recognition App Misidentified a Woman. Twice
In testimony from a CBP official obtained by 404 Media, the official described how Mobile Fortify returned two different names after scanning a woman’s face during an immigration raid. ICE has said the app’s results are a “definitive” determination of someone’s immigration status.
404 Media, Joseph Cox


AI makes all of this easier and faster, omitting the slow, arduous process of Photoshop artistry. And in the scramble to make the fastest, most viral meme, people latch onto and spread misinformation in their rush to denigrate the dead for engagement. We can see this in how an image misidentified as Good became the main source material for numerous "Reneeified" memes. In one popular example, shared by right-wing author and journalist Matt Forney, the image of a woman who isn’t Good is depicted as a fountain. It's based on an AI "Kirkified" meme from September 2025, in which Kirk's fatal neck wound is seen as the structure's water source.

In the "Reneeified" remake, the water drips from nowhere. Perhaps the flow represents tears streaming down her face, but that's a generous assumption. The AI-image glaze and lack of an anatomically accurate wound strip it of a vital punchline. "Congratulations to Renee Nicole Good on four hours of sobriety!" is what Forney captioned the image. This is reminiscent of misinformation about George Floyd being high on fentanyl when he died. The memes about Floyd were meant to dehumanize him in the most callous ways possible, mixing extreme racism with vile antisemitism that attempted to mock a tragic event.

Forney's tweet is a Frankenstein's monster of a meme, mashing cruel jokes about Good, Kirk, and Floyd, stitched together with bad info and half-baked AI slop that both discredits and dilutes its goal.

Ruby Justice Thelot, an adjunct professor of Integrated Design and Media at New York University, was not surprised by the proliferation of a misidentified Good in early memes or the mixing of such symbolic deaths in Forney’s post. In fact, the situation serves what Thelot has called “necromemetics” in his 2024 essay published by Do Not Research. The term riffs on political theorist Achille Mbembe’s sociopolitical theory of necropolitics.

“We’re lured to do this, we’re lured to remix, we’re lured to memeify."


“What I call necromemetics is the ability to confer symbolic death to an individual through the circulation of digital memes, images, or videos,” Thelot told me in a call. “When I think of the Reneeification, when I think about the symbolic death that’s been conferred onto her and her likeness, I think about how that image, those videos, function as modes, as tools of separation and not unification… It is essentially a tool of conflict.”

Regardless, ICE wants the videos of their arrests to “flood the airwaves,” according to internal communications reviewed by The Washington Post. At the same time, Minnesota Governor Tim Walz recently urged civilians to “peacefully film ICE agents as they conduct these activities” in a televised address. There is no shortage of new footage coming out of Minnesota from all sides.

The misidentification of another woman as Good has also played a role in the packaging of "meme coin" cryptocurrencies minted on Solana via Pump.fun. People are using the image of the misidentified woman faceswapped onto George Floyd for several coins on the site already.

‘ELITE’: The Palantir App ICE Uses to Find Neighborhoods to Raid
Internal ICE material and testimony from an official obtained by 404 Media provides the clearest link yet between the technological infrastructure Palantir is building for ICE and the agency’s activities on the ground.
404 Media, Joseph Cox


People making meme coins are also using the name "George Foid," along with the incorrect image. It's a play on "George Floyd," using the derogatory incel slang term "foid," a shortening of "femoid," a portmanteau of "female" and "android" that implies women are robotically unintelligent.

The nickname is also a play on "George Droyd," an imagined android version of George Floyd, created by meme coin shillers back in April 2024. The same group of shillers then created "Kirkinator," Kirk's Iron Man alter-ego, to promote a new cryptocurrency. Both characters have appeared in several AI-video memes, imagining their escapades with Elon Musk, Donald Trump, and Jeffrey Epstein.

But the name "George Foid" wasn't coined by crypto bros. The term, as applied to Good, originates from X user @PubWanghaf, who shared a quote-tweet on the day that she was killed, joking about a sixth grader smiling when he realizes that school's out for two days because of the unrest in his hometown of Minneapolis. The X user likely borrowed the term from a viral November 2024 usage, unrelated to current events.

Thelot likened the motive behind such inflammatory posters to the word “cacoethes,” meaning an irresistible urge to do something inadvisable. “We’re lured to do this, we’re lured to remix, we’re lured to memeify. Much like the apple in the prelude to the Trojan War, the goal of the image is discord in this specific scenario.”

In a media environment ripe with cacoethes, Thelot says he doesn’t trust images at all. “I don’t really know what to do with my mom, my grandma, the people around me, how to even begin to educate for that world.”

And like images, it’s also hard to trust ragebait like Reneeification. Are its proponents truly hateful, or are they just click-obsessed, money-hungry, and willing to do anything? When AI-videos of ICE agents arresting blue-haired women surface online, who’s really posting them, and is their goal to proselytise or farm engagement?

Even those outraged by the meme play into the attention-seeking methods utilized in such hateful internet phenomena. Unfortunately, viral quote-posts "dunking" on such inflammatory "Reneeification" posts also propel the content to a wider audience. There’s also a twisted version of outrage bait cropping up in the wake of all of this: a comedian podcaster’s AI image of a nonexistent mural deifying Good alongside January 6 rioter and QAnon follower Ashli Babbitt got relatively little engagement compared to the posts dunking on it.

Now, with AI, anonymous trolls wanting to mock Good in "Reneeification" memes don't need to take a photo to expose themselves, like in Trayvoning, to do so. Plus, there's a crackpot chance they could liquidate their hate into crypto wallet gains.

Owen Carry is an internet culture writer, researcher, trendspotter and former Associate Editor at Know Your Meme.



Moderators reversed course on their open-door AI policy after fans filled the subreddit with AI-generated Dale Cooper slop. #davidlynch #AISlop #News


Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies


People on r/twinpeaks flooded the subreddit with AI slop images of FBI agent Dale Cooper and ChatGPT-generated scripts after the community’s moderators opened the door to posting AI art. The tide of terrible Twin Peaks-related slop lasted for about two days before the subreddit’s mods broke, reversed their decision, and deleted the AI-generated content.

Twin Peaks is a moody TV show that first aired in the 1990s and was followed by a third season in 2017. The work of surrealist auteur David Lynch, it influenced lots of TV shows and video games that came after and has a passionate fan base that still shares theories and art to this day. Lynch died earlier this year, and since his passing he’s become a talking point for pro-AI art people who point to several interviews and secondhand stories they claim show Lynch had embraced an AI-generated slop future.

On Tuesday, a mod posted a long announcement that opened the doors to AI on the sub. In a now deleted post titled “Ai Generated Content On r/twinpeaks,” the moderator outlined the position that the sub was a place for everyone to share memes, theories, and “anything remotely creative as long as it has a loose string to the show or its case or its themes. Ai generated content is included in all of this.”

The post went further. “We are aware of how Ai ‘art’ and Ai generated content can hurt real artists,” the post said. “Unfortunately, this is just the reality of the world we live in today. At this point I don’t think anything can stop the Ai train from coming, it’s here and this is only the beginning. Ai content is becoming harder and harder to identify.”

The mod then asked Redditors to follow an honor system and label any post that used AI with a special new flair so people could filter out those posts if they didn’t want to see them. “We feel this is a best of both worlds compromise that should keep everyone fairly happy,” the mod said.

An honor system, a flair, and a filter did not mollify the community. In the following 48 hours, Lynch fans expressed their displeasure by showing r/twinpeaks what it looks like when no one can “stop the Ai train from coming.” They filled the subreddit with AI-generated slop in protest, including horrifying pictures of series protagonist Cooper doing an end-zone dance on a football field while Laura Palmer screamed in the sky, and more than a few awful ChatGPT-generated scripts.
Image via r/twinpeaks.
Free-IDK-Chicken, a former mod of r/twinpeaks who resigned over the AI debacle, said the post wasn’t run by other members of the mod team. “It was poorly worded. A bad take on a bad stance and it blew up in their face,” she told 404 Media. “It spiraled because it was condescending and basically told the community: we don’t care that it’s theft, that it’s unethical, we’ll just flair it so you can filter it out…they missed the point that AI art steals from legit artists and damages the environment.”

According to Free-IDK-Chicken, the subreddit’s mods had been fighting over whether or not to ban AI art for months. “I tried five months ago to get AI banned and was outvoted. I tried again last month and was outvoted again,” she said.

On Thursday morning, with the subreddit buried in AI slop, the mods of r/twinpeaks relented, banned AI art, and cleaned up the protest spam. “After much thought and deliberation about the response to yesterday's events, the TP Mod Team has made the decision to reverse their previous statement on the posting of AI content in our community,” the mods said in a post announcing the new policy. “Going forward, posts including generative AI art or ChatGPT-style content are disallowed in this subreddit. This includes posting AI google search results as they frequently contain misinformation.”

Lynch has become a mascot for pro-AI boosters. An image on a pro-AI art subreddit depicts Lynch wearing an OpenAI shirt and pointing at the viewer. “You can’t be punk and also be anti-AI, AI-phobic, or an AI denier. It’s impossible!” reads a sign next to the AI-generated picture of the director.
Image via r/slopcorecirclejerk
As evidence, they point to a British Film Institute interview published shortly before his death where he lauds AI and calls it “incredible as a tool for creativity and for machines to help creativity.” AI boosters often leave off the second part of the quote. “I’m sure with all these things, if money is the bottom line, there’d be a lot of sadness, and despair and horror. But I’m hoping better times are coming,” Lynch said.
Image via r/slopcorecirclejerk
The other big piece of evidence people use to claim Lynch was pro-AI is a secondhand account given to Vulture by his neighbor, the actress Natasha Lyonne. According to the interview in Vulture, Lyonne asked Lynch for his thoughts on AI and Lynch picked up a pencil and told her that everyone has access to it and to a phone. “It’s how you use the pencil. You see?” He said.

Setting aside the environmental and ethical arguments against AI-generated art, if AI is a “pencil,” most of what people make with it is unpleasant slop. Grotesque nonsense fills our social media feeds and AI-generated Jedis and Ghiblis have become the aesthetic of fascism.

We've seen other platforms and communities struggle with keeping AI art at bay when they've allowed it to exist alongside human-made content. On Facebook, Instagram, and YouTube, low-effort garbage is flooding online spaces and pushing productive human conversation to the margins, while floating to the top of engagement algorithms.

Other artist communities are pushing back against AI art in their own ways: Earlier this month, DragonCon organizers ejected a vendor for displaying AI-generated artwork. Artists’ portfolio platform ArtStation banned AI-generated content in 2022. And earlier this year, artists protested the first-ever AI art auction at Christie’s.


LeBron James' Lawyers Send Cease-and-Desist to AI Company Making Pregnant Videos of Him

Viral Instagram accounts making LeBron 'brainrot' videos have also been banned. #AISlop

"Challah Horse" was a Polish meme warning about Facebook AI spam 'targeted at susceptible people' that was stolen by a spam page targeted at susceptible people. #AISpam #Facebook #MarkZuckerberg #AISlop #FacebookSpam