Artist Tega Brain is fighting the internet’s enshittification by turning back the clock to before ChatGPT existed.
'Slop Evader' Lets You Surf the Web Like It’s 2022
It’s hard to believe it’s only been a few years since generative AI tools started flooding the internet with low-quality content: slop. Just over a year ago, you’d have to peruse certain corners of Facebook or spend time wading through the cultural cesspool of Elon Musk’s X to find people posting bizarre and repulsive synthetic media. Now, AI slop feels inescapable, whether you’re watching TV, reading the news, or trying to find a new apartment.

That is, unless you’re using Slop Evader, a new browser tool that filters your web searches to only include results from before November 30, 2022, the day ChatGPT was released to the public.
The tool is available for Firefox and Chrome, and it has one simple function: showing you the web as it existed before the deluge of AI-generated garbage. It uses Google’s search operators to restrict queries to popular websites and filter results by publication date, a scorched-earth approach that virtually guarantees your searches will be slop-free.
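The article doesn’t include Slop Evader’s source code, but the mechanism it describes, combining Google’s `site:` and `before:` search operators to pin results to one site and a pre-ChatGPT date range, can be sketched in a few lines of Python. The function name and example query below are illustrative, not taken from the tool itself:

```python
from urllib.parse import urlencode

# Cutoff used by Slop Evader: the day ChatGPT was released to the public.
CUTOFF = "2022-11-30"

def pre_slop_search_url(query: str, site: str) -> str:
    """Build a Google search URL restricted to one site, showing only
    pages Google dates before the ChatGPT release."""
    # "site:" limits results to a single domain; "before:" is Google's
    # date-filter operator. It keys off Google's notion of a page's
    # publish date, so results are approximate rather than guaranteed.
    q = f"site:{site} {query} before:{CUTOFF}"
    return "https://www.google.com/search?" + urlencode({"q": q})

print(pre_slop_search_url("sourdough starter tips", "reddit.com"))
```

A browser extension along these lines would just rewrite the search query the same way before it reaches Google; the heavy lifting is done by the search engine’s own date operators.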
Slop Evader was created by artist and researcher Tega Brain, who says she was motivated by the growing dismay over the tech industry’s unrelenting, aggressive rollout of so-called “generative AI”—despite widespread criticism and the wider public’s distaste for it.
Slop Evader in action. Via Tega Brain
“This sowing of mistrust in our relationship with media is a huge thing, a huge effect of this synthetic media moment we’re in,” Brain told 404 Media, describing how tools like Sora 2 have short-circuited our ability to determine reality within a sea of artificial online junk. “I’ve been thinking about ways to refuse it, and the simplest, dumbest way to do that is to only search before 2022.”
One under-discussed impact of AI slop and synthetic media, says Brain, is how it increases our “cognitive load” when viewing anything online. When we can no longer immediately assume any of the media we encounter was made by a human, the act of using social media or browsing the web is transformed into a never-ending procession of existential double-takes.
This cognitive dissonance extends to everyday tasks that require us to use the internet—which is practically everything nowadays. Looking for a house or apartment? Companies are using genAI tools to generate pictures of houses and rental properties, as well as the ads themselves. Trying to sell your old junk on Facebook Marketplace? Meta’s embrace of generative AI means you may have to compete with bots, fake photos, and AI-generated listings. And when we shop for beauty products or view ads, synthetic media tools are taking our filtered and impossibly idealized beauty standards to absurd and disturbing new places.
In all of these cases, generative AI tools further thumb the scales of power—saving companies money while placing a higher cognitive burden on regular people to determine what’s real and what’s not.
“I open up Pinterest and suddenly notice that half of my feed are these incredibly idealized faces of women that are clearly not real people,” said Brain. “It’s shoved into your face and into your feed, whether you searched for it or not.”
Currently, Slop Evader can be used to search pre-GPT archives of seven different sites where slop has become commonplace, including YouTube, Reddit, Stack Exchange, and the parenting site Mumsnet. The obvious downside, from a user perspective, is that you won’t be able to find anything time-sensitive or current—including this very website, which did not exist in 2022. The experience is simultaneously refreshing and harrowing: you can browse freely without having to constantly question reality, but you always know that this freedom is forever locked in time—nostalgia for a human-centric world wide web that no longer exists.
Of course, the tool’s limitations are part of its provocation. Brain says she has plans to add support for more sites, and release a new version that uses DuckDuckGo’s search indexing instead of Google’s. But the real goal, she says, is prompting people to question how they can collectively refuse the dystopian, inhuman version of the internet that Silicon Valley’s AI-pushers have forced on us.
“I don’t think browser add-ons are gonna save us,” said Brain. “For me, the purpose of doing this work is mostly to act as a provocation and give people examples of how you can refuse this stuff, to furnish one’s imaginary for what a politics of refusal could look like.”
With enough cultural pushback, Brain suggests, we could start to see alternative search engines like DuckDuckGo adding options to filter out search results suspected of containing synthetic content (DuckDuckGo added the ability to filter out AI images in search earlier this year). There has also been a growing movement pushing back against the new AI data centers threatening to pollute communities and raise residents’ electricity bills. But no matter what form AI slop-refusal takes, it will need to be a group effort.
“It’s like with the climate debate, we’re not going to get out of this shitshow with individual actions alone,” she added. “I think that’s the million dollar question, is what is the relationship between this kind of individual empowerment work and collective pushback.”
Data centers are concentrated in these states. Here's what's happening to electricity prices
Residential utility bills rose 6% on average nationwide in August compared with the same period last year, according to the Energy Information Administration.
Spencer Kimball (CNBC)
The 'psyops' revealed by X are entirely the fault of the perverse incentives created by social media monetization programs.
America’s Polarization Has Become the World's Side Hustle
A new feature on X is making people suddenly realize that a large portion of the divisive, hateful, and spammy content designed to inflame tensions, or at the very least to rack up engagement on social media, is being published by accounts that pretend to be based in the United States but are actually run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia. An account called “Ivanka News” is based in Nigeria, “RedPilledNurse” is from Europe, “MAGA Nadine” is in Morocco, “Native American Soul” is in Bangladesh, and “Barron Trump News” is based in Macedonia, among many, many others.

Inauthentic viral accounts on X are just the tip of the iceberg, though, as we have reported. A huge amount of the viral content about American politics and American news on social media comes from sock puppet and bot accounts monetized by people in other countries. The rise of easy-to-use, free generative AI tools has supercharged this effort, and social media monetization programs have incentivized it and are almost entirely to blame. The disinformation and slop phenomenon on the internet today makes the days of ‘Russian bot farms’ and ‘fake news pages from Cyprus’ seem quaint; the problem is now fully decentralized, distributed across the world, and almost entirely funded by social media companies themselves.
This will not be news to people who have been following 404 Media, because I have done multiple investigations into the perverse incentives that social media and AI companies have created, which encourage people to fill their platforms with slop. But what has happened on X is the same thing that has happened on Facebook, Instagram, YouTube, and other social media platforms (it is also happening to the internet as a whole, with AI slop websites laden with plagiarized content and SEO spam and monetized with Google ads). Each social media platform has either an ad revenue sharing program, a “creator bonus” program, or a monetization program that directly pays creators who go viral on their platforms.

This has created an ecosystem of side hustlers trying to gain access to these programs, and of YouTube and Instagram creators teaching people how to do so. These guide videos are easy to find if you search for things like “monetized X account” on YouTube. Translating that phrase and searching in other languages (such as Hindi, Portuguese, or Vietnamese) will bring up guides in those languages. Within seconds, I was able to find a handful of YouTubers explaining in Hindi how to create monetized X accounts; other videos on the creators’ pages explain how to fill these accounts with AI-generated content. These guides also exist in English, and it is increasingly popular to sell guides to making “AI influencers,” AI newsletters, Reels accounts, and TikTok accounts, regardless of the country you’re from.
youtube.com/embed/tagCqd_Ps1g?…
Examples include “AK Educate” (which is one of thousands), which posts every few days about how to monetize accounts on Facebook, YouTube, X, Instagram, TikTok, Etsy, and others. “How to create Twitter X Account for Monitization [sic] | Earn From Twitter in Pakistan,” is the name of a typical video in this genre. These channels are not just teaching people how to make and spam content, however. They are teaching people specifically how to make it seem like they are located in the United States, and how to create content that they believe will perform with American audiences on American social media. Sometimes they are advising the use of VPNs and other tactics to make it seem like the account is posting from the United States, but many of the accounts explain that doing this step doesn’t actually matter.
Americans are being targeted because advertisers pay higher ad rates to reach American internet users, who are among the wealthiest in the world. In turn, social media companies pay more money if the people engaging with the content are American. This has created a system where it makes financial sense for people all over the world to target Americans specifically with highly engaging, divisive content. It pays more.

For the most part, the only ‘psyop’ here is one being run on social media users by social media companies themselves, in search of more ad revenue by any means necessary.
For example: AK Educate has a video called “7 USA Faceless Channel Ideas for 2025,” and another video called “USA YouTube Channel Kaise Banaye [how to].” The first of these videos is in Hindi but has English subtitles.
“Where you get $1 on 1,000 views on Pakistani content,” the video begins, “you get $5 to $7 on 1,000 views on USA content.”
“As cricket is seen in Pakistan and India, boxing and MMA are widely seen in America,” he says. Channel ideas include “MMA,” “Who Died Today USA,” “How ships sink,” news from wars, motivational videos, and Reddit story voiceovers. To see how pervasive this advice to target Americans is, look at the results of a YouTube search for “USA Channel Kaise Banaye”:
Screengrabs from YouTube videos about how to target Americans
One of these videos, called “7 Secret USA-Based Faceless Channel Ideas for 2026 (High RPM Niches!),” starts with an explanation of “USA currency,” detailing what a dollar is, what a cent is, and their value relative to the rupee, and goes on to explain how to generate English-language content about ancient history, rare cars, and tech news. Another video I watched showed, from scratch, how to create videos for a channel called “Voices of Auntie Mae,” which are supposed to be inspirational videos about Black history, generated using a mix of ChatGPT, Google Translate, an AI voice tool called Speechma, Google’s AI image generator, CapCut, and YouTube. Another shows how to use Bing search, Google News Trends, Perplexity, and video generators to create “a USA Global News Channel Covering World Events,” which included making videos about the war in Ukraine and Chinese military parades. A video podcast about success stories described how a man made a baseball video called “baseball Tag of the year??? #mlb” in which 49 percent of viewers were in the USA: “People from the USA watch those types of videos, so my brother sitting at home in India easily takes his audience to an American audience,” one of the creators said in the video.

I watched video after video created by a channel called “Life in Rural Cambodia” about how to create and spam AI-generated content using only your phone. Another video, presented by an AI-generated woman speaking Hindi, explains how to copy and paste text from CNN into a Google Doc, run it through a program called “GravityWrite” to alter it slightly, have an AI voice read it, and post the resulting video to YouTube.
youtube.com/embed/WWuXtmLOnjk?…
A huge and growing amount of the content that we see on the internet is created explicitly because these monetization programs exist. People are making content specifically for Americans. They are not always, or even usually, creating it to inflame tensions. They are making it because they can make money from it, and because content viewed by Americans pays the most and performs the best. The guides to making this sort of thing focus entirely on how to make content quickly, easily, and with automated tools. They focus on how to steal content from news outlets, source things from other websites, and generate scripts using AI tools. They do not focus on spreading disinformation or fucking up America; they focus on “making money.” This is a problem that AI has drastically exacerbated, but it is a problem that has been created wholly by social media platforms themselves, and one they seem to have little or no interest in solving.

The new feature on X that exposes this fact is notable because people are actually talking about it, but Facebook and YouTube have had similar features for years, and it has changed nothing. Clicking on any random horrific Facebook slop page, such as this one called “City USA,” which exclusively posts photos of celebrities holding birthday cakes, shows that even though it lists its address as being in New York City, the page is run by someone in Cambodia. This page called “Military Aviation,” which lists its address as “Washington DC,” is actually based in Indonesia. This page called “Modern Guardian,” which exclusively posts positive, fake AI content about Elon Musk, lists itself as being in Los Angeles, but Facebook’s transparency tools say it is based in Cambodia.
Besides journalists and people who feel like they are going crazy looking at this stuff, there are, realistically, no social media users who are going into the “transparency” pages of viral social media accounts to learn where they are based. The problem is not a lack of transparency, because being “transparent” doesn’t actually matter. The only thing revealed by this transparency is that social media companies do not give a fuck about this.
Where Facebook's AI Slop Comes From
Facebook itself is paying creators in India, Vietnam, and the Philippines for bizarre AI spam that they are learning to make from YouTube influencers and guides sold on Telegram.
Jason Koebler (404 Media)
An account is spamming horrific, dehumanizing videos of immigration enforcement because the Facebook algorithm is rewarding them for it.
AI-Generated Videos of ICE Raids Are Wildly Viral on Facebook
“Watch your step sir, keep moving,” a police officer with a vest that reads ICE and a patch that reads “POICE” says to a Latino-appearing man wearing a Walmart employee vest. He leads him toward a bus that reads “IMMIGRATION AND CERS.” Next to him, one of his colleagues begins walking unnaturally sideways, one leg impossibly darting through another as he heads to the back of a line of other Latino Walmart employees who are apparently being detained by ICE. Two American flag emojis are superimposed on the video, as is the text “Deportation.”

The video has 4 million views, 16,600 likes, 1,900 comments, and 2,200 shares on Facebook. It is, obviously, AI-generated.
Some of the comments seem to understand this: “Why is he walking like that?” one says. “AI the guys foot goes through his leg,” another says. Many of the comments clearly do not: “Oh, you’ll find lots of them at Walmart,” another top comment reads. “Walmart doesn’t do paperwork before they hire you?” another says. “They removing zombies from Walmart before Halloween?”
The latest trend in Facebook’s ever-downward spiral into the AI slop toilet is AI deportation videos. These are posted by an account called “USA Journey 897” and have the general vibe of actual propaganda videos posted by ICE and the Department of Homeland Security’s social media accounts. Many of the AI videos focus on workplace deportations, but some are similar to horrifying, real videos we have seen from ICE raids in Chicago and Los Angeles. The account was initially flagged to 404 Media by Chad Loder, an independent researcher.
“PLEASE THAT’S MY BABY,” a dark-skinned woman screams while being restrained by an ICE officer in another video. “Ma’am stop resisting, keep moving,” an officer says back. The camera switches to an image of the baby: “YOU CAN’T TAKE ME FROM HER, PLEASE SHE’S RIGHT THERE. DON’T DO THIS, SHE’S JUST A BABY. I LOVE YOU, MAMA LOVES YOU,” the woman says. The video switches to a scene of the woman in the back of an ICE van. The video has 1,400 likes and 407 comments, which include “Don’t separate them….take them ALL!,” “Take the baby too,” and “I think the days of use those child anchors are about over with.”
The USA Journey 897 account publishes several of these videos a day. Most of its videos have at least hundreds of thousands of views, according to Facebook’s own metrics, and many have millions or double-digit millions of views. Earlier this year, the account largely posted a mix of real but stolen videos of police interactions with people (such as Luigi Mangione’s perp walk) and absurd AI-generated videos such as jacked men carrying whales or riding tigers.
The account started experimenting with extremely crude AI-generated deportation videos in February, which included videos of immigrants handcuffed on the tarmac outside deportation planes where their arms randomly detached from their bodies, or where people suddenly vanished through stairs. Recent videos are far more realistic. None of the videos have an AI watermark on them, but the type and style of video changed dramatically starting with videos posted on October 1, the day after OpenAI’s Sora 2 was released; around that time the account started posting videos featuring identifiable stores and restaurants, which have become a common trope in Sora 2 videos.

A YouTube page linked from the Facebook account shows a real video of a car in Cyprus, uploaded nearly two years before any other content, suggesting that the person behind the account may live in Cyprus (though the account banner on Facebook includes both a U.S. and an Indian flag). This YouTube account also reveals several other accounts being used by the person. Earlier this year, the YouTube account was posting side hustle tips about how to DoorDash, AI-generated videos of singing competitions in Greek, AI-generated podcasts about the WNBA, and AI-generated videos about “Billy Joyel’s health.” A related YouTube account called Sea Life 897 exclusively features AI-generated history videos about sea journeys; it links to an Instagram account full of AI-generated boats exploding and a Facebook account that has rebranded from AI-generated “Sea Life” content to an account now called “Viral Video’s Europe,” which is full of stolen images of women with gigantic breasts and creep shots of women athletes.
My point here is that the person behind this account does not seem to actually have any sort of vested interest in the United States or in immigration. But they are nonetheless spamming horrific, dehumanizing videos of immigration enforcement because the Facebook algorithm is rewarding them for that type of content, and because Facebook directly makes payments for it. As we have seen with other types of topical AI-generated content on Facebook, like videos about Palestinian suffering in Gaza or natural disasters around the world, many people simply do not care if the videos are real. And the existence of these types of videos serves to inoculate people from the actual horrors that ICE is carrying out. It gives people the chance to claim that any video is AI generated, and serves to generally litter social media with garbage, making real videos and real information harder to find.
An early, crude video posted by the account.
Meta did not immediately respond to a request for comment about whether the account violates its content standards, but the company has seemingly staked its present and future on allowing bizarre and often horrifying AI-generated content to proliferate on the platform. AI-generated content about immigrants is not new; in the leadup to last year’s presidential debate, Donald Trump and his allies began sharing AI-generated content about Haitian immigrants who Trump baselessly claimed were eating dogs and cats in Ohio.
In January, immediately before Trump was inaugurated, Meta changed its content moderation rules to explicitly allow the dehumanization of immigrants, arguing that its previous policies banning this were “out of touch with mainstream discourse.” Phrases and content that are now explicitly allowed on Meta platforms include “Immigrants are grubby, filthy pieces of shit,” “Mexican immigrants are trash!” and “Migrants are no better than vomit,” according to documents obtained and published by The Intercept. After those changes were announced, content moderation experts told us that Meta was “opening up their platform to accept harmful rhetoric and mold public opinion into accepting the Trump administration’s plans to deport and separate families.”
Leaked Meta Rules: Users Are Free to Post “Mexican Immigrants Are Trash!” or “Trans People Are Immoral”
Facebook now allows attacks on immigrants and trans people, and posts like “Mexican immigrants are trash!” and “I’m a proud racist.”
Sam Biddle (The Intercept)
Andrew Cuomo Uses AI MPREG Schoolhouse Rock Bill to Attack Mamdani, Is Out of Ideas
I am haunted by a pregnant bill in Andrew Cuomo’s new AI-generated attack ad against Zohran Mamdani.

Cuomo posted the ad, which riffs on the famous Schoolhouse Rock! song “I’m Just a Bill,” on his X account. In Cuomo’s AI-generated cartoon nightmare, Zohran Mamdani lights money on fire while a phone bearing the ChatGPT logo explains, apparently, that Mamdani is not qualified.
The ad bears all the hallmarks of the sloppiest AI trash: weird artifacting, strange voices that don’t sync with the talking mouths, and inconsistent animation. It feels simultaneously surreal, of the moment, and completely ancient.
🎶“I’m Just A Shill” (FT. Zohran) pic.twitter.com/ga3JxnYO7B
— Andrew Cuomo (@andrewcuomo) October 30, 2025
And then there’s the pregnant bill.

The Schoolhouse Rock! bill is an iconic cartoon character that has been parodied by everyone from The Simpsons to Saturday Night Live. There are thousands, perhaps millions, of pictures of the cartoon bill online, all available to be gobbled up by scrapers and turned into training data for AI.
For some reason, the bill in Cuomo’s ad has thick red lips (notably absent in the original) and appears to be pregnant. Adding to the discordant AI jank of the image, the pregnancy is only visible when the bill is standing up. Sometimes it’s leaning against the steps and in those shots it has the slim figure characteristic of its inspiration. But when the bill stands it looks positively inflated, almost as if the video generator used to make Cuomo’s ad was trained on MPREG fetish art of the bill and not the original cartoon itself. The thick and luscious red lips are present whether the bill is leaning or standing.
Towards the end of the ad, an anthropomorphic phone with a ChatGPT logo wanders into the scene. Seeing it standing next to the pregnant bill, I could not help but think that the phone is the father of whatever child the bill carries.
My observation led to an argument in the 404 Media Slack channel and opinions were split. “It does not seem pregnant to me,” said Emanuel Maiberg.
Jason Koebler, however, came to my defense. He circled the pregnant belly of the cartoon bill and shared the image. “Baby is stored in the circle area,” he said.

Perplexed by all this, I reached out to Cuomo’s campaign for an explanation. I wanted a response to the ad and to get the campaign’s thoughts on AI-generated political content. More importantly, I needed to know its opinion on the pregnancy. “Does that bill look pregnant to you?” I asked. “I think it looks pregnant, but my editors are split. I would love for the Campaign to weigh in.” Out of journalistic due diligence, I also reached out to Mamdani’s press office. Neither campaign has responded to my request to weigh in on the pregnancy of the AI-generated cartoon bill.
This is not the first time the Cuomo campaign has used AI. An ad in early October featured a deepfaked Cuomo working as a train operator, a stock trader, and a stagehand. A week ago, the campaign released a long video depicting criminals endorsing Mamdani, which critics called racist. The campaign deleted it shortly after it was posted and blamed the whole thing on a junior staffer.
It is worth noting that Cuomo's AI slop is being deployed most likely because the candidate has been utterly incapable of generating any authentic excitement about his campaign in New York City or on the internet, and he is facing a digitally native, younger candidate who just seems effortlessly Good At the Internet and Posting.
This is, unfortunately, how a lot of politics works in 2025. Desperate campaigns and desperate presidents are in a slop-fueled arms race to make the most ridiculous possible ads and social media content. It looks cheap, is cheap, and is the realm of politicians who are totally out of ideas, but increasingly it feels like slop is the dominant aesthetic of our time.
AI-generated imagery takes New York politics by storm
Andrew Cuomo’s AI campaign ad sparked criticism.
Bobby Cuza (Spectrum News NY1)
Moderators reversed course on their open-door AI policy after fans filled the subreddit with AI-generated Dale Cooper slop.
Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies
People on r/twinpeaks flooded the subreddit with AI slop images of FBI agent Dale Cooper and ChatGPT-generated scripts after the community’s moderators opened the door to posting AI art. The tide of terrible Twin Peaks-related slop lasted for about two days before the subreddit’s mods broke, reversed their decision, and deleted the AI-generated content.

Twin Peaks is a moody TV show that first aired in the 1990s and was followed by a third season in 2017. The work of surrealist auteur David Lynch, it influenced lots of TV shows and video games that came after, and it has a passionate fan base that still shares theories and art to this day. Lynch died earlier this year, and since his passing he has become a talking point for pro-AI art people who point to several interviews and secondhand stories they claim show Lynch had embraced an AI-generated slop future.
On Tuesday, a mod posted a long announcement that opened the doors to AI on the sub. In a now deleted post titled “Ai Generated Content On r/twinpeaks,” the moderator outlined the position that the sub was a place for everyone to share memes, theories, and “anything remotely creative as long as it has a loose string to the show or its case or its themes. Ai generated content is included in all of this.”
The post went further. “We are aware of how Ai ‘art’ and Ai generated content can hurt real artists,” the post said. “Unfortunately, this is just the reality of the world we live in today. At this point I don’t think anything can stop the Ai train from coming, it’s here and this is only the beginning. Ai content is becoming harder and harder to identify.”
The mod then asked Redditors to follow an honor system and label any post that used AI with a special new flair so people could filter out those posts if they didn’t want to see them. “We feel this is a best of both worlds compromise that should keep everyone fairly happy,” the mod said.
An honor system, a flair, and a filter did not mollify the community. In the following 48 hours Lynch fans expressed their displeasure by showing r/twinpeaks what it looks like when no one can “stop the Ai train from coming.” They filled the subreddit with AI-generated slop in protest, including horrifying pictures of series protagonist Cooper doing an end-zone dance on a football field while Laura Palmer screamed in the sky and more than a few awful ChatGPT generated scripts.
Image via r/twinpeaks.
Free-IDK-Chicken, a former mod of r/twinpeaks who resigned over the AI debacle, said the post wasn’t run past other members of the mod team. “It was poorly worded. A bad take on a bad stance and it blew up in their face,” she told 404 Media. “It spiraled because it was condescending and basically told the community: we don’t care that it’s theft, that it’s unethical, we’ll just flair it so you can filter it out…they missed the point that AI art steals from legit artists and damages the environment.”

According to Free-IDK-Chicken, the subreddit’s mods had been fighting for months over whether or not to ban AI art. “I tried five months ago to get AI banned and was outvoted. I tried again last month and was outvoted again,” she said.
On Thursday morning, with the subreddit buried in AI slop, the mods of r/twinpeaks relented, banned AI art, and cleaned up the protest spam. “After much thought and deliberation about the response to yesterday's events, the TP Mod Team has made the decision to reverse their previous statement on the posting of AI content in our community,” the mods said in a post announcing the new policy. “Going forward, posts including generative AI art or ChatGPT-style content are disallowed in this subreddit. This includes posting AI google search results as they frequently contain misinformation.”
Lynch has become a mascot for pro-AI boosters. An image on a pro-AI art subreddit depicts Lynch wearing an OpenAI shirt and pointing at the viewer. “You can’t be punk and also be anti-AI, AI-phobic, or an AI denier. It’s impossible!” reads a sign next to the AI-generated picture of the director.
Image via r/slopcorecirclejerk
As evidence, they point to a British Film Institute interview published shortly before his death in which he lauds AI and calls it “incredible as a tool for creativity and for machines to help creativity.” AI boosters often leave off the second part of the quote: “I’m sure with all these things, if money is the bottom line, there’d be a lot of sadness, and despair and horror. But I’m hoping better times are coming,” Lynch said.
The other big piece of evidence people use to claim Lynch was pro-AI is a secondhand account given to Vulture by his neighbor, the actress Natasha Lyonne. According to the interview in Vulture, Lyonne asked Lynch for his thoughts on AI, and Lynch picked up a pencil and told her that everyone has access to one, and to a phone. “It’s how you use the pencil. You see?” he said.

Setting aside the environmental and ethical arguments against AI-generated art, if AI is a “pencil,” most of what people make with it is unpleasant slop. Grotesque nonsense fills our social media feeds, and AI-generated Jedis and Ghiblis have become the aesthetic of fascism.
We've seen other platforms and communities struggle to keep AI art at bay when they've allowed it to exist alongside human-made content. On Facebook, Instagram, and YouTube, low-effort garbage is flooding online spaces and pushing productive human conversation to the margins, while floating to the top of engagement algorithms.
Other artist communities are pushing back against AI art in their own ways: Earlier this month, DragonCon organizers ejected a vendor for displaying AI-generated artwork. Artists’ portfolio platform ArtStation banned AI-generated content in 2022. And earlier this year, artists protested the first-ever AI art auction at Christie’s.
Artists Are Revolting Against AI Art on ArtStation
Artists are fed up with AI art on the portfolio platform, which is owned by Epic Games, but the company isn't backing down. Chloe Xiang (VICE)
Real Footage Combined With AI Slop About DC Is Creating a Disinformation Mess on TikTok #News #AISlop
TikTok is full of AI slop videos about the National Guard’s deployment in Washington, D.C., some of which use Google’s new VEO AI video generator. Unlike previous efforts to flood the zone with AI slop in the aftermath of a disaster or major news event, some of these videos blend real footage with AI footage, making it harder than ever to tell what’s real and what’s not, which has the effect of distorting people’s understanding of the military occupation of DC.

At the start of last week, the Trump administration announced that all homeless people should immediately move out of Washington, DC. This was followed by an order for federal agents to occupy the city and remove tents where homeless people had been living. These events were reported on by many news outlets; for example, this footage from NBC shows the reality of at least one part of the exercise. On TikTok, though, this is just another popular trending topic, where slop creators and influencers can work together to create and propagate misinformation.
404 Media has previously covered how perceptions of real-life events can be quickly manipulated with AI images and footage. This is more of the same, but with the release of new, better AI video creation tools like Google’s VEO, the footage is more convincing than ever.
Some of the slop is obvious fantasy-driven engagement farming and gives itself away aesthetically or through its content. This video and this very similar one show tents being pulled from a vast field into the back of a moving garbage truck on the Washington Mall, with the Capitol building in the background. They’re not tagged as AI, but at least a few people in the comments are able to identify them as such; both videos still have over 100,000 views. This somehow more harrowing one, featuring a Hunger Games song, has 41,000.

@biggiesmellscoach Washington DC cleanup organized by Trump. Homeless are now given secure shelters, rehab, therapy, and help. #washingtondc #fyp #satire #trending #viral ♬ original sound - nina.editss

With something like this video, made with VEO, the slop begins to feel more like a traditional news report. It has 146,000 views and is made of several short clips with a news-anchorish voiceover. I had to scroll down past a lot of “Thank you president Trump” and “good job officers” comments to find any that pointed out that it was fake, even though the watermark for Google’s VEO generator is in the corner.
The voiceover also “reports” semi-accurately on what happened in DC, but without any specifics: “Police moved in today, to clear out a homeless camp in the city. City crews tore down tents, packed up belongings, and swept the park clean. Some protested, some begged for more time. But the cleanup went on. What was once a community is now just an empty field.” I found the same video posted to X, with commenters on both platforms taking offense at the use of the term “community.”
Comments on the original and X postings of this video, which is clearly made with VEO
I also found several examples of shorter slop clips like this one, which has almost 1 million views, and this one, with almost half a million, which both exaggerate the scale and disarray of the encampments. In one of the videos, the entirety of an area that looks like the National Mall (but isn’t) has been taken over by tents. Quickly scrolling these videos gives the viewer an incorrect understanding of what the DC “camps” and “cleanup” looked like.
These shorter clips have almost 1.5 million views between them
The account that posted these videos was called Hush Documentary when I first encountered it, but had changed its name to viralsayings by Monday evening. The profile also has a five-second AI-generated clip of ATF officers patrolling a neighborhood; it is marked as AI and has 89,000 views.
What’s also happening is that real footage and fake footage are being mixed together in a popular greenscreen TikTok format, in which a person gives commentary (basically, reporting or commenting on the news) while footage plays in the background. That is happening in this clip, which features that same AI footage of ATF officers.
The viralsayings version of the footage is marked as AI. The remixed version, combined with real footage, is not.
I ended up finding a ton of instances where accounts mixed slop clips of the camp clearings with seemingly real footage; notably, many of them included this viral original footage of police clearing a homeless encampment in Georgetown. But a lot of them are ripping each other off. For example, many accounts have ripped off the voiceover of this viral clip from @Alfredito_mx (which features real footage) and put it over top of AI footage. This clone from omivzfrru2 has nearly 200,000 views and features both real and AI clips; I found at least thirty other copies, all with between ~2,000 and 5,000 views.
The scraping-and-recreating robot went extra hard with this one: the editing is super glitchy, the videos overlay each other, the host flickers around the screen, and random legs walk by in the background.
@mgxrdtsi 75 homeless camps in DC cleared by US Park Police since Trump's 'Safe and Beautiful' executive order #alfredomx #washington #homeless #safeandbeautiful #trump ♬ original sound - mgxrdtsi

So, one viral video from a popular creator has spawned thousands of mirrors in the hope of chipping off a small amount of the original’s engagement. Those copies need footage, go looking for content in the tags, encounter the slop, and can’t tell, or don’t care, whether it’s real. Then thousands more people see the slop copies and end up with a totally incorrect view of an actual unfolding news situation.
In these videos, it’s only totally clear to me that the content is fake because I found the original sources. Lots of this footage is obviously fake if you’re familiar with the actual situation in DC, or with the city’s geography and streets. But most people are not. If you told me “some of these shots are AI,” I don’t think I could confidently identify all of them. Is the flicker or blurring onscreen from the footage itself, from a bad camera, from a time-lapse or sped-up clip, from endless replication online, or from the bad green screen of a “host”? Now, scrolling social media means encountering a mix of real and fake video, and the AI fakes are getting good enough that deciphering what’s actually happening requires a level of attention to detail that most people don’t have the knowledge or time for.
People Think AI Images of Hollywood Sign Burning Are Real
AI-generated slop is tricking people into thinking an already devastating series of wildfires in Los Angeles is even worse than it is, and using it to score political points. Samantha Cole (404 Media)
LeBron James' Lawyers Send Cease-and-Desist to AI Company Making Pregnant Videos of Him
Viral Instagram accounts making LeBron 'brainrot' videos have also been banned. Jason Koebler (404 Media) #AISlop
Inside the Economy of AI Spammers Getting Rich By Exploiting Disasters and Misery
How AI spammers monetized the LA fires and other natural disasters. Dexter Thomas (404 Media)
Viral 'Challah Horse' Image Zuckerberg Loved Was Originally Created as a Warning About Facebook's AI Slop
"Challah Horse" was a Polish meme warning about Facebook AI spam 'targeted at susceptible people' that was stolen by a spam page targeted at susceptible people. Jason Koebler (404 Media) #AISpam #Facebook #MarkZuckerberg #AISlop #FacebookSpam
Zuckerberg 'Loves' AI Slop Image From Spam Account That Posts Amputated Children
Zuckerberg seems to enjoy the spam that has taken over his flagship product. Jason Koebler (404 Media)