

I am starting to think I will never receive my horny novelty holiday decorations.#AISlop #christmas #etsy


When Will My Pornographic Shrek Christmas Ornament Arrive?


I am starting to think I will never receive my personalized, likely AI-generated horny Shrek Christmas ornaments I purchased from Wear and Decor. I had hoped the indecent and probably unauthorized Shrek ornament depicting the green ogre getting a blowjob would arrive before Christmas and, ideally, before I traveled home for the holidays. I doubt that’s going to happen. I think I’ve been rooked.

The ornament depicts Shrek, his eyes wide and a smile on his ogre lips, as a long-haired Fiona descends upon his crotch. “Let’s get Shrekxy and save Santa the trip,” reads a caption above the scene in the Wear and Decor listing. There was space at the bottom where I could personalize the ornament with my name and that of a loved one, as if to indicate that I was Shrek and that Fiona was my wife.
When I showed it to my wife weeks ago, after we first put up our Christmas tree, she simply said “No.”

“Don’t you think it’s funny?” I said.

“You’re supposed to be shopping for a tree topper,” she said.

“It’s only $43.99 for two,” I said. “That’s a bargain.”

She stared.

I had been shopping for a tree topper online when I stumbled into the strange world of AI-generated pornographic custom ornaments starring popular cartoon characters, listed on sites of dubious repute. I do not know what it says about my algorithms that attempting to find a nice, normal, and classy tree topper for Christmas led me to a horrifying world of horny, seemingly AI-generated knockoff novelty Christmas ornaments. I don’t want to reflect on that. I just want to show you what I’ve stumbled upon.

There is a whole underground world of erotic Christmas ornaments starring famous cartoon characters. Some of them are on Etsy, but most are on dubious-looking sites with names like Homacus and Pop Art. There are themes that repeat. Spanking. Butts. In flagrante delicto bedroom scenes. The promise that the purchaser can personalize these gifts with the name of their loved one and the logo of their favorite football team. I am sure the Baltimore Ravens love that you can buy an ornament depicting a nude Grinch gripping the ass of a female Grinch (notably not that of his canonical wife Martha May Whovier) emblazoned with their logo.
Image via Homacus.
“My butt would be so lonely without you touching it all the time,” reads the inscription above Zootopia’s Nick Wilde with Judy Hopps bent over his knee. You can purchase this same scene with Belle and Beast, Rey and Ben from Star Wars, a pair of Grinches, or Jack Skellington and Sally from Nightmare Before Christmas. In another variant, a male cartoon character is bent over the ass of a presenting female. Shrek is nose deep in Fiona’s ass. “I adore and love every part of you—Especially your butt. Merry Grinchmas,” the caption reads.
Image via Homacus.
The ornaments rarely carry the name of the actual characters they’re depicting. They are “Funny Fairytale Ornament” and “Funny Green Monsters” and “Personalized Funny Lion Couple Christmas Ornament, Custom Name Animal Lovers Decoration, Cute Romantic Holiday Gift.” These titles feel like holdovers from the prompt that, I assume, was fed to an AI image generator to create the ornaments. There are other signs.

Some of the Shrek ornaments refer to the green ogre as Grinches. Shrek often looks correct but Fiona is sometimes Yassified, her ogre features smoothed and made more feminine. In an ornament with Belle draped over Beast’s leg, the smiling prince has seven fingers on his left hand. The lighting in the “photos” of the objects is never quite right.
Image via Homacus.
Time magazine declared the “Architects of AI” its Person of the Year in 2025, and there is something about flipping through these listings for cheap and horny ornaments that feels like living in the future. This is the world the architects have built, one where some anonymous person out there in the online ether can quickly generate a lewd cartoon drawing of something from your childhood in an attempt to swindle you for a few bucks while you’re shopping for a Christmas tree topper.

I clicked “purchase” on the $40 Shrek blowjob ornament on November 28. The money was deducted from my account but I have not received confirmation of shipping.




Artist Tega Brain is fighting the internet’s enshittification by turning back the clock to before ChatGPT existed.#AISlop #GoogleSearch #searchengines


'Slop Evader' Lets You Surf the Web Like It’s 2022


It’s hard to believe it’s only been a few years since generative AI tools started flooding the internet with low-quality slop. Just over a year ago, you’d have to peruse certain corners of Facebook or spend time wading through the cultural cesspool of Elon Musk’s X to find people posting bizarre and repulsive synthetic media. Now, AI slop feels inescapable, whether you’re watching TV, reading the news, or trying to find a new apartment.

That is, unless you’re using Slop Evader, a new browser tool that filters your web searches to only include results from before November 30, 2022 — the day that ChatGPT was released to the public.

The tool is available for Firefox and Chrome, and has one simple function: showing you the web as it was before the deluge of AI-generated garbage. It uses Google search functions to query popular websites and filter results by publication date, a scorched-earth approach that virtually guarantees your searches will be slop-free.
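The article doesn’t publish the extension’s source, but the mechanism it describes (Google queries scoped to a site and cut off at a date) can be sketched with Google’s documented `site:` and `before:` search operators. The function name and example site below are illustrative assumptions, not Slop Evader’s actual code:

```python
from urllib.parse import urlencode

# Cutoff date from the article: the day ChatGPT was released to the public.
CUTOFF = "2022-11-30"

def pre_slop_query(terms: str, site: str) -> str:
    """Build a Google search URL restricted to one site and limited,
    via the `before:` operator, to pages dated before the cutoff."""
    q = f"{terms} site:{site} before:{CUTOFF}"
    return "https://www.google.com/search?" + urlencode({"q": q})

url = pre_slop_query("sourdough starter tips", "reddit.com")
print(url)
```

Appending `before:2022-11-30` to every query is the whole trick: Google itself does the date filtering, so the tool needs no archive of its own.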

Slop Evader was created by artist and researcher Tega Brain, who says she was motivated by the growing dismay over the tech industry’s unrelenting, aggressive rollout of so-called “generative AI”—despite widespread criticism and the wider public’s distaste for it.


Slop Evader in action. Via Tega Brain

“This sowing of mistrust in our relationship with media is a huge thing, a huge effect of this synthetic media moment we’re in,” Brain told 404 Media, describing how tools like Sora 2 have short-circuited our ability to determine reality within a sea of artificial online junk. “I’ve been thinking about ways to refuse it, and the simplest, dumbest way to do that is to only search before 2022.”

One under-discussed impact of AI slop and synthetic media, says Brain, is how it increases our “cognitive load” when viewing anything online. When we can no longer immediately assume any of the media we encounter was made by a human, the act of using social media or browsing the web is transformed into a never-ending procession of existential double-takes.

This cognitive dissonance extends to everyday tasks that require us to use the internet—which is practically everything nowadays. Looking for a house or apartment? Companies are using genAI tools to generate pictures of houses and rental properties, as well as the ads themselves. Trying to sell your old junk on Facebook Marketplace? Meta’s embrace of generative AI means you may have to compete with bots, fake photos, and AI-generated listings. And when we shop for beauty products or view ads, synthetic media tools are taking our filtered and impossibly-idealized beauty standards to absurd and disturbing new places.

In all of these cases, generative AI tools further thumb the scales of power—saving companies money while placing a higher cognitive burden on regular people to determine what’s real and what’s not.

“I open up Pinterest and suddenly notice that half of my feed are these incredibly idealized faces of women that are clearly not real people,” said Brain. “It’s shoved into your face and into your feed, whether you searched for it or not.”

Currently, Slop Evader can be used to search pre-GPT archives of seven different sites where slop has become commonplace, including YouTube, Reddit, Stack Exchange, and the parenting site MumsNet. The obvious downside to this, from a user perspective, is that you won’t be able to find anything time-sensitive or current—including this very website, which did not exist in 2022. The experience is simultaneously refreshing and harrowing, allowing you to browse freely without having to constantly question reality, but always knowing that this freedom will be forever locked in time—nostalgia for a human-centric world wide web that no longer exists.

Of course, the tool’s limitations are part of its provocation. Brain says she has plans to add support for more sites, and release a new version that uses DuckDuckGo’s search indexing instead of Google’s. But the real goal, she says, is prompting people to question how they can collectively refuse the dystopian, inhuman version of the internet that Silicon Valley’s AI-pushers have forced on us.

“I don’t think browser add-ons are gonna save us,” said Brain. “For me, the purpose of doing this work is mostly to act as a provocation and give people examples of how you can refuse this stuff, to furnish one’s imaginary for what a politics of refusal could look like.”

With enough cultural pushback, Brain suggests, we could start to see alternative search engines like DuckDuckGo adding options to filter out search results suspected of having synthetic content (DuckDuckGo added the ability to filter out AI images in search earlier this year). There’s also been a growing movement pushing back against the new AI data centers threatening to pollute communities and raise residents’ electricity bills. But no matter what form AI slop-refusal takes, it will need to be a group effort.

“It’s like with the climate debate, we’re not going to get out of this shitshow with individual actions alone,” she added. “I think that’s the million dollar question, is what is the relationship between this kind of individual empowerment work and collective pushback.”




The 'psyops' revealed by X are entirely the fault of the perverse incentives created by social media monetization programs.#AI #AISlop


America’s Polarization Has Become the World's Side Hustle


A new feature on X is making people suddenly realize that some large portion of the divisive, hateful, and spammy content designed to inflame tensions, or at the very least to get lots of engagement on social media, is being published by accounts that pretend to be based in the United States but are actually run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia. An account called “Ivanka News” is based in Nigeria, “RedPilledNurse” is from Europe, “MAGA Nadine” is in Morocco, “Native American Soul” is in Bangladesh, and “Barron Trump News” is based in Macedonia, among many, many others.

Inauthentic viral accounts on X are just the tip of the iceberg, though, as we have reported. A huge amount of the viral content about American politics and American news on social media comes from sock puppet and bot accounts monetized by people in other countries. The rise of easy-to-use, free generative AI tools has supercharged this effort, and social media monetization programs have incentivized it; those programs are almost entirely to blame. The current disinformation and slop phenomenon on the internet today makes the days of ‘Russian bot farms’ and ‘fake news pages from Cyprus’ seem quaint; the problem is now fully decentralized, distributed across the world, and almost entirely funded by social media companies themselves.

This will not be news to people who have been following 404 Media, because I have done multiple investigations about the perverse incentives that social media and AI companies have created to incentivize people to fill their platforms with slop. But what has happened on X is the same thing that has happened on Facebook, Instagram, YouTube, and other social media platforms (it is also happening to the internet as a whole, with AI slop websites laden with plagiarized content and SEO spam and monetized with Google ads). Each social media platform has either an ad revenue sharing program, a “creator bonus” program, or a monetization program that directly pays creators who go viral on their platforms.

This has created an ecosystem of side hustlers trying to gain access to these programs, and of YouTube and Instagram creators teaching people how to do so. It is easy to find these guide videos if you search for things like “monetized X account” on YouTube. Translating that phrase and searching in other languages (such as Hindi, Portuguese, Vietnamese, etc.) will bring up guides in those languages. Within seconds, I was able to find a handful of YouTubers explaining in Hindi how to create monetized X accounts; other videos on the creators’ pages explain how to fill these accounts with AI-generated content. These guides also exist in English, and it is increasingly popular to sell guides to making “AI influencers,” AI newsletters, Reels accounts, and TikTok accounts, regardless of the country you’re from.
Examples include “AK Educate” (which is one of thousands), which posts every few days about how to monetize accounts on Facebook, YouTube, X, Instagram, TikTok, Etsy, and others. “How to create Twitter X Account for Monitization [sic] | Earn From Twitter in Pakistan,” is the name of a typical video in this genre. These channels are not just teaching people how to make and spam content, however. They are teaching people specifically how to make it seem like they are located in the United States, and how to create content that they believe will perform with American audiences on American social media. Sometimes they are advising the use of VPNs and other tactics to make it seem like the account is posting from the United States, but many of the accounts explain that doing this step doesn’t actually matter.

Americans are being targeted because advertisers pay higher ad rates to reach American internet users, who are among the wealthiest in the world. In turn, social media companies pay more money if the people engaging with the content are American. This has created a system where it makes financial sense for people from the entire world to specifically target Americans with highly engaging, divisive content. It pays more.

For the most part, the only ‘psyop’ here is one being run on social media users by social media companies themselves in search of getting more ad revenue by any means necessary.

For example: AK Educate has a video called “7 USA Faceless Channel Ideas for 2025,” and another video called “USA YouTube Channel Kaise Banaye [how to].” The first of these videos is in Hindi but has English subtitles.

“Where you get $1 on 1,000 views on Pakistani content,” the video begins, “you get $5 to $7 on 1,000 views on USA content.”

“As cricket is seen in Pakistan and India, boxing and MMA are widely seen in America,” he says. Channel ideas include “MMA,” “Who Died Today USA,” “How ships sink,” news from wars, motivational videos, and Reddit story voiceovers. To show you how pervasive this advice to make channels that target Americans is, look at this, which is a YouTube search for “USA Channel Kaise Banaye”:



Screengrabs from YouTube videos about how to target Americans
One of these videos, called “7 Secret USA-Based Faceless Channel Ideas for 2026 (High RPM Niches!)” starts with an explanation of “USA currency,” which details what a dollar is and what a cent is, and its value relative to the rupee, and goes on to explain how to generate English-language content about ancient history, rare cars, and tech news. Another video I watched showed, from scratch, how to create videos for a channel called “Voices of Auntie Mae,” which are supposed to be inspirational videos about Black history that are generated using a mix of ChatGPT, Google Translate, an AI voice tool called Speechma, Google’s AI image generator, CapCut, and YouTube. Another shows how to use Bing search, Google News Trends, Perplexity, and video generators to create “a USA Global News Channel Covering World Events,” which included making videos about the war in Ukraine and Chinese military parades. A video podcast about success stories included how a man made a baseball video called “baseball Tag of the year??? #mlb” in which 49 percent of viewers were in the USA: “People from the USA watch those types of videos, so my brother sitting at home in India easily takes his audience to an American audience,” one of the creators said in the video.

I watched video after video created by a channel called “Life in Rural Cambodia” about how to create and spam AI-generated content using only your phone. Another video, presented by an AI-generated woman speaking Hindi, explains how it is possible to copy-paste text from CNN into a Google Doc, run it through a program called “GravityWrite” to alter it slightly, have an AI voice read it, and post the resulting video to YouTube.
A huge and growing amount of the content that we see on the internet is created explicitly because these monetization programs exist. People are making content specifically for Americans. They are not always, or even usually, creating it because they are trying to inflame tensions. They are making it because they can make money from it, and because content viewed by Americans pays the most and performs the best. The guides to making this sort of thing focus entirely on how to make content quickly, easily, and using automated tools. They focus on how to steal content from news outlets, source things from other websites, and generate scripts using AI tools. They do not focus on spreading disinformation or fucking up America, they focus on “making money.” This is a problem that AI has drastically exacerbated, but it is a problem that has wholly been created by social media platforms themselves, and which they seem to have little or no interest in solving.

The new feature on X that exposes this fact is notable because people are actually talking about it, but Facebook and YouTube have had similar features for years, and it has changed nothing. Clicking any random horrific Facebook slop page, such as this one called “City USA,” which exclusively posts photos of celebrities holding birthday cakes, shows that even though it lists its address as being in New York City, the page is being run by someone in Cambodia. This page called “Military Aviation,” which lists its address as “Washington DC,” is actually based in Indonesia. This page called “Modern Guardian,” which exclusively posts positive, fake AI content about Elon Musk, lists itself as being in Los Angeles, but Facebook’s transparency tools say it is based in Cambodia.

Besides journalists and people who feel like they are going crazy looking at this stuff, there are, realistically, no social media users who are going into the “transparency” pages of viral social media accounts to learn where they are based. The problem is not a lack of transparency, because being “transparent” doesn’t actually matter. The only thing revealed by this transparency is that social media companies do not give a fuck about this.




An account is spamming horrific, dehumanizing videos of immigration enforcement because the Facebook algorithm is rewarding them for it.#AI #AISlop #Meta


Andrew Cuomo Uses AI MPREG Schoolhouse Rock Bill to Attack Mamdani, Is Out of Ideas#AISlop


Moderators reversed course on their open-door AI policy after fans filled the subreddit with AI-generated Dale Cooper slop.#davidlynch #AISlop #News


Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies


People on r/twinpeaks flooded the subreddit with AI slop images of FBI agent Dale Cooper and ChatGPT-generated scripts after the community’s moderators opened the door to posting AI art. The tide of terrible Twin Peaks-related slop lasted for about two days before the subreddit’s mods broke, reversed their decision, and deleted the AI-generated content.

Twin Peaks is a moody TV show that first aired in the 1990s and was followed by a third season in 2017. The work of surrealist auteur David Lynch, it influenced lots of the TV shows and video games that came after it and has a passionate fan base that still shares theories and art to this day. Lynch died earlier this year, and since his passing he’s become a talking point for pro-AI art people, who point to several interviews and secondhand stories they claim show Lynch had embraced an AI-generated slop future.

On Tuesday, a mod posted a long announcement that opened the doors to AI on the sub. In a now deleted post titled “Ai Generated Content On r/twinpeaks,” the moderator outlined the position that the sub was a place for everyone to share memes, theories, and “anything remotely creative as long as it has a loose string to the show or its case or its themes. Ai generated content is included in all of this.”

The post went further. “We are aware of how Ai ‘art’ and Ai generated content can hurt real artists,” the post said. “Unfortunately, this is just the reality of the world we live in today. At this point I don’t think anything can stop the Ai train from coming, it’s here and this is only the beginning. Ai content is becoming harder and harder to identify.”

The mod then asked Redditors to follow an honor system and label any post that used AI with a special new flair so people could filter out those posts if they didn’t want to see them. “We feel this is a best of both worlds compromise that should keep everyone fairly happy,” the mod said.

An honor system, a flair, and a filter did not mollify the community. In the following 48 hours, Lynch fans expressed their displeasure by showing r/twinpeaks what it looks like when no one can “stop the Ai train from coming.” They filled the subreddit with AI-generated slop in protest, including horrifying pictures of series protagonist Cooper doing an end-zone dance on a football field while Laura Palmer screamed in the sky, and more than a few awful ChatGPT-generated scripts.
Image via r/twinpeaks.
Free-IDK-Chicken, a former mod of r/twinpeaks who resigned over the AI debacle, said the post wasn’t run by other members of the mod team. “It was poorly worded. A bad take on a bad stance and it blew up in their face,” she told 404 Media. “It spiraled because it was condescending and basically told the community--we don’t care that it’s theft, that it’s unethical, we’ll just flair it so you can filter it out…they missed the point that AI art steals from legit artists and damages the environment.”

According to Free-IDK-Chicken, the subreddit’s mods had been fighting over whether or not to ban AI art for months. “I tried five months ago to get AI banned and was outvoted. I tried again last month and was outvoted again,” she said.

On Thursday morning, with the subreddit buried in AI slop, the mods of r/twinpeaks relented, banned AI art, and cleaned up the protest spam. “After much thought and deliberation about the response to yesterday's events, the TP Mod Team has made the decision to reverse their previous statement on the posting of AI content in our community,” the mods said in a post announcing the new policy. “Going forward, posts including generative AI art or ChatGPT-style content are disallowed in this subreddit. This includes posting AI google search results as they frequently contain misinformation.”

Lynch has become a mascot for pro-AI boosters. An image on a pro-AI art subreddit depicts Lynch wearing an OpenAI shirt and pointing at the viewer. “You can’t be punk and also be anti-AI, AI-phobic, or an AI denier. It’s impossible!” reads a sign next to the AI-generated picture of the director.
Image via r/slopcorecirclejerk
As evidence, they point to a British Film Institute interview published shortly before his death, where he lauds AI and calls it “incredible as a tool for creativity and for machines to help creativity.” AI boosters often leave off the second part of the quote. “I’m sure with all these things, if money is the bottom line, there’d be a lot of sadness, and despair and horror. But I’m hoping better times are coming,” Lynch said.
Image via r/slopcorecirclejerk
The other big piece of evidence people use to claim Lynch was pro-AI is a secondhand account given to Vulture by his neighbor, the actress Natasha Lyonne. According to the interview in Vulture, Lyonne asked Lynch for his thoughts on AI and Lynch picked up a pencil and told her that everyone has access to it and to a phone. “It’s how you use the pencil. You see?” He said.

Setting aside the environmental and ethical arguments against AI-generated art, if AI is a “pencil,” most of what people make with it is unpleasant slop. Grotesque nonsense fills our social media feeds and AI-generated Jedis and Ghiblis have become the aesthetic of fascism.

We've seen other platforms and communities struggle with keeping AI art at bay when they've allowed it to exist alongside human-made content. On Facebook, Instagram, and YouTube, low-effort garbage is flooding online spaces and pushing productive human conversation to the margins, while floating to the top of engagement algorithms.

Other artist communities are pushing back against AI art in their own ways: Earlier this month, DragonCon organizers ejected a vendor for displaying AI-generated artwork. Artists’ portfolio platform ArtStation banned AI-generated content in 2022. And earlier this year, artists protested the first-ever AI art auction at Christie’s.




AI slop is taking over workplaces. Workers said that they thought of their colleagues who filed low-quality AI work as "less creative, capable, and reliable than they did before receiving the output."#AISlop #AI


"These AI videos are just repeating things that are on the internet, so you end up with a very simplified version of the past."#AI #AISlop #YouTube #History


80s Nostalgia AI Slop Is Boomerfying the Masses for a Past That Never Existed#AISlop


Real Footage Combined With AI Slop About DC Is Creating a Disinformation Mess on TikTok#News #AISlop


Real Footage Combined With AI Slop About DC Is Creating a Disinformation Mess on TikTok


TikTok is full of AI slop videos about the National Guard’s deployment in Washington, D.C., some of which use Google’s new VEO AI video generator. Unlike previous efforts to flood the zone with AI slop in the aftermath of a disaster or major news event, some of the videos blend real footage with AI footage, making it harder than ever to tell what’s real and what’s not, which has the effect of distorting people’s understanding of the military occupation of DC.

At the start of last week, the Trump administration announced that all homeless people should immediately move out of Washington, DC. This was followed by an order for federal agents to occupy the city and remove tents where homeless people had been living. These events were reported by many news outlets; this footage from NBC, for example, shows the reality of at least one part of the exercise. On TikTok, though, this is just another popular trending topic, where slop creators and influencers can work together to create and propagate misinformation.

404 Media has previously covered how perceptions of real-life events can be quickly manipulated with AI images and footage. This is more of the same, but with the release of new, better AI video creation tools like Google’s VEO, the footage is more convincing than ever.
Some of the slop is obvious fantasy-driven engagement farming and gives itself away aesthetically or through its content. This video and this very similar one show tents being pulled from a vast field into the back of a moving garbage truck, with the Capitol building in the background, on the Washington Mall. They’re not tagged as AI, but at least a few people in the comments identify them as such; both videos still have over 100,000 views. This somehow more harrowing one, set to a Hunger Games song, has 41,000.

@biggiesmellscoach Washington DC cleanup organized by Trump. Homeless are now given secure shelters, rehab, therapy, and help. #washingtondc #fyp #satire #trending #viral ♬ origineel geluid - nina.editss

With something like this video, made with VEO, the slop begins to feel more like a traditional news report. It has 146,000 views and it’s made of several short clips with news-anchorish voiceover. I had to scroll down past a lot of “Thank you president Trump” and “good job officers” comments to find any that pointed out that it was fake, even though the watermark for Google’s VEO generator is in the corner.

The voiceover also “reports” semi-accurately on what happened in DC, but without any specifics: “Police moved in today, to clear out a homeless camp in the city. City crews tore down tents, packed up belongings, and swept the park clean. Some protested, some begged for more time. But the cleanup went on. What was once a community is now just an empty field.” I found the same video posted to X, with commenters on both platforms taking offense at the use of the term “community.”



Comments on the original and X postings of this video, which is clearly made with VEO

I also found several examples of shorter slop clips like this one, which has almost 1 million views, and this one, with almost half a million, which both exaggerate the scale and disarray of the encampments. In one of the videos, the entirety of an area that looks like the National Mall (but isn’t) has been taken over by tents. Quickly scrolling these videos gives the viewer an incorrect understanding of what the DC “camps” and “cleanup” looked like.


These shorter clips have almost 1.5 million views between them

The account that posted these videos was called Hush Documentary when I first encountered it but had changed its name to viralsayings by Monday evening. The profile also has a five-second AI-generated clip of ATF officers patrolling a neighborhood; it is marked as AI and has 89,000 views.

What’s also happening is that real footage and fake footage are being mixed together in a popular greenscreen TikTok format, where a person gives commentary (basically, reporting or commenting on the news) while footage plays in the background. That is happening in this clip, which features that same AI footage of ATF officers.


The viralsayings version of the footage is marked as AI. The remixed version, combined with real footage, is not.

I ended up finding a ton of instances where accounts mixed slop clips of the camp clearings with seemingly real footage; notably, many of them included this viral original footage of police clearing a homeless encampment in Georgetown. But a lot of them are ripping each other off. For example, many accounts have ripped off the voiceover of this viral clip from @Alfredito_mx (which features real footage) and put it over top of AI footage. This clone from omivzfrru2 has nearly 200,000 views and features both real and AI clips; I found at least thirty other copies, all with between roughly 2,000 and 5,000 views.

The scraping-and-recreating robot went extra hard with this one: the editing is super glitchy, the videos overlay each other, the host flickers around the screen, and random legs walk by in the background.

@mgxrdtsi 75 homeless camps in DC cleared by US Park Police since Trump's 'Safe and Beautiful' executive order #alfredomx #washington #homeless #safeandbeautiful #trump ♬ original sound - mgxrdtsi

So, one viral video from a popular creator has spawned thousands of mirrors in the hope of chipping off a small amount of the engagement of the original; those copies need footage, go looking for content in the tags, encounter the slop, and can’t tell / don’t care if it’s real. Then more thousands of people see the slop copies and end up getting a totally incorrect view of an actual unfolding news situation.

In these videos, it’s only totally clear to me that the content is fake because I found the original sources. Lots of this footage is obviously fake if you’re familiar with the actual situation in DC or familiar with the geography and streets in DC. But most people are not. If you told me “some of these shots are AI,” I don’t think I could identify all of those shots confidently. Is the flicker or blurring onscreen from the footage, from a bad camera, from a time-lapse or being sped up, from endless replication online, or from the bad green screen of a “host”? Now, scrolling social media means encountering a mix of real and fake video, and the AI fakes are getting good enough that deciphering what’s actually happening requires a level of attention to detail that most people don’t have the knowledge or time for.




Built using AI technology from Baidu and DeepSeek, these virtual livestreamers sell everything from wet wipes to printers and work 24 hours a day, seven days a week.#wired #AISlop


People failing to identify a video of adorable bunnies as AI slop has sparked worries that many more people could fall for online scams.#AISlop #TikTok


LeBron James' Lawyers Send Cease-and-Desist to AI Company Making Pregnant Videos of Him

Viral Instagram accounts making LeBron 'brainrot' videos have also been banned.#AISlop




Inside the Economy of AI Spammers Getting Rich By Exploiting Disasters and Misery#AI #AISlop



"Challah Horse" was a Polish meme warning about Facebook AI spam 'targeted at susceptible people' that was stolen by a spam page targeted at susceptible people.#AISpam #Facebook #MarkZuckerberg #AISlop #FacebookSpam



Zuckerberg 'Loves' AI Slop Image From Spam Account That Posts Amputated Children#AISlop #AISpam #Facebook



Some of the most popular content on Facebook leading up to the election was AI-generated Elon Musk inspiration porn made by people in other countries that went viral in the US.#AI #Facebook #AISlop