


Disney's Sora Disaster Shows AI Will Not Revolutionize Hollywood


Barely three months ago, the Walt Disney Company announced that it would be bringing user-generated AI slop to Disney+ as part of a landmark $1 billion investment in OpenAI that would allow people to use Sora to create short videos featuring more than 200 beloved Disney characters. The announcement was so important that Disney’s then-CEO Bob Iger and OpenAI CEO Sam Altman both championed it in a press release full of the kind of cope Silicon Valley AI boosters and some Hollywood executives use to suggest AI will unleash a new era of moviemaking and storytelling that is cheaper than making movies with human workers.

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Iger said.

“Disney will become a major customer of OpenAI, using its APIs to build new products, tools, and experiences, including for Disney+, and deploying ChatGPT for its employees,” the press release stated. “Under the license, fans will be able to watch curated selections of Sora-generated videos on Disney+, and OpenAI and Disney will collaborate to utilize OpenAI’s models to power new experiences for Disney+ subscribers, furthering innovative and creative ways to connect with Disney’s stories and characters. Sora and ChatGPT Images are expected to start generating fan-inspired videos with Disney’s multi-brand licensed characters in early 2026.”

Tuesday was a disastrous day for that future, and the complete and utter failure of both Sora and Disney’s dalliance with AI garbage suggests AI slop is indeed not the future of Hollywood. Disney did not even get to the point where it allowed people to build anything with Disney characters before pulling the plug on the whole endeavor and its investment.

Sora is dead. May the memory of its four-month existence as a copyright infringement machine that was also used to make videos of men strangling women and ICE arresting undocumented immigrants be a blessing.

Disney is pulling out of its billion-dollar investment in OpenAI entirely. Other efforts to slopify Hollywood look underwhelming, appear to have been quietly shelved, or have utterly failed to gather any audience whatsoever. This news does not bode well for OpenAI and it likely does not bode well for Paramount’s megamerger with Warner Brothers, a deal whose financial terms and the debt involved only make sense if you can believe in a future in which the cost of creating blockbuster movies is drastically reduced by AI via huge numbers of people losing their jobs.

At the time of Disney’s announcement with OpenAI, it was hard to imagine why Disney would infect its flagship paid streaming service with content from a service whose viral videos consisted of users turning Pikachu into a felon and SpongeBob into Hitler. It was not clear why Disney would want AI slop made by randos to live next to, say, the $200 million Toy Story 4 or any number of Disney’s masterpieces. It was also hard to imagine why a company that has so aggressively enforced its copyright would suddenly say all bets are off for Sam Altman’s plagiarism machine. The only thing that made any sense is that Hollywood executives, like Silicon Valley executives, hate paying for human labor so much that they have convinced themselves that their customers would happily consume AI slop if it was shoved down their throats.

After Sora’s initial novelty wore off, it became clear that people do not actually want this, and that the people using Sora were using it, at great financial cost to OpenAI, largely to take videos off-platform to spam other social media sites. The Sora subreddit has been basically dead for months outside of people attempting to figure out how to get it to create nudes or people complaining about content violations. When I scrolled Sora Tuesday evening I almost exclusively saw videos that had few or no likes or comments. I saw very little Disney content, though I did see a lot of South Park, Peppa Pig, and SpongeBob videos, none of which were very good.



The death of Sora is a good time to check in on how other attempts to slopify Hollywood are going. In December 2024, I wrote about Chinese television giant TCL’s attempt to make an AI-generated movie studio called TCL Film Machine, which was pitched as a “key pillar of TCLArt, an important brand initiative of TCL to make art more accessible and inspiring worldwide.” I went to the premiere of a series of short films that were pitched as a new way of making movies faster and cheaper. At the time, I asked Chris Regina, TCL’s then Chief Content Officer and a leader of the TCL Film Machine project, what the plan was.

“If you can imagine where we might be a year or 18 months from now, I think that in some ways is probably what scares a lot of the industry because they can see where it sits today, and as much as they want to poke holes or be critical of it, they do realize that it will continue to be better,” he said, 14 months ago.

Regina and another TCL executive on that project now have other jobs. TCL itself has released the five shorts I saw, as well as an 11-minute, widely mocked romcom film called Next Stop Paris, and a four-minute film called Memory Maker. Memory Maker was released 13 months ago and has 1,771 views on YouTube. Next Stop Paris has 10,000 views on YouTube. Comments have been turned off for both movies. The “applications” page for prospective TCL Film Machine projects is now just a static page, and TCL hasn’t mentioned AI films in any of its press releases in roughly a year; many of its recent announcements have to do with releasing reruns of shows from the 80s and 90s.

Meanwhile, much-hyped “AI movies” and “AI special effects,” including the Brad Pitt-Tom Cruise AI fight scene that the New York Times boldly declared “spooked Hollywood,” have been wildly overhyped: they still have various continuity errors and an uncanny feel, or are simply not movies in any meaningful sense.

This is not to say that AI will have no role in Hollywood or that people are not making money from AI slop. Hollywood studios are using AI behind the scenes for editing, storyboarding, scratch voiceover, and a handful of other things. But the wild hype of AI slop as a direct threat to human storytelling and AI tools as a replacement for talented humans in Hollywood has not come to pass and it’s not clear if it ever will. The AI movies at AI film festivals continue to suck and the people who show up to them are largely people involved in making them or invested in having them work out. AI slop is effective on social media, meanwhile, not because it is good or because people like it but because these platforms are flooded with it, because social media companies are invested in making generative AI tools, and because their algorithms are wildly broken. It turns out when you try to serve slop on a product people pay for, no one wants it.

And the end of Sora does not mean there is no demand for AI video generators, but it does mean that the overwhelming use case for AI video generators continues to be what it has always been: people making porn, nonconsensual sexual imagery, disinformation, and low-effort slop at scale. The people making this type of content do not want to deal with guardrails or limitations and so have largely flocked to open source and Chinese models. When you take away those use cases, it turns out there’s basically nothing left.



Widely cited AI labor research ignores the most important thing AI is doing: killing the human internet.


AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet


Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic’s paper, called “Labor market impacts of AI: A new measure and early evidence,” essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job’s tasks “are theoretically possible with AI,” which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW’s Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.)

In his thread, Mims makes the case that the “theoretical capability” of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. It’s a good and fair observation: the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that is exactly what it is doing. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily,” the researchers write. This is based in part on the “Anthropic Economic Index,” which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include “Complete humanities and social science academic assignments across multiple disciplines,” “Draft and revise professional workplace correspondence and business communications,” and “Build, debug, and customize web applications and websites.”

Not included in any of Anthropic’s research are extremely popular uses of AI such as “create AI porn” and “create AI slop and spam.” These uses are destroying discoverability on the internet and causing cascading societal and economic harms. Researchers appear to be too squeamish or too embarrassed to grapple with the fact that people love to use AI to make porn, and people love to use AI to spam social media and the internet, inherently causing economic harm to creators, adult performers, journalists, musicians, writers, artists, website owners, small businesses, etc. As Emanuel wrote in our first 404 Media Generative AI Market Analysis, people love to cum, and many of the most popular generative AI websites have an explicit focus on AI porn and the creation of nonconsensual AI porn. Anthropic’s research continues a time-honored tradition among AI companies that want to highlight the “good” uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for. (It may be the case that people are disproportionately using Claude for more traditional work applications, but any study on the “labor market impacts of AI” should not focus on the uses of one single tool and extrapolate that out to every other tool. For what it’s worth, jailbroken versions of Claude are very popular among sexbot enthusiasts.)

Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth.

Anthropic’s paper does attempt to estimate what the effect of AI will be on “arts and media,” but again, the way the researchers do this is by attempting to decide whether AI can directly do the tasks that AI researchers assume are required by someone with a job in “arts and media.” Other widely-cited papers about AI-related job loss also do not really attempt to consider the potential macro impacts of the ongoing deadening and zombification of the internet, and instead focus on “AI exposure,” which is largely an attempt to predict or measure whether an AI or LLM could directly replace specific tasks that people need to do. Widely-cited papers from the National Bureau of Economic Research and Brookings released over the last several months attempt to determine the adaptability of workers in specific sectors to having many of their tasks automated by AI. The Brookings paper, at least, mentions the possibility of a society-wide shift that is impossible to predict: “the evidence underlying the adaptive capacity estimates here is derived primarily from observed effects in localized displacement events, rather than from large-scale employment shifts across occupations. As a result, the index may be most informative when displacement is relatively isolated—for example, when a worker loses their job but related occupations remain stable. In scenarios in which AI affects clusters of related occupations simultaneously, structural job availability may matter more than individual-level characteristics. Moreover, if AI fundamentally transforms the economy on a scale comparable to the industrial revolution (as some experts have suggested could be possible), it could make entire skill sets redundant across several occupations simultaneously.”

To be clear, AI-driven job loss is a critical thing to study and to consider. But many, many jobs, side hustles, and economic activity more broadly rely on “the internet,” or social media broadly defined. Study after study shows that Google is getting worse, traffic to websites is down, and an increasing amount of both web traffic and web content is being generated by AI and bots. Anecdotally, creators and influencers have told us it’s getting harder to compete with AI slop and harder to justify spending days or weeks making content just to publish it onto platforms where their AI competitors can brute force the recommendation algorithms effortlessly. We have heard from websites that have had to lay people off or shut down because Google’s AI Overviews have destroyed their web traffic or because they lose out on search engine rankings to AI slop. Authors are regularly competing with AI-plagiarized versions of their books on Amazon, and Spotify is getting overrun with AI-generated music, too.

This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media. We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.


LeBron James' Lawyers Send Cease-and-Desist to AI Company Making Pregnant Videos of Him

Viral Instagram accounts making LeBron 'brainrot' videos have also been banned.

"Challah Horse" was a Polish meme warning about Facebook AI spam 'targeted at susceptible people' that was stolen by a spam page targeted at susceptible people.
