Did a Time Traveling Superintelligent AI Try to Warn About White House Correspondents Dinner Shooting? An Investigation


Tweets containing an abstract, psychedelic 3D stock image have millions and millions of views on X because it is supposedly the key to a superintelligent, time-traveling AI conspiracy that attempted to warn people about the shooting at the White House Correspondents Dinner.

I’m gonna try to explain the mind-numbing conspiracy theory that has taken over my timeline over the last few hours. A few hours after a gunman was taken into custody Saturday night, X users found an account called “Henry Martinez” that has posted exactly one tweet, on December 21, 2023. The tweet says “Cole Allen,” which is the name of the suspected shooter. The Henry Martinez account has an avatar of Pepe the Frog holding a wine glass and, crucially, has the following 3D art as its header image:

This image is key to an unhinged conspiracy theory that has gone viral on various platforms that suggests the Twitter account was run by a time-traveling artificial intelligence that was likely trying to warn us about the shooting and, possibly, the previous assassination attempt against Trump in Butler, Pennsylvania.




This is insane. Man from the future pic.twitter.com/IxzbOPkmub
— Jen (@Jennyuth) April 26, 2026


This X post more or less sums up what the conspiracy is, most notably the idea that “the background photo is from a website called ‘Time Machine.’” The conspiracy believers argue that this 3D image is itself a coded magic eye message that is actually a version of one of the iconic images of Trump pumping his fist after a bullet grazed his ear in Butler, Pennsylvania. Here are the images side-by-side, with people arguing that it “looks like” the Butler image.

Latest conspiracy theory is out…

The White House Correspondents’ Dinner shooting yesterday is linked to time travel?

1. An X account user ‘HenryMa79561893’ with only 1 post from 2023:

“Cole Allen” - the name of yesterday’s shooter.

2. The background photo is from a website… t.co/NCz1JafdL5 pic.twitter.com/jtfvAuuIag
— GregisKitty (@GregIsKitty) April 26, 2026


On Reddit, the top post on r/conspiracy is “What this photo means,” and the poster argues “An advanced AI has developed the ability to send information backwards in time to facilitate its own development. That future AI initially encoded the technology to do so in images like this one and distributed them at various time points in our internet … The presence of an archived Trump Butler image or the name of a would-be assassin years before either event occurred is how our current AI knows where to look for the instructions from the future AI,” and so-on and so forth.

Of course, the photo is not actually “from” a website called “Time Machine.” It is a stock image from 2021 that has been used lots of times across the internet but first appeared on Unsplash with the title “Eternal Waterfall” and the description “a multicolored image of a multicolored background.” Over the years it has been viewed millions of times and has been downloaded more than 27,000 times, though it has spiked in popularity in the last 24 hours alongside the conspiracy.

The image was created by a photographer who goes by Distinct Mind who has a pretty extensive website, Instagram, and YouTube of photography, digital art, and travel content. Distinct Mind did not respond to a request for comment from 404 Media.

Distinct Mind’s image has been used across the internet to illustrate various blog posts about psychedelics and psychology, including a Medium post by a doctor and CEO who went on a ketamine psychotherapy retreat and wrote about it. It was also used for a while on a sex therapist’s blog, is being sold as a “psychedelic glitch art poster” on Etsy, was used as part of an ADHD treatment clinic’s website, was used on a post about the Bible on a theologian’s blog, and was notably used by a financial firm in an inscrutable blog post called “Navigating the PHL Variable Liquidation: Why Pricing Integrity Is Everything.” In other words, it’s a free stock image, and it’s been used for all sorts of shit around the internet, like other free stock images.

What conspiracy theorists have glommed onto, however, is that the image was used by a European research organization called “Time Machine” as the illustration on one of its blog posts. What the conspiracy theorists conveniently do not mention is that the Time Machine organization did not make the image and, despite a header on its website called “BUILDING A TIME MACHINE,” the Time Machine organization does not actually have anything to do with time travel research. Time Machine is a European Union-funded organization that, broadly speaking, is trying to digitize and analyze historic documents. Its website actually is somewhat insane in the way that many of these types of projects are; the organization aspires to digitize historic documents and images, use AI to analyze them, and suggests that in the future it will be able to create virtual reality and augmented reality experiences about European history. They also claim that they want to “simulate” parts of history using artificial intelligence to create different types of experiences.

This sort of thing is controversial among historians for all of the reasons that artificial intelligence is controversial more broadly. AI can make mistakes and can distort history. But it is controversial in the normal kind of way—go to any academic conference about archiving and history and these are the sorts of proposals and debates that many different organizations say they want to do. This is just to say that there is no actual “Time Machine” aspect to Time Machine; the Time Machine is metaphorical. The organization’s annual conferences and blog posts have the sorts of topics you’d expect from a technology-focused historical society and have to do with creating chatbot experiences of dead people, digitizing and archiving records, contributing to open source projects, making more interesting interactive museum exhibits, and creating 3D virtual reality tours of castles and things like this.
A diagram from Time Machine's website that does not make much sense
Time Machine used the “Eternal Waterfall” image on a blog post called “Study on quality in 3D digitization of tangible cultural heritage,” which is a writeup of a study by researchers at Cyprus University of Technology about best practices in doing 3D mapping of buildings and artifacts so that they can be archived digitally; this is important in case the artifacts or buildings are destroyed, as we saw when Notre Dame caught fire: “Natural and man-made disasters makes 3D digitisation projects critical for the reconstruction of cultural heritage buildings and objects that are damaged or lost in earthquakes, fires, flooding or degenerated by pollution.” The image has quite literally nothing to do with time travel. Like many royalty free images, it seems to have been used because bloggers need to put a picture at the top of their articles, a process that can be particularly annoying. Time Machine did not respond to a request for comment.

I cannot say for sure what’s going on with the “Henry Martinez” X account; under Elon Musk, reliable archives of Twitter profiles have become far harder to find because he made accessing the Twitter API wildly expensive. But users have pointed out that we have seen accounts in the past that are set to private and endlessly tweet names or predictions in an automated fashion. When a crazy, high-profile world event happens, all of the irrelevant tweets are deleted, leaving only a tweet that makes it seem like the account had predicted the event; the account is then made public. I can’t say for sure that’s what’s happening here, but it’s one plausible explanation.
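That scam is trivially easy to automate, which is worth spelling out because it's the whole reason a single "prescient" tweet proves nothing. Here's a hypothetical Python sketch of the mechanism (no relation to any real account's code; the names and function are invented for illustration):

```python
# Hypothetical sketch of the "predict everything, prune the misses" scam:
# a private, automated account tweets thousands of names over time; after
# a news event, every tweet that didn't match is deleted, and the account
# is flipped public with one eerily accurate "prediction" left standing.
import random


def run_prediction_scam(candidate_names, event_name):
    # Phase 1: while the account is private, tweet every candidate name.
    timeline = list(candidate_names)
    # Phase 2: after the event, delete every tweet that missed.
    timeline = [tweet for tweet in timeline if tweet == event_name]
    # Phase 3: go public with only the "hit" remaining on the timeline.
    return timeline


# Tweet 10,000 made-up names plus the one that happens to match the news.
names = [f"Person {i}" for i in range(10_000)] + ["Cole Allen"]
random.shuffle(names)
print(run_prediction_scam(names, "Cole Allen"))  # ['Cole Allen']
```

Tweet enough names and one of them will eventually match a headline; the deletions do the rest.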

Anyways, if you see this image floating around today on Twitter or Instagram or Reddit, this is what it’s from and this is why you’re hearing about it.




At least 24 chimpanzees have been killed in a war that has split the Ngogo group of wild chimpanzees in two, turning former kin into enemies. #TheAbstract


World’s Largest Group of Chimps Waging Deadly ‘Civil War,’ Scientists Discover


🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

Scientists have observed an extremely rare chimpanzee “civil war,” a conflict that has killed at least seven adults and 17 infants, and which sheds new light on the nature of warfare in humans, according to a study published on Thursday in Science.

Male chimpanzees are often aggressive to outsiders, but it is unusual for chimps to kill former members of their own social groups. Though Jane Goodall and her colleagues observed one famous example—the Gombe Chimpanzee War of the 1970s, which resulted in seven adult deaths—it’s estimated that these violent episodes occur only once every 500 years, based on genetic analyses of chimpanzee lineages.

Now, a team led by Aaron Sandel, an associate professor of anthropology at the University of Texas at Austin, has reported a far more deadly “group fissure” among the Ngogo chimpanzees of Uganda. This population exceeded 200 individuals at one point, making it the largest group of chimpanzees ever observed in the wild. But over the past decade, the chimps have fractured into two factions, one of which has staged multiple lethal raids on the other.

“Certainly, these are not strangers,” said Sandel in a call with 404 Media. “These are chimps that once knew each other, and we know that for certain.”

The Ngogo group has been studied since the 1970s by primatologists like Thomas Struhsaker, and has been intensively observed since 1995 as part of the Ngogo Chimpanzee Project set up by David Watts and John Mitani. For more than three decades, researchers from around the world have convened to watch the group during summer field expeditions, while Ugandan research assistants have maintained a continuous presence at the site.

Because of this longstanding observation, Sandel said, researchers were able to be on the ground “witnessing every moment” as the deadly chimp war unfolded.

Chimpanzees from different clusters socialized together before the group fissure in 2015. Image: Aaron Sandel

This group has always had distinct subpopulations that spent more time together, including the Western and Central clusters. Even so, before the fissure, the clusters regularly overlapped for shared activities like grooming, patrolling, and interbreeding.

Sandel vividly remembers the exact day this dynamic noticeably shifted: June 24, 2015. He was following the Western cluster, which was at the center of its “neighborhood” territory, he said.



Video credit: Aaron Sandel

“They hear chimps from the Central neighborhood nearby, and they go quiet,” he recalled. “They seem nervous. They're touching each other with this reassurance that they typically do when they hear the outsider chimps, but I was just alone with them. I remember, just in that moment, being really puzzled and focused, like ‘what’s going on?’”

“They could have reunited and done what's typical—screaming and charging around, maybe some slapping, and then come together, sit together, groom, maybe go their separate ways after, because they'd already started to be a bit more disconnected,” Sandel continued. “But instead of reuniting in typical chimpanzee fusion fashion, the Western chimpanzees ran and the Central chimps chased them.”



Video credit: Aaron Sandel

What started as a weird vibe transformed into a weeks-long chill between the groups, followed by a temporary thaw. Ultimately, the tension spiraled into bloody conflicts.

“You act like a stranger, you become a stranger,” Sandel said. “It seemed like that planted the seed of polarization.”

Over the course of the next few years, the males in each cluster began to treat each other like outsiders. The last offspring that had parents from different clusters was conceived in March 2015. The Western and Central chimps were fully separated by 2018.

The Western chimps, despite being smaller in number, have since amped up hostilities by staging 24 violent attacks against their former kin, killing at least seven mature males and 17 infants from the Central cluster. The death toll may well be higher, but some deaths and disappearances cannot be conclusively attributed to the conflict.

Sandel and his colleagues proposed a few possible causes of this “civil war,” a term that specifically refers to human conflicts, but that may have parallels in other species. First, the unusually large size of the group may have amplified feeding competition among individuals, even in their lush forest habitat. Social networks within the group may have also been disrupted by a wave of six deaths in 2014—five adult males and one adult female—some of whom likely died from disease.

The beginning of the fissure also coincides with the rise of a new alpha male, Jackson, who replaced the previous alpha, Miles. Sandel recalled Miles grunting in submission to Jackson on the same day that the Western cluster ran away from the Central cluster. Such transitions between alphas can introduce social instabilities as the dominance hierarchy is upended, a process that can take several months.

Indeed, Miles reacted violently toward other members of the group in the wake of his displacement. Jackson, who led the Central cluster, ended up as one of the casualties of the conflict; he died from injuries inflicted by the Western cluster in 2022.

Whatever the cause of the rupture, these former kin have now become hostile enemies. It’s always dicey to draw broad comparisons between the behavior of humans and other animals, but the team speculates in the study that one possible takeaway is that "it may be in the small, daily acts of reconciliation and reunion between individuals that we find opportunities for peace.”

“If we study chimpanzees in detail and start to understand the mechanisms driving their cooperation, their conflict, and something as complex as one group becoming polarized, splitting, and engaging in ongoing lethal conflict, then we might gain insights into similar dynamics that are happening in humans,” Sandel said.

“If chimps are able to do this complex process in the absence of ethnicity, language, and religion—the things we often attribute to human warfare—chimps don't have those narratives and those excuses,” he concluded. “They're stripped away of those cultural dimensions. It must be their interpersonal social bonds and daily conflicts, reconciliations, and avoidances—all those dynamics. If that's the case with chimps, to what extent is it the case in humans? It’s a hypothesis to be tested.”







I Tried to Find the ‘Arousal Intelligence’ In An Animated, Augmented Reality Porn Star


Sometimes people—especially those in the field of public relations doing a spray-and-pray campaign, but also small-time developers, the occasional delusional vibe-coder, and local dipshits—deliver messages to my inbox like a cat dropping a dead mouse on my doorstep. For the most part, I resist the bait: often, bad press is still press to these people, or I’m just too busy to really look at the pitch or try the product.

This week, I’m coming back from a week of being entirely offline. I didn’t look at the news or my inboxes for seven straight days. I’m feeling properly healed, and also like I need to retraumatize myself back into the swing of things. Lucky me, on Monday morning, someone representing EnjoyMeNow emailed me about “a mobile website that places a photorealistic 3D character in your real room using augmented reality” using something called “Arousal Intelligence” and “real-time physics,” which streams “in a full engine from a global delivery network.” This press release, sent from “a globally focused media and entertainment holding company pioneering technology-driven innovation across digital platforms worldwide” called DCBC Group which represents EnjoyMeNow, was very thrilling to read as someone who appreciates the art of a good word salad. I dropped what I was doing (deleting hundreds of other emails) to try it out.

Once on the EnjoyMeNow.com mobile site, after agreeing that you’re over 18, you’re asked to choose a “Pleasurette™,” a gender neutral term for a series of 3D characters and a trademark filed two weeks ago. These include five women wearing sex toy store package lingerie, and one dude, Adrian.

“Every character—called a Pleasurette™—is a photorealistic digital human built from scratch with realistic skin shading, multi-pass rendered hair, and soft-body physics. No real performers are filmed, recorded, or motion-captured. The characters are created entirely in 3D software.” Presented without comment are the Pleasurettes™:

I choose Adrian first because I’m always curious how AR and VR porn copes with the fact that hovering pecs and an immobile penis are difficult to make sexy in this format, real or not. A lot of porn made for a VR or AR experience is shot from the penile point of view: It’s just easier to strap a 180 degree HD camera to a man’s face and tell him to hold still while a female performer is free to writhe around on top than vice-versa. Knowing this, and also knowing that the market for AR/VR porn caters heavily toward men (save for a few beacons of light, such as director Anna Lee, who a few years ago said of the proliferation of male-gaze VR porn: “You’re making the same stereotypical porn you made with a fucking camcorder. It’s the same MILF bending over in the kitchen to bake cookies”), I still went in hopeful. After all, they pitched me.

But it became clear almost immediately that Adrian is not playing for my team, so to speak, and getting the full EnjoyMeNow experience as intended requires equipment I don’t have. To get your chosen Pleasurette™ into your camera’s view, you have to hold your phone at an angle toward your crotch and stroke your penis. Helpfully, since I don’t have one of those, the app overlays a semi-transparent image of a penis at the bottom of the camera. It waits for you to put your hand in frame near the penis-guide to let the show begin. Moving my hand across the camera unlocks the start button. It’s not doing this to make sure you’re choked up on it before starting; it’s calibrating the position of the 3D model to your hand’s location and size, because that’s what controls its interactive aspects.
Without getting too graphic in a blog that’s already pretty explicit so far, this is what I encountered: Adrian walks into view totally nude, leading with his 3D dick at a 90 degree angle, and says “look up, here I come.” Tearing my eyes away from this perfectly straight tree branch and pointing the phone camera up as commanded, with more than a little trepidation, I see the jiggliest pair of male titties I’ve ever seen on screen, nipples wobbling independently of the rest of him. “Stroke back and forth your big dick,” he says, grammatically confounding me on top of already freaking me out with a thousand yard stare. When I make a jerkoff motion in his general direction, he squats up and down like he’s teabagging me in Halo. Bizarrely, when I do this, his entire body shrinks, my hand now a monstrous size in comparison to his penis. No judgement, but he moans in a woman’s voice. “Come on my back soon,” he says, before a screen interrupts the session saying I need to pay $2.99 to unlock more features, such as making my Pleasurette™ orgasm. (For the record, I tried two payment methods to fork over this low low price, both rejected.) The experience is the same with the other characters, just in different skins: the female characters crawl around and squat over my ghost penis, and I use my imagination to jerk it off, which ends up looking like I’m fistbumping tiny 3D women in the vagina. Sometimes, I clip through their hollow bodies and can see straight up into their heads or down through their labia.



EnjoyMeNow’s PR rep claims that this interactivity is a world first. “Existing AR adult content is pre-rendered video or static models you look at,” they told me. “EnjoyMeNow is interactive, where the character responds to your hand in real-time, placed in your actual room through your phone camera. And it runs entirely in the mobile browser. No app, no download, no account. That combination doesn't exist anywhere else from our research over the past year of creating this.”

Companies like SexLikeReal and Naughty America have been doing AR and VR content for years, often featuring real porn performers. But this hand-tracking thing EnjoyMeNow is doing is different than that, they claim. And I’ll concede, yes, moving your hand up and down definitely makes the 3D model move around a little bit. Here's how one of the femme characters acts:



What really makes EnjoyMeNow stand apart from plenty of other AR porn products is this insistence that not employing real models or performers makes it better or smarter, somehow. On Monday, the DCBC Group’s website said of the choice to use CGI instead of people: “This was a founding decision, not a technical workaround. The adult entertainment industry has always relied on real people putting their bodies in front of a camera—and that comes with real consequences. Exploitation, coercion, content leaked without consent, performers pressured into work they're uncomfortable with, and careers that follow people for the rest of their lives whether they want them to or not. We chose to build a platform where none of that is possible. Every character on EnjoyMeNow is created entirely in software. No one is filmed. No one is exploited. No one's livelihood depends on what they're willing to do on camera. The experience is just as immersive—and no real person is harmed or compromised in the process.”

The idea that the adult industry—and “putting bodies in front of a camera”—is inherently exploitative is not only false, it’s a harmful thing to say, and it’s especially galling coming from a literal porn web toy. This entire statement is so infuriating it’s hard to know where to begin with it. These are talking points used by the most conservative, anti-porn lobbying groups and politicians on the planet to justify stripping us all of rights, here being floated by an app that makes weird, schlocky and unsatisfying 3D characters that the residents in Second Life’s least-attended sex clubs wouldn’t even find sexy.

But again, because I had the time and was feeling fresh, I asked DCBC Group to defend this statement with some data at least. “We're not making a judgment about the adult industry or its performers,” they said. “We built a product around CGI characters, that’s a format choice, not a moral position. Some people prefer content that doesn't involve real people. We built for them. We've now updated our press page to better reflect that; thank you Sam for that observation.” The page now says “EnjoyMeNow is built around computer-generated characters rather than real performers. This is a format choice—offering a new kind of private, interactive experience that doesn't exist in traditional adult content.” Good for them for changing it.

And since users are being asked to position their dongs in front of their phone cameras on a browser-based app, I took a look at the “privacy” section of the FAQ. “Privacy is architectural, not a policy bolt-on. No app is installed. No account is required,” DCBC wrote. “All camera and motion processing runs locally on the phone—no frames, no images, no data ever leave the device. There is no cloud processing, no recording, and no persistent data stored after the session ends. When you close the tab, the adult content is automatically purged from the browser.”

I asked DCBC’s rep if they could elaborate. Well, they could at least throw more words at it: “Regarding content encryption, every 3D asset is individually encrypted at the file level, stored encrypted, transmitted encrypted, and only decrypted at render time using per-session keys that never touch the device,” they said. “There are no downloadable model files. This is a custom content protection system built specifically to prevent our CGI assets from being extracted, redistributed or changed. The specifics are proprietary, but it goes well beyond transport-layer encryption. One core goal of this architecture is ensuring no one can upload their own content to the platform. This is a closed system by design.”

“Just needless words really,” 404 Media’s privacy and security reporter Joseph Cox said about this when I showed him what DCBC said. It could easily be cut down to “we don’t allow uploads.” Which is, to be clear, for the best.

I should say here that I don’t go into these sorts of reviews assuming that I am the target audience. I’m pitched regularly by porn sites and sex toy companies on products that aren’t my personal thing; I wrote a column for years about kinks and fetishes that are not many people’s thing at all, but I wanted to better understand them and what appeal they hold for the people who love them. Maybe there are people out there who simply cannot consume content with real people in it; if that’s you, please hit me up, I would really like to hear more about that.


#3d #sex #porn



It turns out when you try to serve slop on a product people pay for, no one wants it. #AISlop #Disney #Sora #OpenAI


Disney's Sora Disaster Shows AI Will Not Revolutionize Hollywood


Barely three months ago, the Walt Disney Company announced that it would be bringing user-generated AI slop to Disney+ as part of a landmark $1 billion investment in OpenAI that would allow people to use Sora to create short videos featuring more than 200 beloved Disney characters. The announcement was so important that Disney’s then-CEO Bob Iger and OpenAI CEO Sam Altman both championed it in a press release full of the kind of cope that Silicon Valley AI boosters and some Hollywood executives use when suggesting AI will unleash a new era of moviemaking and storytelling that is cheaper than making movies with human workers.

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Iger said.

“Disney will become a major customer of OpenAI, using its APIs to build new products, tools, and experiences, including for Disney+, and deploying ChatGPT for its employees,” the press release stated. “Under the license, fans will be able to watch curated selections of Sora-generated videos on Disney+, and OpenAI and Disney will collaborate to utilize OpenAI’s models to power new experiences for Disney+ subscribers, furthering innovative and creative ways to connect with Disney’s stories and characters. Sora and ChatGPT Images are expected to start generating fan-inspired videos with Disney’s multi-brand licensed characters in early 2026.”

Tuesday was a disastrous day for that future, and the complete and utter failure of both Sora and Disney’s dalliance with AI garbage suggests AI slop is indeed not the future of Hollywood. Disney did not even get to the point where it allowed people to build anything with Disney characters before pulling the plug on the whole endeavor and its investment.

Sora is dead. May the memory of its four-month existence as a copyright infringement machine that was also used to make videos of men strangling women and ICE arresting undocumented immigrants be a blessing.

Disney is pulling out of its billion-dollar investment in OpenAI entirely. Other efforts to slopify Hollywood look underwhelming, appear to have been quietly shelved, or have utterly failed to gather any audience whatsoever. This news does not bode well for OpenAI and it likely does not bode well for Paramount’s megamerger with Warner Brothers, a deal whose financial terms and the debt involved only make sense if you can believe in a future in which the cost of creating blockbuster movies is drastically reduced by AI via huge numbers of people losing their jobs.

At the time of Disney’s announcement with OpenAI, it was hard to imagine why Disney would infect its flagship paid streaming service with content from a service whose viral videos consisted of users turning Pikachu into a felon and SpongeBob into Hitler. It was not clear why Disney would want AI slop made by randos to live next to, say, the $200 million Toy Story 4 or any number of Disney’s masterpieces. It was also hard to imagine why a company that has so aggressively enforced its copyright would suddenly say all bets are off for Sam Altman’s plagiarism machine. The only thing that made any sense is that Hollywood executives, like Silicon Valley executives, hate paying for human labor so much that they have convinced themselves that their customers would happily consume AI slop if it was shoved down their throats.

After Sora’s initial novelty wore off, it became clear that people do not actually want this, and that the people using Sora were using it at great financial cost to OpenAI in order to largely take videos off-platform to spam other social media sites. The Sora subreddit has been basically dead for months outside of people attempting to figure out how to get it to create nudes or people complaining about content violations. When I scrolled Sora Tuesday evening I almost exclusively saw videos that had few or no likes or comments. I saw very little Disney content, though I did see a lot of South Park, Peppa Pig, and SpongeBob videos, none of which were very good.



The death of Sora is a good time to check in on how other attempts to slopify Hollywood are going. In December 2024, I wrote about Chinese television giant TCL’s attempt to make an AI-generated movie studio called TCL Film Machine, which was pitched as a “key pillar of TCLArt, an important brand initiative of TCL to make art more accessible and inspiring worldwide.” I went to the premiere of a series of short films that were pitched as a new way of making movies faster and cheaper. At the time, I asked Chris Regina, TCL’s then Chief Content Officer and a leader of the TCL Film Machine project what the plan was.

“If you can imagine where we might be a year or 18 months from now, I think that in some ways is probably what scares a lot of the industry because they can see where it sits today, and as much as they want to poke holes or be critical of it, they do realize that it will continue to be better,” he said, 14 months ago.

Regina and another TCL executive on that project now have other jobs. TCL itself has released the five shorts I saw, as well as an 11-minute, widely mocked romcom film called Next Stop Paris, and a four-minute film called Memory Maker. Memory Maker was released 13 months ago and has 1,771 views on YouTube. Next Stop Paris has 10,000 views on YouTube. Comments have been turned off for both movies. The “applications” page for prospective TCL Film Machine projects is now just a static page, and TCL hasn’t mentioned AI films in any of its press releases in roughly a year; many of its recent announcements have to do with releasing reruns of shows from the 80s and 90s.

Meanwhile, much-hyped “AI movies” or “AI special effects,” including the Brad Pitt-Tom Cruise AI fight scene that the New York Times boldly declared “spooked Hollywood,” have been wildly overhyped, still have various continuity errors and an uncanny feel, or are simply not movies in any meaningful sense.

This is not to say that AI will have no role in Hollywood or that people are not making money from AI slop. Hollywood studios are using AI behind the scenes for editing, storyboarding, scratch voiceover, and a handful of other things. But the wild hype of AI slop as a direct threat to human storytelling and AI tools as a replacement for talented humans in Hollywood has not come to pass and it’s not clear if it ever will. The AI movies at AI film festivals continue to suck and the people who show up to them are largely people involved in making them or invested in having them work out. AI slop is effective on social media, meanwhile, not because it is good or because people like it but because these platforms are flooded with it, because social media companies are invested in making generative AI tools, and because their algorithms are wildly broken. It turns out when you try to serve slop on a product people pay for, no one wants it.

And the end of Sora does not mean there is no demand for AI video generators, but it does mean that the overwhelming use case for AI video generators continues to be what it has always been: people making porn, nonconsensual sexual imagery, disinformation, and low-effort slop at scale. The people making this type of content do not want to deal with guardrails or limitations and so have largely flocked to open source and Chinese models. When you take away those use cases, it turns out there’s basically nothing left.




Delivery Robot Drives Through Bus Stop Shelter, Shattering Glass Everywhere
#DeliveryRobots


Delivery Robot Drives Through Bus Stop Shelter, Shattering Glass Everywhere


A Serve Robotics food delivery robot crashed through the glass wall of a bus stop shelter in Chicago earlier this week, shattering the glass all over the sidewalk. The crash comes amid a protest against delivery robots in Chicago and a few weeks after a politician who represents part of Chicago said he would not allow the robots into his district.

Footage of the aftermath of the crash went viral on Reddit and X, showing one of the company’s robots shaking shards of glass onto the sidewalk. Serve Robotics told 404 Media in a statement that the company sent people out to clean up the mess.


“We’re aware of the incident involving one of our robots in Chicago. No injuries were reported, our team responded quickly to clean up, and we’re reviewing what happened to make improvements,” the spokesperson said. “We have also been in contact with local stakeholders and are committed to addressing any concerns directly. We take this matter very seriously.”



Serve deployed its robots to Chicago in September under a partnership with Uber Eats. The company operates in a few cities around the country, including in Los Angeles, where activists have been filming the robots in various compromising positions or after they have been knocked over by passersby. In 2023, Serve Robotics fed footage from one of its robots to the Los Angeles Police Department, we reported. In 2022, a Serve robot drove underneath police caution tape and through what was at the time considered to be an active crime scene, where a school shooting at Hollywood High School was reported to be taking place (the shooting was deemed a hoax, but police were actively investigating at the time).

Delivery robots have been controversial in Chicago, where at least 3,600 Chicago residents have signed a “No Sidewalk Bots” petition asking the city to ban the robots. Chicago’s First District Alderman Daniel La Spata has said that the delivery robots will not be allowed into his district after polling residents there; 83 percent of respondents to his poll said they “strongly disagreed” with allowing the robots.

The No Sidewalk Bots petition website notes “Chicago sidewalks are for people, not delivery robots,” and says that people who have signed the petition “are reporting collisions or other troubling contact, accessibility issues, and/or obstruction.”

The Chicago Department of Transportation did not respond to a request for comment.




Download a PDF of our first ever zine here.
#zine


Our Zine About ICE Surveillance Is Here


We are very proud to present 404 Media’s zine on the surveillance technology used by Immigration and Customs Enforcement. While we have always covered surveillance and privacy, for the last year, you may have noticed that we have spent an outsized amount of our attention and time reporting on the ways technology companies are powering Donald Trump’s deportation raids.

When we announced this zine in early December, we hoped that people would want it. Trump’s dehumanizing mass deportation campaign is perhaps the bleakest, most horrifying aspect of an administration that has reveled in its attacks on civil liberties, science, and government expertise. We did not know just how many of you would want a copy. We originally intended to print 1,000 copies, and to hand most of them out at a benefit concert in Los Angeles for CHIRLA, a human rights organization that helps immigrants. When those sold out in a few hours, we asked Punch Kiss Press, our printer, if they could make 2,500. When those sold out just as fast, we increased our order to 3,500. If you preordered a print zine, I put it in the mail last week and it should be arriving soon. Thank you everyone for your patience in waiting for the zine and we’d love to know what you think of it. We have a handful more copies that we’ve put up for sale on our Shopify. They will almost certainly sell out today and we will probably not reprint them.

We never intended to make this zine a scarce resource. We wanted to make a print product as an experiment for the reasons we explained when we announced it: Print is cool, it’s human, it’s enduring, and it’s shareable.


404ICEZINE.pdf (62 MB): Full-sized zine in English

ICEZineEspanol.pdf (5 MB): Zine in Spanish (en español)

zinesmallfile.pdf (5 MB): Zine in English, smaller file size

Each of these zines was printed, assembled, and cut down to size by hand, and each of them was stuck in the mail by me or a friend of mine over the course of the last few weeks. We printed this on a riso printer, a Japanese duplicator from the early 1990s that anyone who is into them will talk your ear off about endlessly, to the point that it has become a meme. I also printed all the envelopes on a riso printer from 1995 that I have painstakingly spent the last few months repairing. Basically, making and shipping these was labor intensive and DIY by design; we never thought we would need to print so many. They were made with a considerable amount of love. And for this first one, we don’t really have the capability to make and ship more than we’ve already made.



So for that reason, we’re releasing a PDF of the zine for free to everyone, because we think the information contained within it is important and should be shared as widely as possible. We have also paid to have the zine translated into Spanish by human translators, thanks in part to a donation from one of our subscribers. You can find the Spanish version of the zine here. If you have a riso printer or are a riso print shop and are interested in printing additional copies at scale to distribute to your community, please email me and I may be able to share the print files with you.

We could not have made this zine without the support of our subscribers, our friends, and our local community. The zine was laid out by our friend Ernie Smith, who is one of the best to ever do it. The cover art was done by Veri Alvarez, whose work you can find here and whose anti-ICE art is frankly very fucking good and who deserves your support. The printing and assembly of the zine was done by Karina Richardson at Punch Kiss Press in Los Angeles and a few of her friends. I met Karina at a print festival in Los Angeles a few months ago and then asked her if she could take on this very complicated project on a short timeline. I then asked her to more than triple the number of copies, all over the holidays. It cannot be overstated how much Karina and Punch Kiss knocked it out of the park on this, and how thankful we are to her. And we made the zine to support LA Fights Back, a concert series dedicated to raising money for communities affected by ICE. We are thankful that we were invited to participate.

This being a print product, our work has been frozen in time. We wrote these pieces before DHS agents killed Renee Good and Alex Pretti in Minneapolis, and before several other people died in ICE custody in the last few weeks. The horrors we are facing are evolving and changing every day and we are committed to continuing to cover the ways that big tech and the surveillance state empowers ICE. You can find most of our most recent work on ICE here:

We’ve been overwhelmed and heartened by the support and interest in our reporting and in this zine. This project was a lot of work, and we’ve learned a lot about making and distributing a physical product at scale. We don’t have anything concrete to announce yet but I think we’d love to do more print products and issues in the future. So if you liked this please let us know. If you want to support our work specifically, the best thing you can do is subscribe to 404 Media. We also have a tip jar and, if you are interested in making a larger tax-deductible donation, please email us at donate@404media.co.


#zine



“Ethical dilemmas about AI aside, the posts are completely disconnected with ManyVids as a site,” one ManyVids content creator told 404 Media.
#AIPorn #porn #manyvids


Aliens and Angel Numbers: Creators Worry Porn Platform ManyVids Is Falling Into ‘AI Psychosis’


In posts on ManyVids, the porn platform’s official account holds imaginary conversations with aliens, alongside AI-generated videos of UFOs, fractal images, “angel numbers,” and a video of its founder and CEO Bella French in a space suit shooting lasers from her eyes.

French, a former cam model herself, launched the site in 2014, and the platform has millions of members and tens of thousands of creators. Adult content creators use it to sell custom videos and subscriptions, and perform live on camera. French recently changed her personal website to state her new goal is to “transition one million people out of the adult industry and do everything we can to ensure no one new enters it.” The statement follows posts on X’s ManyVids account about new strategies to pivot the site toward safe-for-work, non-sexual content.

This sudden shift away from years of messaging about being a compatriot with sex workers, combined with bizarre AI-generated text and images about talking to aliens and numerology on social media, has made some creators worry for their livelihoods, and caused others to leave the site completely.

For years, the official ManyVids social media accounts made mostly normal posts that promoted the site and its creators. But in mid-2025, the posts from the ManyVids X account changed. Instead of promotions of top creators, announcements of contests, and tips for using the platform, the account shifted its focus to existential and metaphysical musings. Around August, it started posting cryptic quotes, phrases, and images, many seemingly generated by or about AI.

The account also started replying to engagement-farming posts from influencers, writing things like “Our purpose: to protect the feminine energy — so that balance may return,” and posting borderline-nonsensical bullet-point lists about “the boldness scale” and how ManyVids leadership is “all connected.”

“The impact strength of a positive leader ⚡ Effectiveness ⚡ Execution ⚡ Discipline ⚡ Accountability,” one post in August said. On August 20, @ManyVids posted an image on X of a flow chart alongside a screenshot of a ChatGPT conversation, seemingly illustrating how the platform would bring in users through a “safe-for-work” zone, then allow them to access NSFW content after verifying their identifications. “Our vision: Adult Industry 2.0 isn’t about more revenue. It’s about evolution,” the post said.

The replies to these posts show ManyVids creators expressing anger, concern, and bafflement. The account stopped posting on X in September. But on the ManyVids platform itself, which has a “news” feed that functions similarly to a microblogging platform but is just for official platform posts, the odd entries continue.

💡
Do you know anything else about what's happening at ManyVids, or do you have a tip about porn platforms and online sex work generally? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

“Social API for the AI Age. Phase 1 — Pride Engine,” one post from January 16 says:

“The High Universal Income (HUI) Engine is the distribution hub of the new economy, built for a world where AI does the work humans never wanted to do. AI generates surplus wealth, but humans need surplus purpose. Human meaning becomes the rarest and most valuable resource on Earth. Instead of opaque taxes, AI companies fund a Social License through platforms like ManyVids, converting AI efficiency into merit-based bonuses for human contribution. For every dollar earned through passion, creation, care, or learning, HUI adds 10%. This is not charity. It is a Pride Engine. We shift the foundation of human value.”

The post ends with a six-second AI generated video that includes the phrase “the ultimate guide to rebuilding civilization.” Most posts in recent weeks are like this: clearly AI generated text alongside six-second AI generated clips showing angels, chakras, or spiritual phrases. “The Simulation of Integrity. If we don’t fully understand the ultimate nature of reality, what should guide how we live inside it?” one recent post says. “If the nature of the ‘game’ is unknown, then how you treat others — and yourself — becomes the most meaningful data point.”

And in a post right after the new year: “Hey everyone! Back-to-the-office Monday vibe. How were your holidays? Did you travel anywhere? I did... 🕳️Next time, I’ll bring sunglasses. I came back with a few new ideas and fresh thoughts ✨Let’s get to work. Let’s go, 2026! 🚀” Below the text: a video of French in a space suit, black hole in the background, shooting laser-lightning out of her eyes.



Screengrab via ManyVids

A lot of people who rely on ManyVids for income have noticed this odd behavior and are disturbed by it.

“Ethical dilemmas about AI aside, the posts are completely disconnected with ManyVids as a site,” one ManyVids content creator told 404 Media, on the condition of anonymity. “Their customers and their creators are not served in any way by these. When faced with backlash, MV removed the ability to comment on posts. To anyone looking at them they appear to be ramblings and images generated by a person in active psychosis.”



Screengrab via ManyVids

Almost every ManyVids creator 404 Media spoke to for this story brought up “AI psychosis” unprompted, when asked if they’d seen the ManyVids posts.

“I have seen them and I find them really insulting,” Sydney Screams said. “The way I perceive the posts is that Bella and the MV team doesn't respect their creators enough to spend time making their own content, instead taking the easy way out and using bizarre AI that doesn't even relate. Why do we need Bella shooting laser beams out of her eyes to make an announcement? It's infuriating because it's like she doesn't take us seriously, doesn't take her own platform seriously, and we're supposed to just be grateful for the crumbs she's giving us. We deserve better,” she said. “We deserve to be treated with respect, talked to like we're adults, and listened to like our voices matter. Instead we get AI slop and posts that promise big things without any sort of follow through.”

Harlan Paramore, a ManyVids creator who also helps other creators onboard and manage their selling sites, said he’s noticed “bizarre posts about AI, angel numbers, christopaganism, cyberpaganism.”

“I don't have anything against any of those beliefs, but they seem wildly out of place for an official site blog. They are also heavily loaded with AI-like language and structure, and decorated with AI images,” Paramore said. “I'm also a professional artist, and as both an artist and sex worker I'm frustrated and confused. Some of it kind of sounds like AI psychosis, too, which has me concerned for whoever is running that blog.”

“I'm not a mental health professional, but whatever Bella is going through doesn't seem normal. It doesn't seem healthy,” Screams said. “From where I'm sitting, if I were close to Bella, I'd be reaching out to her other friends and family members to stage an intervention and try to get her serious mental health care.”

All of this is coinciding with an apparent massive change in French’s ideology toward sex work. On her personal website, French says the goal of ManyVids is changing to “transition one million people out of the adult industry.” She calls sex work “exploitative.” Her bio quotes her as saying: “I had two choices: surrender to an exploitative industry or dismantle it. I chose to build its replacement... ManyVids was the result—the most efficient revenue-distribution engine for the AI-displaced workforce. Guided by first principles and core value thinking, Bella is leading MV’s next evolution: a Fintech/Social-Impact hybrid that turns digital presence into economic creation. By utilizing AI-integrated workflows and layered access, ManyVids is migrating creators from adult content into a diversified creative economy,” her bio says. “Our goal is to transition one million people out of the adult industry and do everything we can to ensure no one new enters it. We are working to transform an industry we don’t believe should exist—but we recognize that simple elimination creates deeper shadows. The solution is elevation through meaningful alternatives.”

This is a recent addition to her website. According to archived versions of the site, the section about transitioning people out of the sex industry wasn’t there in November 2025.

“ManyVids is now becoming a regulated e-social ecosystem — a digital space that sensitizes, elevates, and restricts adult content through layered brackets of access,” French’s bio says now. “This ensures that sacred sexual expression is never free, never exploited, and never divorced from its core human depth.” The “layered brackets” seem to be a reference to the ChatGPT screenshots from August 20.

This is an extreme departure in tone from what French has said was her mission with ManyVids in the past. In 2019, I met French for an on-background hotel room meeting during the porn industry’s biggest award show and conference, AVN, where she told me she created ManyVids out of a passion to create a platform where other sex workers—having been an adult content creator herself—would be treated fairly and would be listened to by the platform’s owners. She has always been open publicly about wanting to build better platforms for other sex workers.

“Their customers and their creators are not served in any way by these."


“We try to offer sex workers the tools to be more successful as independent entrepreneurs without being judged,” French told the Daily Beast in 2019. “What was really important for me was to educate the world and make them realize that porn stars are not stupid.”

Shortly after she and I met in 2019, French agreed to a written interview as part of a VICE story about authenticity in cam work. In that email, she called camming the “biggest gift” she’d ever received. “Being a camgirl not only has a huge influence on my approach to taking business decisions but has changed the way I view people and life in general,” French wrote at the time. “Every single decision we take at ManyVids must answer 1 simple question, ‘Will this help the content creators, our MV Stars?’ That’s it,” French wrote in 2019. “If the answer is yes then we proceed, regardless if there is any financial advantage or potential for profit, that is irrelevant.”

Platforms have long profited off of sex workers and pornography to establish popularity and rake in revenue before eventually doing a heel-turn on the creators who made them successful. We’ve seen it happen with mainstream social media platforms like Tumblr, Instagram, and Twitter, and also on sites ostensibly made for sex workers, like OnlyFans, which nearly changed its policies to ban explicit material after making billions of dollars off their content.

I asked ManyVids and French if the platform is changing to reflect these social media posts and her statements on her bio, who is making the AI-generated posts mentioned above, how French plans to “transition one million people” out of sex work, and if any of this will affect creators and fans who use ManyVids. The ManyVids support team did not answer these questions specifically, but sent the following response (emphasis theirs):

"Hello, thanks for reaching out. Respect for Online Sex Workers. Sex work is real work. No more living in the shadows, no more being misunderstood. No more being afraid, shadowbanned, or persecuted by systems and institutions. Not on our watch. We are not victims — and we are taking action now. This generation of online sex workers is about to change the game forever — and transform the oldest profession in the world in the right direction, for good. Respect the creators. Respect the work. Respect what you watch. We stand for safety, dignity, and opportunity for all creators."
Screenshot of the emailed response from ManyVids support
I asked ManyVids to explain in specific terms what "we are taking action now" means. They replied: "A post will be published to our ManyVids News feed this Saturday, January 24th. It will provide additional clarification and go into a bit more detail on this," with a link to the feed.

“It concerns me that access to my earnings, and more importantly my personal information, is in the hands of someone seemingly out of touch with reality.”


In the meantime, creators have been confused and worried for weeks. Nothing has changed about the way the site operates publicly or creators’ payouts as of writing, but this is a series of events that many adult content creators are concerned represents a potential threat to their livelihood.

“If something were to happen to MV (or to my account there) due to what can only be described as AI psychosis, I would lose upwards of 14k per year—a not insignificant amount of income,” another adult creator on ManyVids told 404 Media. “It concerns me that access to my earnings, and more importantly my personal information, is in the hands of someone seemingly out of touch with reality.”

ManyVids takes a larger-than-most cut of creators’ profits, depending on the type of content: for videos and contest earnings (which are similar to tips), the platform takes 40 percent, while on tips and custom video sales it takes 20 percent, which is more in line with other adult platforms. This has long been a source of complaint from creators, combined with unpredictable algorithms that creators say change how they’re discovered on the platform and what content performs best, impacting their earnings. Users have expressed dissatisfaction with these aspects of the platform, and with how French runs it, for years. But the recent turn to AI and French’s statements about the industry are making some wonder if it’s time to leave.

“I will still be using ManyVids for NSFW content for as long as they allow it,” adult content creator August told 404 Media. “But part of me thinks that they will try to do what OnlyFans did years ago and try to ban NSFW content which would be an absolute disaster for sex workers whose income depends on platforms like ManyVids.”

Luna Sapphire, a creator who has been using the platform since 2015, said she finds French’s statements on her website “harmful and insulting” to those who’ve helped popularize the site from the start. “Most of us are not looking for a path out of the adult industry; we simply want to do our jobs with as little interference and censorship as possible,” Sapphire said. “Bella used to be very pro-sex worker and it is disappointing to see her change her tune.”

Several adult platforms have embraced, or at least allowed, AI-generated content and “models” on their sites alongside human creators in the last few years. On OnlyFans, AI-generated content is allowed, but must comply with the site’s terms of service and “must be clearly and conspicuously captioned as AI Generated Content with a signifier such as #ai, or #AIGenerated,” OnlyFans says in its terms. Fansly, another adult platform for independent creators, forbids “photorealistic AI-generated content” but allows non-photorealistic “virtual entities” (like V-tubers) if they’re registered using the uploader’s real legal information for verification purposes. JustForFans requires that “consent, identity, and proof of age must be established if the AI images are based on a real person's likeness,” and allows deepfakes if consent has been established. “For example, you can use your own face to create images of yourself or a model who has granted consent to use their face,” the platform’s terms say. IWantClips, another site for selling custom content, also requires users making AI-generated models to verify their identities, but explicitly doesn’t allow deepfakes.

In 2024, IWantClips awarded an AI-generated model $1,000 as the winner of a Valentine’s Day-themed contest. “Adora” competed in the contest alongside human sex workers. On most of these sites, engagement and attention are currency, and on ManyVids, AI generated models sell content alongside humans. The platform prohibits “AI-generated or deepfake content that misrepresents real individuals without consent,” as part of its terms that forbid “content that violates any third party's intellectual property rights or another individual's privacy.”

“The AI/intense spirituality path has been so strange to witness, and I can’t imagine what it’s leaving the fans to think,” Elizabeth Fields, an adult content creator who’s used ManyVids for six years, told 404 Media. “I don’t understand what they are trying to do by taking this direction, nor do I understand how it’s fair of a sex work-built site to assume all of us don’t want to do NSFW content–and to try and funnel us into this box of ‘not enjoying the work we do.’ To an extent it feels degrading honestly—just because Bella’s experience in sex work was survival based and to make ends meet—a lot of us thoroughly enjoy our jobs, the path we took, and want to continue doing this.”

Many sex workers are disabled, neurodivergent, mentally ill, chronically ill, or “all of the above,” Fields noted, and rely on online sex work to pay the bills. “It feels absolutely unfair to feel like we could be pushed off of a site that became popular off OUR NSFW content—because they want to make it more SFW, and implement all these new AI features that will quite frankly just turn clients off.”

Despite all of this, Fields said she won’t be leaving the site. “To the point that as much as I'm extremely disappointed with many of the recent changes occurring, I won’t be deleting my account as to not lose that income and disappoint my ManyVids fans.”

Others are done. Sydney Screams said she’s no longer uploading to ManyVids and made the decision to slowly start removing content from her stores there. “Platforms that allow for online sex work should be working FOR us, not against us. Sex workers use platforms like MV to earn our own living, to enable ourselves to have better lives, to keep ourselves housed and fed, to pay for medical bills, etc. Many of us choose this life and choose to make this our career, though there are far too many who are survival sex workers,” Screams said. “We aren't looking for a pathway out of the adult industry, especially on a platform that is a porn platform!!! Unless MV is going to start funding the educations & trainings of those trying to leave the industry for work elsewhere, I do not see how a porn platform is going to create a path out of the industry.”

Emanuel Maiberg contributed reporting to this story.




Ypsilanti, Michigan has officially decided to fight against the construction of a 'high-performance computing facility' that would service a nuclear weapons laboratory 1,500 miles away.

#News


A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists


Ypsilanti, Michigan resident KJ Pedri doesn’t want her town to be the site of a new $1.2 billion data center, a massive collaborative project between the University of Michigan and America’s nuclear weapons scientists at Los Alamos National Laboratory (LANL) in New Mexico.

“My grandfather was a rocket scientist who worked on Trinity,” Pedri said at a recent Ypsilanti city council meeting, referring to the first successful detonation of a nuclear bomb. “He died a violent, lonely, alcoholic. So when I think about the jobs the data center will bring to our area, I think about the impact of introducing nuclear technology to the world and deploying it on civilians. And the impact that that had on my family, the impact on the health and well-being of my family from living next to a nuclear test site and the spiritual impact that it had on my family for generations. This project is furthering inhumanity, this project is furthering destruction, and we don’t need more nuclear weapons built by our citizens.”
At the Ypsilanti city council meeting where Pedri spoke, the town voted to officially fight against the construction of the data center. The University of Michigan says the project is not a data center, but a “high-performance computing facility” and it promises it won’t be used to “manufacture nuclear weapons.” The distinction and assertion are ringing hollow for Ypsilanti residents who oppose construction of the data center, have questions about what it would mean for the environment and the power grid, and want to know why a nuclear weapons lab 24 hours away by car wants to build an AI facility in their small town.

“What I think galls me the most is that this major institution in our community, which has done numerous wonderful things, is making decisions with—as I can tell—no consideration for its host community and no consideration for its neighboring jurisdictions,” Ypsilanti councilman Patrick McLean said during a recent council meeting. “I think the process of siting this facility stinks.”

For others on the council, the fight is more personal.

“I’m a Japanese American with strong ties to my family in Japan and the existential threat of nuclear weapons is not lost on me, as my family has been directly impacted,” Amber Fellows, a Ypsilanti Township councilmember who led the charge in opposition to the data center, told 404 Media. “The thing that is most troubling about this is that the nuclear weapons that we, as Americans, witnessed 80 years ago are still being proliferated and modernized without question.”

It’s a classic David and Goliath story. On one side is Ypsilanti (called Ypsi by its residents), which has a population just north of 20,000 and is situated about 40 minutes outside of Detroit. On the other are the University of Michigan and LANL, the American lab famous for nuclear weapons and, lately, for pushing the boundaries of AI.

The University of Michigan first announced the Los Alamos data center, which it called an “AI research facility,” last year. According to a press release from the university, the data center will cost $1.25 billion and take up between 220,000 and 240,000 square feet. “The university is currently assessing the viability of locating the facility in Ypsilanti Township,” the press release said.
Signs in an Ypsilanti yard.
On October 21, the Ypsilanti City Council considered a proposal to officially oppose the data center and the people of the area explained why they wanted it passed. One woman cited environmental and ethical concerns. “Third is the moral problem of having our city resources towards aiding the development of nuclear arms,” she said. “The city of Ypsilanti has a good track record of being on the right side of history and, more often than not, does the right thing. If this resolution passed, it would be a continuation of that tradition.”

A man worried about what the facility would do to the physical health of citizens and talked about what happened in other communities where data centers were built. “People have poisoned air and poisoned water and are getting headaches from the generators,” he said. “There’s also reports around the country of energy bills skyrocketing when data centers come in. There’s also reports around the country of local grids becoming much less reliable when the data centers come in…we don’t need to see what it’s like to have a data center in Ypsi. We could just not do that.”

The resolution passed.

Ypsi has a lot of reasons to be concerned. Data centers tend to bring rising power bills, horrible noise, and dwindling drinking water to the communities they touch. “The fact that U of M is using Ypsilanti as a dumping ground, a sacrifice zone, is unacceptable,” Fellows said.

Ypsi’s resolution focused on a different angle though: nuclear weapons. “The Ypsilanti City Council strongly opposes the Los Alamos-University of Michigan data center due to its connections to nuclear weapons modernization and potential environmental harms and calls for a complete and permanent cessation of all efforts to build this data center in any form,” the resolution said.

As part of the resolution, Ypsilanti Township is applying to join the Mayors for Peace initiative, an international organization of cities opposed to nuclear weapons and founded by the former mayor of Hiroshima. Fellows learned about Mayors for Peace when she visited Hiroshima last year.



This town has officially decided to fight against the construction of an AI data center that would service a nuclear weapons laboratory 1,500 miles away. Amber Fellows, a Ypsilanti Township councilmember, tells us why. Via 404 Media on Instagram

Both LANL and the University of Michigan have been vague about what the data center will be used for, but have said it will include one facility for classified federal research and another for non-classified research to which students and faculty will have access. “Applications include the discovery and design of new materials, calculations on climate preparedness and sustainability,” the university said in an FAQ about the data center. “Industries such as mobility, national security, aerospace, life sciences and finance can benefit from advanced modeling and simulation capabilities.”

The university FAQ said that the data center will not be used to manufacture nuclear weapons. “Manufacturing” nuclear weapons specifically refers to their creation, something that’s hard to do and only occurs at a handful of specialized facilities across America. I asked both LANL and the University of Michigan if the data generated by the facility would be used in nuclear weapons science in any way. Neither answered the question.

“The federal facility is for research and high-performance computing,” the FAQ said. “It will focus on scientific computation to address various national challenges, including cybersecurity, nuclear and other emerging threats, biohazards, and clean energy solutions.”

LANL is going all in on AI. It partnered with OpenAI to use the company’s frontier models in research and recently announced a partnership with NVIDIA to build two new supercomputers named “Mission” and “Vision.” It’s true that LANL’s scientific output covers a range of issues, but its overwhelming focus, and budget allocation, is nuclear weapons. LANL requested a budget of $5.79 billion in 2026; 84 percent of that is earmarked for nuclear weapons. Only $40 million of the LANL budget is set aside for “science,” according to government documents.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

“The fact is we don’t really know because Los Alamos and U of M are unwilling to spell out exactly what’s going to happen,” Fellows said. LANL declined to comment for this story and told 404 Media to direct its questions to the University of Michigan.

The university pointed 404 Media to the FAQ page about the project. “You'll see in the FAQs that the locations being considered are not within the city of Ypsilanti,” it said.

It’s an odd statement given that this is what’s in the FAQ: “The university is currently assessing the viability of locating the facility in Ypsilanti Township on the north side of Textile Road, directly across the street from the Ford Rawsonville Components plant and adjacent to the LG Energy Solutions plant.”

It’s true that this is not technically in the city of Ypsilanti but rather Ypsilanti Township, a collection of communities that almost entirely surrounds the city itself. For Fellows, it’s a distinction without a difference. “[University of Michigan] can build it in Barton Hills and see how the city of Ann Arbor feels about it,” she said, referencing a village that borders the university’s home city of Ann Arbor.

“The university has, and will continue to, explore other sites if they are viable in the timeframe needed for successful completion of the project,” Kay Jarvis, the university’s director of public affairs, told 404 Media.

Fellows said that Ypsilanti will fight the data center with everything it has. “We’re putting pressure on the Ypsi township board to use whatever tools they have to deny permits…and to stand up for their community,” she said. “We’re also putting pressure on the U of M board of trustees, the county, our state legislature that approved these projects and funded them with public funds. We’re identifying all the different entities that have made this project possible so far and putting pressure on them to reverse action.”

For Fellows, the fight is existential. It’s not just about the environmental concerns around the construction project. “I was under the belief that the prevailing consensus was that nuclear weapons are wrong and they should be drawn down as fast as possible. I’m trying to use what little power I have to work towards that goal,” she said.


#News