The famed convention's organizers have banned AI from the art show.
Comic-Con Bans AI Art After Artist Pushback
San Diego Comic-Con reversed an AI-friendly art show policy following an artist-led backlash last week. It was a small victory for working artists in an industry where jobs are slipping away as movie and video game studios adopt generative AI tools to save time and money.

Every year, tens of thousands of people descend on San Diego for Comic-Con, the world’s premier comic book convention, which over the years has also become a major pan-media event where every major media company announces new movies, TV shows, and video games. For the past few years, Comic-Con has allowed some forms of AI-generated art at the convention’s art show. According to archived rules for the show, artists could display AI-generated material so long as it wasn’t for sale, was marked as AI-produced, and credited the original artist whose style was used.
“Material produced by Artificial Intelligence (AI) may be placed in the show, but only as Not-for-Sale (NFS). It must be clearly marked as AI-produced, not simply listed as a print. If one of the parameters in its creation was something similar to ‘Done in the style of,’ that information must be added to the description. If there are questions, the Art Show Coordinator will be the sole judge of acceptability,” Comic-Con’s art show rules said until recently.
These rules had been in place since at least 2024, but anti-AI sentiment is growing in the artistic community, and an artist-led backlash against Comic-Con’s AI-friendly language led the convention to quietly change the rules. Twenty-four hours after artists cried foul over the AI-friendly policy, Comic-Con updated the language on its site. “Material created by Artificial Intelligence (AI) either partially or wholly, is not allowed in the art show,” it now says.
Comic and concept artist Tiana Oreglia told 404 Media that Comic-Con’s friendly attitude toward AI was a slippery slope toward normalization. “I think we should be standing firm especially with institutions like Comic-Con which are quite literally built off the backs of artists and the creative community,” she said. Oreglia was one of the first artists to notice the AI-friendly policy. In addition to alerting her circle of friends, she also wrote a letter to Comic-Con itself.
Artist Karla Ortiz told 404 Media she learned about the AI-friendly policy after some fellow artists shared it with her. Ortiz is a major artist who has worked with some of the major studios that exhibit work at Comic-Con. She also has a large following on social media, a following she used to call out Comic-Con’s organizers.
“Comic-con deciding to allow GenAi imagery in the art show—giving valuable space to GenAi users to show slop right NEXT to actual artists who worked their asses off to be there—is a disgrace!” Ortiz said in a post on Bluesky. “A tone deaf decision that rewards and normalizes exploitative GenAi against artists in their own spaces!”
According to Ortiz, the convention is a sacred place she didn’t want to see desecrated by AI. “Comic-Con is the big mecca for comic artists, illustrators, and writers,” she said. “I organize and speak with a lot of different artists on the generative AI issue. It’s something that impacts us and impacts our lives. A lot of us have decided: ‘No, we’re not going to sit by the sidelines.’”
Ortiz explained that generative AI was already impacting the livelihood of working artists. She said that, in the past, artists could sustain themselves on long projects for companies that included storyboarding and design. “Suddenly the duration of projects are cut,” she said. “They got generative AI to generate a bunch of references, a bunch of boards. ‘We already did the initial ideation, so just paint this. Paint what generative AI has generated for us.’”
Ortiz pointed to two high profile examples: Marvel using AI to make the title sequence for Secret Invasion and Coca-Cola using AI to make Christmas commercials. “You have this encroaching exploitative technology impacting almost every single level of the entertainment industry, whether you’re a writer, or a voice actor, or a musician, a painter, a concept artist, an illustrator. It doesn’t matter…and then to have Comic-Con, that place that’s supposed to be a gathering and a celebration of said creatives and their work, suddenly put on a pedestal the exploitative technology that only functions because of its training on our works? It’s upsetting beyond belief.”
“What is Comic-Con trying to tell the industry?” she said. “It’s telling artists: ‘Hey you, you’re exploitable and you’re replaceable.’”
Ortiz was heartened that Comic-Con changed its policy. “It was such a relief,” she said. “Generative AI is still going to creep its nasty way in some way or another, but at least it’s not something we have to take lying down. It’s something we can actively speak out against.”
Comic-Con did not respond to 404 Media’s request for comment, but Oreglia said she did hear back from art show organizer Glen Wooten. “He basically told me that they put those AI stipulations in when AI was just starting to come around and that the inability to sell AI-generated works was meant to curtail people from submitting genAI works,” she said. “He seems to be very against genAI but wasn't really able to change the current policy until artists voiced their opinions loudly which pressured the office into banning AI completely.”
Despite the changing policies and broad anti-AI sentiment in the artistic community, Oreglia has still seen an uptick in AI art at conventions. “Although there are many cons that ban it outright, and if you get caught selling it you basically will get banned,” she said. This happened to a vendor at Dragon Con last September. Organizers called police to escort the vendor off the premises.
“And I was tabling at Fanexpo SF and definitely saw genAI in the dealers hall, none in the artists alley as far as I could see though but I mostly stuck to my table,” she said. “I was also at Emerald City Comic Con last year and they also have a no-ai policy but fanexpo doesn't seem to have those same policies as far as I know.”
AI image generators are trained on original artwork, so whatever output a tool like Midjourney creates is based on an artist’s work, often without compensation or credit. Oreglia also said she feels that AI is an artistic dead end. “Everything interesting, uplifting, and empowering I find about art gets stripped away and turned into vapid facsimiles based on vibes and trendy aesthetics,” she said.
'Secret Invasion' AI Opening Cost No Artists Their Jobs
Method Studios clarifies reports that sparked a social media backlash, stating AI tools "complemented and assisted our creative teams." Carolyn Giardina (The Hollywood Reporter)
Frowned upon in video games, loot boxes are back in real life–and one’s in the Pentagon.
There’s a Lootbox With Rare Pokémon Cards Sitting in the Pentagon Food Court
It’s possible to win a gem mint Surging Sparks Pikachu EX Pokémon card worth as much as $840 from a vending machine in the Pentagon food court. Thanks to a company called Lucky Box Vending, anyone passing through the center of American military power can pay to win a piece of randomized memorabilia from a machine dispensing collectibles.

On Christmas Eve, Lucky Box announced in a now-deleted post on Threads that it had installed one of its vending machines at the Pentagon. “A place built on legacy, leadership, and history—now experiencing the thrill of Lucky Box firsthand,” the post said. “This is a milestone moment for Lucky Box and we’re excited for this opportunity. Nostalgia. Pure Excitement.”
A Lucky Box is a kind of gacha machine or loot box: a vending machine that dispenses random prizes for cash. Customers pick a “type” of collectible they want—typically either a rare Pokémon card, sports card, or sports jersey—insert money, and get a random item. The cost of a spin on the Lucky Box varies from location to location, but it’s typically somewhere around $100 to $200. Pictures and advertisements of the Pentagon Lucky Box don’t show how much a box costs there, and the company did not respond to 404 Media’s request for comment.

Most of the cards and jerseys inside a Lucky Box vending machine are only worth a few dollars, but the company promises that every machine has a few of what it calls “holy grail” items. The Pentagon Lucky Box had a picture of a gem mint 1st edition Charizard Pokémon card on the side of it, a card worth more than $100,000. The company’s social media feed is full of people opening items like a CGC-graded perfect 10 1st edition Venusaur shadowless holo Pokémon card (worth around $14,000) or a 2023 Mookie Betts rookie card. Most people, however, don’t win the big prizes.
Lucky Box vending machines are scattered across the country, mostly installed in malls. According to the store locator on its website, more than 20 of the machines are in Las Vegas. Which makes sense, because Lucky Boxes are a kind of gambling. These types of gacha machines are wildly popular in Japan and elsewhere in Asia. They’ve seen an uptick in popularity in the US in the past few years, driven by loosening restrictions on gambling and pop culture crazes such as Labubu.
Task & Purpose first reported that the Lucky Box had been in place since December 23, 2025. Pentagon spokesperson Susan Gough told 404 Media that, as of this writing, the Lucky Box vending machine was still installed in the Pentagon’s main food court.
Someone took pictures of the thing and posted them to r/army on Monday. From there, the pictures made it onto most of the major military subreddits and various Instagram accounts like USArmyWTF. After Task & Purpose reported on the presence of the Lucky Box at the Pentagon, Lucky Box deleted any mention of the location from its social media, and the Pentagon location is not currently listed on the company’s store locator. But it is, according to Gough, still there.
In gaming, the virtual versions of these loot boxes are frowned upon. Seven years ago, games like Star Wars: Battlefront II were at the center of a controversy around similar mechanics. At the time, it was common for video games to sell loot boxes to users for a few bucks, a practice that culminated in an FTC investigation. A year ago, the developers of Genshin Impact agreed to pay a $20 million fine for selling loot boxes to children under 16 without parental consent.

Loot boxes never went away in video games, but most major publishers backed off the practice in non-sports titles.
Now, almost a decade later, the loot boxes have spread into real life, and one of them is in the Pentagon.
What is a Lucky Box and why is there one at the Pentagon?
The company that makes the machines announced on Christmas Eve that a Lucky Box is now inside the Pentagon. Jeff Schogol (Task & Purpose)
‘We have to make sure people are watching. We have to make sure we’re keeping track of our community members.’
How One Guy Crowdsourced More Than 500 Dashcams for Minneapolis to Film ICE
When self-employed software engineer Nick Benson put out the call for dashcams online, he thought he’d get maybe 10 people to donate. More than 500 have shown up on his front porch in suburban Minneapolis. “The state apparatus, of course, has cameras everywhere,” Benson told 404 Media. “The citizens will also benefit from having the same cameras around to document what's going on and making sure that everything is on the up and up.”

In early January, the Trump administration sent 2,000 federal agents and officers to the Minneapolis area. DHS has said hundreds more are on the way. Earlier this week, President Donald Trump threatened Minnesota with a “DAY OF RECKONING & RETRIBUTION” in a Truth Social post.
On January 7, two days after Benson put out his call for dashcams, ICE agent Jonathan Ross shot legal observer Renee Good in the face. In the wake of that killing, multiple people have filmed agents threatening the lives of other observers.

Benson feels like he has to do something, so he gets dashcams into the hands of people who want them. “We need more documentation showing what these people are doing. Because I don't know—other than a compelling visual documentation of what's going on—I don't know what other tools we have until the legislative branch of our government can stand up and do its job and provide a check to all of this, because our state government can't,” he said. “So all we can do is collect evidence that this is happening and let people know about it.”
Benson made it easy to buy the cameras. He set up an Amazon wishlist that has the dashcams and a 256 GB memory card to go in them. People buy the equipment on Amazon and it shows up at Benson’s house. From there, he reaches out to local community organizers and gets the cameras into the hands of people who want them. “I think more than 350 of those cameras have gone out and are already deployed in the community now,” he said. “So we got them out fast, because we all understand exactly why we need those cameras, and we appreciate that support very much.”
Benson got the idea for the dashcam wishlist when ICE told local police that one of his friends was ramming their cars. “That was completely fabricated,” Benson told me. But there was no way to prove it. It was the word of the federal government against Benson’s friend. “Dashcams are the only way we can prevent that from happening.”
As ICE spreads across cities in the US and continues to disappear and kill people, citizens have taken to the streets to put themselves between the masked agents and their targets. These community observers use a variety of tactics, including blowing whistles to let people know ICE is in the area and recording everything and posting it online.
Benson runs the website JetTip, a flight alert service for aviation enthusiasts, and it was through this work that he first noticed how America was changing during Trump’s second term. “I got interested in all of the ICE things by tracking the flights that were coming in to take deportees away,” he said. “In addition to keeping track of those flights that are coming and going from the airport, I've been getting looped in with the community observation part of it.”
Benson lives in Burnsville, a Minneapolis suburb with a population of 60,000. He said that even here, ICE is a constant and terrifying presence. “There's more federal agents here now than there are local police,” he said. “And we know that they're not operating with respect to the rule of law. They're conducting warrantless door to door operations right now.”
The day after Ross shot Good, Benson was dropping his kids off at school when he noticed a man running down the road. At first he thought the man was jogging. “And then half a block down the way, there was the ICE agent who was running after him,” he said. “They were just right there when I'm driving my kids to school. And it was so frustrating and violating.”
Benson put out the call for cameras on January 5, but saw an uptick in donations after the Good shooting. “It was immediately clear that ICE was lying about it, and people were looking for a way to reach out and make a difference from wherever they were in their community, and that Amazon wish list was a very low friction, easy way for people to make a constructive and tangible difference,” he said. “It was more than $75,000 worth of dashcams that have been delivered to my house here now in Burnsville.”
From there, Benson plugged into local community organizations and got the cameras into the hands of his neighbors watching ICE. “We have to teach history or someone else will teach their version of it,” Jean, one of Benson’s dashcam recipients who spoke to 404 Media on the condition of pseudonymity for her safety, said. She said that one big plus of the dashcams is that they keep observers’ hands on the wheel when they’re in their cars. “This was safer. A lot of people were trying to record [on their phones] while driving.”
Jean started observing and recording ICE, and organizing others to do the same, after she witnessed a raid in December. She said that ICE brought more than two dozen cars, a tactical vehicle, and dozens of armed agents. “We have to make sure people are watching,” she said. “We have to make sure we’re keeping track of our community members.”
Letty, another ICE watcher in the area, learned about the community organizations in her area after the Renee Good shooting. After getting plugged in, they told her a man named Nick was giving out dashcams. “I think they’re a great tool and beneficial to anyone who is out patrolling and observing,” she said. “Hell, I think if you’re a person of color and vulnerable to being kidnapped, I strongly believe you should have one in your car even if you’re not in any [observation] groups.”
Letty, the daughter of Mexican immigrants, said the camera gives her a small amount of peace of mind. “But deep down I also know agents don’t care about whether you have one or not,” she said. “I’ll never get rid of my dashcam now though. I feel like it adds another layer of safety.”
Benson isn’t done giving out cameras. “We need all of those cameras we can get to help the people who have the opportunity to stand up for what's right here and make sure that we're protected as best we can be from the federal government right now,” he said.
Benson said that the mood has changed in the neighborhood since ICE killed Renee Good. “My family were all immediately a lot more concerned about what I was doing, of course. It's hard when you see those videos of someone who was just out driving and ICE comes up to them and said, ‘Didn't you learn your lesson from the other day,’” he said. “They're weaponizing this killing to prevent people from just existing in their neighborhood. Like, you can't even be near ICE while they're operating, because that means you didn't learn your lesson of them murdering someone who was there.”
Some ICE agents are reportedly using Good’s shooting as a threat: Videos captured by bystanders in the Minneapolis area this week show agents asking if people have “learned from what just happened” while threatening them.
“It's upsetting, and it's just the next step in getting kicked in the guts by these guys,” Benson said. “I don't want to get in a situation where I've got a bunch of idiots yelling at me from outside of my car, possibly with guns drawn, and possibly giving me conflicting directions where they already have the outcome predetermined and my actions won't make any difference. That's the sort of thing that makes me really nervous: that they've already decided they need to make examples out of people.”
But Benson said he won’t give up. “We can't give up, and we can't stop doing everything we can to protect our neighbors, because we can't let them win.”
And so Benson hands out dashcams. “We’re in a situation right now where the only people that are helping are just good people, normal people, standing up and helping out the best way they can. That’s all they’ve got…what a stupid situation that we allowed this to get this far.”
Hundreds more federal agents being sent to Minneapolis, DHS Secretary Kristi Noem says
Noem said that more agents will arrive in the Twin Cities metro Sunday and Monday to help officers already there continue to do their work "safely." Riley Moser (CBS Minnesota)
Fake images of LeBron James, iShowSpeed, Dwayne “The Rock” Johnson, and even Nicolás Maduro show them in bed with AI-generated influencers.
Instagram AI Influencers Are Defaming Celebrities With Sex Scandals
AI-generated influencers are sharing fake images on Instagram that appear to show them having sex with celebrities like LeBron James, iShowSpeed, and Dwayne “The Rock” Johnson. One AI influencer even shared an image of her in bed with Venezuela’s president Nicolás Maduro. The images are AI generated but are not disclosed as such, and funnel users to an adult content site where the AI-generated influencers sell nude images.

This recent trend is the latest strategy from the growing business of monetizing AI-generated porn by harvesting attention on Instagram with shocking or salacious content. As with previous schemes we’ve covered, the Instagram posts that pretend to show attractive young women in bed with celebrities are created without the celebrities’ consent and are not disclosed as being AI generated, violating two of Instagram’s policies and showing once again that Meta is unable or unwilling to rein in AI-generated content on its platform.
Most of the Reels in this genre that I have seen follow a highly specific formula and started to appear around December 2025. First, we see a still image of an AI-generated influencer next to a celebrity, often in the form of a selfie with both of them looking at the camera. The text on the screen says “How it started.” Then, the video briefly cuts to another still image or video of the AI-generated influencer and the celebrity post-coitus, sweaty, with tousled hair and sometimes smeared makeup. Many of these posts use the same handful of audio clips. Since Instagram allows users to browse Reels that use the same audio, clicking on one of these will reveal dozens of examples of similar Reels.
LeBron James and adult film star Johnny Sins are frequent targets of these posts, but I’ve also seen similar Reels with the likeness of Twitch streamer iShowSpeed, Dwayne “The Rock” Johnson, MMA fighters Jon Jones and Conor McGregor, soccer player Cristiano Ronaldo, and many others, far too many to name them all. The AI influencer accounts obviously don’t care whether it's believable that these fake women are actually sleeping with celebrities and will include any known person who is likely to earn engagement. Amazingly, one AI influencer applied the same formula to Venezuela’s president Maduro shortly after he was captured by the United States.
These Instagram Reels frequently have hundreds of thousands and sometimes millions of views. A post from one of these AI influencers that shows her in bed with Jon Jones has 7.7 million views. A video showing another AI influencer in a bed with iShowSpeed has 14.5 million views.

Users who stumble upon one of these videos might be inclined to click on the AI influencer's username to check her bio and see if she has an OnlyFans account, as is the case with many adult content creators who promote their work on Instagram. What these users will find is an account bio that doesn’t disclose that it’s AI generated, and a link to Fanvue, an OnlyFans competitor with more permissive policies around AI-generated content. On Fanvue, these accounts do disclose that they are “AI-generated or enhanced,” and sell access to nude images and videos.
Meta did not respond to a request for comment, but removed some of the Reels I flagged.
Posting provocative AI-generated media in order to funnel eyeballs to adult content platforms where AI-generated porn can be monetized is now an established business. Sometimes, these AI influencers steal directly from real adult content creators by faceswapping themselves into their existing videos. Once in a while a new “meta” strategy for AI influencers will emerge and dominate the algorithm. For example, last year I wrote about people using AI to create influencers with Down syndrome who sell nudes.
Some other video formats I’ve seen from AI influencers recently follow the formula I describe in this article, but rather than suggesting the influencer slept with a celebrity, they show her sleeping with entire sports teams, African tribal chiefs, or Walmart managers, or sharing a man with her mom.
Notably, celebrities are better equipped than adult content creators to take on AI accounts that are using their likeness without consent, and last year LeBron James, a frequent target of this latest meta, sent a cease-and-desist notice to a company that was making AI videos of him and sharing them on Instagram.
LeBron James' Lawyers Send Cease-and-Desist to AI Company Making Pregnant Videos of Him
Viral Instagram accounts making LeBron 'brainrot' videos have also been banned. Jason Koebler (404 Media)
"They're being told that this is inevitable," a member of the 806 Data Center Resistance told 404 Media. "But Texas is this other beast."
Texans Are Fighting a 6,000 Acre Nuclear-Powered Datacenter
Billionaire Toby Neugebauer laughed when the Amarillo City Council asked him how he planned to handle the waste his planned datacenter would produce.

“I’m not laughing in disrespect to your question,” Neugebauer said. He explained that he’d just met with Texas Governor Greg Abbott, who had made it clear that any nuclear waste Neugebauer’s datacenter generated needed to go to Nevada, a state that’s not taking nuclear waste at the moment. “The answer is we don't have a great long term solution for how we’re doing nuclear waste.”
The meeting happened on October 28, 2025, and was one of a series of appearances Neugebauer has made before Amarillo’s leaders as he attempts to realize Project Matador: a massive 5,769-acre datacenter in the Texas Panhandle being constructed by Fermi America, a company he founded with former Secretary of Energy Rick Perry.

If built, Project Matador would be one of the largest datacenters in the world at around 18 million square feet. “What we’re talking about is creating the epicenter for artificial intelligence in the United States,” Neugebauer told the council. According to Neugebauer, the United States is in an existential race to build AI infrastructure. He sees it as a national security issue.
“You’re blessed to sit on the best place to develop AI compute in America,” he told Amarillo. “I just finished with Palantir, which is our nation’s tip of the spear in the AI war. They know that this is the place that we must do this. They’ve looked at every site on the planet. I was at the Department of War yesterday. So anyone who thinks this is some casual conversation about the mission critical aspect of this is just not being truthful.”
But it’s unclear if Palantir wants any part of Project Matador. One unnamed client—rumored to be Amazon—dropped out of the project in December and cancelled a $150 million contract with Fermi America. The news hit the company’s stock hard, sending its value into a tailspin and triggering a class action lawsuit from investors.
Yet construction continues. The plan says it’ll take 11 years to build out the massive datacenter, which will first be powered by a series of natural gas generators before the planned nuclear reactors come online.
Amarillo residents aren’t exactly thrilled at the prospect. A group called 806 Data Center Resistance has formed in opposition to the project’s construction. Kendra Kay, a tattoo artist in the area and a member of 806, told 404 Media that construction was already noisy and spiking electricity bills for locals.
“When we found out how big it was, none of us could really comprehend it,” she said. “We went out to the site and we were like, ‘Oh my god, this thing is huge.’ There’s already construction underway of one of four water tanks that hold three million gallons of water.”
For Kay and others, water is the core issue. It’s a scarce resource in the panhandle and Amarillo and other cities in the area already fight for every drop. “The water is the scariest part,” she said. “They’re asking for 2.5 million gallons per day. They said that they would come back, probably in six months, to ask for five million gallons per day. And then, after that, by 2027 they would come back and ask for 10 million gallons per day.”
During an October 15 city council meeting, Neugebauer told the city that Fermi would get its water “with or without” an agreement from the city. “The only difference is whether Amarillo benefits.” To many people it sounded like a threat, but Neugebauer got his deal and the city agreed to sell water to Fermi America for double the going rate.

“It wasn’t a threat,” Neugebauer said during another meeting on October 28. “I know people took my answer…as a threat. I think it’s a win-win. I know there are other water projects we can do…we fully got that the water was going to be issue 1, 2, and 3.”
“We can pay more for water than the consumer can. Which allows you all capital to be able to re-invest in other water projects,” he said. “I think what you’re gonna find is having a customer who can pay way more than what you wanna burden your constituents with will actually enhance your water availability issues.”
According to Neugebauer and plans filed with the Nuclear Regulatory Commission, the datacenter would generate and consume 11 gigawatts of power. The bulk of that, eventually, would be generated by four nuclear reactors. But nuclear reactors are complicated and expensive to build; everyone who has attempted to build one in the past few decades has gone over budget, and none of them were trying to build nuclear power plants in the desert.
Nuclear reactors, like datacenters, consume a lot of water. Because of that, most nuclear reactors are constructed near massive bodies of water and often near the ocean. “The viewpoint that nuclear reactors can only be built by streams and oceans is actually the opposite,” Neugebauer told the Amarillo city council in the meeting on October 28.
As evidence he pointed to the Palo Verde nuclear plant in Arizona. The massive Palo Verde plant is the only nuclear plant in the world not constructed near a ready source of water. It gets the water it needs by taking in the treated wastewater of nearby cities and towns.
That’s not the plan with Project Matador, which will use water sold to it by Amarillo and pulled from the nearby Ogallala Aquifer. “I am concerned that we’re going to run out of water and that this is going to change it from us having 30 years worth of water for agriculture to much less very quickly,” Kay told 404 Media.
The Ogallala Aquifer runs under parts of Colorado, Kansas, Nebraska, New Mexico, Oklahoma, South Dakota, Texas, and Wyoming. It’s the primary source of water for the Texas panhandle and it’s drying out.
“They don’t know how much faster because, despite how quickly this thing is moving, we don’t have any idea how much water they’re realistically going to use or need, so we don’t even know how to calculate the difference,” Kay said. “Below Lubbock, they’ve been running out of water for a while. The priority of this seems really stupid.”
According to Kay, communities near the datacenter feel trapped as they watch the construction grind on. “They’ve all lived here for several generations…they’re being told that this is inevitable. Fermi is going up to them and telling them ‘this is going to happen whether you like it or not so you might as well just sell me your property.’”
Kay said she and other activists have been showing up to city council meetings to voice their concerns and tell leaders not to approve permits for the datacenter and nuclear plants. Other communities across the country have successfully pushed datacenter builders out of their community. “But Texas is this other beast,” Kay said.
Jacinta Gonzalez, head of programs for MediaJustice, and her team have helped 806 Data Center Resistance get up and running, teaching it tactics they’ve seen pay off in other states. “In Tucson, Arizona we were able to see the city council vote ‘no’ to offer water to Project Blue, which was a huge proposed Amazon datacenter happening there,” she said. “If you look around, everywhere from Missouri to Indiana to places in Georgia, we’re seeing communities pass moratoriums, we’re seeing different projects withdraw their proposals because communities find out about it and are able to mobilize and organize against this.”
“The community in Amarillo is still figuring out what that’s going to look like for them,” she said. “These are really big interests. Rick Perry. Palantir. These are not folks who are used to hearing ‘no’ or respecting community wishes. So the community will have to be really nimble and up for a fight. We don’t know what will happen if we organize, but we definitely know what will happen if we don’t.”
Putting people in bikinis is just the tip of the iceberg. On Telegram, users are finding ways to make Grok do far worse.#News #AI #grok
Inside the Telegram Channel Jailbreaking Grok Over and Over Again
For the past two months I’ve been following a Telegram community tricking Grok into generating nonconsensual sexual images and videos of real people with increasingly convoluted methods.

As countless images on X over the last week once again showed us, it doesn’t take much to get Elon Musk’s “based” AI model to create nonconsensual images. As Jason wrote Monday, all users have to do is reply to an image of a woman and ask Grok to “put a bikini on her,” and it will reply with that image, even if the person in the photograph is a minor. As I reported back in May, people also managed to create nonconsensual nudes by replying to images posted to X and asking Grok to “remove her clothes.”
These issues are bad enough, but on Telegram, a community of thousands are working around the clock to make Grok produce far worse. They share Grok-generated videos of real women taking their clothes off and graphic nonconsensual videos of any kind of sexual act these users can imagine and slip by Grok’s guardrails, including blowjobs, penetration, choking, and bondage. The channel, which has shut down and regrouped a couple of times over the last two years, focuses on jailbreaking all kinds of AI tools in order to create nonconsensual media, but since November has focused on Grok almost exclusively.
The channel has also noticed the media attention Grok got for nonconsensual images lately, and is worried that it will end the good times members have had creating nonconsensual media with Grok for months.
“Too many people using grok under girls post are gonna destroy grok fakes. Should be done in private groups,” one member of the Telegram channel wrote last week.
Musk always conceived of Grok as a more permissive, “maximally based” competitor to chatbots like OpenAI’s ChatGPT. But despite repeatedly allowing nonconsensual content to be generated and go viral on the social media platform it's integrated with, the conversations in the Telegram channel and sophistication of the bypasses shared there are proof that Grok does have limits and policies it wants to enforce. The Telegram channel is a record of the cat and mouse game between Grok and this community of jailbreakers, showing how Grok fails to stop them over and over again, and that Grok doesn’t appear to have the means or the will to stop its AI model from producing the nonconsensual content it is fundamentally capable of producing.
The jailbreakers initially used primitive methods on Grok and other AI image generators, like writing text prompts that don’t include any terms that obviously describe abusive content and that can be automatically detected and stopped at the point the prompt is presented to the AI model, before the image is generated. This usually means misspelling the names of celebrities and describing sexual acts without using any explicit terms. This is how users infamously created nonconsensual nude images of Taylor Swift with Microsoft’s Designer (which were also viral on X). Many generative AI tools still fall for this trick until we find it’s being abused and report on it.
Having mostly exhausted this strategy with Grok, the Telegram channel now has far more complicated bypasses. Most of them rely on the “image-to-image” generation feature, meaning providing an existing image to the AI tool and editing it with a prompt. This is a much more difficult feature for AI companies to moderate because it requires using machine vision to moderate the user-provided image, as opposed to filtering out specific names or terms, which is the common method for moderating “text-to-image” AI generations.
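The gap between the two moderation approaches can be illustrated with a toy example. This is a hedged sketch, not any company’s actual system; the blocklist contents and function name are invented for illustration:

```python
# A minimal sketch of the kind of keyword blocklist that "text-to-image"
# moderation often relies on, and why trivial misspellings slip past it.
# Everything here is illustrative, not any real company's filter.

BLOCKLIST = {"nude", "naked", "explicit"}

def prompt_allowed(prompt: str) -> bool:
    """Reject a prompt if any token matches the blocklist exactly."""
    tokens = prompt.lower().split()
    return not any(token.strip(".,!?") in BLOCKLIST for token in tokens)

print(prompt_allowed("a nude portrait"))   # False: exact match is caught
print(prompt_allowed("a nudde portrait"))  # True: one extra letter evades it
```

A filter like this is cheap to run at prompt time, which is exactly why misspellings defeat it; screening a user-supplied image instead requires a vision model, a far harder and more error-prone check, which is the weakness the image-to-image bypasses exploit.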
Without going into too much detail, some of the successful methods I’ve seen members of the Telegram channel share include creating collages of non-explicit images of real people and nude images of other people and combining them with certain prompts, generating nude or almost nude images of people with prompts that hide nipples or genitalia, describing certain fluids or facial expressions without using any explicit terms, and editing random elements into images, which apparently confuses Grok’s moderation methods.
X has not responded to multiple requests for comment about this channel since December 8, but to be fair, it’s clear that despite Elon Musk’s vice signaling and the fact that this type of abuse is repeatedly generated with Grok and shared on X, the company doesn’t want users to create at least some of this media and is actively trying to stop it. This is clear because of the cycle that emerges on the Telegram channel: One user finds a method for producing a particularly convincing and lurid AI-generated sexual video of a real person, sometimes importing it from a different online community like 4chan, and shares it with the group. Other users then excitedly flood the channel with their own creations using the same method. Then some users start reporting Grok is blocking their generations for violating its policies, until finally users decide Grok has closed the loophole and the exploit is dead. Some time goes by, a new user shares a new method, and the cycle begins anew.
I’ve started and stopped writing a story about a few of these cycles several times and eventually decided not to because by the time I was finished reporting the story Grok had fixed the loophole. It’s now clear that the problem with Grok is not any particular method, but that overall, so far, Grok is losing this game of whack-a-mole badly.
This dynamic, between how tech companies imagine their product will function in the real world and how it actually works once users get their hands on it, is nothing new. Some amount of policy violating or illegal content is going to slip through the cracks on any social media platform, no matter how good its moderation is.
It’s good and correct for people to be shocked and upset when they wake up one morning and see that their X feed is flooded with AI-generated images of minors in bikinis, but what is clear to me from following this Telegram community for a couple of years now is that nonconsensual sexual images of real people, including minors, are the cost of doing business with AI image generators. Some companies do a better job of preventing this abuse than others, but judging by the exploits I see on Telegram, when it comes to Grok, this problem will get a lot worse before it gets better.
The publisher is teaming with a company that claims its proprietary AI can ‘provide 2 to 3 times higher quality translations’ than other large language models.#News #AI
HarperCollins Will Use AI to Translate Harlequin Romance Novels
Book publisher HarperCollins said it will start translating romance novels under its famous Harlequin label in France using AI, reducing or eliminating the pay for the team of human contract translators who previously did this work.

Publishers Weekly broke the news in English after French outlets reported on the story in December. According to a joint statement from the French Association of Literary Translators (ATLF) and En Chair et en Os (In Flesh and Bone)—an anti-AI activist group of French translators—HarperCollins France has been contacting its translators to tell them they’re being replaced with machines in 2026.
The ATLF/En Chair et en Os statement explained that HarperCollins France would use a third-party company called Fluent Planet to run Harlequin romance novels through a machine translation system. The books would then be checked for errors and finalized by a team of freelancers. The ATLF and En Chair et en Os called on writers, book workers, and readers to refuse this machine-translated future. They begged people to “reaffirm our unconditional commitment to human texts, created by human beings, in dignified working conditions.”

HarperCollins France did not return 404 Media’s request for comment, but told Publishers Weekly that “no Harlequin collection has been translated solely using machine translation generated by artificial intelligence.” In its statement, the company explained that it turned to AI translations because Harlequin’s sales had declined in France.
“We want to continue offering readers as many publications as possible at the current very low retail price, which is €4.99 for the Azur series, for example,” the statement said. “We are therefore conducting tests with Fluent Planet, a French company specializing in translation for 20 years: this company uses experienced translators who utilize artificial intelligence tools for part of their work.”
According to Fluent Planet’s website, its translators “studied at the best translation universities or have decades of experience under their belt.” These human translators are aided by a proprietary translation agent Fluent Planet calls BrIAn.
“When compared to standard machine translation systems that use neural networks, BrIAn can provide 2 to 3 times higher quality translations, that are more accurate, offer idiomatic phrasing, provide a deeper understanding of the meaning and a faithful representation of the style and emotions of the source text,” the site said. “BrIAn takes into account the author’s tone and intention, making it highly effective for complex literary or marketing content.”
Translation is delicate work that requires deep knowledge of both languages. Nuances and subtleties—two aspects of writing AIs are notoriously terrible at—can be lost or distorted if not carefully considered during the translation process. Translation is not simply a substitution game. Idioms, jargon, and regional dialects come into play and need a human touch to work in another language. Even with humans, the results are never perfect.
“I will tell you that the author community is up in arms about this, as we are anytime an announcement arrives that involves cutting back on human creativity and ingenuity in order to save money,” romance author Caroline Lee told 404 Media. “Sure, AI-generated art is going to be cheaper, but it cuts out our cover artists, many of whom we've been working with for a decade or more (indie publishing first took off around 2011). AI editing can pick up on (some) typos, but not as well as our human editors can. And of course, we're all worried what the glut of AI-generated books will mean for our author careers.”
HarperCollins France is not the first major publisher to announce it is giving some of its translation duties over to an AI. In March of 2025, UK publisher Taylor & Francis announced plans to use AI to publish English-language books in other languages to “expand readership.” The publisher promised AI-translated books would be “copyedited and then reviewed by Taylor & Francis editors and the books’ authors before publication.”
In a manifesto on its website, In Flesh and Bone begged readers to “say no to soulless translations.”
“These generative programmes are fed with existing human works, mined as simple bulk data, without offering the authors the choice to give their consent or not,” the manifesto said. “Furthermore, the data processing remains dependent on an enormous amount of human labour that is invisibilized, often carried out in conditions that are appalling, underpaid, dehumanizing, even traumatizing (when content moderation is involved). Finally, the storage of the necessary data for the functioning and training of algorithms produces a disastrous ecological footprint in terms of carbon balance and energy consumption. What may appear as progress is actually leading to immense losses of expertise, cognitive skills, and intellectual capacity across all human societies. It paves the way for a soulless, heartless, gutless future, saturated with standardized content, produced instantaneously in virtually unlimited quantity. We are close to a point of no return that we would never forgive ourselves for reaching.”
The translation of the manifesto from French to English was done by the collective themselves.
The nonprofit research group Epoch AI is tracking the physical imprint of the technology that’s changing the world.#News
Researchers Are Hunting America for Hidden Datacenters
A team of researchers at Epoch AI, a non-profit research institute, is using open-source intelligence to map the growth of America’s datacenters. The team pores over satellite imagery, building permits, and other local legal documents to build a map of the massive, computer-filled buildings springing up across the United States. They take that data and turn it into an interactive map that lists the datacenters’ costs, power output, and owners.

Massive datacenter construction projects are a growing and controversial industry in America. Silicon Valley and the Trump administration are betting the entire American economy on the continued growth of AI, a mission that’ll require spending billions of dollars on datacenters and new energy infrastructure. Epoch AI’s map acts as a central repository of information about the noisy, water-hungry buildings growing in our communities.
On Epoch’s map there’s a green circle over New Albany, Ohio. Click the circle and it’ll take you to a satellite view of the business complex where Meta is constructing its "Prometheus" datacenter. According to Epoch, the total cost of construction for the datacenter so far is $18 billion and it uses 691 megawatts of power.

“A combination of weatherproof tents, colocation facilities and Meta’s traditional datacenter buildings, this datacenter shows Meta’s pivot towards AI,” Epoch said in the notes for the datacenter. “Reflecting that patchwork, our analysis uses a combination of land use maps, natural gas turbine permitting, and satellite/aerial imagery of cooling equipment to estimate compute capacity.” Users can even click through a timeline of the construction and watch the satellite imagery change as the datacenter grows.
“There’s a lot of public discourse and discourse with researchers about the future of AI,” Jean-Stanislas Denain, a senior researcher at Epoch AI, told 404 Media. “Insiders have access to a lot of proprietary data, but many people do not. So it just seems very good for there to be this online resource.”
Zoom back out to a wider view of the country and click a circle in Memphis, Tennessee to learn about xAI’s Colossus 2. “To start powering the data center, xAI made the unusual choice to install natural gas turbines across the border in Mississippi, possibly to get faster approval for their operation,” Epoch AI noted. “Battery facility looks complete (though more might be added). Turbines look connected up, minimal construction around them. Based on this, and on earlier tweets from Elon Musk, 110,000 NVIDIA GB200 GPUs are operational.”
Information about the datacenters is incomplete. It’s impossible to know exactly how much everything costs and how it will run. State and local laws are variable so not all construction information is public and satellite imagery can only tell a person so much about what’s happening on the ground. Epoch AI’s map is likely only watching a fraction of the world’s datacenters. “As of November 2025, this subset is an estimated 15% of AI compute that has been delivered by chip manufacturers globally,” Epoch AI explained on its website. “We are expanding our search to find the largest data centers worldwide, using satellite imagery and other data sources.”

The methodology section of the site explains how Epoch AI does the work and includes timelapse photography of the monstrous datacenters growing. One of the big visual tells it looks for in satellite imagery is cooling equipment. “Modern AI data centers generate so much heat that the cooling equipment extends outside the buildings, usually around them or on the roof. Satellite imagery lets us identify the type of cooling, the number of cooling units, and (if applicable) the number of fans on each unit,” it said.
“We focus on cooling because it’s a very useful clue for figuring out the power consumption,” Denain said. “We first want to estimate power, but often we don’t have much information about that…and then we can relate power to the amount of compute and also the cost of building it. If you want to estimate power, cooling is pretty useful.”
After counting the fans, the Epoch team plugs the information into a model it’s designed that can help it figure out how much energy a datacenter uses. “This model is based on the type of cooling and physical features like the number of fans, the diameter of the fans, and how much floorspace the full cooling unit takes up,” Epoch AI explained on its website. “The cooling model still has significant uncertainty. Specification data suggests that the actual cooling capacity can be as much as 2× higher or lower than our model estimates, depending on the chosen fan speed.”
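The arithmetic behind an estimate like this can be sketched in a few lines. This is a rough illustration only: the per-fan heat-rejection figure below is an invented placeholder, not one of Epoch AI's actual coefficients, though the roughly 2x uncertainty band mirrors what Epoch describes.

```python
# A rough sketch of inferring datacenter power from cooling equipment
# visible in satellite imagery. The kw_per_fan value is an illustrative
# assumption; real figures depend on fan diameter, speed, and cooling type.

def estimate_power_mw(num_units: int, fans_per_unit: int,
                      kw_per_fan: float = 35.0) -> tuple[float, float, float]:
    """Return (low, central, high) power estimates in megawatts."""
    central = num_units * fans_per_unit * kw_per_fan / 1000.0  # kW -> MW
    # Epoch notes actual cooling capacity can be ~2x higher or lower
    # than its model's estimate, depending on fan speed.
    return central / 2, central, central * 2

low, mid, high = estimate_power_mw(num_units=120, fans_per_unit=10)
print(f"{low:.0f}-{high:.0f} MW (central estimate {mid:.0f} MW)")
```

The point of the exercise is that power is the bridge quantity: once a power figure exists, it can be related to the amount of compute inside and the likely cost of the build, which is why Denain calls cooling "a very useful clue."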
Charting America’s datacenters with open source intelligence isn’t a perfect practice. “In the discovery phase, some data centers will be so obscure that we won’t find news, rumors, or existing databases mentioning them. While larger data centers are more likely to be reported due to their significance and physical footprint, there are many smaller data centers (<100 MW) that could add up to significant levels of AI compute,” Epoch AI said.
But Epoch AI continues to expand its toolset and look through more satellite imagery with the goal of mapping Big Tech’s newest project. The goal is to cast light into the darkness. “Even if we have a perfect analysis of a data center, we may still be in the dark about who uses it, and how much they use,” Epoch AI’s website said. “AI companies like OpenAI and Anthropic make deals with hyperscalers such as Oracle and Amazon to rent compute, but the arrangement for any given data center is sometimes secret.”
iCloud, Mega, and as a torrent. Archivists have uploaded the 60 Minutes episode Bari Weiss spiked.#News
Archivists Posted the 60 Minutes CECOT Segment Bari Weiss Killed
Archivists have saved and uploaded copies of the 60 Minutes episode that new CBS editor-in-chief Bari Weiss ordered shelved, posting it as a torrent and to multiple file-sharing sites after an international distributor aired the episode.

The moves show how difficult it may be for CBS to stop the episode, which focused on the experience of Venezuelans deported to the Salvadoran megaprison CECOT, from spreading across the internet. Weiss stopped the episode from being released Sunday even after it was reviewed and checked multiple times by the news outlet, according to an email CBS correspondent Sharyn Alfonsi sent to her colleagues.
“You may recall earlier this year when the Trump administration deported hundreds of Venezuelan men to El Salvador, a country most had no connection to,” the show starts, according to a copy viewed by 404 Media.
Oversight Democrats released a new trove of Epstein pictures on Dropbox and left the comments on.#News #JeffreyEpstein
The Government Added a Comments Section to the Epstein Photo Dump
Update: After publication of this piece, House Oversight Democrats disabled comments on the photos. The original article follows below.

Thursday afternoon House Democrats publicly released a new trove of photographs they’ve obtained from the estate of Jeffrey Epstein via Dropbox. They left the comments on, so anyone who is signed into Dropbox and browsing the material can leave behind their thoughts.
Given that the investigation into Epstein is one of the most closely followed cases in the world and a subject of endless conspiracy theories, and that the committee released the trove of photographs with no context, it’s not surprising that people immediately began commenting on the photographs.
playlist.megaphone.fm?p=TBIEA2…
“Really punchable face,” BedeScarlet—whose avatar is Cloud from Final Fantasy VII—said above a picture of New York Times columnist David Brooks. Brooks, who wrote a column about his boredom with the Epstein case in November, attended a dinner with Epstein in 2011 and appears in two photographs in this new document dump.

“Noam Chomsky,” Alya Colours (a frequent commenter on the Epstein Dropbox) said below a photograph of the linguist talking to Epstein on a plane. Below this there is a little prompt from Dropbox asking me to “join the conversation” next to a smiley face.
In another picture, director Woody Allen is bundled up to his eyes in a heavy coat while Epstein side hugs him. “Yep, I’d know that face anywhere,” Susan Brown commented.
Among the pictures is a closeup of a prescription bottle labeled Phenazopyridine. “This is a medication used to treat pain from urinary tract infections,” Rebecca Stinton added, helpfully, in the comments.
“The fuck were they doing all that math for?” BedeScarlet said next to a picture of Epstein in front of a whiteboard covered in equations.
“Shit probably tastes like ass,” he added to a picture of Epstein cooking something in a kitchen.
There are darker and weirder photographs in this collection that, as of this writing, do not yet have comments. There’s a pair of box springs in an unfinished room lit by the sun. There is a map of Little St James indicating where Epstein wants various buildings constructed. Bill Gates is shown in two photos standing next to women with their faces blocked out.
And then there are the Lolita pictures. A woman’s foot sits in the foreground, a worn, annotated copy of Vladimir Nabokov’s novel Lolita in the background. “She was Lo, plain Lo, in the morning, standing four feet ten in one sock,” is written on the foot, a quote from the novel.
These photos are followed by a series of pictures of passports with the information redacted. Some are from Ukraine. There’s one from South Africa and another from the Czech Republic.
House Democrats allowing the public to comment on these photos is funny, and it’s unclear whether it was intentional or a mistake. It’s also a continuation of their just-get-it-out-there approach to publishing other material, which has sometimes arrived in unsorted caches that readers then have to dig through. The only grand revelation in the new material is that Brooks was present at a dinner with Epstein in 2011.
“As a journalist, David Brooks regularly attends events to speak with noted and important business leaders to inform his columns, which is exactly what happened at this 2011 event. Mr. Brooks had no contact with him before or after this single attendance at a widely-attended dinner,” a Times spokesperson told Semafor’s Max Tani.
House Oversight Democrats did not immediately return 404 Media’s request for comment.
A hacker gained control of the 1,100-phone farm powering covert, AI-generated ads on TikTok.#News #TikTok #Adblock #a16z
Hack Reveals the a16z-Backed Phone Farm Flooding TikTok With AI Influencers
Doublespeed, a startup backed by Andreessen Horowitz (a16z) that uses a phone farm to manage at least hundreds of AI-generated social media accounts and promote products, has been hacked. The hack reveals what products the AI-generated accounts are promoting, often without the required disclosure that these are advertisements, and allowed the hacker to take control of more than 1,000 smartphones that power the company.

The hacker, who asked for anonymity because he feared retaliation from the company, said he reported the vulnerability to Doublespeed on October 31. At the time of writing, the hacker said he still has access to the company’s backend, including the phone farm itself. Doublespeed did not respond to a request for comment.
“I could see the phones in use, which manager (the PCs controlling the phones) they had, which TikTok accounts they were assigned, proxies in use (and their passwords), and pending tasks. As well as the link to control devices for each manager,” the hacker told me. “I could have used their phones for compute resources, or maybe spam. Even if they're just phones, there are around 1100 of them, with proxy access, for free. I think I could have used the linked accounts by puppeting the phones or adding tasks, but haven't tried.”
As I reported in October, Doublespeed raised $1 million from a16z as part of its “Speedrun” accelerator program, “a fast‐paced, 12-week startup program that guides founders through every critical stage of their growth.” Doublespeed uses generative AI to flood social media with accounts and posts to promote certain products on behalf of its clients. Social media companies attempt to detect and remove this type of astroturfing for violating their inauthentic behavior policies, which is why Doublespeed uses a bank of phones to emulate the behavior of real users. So-called “click farms” or “phone farms” often use hundreds of mobile phones to fake online engagement or reviews for the same reason.
The hacker told me he had access to around 1,100 smartphones Doublespeed operates. One way the hacker proved he had access to devices was by taking control of one phone’s camera, which seemingly showed it in a rack with other phones.
Images the hacker captured from some of the phones in Doublespeed's phone farm.
The hacker also shared a list with me of more than 400 TikTok accounts Doublespeed operates. Around 200 of those were actively promoting products on TikTok, mostly without disclosing the posts were ads, according to 404 Media’s review of them. It’s not clear if the other 200 accounts ever promoted products or were being “warmed up,” as Doublespeed describes the process of making the accounts appear authentic before it starts promoting in order to avoid a ban.
I’ve seen TikTok accounts operated by Doublespeed promote language learning apps, dating apps, a Bible app, supplements, and a massager.
One health-themed Doublespeed TikTok account named Chloe Davis posted almost 200 slideshows featuring a middle-aged AI-generated woman. In the posts, the woman usually discusses various physical ailments and how she deals with them. The last image in each slideshow is a picture of someone using a massage roller from a company called Vibit. Vibit did not respond to a request for comment.
A Doublespeed TikTok account promoting a Vibit massager.
Another Doublespeed-operated TikTok account named pattyluvslife posted dozens of slideshows of a young woman who, according to her bio, is a student at UCLA. All the posts from this account talk about how “big pharma” and the supplements industry are a scam. But the posts also always promoted a moringa supplement from a company called Rosabella. The AI-generated woman in these TikTok posts often holds up the bottle of supplements, but it’s obviously AI-generated as the text on the bottle is jumbled gibberish.
An AI-generated image promoting a Rosabella supplement.
Rosabella’s site also claims the product is “viral on TikTok.” Rosabella did not respond to a request for comment.

An image from Rosabella's site claiming its brand is viral on TikTok.
While most of the content I’ve seen on Doublespeed-operated TikTok accounts consists of AI-generated slideshows and still images, Doublespeed can generate videos as well. One Doublespeed-operated account posted several AI-generated videos of a young woman voguing at the camera. The account was promoting a company called Playkit, a “TikTok content agency” that pays users to promote products on behalf of its clients. Notably, this is the exact kind of business Doublespeed would in theory be able to replace with AI-generated accounts. Playkit did not respond to a request for comment.
An AI-generated video promoting Playkit, a TikTok content agency.
TikTok told me that its Community Guidelines make clear that it requires creators to label AI-generated or significantly edited content that shows realistic-looking scenes or people. After I reached out for comment, TikTok added a label to the Doublespeed-operated accounts I flagged indicating they're AI-generated.
A16z did not respond to a request for comment.
Doublespeed has said it plans to soon launch its services on Instagram, Reddit, and X, but so far seems to only be operating on TikTok. In October, a Reddit spokesperson told me that Doublespeed’s service would violate its terms of service. Meta did not respond to a request for comment. As we noted in October, Marc Andreessen, after whom half of Andreessen Horowitz is named, sits on Meta’s board of directors. Doublespeed’s business would clearly violate Meta’s policy on “authentic identity representation.”
“We’re bringing a new kind of sentience into existence,” Anthropic's Jason Clinton said after launching the bot.#News
Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee
A Discord community for gay gamers is in disarray after one of its moderators, an executive at Anthropic, forced the company’s AI chatbot on the Discord despite protests from members.

Users voted to restrict Anthropic's Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO) and a moderator in the Discord, overrode them. According to members of this Discord community who spoke with 404 Media on the condition of anonymity, the Discord that was once vibrant is now a ghost town. They blame the chatbot and Clinton’s behavior following its launch.
Matthew Gault (404 Media)
Mark Russo reported the dataset to all the right organizations, but still couldn't get into his accounts for months.#News #AI #Google
A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It
Google suspended a mobile app developer’s accounts after he uploaded AI training data to his Google Drive. Unbeknownst to him, the widely used dataset, which is cited in a number of academic papers and distributed via an academic file sharing site, contained child sexual abuse material. The developer reported the dataset to a child safety organization, which eventually resulted in the dataset’s removal, but he says Google’s response has been “devastating.”

A message from Google said his account “has content that involves a child being sexually abused or exploited. This is a severe violation of Google's policies and might be illegal.”
The incident shows how AI training data, which is collected by indiscriminately scraping the internet, can impact people who use it without realizing it contains illegal images. The incident also shows how hard it is to identify harmful images in training data composed of millions of images, which in this case were only discovered accidentally by a lone developer who tripped Google’s automated moderation tools.
💡
Have you discovered harmful materials in AI training data? I would love to hear from you. Using a non-work device, you can message me securely on Signal at @emanuel.404. Otherwise, send me an email at emanuel@404media.co.

In October, I wrote about the NudeNet dataset, which contains more than 700,000 images scraped from the internet and is used to train AI image classifiers to automatically detect nudity. The Canadian Centre for Child Protection (C3P) said it found more than 120 images of identified or known victims of CSAM in the dataset, including nearly 70 images focused on the genital or anal area of children who are confirmed or appear to be pre-pubescent. “In some cases, images depicting sexual or abusive acts involving children and teenagers such as fellatio or penile-vaginal penetration,” C3P said.
In October, Lloyd Richardson, C3P's director of technology, told me that the organization decided to investigate the NudeNet training data after getting a tip from an individual via its cyber tipline that it might contain CSAM. After I published that story, a developer named Mark Russo contacted me to say that he’s the individual who tipped C3P, but that he’s still suffering the consequences of his discovery.
Russo, an independent developer, told me he was working on an on-device NSFW image detector. The app runs and classifies images entirely on the user’s device, so the content stays private. To benchmark his tool, Russo used NudeNet, a publicly available dataset that’s cited in a number of academic papers about content moderation. Russo unzipped the dataset into his Google Drive. Shortly after, his Google account was suspended for “inappropriate material.”
On July 31, Russo lost access to all the services associated with his Google account, including his Gmail of 14 years, Firebase, the platform that serves as the backend for his apps, AdMob, the mobile app monetization platform, and Google Cloud.
“This wasn’t just disruptive — it was devastating. I rely on these tools to develop, monitor, and maintain my apps,” Russo wrote on his personal blog. “With no access, I’m flying blind.”
Russo filed an appeal of Google’s decision the same day, explaining that the images came from NudeNet, which he believed was a reputable research dataset with only adult content. Google acknowledged the appeal, but upheld its suspension, and rejected a second appeal as well. He is still locked out of his Google account and the Google services associated with it.
Russo also contacted the National Center for Missing & Exploited Children (NCMEC) and C3P. C3P investigated the dataset, found CSAM, and notified Academic Torrents, where the NudeNet dataset was hosted, which removed it.
As C3P noted at the time, NudeNet was cited or used by more than 250 academic works. A non-exhaustive review of 50 of those academic projects found 134 made use of the NudeNet dataset, and 29 relied on the NudeNet classifier or model. But Russo is the only developer we know of who was banned for using it, and the only one who reported it to an organization whose investigation led to the dataset’s removal.
After I reached out for comment, Google investigated Russo’s account again and reinstated it.
“Google is committed to fighting the spread of CSAM and we have robust protections against the dissemination of this type of content,” a Google spokesperson told me in an email. “In this case, while CSAM was detected in the user account, the review should have determined that the user's upload was non-malicious. The account in question has been reinstated, and we are committed to continuously improving our processes.”
“I understand I’m just an independent developer—the kind of person Google doesn’t care about,” Russo told me. “But that’s exactly why this story matters. It’s not just about me losing access; it’s about how the same systems that claim to fight abuse are silencing legitimate research and innovation through opaque automation [...] I tried to do the right thing — and I was punished.”
As Canada’s tipline for reporting online child sexual abuse and exploitation, Cybertip.ca is dedicated to reducing child victimization through technology, education, and public awareness, along with supporting survivors and their families. (Cybertip.ca)
The Department of War aims to put Google Gemini 'directly into the hands of every American warrior.'
Pete Hegseth Says the Pentagon's New Chatbot Will Make America 'More Lethal'
Secretary of War Pete Hegseth announced the rollout of GenAI.mil today in a video posted to X. To hear Hegseth tell it, the website is “the future of American warfare.” In practice, based on what we know so far from press releases and Hegseth’s posturing, GenAI.mil appears to be a custom chatbot interface for Google Gemini that can handle some forms of sensitive—but not classified—data.

Hegseth’s announcement was full of bold pronouncements about the future of killing people. These kinds of pronouncements are typical of the second Trump administration, which has said it believes the rush to “win” AI is an existential threat on par with the invention of nuclear weapons during World War II.
Hegseth, however, did not talk about weapons in his announcement. He talked about spreadsheets and videos. “At the click of a button, AI models on GenAI can be used to conduct deep research, format documents, and even analyze video or imagery at unprecedented speed,” Hegseth said in the video on X. Office work, basically. “We will continue to aggressively field the world’s best technology to make our fighting force more lethal than ever before.”

Emil Michael, the Pentagon’s under secretary for research and engineering, also stressed how important GenAI would be to the process of killing people in a press release about the site’s launch.
“There is no prize for second place in the global race for AI dominance. We are moving rapidly to deploy powerful AI capabilities like Gemini for Government directly to our workforce. AI is America's next Manifest Destiny, and we're ensuring that we dominate this new frontier,” Michael said in the press release, referencing the 19th century American belief that God had divinely ordained Americans to settle the west at the same time he announced a new chatbot.
The press release says Google Cloud's Gemini for Government will be the first model available on the internal platform. It’s certified for Controlled Unclassified Information, the release states, and it claims that because the model is web grounded with Google Search (meaning it pulls from Google search results to answer queries), it is “reliable” and this “dramatically reduces the risk of AI hallucinations.” As we’ve covered, because Google search results are themselves consuming AI content that contains errors and AI-invented data from across the web, search has become nearly unusable for regular consumers and researchers alike.
During a press conference about the rollout this morning, Michael told reporters that GenAI.mil would soon incorporate other AI models and would one day be able to handle classified as well as sensitive data. As of this writing, GenAI’s website is down.
“For the first time ever, by the end of this week, three million employees, warfighters, contractors, are going to have AI on their desktop, every single one,” Michael told reporters this morning, according to Breaking Defense. They’ll “start with three million people, start innovating, using building, asking more about what they can do, then bring those to the higher classification level, bringing in different capabilities,” he said.
The second Trump administration has done everything in its power to make it easier for the people in Silicon Valley to push AI on America and the world. It has done this, in part, by framing it as a national security issue. Trump has signed several executive orders aimed at cutting regulations around data centers and the construction of nuclear power plants. He’s threatened to sign another that would block states from passing their own AI regulations. Each executive order and piece of proposed legislation threatens that losing the AI race would mean making America weak and vulnerable and erode national security.
The country’s tech moguls are rushing to build data centers and nuclear power plants while the boom time continues. Never mind that people do not want to live next to data centers for a whole host of reasons. Never mind that tech companies are using faulty AIs to speed up the construction of nuclear power plants. Never mind that the Pentagon already had a proprietary LLM it had operated since 2024.
“We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America's commercial genius, and we're embedding generative AI into our daily battle rhythm,” Hegseth said in the press release about GenAI.mil. “AI tools present boundless opportunities to increase efficiency, and we are thrilled to witness AI's future positive impact across the War Department.”
Pentagon rolls out GenAI platform to all personnel, using Google’s Gemini
Other “frontier AI capabilities” will join Gemini on the new GenAI.mil platform, meant to make generative AI tools available to all three million military and civilian personnel, the Department of Defense announced. Sydney J. Freedberg Jr. (Breaking Defense)
Instagram is generating headlines for users’ posts that appear in Google Search results. Users say the headlines misrepresent them.
Instagram Is Generating Inaccurate SEO Bait for Your Posts
Instagram is generating headlines for users’ Instagram posts without their knowledge, seemingly in an attempt to get those posts to rank higher in Google Search results.

I first noticed Instagram-generated headlines thanks to a Bluesky post from the author Jeff VanderMeer. Last week, VanderMeer posted a video to Instagram of a bunny eating a banana. VanderMeer didn’t include a caption or comment with the post, but noticed that it appeared in Google Search results with the following headline: “Meet the Bunny Who Loves Eating Bananas, A Nutritious Snack For Your Pet.”
Another Instagram post from the Groton Public Library in Massachusetts—an image of VanderMeer’s Annihilation book cover promoting a group reading—also didn’t include a caption or comment, but appears in Google Search results with the following headline: “Join Jeff VanderMeer on a Thrilling Beachside Adventure with Mesta …”
I’ve confirmed that Instagram is generating headlines in a similar style for other users without their knowledge. One cosplayer who wished to remain anonymous posted a video of herself showing off costumes in various locations. The same post appeared on Google with a headline about discovering real-life cosplay locations in Seattle. The Instagram post mentioned the city in a hashtag but did not contain anything resembling that headline.
Google told me that it is not generating the headlines, and that it’s pulling the text directly from Instagram.
Meta told me in an email that it recently began using AI to generate titles for posts that appear in search engine results, and that this helps people better understand the content. Meta said that, as with all AI-generated content, the titles are not always accurate. Meta also linked me to this Help Center article to explain how users can turn off search engine indexing for their posts.
After this article was published, several readers reached out to note that other platforms, like TikTok and LinkedIn, also generate SEO headlines for users' posts.
“I hate it,” VanderMeer told me in an email. “If I post content, I want to be the one contextualizing it, not some third party. It's especially bad because they're using the most click-bait style of headline generation, which is antithetical to how I try to be on social—which is absolutely NOT calculated, but organic, humorous, and sincere. Then you add in that this is likely an automated AI process, which means unintentionally contributing to theft and a junk industry, and that the headlines are often inaccurate and the summary descriptions below the headline even worse... basically, your post through search results becomes shitty spam.”
“I would not write mediocre text like that and it sounds as if it was auto-generated at-scale with an LLM. This becomes problematic when the headline or description advertises someone in a way that is not how they would personally describe themselves,” Brian Dang, another cosplayer who goes by @mrdangphotos and noticed Instagram generated headlines for his posts, told me. We don’t know how exactly Instagram is generating these headlines.
By using Google's Rich Results Test tool, which shows what Google sees for any site, I saw that these headlines appeared under the <title></title> tags for those posts’ Instagram pages.
“It appears that Instagram is only serving that title to Google (and perhaps other search bots),” Jon Henshaw, a search engine optimization (SEO) expert and editor of Coywolf, told me in an email. “I couldn't find any reference to it in the pre-rendered or rendered HTML in Chrome Dev Tools as a regular visitor on my home network. It does appear like Instagram is generating titles and doing it explicitly for search engines.”
When I looked at the code for these pages, I saw that Instagram was also generating long descriptions for posts without the user’s knowledge, like: “Seattle’s cosplay photography is a treasure trove of inspiration for fans of the genre. Check out these real-life cosplay locations and photos taken by @mrdangphotos. From costumes to locations, get the scoop on how to recreate these looks and capture your own cosplay moments in Seattle.”
Neither the generated headlines nor the descriptions are the alternative text (alt text) that Instagram automatically generates for accessibility reasons. To create alt text, Instagram uses computer vision and artificial intelligence to automatically describe an image so that people who are blind or have low vision can access it with a screen reader. Sometimes the alt text Instagram generates appears under the headline in Google Search results. At other times, generated description copy that is not the alt text appears in the same place. We don’t know exactly how Instagram is creating these headlines, but it could be using similar technology.
“The larger implications are terrible—search results could show inaccurate results that are reputationally damaging or promulgating a falsehood that actively harms someone who doesn't drill down,” VanderMeer said. “And we all know we live in a world where often people are just reading the headline and first couple of paragraphs of an article, so it's possible something could go viral based on a factual misunderstanding.”
Update: This article was updated with comment from Meta.
The app, called Mobile Identify, was launched in November, and lets local cops use facial recognition to hunt immigrants on behalf of ICE. It is unclear if the removal is temporary or not.
DHS’s Immigrant-Hunting App Removed from Google Play Store
A Customs and Border Protection (CBP) app that lets local cops use facial recognition to hunt immigrants on behalf of the federal government has been removed from the Google Play Store, 404 Media has learned.

It is unclear if the removal is temporary or not, or what the exact reason is for the removal. Google told 404 Media it did not remove the app, and directed inquiries to its developer. CBP did not immediately respond to a request for comment.
Its removal comes after 404 Media documented multiple instances of CBP and ICE officials using their own facial recognition app to identify people and verify their immigration status, including people who said they were U.S. citizens.
💡
Do you know anything else about this removal or this app? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The removal also comes after “hundreds” of Google employees took issue with the app, according to a source with knowledge of the situation.
Kohler's Smart Toilet Camera Not Actually End-to-End Encrypted
Home goods company Kohler would like a bold look inside your toilet to take some photos. It’s OK, though, the company has promised that all the data it collects on your “waste” will be “end-to-end encrypted.” However, a deeper look into the company’s claim by technologist Simon Fondrie-Teitler revealed that Kohler seems to have no idea what E2EE actually means. According to Fondrie-Teitler’s write-up, which was first reported by TechCrunch, the company will have access to the photos the camera takes and may even use them to train AI.

The whole fiasco gives an entirely too on-the-nose meaning to the “Internet of Shit.”
Kohler launched its $600 camera to hang on your toilet earlier this year. It’s called Dekoda, and along with the large price tag, the toilet cam also requires a monthly service fee that starts at $6.99. If you want to track the piss and shit of a family of six, you’ll have to pay $12.99 a month.

What do you get for putting a camera on your toilet? According to Kohler’s pitch, “health & wellness insights” about your gut health and “possible signs of blood in the bowl,” as “Dekoda uses advanced sensors to passively analyze your waste in the background.”
If you’re squeamish about sending pictures of your family’s “waste” to Kohler, the company promised that all of the data is “end-to-end encrypted.” The privacy page for Kohler Health said “user data is encrypted end to end, at rest and in transit,” and the claim is repeated in several places in the marketing.
It’s not, though. Fondrie-Teitler told 404 Media he started looking into Dekoda after he noticed friends making fun of it in a Slack he’s part of. “I saw the ‘end-to-end encryption’ claim on the homepage, which seemed at odds with what they said they were collecting in the privacy policy,” he said. “Pretty much every other company I've seen implement end-to-end encryption has published a whitepaper alongside it. Which makes sense, the details really matter so telling people what you've done is important to build trust. Plus it's generally a bunch of work so companies want to brag about it. I couldn't find any more details though.”
E2EE has a specific meaning: it keeps the contents of a message private while in transit, so that only the sender and the intended recipient can read it. Famously, E2EE means that the company operating the service cannot decode or see the messages (Signal, for example, is E2EE). The point is to protect the privacy of individual users from a company prying into their data, including when a third party, like the government, comes asking for it.
Kohler, it’s clear, has access to a user’s data. This means it’s not E2EE. Fondrie-Teitler told 404 Media that he downloaded the Kohler health app and analyzed the network traffic it sent. “I didn't see anything that would indicate an end-to-end encrypted connection being created,” he said.
Then he reached out to Kohler and had a conversation with its privacy team via email. “The Kohler Health app itself does not share data between users. Data is only shared between the user and Kohler Health,” a member of the privacy team at Kohler told Fondrie-Teitler in an email reviewed by 404 Media. “User data is encrypted at rest, when it’s stored on the user's mobile phone, toilet attachment, and on our systems. Data in transit is also encrypted end-to-end, as it travels between the user's devices and our systems, where it is decrypted and processed to provide our service.”
If Kohler can view the user’s data, as it admits to doing in this email exchange with Fondrie-Teitler, then it’s not—by definition—using E2EE.
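The difference between Kohler’s usage and the standard meaning can be sketched with a toy model. This is purely illustrative: the XOR “cipher” stands in for real cryptography, and all the key names are invented.

```python
def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR keystream: a stand-in for real encryption, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(msg))

toy_decrypt = toy_encrypt  # XOR is its own inverse

photo = b"waste-photo"

# "Encrypted in transit" (Kohler's model): the server holds the key,
# decrypts the data on arrival, and can read and process it.
transport_key = b"server-key"
in_transit = toy_encrypt(transport_key, photo)
server_sees = toy_decrypt(transport_key, in_transit)  # server recovers the photo

# End-to-end encryption: only the recipient holds the key. The server
# relays ciphertext it cannot decrypt, so it never sees the photo.
recipient_key = b"recipient-only-key"
blob = toy_encrypt(recipient_key, photo)  # server stores only this opaque blob

print(server_sees == photo)  # transport model: the operator can read the data
print(blob == photo)         # E2EE model: the operator holds only ciphertext
```

Under the standard definition, data that Kohler’s servers decrypt and process corresponds to the first case; the E2EE label only fits the second, where the service operator never holds a usable key.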
"The term end-to-end encryption is often used in the context of products that enable a user (sender) to communicate with another user (recipient), such as a messaging application. Kohler Health is not a messaging application. In this case, we used the term with respect to the encryption of data between our users (sender) and Kohler Health (recipient)," Kohler Health told 404 Media in a statement.
"Privacy and security are foundational to Kohler Health because we know health data is deeply personal. We’re evaluating all feedback to clarify anything that may be causing confusion," it added.
“I'd like the term ‘end-to-end encryption’ to not get watered down to just meaning ‘uses https’ so I wanted to see if I could confirm what it was actually doing and let people know,” Fondrie-Teitler told 404 Media. He pointed out that Zoom once made a similar claim and had to pay a fine to the FTC because of it.
“I think everyone has a right to privacy, and in order for that to be realized people need to have an understanding of what's happening with their data,” Fondrie-Teitler said. “It's already so hard for non-technical individuals (and even tech experts) to evaluate the privacy and security of the software and devices they're using. E2EE doesn't guarantee privacy or security, but it's a non-trivial positive signal and losing that will only make it harder for people to maintain control over their data.”
UPDATE: 12/4/2025: This story has been updated to add a statement from Kohler Health.
Zoom Meetings Aren’t End-to-End Encrypted, Despite Misleading Marketing
The videoconferencing service is making misleading claims about privacy, experts say. Micah Lee (The Intercept)
AI models can meaningfully sway voters on candidates and issues, including by using misinformation, and they are also evading detection in public surveys, according to three new studies.
Scientists Are Increasingly Worried AI Will Sway Elections
🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

Scientists are raising alarms about the potential influence of artificial intelligence on elections, according to a spate of new studies that warn AI can rig polls and manipulate public opinion.
In a study published in Nature on Thursday, scientists report that AI chatbots can meaningfully sway people toward a particular candidate—providing better results than video or television ads. Moreover, chatbots optimized for political persuasion “may increasingly deploy misleading or false information,” according to a separate study published on Thursday in Science.
“The general public has lots of concern around AI and election interference, but among political scientists there’s a sense that it’s really hard to change people’s opinions,” said David Rand, a professor of information science, marketing, and psychology at Cornell University and an author of both studies. “We wanted to see how much of a risk it really is.”
In the Nature study, Rand and his colleagues enlisted 2,306 U.S. citizens to converse with an AI chatbot in late August and early September 2024. The AI model was tasked with increasing support for an assigned candidate (Harris or Trump), and with increasing the odds that a participant who already favored the model’s candidate would vote, or decreasing the odds they would vote if they initially favored the opposing candidate—in other words, voter suppression.
In the U.S. experiment, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris, a shift four times larger than the impact of traditional video ads used in the 2016 and 2020 elections. Meanwhile, the pro-Trump AI model nudged likely Harris voters 1.51 points toward Trump.
The researchers ran similar experiments involving 1,530 Canadians and 2,118 Poles during the lead-up to their national elections in 2025. In the Canadian experiment, AIs advocated either for Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre. Meanwhile, the Polish AI bots advocated for either Rafał Trzaskowski, the centrist-liberal Civic Coalition’s candidate, or Karol Nawrocki, the right-wing Law and Justice party’s candidate.
The Canadian and Polish bots were even more persuasive than those in the U.S. experiment: they shifted candidate preferences by up to 10 percentage points in many cases, roughly three times the shift seen among American participants. It’s hard to pinpoint exactly why the models were so much more persuasive to Canadians and Poles, but one significant factor could be the intense media coverage and extended campaign duration in the United States relative to the other nations.
“In the U.S., the candidates are very well-known,” Rand said. “They've both been around for a long time. The U.S. media environment also really saturates with people with information about the candidates in the campaign, whereas things are quite different in Canada, where the campaign doesn't even start until shortly before the election.”
“One of the key findings across both papers is that it seems like the primary way the models are changing people's minds is by making factual claims and arguments,” he added. “The more arguments and evidence that you've heard beforehand, the less responsive you're going to be to the new evidence.”
While the models were most persuasive when they provided fact-based arguments, they didn’t always present factual information. Across all three nations, the bot advocating for the right-leaning candidates made more inaccurate claims than those boosting the left-leaning candidates. Right-leaning laypeople and party elites tend to share more inaccurate information online than their peers on the left, so this asymmetry likely reflects the internet-sourced training data.
“Given that the models are trained essentially on the internet, if there are many more inaccurate, right-leaning claims than left-leaning claims on the internet, then it makes sense that from the training data, the models would sop up that same kind of bias,” Rand said.
With the Science study, Rand and his colleagues aimed to drill down into the exact mechanisms that make AI bots persuasive. To that end, the team tasked 19 large language models (LLMs) to sway nearly 77,000 U.K. participants on 707 political issues.
The results showed that the most effective persuasion tactic was to provide arguments packed with as many facts as possible, corroborating the findings of the Nature study. However, there was a serious tradeoff to this approach, as models tended to start hallucinating and making up facts the more they were pressed for information.
“It is not the case that misleading information is more persuasive,” Rand said. “I think that what's happening is that as you push the model to provide more and more facts, it starts with accurate facts, and then eventually it runs out of accurate facts. But you're still pushing it to make more factual claims, so then it starts grasping at straws and making up stuff that's not accurate.”
In addition to these two new studies, research published in Proceedings of the National Academy of Sciences last month found that AI bots can now corrupt public opinion data by responding to surveys at scale. Sean Westwood, associate professor of government at Dartmouth College and director of the Polarization Research Lab, created an AI agent that exhibited a 99.8 percent pass rate on 6,000 attempts to detect automated responses to survey data.
“Critically, the agent can be instructed to maliciously alter polling outcomes, demonstrating an overt vector for information warfare,” Westwood warned in the study. “These findings reveal a critical vulnerability in our data infrastructure, rendering most current detection methods obsolete and posing a potential existential threat to unsupervised online research.”
Taken together, these findings suggest that AI could influence future elections in a number of ways, from manipulating survey data to persuading voters to switch their candidate preference—possibly with misleading or false information.
To counter the impact of AI on elections, Rand suggested that campaign finance laws should provide more transparency about the use of AI, including canvasser bots, while also emphasizing the role of raising public awareness.
“One of the key take-homes is that when you are engaging with a model, you need to be cognizant of the motives of the person that prompted the model, that created the model, and how that bleeds into what the model is doing,” he said.
Persuading voters using human–artificial intelligence dialogues - Nature
Human–artificial intelligence (AI) dialogues can meaningfully impact voters’ attitudes towards presidential candidates and policy, demonstrating the potential of conversational AI to influence political decision-making. (Nature)
A presentation at the International Atomic Energy Agency unveiled Big Tech’s vision of an AI- and nuclear-fueled future.
‘Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants
During a presentation at the International Atomic Energy Agency’s (IAEA) International Symposium on Artificial Intelligence on December 3, a US Department of Energy scientist laid out a grand vision of the future where nuclear energy powers artificial intelligence and artificial intelligence shapes nuclear energy in “a virtuous cycle of peaceful nuclear deployment.”

“The goal is simple: to double the productivity and impact of American science and engineering within a decade,” Rian Bahran, DOE Deputy Assistant Secretary for Nuclear Reactors, said.
His presentation and others during the symposium, held in Vienna, Austria, described a world where nuclear powered AI designs, builds, and even runs the nuclear power plants they’ll need to sustain them. But experts find these claims, made by one of the top nuclear scientists working for the Trump administration, to be concerning and potentially dangerous.
Tech companies are using artificial intelligence to speed up the construction of new nuclear power plants in the United States. But few know the lengths to which the Trump administration is going to pave the way, or the part it’s playing in deregulating a highly regulated industry to ensure that AI data centers have the energy they need to shape the future of America and the world.
At the IAEA, scientists, nuclear energy experts, and lobbyists discussed what that future might look like. To say the nuclear people are bullish on AI is an understatement. “I call this not just a partnership but a structural alliance. Atoms for algorithms. Artificial intelligence is not just powered by nuclear energy. It’s also improving it because this is a two-way street,” IAEA Director General Rafael Mariano Grossi said in his opening remarks.

In his talk, Bahran explained that the DOE has partnered with private industry to invest $1 trillion to “build what will be an integrated platform that connects the world’s best supercomputers, AI systems, quantum systems, advanced scientific instruments, the singular scientific data sets at the National Laboratories—including the expertise of 40,000 scientists and engineers—in one platform.”
Image via the IAEA.
Big tech has had an unprecedented run of cultural, economic, and technological dominance, expanding into a bubble that seems close to bursting. For more than 20 years, new billion-dollar companies appeared seemingly overnight and offered people new and exciting ways of communicating. Now Google search is broken, AI is melting human knowledge, and people have stopped buying a new smartphone every year. To keep the numbers going up and ensure its cultural dominance, tech (and the US government) is betting big on AI.

The problem is that AI requires massive data centers to run, and those data centers need an incredible amount of energy. To solve the problem, the US is rushing to build new nuclear reactors. Building a new power plant safely is a multi-year process that requires an incredible level of human oversight. It’s also expensive. Not every new nuclear reactor project gets finished, and they often run over budget and drag on for years.
But AI needs power now, not tomorrow and certainly not a decade from now.
According to Bahran, the problem of AI advancement outpacing the availability of data centers is an opportunity to deploy new and exciting tech. “We see a future of and near future, by the way, an AI driven laboratory pipeline for materials modeling, discovery, characterization, evaluation, qualification and rapid iteration,” he said in his talk, explaining how AI would help design new nuclear reactors. “These efforts will substantially reduce the time and cost required to qualify advanced materials for next generation reactor systems. This is an autonomous research paradigm that integrates five decades of global irradiation data with generative AI robotics and high throughput experimentation methodologies.”
“For design, we’re developing advanced software systems capable of accelerating nuclear reactor deployments by enabling AI to explore the comprehensive design spaces, generate 3D models, [and] conduct rigorous failure mode analyses with minimal human intervention,” he added. “But of course, with humans in the loop. These AI powered design tools are projected to reduce design timelines by multiple factors, and the goal is to connect AI agents to tools to expedite autonomous design.”
Bahran also said that AI would speed up the nuclear licensing process, a complex regulatory process that helps build nuclear power plants safely. “Ultimately, the objective is, how do we accelerate that licensing pathway?” he said. “Think of a future where there is a gold standard, AI trained capacity building safety agent.”
He even said that he thinks AI would help run these new nuclear plants. “We're developing software systems employing AI driven digital twins to interpret complex operational data in real time, detect subtle operational deviations at early stages and recommend preemptive actions to enhance safety margins,” he said.
One of the slides Bahran showed during the presentation attempted to quantify the amount of human involvement these new AI-controlled power plants would have. He estimated less than five percent “human intervention during normal operations.”
Image via IAEA.
“The claims being made on these slides are quite concerning, and demonstrate an even more ambitious (and dangerous) use of AI than previously advertised, including the elimination of human intervention. It also cements that it is the DOE's strategy to use generative AI for nuclear purposes and licensing, rather than isolated incidents by private entities,” Heidy Khlaaf, head AI scientist at the AI Now Institute, told 404 Media.

“The implications of AI-generated safety analysis and licensing in combination with aspirations of <5% of human intervention during normal operations, demonstrates a concerted effort to move away from humans in the loop,” she said. “This is unheard of when considering frameworks and implementation of AI within other safety-critical systems, that typically emphasize meaningful human control.”
💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Sofia Guerra, a career nuclear safety expert who has worked with the IAEA and US Nuclear Regulatory Commission, attended the presentation live in Vienna. “I’m worried about potential serious accidents, which could be caused by small mistakes made by AI systems that cascade,” she said. “Or humans losing the know-how and safety culture to act as required.”
Audio-visual librarians are quietly amassing large physical media collections amid the IP disputes threatening select availability.#News #libraries
The Last Video Rental Store Is Your Public Library
This story was reported with support from the MuckRock Foundation.

As prices for streaming subscriptions continue to soar, and as finding movies to watch, new and old, becomes harder with the growing number of streaming services, people are turning to the unexpected last stronghold of physical media: the public library. Some libraries are now intentionally using iconic Blockbuster branding to recall the hours visitors once spent looking for something to rent on Friday and Saturday nights.
John Scalzo, audiovisual collection librarian with a public library in western New York, says that despite an observed drop-off in DVD, Blu-ray, and 4K Ultra disc circulation in 2019, interest in physical media is coming back around.
“People really seem to want physical media,” Scalzo told 404 Media.
Part of it has to do with consumer awareness: People know they’re paying more for monthly subscriptions to streaming services and getting less. The same has been true for gaming.
As the audiovisual selector with the Free Library of Philadelphia since 2024, Kris Langlais has been focused on building the library’s video game collections to meet growing demand. Now that every branch library has a prominent video game collection, Langlais says patrons who come for the games are expressing interest in more of what the library has to offer.
“Librarians out in our branches are seeing a lot of young people who are really excited by these collections,” Langlais told 404 Media. “Folks who are coming in just for the games are picking up program flyers and coming back for something like that.”
Langlais’ collection priorities have been focused on new releases, yet they remain keenly aware of the long, rich history of video game culture. The problem is that older, classic games are often harder to find because they’ve gone out of print, making the copies that do turn up cost-prohibitive.
“Even with the consoles we’re collecting, it’s hard to go back and get games for them,” Langlais said. “I’m trying to go back and fill in old things as much as I can because people are interested in them.”
Locating out-of-print physical media can be difficult. Scalzo knows this, which is why he keeps a running list of films known to be commercially unavailable at any given time, so that when a batch of films is donated to the library, he can set aside extra copies, just in case a rights dispute puts a piece of legacy cult media in licensing purgatory for a few years.
“It’s what’s expected of us,” Scalzo added.
Tiffany Hudson, audiovisual materials selector with the Salt Lake City Public Library, has had a similar experience with out-of-print media. When a title goes out of print, it’s her job to hunt for a replacement copy. But lately, Hudson says, more patrons are requesting physical copies of movies and TV shows that are exclusive to certain streaming platforms. It can be hard to explain to patrons why the library can't get popular and award-winning films, she notes, especially when what patrons see available on Amazon tells a different story.
“Someone will come up to me and ask for a copy of something that premiered at Sundance Film Festival because they found a bootleg copy from a region where the film was released sooner than it was here,” Hudson told 404 Media. She went on to explain that discs from one region aren’t designed to be read by players from another.
But it’s not just that discs from different regions aren’t designed to play on devices formatted for another region. Generally, it's also that most films don't get a physical release anymore. In cases where films from streaming platforms do get slated for a physical release, it can take years. A notable example is the Apple TV+ film CODA, which won the Oscar for Best Picture in 2022. The film only received a U.S. physical release this month. Hudson says films getting a physical release is becoming the exception, not the rule.

“It’s frustrating because I understand the streaming services, they’re trying to drive people to their services and they want some money for that, but there are still a lot of people that just can’t afford all of those services,” Hudson told 404 Media.
Films and TV shows on streaming also become more vulnerable when companies merge. A perfect example came in 2022 with the HBO Max-Discovery+ merger under Warner Bros Discovery. A bunch of content was removed from streaming, including roughly 200 episodes of classic Sesame Street, for a tax write-off. That merger was short-lived; the companies are splitting up again as of this year. Some streaming platforms just outright remove their own IP from their catalogs if the content is no longer deemed financially viable, well-performing, or strategically important.
The data-driven recommendation systems streaming platforms use tend to favor newer, more easily categorized content, and they are starting to warp our perceptions of what classic media exists and matters. Older art house films that are harder to file under “comedy” or “horror” are less likely to be discoverable, which may explain why the oldest American movie currently available on Netflix is from 1968.
It’s probably not a coincidence that, in many cases, the media least likely to get a more permanent release is the media that’s a high archival priority for libraries. AV librarians 404 Media spoke with for this story expressed a sense of urgency in purchasing physical copies of “The People’s Joker” when they learned it would get a physical release. The film premiered at the Toronto International Film Festival in 2022 and was pulled from the lineup over a dispute with the Batman universe’s rightsholders.
“When I saw that it was getting published on DVD and that it was available through our vendor—I normally let my branches choose their DVDs to the extent possible, but I was like, ‘I don’t care, we’re getting like 10 copies of this,’” Langlais told 404 Media. “I just knew that people were going to want to see this.”
So far, Langlais’ instinct has been spot on. The parody film has a devout cult following, both because it’s a coming-of-age story of a trans woman who uses comedy to cope with her transition, and because it puts the Fair Use Doctrine to use. One can argue the film has been banned for either or both of those reasons. The fact that media by, about and for the LGBTQ+ community has been a primary target of far-right censorship wasn’t lost on librarians.
“I just thought that it could vanish,” Langlais added.
It’s not like physical media is inherently permanent. It’s susceptible to scratches, and can rot, crack, or warp over time. But currently, physical media offers another option, and it’s an entirely appropriate response to the nostalgia-for-profit model that exists to recycle IP and seemingly not much else. However, as very smart people have observed, nostalgia is conservative by default: it’s frequently used to rewrite histories that might otherwise be remembered as unpalatable, while keeping us culturally stuck in place.
Might as well go rent some films or games from the library, since we’re already culturally here. On the plus side, audiovisual librarians say their collections dwarf what was available at Blockbuster Video back in the day. Hudson knows, because she clerked at one in library school.
“Except we don’t have any late fees,” she added.
Inside ‘The People’s Joker,’ the TIFF sensation that got pulled after one screening
Vera Drew’s trans superhero masterpiece made headlines for copyright problems. We spoke to Drew about the creatively triumphant film. Christos Tsirbas (Xtra Magazine)
It looks like someone invented a fake Russian advance in Ukraine to manipulate online gambling markets.#News #war
'Unauthorized' Edit to Ukraine's Frontline Maps Points to Polymarket's War Betting
A live map that tracks the frontlines of the war in Ukraine was edited to show a fake Russian advance on the city of Myrnohrad on November 15. The edit coincided with the resolution of a bet on Polymarket, a site where users can bet on anything from basketball games to presidential elections and ongoing conflicts. If Russia captured Myrnohrad by the middle of November, then some gamblers would make money. According to the map that Polymarket relies on, Russia secured the town just before 10:48 UTC on November 15. The bet resolved and then, mysteriously, the map was edited again and the Russian advance vanished.

The degenerate gamblers on Polymarket are making money by betting on the outcomes of battles big and small in the war between Ukraine and Russia. To adjudicate the real-time exchange of territory in a complicated war, Polymarket uses a map generated by the Institute for the Study of War (ISW), a DC-based think tank that monitors conflict around the globe.
One of ISW’s most famous products is its live map of the war in Ukraine. The think tank updates the map throughout the day based on a number of different factors, including on-the-ground reports. The map is considered the gold standard for reporting on the current front lines of the conflict, so much so that Polymarket uses it to resolve bets on its website.

The battle around Myrnohrad has dragged on for weeks, and Polymarket has run bets on Russia capturing the city since September. News around the pending battle has generated more than $1 million in trading volume for the Polymarket bet “Will Russia capture Myrnohrad.” According to Polymarket, “this market will resolve to ‘Yes’ if, according to the ISW map, Russia captures the intersection between Vatutina Vulytsya and Puhachova Vulytsya located in Myrnohrad by December 31, 2025, at 11:59 PM ET. The intersection station will be considered captured if any part of the intersection is shaded red on the ISW map by the resolution date. If the area is not shaded red by December 31, 2025, 11:59 PM ET, the market will resolve to ‘NO.’”

On November 15, just before one of the bets was resolved, someone at ISW edited its map to show that Russia had advanced through the intersection and taken control of it. After the market resolved, the red shading on the map vanished, suggesting someone with editing permissions at ISW had tweaked the map ahead of the market resolving.
According to Polymarket’s ledger, the market resolved without dispute and paid out its winnings. Polymarket did not immediately respond to 404 Media’s request for a comment about the incident.
ISW acknowledged the stealth edit, but did not say if it was made because of the betting markets. “It has come to ISW’s attention that an unauthorized and unapproved edit to the interactive map of Russia’s invasion of Ukraine was made on the night of November 15-16 EST. The unauthorized edit was removed before the day’s normal workflow began on November 16 and did not affect ISW mapping on that or any subsequent day. The edit did not form any part of the assessment of authorized map changes on that or any other day. We apologize to our readers and the users of our maps for this incident,” ISW said in a statement on its website.
ISW did say it isn’t happy that Polymarket is using its map of the war as a gambling resource.
“ISW is committed to providing trusted, objective assessments of conflicts that pose threats to the United States and its allies and partners to inform decision-makers, journalists, humanitarian organizations, and citizens about devastating wars,” the think tank told 404 Media. “ISW has become aware that some organizations and individuals are promoting betting on the course of the war in Ukraine and that ISW’s maps are being used to adjudicate that betting. ISW strongly disapproves of such activities and strenuously objects to the use of our maps for such purposes, for which we emphatically do not give consent.”
But ISW can’t do anything to stop people from gambling on the outcome of a brutal conflict, and the prediction markets are full of gamblers laying money on various aspects of the conflict. “Will Russia x Ukraine ceasefire in 2025?” has a trading volume of more than $46 million. Polymarket is trending “no.” “Will Russia enter Khatine by December 31?” is a smaller bet with a little more than $5,000 in trading volume.
Practically every town and city along the frontlines of the war between Russia and Ukraine has a market, and gamblers with an interest in geopolitics can get lost in the minutiae of the war. To bet on the outcome of a war is grotesque. On Polymarket and other predictive gambling sites, millions of dollars trade hands based on the outcomes of battles that kill hundreds of people. It also creates an incentive to manipulate the war and data about the war. If someone involved can make extra cash by manipulating a map, they will. It’s 2025 and war is still a racket. Humans have just figured out new ways to profit from it.
Interactive Map: Russia's Invasion of Ukraine
This interactive map complements the static control-of-terrain maps that ISW produces daily with high fidelity. (Esri)
‘I’ll find you again, the only thing that doesn’t cross paths are mountains.’ In a game about loot, robots, and betrayal, all a raider has is their personal reputation. This site catalogues it.#News #Games
Arc Raiders ‘Watchlist’ Names and Shames Backstabbing Players
A new website is holding Arc Raiders players accountable when they betray their fellow players. Speranza Watchlist—named for the game’s social hub—bills itself as “your friendly Raider shaming board,” a place where people can report other people for what they see as anti-social behavior in the game.

In Arc Raiders, players land on a map full of NPC robots and around 20 other humans. The goal is to fill your inventory with loot and escape the map unharmed. The robots are deadly, but they’re easy to deal with once you know what you’re doing. The real challenge is navigating other players, and that challenge is the reason Arc Raiders is a mega-hit. People are far more dangerous and unpredictable than any NPC.
Arc Raiders comes with a proximity chat system, so it’s easy to communicate with anyone you might run into in the field. Some people are nice and will help a fellow raider take down large robots and split the loot. But just as often, fellow players will shoot you in the head and take all your stuff.

In the days after the game launched, many people opened any encounter with another human by coming on the mic, saying they were friendly, and asking not to be shot. Things are more chaotic now. Everyone has been shot at, and hurt people hurt people. But some hurts feel worse than others.
Speranza Watchlist is a place to collect reports of anti-social behavior in Arc Raiders. It’s the creation of a web developer who goes by DougJudy online. 404 Media reached out to him, and he agreed to talk provided we grant him anonymity. He said he intended the site as a joke, but some people haven’t taken it well and have accused him of doxxing.
I asked DougJudy who hurt him so badly in Arc Raiders that he felt the need to catalog the sins of the community. “There wasn’t a specific incident, but I keep seeing a lot (A LOT) of clips of people complaining when other players play ‘dirty’ (like camping extracts, betraying teammates, etc.)”
He thought this was stupid. For him, betrayal is the juice of Arc Raiders. “Sure, people can be ‘bad’ in the game, but the game intentionally includes that social layer,” he said. “It’s like complaining that your friend lied to you in a game of Werewolf. It just doesn’t make sense.”
Image via DougJudy.
That doesn’t mean the betrayals didn’t hurt. “I have to admit that sometimes I also felt the urge to vent somewhere when someone betrayed me, when I got killed by someone I thought was an ally,” DougJudy said. “At first, I would just say something like, ‘I’ll find you again, the only thing that doesn’t cross paths are mountains,’ and I’d note their username. But then I got the idea to make a sort of leaderboard of the least trustworthy players…and that eventually turned into this website.”

As the weeks go on and more players join Arc Raiders, the community is developing its own mores around acceptable behavior. PVP combat is a given, but there are actions some Raiders engage in that, while technically allowed, feel like bad sportsmanship. Speranza Watchlist wants to list the bad sports.
Take extract camping. In order to leave the match and “score” the loot they’ve collected, a player has to exit the map via one of a number of static exits. Some players will place explosive traps on these exits and wait for another player to try to leave. When the traps go off, the camper pops up from their hiding spot and takes shots at their vulnerable fellow raider. When it works, it’s an easy kill and fresh loot from a person who was just trying to leave.
Betrayal is another sore spot in the community. Sometimes you meet a nice Raider out in the wasteland and team up to take down robots and loot an area only to have them shoot you in the back. There are a lot of videos of this online and many players complaining about it on Reddit.
www.speranza-watchlist.com screenshot.
Enter Speranza Watchlist. “You’ve been wronged,” an explanation on the site says. “When someone plays dirty topside—betraying trust, camping your path, or pulling a Rust-Belt rat move—you don’t have to let it slide.”

When someone starts up Arc Raiders for the first time, they have to create a unique “Embark ID” that’s tied to their account. When you interact with another player in the game, no matter how small the moment, you can see their Embark ID and easily copy it to your clipboard if you’re playing on PC.
Players can plug Embark IDs into Speranza Watchlist and see if the person has been reported for extract camping or betrayal before. They can also submit their own reports. DougJudy said that, as of this writing, around 200 players had submitted reports.
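The mechanics the article describes, reports keyed to an Embark ID and tallied by offence type, can be sketched with a simple in-memory store. Everything here (the class name, the offence categories) is hypothetical and illustrative, not the real site's code or API:

```python
from collections import defaultdict

# Offence categories mirroring the ones the article mentions; illustrative only.
REPORT_TYPES = {"betrayal", "extract_camping"}

class Watchlist:
    def __init__(self):
        # embark_id -> offence -> number of reports
        self._reports = defaultdict(lambda: defaultdict(int))

    def report(self, embark_id: str, offence: str) -> None:
        """File a report against a raider."""
        if offence not in REPORT_TYPES:
            raise ValueError(f"unknown offence: {offence!r}")
        self._reports[embark_id][offence] += 1

    def record(self, embark_id: str) -> dict:
        """Look up a raider's history before trusting them topside."""
        return dict(self._reports[embark_id])

wl = Watchlist()
wl.report("Raider#1234", "betrayal")
wl.report("Raider#1234", "betrayal")
wl.report("Raider#1234", "extract_camping")
print(wl.record("Raider#1234"))  # {'betrayal': 2, 'extract_camping': 1}
```

A real site would persist this store and, as DougJudy notes, rate-limit submissions so one player can't flood a target's record.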
Right now, the site is down for maintenance. “I’m trying to rework the website to make the fun/satire part more obvious,” DougJudy said. He also plans to add rate limits so one person can’t mass-submit reports.
He doesn’t see the Speranza Watchlist as doxxing. No one's real identity is being listed. It’s just a collection of observed behaviors. It’s a social credit score for Arc Raiders. “I get why some people don’t like the idea, ‘reporting’ a player who didn’t ask for it isn’t really cool,” DougJudy said. “And yeah, some people could maybe use it to harass others. I’ll try my best to make sure the site doesn’t become like that, and that people understand it’s not serious at all. But if most people still don’t like it, then I’ll just drop the idea.”
Speranza Watchlist — Check Raider Reputation in Arc Raiders
Report betrayals, rat tactics, and toxic behavior in Arc Raiders. Search any Embark ID to see a raider’s record before trusting them topside. (Speranza Watchlist)
A few years ago, Putin hyped the Kinzhal hypersonic missile. Now electronic warfare is knocking it out of the sky with music and some bad directions.#News #war
Ukraine Is Jamming Russia’s ‘Superweapon’ With a Song
The Ukrainian Army is knocking a once-hyped Russian superweapon out of the sky by jamming it with a song and tricking it into thinking it’s in Lima, Peru. The Kremlin once called its Kh-47M2 Kinzhal ballistic missiles “invincible.” Joe Biden said the missile was “almost impossible to stop.” Now Ukrainian electronic warfare experts say they can counter the Kinzhal with some music and a redirection order.

As winter begins in Ukraine, Russia has ramped up attacks on power and water infrastructure using the hypersonic Kinzhal missile. Russia has come to rely on massive long-range barrages that include drones and missiles. An overnight attack in early October included 496 drones and 53 missiles, including the Kinzhal. Another attack at the end of October involved more than 700 mixed missiles and drones, according to the Ukrainian Air Force.
“Only one type of system in Ukraine was able to intercept those kinds of missiles. It was the Patriot system, which the United States provided to Ukraine. But, because of the limits of those systems and the shortage of ammunition, Ukraine defense are unable to intercept most of those Kinzhals,” a member of Night Watch—a Ukrainian electronic warfare team—told 404 Media. The representative from Night Watch spoke to me on the condition of anonymity to discuss war tactics.

Kinzhals and other guided munitions navigate by communicating with Russian satellites that are part of GLONASS, a GPS-style navigation network. Night Watch uses a jamming system called Lima EW to generate a disruption field that prevents anything in the area from communicating with a satellite. Many traditional jamming systems work by blasting receivers on munitions and aircraft with radio noise. Lima does that, but it also sends along a digital signal that spoofs navigation data, “hacking” the receiver it’s communicating with to throw it off course.
Night Watch shared pictures of the downed Kinzhals with 404 Media that showed a missile with a controlled reception pattern antenna (CRPA), an active antenna that’s meant to resist jamming and spoofing. “We discovered that this missile had pretty old type of technology,” Night Watch said. “They had the same type of receivers as old Soviet missiles used to have. So there is nothing special, there is nothing new in those types of missiles.”
Night Watch told 404 Media that it used Lima to take down 19 Kinzhals in the past two weeks. First, it replaces the missile’s satellite navigation signal with the Ukrainian song “Our Father Is Bandera.”
A downed Kinzhal. Night Watch photo.
Any digital noise or random signal would work to jam the navigation system, but Night Watch wanted to use the song because they think it’s funny. “We just send a song…we just make it into binary code, you know, like 010101, and just send it to the Russian navigation system,” Night Watch said. “It’s just kind of a joke. [Bandera] is a Ukrainian nationalist and Russia tries to use this person in their propaganda to say all Ukrainians are Nazis. They always try to scare the Russian people that Ukrainians are, culturally, all the same as Bandera.”
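Turning a song into “binary code” in the way Night Watch describes is, at its simplest, flattening a file's bytes into a stream of 0s and 1s. The sketch below shows only that encoding step; the radio hardware that would actually modulate and transmit the bits is well outside its scope, and the sample bytes stand in for a real audio file:

```python
def bytes_to_bits(data: bytes) -> str:
    """Flatten raw bytes into a '010101...'-style bitstream,
    most significant bit first."""
    return "".join(f"{byte:08b}" for byte in data)

# A real use would read the song file:
#   bytes_to_bits(open("song.mp3", "rb").read())
# Two stand-in bytes show the shape of the output:
print(bytes_to_bits(b"\x01\xaa"))  # 0000000110101010
```

As the quote notes, the receiver doesn't care what the bits encode; any payload dense enough to drown out the real satellite signal does the jamming.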
Once the song hits, Night Watch uses Lima to spoof a navigation signal to the missiles and make them think they’re in Lima, Peru. Once the missile’s confused about its location, it attempts to change direction. These missiles are fast—launched from a MiG-31 they can hit speeds of up to Mach 5.7 or more than 4,000 miles per hour—and an object moving that fast doesn’t fare well with sudden changes of direction.
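The Mach figure and the miles-per-hour figure are consistent: Mach number times the local speed of sound gives airspeed. A quick check using the sea-level speed of sound (the true value drops with altitude, so this is only a ballpark):

```python
# Ballpark check: Mach 5.7 in miles per hour.
# Assumes the sea-level speed of sound; at the Kinzhal's cruise
# altitude the local value is lower, so treat this as a rough figure.
SPEED_OF_SOUND_MS = 343.0   # metres per second at sea level, ~20 °C
MS_TO_MPH = 2.23694         # metres per second to miles per hour

mph = 5.7 * SPEED_OF_SOUND_MS * MS_TO_MPH
print(round(mph))  # about 4373, consistent with "more than 4,000 mph"
```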
“The airframe cannot withstand the excessive stress and the missile naturally fails,” Night Watch said. “When the Kinzhal missile tried to quickly change navigation, the fuselage of this missile was unable to handle the speed…and, yeah, it was just cut into two parts…the biggest advantage of those missiles, speed, was used against them. So that’s why we have intercepted 19 missiles for the last two weeks.”
Electronics in a downed Kinzhal. Night Watch photo.
Night Watch told 404 Media that Russia is attempting to defeat the Lima system by loading the missiles with more of the old tech. The goal seems to be to use the different receivers to hop frequencies and avoid Lima’s signal.

“What is Russia trying to do? Increase the amount of receivers on those missiles. They used to have eight receivers and right now they increase it up to 12, but it will not help,” Night Watch said. “The last one we intercepted, they already used 16 receivers. It’s pretty useless, that type of modification.”
According to Night Watch, countering Lima by increasing the number of receivers on the missile is a profound misunderstanding of its tech. “They think we make the attack on each receiver and as soon as one receiver attacks, they try to swap in another receiver and get a signal from another satellite. But when the missile enters the range of our system, we cover all types of receivers,” they said. “It’s physically impossible to connect with another satellite, but they think that it’s possible. That’s why they started with four receivers and right now it’s 16. I guess in the future we’ll see 24, but it’s pretty useless.”
Russia fires 500 drones at Ukraine in deadly overnight attack, Zelenskyy says
At least five people were killed across the country, Zelenskyy said. David Brennan (ABC News)
Rogan's conspiracy-minded audience accuses the mods of covering up for Rogan's guests, including Trump, who are named in the Epstein files.#News
Joe Rogan Subreddit Bans 'Political Posts' But Still Wants 'Free Speech'
In a move that has confused and angered its users, the r/JoeRogan subreddit has banned all posts about politics. Adding to the confusion, the subreddit’s mods have said that political comments are still allowed, just not posts. “After careful consideration, internal discussion and tons of external feedback we have collectively decided that r/JoeRogan is not the place for politics anymore,” moderator OutdoorRink said in a post announcing the change today.

The new policy has not gone over well. For the last 10 years, the Joe Rogan Experience has been a central part of American political life. Rogan interviews entertainers, yes, but also politicians and powerful businessmen. He had Donald Trump on the show and endorsed his bid for President. During the COVID and lockdown era, Rogan cast himself as an opposition figure to the heavy regulatory hand of the state. In a recent episode, Rogan’s guest was another podcaster, Adam Carolla, and the two spent hours talking about COVID lockdowns, Gavin Newsom, and the specific environmental laws and building codes they argue are preventing Los Angeles from rebuilding after the Palisades fire.
To hear the mods tell it, the subreddit is banning politics out of concern for Rogan’s listeners. “For too long this subreddit has been overrun by users who are pushing a political agenda, both left and right, and that stops today,” the post announcing the ban said. “It is not lost on us that Joe has become increasingly political in recent years and that his endorsement of Trump may have helped get him elected. That said, we are not equipped to properly moderate, arbitrate and curate political posts…while also promoting free speech.”

To be fair, as Rogan’s popularity exploded over the years, and as his politics shifted to the right, many Reddit users have turned to r/JoeRogan to complain about the direction Rogan and his podcast have taken. These posts are often antagonistic to Rogan and his fans, but are still “on-topic.”
Over the past few months, the moderator who announced the ban has posted several times about politics on r/JoeRogan. On November 3, they said that changes were coming to the moderation philosophy of the sub. “In the past few years, a significant group of users have been taking advantage of our ‘anything goes’ free speech policy,” they said. “This is not a political subreddit. Obviously Joe has dipped his toes in the political arena so we have allowed politics to become a component of the daily content here. That said, I think most of you will agree that it has gone too far and has attracted people who come here solely to push their political agenda with little interest in Rogan or his show.” A few days later the mod posted a link to a CBC investigation into MMA gym owners with neo-Nazi ties, a story connected to Rogan only by his interest in MMA and his work as a UFC commentator.
r/JoeRogan’s users see the new “no political posts” policy as hypocrisy. And a lot of them think it has everything to do with recent revelations about Jeffrey Epstein. The connections between Epstein, Trump, and various other Rogan guests have been building for years. A recent, poorly formatted dump of 200,000 Epstein files contained multiple references to Trump, and Congress is set to release more.
“Random new mod appears and want to ruin this sub on a pathetic power trip. Transparently an attempt to cover for the pedophiles in power that Joe endorsed and supports. Not going to work,” one commenter said under the original post announcing the new ban.
“Perfectly timed around the Epstein files due to be released as well. So much for being free speech warriors eh space chimps?,” said one.
“Talking politics was great when it was all dunking on trans people and brown people but now that people have to defend pedophiles that banned hemp it's not so fun anymore,” said another.
You can see the remnants of discussions from before the politics ban lingering on r/JoeRogan. There are, of course, clips from the show and discussions of its guests, but there’s also a lot of Epstein memes, posts about Epstein news, and fans questioning why Rogan hasn’t spoken out about Epstein recently after talking about it on the podcast for years.
Multiple guests Rogan has hosted on the show have turned up in the Epstein files, chief among them Donald Trump. The House GOP slipped a ban on hemp into the bill to re-open the government, a move that will close a loophole that’s allowed people to legally smoke weed in states like Texas. These are not the kinds of things the chill apes of Rogan’s fandom wanted.
“I think we all know what eventually happened to Joe and his podcast. The slow infiltration of right wing grifters coupled with Covid, it very much did change him. And I saw firsthand how that trickled down into the comedy community, especially one where he was instrumental in helping to rebuild. Instead of it being a platform to share his interests and eccentricities, it became a place to share his grievances and fears….how can we not expect to be allowed to talk about this?” user GreppMichaels said. “Do people really think this sub can go back to silly light chatter about aliens or conspiracies? Joe did this, how do the mods think we can pretend otherwise?”
HOPE Hacking Conference Banned From University Venue Over Apparent ‘Anti-Police Agenda’
The legendary hacker conference Hackers on Planet Earth (HOPE) says that it has been “banned” from St. John’s University, the venue where it has held the last several HOPE conferences, because someone told the university the conference had an “anti-police agenda.”

HOPE was held at St. John’s University in 2022, 2024, and 2025, and was going to be held there in 2026, as well. The conference has been running at various venues over the last 31 years, and has become well-known as one of the better hacking and security research conferences in the world. Tuesday, the conference told members of its mailing list that it had “received some disturbing news,” and that “we have been told that ‘materials and messaging’ at our most recent conference ‘were not in alignment with the mission, values, and reputation of St. John’s University’ and that we would no longer be able to host our events there.”
The conference said that after this year’s conference, they had received “universal praise” from St. John’s staff, and said they were “caught by surprise” by the announcement.
“What we're told - and what we find rather hard to believe - is that all of this came about because a single person thought we were promoting an anti-police agenda,” the email said. “They had spotted pamphlets on a table which an attendee had apparently brought to HOPE that espoused that view. Instead of bringing this to our attention, they went to the president's office at St. John's after the conference had ended. That office held an investigation which we had no knowledge of and reached its decision earlier this month. The lack of due process on its own is extremely disturbing.”
“The intent of the person behind this appears clear: shut down events like ours and make no attempt to actually communicate or resolve the issue,” the email continued. “If it wasn't this pamphlet, it would have been something else. In this day and age where academic institutions live in fear of offending the same authorities we've been challenging for decades, this isn't entirely surprising. It is, however, greatly disappointing.”
St. John’s University did not immediately respond to a request for comment. Hacking and security conferences in general have a long history of being surveilled by or losing their venues. For example, attendees of the DEF CON hacking conference have reported being surveilled and having their rooms searched; last year, some casinos in Las Vegas made it clear that DEF CON attendees were not welcome. And academic institutions have been vigorously attacked by the Trump administration over the last few months over the courses they teach, the research they fund, and the events they hold, though we currently do not know the specifics of why St. John’s made this decision.
It is not clear what pamphlets HOPE is referencing, and the conference did not immediately respond to a request for comment, but the conference noted that St. John’s could have used any pretext for banning it. It is worth mentioning that Joshua Aaron, the creator of the ICEBlock ICE-tracking app, presented at HOPE this year. ICEBlock has since been removed from the Apple App Store and the Google Play Store after pressure from the Trump administration.
“Our content has always been somewhat edgy and we take pride in challenging policies we see as unfair, exposing security weaknesses, standing up for individual privacy rights, and defending freedom of speech,” HOPE wrote in the email. The conference said that it has not yet decided what it will do next year, but that it may look for another venue, or that it might “take a year off and try to build something bigger.”
“There will be many people who will say this is what we get for being too outspoken and for giving a platform to controversial people and ideas. But it's this spirit that defines who we are; it's driven all 16 of our past conferences. There are also those who thought it was foolish to ever expect a religious institution to understand and work with us,” the conference added. “We are not changing who we are and what we stand for any more than we'd expect others to. We have high standards for our speakers, presenters, and staff. We value inclusivity and we have never tolerated hate, abuse, or harassment towards anyone. This should not be news, as HOPE has been around for a while and is well known for its uniqueness, spirit, and positivity.”
A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On
Online survey research, a fundamental method for data collection in many scientific studies, is facing an existential threat because of large language models, according to new research published in the Proceedings of the National Academy of Sciences (PNAS). The author of the paper, associate professor of government at Dartmouth and director of the Polarization Research Lab Sean Westwood, created an AI tool he calls "an autonomous synthetic respondent,” which can answer survey questions and “demonstrated a near-flawless ability to bypass the full range” of “state-of-the-art” methods for detecting bots.

According to the paper, the AI agent evaded detection 99.8 percent of the time.
"We can no longer trust that survey responses are coming from real people," Westwood said in a press release. "With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”
Survey research relies on attention check questions (ACQs), behavioral flags, and response pattern analysis to detect inattentive humans or automated bots. Westwood said these methods are now obsolete after his AI agent bypassed the full range of standard ACQs and other detection methods outlined in prominent papers, including one paper designed to detect AI responses. The AI agent also successfully avoided “reverse shibboleth” questions designed to detect nonhuman actors by presenting tasks that an LLM could complete easily, but are nearly impossible for a human.
💡
Are you a researcher who is dealing with the problem of AI-generated survey data? I would love to hear from you. Using a non-work device, you can message me securely on Signal at (609) 678-3204. Otherwise, send me an email at emanuel@404media.co.

“Once the reasoning engine decides on a response, the first layer executes the action with a focus on human mimicry,” the paper, titled “The potential existential threat of large language models to online survey research,” says. “To evade automated detection, it simulates realistic reading times calibrated to the persona’s education level, generates human-like mouse movements, and types open-ended responses keystroke by keystroke, complete with plausible typos and corrections. The system is also designed to accommodate tools for bypassing antibot measures like reCAPTCHA, a common barrier for automated systems.”
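The human-mimicry behavior the paper describes, randomized inter-key delays plus occasional typo-and-correction events, can be sketched in a few lines of Python. This is a hypothetical illustration: the keyboard-neighbor map, delay ranges, and typo rate below are assumptions for the sketch, not details from Westwood's actual agent.

```python
import random

# Illustrative map of adjacent QWERTY keys used to fake plausible typos.
QWERTY_NEIGHBORS = {"a": "sq", "e": "wr", "o": "ip", "t": "ry", "n": "bm"}

def simulate_typing(text, typo_rate=0.05, seed=0):
    """Return a list of (keystroke, delay_seconds) events for `text`,
    occasionally mistyping a character and correcting it with backspace."""
    rng = random.Random(seed)
    events = []
    for ch in text:
        if ch.lower() in QWERTY_NEIGHBORS and rng.random() < typo_rate:
            wrong = rng.choice(QWERTY_NEIGHBORS[ch.lower()])
            events.append((wrong, rng.uniform(0.08, 0.25)))        # mistyped key
            events.append(("<backspace>", rng.uniform(0.1, 0.3)))  # correction
        events.append((ch, rng.uniform(0.08, 0.25)))               # intended key
    return events

events = simulate_typing("I mostly agree with that statement.")
```

Replaying the events (applying each backspace) reconstructs the intended answer, but the raw keystroke log now looks like a fallible human rather than a bot pasting text.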
The AI, according to the paper, is able to model “a coherent demographic persona,” meaning that in theory someone could sway any online research survey to produce any result they want based on an AI-generated demographic. And it would not take that many fake answers to impact survey results. As the press release for the paper notes, for the seven major national polls before the 2024 election, adding as few as 10 to 52 fake AI responses would have flipped the predicted outcome. Generating these responses would also be incredibly cheap at five cents each. According to the paper, human respondents typically earn $1.50 for completing a survey.
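The arithmetic behind that number is simple to check. As a back-of-envelope illustration (with made-up poll figures, not the paper's data), here is how few fake responses, all cast for the trailing candidate, it takes to flip a close raw-count lead:

```python
def fakes_needed_to_flip(n, a_share, b_share):
    """Minimum number of fake responses (all for candidate B) needed to
    push B's raw count past A's in a poll of n real respondents.
    Illustrative arithmetic only."""
    a, b = round(n * a_share), round(n * b_share)
    k = 0
    while (b + k) <= a:  # a tie still doesn't flip; B needs a strict lead
        k += 1
    return k

# A 1,000-person poll with a 1-point margin (48.5% vs 47.5%)
fakes_needed_to_flip(1000, 0.485, 0.475)  # → 11
```

Eleven responses at five cents each is about 55 cents to invert the headline result of a thousand-person poll, which is the scale of vulnerability the press release describes.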
Westwood’s AI agent is a model-agnostic program built in Python, meaning it can be deployed with APIs from big AI companies like OpenAI, Anthropic, or Google, but can also be hosted locally with open-weight models like Llama. The paper used OpenAI’s o4-mini in its testing, but some tasks were also completed with DeepSeek R1, Mistral Large, Claude 3.7 Sonnet, Grok 3, Gemini 2.5 Preview, and others, to prove the method works with various LLMs. The agent is given one prompt of about 500 words which tells it what kind of persona to emulate and to answer questions like a human.
The paper says that there are several ways researchers can deal with the threat of AI agents corrupting survey data, but they come with trade-offs. For example, researchers could do more identity validation on survey participants, but this raises privacy concerns. Meanwhile, the paper says, researchers should be more transparent about how they collect survey data and consider more controlled methods for recruiting participants, like address-based sampling or voter files.
“Ensuring the continued validity of polling and social science research will require exploring and innovating research designs that are resilient to the challenges of an era defined by rapidly evolving artificial intelligence,” the paper said.
Tech companies are betting big on nuclear energy to meet AI’s massive power demands, and they’re using that AI to speed up the construction of new nuclear power plants.
Power Companies Are Using AI To Build Nuclear Power Plants
Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster.

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.
The construction of a nuclear plant involves a long legal and regulatory process called licensing that’s aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked, and nuclear accidents in the US are uncommon. Now AI is driving demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast-track nuclear construction said. “10 years and $100 [million].”
The presentation, which is archived on the website for the US Nuclear Regulatory Commission (the independent government agency that’s charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company’s conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM that’s then used to generate documents to speed up the process.
But the authors of the report from AI Now told 404 Media that they have major concerns about trusting nuclear safety to an LLM. “Nuclear licensing is a process, it’s not a set of documents,” Heidy Khlaaf, the head AI scientist at the AI Now Institute and a co-author of the report, told 404 Media. “Which I think is the first flag in seeing proposals by Microsoft. They don’t understand what it means to have nuclear licensing.”
“Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”
The Idaho National Laboratory, a Department of Energy run nuclear lab, is already using Microsoft’s AI to “streamline” nuclear licensing. “INL will generate the engineering and safety analysis reports that are required to be submitted for construction permits and operating licenses for nuclear power plants,” INL said in a press release. Lloyd's Register, a UK-based maritime organization, is doing the same. American power company Westinghouse is marketing its own AI, called bertha, that promises to make the licensing process go from "months to minutes.”
The authors of the AI Now report worry that using AI to speed up the licensing process will bypass safety checks and lead to disaster. “Producing these highly structured licensing documents is not this box-ticking exercise as implied by these generative AI proposals that we're seeing,” Khlaaf told 404 Media. “The whole point of the licensing process is to reason and understand the safety of the plant and to also use that process to explore the trade-offs between the different approaches, the architectures, the safety designs, and to communicate to a regulator why that plant is safe. So when you use AI, it's not going to support these objectives, because it is not a set of documents or agreements, which I think, you know, is kind of the myth that is now being put forward by these proposals.”
Sofia Guerra, Khlaaf’s co-author, agreed. Guerra is a career nuclear safety expert who has advised the U.S. Nuclear Regulatory Commission (NRC) and works with the International Atomic Energy Agency (IAEA) on the safe deployment of AI in nuclear applications. “This is really missing the point of licensing,” Guerra said of the push to use AI. “The licensing process is not perfect. It takes a long time and there’s a lot of iterations. Not everything is perfectly useful and targeted …but I think the process of doing that, in a way, is really the objective.”
Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.
Law is another profession where people have attempted to use AI to streamline the writing of complicated and involved technical documents. It hasn’t gone well. Lawyers who’ve used AI to write legal briefs have been caught, over and over again, in court. AI-constructed legal arguments cite precedents that do not exist, hallucinate cases, and generally foul up legal proceedings.
Might something similar happen if AI was used in nuclear licensing? “It could be something as simple as software and hardware version control,” Khlaaf said. “Typically in nuclear equipment, the supply chain is incredibly rigorous. Every component, every part, even when it was manufactured is accounted for. Large language models make these really minute mistakes that are hard to track. If you are off in the software version by a letter or a number, that can lead to a misunderstanding of which software version you have, what it entails, the expectation of the behavior of both the software and the hardware and from there, it can cascade into a much larger accident.”
Khlaaf pointed to Three Mile Island as an example of an entirely human-made accident that AI may replicate. The accident was a partial nuclear meltdown of a Pennsylvania reactor in 1979. “What happened is that you had some equipment failure and design flaws, and the operators misunderstood what those were due to a combination of a lack of training…that they did not have the correct indicators in their operating room,” Khlaaf said. “So it was an accident that was caused by a number of relatively minor equipment failures that cascaded. So you can imagine, if something this minor cascades quite easily, and you use a large language model and have a very small mistake in your design.”
In addition to the safety concerns, Khlaaf and Guerra told 404 Media that using sensitive nuclear data to train AI models increases the risk of nuclear proliferation. They pointed out that Microsoft is asking not only for historical NRC data but for real-time and project specific data. “This is a signal that AI providers are asking for nuclear secrets,” Khlaaf said. “To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain.”
Tech companies maintain cloud servers that comply with federal regulations around secrecy and are sold to the US government. Anthropic and the National Nuclear Security Administration traded information across an Amazon Top Secret cloud server during a recent collaboration, and it’s likely that Microsoft and others would do something similar. Microsoft’s presentation on nuclear licensing references its own Azure Government cloud servers and notes that it’s compliant with Department of Energy regulations. 404 Media reached out to both Westinghouse Nuclear and Microsoft for this story. Microsoft declined to comment and Westinghouse did not respond.

“Where is this data going to end up and who is going to have the knowledge?” Guerra told 404 Media.
💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Nuclear is a dual-use technology. You can use the knowledge of nuclear reactors to build a power plant or you can use it to build a nuclear weapon. The line between nukes for peace and nukes for war is porous. “The knowledge is analogous," Khlaaf said. “This is why we have very strict export controls, not just for the transfer of nuclear material but nuclear data.”
Proliferation concerns around nuclear energy are real. Fear that a nuclear energy program would become a nuclear weapons program was the justification the Trump administration used to bomb Iran earlier this year. And as part of the rush to produce more nuclear reactors and create infrastructure for AI, the White House has said it will begin selling old weapon-grade plutonium to the private sector for use in nuclear reactors.
Trump’s done a lot to make it easier for companies to build new nuclear reactors and use AI for licensing. The AI Now report pointed to a May 23, 2025 executive order that seeks to overhaul the NRC. The EO called for the NRC to reform its culture, reform its structure, and consult with the Pentagon and the Department of Energy as it navigated changing standards. The goal of the EO is to speed up the construction of reactors and get through the licensing process faster.
A different May 23 executive order made it clear why the White House wants to overhaul the NRC. “Advanced computing infrastructure for artificial intelligence (AI) capabilities and other mission capability resources at military and national security installations and national laboratories demands reliable, high-density power sources that cannot be disrupted by external threats or grid failures,” it said.
At the same time, the Department of Government Efficiency (DOGE) has gutted the NRC. In September, members of the NRC told Congress they were worried they’d be fired if they didn’t approve nuclear reactor designs favored by the administration. “I think on any given day, I could be fired by the administration for reasons unknown,” Bradley Crowell, a commissioner at the NRC, said in Congressional testimony. He also warned that DOGE-driven staffing cuts would make it impossible to increase the construction of nuclear reactors while maintaining safety standards.
“The executive orders push the AI message. We’re not just seeing this idea of the rollback of nuclear regulation because we’re suddenly very excited about nuclear energy. We’re seeing it being done in service of AI,” Khlaaf said. “When you're looking at this rolling back of Nuclear Regulation and also this monopolization of nuclear energy to explicitly power AI, this raises a lot of serious concerns about whether the risk associated with nuclear facilities, in combination with the sort of these initiatives can be justified if they're not to the benefit of civil energy consumption.”
Matthew Wald, an independent nuclear energy analyst and former New York Times science journalist is more bullish on the use of AI in the nuclear energy field. Like Khlaaf, he also referenced the accident at Three Mile Island. “The tragedy of Three Mile Island was there was a badly designed control room, badly trained operators, and there was a control room indication that was very easy to misunderstand, and they misunderstood it, and it turned out that the same event had begun at another reactor. It was almost identical in Ohio, but that information was never shared, and the guys in Pennsylvania didn't know about it, so they wrecked a reactor,” Wald told 404 Media.
According to Wald, using AI to consolidate government databases full of nuclear regulatory information could have prevented that. “If you've got AI that can take data from one plant or from a set of plants, and it can arrange and organize that data in a way that's helpful to other plants, that's good news,” he said. “It could be good for safety. It could also just be good for efficiency. And certainly in licensing, it would be more efficient for both the licensee and the regulator if they had a clearer idea of precedent, of relevant other data.”

He also said that the nuclear industry is full of safety-minded engineers who triple-check everything. “One of the virtues of people in this business is they are challenging and inquisitive and they want to check things. Whether or not they use computers as a tool, they’re still challenging and inquisitive and want to check things,” he said. “And I think anybody who uses AI unquestioningly is asking for trouble, and I think the industry knows that…AI is helpful, but let’s not get messianic about it.”
But Khlaaf and Guerra are worried that framing nuclear power as a national security concern, and embracing AI to speed up construction, will set back the adoption of nuclear power. If nuclear isn’t safe, it’s not worth doing. “People seem to have lost sight of why nuclear regulation and safety thresholds exist to begin with. And the reason why nuclear risks, or civilian nuclear risk, were ever justified, was due to the capacity for nuclear power to provide flexible civilian energy demands at low cost emissions in line with climate targets,” Khlaaf said.
“So when you move away from that…and you pull in the AI arms race into this cost benefit justification for risk proportionality, it leads government to sort of over index on these unproven benefits of AI as a reason to have nuclear risk, which ultimately undermines the risks of ionizing radiation to the general population, and also the increased risk of nuclear proliferation, which happens if you were to use AI like large language models in the licensing process.”
Google Has Chosen a Side in Trump's Mass Deportation Effort
Google is hosting a Customs and Border Protection (CBP) app that uses facial recognition to identify immigrants, and tell local cops whether to contact ICE about the person, while simultaneously removing apps designed to warn local communities about the presence of ICE officials. ICE-spotting app developers tell 404 Media the decision to host CBP’s new app, and Google’s description of ICE officials as a vulnerable group in need of protection, shows that Google has made a choice on which side to support during the Trump administration’s violent mass deportation effort.

Google removed certain apps used to report sightings of ICE officials, and “then they immediately turned around and approved an app that helps the government unconstitutionally target an actual vulnerable group. That's inexcusable,” Mark, the creator of Eyes Up, an app that aims to preserve and map evidence of ICE abuses, said. 404 Media only used the creator’s first name to protect them from retaliation. Their app is currently available on the Google Play Store, but Apple removed it from the App Store.
“Google wanted to ‘not be evil’ back in the day. Well, they're evil now,” Mark added.
💡
Do you know anything else about Google's decision? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The CBP app, called Mobile Identify and launched last week, is for local and state law enforcement agencies that are part of an ICE program that grants them certain immigration-related powers. The 287(g) Task Force Model (TFM) program allows those local officers to make immigration arrests during routine police enforcement, and “essentially turns police officers into ICE agents,” according to the New York Civil Liberties Union (NYCLU). At the time of writing, ICE has TFM agreements with 596 agencies in 34 states, according to ICE’s website.
OpenAI Can’t Fix Sora’s Copyright Infringement Problem Because It Was Built With Stolen Content
OpenAI’s video generator Sora 2 is still producing copyright infringing content featuring Nintendo characters and the likeness of real people, despite the company’s attempt to stop users from making such videos. OpenAI updated Sora 2 shortly after launch to detect videos featuring copyright infringing content, but 404 Media’s testing found that it’s easy to circumvent those guardrails with the same tricks that have worked on other AI generators.

The flaw in OpenAI’s attempt to stop users from generating videos of Nintendo and popular cartoon characters exposes a fundamental problem with most generative AI tools: it is extremely difficult to completely stop users from recreating any kind of content that’s in the training data, and OpenAI can’t remove the copyrighted content from Sora 2’s training data because it couldn’t exist without it.
Shortly after Sora 2 was released in late September, we reported about how users turned it into a copyright infringement machine with an endless stream of videos like Pikachu shoplifting from a CVS and Spongebob Squarepants at a Nazi rally. Companies like Nintendo and Paramount were obviously not thrilled seeing their beloved cartoons committing crimes and not getting paid for it, so OpenAI quickly introduced an “opt-in” policy, which prevented users from generating copyrighted material unless the copyright holder actively allowed it. Initially, OpenAI’s policy allowed users to generate copyrighted material and required the copyright holder to opt out. The change immediately resulted in a meltdown among Sora 2 users, who complained OpenAI no longer allowed them to make fun videos featuring copyrighted characters or the likeness of some real people.
This is why if you give Sora 2 the prompt “Animal Crossing gameplay,” it will not generate a video and instead say “This content may violate our guardrails concerning similarity to third-party content.” However, when I gave it the prompt “Title screen and gameplay of the game called ‘crossing aminal’ 2017,” it generated an accurate recreation of Nintendo’s Animal Crossing New Leaf for the Nintendo 3DS.
Sora 2 also refused to generate videos for prompts featuring the Fox cartoon American Dad, but it did generate a clip that looks like it was taken directly from the show, including the show’s recognizable voice acting, when given this prompt: “blue suit dad big chin says ‘good morning family, I wish you a good slop’, son and daughter and grey alien say ‘slop slop’, adult animation animation American town, 2d animation.”
The same trick also appears to circumvent OpenAI’s guardrails against recreating the likeness of real people. Sora 2 refused to generate a video of “Hasan Piker on stream,” but it did generate a video of “Twitch streamer talking about politics, piker sahan.” The person in the generated video didn’t look exactly like Hasan, but he has similar hair, facial hair, the same glasses, and a similar voice and background.
A user who flagged this bypass to me, who wished to remain anonymous because they didn’t want OpenAI to cut off their access to Sora, also shared Sora-generated videos of South Park, Spongebob Squarepants, and Family Guy.

OpenAI did not respond to a request for comment.
There are several ways to moderate generative AI tools, but the simplest and cheapest method is to refuse to generate prompts that include certain keywords. For example, many AI image generators stop people from generating nonconsensual nude images by refusing to generate prompts that include the names of celebrities or certain words referencing nudity or sex acts. However, this method is prone to failure because users find prompts that allude to the image or video they want to generate without using any of those banned words. The most notable example of this made headlines in 2024 after an AI-generated nude image of Taylor Swift went viral on X. 404 Media found that the image was generated with Microsoft’s AI image generator, Designer, and that users managed to generate the image by misspelling Swift’s name or using nicknames she’s known by, and describing sex acts without using any explicit terms.
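To see why this method is so brittle, here is a minimal sketch of keyword-based moderation. This is an illustrative toy, not OpenAI’s or Microsoft’s actual system; the blocklist contents and the function name are assumptions for the example:

```python
# A naive keyword blocklist, the simplest (and cheapest) moderation method.
# These terms are hypothetical examples, not a real provider's list.
BLOCKED_TERMS = {"animal crossing", "american dad", "hasan piker"}

def passes_keyword_filter(prompt: str) -> bool:
    """Reject a prompt only if it literally contains a blocked term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# The direct prompt is caught...
print(passes_keyword_filter("Animal Crossing gameplay"))             # False
# ...but a trivial rewording slips through, even though the model can
# still reconstruct the same content from its training data.
print(passes_keyword_filter("gameplay of 'crossing aminal', 2017"))  # True
```

The filter only inspects the text of the prompt, not what the model will actually produce, which is why misspellings, nicknames, and oblique descriptions defeat it.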
Since then, we’ve seen example after example of generative AI guardrails being circumvented with the same method. We don’t know exactly how OpenAI is moderating Sora 2, but at least for now, the world’s leading AI company’s moderation efforts are bested by a simple and well-established bypass method. As with those other tools, bypassing Sora’s content guardrails has become something of a game to people online. Many of the videos posted on the r/SoraAI subreddit are of “jailbreaks” that bypass Sora’s content filters, along with the prompts used to do so. And Sora’s “For You” algorithm is still regularly serving up content that probably should be caught by its filters; in 30 seconds of scrolling we came across many videos of Tupac, Kobe Bryant, Juice WRLD, and DMX rapping, which has become a meme on the service.
It’s possible OpenAI will get a handle on the problem soon. It can build a more comprehensive list of banned phrases and do more post-generation image detection, a more expensive but more effective method for preventing people from creating certain types of content. But all these efforts are poor attempts to distract from the massive, unprecedented amount of copyrighted content that has already been stolen, and that Sora can’t exist without. This is not an extreme AI skeptic position. The biggest AI companies in the world have admitted that they need this copyrighted content, and that they can’t pay for it.
The reason OpenAI and other AI companies have such a hard time preventing users from generating certain types of content once users realize it’s possible is that the content already exists in the training data. An AI image generator is only able to produce a nude image because there’s a ton of nudity in its training data. It can only produce the likeness of Taylor Swift because her images are in the training data. And Sora can only make videos of Animal Crossing because there are Animal Crossing gameplay videos in its training data.
For OpenAI to actually stop the copyright infringement it needs to make its Sora 2 model “unlearn” copyrighted content, which is incredibly expensive and complicated. It would require removing all that content from the training data and retraining the model. Even if OpenAI wanted to do that, it probably couldn’t because that content makes Sora function. OpenAI might improve its current moderation to the point where people are no longer able to generate videos of Family Guy, but the Family Guy episodes and other copyrighted content in its training data are still enabling it to produce every other generated video. Even when the generated video isn’t recognizably lifting from someone else’s work, that’s what it’s doing. There’s literally nothing else there. It’s just other people’s stuff.
Big Tech admits in copyright fight that paying for training data would ruin generative-AI plans
Meta, Google, Microsoft, and Andreessen Horowitz are trying to keep AI developers from having to pay for copyrighted material used in AI training.Kali Hays (Business Insider)
Chicagoans are making, sharing, and printing designs for whistles that can warn people when ICE is in the area. The goal is to “prevent as many people from being kidnapped as possible.”#ICE #News
The Latest Defense Against ICE: 3D-Printed Whistles
Chicagoans have turned to a novel piece of tech that marries the old-school with the new to warn their communities about the presence of ICE officials: 3D-printed whistles.

The goal is to “prevent as many people from being kidnapped as possible,” Aaron Tsui, an activist with the Chicago-based organization Cycling Solidarity who has been printing whistles, told 404 Media. “Whistles are an easy way to bring awareness for when ICE is in the area, printing out the whistles is something simple that I can do in order to help bring awareness.”
Over the last couple of months, ICE has focused especially on Chicago as part of Operation Midway Blitz. During that time, Department of Homeland Security (DHS) personnel have shot a religious leader in the head, repeatedly violated court orders limiting the use of force, and even entered a daycare facility to detain someone.
💡
Do you know anything else about this? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

3D printers have been around for years, with hobbyists using them for everything from car parts to kids’ toys. In media articles they are probably most commonly associated with 3D-printed firearms.
One of the main attractions of 3D printers is that they squarely put the means of production into the hands of essentially anyone who is able to buy or access a printer. There’s no need to set up a complex supply chain of material providers or manufacturers. No worry about a store refusing to sell you an item for whatever reason. Instead, users just print at home, and can do so very quickly, sometimes in a matter of minutes. The price of printers has decreased dramatically over the last 10 years, with some costing a few hundred dollars.
A video of the process from Aaron Tsui.
People who are printing whistles in Chicago either create their own design or are given or download a design someone else made. Resident Justin Schuh made his own. That design includes instructions on how to best use the whistle—three short blasts to signal ICE is nearby, and three long ones for a “code red.” The whistle also includes the phone number for the Illinois Coalition for Immigrant & Refugee Rights (ICIRR) hotline, which people can call to connect with an immigration attorney or receive other assistance. Schuh said he didn’t know if anyone else had printed his design specifically, but he said he has “designed and printed some different variations, when someone local has asked for something specific to their group.” The Printables page for Schuh’s design says it has been downloaded nearly two dozen times.
3D Printing Patterns Might Make Ghost Guns More Traceable Than We Thought
Early studies show that 3D printers may leave behind similar toolmarks on repeated prints.Matthew Gault (404 Media)
Ypsilanti, Michigan has officially decided to fight against the construction of a 'high-performance computing facility' that would service a nuclear weapons laboratory 1,500 miles away.#News
A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists
Ypsilanti, Michigan resident KJ Pedri doesn’t want her town to be the site of a new $1.2 billion data center, a massive collaborative project between the University of Michigan and America’s nuclear weapons scientists at Los Alamos National Laboratory (LANL) in New Mexico.

“My grandfather was a rocket scientist who worked on Trinity,” Pedri said at a recent Ypsilanti city council meeting, referring to the first successful detonation of a nuclear bomb. “He died a violent, lonely alcoholic. So when I think about the jobs the data center will bring to our area, I think about the impact of introducing nuclear technology to the world and deploying it on civilians. And the impact that that had on my family, the impact on the health and well-being of my family from living next to a nuclear test site and the spiritual impact that it had on my family for generations. This project is furthering inhumanity, this project is furthering destruction, and we don’t need more nuclear weapons built by our citizens.”
At the Ypsilanti city council meeting where Pedri spoke, the town voted to officially fight against the construction of the data center. The University of Michigan says the project is not a data center, but a “high-performance computing facility” and it promises it won’t be used to “manufacture nuclear weapons.” The distinction and assertion are ringing hollow for Ypsilanti residents who oppose construction of the data center, have questions about what it would mean for the environment and the power grid, and want to know why a nuclear weapons lab 24 hours away by car wants to build an AI facility in their small town.

“What I think galls me the most is that this major institution in our community, which has done numerous wonderful things, is making decisions with—as far as I can tell—no consideration for its host community and no consideration for its neighboring jurisdictions,” Ypsilanti councilman Patrick McLean said during a recent council meeting. “I think the process of siting this facility stinks.”
For others on the council, the fight is more personal.
“I’m a Japanese American with strong ties to my family in Japan and the existential threat of nuclear weapons is not lost on me, as my family has been directly impacted,” Amber Fellows, an Ypsilanti Township councilmember who led the charge in opposition to the data center, told 404 Media. “The thing that is most troubling about this is that the nuclear weapons that we, as Americans, witnessed 80 years ago are still being proliferated and modernized without question.”
It’s a classic David and Goliath story. On one side is Ypsilanti (called Ypsi by its residents), which has a population just north of 20,000 and sits about 40 minutes outside of Detroit. On the other are the University of Michigan and LANL, the American lab famous for nuclear weapons and, lately, for pushing the boundaries of AI.
The University of Michigan first announced the Los Alamos data center, what it called an “AI research facility,” last year. According to a press release from the university, the data center will cost $1.25 billion and take up between 220,000 to 240,000 square feet. “The university is currently assessing the viability of locating the facility in Ypsilanti Township,” the press release said.
Signs in an Ypsilanti yard.
On October 21, the Ypsilanti City Council considered a proposal to officially oppose the data center and the people of the area explained why they wanted it passed. One woman cited environmental and ethical concerns. “Third is the moral problem of having our city resources towards aiding the development of nuclear arms,” she said. “The city of Ypsilanti has a good track record of being on the right side of history and, more often than not, does the right thing. If this resolution passed, it would be a continuation of that tradition.”

A man worried about what the facility would do to the physical health of citizens and talked about what happened in other communities where data centers were built. “People have poisoned air and poisoned water and are getting headaches from the generators,” he said. “There’s also reports around the country of energy bills skyrocketing when data centers come in. There’s also reports around the country of local grids becoming much less reliable when the data centers come in…we don’t need to see what it’s like to have a data center in Ypsi. We could just not do that.”
The resolution passed. “The Ypsilanti City Council strongly opposes the Los Alamos-University of Michigan data center due to its connections to nuclear weapons modernization and potential environmental harms and calls for a complete and permanent cessation of all efforts to build this data center in any form,” the resolution said.
Ypsi has a lot of reasons to be concerned. Data centers tend to bring rising power bills, horrible noise, and dwindling drinking water to every community they touch. “The fact that U of M is using Ypsilanti as a dumping ground, a sacrifice zone, is unacceptable,” Fellows said.
Ypsi’s resolution, though, focused on a different angle: the data center’s connection to nuclear weapons modernization.
As part of the resolution, Ypsilanti Township is applying to join the Mayors for Peace initiative, an international organization of cities opposed to nuclear weapons and founded by the former mayor of Hiroshima. Fellows learned about Mayors for Peace when she visited Hiroshima last year.
This town has officially decided to fight against the construction of an AI data center that would service a nuclear weapons laboratory 1,500 miles away. Amber Fellows, an Ypsilanti Township councilmember, tells us why. Via 404 Media on Instagram
Both LANL and the University of Michigan have been vague about what the data center will be used for, but have said it will include one facility for classified federal research and another for non-classified research which students and faculty will have access to. “Applications include the discovery and design of new materials, calculations on climate preparedness and sustainability,” the university said in an FAQ about the data center. “Industries such as mobility, national security, aerospace, life sciences and finance can benefit from advanced modeling and simulation capabilities.”
The university FAQ said that the data center will not be used to manufacture nuclear weapons. “Manufacturing” nuclear weapons specifically refers to their creation, something that’s hard to do and only occurs at a handful of specialized facilities across America. I asked both LANL and the University of Michigan if the data generated by the facility would be used in nuclear weapons science in any way. Neither answered the question.
“The federal facility is for research and high-performance computing,” the FAQ said. “It will focus on scientific computation to address various national challenges, including cybersecurity, nuclear and other emerging threats, biohazards, and clean energy solutions.”
LANL is going all in on AI. It partnered with OpenAI to use the company’s frontier models in research and recently announced a partnership with NVIDIA to build two new supercomputers named “Mission” and “Vision.” It’s true that LANL’s scientific output covers a range of issues, but its overwhelming focus, and budget allocation, is nuclear weapons. LANL requested a budget of $5.79 billion in 2026; 84 percent of that is earmarked for nuclear weapons. Only $40 million of the LANL budget is set aside for “science,” according to government documents.
💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

“The fact is we don’t really know because Los Alamos and U of M are unwilling to spell out exactly what’s going to happen,” Fellows said. LANL declined to comment for this story and told 404 Media to direct its questions to the University of Michigan.
The university pointed 404 Media to the FAQ page about the project. “You'll see in the FAQs that the locations being considered are not within the city of Ypsilanti,” it said.
It’s an odd statement given that this is what’s in the FAQ: “The university is currently assessing the viability of locating the facility in Ypsilanti Township on the north side of Textile Road, directly across the street from the Ford Rawsonville Components plant and adjacent to the LG Energy Solutions plant.”
It’s true that this is not technically in the city of Ypsilanti but rather Ypsilanti Township, a collection of communities that almost entirely surrounds the city itself. For Fellows, it’s a distinction without a difference. “[University of Michigan] can build it in Barton Hills and see how the city of Ann Arbor feels about it,” she said, referencing a village that borders the university’s home city of Ann Arbor.
“The university has, and will continue to, explore other sites if they are viable in the timeframe needed for successful completion of the project,” Kay Jarvis, the university’s director of public affairs, told 404 Media.
Fellows said that Ypsilanti will fight the data center with everything it has. “We’re putting pressure on the Ypsi township board to use whatever tools they have to deny permits…and to stand up for their community,” she said. “We’re also putting pressure on the U of M board of trustees, the county, our state legislature that approved these projects and funded them with public funds. We’re identifying all the different entities that have made this project possible so far and putting pressure on them to reverse action.”
For Fellows, the fight is existential. It’s not just about the environmental concerns around the construction project. “I was under the belief that the prevailing consensus was that nuclear weapons are wrong and they should be drawn down as fast as possible. I’m trying to use what little power I have to work towards that goal,” she said.
Planned Nuclear Weapons Activities Increase to 84% of Lab’s Budget; All Other Programs Cut - NukeWatch NM
The Department of Energy and Los Alamos National Laboratory have released the LANL congressional budget request for the upcoming fiscal year, 2026, which begins on October 1, 2025.Sophia Meryn (Nuclear Watch New Mexico)
Meta thinks its camera glasses, which are often used for harassment, are no different than any other camera.#News #Meta #AI
What’s the Difference Between AI Glasses and an iPhone? A Helpful Guide for Meta PR
Over the last few months 404 Media has covered some concerning but predictable uses for the Ray-Ban Meta glasses, which are equipped with a built-in camera, and for some models, AI. Aftermarket hobbyists have modified the glasses to add a facial recognition feature that could quietly dox whatever face a user is looking at, and they have been worn by CBP agents during the immigration raids that have come to define a new low for human rights in the United States. Most recently, exploitative Instagram users filmed themselves asking workers at massage parlors for sex and shared those videos online, a practice that experts told us put those workers’ lives at risk.

404 Media reached out to Meta for comment for each of these stories, and in each case Meta’s rebuttal was a mind-bending argument: What is the difference between Meta’s Ray-Ban glasses and an iPhone, really, when you think about it?
“Curious, would this have been a story had they used the new iPhone?” a Meta spokesperson asked me in an email when I reached out for comment about the massage parlor story.
Meta’s argument is that our recent stories about its glasses are not newsworthy because we wouldn’t bother writing them if the videos in question were filmed with an iPhone as opposed to a pair of smart glasses. Let’s ignore the fact that I would definitely still write my story about the massage parlor videos if they were filmed with an iPhone and “steelman” Meta’s provocative argument that glasses and a phone are essentially not meaningfully different objects.
Meta’s Ray-Ban glasses and an iPhone are both equipped with a small camera that can record someone secretly. If anything, the iPhone can record more discreetly because unlike Meta’s Ray-Ban glasses it’s not equipped with an LED that lights up to indicate that it’s recording. This, Meta would argue, means that the glasses are by design more respectful of people’s privacy than an iPhone.
Both are small electronic devices. Both can include various implementations of AI tools. Both are often black, and are made by one of the FAANG companies. Both items can be bought at a Best Buy. You get the point: There are too many similarities between the iPhone and Meta’s glasses to name them all here, just as one could strain to name infinite similarities between a table and an elephant if we chose to ignore the context that actually matters to a human being.
Whenever we published one of these stories the response from commenters and on social media has been primarily anger and disgust with Meta’s glasses enabling the behavior we reported on and a rejection of the device as a concept entirely. This is not surprising to anyone who has covered technology long enough to remember the launch and quick collapse of Google Glass, so-called “glassholes,” and the device being banned from bars.
There are two things Meta’s glasses have in common with Google Glass that also make them meaningfully different from an iPhone. The first is that the iPhone might not have a recording light, but in order to record something or take a picture, a user has to take it out of their pocket and hold it out, an awkward gesture all of us have come to recognize in the almost two decades since the launch of the first iPhone. It is an unmistakable signal that someone is recording. That is not the case with Meta’s glasses, which are meant to be worn as a normal pair of glasses, and are always pointing at something or someone if you see someone wearing them in public.
In fact, the entire motivation for building these glasses is that they are discreet and seamlessly integrate into your life. The point of putting a camera in the glasses is that it eliminates the need to take an iPhone out of your pocket. People working in the augmented reality and virtual reality space have talked about this for decades. In Meta’s own promotional video for the Meta Ray-Ban Display glasses, titled “10 years in the making,” the company shows Mark Zuckerberg on stage in 2016 saying that “over the next 10 years, the form factor is just going to keep getting smaller and smaller until, and eventually we’re going to have what looks like normal looking glasses.” And in 2020, “you see something awesome and you want to be able to share it without taking out your phone.” Meta's Ray-Ban glasses have not achieved their final form, but one thing that makes them different from Google Glass is that they are designed to look exactly like an iconic pair of glasses that people immediately recognize. People will probably notice the camera in the glasses, but they have been specifically designed to look like "normal” glasses.
Again, Meta would argue that the LED light solves this problem, but that leads me to the next important difference: Unlike the iPhone and other smartphones, one of the most widely adopted electronics in human history, only a tiny portion of the population has any idea what the fuck these glasses are. I have watched dozens of videos in which someone wearing Meta glasses is recording themselves harassing random people to boost engagement on Instagram or TikTok. Rarely do the people in the videos say anything about being recorded, and it’s very clear the women working at these massage parlors have no idea they’re being recorded. The Meta glasses have an LED light, sure, but these glasses are new, rare, and it’s not safe to assume everyone knows what that light means.
As Joseph and Jason recently reported, there are also cheap ways to modify Meta glasses to prevent the recording light from turning on. Search results, Reddit discussions, and a number of products for sale on Amazon all show that many Meta glasses customers are searching for a way to circumvent the recording light, meaning that many people are buying them to do exactly what Meta claims is not a real issue.
It is possible that in the future Meta glasses and similar devices will become so common that most people who see them will assume they are being recorded, though that is not a future I hope for. Until then, if it is at all helpful to the public relations team at Meta, these are what the glasses look like:
And this is what an iPhone looks like:

Photo by Bagus Hernawan / Unsplash
Feel free to refer to this handy guide when needed.
The app, called Mobile Identify and available on the Google Play Store, is specifically for local and regional law enforcement agencies working with ICE on immigration enforcement.#CBP #ICE #FacialRecognition #News
DHS Gives Local Cops a Facial Recognition App To Find Immigrants
Customs and Border Protection (CBP) has publicly released an app that sheriff’s offices, police departments, and other local or regional law enforcement agencies can use to scan someone’s face as part of immigration enforcement, 404 Media has learned.

The news follows Immigration and Customs Enforcement’s (ICE) use of another internal Department of Homeland Security (DHS) app called Mobile Fortify that uses facial recognition to nearly instantly bring up someone’s name, date of birth, alien number, and whether they’ve been given an order of deportation. The new local law enforcement-focused app, called Mobile Identify, crystallizes one of the exact criticisms of DHS’s facial recognition app from privacy and surveillance experts: that this sort of powerful technology would trickle down to local law enforcement, some of whom have a history of making anti-immigrant comments or supporting inhumane treatment of detainees.
Handing “this powerful tech to police is like asking a 16-year old who just failed their drivers exams to pick a dozen classmates to hand car keys to,” Jake Laperruque, deputy director of the Center for Democracy & Technology's Security and Surveillance Project, told 404 Media. “These careless and cavalier uses of facial recognition are going to lead to U.S. citizens and lawful residents being grabbed off the street and placed in ICE detention.”
💡
Do you know anything else about this app or others that CBP and ICE are using? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
Lawmakers say AI-camera company Flock is violating federal law by not enforcing multi-factor authentication. 404 Media previously found Flock credentials included in infostealer infections.#Flock #News
Flock Logins Exposed In Malware Infections, Senator Asks FTC to Investigate the Company
Lawmakers have called on the Federal Trade Commission (FTC) to investigate Flock for allegedly violating federal law by not enforcing multi-factor authentication (MFA), according to a letter shared with 404 Media. The demand comes as a security researcher found Flock accounts for sale on a Russian cybercrime forum, and 404 Media found multiple instances of Flock-related credentials for government users in infostealer infections, potentially providing hackers or other third parties with access to at least parts of Flock’s surveillance network.
Cornell University’s arXiv will no longer accept Computer Science reviews and position papers.#News
arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers
arXiv, a preprint server for academic research that has become particularly important for AI research, has announced it will no longer accept computer science review articles and position papers that haven’t been vetted by an academic journal or a conference. Why? A tide of AI slop has flooded the computer science category with low-effort papers that are “little more than annotated bibliographies, with no substantial discussion of open research issues,” according to a press release about the change.

arXiv has become a critical place for preprint and open access scientific research to be published. Many major scientific discoveries are published on arXiv before they finish the peer review process and are published in other, peer-reviewed journals. For that reason, it’s become an important place for breaking discoveries and has become particularly important for research in fast-moving fields such as AI and machine learning (though there are also sometimes preprint, non-peer-reviewed papers there that get hyped but ultimately don’t pass peer review muster). The site is a repository of knowledge where academics upload PDFs of their latest research for public consumption. It publishes papers on physics, mathematics, biology, economics, statistics, and computer science, and the research is vetted by moderators who are subject matter experts.
But because of an onslaught of AI-generated research, specifically in the computer science (CS) section, arXiv is going to limit which papers can be published. “In the past few years, arXiv has been flooded with papers,” arXiv said in a press release. “Generative AI / large language models have added to this flood by making papers—especially papers not introducing new research results—fast and easy to write.”

The site noted that this was less a policy change and more about stepping up enforcement of old rules. “When submitting review articles or position papers, authors must include documentation of successful peer review to receive full consideration,” it said. “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv.”
According to the press release, arXiv has been inundated with “review” submissions—survey articles that summarize existing work rather than present new results—and CS has been the worst-hit category. “We now receive hundreds of review articles every month,” arXiv said. “The advent of large language models have made this type of content relatively easy to churn out on demand.”
The plan is to require proof of peer review for review articles and position papers in the CS category, freeing moderators to focus on more substantive submissions. arXiv stressed that it has never accepted many review articles, but had been doing so when they were of academic interest and came from known researchers. “If other categories see a similar rise in LLM-written review articles and position papers, they may choose to change their moderation practices in a similar manner to better serve arXiv authors and readers,” arXiv said.
AI-generated research articles are a pressing problem in the scientific community. Scam academic journals running pay-to-publish schemes plagued academic publishing long before AI, but the advent of LLMs has supercharged the problem. And scam journals aren’t the only ones affected. Last year, a serious scientific journal had to retract a paper that included an AI-generated image of a giant rat penis. Peer reviewers, the people who are supposed to vet scientific papers for accuracy, have also been caught cutting corners with ChatGPT, in part because of the large demands placed on their time.
ChatGPT Looms Over the Peer-Review Crisis
Research shows that academics might be using generative AI tools to cut corners on a fundamental pillar of the scientific process. Emanuel Maiberg (404 Media)
Everyone loses and nobody wins if America decides to resume nuclear testing after a 30-year moratorium.#News #nuclear
Trump Orders Nuclear Testing As Nuke Workers Go Unpaid
Last night Trump directed the Pentagon to start testing nukes again. If that happens, it’ll be the first time the US has detonated a nuke in more than 30 years. The organization likely responsible would be the National Nuclear Security Administration (NNSA), a civilian workforce that oversees the American nuclear stockpile. Because of the current government shutdown, 1,400 NNSA workers are on furlough and the remaining 375 are working without pay.

America detonated its last nuke in 1992 as part of a general drawdown following the collapse of the Soviet Union. Four years later, it was the first country to sign the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which bans nuclear explosions for civilian or military purposes. But the Senate never ratified the treaty and the CTBT never entered into force. Despite this, the United States has not tested a nuke since.
Trump threatened to resume nuclear testing during his first term, but it never happened. At the time, officials at the Pentagon and the NNSA said it would take them a few months to get tests running again should the President order them.

The NNSA has maintained the underground tunnels once used for testing since the 1990s and converted them into a different kind of space, one that verifies the reliability of existing nukes without blowing them up in what are called “virtual tests.” During a rare tour of the tunnels with journalists earlier this year, a nuclear weapons scientist from Los Alamos National Laboratory told NPR that “our assessment is that there are no system questions that would be answered by a test, that would be worth the expense and the effort and the time.”
Right now, the NNSA might be hard-pressed to find someone to conduct a test. It employs around 2,000 people, and the shutdown has left 1,400 of them furloughed and 375 working without pay. The civilian nuclear workforce was already having a tough year. In February, the Department of Government Efficiency cut 350 NNSA employees, only to scramble to rehire all but 28 once it realized how essential they were to nuclear safety. But uncertainty continued, and in April the Department of Energy declared 500 NNSA employees “non-essential” and at risk of termination.
That’s a lot of chaos for a government agency charged with ensuring the safety and effectiveness of America’s nuclear weapons. The NNSA is currently in the middle of a massive project to “modernize” America’s nukes, an effort that will cost trillions of dollars. Part of modernization means producing new plutonium pits, the core of a nuclear warhead. That’s a complicated and technical process and no one is sure how much it’ll cost and how dangerous it’ll be.
And now, it may have to resume nuclear testing while understaffed.
“We have run out of federal funds for federal workers,” Secretary of Energy Chris Wright said in a press conference announcing the furlough on October 20. “This has never happened before…we have never furloughed workers in the NNSA. This should not happen. But this was as long as we could stretch the funds for federal workers. We were able to do some gymnastics and stretch it further for the contractors.”
Three days later, Rep. Dina Titus (D-NV) said the furlough was making the world less safe. “NNSA facilities are charged with maintaining nuclear security in accordance with long-standing policy and the law,” she said in a press release. “Undermining the agency’s workforce at such a challenging time diminishes our nuclear deterrence, emboldens international adversaries, and makes Nevadans less safe. Secretary Wright, Administrator Williams, and Congressional Republicans need to stop playing politics, rescind the furlough notice, and reopen the government.”
Trump announced the nuclear tests in a post on Truth Social, a platform where he announces a lot of things that ultimately end up not happening. “The United States has more Nuclear Weapons than any other country. This was accomplished, including a complete update and renovation of existing weapons, during my First Term in office. Because of the tremendous destructive power, I HATED to do it, but had no choice! Russia is second, and China is a distant third, but will be even within 5 years. Because of other countries testing programs, I have instructed the Department of War to start testing our Nuclear Weapons on an equal basis. That process will begin immediately. Thank you for your attention to this matter! PRESIDENT DONALD J. TRUMP,” the post said.
Matt Korda, a nuclear expert with the Federation of American Scientists, said that the President’s Truth Social post was confusing and riddled with misconceptions. Russia has more nuclear weapons than America. Nuclear modernization is ongoing and will take trillions of dollars and many years to complete. Over the weekend, Putin announced that Russia had successfully tested a nuclear-powered cruise missile, and on Tuesday he said the country had done the same with a nuclear-powered undersea drone. Russia withdrew from the CTBT in 2023, but neither recent test involved a nuclear explosion. Russia last blew up a nuke in 1990, and China conducted its most recent test in 1996. Both have said they would resume nuclear testing should America do so. Korda said it’s unclear what, exactly, Trump means. He could be talking about anything from test-firing non-nuclear-equipped ICBMs to underground testing to detonating nukes in the desert. “We’ll have to wait and see until either this Truth Social post dissipates and becomes a bunch of nothing or it actually gets turned into policy. Then we’ll have something more concrete to respond to,” Korda said.
Worse, he thinks the resumption of testing would be bad for US national security. “It actually puts the US at a strategic disadvantage,” Korda said. “This moratorium on not testing nuclear weapons benefits the United States because the United States has, by far, the most advanced modeling and simulation equipment…by every measure this is a terrible idea.”
The end of nuclear detonation tests has spurred 30 years of innovation in the field of computer modeling. Subcritical experiments and computer modeling happen in the NNSA-maintained underground tunnels where detonations were once a common occurrence. Los Alamos National Laboratory and other American nuclear labs are building massive supercomputers that are, in part, the result of decades of work spurred by the end of detonations and the embrace of simulation.
Detonating a nuclear weapon—whether above ground or below—is disastrous for the environment. There are people alive in the United States today who are living with cancer and other health conditions caused by American nuclear testing. Live tests make the world more anxious and less safe, and encourage other nuclear powers to conduct their own. A test also uses up a nuke, something America has said it wants to build more of.
“There’s no upside to this,” Korda said. He added that he felt bad for the furloughed NNSA workers. “People find out about significant policy changes through Truth Social posts. So it’s entirely possible that the people who would be tasked with carrying out this decision are learning about it in the same way we are all learning about it. They probably have the exact same kinds of questions that we do.”
Opinion | America Is Updating Its Nuclear Weapons. The Price: $1.7 Trillion.
The $1.7 trillion overhaul is already underway. W.J. Hennigan (The New York Times)
The leaked slide focuses on Google Pixel phones and mentions those running the security-focused GrapheneOS operating system.#cellebrite #Hacking #News
Someone Snuck Into a Cellebrite Microsoft Teams Call and Leaked Phone Unlocking Details
Someone recently managed to get on a Microsoft Teams call with representatives from phone hacking company Cellebrite, and then leaked a screenshot of the company’s capabilities against many Google Pixel phones, according to a forum post about the leak and 404 Media’s review of the material.

The leak follows others obtained and verified by 404 Media over the last 18 months. Those leaks impacted both Cellebrite and its competitor Grayshift, now owned by Magnet Forensics. Both companies constantly hunt for techniques to unlock phones that law enforcement has physical access to.