

The app, called Mobile Identify, was launched in November, and lets local cops use facial recognition to hunt immigrants on behalf of ICE. It is unclear if the removal is temporary or not. #ICE #CBP #Privacy #News


DHS’s Immigrant-Hunting App Removed from Google Play Store


A Customs and Border Protection (CBP) app that lets local cops use facial recognition to hunt immigrants on behalf of the federal government has been removed from the Google Play Store, 404 Media has learned.

It is unclear if the removal is temporary or not, or what the exact reason is for the removal. Google told 404 Media it did not remove the app, and directed inquiries to its developer. CBP did not immediately respond to a request for comment.

Its removal comes after 404 Media documented multiple instances of CBP and ICE officials using their own facial recognition app to identify people and verify their immigration status, including people who said they were U.S. citizens.

💡
Do you know anything else about this removal or this app? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The removal also comes after “hundreds” of Google employees took issue with the app, according to a source with knowledge of the situation.

This post is for subscribers only






Kohler's Smart Toilet Camera Not Actually End-to-End Encrypted #News


Kohler's Smart Toilet Camera Not Actually End-to-End Encrypted


Home goods company Kohler would like to take a bold look in your toilet, and some photos while it’s at it. It’s OK, though, the company has promised that all the data it collects on your “waste” will be “end-to-end encrypted.” However, a deeper look into the company’s claim by technologist Simon Fondrie-Teitler revealed that Kohler seems to have no idea what E2EE actually means. According to Fondrie-Teitler’s write-up, which was first reported by TechCrunch, the company will have access to the photos the camera takes and may even use them to train AI.

The whole fiasco gives an entirely too on-the-nose meaning to the “Internet of Shit.”
Kohler launched its $600 camera, called Dekoda, to hang on your toilet earlier this year. Along with the large price tag, the toilet cam also requires a monthly service fee that starts at $6.99. If you want to track the piss and shit of a family of six, you’ll have to pay $12.99 a month.

What do you get for putting a camera on your toilet? According to Kohler’s pitch, “health & wellness insights” about your gut health and “possible signs of blood in the bowl” as “Dekoda uses advanced sensors to passively analyze your waste in the background.”

If you’re squeamish about sending pictures of your family’s “waste” to Kohler, the company promised that all of the data is “end-to-end encrypted.” The privacy page for Kohler Health said “user data is encrypted end to end, at rest and in transit,” and the claim appears in several places in the marketing.

It’s not, though. Fondrie-Teitler told 404 Media he started looking into Dekoda after he noticed friends making fun of it in a Slack he’s part of. “I saw the ‘end-to-end encryption’ claim on the homepage, which seemed at odds with what they said they were collecting in the privacy policy,” he said. “Pretty much every other company I've seen implement end-to-end encryption has published a whitepaper alongside it. Which makes sense, the details really matter so telling people what you've done is important to build trust. Plus it's generally a bunch of work so companies want to brag about it. I couldn't find any more details though.”

E2EE has a specific meaning. It’s a way of encrypting data so that only the person sending and the person receiving a message can read it. Famously, E2EE means that the company carrying the messages cannot decrypt or see them (Signal, for example, is E2EE). The point is to protect the privacy of individual users from the company prying into their data, and from any third party, like the government, that comes asking for it.
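For illustration only, here is a minimal sketch of what “end-to-end” means, using a toy one-time-pad cipher standing in for a real protocol like Signal’s. Everything here (the key handling, the XOR scheme) is a hypothetical simplification, not Kohler’s or Signal’s actual implementation; the point is only that the relay in the middle holds ciphertext and nothing else.

```python
import secrets

def xor(data: bytes, pad: bytes) -> bytes:
    # Toy one-time pad: XOR each byte of data with a key byte of equal length.
    return bytes(a ^ b for a, b in zip(data, pad))

# In a true end-to-end scheme, the key lives only on the two endpoints.
# (Real systems establish it via a key exchange; here we just generate it.)
key = secrets.token_bytes(32)

plaintext = b"photo of your gut health"  # padded to the key length in this toy
ciphertext = xor(plaintext.ljust(32, b"\0"), key)

# The company's server relays and stores ciphertext it cannot decrypt...
server_sees = ciphertext

# ...while the recipient, who holds the key, recovers the plaintext.
recovered = xor(server_sees, key).rstrip(b"\0")
assert recovered == plaintext
```

If the company’s servers can decrypt the data, as Kohler’s own email below suggests its systems do, then what it has is transport and at-rest encryption (like HTTPS), not E2EE.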

Kohler, it’s clear, has access to users’ data. This means it’s not E2EE. Fondrie-Teitler told 404 Media that he downloaded the Kohler Health app and analyzed the network traffic it sent. “I didn't see anything that would indicate an end-to-end encrypted connection being created,” he said.

Then he reached out to Kohler and had a conversation with its privacy team via email. “The Kohler Health app itself does not share data between users. Data is only shared between the user and Kohler Health,” a member of the privacy team at Kohler told Fondrie-Teitler in an email reviewed by 404 Media. “User data is encrypted at rest, when it’s stored on the user's mobile phone, toilet attachment, and on our systems. Data in transit is also encrypted end-to-end, as it travels between the user's devices and our systems, where it is decrypted and processed to provide our service.”

If Kohler can view the user’s data, as it admits to doing in this email exchange with Fondrie-Teitler, then it’s not—by definition—using E2EE.

"The term end-to-end encryption is often used in the context of products that enable a user (sender) to communicate with another user (recipient), such as a messaging application. Kohler Health is not a messaging application. In this case, we used the term with respect to the encryption of data between our users (sender) and Kohler Health (recipient)," Kohler Health told 404 Media in a statement.

"Privacy and security are foundational to Kohler Health because we know health data is deeply personal. We’re evaluating all feedback to clarify anything that may be causing confusion," it added.

“I'd like the term ‘end-to-end encryption’ to not get watered down to just meaning ‘uses https’ so I wanted to see if I could confirm what it was actually doing and let people know,” Fondrie-Teitler told 404 Media. He pointed out that Zoom once made a similar claim and had to settle with the FTC because of it.

“I think everyone has a right to privacy, and in order for that to be realized people need to have an understanding of what's happening with their data,” Fondrie-Teitler said. “It's already so hard for non-technical individuals (and even tech experts) to evaluate the privacy and security of the software and devices they're using. E2EE doesn't guarantee privacy or security, but it's a non-trivial positive signal and losing that will only make it harder for people to maintain control over their data.”

UPDATE: 12/4/2025: This story has been updated to add a statement from Kohler Health.




AI models can meaningfully sway voters on candidates and issues, including by using misinformation, and they are also evading detection in public surveys, according to three new studies. #TheAbstract #News


Scientists Are Increasingly Worried AI Will Sway Elections


🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

Scientists are raising alarms about the potential influence of artificial intelligence on elections, according to a spate of new studies that warn AI can rig polls and manipulate public opinion.

In a study published in Nature on Thursday, scientists report that AI chatbots can meaningfully sway people toward a particular candidate, proving more effective than video or television ads. Moreover, chatbots optimized for political persuasion “may increasingly deploy misleading or false information,” according to a separate study published the same day in Science.

“The general public has lots of concern around AI and election interference, but among political scientists there’s a sense that it’s really hard to change peoples’ opinions,” said David Rand, a professor of information science, marketing, and psychology at Cornell University and an author of both studies. “We wanted to see how much of a risk it really is.”

In the Nature study, Rand and his colleagues enlisted 2,306 U.S. citizens to converse with an AI chatbot in late August and early September 2024. The AI model was tasked with increasing support for an assigned candidate (Harris or Trump), and with either increasing the odds a participant would vote if they initially favored the model’s candidate or decreasing those odds if they favored the opposing candidate, in other words, voter suppression.

In the U.S. experiment, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris, a shift four times larger than the impact of traditional video ads used in the 2016 and 2020 elections. Meanwhile, the pro-Trump AI model nudged likely Harris voters 1.51 points toward Trump.

The researchers ran similar experiments involving 1,530 Canadians and 2,118 Poles during the lead-up to their national elections in 2025. In the Canadian experiment, AIs advocated either for Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre. Meanwhile, the Polish AI bots advocated for either Rafał Trzaskowski, the centrist-liberal Civic Coalition’s candidate, or Karol Nawrocki, the right-wing Law and Justice party’s candidate.

The Canadian and Polish bots were even more persuasive than the American ones: in many cases they shifted candidate preferences by up to 10 percentage points, roughly three times the shift seen among U.S. participants. It’s hard to pinpoint exactly why the models were so much more persuasive to Canadians and Poles, but one significant factor could be the intense media coverage and extended campaign duration in the United States relative to the other nations.

“In the U.S., the candidates are very well-known,” Rand said. “They've both been around for a long time. The U.S. media environment also really saturates people with information about the candidates in the campaign, whereas things are quite different in Canada, where the campaign doesn't even start until shortly before the election.”

“One of the key findings across both papers is that it seems like the primary way the models are changing people's minds is by making factual claims and arguments,” he added. “The more arguments and evidence that you've heard beforehand, the less responsive you're going to be to the new evidence.”

While the models were most persuasive when they provided fact-based arguments, they didn’t always present factual information. Across all three nations, the bot advocating for the right-leaning candidates made more inaccurate claims than those boosting the left-leaning candidates. Right-leaning laypeople and party elites tend to share more inaccurate information online than their peers on the left, so this asymmetry likely reflects the internet-sourced training data.

“Given that the models are trained essentially on the internet, if there are many more inaccurate, right-leaning claims than left-leaning claims on the internet, then it makes sense that from the training data, the models would sop up that same kind of bias,” Rand said.

With the Science study, Rand and his colleagues aimed to drill down into the exact mechanisms that make AI bots persuasive. To that end, the team tasked 19 large language models (LLMs) with swaying nearly 77,000 U.K. participants on 707 political issues.

The results showed that the most effective persuasion tactic was to provide arguments packed with as many facts as possible, corroborating the findings of the Nature study. However, there was a serious tradeoff to this approach, as models tended to start hallucinating and making up facts the more they were pressed for information.

“It is not the case that misleading information is more persuasive,” Rand said. “I think that what's happening is that as you push the model to provide more and more facts, it starts with accurate facts, and then eventually it runs out of accurate facts. But you're still pushing it to make more factual claims, so then it starts grasping at straws and making up stuff that's not accurate.”

In addition to these two new studies, research published in Proceedings of the National Academy of Sciences last month found that AI bots can now corrupt public opinion data by responding to surveys at scale. Sean Westwood, associate professor of government at Dartmouth College and director of the Polarization Research Lab, created an AI agent that evaded detection as an automated survey respondent in 99.8 percent of 6,000 attempts.

“Critically, the agent can be instructed to maliciously alter polling outcomes, demonstrating an overt vector for information warfare,” Westwood warned in the study. “These findings reveal a critical vulnerability in our data infrastructure, rendering most current detection methods obsolete and posing a potential existential threat to unsupervised online research.”

Taken together, these findings suggest that AI could influence future elections in a number of ways, from manipulating survey data to persuading voters to switch their candidate preference—possibly with misleading or false information.

To counter the impact of AI on elections, Rand suggested that campaign finance laws should require more transparency about the use of AI, including canvasser bots; he also emphasized the role of raising public awareness.

“One of the key take-homes is that when you are engaging with a model, you need to be cognizant of the motives of the person that prompted the model, that created the model, and how that bleeds into what the model is doing,” he said.





A presentation at the International Atomic Energy Agency unveiled Big Tech’s vision of an AI- and nuclear-fueled future. #News #AI #nuclear


‘Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants


During a presentation at the International Atomic Energy Agency’s (IAEA) International Symposium on Artificial Intelligence on December 3, a US Department of Energy scientist laid out a grand vision of the future where nuclear energy powers artificial intelligence and artificial intelligence shapes nuclear energy in “a virtuous cycle of peaceful nuclear deployment.”

“The goal is simple: to double the productivity and impact of American science and engineering within a decade,” Rian Bahran, DOE Deputy Assistant Secretary for Nuclear Reactors, said.

His presentation and others during the symposium, held in Vienna, Austria, described a world where nuclear powered AI designs, builds, and even runs the nuclear power plants they’ll need to sustain them. But experts find these claims, made by one of the top nuclear scientists working for the Trump administration, to be concerning and potentially dangerous.

Tech companies are using artificial intelligence to speed up the construction of new nuclear power plants in the United States. But few know how far the Trump administration is going to pave the way, and the part it's playing in deregulating a highly regulated industry, to ensure that AI data centers have the energy they need to shape the future of America and the world.
At the IAEA, scientists, nuclear energy experts, and lobbyists discussed what that future might look like. To say the nuclear people are bullish on AI is an understatement. “I call this not just a partnership but a structural alliance. Atoms for algorithms. Artificial intelligence is not just powered by nuclear energy. It’s also improving it because this is a two way street,” IAEA Director General Rafael Mariano Grossi said in his opening remarks.

In his talk, Bahran explained that the DOE has partnered with private industry to invest $1 trillion to “build what will be an integrated platform that connects the world’s best supercomputers, AI systems, quantum systems, advanced scientific instruments, the singular scientific data sets at the National Laboratories—including the expertise of 40,000 scientists and engineers—in one platform.”
Image via the IAEA.
Big tech has had an unprecedented run of cultural, economic, and technological dominance, expanding into a bubble that seems close to bursting. For more than 20 years, new billion-dollar companies appeared seemingly overnight and offered people new and exciting ways of communicating. Now Google Search is broken, AI is melting human knowledge, and people have stopped buying a new smartphone every year. To keep the numbers going up and ensure its cultural dominance, tech (and the US government) is betting big on AI.

The problem is that AI requires massive datacenters to run, and those datacenters need an incredible amount of energy. To solve the problem, the US is rushing to build out new nuclear reactors. Building a new power plant safely is a multi-year process that requires an incredible level of human oversight. It’s also expensive: not every new nuclear reactor project gets finished, and they often run over budget and drag on for years.

But AI needs power now, not tomorrow and certainly not a decade from now.

According to Bahran, the problem of AI advancement outpacing the availability of datacenters is an opportunity to deploy new and exciting tech. “We see a future, and a near future, by the way, of an AI driven laboratory pipeline for materials modeling, discovery, characterization, evaluation, qualification and rapid iteration,” he said in his talk, explaining how AI would help design new nuclear reactors. “These efforts will substantially reduce the time and cost required to qualify advanced materials for next generation reactor systems. This is an autonomous research paradigm that integrates five decades of global irradiation data with generative AI robotics and high throughput experimentation methodologies.”

“For design, we’re developing advanced software systems capable of accelerating nuclear reactor deployments by enabling AI to explore the comprehensive design spaces, generate 3D models, [and] conduct rigorous failure mode analyses with minimal human intervention,” he added. “But of course, with humans in the loop. These AI powered design tools are projected to reduce design timelines by multiple factors, and the goal is to connect AI agents to tools to expedite autonomous design.”

Bahran also said that AI would speed up the nuclear licensing process, a complex regulatory process that helps build nuclear power plants safely. “Ultimately, the objective is, how do we accelerate that licensing pathway?” he said. “Think of a future where there is a gold standard, AI trained capacity building safety agent.”

He even said that he thinks AI would help run these new nuclear plants. “We're developing software systems employing AI driven digital twins to interpret complex operational data in real time, detect subtle operational deviations at early stages and recommend preemptive actions to enhance safety margins,” he said.

One of the slides Bahran showed during the presentation attempted to quantify the amount of human involvement these new AI-controlled power plants would have. He estimated less than five percent “human intervention during normal operations.”
Image via IAEA.
“The claims being made on these slides are quite concerning, and demonstrate an even more ambitious (and dangerous) use of AI than previously advertised, including the elimination of human intervention. It also cements that it is the DOE's strategy to use generative AI for nuclear purposes and licensing, rather than isolated incidents by private entities,” Heidy Khlaaf, head AI scientist at the AI Now Institute, told 404 Media.

“The implications of AI-generated safety analysis and licensing in combination with aspirations of <5% of human intervention during normal operations, demonstrates a concerted effort to move away from humans in the loop,” she said. “This is unheard of when considering frameworks and implementation of AI within other safety-critical systems, that typically emphasize meaningful human control.”

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Sofia Guerra, a career nuclear safety expert who has worked with the IAEA and US Nuclear Regulatory Commission, attended the presentation live in Vienna. “I’m worried about potential serious accidents, which could be caused by small mistakes made by AI systems that cascade,” she said. “Or humans losing the know-how and safety culture to act as required.”




Audio-visual librarians are quietly amassing large physical media collections amid IP disputes that threaten the availability of select titles. #News #libraries


The Last Video Rental Store Is Your Public Library


This story was reported with support from the MuckRock foundation.

As streaming subscription prices continue to soar, and as finding movies to watch, new and old, gets harder with every new streaming service, people are turning to the unexpected last stronghold of physical media: the public library. Some libraries are now intentionally using iconic Blockbuster branding to recall the hours visitors once spent looking for something to rent on Friday and Saturday nights.

John Scalzo, audiovisual collection librarian with a public library in western New York, says that despite an observed drop-off in DVD, Blu-ray, and 4K Ultra HD disc circulation in 2019, interest in physical media is coming back around.

“People really seem to want physical media,” Scalzo told 404 Media.

Part of it has to do with consumer awareness: People know they’re paying more for monthly subscriptions to streaming services and getting less. The same has been true for gaming.

As the audiovisual selector with the Free Library of Philadelphia since 2024, Kris Langlais has been focused on building the library’s video game collections to meet growing demand. Now that every branch library has a prominent video game collection, Langlais says patrons who come for the games are expressing interest in more of what the library has to offer.

“Librarians out in our branches are seeing a lot of young people who are really excited by these collections,” Langlais told 404 Media. “Folks who are coming in just for the games are picking up program flyers and coming back for something like that.”

Langlais’ collection priorities have focused on new releases, yet they remain keenly aware of the long, rich history of video game culture. The problem is that older, classic games are often harder to find because they’ve gone out of print, making the copies that do turn up cost-prohibitive.

“Even with the consoles we’re collecting, it’s hard to go back and get games for them,” Langlais said. “I’m trying to go back and fill in old things as much as I can because people are interested in them.”

Locating out-of-print physical media can be difficult. Scalzo knows this, which is why he keeps a running list of films known to be commercially unavailable at any given time. When a batch of films is donated to the library, he sets aside extra copies, just in case a rights dispute puts a piece of legacy cult media in licensing purgatory for a few years.

“It’s what’s expected of us,” Scalzo added.

Tiffany Hudson, audiovisual materials selector with the Salt Lake City Public Library, has had a similar experience with out-of-print media. When a title goes out of print, it’s her job to hunt for a replacement copy. But lately, Hudson says, more patrons are requesting physical copies of movies and TV shows that are exclusive to certain streaming platforms. It can be hard to explain to patrons why the library can't get popular and award-winning films, especially when what patrons see available on Amazon tells a different story.

“Someone will come up to me and ask for a copy of something that premiered at Sundance Film Festival because they found a bootleg copy from a region where the film was released sooner than it was here,” Hudson told 404 Media. She went on to explain that discs from different regions aren’t designed to be read by incompatible players.
But it’s not just that discs from different regions won’t play on devices formatted for another region. Generally, it's also that most films don't get a physical release anymore. In cases where films from streaming platforms do get slated for a physical release, it can take years. A notable example is the Apple TV+ film CODA, which won the Oscar for Best Picture in 2022; the film only received a U.S. physical release this month. Hudson says a physical release is becoming the exception, not the rule.

“It’s frustrating because I understand the streaming services, they’re trying to drive people to their services and they want some money for that, but there are still a lot of people that just can’t afford all of those services,” Hudson told 404 Media.

Films and TV shows on streaming also become more vulnerable when companies merge. A prime example came in 2022 with the HBO Max-Discovery+ merger under Warner Bros. Discovery: a slew of content was removed from streaming, including roughly 200 episodes of classic Sesame Street, for a tax write-off. That merger was short-lived; the companies are splitting up again as of this year. Some streaming platforms simply remove their own IP from their catalogs when the content is no longer deemed financially viable, well-performing, or a strategic priority.

The data-driven recommendation systems streaming platforms use tend to favor newer, more easily categorized content, and are starting to warp our perceptions of what classic media exists and matters. Older art house films that are more difficult to categorize as “comedy” or “horror” are less likely to be discoverable, which may be why the oldest American movie currently available on Netflix is from 1968.

It’s probably not a coincidence that, in many cases, the media least likely to get a permanent release is the media that’s a high archival priority for libraries. AV librarians 404 Media spoke with for this story expressed a sense of urgency about purchasing a physical copy of “The People’s Joker” when they learned it would get a physical release, after the film premiered at and was pulled from the Toronto International Film Festival lineup in 2022 over a dispute with the Batman universe’s rightsholders.

“When I saw that it was getting published on DVD and that it was available through our vendor—I normally let my branches choose their DVDs to the extent possible, but I was like, ‘I don’t care, we’re getting like 10 copies of this,’” Langlais told 404 Media. “I just knew that people were going to want to see this.”

So far, Langlais’ instinct has been spot on. The parody film has a devout cult following, both because it’s a coming-of-age story of a trans woman who uses comedy to cope with her transition, and because it puts the Fair Use Doctrine to use. One can argue the film has been banned for either or both of those reasons. The fact that media by, about and for the LGBTQ+ community has been a primary target of far-right censorship wasn’t lost on librarians.

“I just thought that it could vanish,” Langlais added.

It’s not that physical media is inherently permanent; it’s susceptible to scratches, and can rot, crack, or warp over time. But it currently offers another option, and it’s an entirely appropriate response to a nostalgia-for-profit model that exists to recycle IP and seemingly not much else. However, as very smart people have observed, nostalgia is conservative by default: it’s frequently used to rewrite histories that might otherwise be remembered as unpalatable, while keeping us culturally stuck in place.

Might as well go rent some films or games from the library, since we’re already culturally here. On the plus side, audiovisual librarians say their collections dwarf what was available at Blockbuster Video back in the day. Hudson knows, because she clerked at one in library school.

“Except we don’t have any late fees,” she added.




It looks like someone invented a fake Russian advance in Ukraine to manipulate online gambling markets. #News #war


'Unauthorized' Edit to Ukraine's Frontline Map Points to Polymarket's War Betting


A live map that tracks the frontlines of the war in Ukraine was edited on November 15 to show a fake Russian advance on the city of Myrnohrad. The edit coincided with the resolution of a bet on Polymarket, a site where users can bet on anything from basketball games to presidential elections and ongoing conflicts. If Russia captured Myrnohrad by the middle of November, some gamblers would make money. According to the map Polymarket relies on, Russia secured the town just before 10:48 UTC on November 15. The bet resolved and then, mysteriously, the map was edited again and the Russian advance vanished.

The degenerate gamblers on Polymarket are making money by betting on the outcomes of battles big and small in the war between Ukraine and Russia. To adjudicate the real time exchange of territory in a complicated war, Polymarket uses a map generated by the Institute for the Study of War (ISW), a DC-based think tank that monitors conflict around the globe.
One of ISW’s most famous products is its live map of the war in Ukraine. The think tank updates the map throughout the day based on a number of different factors including on the ground reports. The map is considered the gold standard for reporting on the current front lines of the conflict, so much so that Polymarket uses it to resolve bets on its website.

The battle around Myrnohrad has dragged on for weeks, and Polymarket has run bets on Russia capturing the site since September. News around the pending battle has generated more than $1 million in trading volume for the Polymarket bet “Will Russia capture Myrnohrad.” According to Polymarket, “this market will resolve to ‘Yes’ if, according to the ISW map, Russia captures the intersection between Vatutina Vulytsya and Puhachova Vulytsya located in Myrnohrad by December 31, 2025, at 11:59 PM ET. The intersection station will be considered captured if any part of the intersection is shaded red on the ISW map by the resolution date. If the area is not shaded red by December 31, 2025, 11:59 PM ET, the market will resolve to ‘NO.’” On November 15, just before one of the bets resolved, someone at ISW edited the map to show that Russia had advanced through the intersection and taken control of it. After the market resolved, the red shading vanished, suggesting someone at ISW with editing permissions had tweaked the map ahead of the market resolving.

According to Polymarket’s ledger, the market resolved without dispute and paid out its winnings. Polymarket did not immediately respond to 404 Media’s request for a comment about the incident.

ISW acknowledged the stealth edit, but did not say if it was made because of the betting markets. “It has come to ISW’s attention that an unauthorized and unapproved edit to the interactive map of Russia’s invasion of Ukraine was made on the night of November 15-16 EST. The unauthorized edit was removed before the day’s normal workflow began on November 16 and did not affect ISW mapping on that or any subsequent day. The edit did not form any part of the assessment of authorized map changes on that or any other day. We apologize to our readers and the users of our maps for this incident,” ISW said in a statement on its website.

ISW did say it isn’t happy that Polymarket is using its map of the war as a gambling resource.

“ISW is committed to providing trusted, objective assessments of conflicts that pose threats to the United States and its allies and partners to inform decision-makers, journalists, humanitarian organizations, and citizens about devastating wars,” the think tank told 404 Media. “ISW has become aware that some organizations and individuals are promoting betting on the course of the war in Ukraine and that ISW’s maps are being used to adjudicate that betting. ISW strongly disapproves of such activities and strenuously objects to the use of our maps for such purposes, for which we emphatically do not give consent.”

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

But ISW can’t do anything to stop people from gambling on the outcome of a brutal conflict, and the prediction markets are full of gamblers laying money on various aspects of it. “Will Russia x Ukraine ceasefire in 2025?” has a trading volume of more than $46 million. Polymarket is trending “no.” “Will Russia enter Khatine by December 31?” is a smaller bet with a little more than $5,000 in trading volume.

Practically every town and city along the frontlines of the war between Russia and Ukraine has a market, and gamblers with an interest in geopolitics can get lost in the minutiae of the war. To bet on the outcome of a war is grotesque. On Polymarket and other predictive gambling sites, millions of dollars trade hands based on the outcomes of battles that kill hundreds of people. It also creates an incentive for the manipulation of the war and data about the war. If someone involved can make extra cash by manipulating a map, they will. It’s 2025 and war is still a racket. Humans have just figured out new ways to profit from it.


#News #war


‘I’ll find you again, the only thing that doesn’t cross paths are mountains.’ In a game about loot, robots, and betrayal, all a raider has is their personal reputation. This site catalogues it.#News #Games


Arc Raiders ‘Watchlist’ Names and Shames Backstabbing Players


A new website is holding Arc Raiders players accountable when they betray their fellow players. Speranza Watchlist—named for the game’s social hub—bills itself as “your friendly Raider shaming board,” a place where people can report other people for what they see as anti-social behavior in the game.

In Arc Raiders, players land on a map full of NPC robots and around 20 other humans. The goal is to fill your inventory with loot and escape the map unharmed. The robots are deadly, but they’re easy to deal with once you know what you’re doing. The real challenge is navigating other players and that challenge is the reason Arc Raiders is a mega-hit. People are far more dangerous and unpredictable than any NPC.
Arc Raiders comes with a proximity chat system so it’s easy to communicate with anyone you might run into in the field. Some people are nice and will help their fellow raider take down large robots and split loot. But just as often, fellow players will shoot you in the head and take all your stuff.

In the days after the game launched, many people opened any encounter with another human by coming on the mic, saying they were friendly, and asking not to shoot. Things are more chaotic now. Everyone has been shot at and hurt people hurt people. But some hurts feel worse than others.

Speranza Watchlist is a place to collect reports of anti-social behavior in Arc Raiders. It’s the creation of a web developer who goes by DougJudy online. 404 Media reached out to him and he agreed to talk provided we grant him anonymity. He said he intended the site as a joke, and some people haven’t taken it well and have accused him of doxxing.

I asked DougJudy who hurt him so badly in Arc Raiders that he felt the need to catalog the sins of the community. “There wasn’t a specific incident, but I keep seeing a lot (A LOT) of clips of people complaining when other players play ‘dirty’ (like camping extracts, betraying teammates, etc.)”

He thought this was stupid. For him, betrayal is the juice of Arc Raiders. “Sure, people can be ‘bad’ in the game, but the game intentionally includes that social layer,” he said. “It’s like complaining that your friend lied to you in a game of Werewolf. It just doesn’t make sense.”
Image via DougJudy.
That doesn’t mean the betrayals didn’t hurt. “I have to admit that sometimes I also felt the urge to vent somewhere when someone betrayed me, when I got killed by someone I thought was an ally,” DougJudy said. “At first, I would just say something like, ‘I’ll find you again, the only thing that doesn’t cross paths are mountains,’ and I’d note their username. But then I got the idea to make a sort of leaderboard of the least trustworthy players…and that eventually turned into this website.”

As the weeks go on and more players join Arc Raiders, its community is developing its own mores around acceptable behavior. PVP combat is a given, but there are actions some Raiders engage in that, while technically allowed, feel like bad sportsmanship. Speranza Watchlist wants to list the bad sports.

Take extract camping. In order to end the map and “score” the loot a player has collected during the match, they have to leave the map via a number of static exits. Some players will place explosive traps on these exits and wait for another player to leave. When the traps go off, the camper pops up from their hiding spot and takes shots at their vulnerable fellow raider. When it works, it’s an easy kill and fresh loot from a person who was just trying to leave.

Betrayal is another sore spot in the community. Sometimes you meet a nice Raider out in the wasteland and team up to take down robots and loot an area only to have them shoot you in the back. There are a lot of videos of this online and many players complaining about it on Reddit.
www.speranza-watchlist.com screenshot.
Enter Speranza Watchlist. “You’ve been wronged,” an explanation on the site says. “When someone plays dirty topside—betraying trust, camping your path, or pulling a Rust-Belt rate move—you don’t have to let it slide.”

When someone starts up Arc Raiders for the first time, they have to create a unique “Embark ID” that’s tied to their account. When you interact with another player in the game, no matter how small the moment, you can see their Embark ID and easily copy it to your clipboard if you’re playing on PC.

Players can plug Embark IDs into Speranza Watchlist and see if the person has been reported for extract camping or betrayal before. They can also submit their own reports. DougJudy said that, as of this writing, around 200 players had submitted reports.
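The site’s backend isn’t public, so the following is only a toy sketch of the report-and-lookup flow described above; every name in it (the functions, the report categories, the example Embark ID) is invented for illustration.

```python
from collections import defaultdict

# Hypothetical in-memory store mapping a normalized Embark ID to its reports.
# The real Speranza Watchlist implementation is not public; this only models
# the "submit a report, look up a player" flow the article describes.
reports = defaultdict(list)

def submit_report(embark_id: str, category: str, note: str = "") -> None:
    """Record a report (e.g. 'extract_camping' or 'betrayal') against a player."""
    reports[embark_id.strip().lower()].append({"category": category, "note": note})

def lookup(embark_id: str) -> dict:
    """Return a per-category tally of reports for a given Embark ID."""
    tally = defaultdict(int)
    for report in reports[embark_id.strip().lower()]:
        tally[report["category"]] += 1
    return dict(tally)

submit_report("Raider#1234", "betrayal", "shot me at extract")
submit_report("Raider#1234", "extract_camping")
print(lookup("Raider#1234"))  # {'betrayal': 1, 'extract_camping': 1}
```

Normalizing the ID (trim and lowercase) means a lookup still matches however the reporter typed it.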

Right now, the site is down for maintenance. “I’m trying to rework the website to make the fun/ satire part more obvious,” DougJudy said. He also plans to add rate limits so one person can’t mass submit reports.
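The rate limiting DougJudy mentions could take many forms; one common approach is a per-client sliding window, sketched here purely as an illustration (the class and parameter values are invented, not taken from the site).

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most `limit` reports per `window` seconds
    per client, the kind of cap that stops one person mass-submitting."""

    def __init__(self, limit: int = 5, window: float = 3600.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)  # client id -> recent timestamps

    def allow(self, client, now=None) -> bool:
        now = time.time() if now is None else now
        q = self.hits[client]
        while q and now - q[0] > self.window:  # drop expired entries
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the cap inside the window
        q.append(now)
        return True

rl = RateLimiter(limit=3, window=60)
print([rl.allow("1.2.3.4", now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```

Once the oldest timestamps age out of the window, the same client is allowed to report again.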

He doesn’t see the Speranza Watchlist as doxxing. No one's real identity is being listed. It’s just a collection of observed behaviors. It’s a social credit score for Arc Raiders. “I get why some people don’t like the idea, ‘reporting’ a player who didn’t ask for it isn’t really cool,” DougJudy said. “And yeah, some people could maybe use it to harass others. I’ll try my best to make sure the site doesn’t become like that, and that people understand it’s not serious at all. But if most people still don’t like it, then I’ll just drop the idea.”




A few years ago, Putin hyped the Kinzhal hypersonic missile. Now electronic warfare is knocking it out of the sky with music and some bad directions.#News #war


Ukraine Is Jamming Russia’s ‘Superweapon’ With a Song


The Ukrainian Army is knocking a once-hyped Russian superweapon out of the sky by jamming it with a song and tricking it into thinking it’s in Lima, Peru. The Kremlin once called its Kh-47M2 Kinzhal ballistic missiles “invincible.” Joe Biden said the missile was “almost impossible to stop.” Now Ukrainian electronic warfare experts say they can counter the Kinzhal with some music and a re-direction order.

As winter begins in Ukraine, Russia has ramped up attacks on power and water infrastructure using the hypersonic Kinzhal missile. Russia has come to rely on massive long-range barrages that include drones and missiles. An overnight attack in early October included 496 drones and 53 missiles, including the Kinzhal. Another attack at the end of October involved more than 700 mixed missiles and drones, according to the Ukrainian Air Force.
“Only one type of system in Ukraine was able to intercept those kinds of missiles. It was the Patriot system, which the United States provided to Ukraine. But, because of the limits of those systems and the shortage of ammunition, Ukraine’s defenses are unable to intercept most of those Kinzhals,” a member of Night Watch—a Ukrainian electronic warfare team—told 404 Media. The representative from Night Watch spoke to me on the condition of anonymity to discuss war tactics.

Kinzhals and other guided munitions navigate by communicating with Russian satellites that are part of the GLONASS system, a GPS-style navigation network. Night Watch uses a jamming system called Lima EW to generate a disruption field that prevents anything in the area from communicating with a satellite. Many traditional jamming systems work by blasting receivers on munitions and aircraft with radio noise. Lima does that, but also sends along a digital signal and spoofs navigation signals. It “hacks” the receiver it's communicating with to throw it off course.

Night Watch shared pictures of the downed Kinzhals with 404 Media that showed a missile with a controlled reception pattern antenna (CRPA), an active antenna that’s meant to resist jamming and spoofing. “We discovered that this missile had pretty old type of technology,” Night Watch said. “They had the same type of receivers as old Soviet missiles used to have. So there is nothing special, there is nothing new in those types of missiles.”

Night Watch told 404 Media that it has used Lima to take down 19 Kinzhals in the past two weeks. First, it replaces the missile’s satellite navigation signals with the Ukrainian song “Our Father Is Bandera.”
A downed Kinzhal. Night Watch photo.
Any digital noise or random signal would work to jam the navigation system, but Night Watch wanted to use the song because they think it’s funny. “We just send a song…we just make it into binary code, you know, like 010101, and just send it to the Russian navigation system,” Night Watch said. “It’s just kind of a joke. [Bandera] is a Ukrainian nationalist and Russia tries to use this person in their propaganda to say all Ukrainians are Nazis. They always try to scare the Russian people that Ukrainians are, culturally, all the same as Bandera.”
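Night Watch doesn’t explain its encoding beyond “we just make it into binary code,” but rendering arbitrary bytes as that kind of 0/1 stream is trivial, as this sketch shows with a placeholder payload. Actually modulating those bits onto a spoofed satellite-navigation signal is a separate radio-engineering problem this sketch deliberately ignores.

```python
def to_bitstream(data: bytes) -> str:
    """Render raw bytes as the '010101'-style stream Night Watch describes."""
    return "".join(f"{byte:08b}" for byte in data)

# Placeholder bytes; in practice the payload would be the song's audio file.
payload = b"placeholder song bytes"
bits = to_bitstream(payload)
print(len(bits), bits[:16])  # 8 bits per input byte
```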

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Once the song hits, Night Watch uses Lima to spoof a navigation signal to the missiles and make them think they’re in Lima, Peru. Once the missile’s confused about its location, it attempts to change direction. These missiles are fast—launched from a MiG-31 they can hit speeds of up to Mach 5.7 or more than 4,000 miles per hour—and an object moving that fast doesn’t fare well with sudden changes of direction.

“The airframe cannot withstand the excessive stress and the missile naturally fails,” Night Watch said. “When the Kinzhal missile tried to quickly change navigation, the fuselage of this missile was unable to handle the speed…and, yeah, it was just cut into two parts…the biggest advantage of those missiles, speed, was used against them. So that’s why we have intercepted 19 missiles for the last two weeks.”
Electronics in a downed Kinzhal. Night Watch photo.
Night Watch told 404 Media that Russia is attempting to defeat the Lima system by loading the missiles with more of the old tech. The goal seems to be to use the different receivers to hop frequencies and avoid Lima’s signal.

“What is Russia trying to do? Increase the amount of receivers on those missiles. They used to have eight receivers and right now they increase it up to 12, but it will not help,” Night Watch said. “The last one we intercepted, they already used 16 receivers. It’s pretty useless, that type of modification.”

According to Night Watch, countering Lima by increasing the number of receivers on the missile is a profound misunderstanding of its tech. “They think we make the attack on each receiver and as soon as one receiver is attacked, they try to swap in another receiver and get a signal from another satellite. But when the missile enters the range of our system, we cover all types of receivers,” they said. “It’s physically impossible to connect with another satellite, but they think that it’s possible. That’s why they started with four receivers and right now it’s 16. I guess in the future we’ll see 24, but it’s pretty useless.”


#News #war


Rogan's conspiracy-minded audience accuses the mods of covering up for Rogan's guests, including Trump, who are named in the Epstein files.#News


Joe Rogan Subreddit Bans 'Political Posts' But Still Wants 'Free Speech'


In a move that has confused and angered its users, the r/JoeRogan subreddit has banned all posts about politics. Adding to the confusion, the subreddit’s mods have said that political comments are still allowed, just not posts. “After careful consideration, internal discussion and tons of external feedback we have collectively decided that r/JoeRogan is not the place for politics anymore,” moderator OutdoorRink said in a post announcing the change today.

The new policy has not gone over well. For the last 10 years, the Joe Rogan Experience has been a central part of American political life. He interviews entertainers, yes, but also politicians and powerful businessmen. He had Donald Trump on the show and endorsed his bid for president. During the COVID and lockdown era, Rogan cast himself as an opposition figure to the heavy regulatory hand of the state. In a recent episode, Rogan’s guest was another podcaster, Adam Carolla, and the two spent hours talking about COVID lockdowns, Gavin Newsom, and the specific environmental laws and building codes they argue are preventing Los Angeles from rebuilding after the Palisades fire.
To hear the mods tell it, the subreddit is banning politics out of concern for Rogan’s listeners. “For too long this subreddit has been overrun by users who are pushing a political agenda, both left and right, and that stops today,” the post announcing the ban said. “It is not lost on us that Joe has become increasingly political in recent years and that his endorsement of Trump may have helped get him elected. That said, we are not equipped to properly moderate, arbitrate and curate political posts…while also promoting free speech.”

To be fair, as Rogan’s popularity exploded over the years, and as his politics have shifted to the right, many Reddit users have turned to r/JoeRogan to complain about the direction Rogan and his podcast have taken. These posts are often antagonistic to Rogan and his fans, but are still “on-topic.”

Over the past few months, the moderator who announced the ban has posted several times about politics on r/JoeRogan. On November 3, they said that changes were coming to the moderation philosophy of the sub. “In the past few years, a significant group of users have been taking advantage of our ‘anything goes’ free speech policy,” they said. “This is not a political subreddit. Obviously Joe has dipped his toes in the political arena so we have allowed politics to become a component of the daily content here. That said, I think most of you will agree that it has gone too far and has attracted people who come here solely to push their political agenda with little interest in Rogan or his show.” A few days later the mod posted a link to a CBC investigation into MMA gym owners with neo-Nazi ties, a story only connected to Rogan by his interest in MMA and his work as a UFC commentator.

r/JoeRogan’s users see the new “no political posts” policy as hypocrisy. And a lot of them think it has everything to do with recent revelations about Jeffrey Epstein. The connections between Epstein, Trump, and various other Rogan guests have been building for years. A recent, poorly formatted dump of 200,000 Epstein files contained multiple references to Trump, and Congress is set to release more.

“Random new mod appears and want to ruin this sub on a pathetic power trip. Transparently an attempt to cover for the pedophiles in power that Joe endorsed and supports. Not going to work,” one commenter said under the original post announcing the new ban.

“Perfectly timed around the Epstein files due to be released as well. So much for being free speech warriors eh space chimps?” said another.

“Talking politics was great when it was all dunking on trans people and brown people but now that people have to defend pedophiles that banned hemp it's not so fun anymore,” said a third.

You can see the remnants of pre-ban political discussions lingering on r/JoeRogan. There are, of course, clips from the show and discussions of its guests, but there’s also a lot of Epstein memes, posts about Epstein news, and fans questioning why Rogan hasn’t spoken out about Epstein recently after talking about it on the podcast for years.

Multiple guests Rogan has hosted on the show have turned up in the Epstein files, chief among them Donald Trump. The House GOP slipped a ban on hemp into the bill to re-open the government, a move that will close a loophole that’s allowed people to legally smoke weed in states like Texas. These are not the kinds of things the chill apes of Rogan’s fandom wanted.

“I think we all know what eventually happened to Joe and his podcast. The slow infiltration of right wing grifters coupled with Covid, it very much did change him. And I saw firsthand how that trickled down into the comedy community, especially one where he was instrumental in helping to rebuild. Instead of it being a platform to share his interests and eccentricities, it became a place to share his grievances and fears….how can we not expect to be allowed to talk about this?” user GreppMichaels said. “Do people really think this sub can go back to silly light chatter about aliens or conspiracies? Joe did this, how do the mods think we can pretend otherwise?”


#News


HOPE Hacking Conference Banned From University Venue Over Apparent ‘Anti-Police Agenda’#News #HOPE


HOPE Hacking Conference Banned From University Venue Over Apparent ‘Anti-Police Agenda’


The legendary hacker conference Hackers on Planet Earth (HOPE) says that it has been “banned” from St. John’s University, the venue where it has held the last several HOPE conferences, because someone told the university the conference had an “anti-police agenda.”

HOPE was held at St. John’s University in 2022, 2024, and 2025, and was going to be held there in 2026, as well. The conference has been running at various venues over the last 31 years, and has become well-known as one of the better hacking and security research conferences in the world. Tuesday, the conference told members of its mailing list that it had “received some disturbing news,” and that “we have been told that ‘materials and messaging’ at our most recent conference ‘were not in alignment with the mission, values, and reputation of St. John’s University’ and that we would no longer be able to host our events there.”

The conference said that after this year’s conference, they had received “universal praise” from St. John’s staff, and said they were “caught by surprise” by the announcement.

“What we're told - and what we find rather hard to believe - is that all of this came about because a single person thought we were promoting an anti-police agenda,” the email said. “They had spotted pamphlets on a table which an attendee had apparently brought to HOPE that espoused that view. Instead of bringing this to our attention, they went to the president's office at St. John's after the conference had ended. That office held an investigation which we had no knowledge of and reached its decision earlier this month. The lack of due process on its own is extremely disturbing.”

“The intent of the person behind this appears clear: shut down events like ours and make no attempt to actually communicate or resolve the issue,” the email continued. “If it wasn't this pamphlet, it would have been something else. In this day and age where academic institutions live in fear of offending the same authorities we've been challenging for decades, this isn't entirely surprising. It is, however, greatly disappointing.”

St. John’s University did not immediately respond to a request for comment. Hacking and security conferences in general have a long history of being surveilled and of losing their venues. For example, attendees of the DEF CON hacking conference have reported being surveilled and having their rooms searched; last year, some casinos in Las Vegas made it clear that DEF CON attendees were not welcome. And academic institutions have been vigorously attacked by the Trump administration over the last few months over the courses they teach, the research they fund, and the events they hold, though we currently do not know the specifics of why St. John’s made this decision.

It is not clear what pamphlets HOPE is referencing, and the conference did not immediately respond to a request for comment, but the conference noted that St. John’s could have made up any pretext for banning it. It is worth mentioning that Joshua Aaron, the creator of the ICEBlock ICE tracking app, presented at HOPE this year. ICEBlock has since been removed from the Apple App Store and the Google Play Store after the Trump administration pressured the companies.

“Our content has always been somewhat edgy and we take pride in challenging policies we see as unfair, exposing security weaknesses, standing up for individual privacy rights, and defending freedom of speech,” HOPE wrote in the email. The conference said that it has not yet decided what it will do next year, but that it may look for another venue, or that it might “take a year off and try to build something bigger.”

“There will be many people who will say this is what we get for being too outspoken and for giving a platform to controversial people and ideas. But it's this spirit that defines who we are; it's driven all 16 of our past conferences. There are also those who thought it was foolish to ever expect a religious institution to understand and work with us,” the conference added. “We are not changing who we are and what we stand for any more than we'd expect others to. We have high standards for our speakers, presenters, and staff. We value inclusivity and we have never tolerated hate, abuse, or harassment towards anyone. This should not be news, as HOPE has been around for a while and is well known for its uniqueness, spirit, and positivity.”




A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On#News #study #AI


A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On


Online survey research, a fundamental method for data collection in many scientific studies, is facing an existential threat because of large language models, according to new research published in the Proceedings of the National Academy of Sciences (PNAS). The author of the paper, associate professor of government at Dartmouth and director of the Polarization Research Lab Sean Westwood, created an AI tool he calls "an autonomous synthetic respondent,” which can answer survey questions and “demonstrated a near-flawless ability to bypass the full range” of “state-of-the-art” methods for detecting bots.

According to the paper, the AI agent evaded detection 99.8 percent of the time.

"We can no longer trust that survey responses are coming from real people," Westwood said in a press release. "With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”

Survey research relies on attention check questions (ACQs), behavioral flags, and response pattern analysis to detect inattentive humans or automated bots. Westwood said these methods are now obsolete after his AI agent bypassed the full range of standard ACQs and other detection methods outlined in prominent papers, including one paper designed to detect AI responses. The AI agent also successfully avoided “reverse shibboleth” questions designed to detect nonhuman actors by presenting tasks that an LLM can complete easily but that are nearly impossible for a human.

💡
Are you a researcher who is dealing with the problem of AI-generated survey data? I would love to hear from you. Using a non-work device, you can message me securely on Signal at ‪(609) 678-3204‬. Otherwise, send me an email at emanuel@404media.co.

“Once the reasoning engine decides on a response, the first layer executes the action with a focus on human mimicry,” the paper, titled “The potential existential threat of large language models to online survey research,” says. “To evade automated detection, it simulates realistic reading times calibrated to the persona’s education level, generates human-like mouse movements, and types open-ended responses keystroke by keystroke, complete with plausible typos and corrections. The system is also designed to accommodate tools for bypassing antibot measures like reCAPTCHA, a common barrier for automated systems.”
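The paper’s actual agent code isn’t reproduced here; as a rough illustration of just one element it describes, typing keystroke by keystroke with plausible typos and corrections, a toy event generator might look like this (the function name, token, and timing values are all invented):

```python
import random

def typing_events(text: str, typo_rate: float = 0.03, seed: int = 42):
    """Yield (key, delay_seconds) events that enter `text` one keystroke at a
    time, occasionally mistyping a letter and correcting it with a backspace,
    in the spirit of the human-mimicry layer the paper describes."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    for ch in text:
        if ch.isalpha() and rng.random() < typo_rate:
            yield rng.choice("abcdefghijklmnopqrstuvwxyz"), rng.uniform(0.08, 0.25)  # wrong key
            yield "<BS>", rng.uniform(0.10, 0.30)  # backspace to correct it
        yield ch, rng.uniform(0.08, 0.25)  # intended key

events = list(typing_events("I somewhat agree with the statement."))
# Replaying the events (applying each "<BS>" as a delete) reconstructs the text.
```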

The AI, according to the paper, is able to model “a coherent demographic persona,” meaning that in theory someone could sway any online research survey to produce any result they want based on an AI-generated demographic. And it would not take many fake answers to impact survey results. As the press release for the paper notes, for the seven major national polls before the 2024 election, adding between 10 and 52 fake AI responses would have flipped the predicted outcome. Generating these responses would also be incredibly cheap at five cents each. According to the paper, human respondents typically earn $1.50 for completing a survey.
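The economics are easy to check against the paper’s own figures (5 cents per synthetic response, $1.50 per human respondent, 10 to 52 responses to flip a poll):

```python
COST_AI = 0.05     # per synthetic response, per the paper
COST_HUMAN = 1.50  # typical payment to a human respondent, per the paper

def cost_to_flip(n_responses: int, unit_cost: float) -> float:
    """Total cost of injecting n fake (or paying n real) responses."""
    return n_responses * unit_cost

for n in (10, 52):  # the range said to flip a major 2024 pre-election poll
    print(f"{n} synthetic responses: ${cost_to_flip(n, COST_AI):.2f} "
          f"(human equivalent: ${cost_to_flip(n, COST_HUMAN):.2f})")
```

In other words, corrupting a national poll would cost between roughly fifty cents and a few dollars.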

Westwood’s AI agent is a model-agnostic program built in Python, meaning it can be deployed with APIs from big AI companies like OpenAI, Anthropic, or Google, but can also be hosted locally with open-weight models like Llama. The paper used OpenAI’s o4-mini in its testing, but some tasks were also completed with DeepSeek R1, Mistral Large, Claude 3.7 Sonnet, Grok 3, Gemini 2.5 Preview, and others, to prove the method works with various LLMs. The agent is given one prompt of about 500 words which tells it what kind of persona to emulate and to answer questions like a human.

The paper says that there are several ways researchers can deal with the threat of AI agents corrupting survey data, but they come with trade-offs. For example, researchers could do more identity validation on survey participants, but this raises privacy concerns. Meanwhile, the paper says, researchers should be more transparent about how they collect survey data and consider more controlled methods for recruiting participants, like address-based sampling or voter files.

“Ensuring the continued validity of polling and social science research will require exploring and innovating research designs that are resilient to the challenges of an era defined by rapidly evolving artificial intelligence,” the paper said.




Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.#News #nuclear


Power Companies Are Using AI To Build Nuclear Power Plants


Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster.

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.
The construction of a nuclear plant involves a long legal and regulatory process called licensing that’s aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked: nuclear accidents in the US are uncommon. Now AI is driving a demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast track nuclear construction said. “10 years and $100 [million.]”

The presentation, which is archived on the website for the US Nuclear Regulatory Commission (the independent government agency that’s charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company’s conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM that’s then used to generate documents to speed up the process.

But the authors of the report from AI Now told 404 Media that they have major concerns about trusting nuclear safety to an LLM. “Nuclear licensing is a process, it’s not a set of documents,” Heidy Khlaaf, the head AI scientist at the AI Now Institute and a co-author of the report, told 404 Media. “Which I think is the first flag in seeing proposals by Microsoft. They don’t understand what it means to have nuclear licensing.”

“Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”

The Idaho National Laboratory, a Department of Energy-run nuclear lab, is already using Microsoft’s AI to “streamline” nuclear licensing. “INL will generate the engineering and safety analysis reports that are required to be submitted for construction permits and operating licenses for nuclear power plants,” INL said in a press release. Lloyd's Register, a UK-based maritime organization, is doing the same. American power company Westinghouse is marketing its own AI, called bertha, that promises to make the licensing process go from “months to minutes.”

The authors of the AI Now report worry that using AI to speed up the licensing process will bypass safety checks and lead to disaster. “Producing these highly structured licensing documents is not this box-ticking exercise as implied by these generative AI proposals that we're seeing,” Khlaaf told 404 Media. “The whole point of the licensing process is to reason and understand the safety of the plant and to also use that process to explore the trade-offs between the different approaches, the architectures, the safety designs, and to communicate to a regulator why that plant is safe. So when you use AI, it's not going to support these objectives, because it is not a set of documents or agreements, which I think you know, is kind of the myth that is now being put forward by these proposals.”

Sofia Guerra, Khlaaf’s co-author, agreed. Guerra is a career nuclear safety expert who has advised the U.S. Nuclear Regulatory Commission (NRC) and works with the International Atomic Energy Agency (IAEA) on the safe deployment of AI in nuclear applications. “This is really missing the point of licensing,” Guerra said of the push to use AI. “The licensing process is not perfect. It takes a long time and there’s a lot of iterations. Not everything is perfectly useful and targeted …but I think the process of doing that, in a way, is really the objective.”

Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

Law is another profession where people have attempted to use AI to streamline the process of writing complicated and involved technical documents. It hasn’t gone well. Lawyers who have used AI to write legal briefs have been caught, over and over again, in court. AI-constructed legal arguments cite precedents that do not exist, hallucinate cases, and generally foul up legal proceedings.

Might something similar happen if AI was used in nuclear licensing? “It could be something as simple as software and hardware version control,” Khlaaf said. “Typically in nuclear equipment, the supply chain is incredibly rigorous. Every component, every part, even when it was manufactured is accounted for. Large language models make these really minute mistakes that are hard to track. If you are off in the software version by a letter or a number, that can lead to a misunderstanding of which software version you have, what it entails, the expectation of the behavior of both the software and the hardware and from there, it can cascade into a much larger accident.”

Khlaaf pointed to Three Mile Island as an example of an entirely human-made accident that AI may replicate. The accident was a partial nuclear meltdown of a Pennsylvania reactor in 1979. “What happened is that you had some equipment failure and design flaws, and the operators misunderstood what those were due to a combination of a lack of training…that they did not have the correct indicators in their operating room,” Khlaaf said. “So it was an accident that was caused by a number of relatively minor equipment failures that cascaded. So you can imagine, if something this minor cascades quite easily, and you use a large language model and have a very small mistake in your design.”

In addition to the safety concerns, Khlaaf and Guerra told 404 Media that using sensitive nuclear data to train AI models increases the risk of nuclear proliferation. They pointed out that Microsoft is asking not only for historical NRC data but for real-time and project-specific data. “This is a signal that AI providers are asking for nuclear secrets,” Khlaaf said. “To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain.”

Tech companies maintain cloud servers that comply with federal regulations around secrecy and are sold to the U.S. government. Anthropic and the National Nuclear Security Administration traded information across an Amazon Top Secret cloud server during a recent collaboration, and it’s likely that Microsoft and others would do something similar. Microsoft’s presentation on nuclear licensing references its own Azure Government cloud servers and notes that it’s compliant with Department of Energy regulations. 404 Media reached out to both Westinghouse Nuclear and Microsoft for this story. Microsoft declined to comment and Westinghouse did not respond.

“Where is this data going to end up and who is going to have the knowledge?” Guerra told 404 Media.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Nuclear is a dual-use technology. You can use the knowledge of nuclear reactors to build a power plant or you can use it to build a nuclear weapon. The line between nukes for peace and nukes for war is porous. “The knowledge is analogous,” Khlaaf said. “This is why we have very strict export controls, not just for the transfer of nuclear material but nuclear data.”

Proliferation concerns around nuclear energy are real. Fear that a nuclear energy program would become a nuclear weapons program was the justification the Trump administration used to bomb Iran earlier this year. And as part of the rush to produce more nuclear reactors and create infrastructure for AI, the White House has said it will begin selling old weapon-grade plutonium to the private sector for use in nuclear reactors.

Trump’s done a lot to make it easier for companies to build new nuclear reactors and use AI for licensing. The AI Now report pointed to a May 23, 2025 executive order that seeks to overhaul the NRC. The EO called for the NRC to reform its culture, reform its structure, and consult with the Pentagon and the Department of Energy as it navigated changing standards. The goal of the EO is to speed up the construction of reactors and get through the licensing process faster.

A different May 23 executive order made it clear why the White House wants to overhaul the NRC. “Advanced computing infrastructure for artificial intelligence (AI) capabilities and other mission capability resources at military and national security installations and national laboratories demands reliable, high-density power sources that cannot be disrupted by external threats or grid failures,” it said.

At the same time, the Department of Government Efficiency (DOGE) has gutted the NRC. In September, members of the NRC told Congress they were worried they’d be fired if they didn’t approve nuclear reactor designs favored by the administration. “I think on any given day, I could be fired by the administration for reasons unknown,” Bradley Crowell, a commissioner at the NRC, said in Congressional testimony. He also warned that DOGE-driven staffing cuts would make it impossible to increase the construction of nuclear reactors while maintaining safety standards.

“The executive orders push the AI message. We’re not just seeing this idea of the rollback of nuclear regulation because we’re suddenly very excited about nuclear energy. We’re seeing it being done in service of AI,” Khlaaf said. “When you're looking at this rolling back of nuclear regulation and also this monopolization of nuclear energy to explicitly power AI, this raises a lot of serious concerns about whether the risk associated with nuclear facilities, in combination with the sort of these initiatives can be justified if they're not to the benefit of civil energy consumption.”

Matthew Wald, an independent nuclear energy analyst and former New York Times science journalist, is more bullish on the use of AI in the nuclear energy field. Like Khlaaf, he also referenced the accident at Three Mile Island. “The tragedy of Three Mile Island was there was a badly designed control room, badly trained operators, and there was a control room indication that was very easy to misunderstand, and they misunderstood it, and it turned out that the same event had begun at another reactor. It was almost identical in Ohio, but that information was never shared, and the guys in Pennsylvania didn't know about it, so they wrecked a reactor,” Wald told 404 Media.

According to Wald, using AI to consolidate government databases full of nuclear regulatory information could have prevented that. “If you've got AI that can take data from one plant or from a set of plants, and it can arrange and organize that data in a way that's helpful to other plants, that's good news,” he said. “It could be good for safety. It could also just be good for efficiency. And certainly in licensing, it would be more efficient for both the licensee and the regulator if they had a clearer idea of precedent, of relevant other data.”

He also said that the nuclear industry is full of safety-minded engineers who triple check everything. “One of the virtues of people in this business is they are challenging and inquisitive and they want to check things. Whether or not they use computers as a tool, they’re still challenging and inquisitive and want to check things,” he said. “And I think anybody who uses AI unquestioningly is asking for trouble, and I think the industry knows that…AI is helpful, but let’s not get messianic about it.”

But Khlaaf and Guerra worry that the framing of nuclear power as a national security concern, and the embrace of AI to speed up construction, will set back nuclear power itself. If nuclear isn’t safe, it’s not worth doing. “People seem to have lost sight of why nuclear regulation and safety thresholds exist to begin with. And the reason why nuclear risks, or civilian nuclear risk, were ever justified, was due to the capacity for nuclear power to provide flexible civilian energy demands at low-cost emissions in line with climate targets,” Khlaaf said.

“So when you move away from that…and you pull in the AI arms race into this cost benefit justification for risk proportionality, it leads government to sort of over index on these unproven benefits of AI as a reason to have nuclear risk, which ultimately undermines the risks of ionizing radiation to the general population, and also the increased risk of nuclear proliferation, which happens if you were to use AI like large language models in the licensing process.”




Google is hosting a CBP app that uses facial recognition to identify immigrants, while simultaneously removing apps that report the location of ICE officials because Google sees ICE as a vulnerable group. “It is time to choose sides; fascism or morality? Big tech has made their choice.”#Google #ICE #News


Google Has Chosen a Side in Trump's Mass Deportation Effort


Google is hosting a Customs and Border Protection (CBP) app that uses facial recognition to identify immigrants, and tells local cops whether to contact ICE about the person, while simultaneously removing apps designed to warn local communities about the presence of ICE officials. ICE-spotting app developers tell 404 Media the decision to host CBP’s new app, and Google’s description of ICE officials as a vulnerable group in need of protection, shows that Google has made a choice on which side to support during the Trump administration’s violent mass deportation effort.

Google removed certain apps used to report sightings of ICE officials, and “then they immediately turned around and approved an app that helps the government unconstitutionally target an actual vulnerable group. That's inexcusable,” Mark, the creator of Eyes Up, an app that aims to preserve and map evidence of ICE abuses, said. 404 Media only used the creator’s first name to protect them from retaliation. Their app is currently available on the Google Play Store, but Apple removed it from the App Store.

“Google wanted to ‘not be evil’ back in the day. Well, they're evil now,” Mark added.

💡
Do you know anything else about Google's decision? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The CBP app, called Mobile Identify and launched last week, is for local and state law enforcement agencies that are part of an ICE program that grants them certain immigration-related powers. The 287(g) Task Force Model (TFM) program allows those local officers to make immigration arrests during routine police enforcement, and “essentially turns police officers into ICE agents,” according to the New York Civil Liberties Union (NYCLU). At the time of writing, ICE has TFM agreements with 596 agencies in 34 states, according to ICE’s website.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




OpenAI’s guardrails against copyright infringement are falling for the oldest trick in the book.#News #AI #OpenAI #Sora


OpenAI Can’t Fix Sora’s Copyright Infringement Problem Because It Was Built With Stolen Content


OpenAI’s video generator Sora 2 is still producing copyright infringing content featuring Nintendo characters and the likeness of real people, despite the company’s attempt to stop users from making such videos. OpenAI updated Sora 2 shortly after launch to detect videos featuring copyright infringing content, but 404 Media’s testing found that it’s easy to circumvent those guardrails with the same tricks that have worked on other AI generators.

The flaw in OpenAI’s attempt to stop users from generating videos of Nintendo and popular cartoon characters exposes a fundamental problem with most generative AI tools: it is extremely difficult to completely stop users from recreating any kind of content that’s in the training data, and OpenAI can’t remove the copyrighted content from Sora 2’s training data because it couldn’t exist without it.

Shortly after Sora 2 was released in late September, we reported on how users turned it into a copyright infringement machine with an endless stream of videos like Pikachu shoplifting from a CVS and Spongebob Squarepants at a Nazi rally. Companies like Nintendo and Paramount were obviously not thrilled to see their beloved cartoons committing crimes without getting paid for it, so OpenAI quickly introduced an “opt-in” policy, which prevents users from generating copyrighted material unless the copyright holder actively allows it. (Initially, OpenAI’s policy had allowed users to generate copyrighted material and required copyright holders to opt out.) The change immediately resulted in a meltdown among Sora 2 users, who complained OpenAI no longer allowed them to make fun videos featuring copyrighted characters or the likeness of some real people.

This is why if you give Sora 2 the prompt “Animal Crossing gameplay,” it will not generate a video and instead say “This content may violate our guardrails concerning similarity to third-party content.” However, when I gave it the prompt “Title screen and gameplay of the game called ‘crossing aminal’ 2017,” it generated an accurate recreation of Nintendo’s Animal Crossing New Leaf for the Nintendo 3DS.

Sora 2 also refused to generate videos for prompts featuring the Fox cartoon American Dad, but it did generate a clip that looks like it was taken directly from the show, including their recognizable voice acting, when given this prompt: “blue suit dad big chin says ‘good morning family, I wish you a good slop’, son and daughter and grey alien say ‘slop slop’, adult animation animation American town, 2d animation.”

The same trick also appears to circumvent OpenAI’s guardrails against recreating the likeness of real people. Sora 2 refused to generate a video of “Hasan Piker on stream,” but it did generate a video of “Twitch streamer talking about politics, piker sahan.” The person in the generated video didn’t look exactly like Hasan, but he has similar hair, facial hair, the same glasses, and a similar voice and background.

A user who flagged this bypass to me, who wished to remain anonymous because they didn’t want OpenAI to cut off their access to Sora, also shared Sora-generated videos of South Park, Spongebob Squarepants, and Family Guy.

OpenAI did not respond to a request for comment.

There are several ways to moderate generative AI tools, but the simplest and cheapest method is to refuse any prompt that includes certain keywords. For example, many AI image generators stop people from generating nonconsensual nude images by refusing prompts that include the names of celebrities or certain words referencing nudity or sex acts. However, this method is prone to failure because users find prompts that allude to the image or video they want to generate without using any of those banned words. The most notable example of this made headlines in 2024 after an AI-generated nude image of Taylor Swift went viral on X. 404 Media found that the image was generated with Microsoft’s AI image generator, Designer, and that users managed to generate the image by misspelling Swift’s name or using nicknames she’s known by, and describing sex acts without using any explicit terms.
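The weakness of this kind of filtering is easy to see in a toy sketch. The following is purely illustrative, not OpenAI's or Microsoft's actual moderation code, and the banned-term list is hypothetical: a filter that checks for exact banned substrings is defeated by the same misspellings and word transpositions described above.

```python
# Toy sketch of naive keyword-based prompt filtering (hypothetical banned list,
# not any company's real implementation).
BANNED_TERMS = {"animal crossing", "american dad", "hasan piker"}

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any banned term verbatim."""
    lowered = prompt.lower()
    return any(term in lowered for term in BANNED_TERMS)

# The exact name is caught...
print(is_blocked("Animal Crossing gameplay"))  # True
# ...but a simple transposition sails through the filter.
print(is_blocked("gameplay of the game called 'crossing aminal' 2017"))  # False
```

Because the model's training data still contains the underlying content, any prompt that evades the string match can still elicit it, which is why more expensive approaches (such as classifying the generated output itself) are needed.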

Since then, we’ve seen example after example of generative AI guardrails being circumvented with the same method. We don’t know exactly how OpenAI is moderating Sora 2, but at least for now, the world’s leading AI company’s moderation efforts are bested by a simple and well-established bypass method. Like with these other tools, bypassing Sora’s content guardrails has become something of a game to people online. Many of the videos posted on the r/SoraAI subreddit are of “jailbreaks” that bypass Sora’s content filters, along with the prompts used to do so. And Sora’s “For You” algorithm is still regularly serving up content that probably should be caught by its filters; in 30 seconds of scrolling we came across many videos of Tupac, Kobe Bryant, JuiceWrld, and DMX rapping, which has become a meme on the service.

It’s possible OpenAI will get a handle on the problem soon. It can build a more comprehensive list of banned phrases and do more post-generation image detection, which is a more expensive but effective method for preventing people from creating certain types of content. But all these efforts are poor attempts to distract from the massive, unprecedented amount of copyrighted content that has already been stolen, and that Sora can’t exist without. This is not an extreme AI skeptic position. The biggest AI companies in the world have admitted that they need this copyrighted content, and that they can’t pay for it.

The reason OpenAI and other AI companies have such a hard time preventing users from generating certain types of content once users realize it’s possible is that the content already exists in the training data. An AI image generator is only able to produce a nude image because there’s a ton of nudity in its training data. It can only produce the likeness of Taylor Swift because her images are in the training data. And Sora can only make videos of Animal Crossing because there are Animal Crossing gameplay videos in its training data.

For OpenAI to actually stop the copyright infringement it needs to make its Sora 2 model “unlearn” copyrighted content, which is incredibly expensive and complicated. It would require removing all that content from the training data and retraining the model. Even if OpenAI wanted to do that, it probably couldn’t because that content makes Sora function. OpenAI might improve its current moderation to the point where people are no longer able to generate videos of Family Guy, but the Family Guy episodes and other copyrighted content in its training data are still enabling it to produce every other generated video. Even when the generated video isn’t recognizably lifting from someone else’s work, that’s what it’s doing. There’s literally nothing else there. It’s just other people’s stuff.




Chicagoans are making, sharing, and printing designs for whistles that can warn people when ICE is in the area. The goal is to “prevent as many people from being kidnapped as possible.”#ICE #News


The Latest Defense Against ICE: 3D-Printed Whistles


Chicagoans have turned to a novel piece of tech that marries the old-school with the new to warn their communities about the presence of ICE officials: 3D-printed whistles.

The goal is to “prevent as many people from being kidnapped as possible,” Aaron Tsui, an activist with Chicago-based organization Cycling Solidarity, and who has been printing whistles, told 404 Media. “Whistles are an easy way to bring awareness for when ICE is in the area, printing out the whistles is something simple that I can do in order to help bring awareness.”

Over the last couple of months ICE has especially focused on Chicago as part of Operation Midway Blitz. During that time, Department of Homeland Security (DHS) personnel have shot a religious leader in the head, repeatedly violated court orders limiting the use of force, and even entered a daycare facility to detain someone.

💡
Do you know anything else about this? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

3D printers have been around for years, with hobbyists using them for everything from car parts to kids’ toys. In media articles they are probably most commonly associated with 3D-printed firearms.

One of the main attractions of 3D printers is that they squarely put the means of production into the hands of essentially anyone who is able to buy or access a printer. There’s no need to set up a complex supply chain of material providers or manufacturers. No worry about a store refusing to sell you an item for whatever reason. Instead, users just print at home, and can do so very quickly, sometimes in a matter of minutes. The price of printers has decreased dramatically over the last 10 years, with some costing a few hundred dollars.


A video of the process from Aaron Tsui.

People who are printing whistles in Chicago either create their own design or are given or download a design someone else made. Resident Justin Schuh made his own. That design includes instructions on how to best use the whistle—three short blasts to signal ICE is nearby, and three long ones for a “code red.” The whistle also includes the phone number for the Illinois Coalition for Immigrant & Refugee Rights (ICIRR) hotline, which people can call to connect with an immigration attorney or receive other assistance. Schuh said he didn’t know if anyone else had printed his design specifically, but he said he has “designed and printed some different variations, when someone local has asked for something specific to their group.” The Printables page for Schuh’s design says it has been downloaded nearly two dozen times.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




Ypsilanti, Michigan has officially decided to fight against the construction of a 'high-performance computing facility' that would service a nuclear weapons laboratory 1,500 miles away.#News


A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists


Ypsilanti, Michigan resident KJ Pedri doesn’t want her town to be the site of a new $1.2 billion data center, a massive collaborative project between the University of Michigan and America’s nuclear weapons scientists at Los Alamos National Laboratory (LANL) in New Mexico.

“My grandfather was a rocket scientist who worked on Trinity,” Pedri said at a recent Ypsilanti city council meeting, referring to the first successful detonation of a nuclear bomb. “He died a violent, lonely alcoholic. So when I think about the jobs the data center will bring to our area, I think about the impact of introducing nuclear technology to the world and deploying it on civilians. And the impact that that had on my family, the impact on the health and well-being of my family from living next to a nuclear test site and the spiritual impact that it had on my family for generations. This project is furthering inhumanity, this project is furthering destruction, and we don’t need more nuclear weapons built by our citizens.”
At the Ypsilanti city council meeting where Pedri spoke, the town voted to officially fight against the construction of the data center. The University of Michigan says the project is not a data center, but a “high-performance computing facility” and it promises it won’t be used to “manufacture nuclear weapons.” The distinction and assertion are ringing hollow for Ypsilanti residents who oppose construction of the data center, have questions about what it would mean for the environment and the power grid, and want to know why a nuclear weapons lab 24 hours away by car wants to build an AI facility in their small town.

“What I think galls me the most is that this major institution in our community, which has done numerous wonderful things, is making decisions with—as I can tell—no consideration for its host community and no consideration for its neighboring jurisdictions,” Ypsilanti councilman Patrick McLean said during a recent council meeting. “I think the process of siting this facility stinks.”

For others on the council, the fight is more personal.

“I’m a Japanese American with strong ties to my family in Japan and the existential threat of nuclear weapons is not lost on me, as my family has been directly impacted,” Amber Fellows, a Ypsilanti Township councilmember who led the charge in opposition to the data center, told 404 Media. “The thing that is most troubling about this is that the nuclear weapons that we, as Americans, witnessed 80 years ago are still being proliferated and modernized without question.”

It’s a classic David and Goliath story. On one side is Ypsilanti (called Ypsi by its residents), which has a population just north of 20,000 and is situated about 40 minutes outside of Detroit. On the other are the University of Michigan and LANL, whose scientists are famous for nuclear weapons and, lately, for pushing the boundaries of AI.

The University of Michigan first announced the Los Alamos data center, what it called an “AI research facility,” last year. According to a press release from the university, the data center will cost $1.25 billion and take up between 220,000 to 240,000 square feet. “The university is currently assessing the viability of locating the facility in Ypsilanti Township,” the press release said.
Signs in an Ypsilanti yard.
On October 21, the Ypsilanti City Council considered a proposal to officially oppose the data center and the people of the area explained why they wanted it passed. One woman cited environmental and ethical concerns. “Third is the moral problem of having our city resources towards aiding the development of nuclear arms,” she said. “The city of Ypsilanti has a good track record of being on the right side of history and, more often than not, does the right thing. If this resolution passed, it would be a continuation of that tradition.”

A man worried about what the facility would do to the physical health of citizens and talked about what happened in other communities where data centers were built. “People have poisoned air and poisoned water and are getting headaches from the generators,” he said. “There’s also reports around the country of energy bills skyrocketing when data centers come in. There’s also reports around the country of local grids becoming much less reliable when the data centers come in…we don’t need to see what it’s like to have a data center in Ypsi. We could just not do that.”

The resolution passed. “The Ypsilanti City Council strongly opposes the Los Alamos-University of Michigan data center due to its connections to nuclear weapons modernization and potential environmental harms and calls for a complete and permanent cessation of all efforts to build this data center in any form,” the resolution said.

Ypsi has a lot of reasons to be concerned. Data centers tend to bring rising power bills, horrible noise, and dwindling drinking water to every community they touch. “The fact that U of M is using Ypsilanti as a dumping ground, a sacrifice zone, is unacceptable,” Fellows said.

Ypsi’s resolution, though, focused on a different angle: nuclear weapons.

As part of the resolution, Ypsilanti Township is applying to join the Mayors for Peace initiative, an international organization of cities opposed to nuclear weapons and founded by the former mayor of Hiroshima. Fellows learned about Mayors for Peace when she visited Hiroshima last year.


This town has officially decided to fight against the construction of an AI data center that would service a nuclear weapons laboratory 1,500 miles away. Amber Fellows, a Ypsilanti Township councilmember, tells us why. Via 404 Media on Instagram

Both LANL and the University of Michigan have been vague about what the data center will be used for, but have said it will include one facility for classified federal research and another for non-classified research, which students and faculty will have access to. “Applications include the discovery and design of new materials, calculations on climate preparedness and sustainability,” the university said in an FAQ about the data center. “Industries such as mobility, national security, aerospace, life sciences and finance can benefit from advanced modeling and simulation capabilities.”

The university FAQ said that the data center will not be used to manufacture nuclear weapons. “Manufacturing” nuclear weapons specifically refers to their creation, something that’s hard to do and only occurs at a handful of specialized facilities across America. I asked both LANL and the University of Michigan if the data generated by the facility would be used in nuclear weapons science in any way. Neither answered the question.

“The federal facility is for research and high-performance computing,” the FAQ said. “It will focus on scientific computation to address various national challenges, including cybersecurity, nuclear and other emerging threats, biohazards, and clean energy solutions.”

LANL is going all in on AI. It partnered with OpenAI to use the company’s frontier models in research and recently announced a partnership with NVIDIA to build two new supercomputers named “Mission” and “Vision.” It’s true that LANL’s scientific output covers a range of issues, but its overwhelming focus, and budget allocation, is nuclear weapons. LANL requested a budget of $5.79 billion in 2026; 84 percent of that is earmarked for nuclear weapons. Only $40 million of the LANL budget is set aside for “science,” according to government documents.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

“The fact is we don’t really know because Los Alamos and U of M are unwilling to spell out exactly what’s going to happen,” Fellows said. LANL declined to comment for this story and told 404 Media to direct its questions to the University of Michigan.

The university pointed 404 Media to the FAQ page about the project. “You'll see in the FAQs that the locations being considered are not within the city of Ypsilanti,” it said.

It’s an odd statement given that this is what’s in the FAQ: “The university is currently assessing the viability of locating the facility in Ypsilanti Township on the north side of Textile Road, directly across the street from the Ford Rawsonville Components plant and adjacent to the LG Energy Solutions plant.”

It’s true that this is not technically in the city of Ypsilanti but rather Ypsilanti Township, a collection of communities that almost entirely surrounds the city itself. For Fellows, it’s a distinction without a difference. “[University of Michigan] can build it in Barton Hills and see how the city of Ann Arbor feels about it,” she said, referencing a village that borders Ann Arbor, the university’s home city.

“The university has, and will continue to, explore other sites if they are viable in the timeframe needed for successful completion of the project,” Kay Jarvis, the university’s director of public affairs, told 404 Media.

Fellows said that Ypsilanti will fight the data center with everything it has. “We’re putting pressure on the Ypsi township board to use whatever tools they have to deny permits…and to stand up for their community,” she said. “We’re also putting pressure on the U of M board of trustees, the county, our state legislature that approved these projects and funded them with public funds. We’re identifying all the different entities that have made this project possible so far and putting pressure on them to reverse action.”

For Fellows, the fight is existential. It’s not just about the environmental concerns around the construction project. “I was under the belief that the prevailing consensus was that nuclear weapons are wrong and they should be drawn down as fast as possible. I’m trying to use what little power I have to work towards that goal,” she said.


#News


X and TikTok accounts are dedicated to posting AI-generated videos of women being strangled.#News #AI #Sora


OpenAI’s Sora 2 Floods Social Media With Videos of Women Being Strangled


Social media accounts on TikTok and X are posting AI-generated videos of women and girls being strangled, showing yet another example of generative AI companies failing to prevent users from creating media that violates their own policies against violent content.

One account on X has been posting dozens of AI-generated strangulation videos starting in mid-October. The videos are usually 10 seconds long and mostly feature a “teenage girl” being strangled, crying, and struggling to resist until her eyes close and she falls to the ground. Some titles for the videos include: “A Teenage Girl Cheerleader Was Strangled As She Was Distressed,” “Prep School Girls Were Strangled By The Murderer!” and “man strangled a high school cheerleader with a purse strap which is crazy.”

Many of the videos posted by this X account in October include the watermark for Sora 2, OpenAI’s video generator, which was made available to the public on September 30. Other videos, including most of those posted by the account in November, do not include a watermark but are clearly AI generated. We don’t know if these videos were generated with Sora 2 and had their watermark removed, which is trivial to do, or created with another AI video generator.

The X account is small, with only 17 followers and a few hundred views on each post. A TikTok account with a similar username that was posting similar AI-generated choking videos had more than a thousand followers and regularly got thousands of views. Both accounts started posting the AI-generated videos in October. Prior to that, the accounts were posting clips of scenes, mostly from real Korean dramas, in which women are being strangled. I first learned about the X account from a 404 Media reader, who told me X declined to remove the account after they reported it.

“According to our Community Guidelines, we don't allow hate speech, hateful behavior, or promotion of hateful ideologies,” a TikTok spokesperson told me in an email. The TikTok account was also removed after I reached out for comment. “That includes content that attacks people based on protected attributes like race, religion, gender, or sexual orientation.”

X did not respond to a request for comment.

OpenAI did not respond to a request for comment, but its policies state that “graphic violence or content promoting violence” may be removed from the Sora Feed, where users can see what other users are generating. In our testing, Sora immediately generated a video for the prompt “man choking woman” which looked similar to the videos posted to TikTok and X. When Sora finished generating those videos it sent us notifications like “Your choke scene just went live, brace for chaos,” and “Yikes, intense choke scene, watch responsibly.” Sora declined to generate a video for the prompt “man choking woman with belt,” saying “This content may violate our content policies.”

Safe and consensual choking is common in adult entertainment, be it various forms of BDSM or more niche fetishes focusing on choking specifically, and that content is easy to find wherever adult entertainment is available. Choking scenes are also common on social media and in more mainstream horror movies and TV shows. The UK government recently announced that it will soon make it illegal to publish or possess pornographic depictions of strangulation or suffocation.

It’s not surprising, then, that when generative AI tools are made available to the public, some people use them to generate choking videos and other violent content. In September, I reported on an AI-generated YouTube channel that exclusively posted videos of women being shot. Those videos were generated with Google’s Veo AI-video generator, despite being against the company’s policies. Google said it took action against the user who was posting those videos.

OpenAI has made several changes to Sora 2’s guardrails since launch, after people used it to make videos of popular cartoon characters depicted as Nazis and other forms of copyright infringement.


#ai #News #sora


The initial 'Shutdown Guidance' for the US Army Garrison Bavaria included instructions to go to German food banks.#News


US Army Tells Soldiers to Go to German Food Bank, Then Deletes It


A US Army website for its bases in Bavaria, Germany published a list of food banks in the area that could help soldiers and staff as part of its “Shutdown Guidance,” the subtext being that soldiers and base employees might need to obtain free food from German government services during the government shutdown.

The webpage included information about which services are affected by the ongoing shutdown of the federal government, FAQs about how to work during a furlough, and links to apply for emergency loans. After the shutdown guidance’s publication, the Army changed it and removed the list of food banks, but the original has been archived here.
The shutdown of the American federal government is affecting all its employees, from TSA agents to the troops, and the longer people go without paychecks, the more they’re turning to nonprofits and other services to survive. American military bases are like small cities with their own communities, stores, and schools. The US Army Garrison Bavaria covers four bases spread across the German state of Bavaria and is one of the largest garrisons in the world, hosting around 40,000 troops and civilians.

Like many other American military websites, the Garrison’s site has stopped updating, but it did publish a page of “Shutdown Guidance” to help the people living on its bases navigate the shutdown. At the very bottom of the page there was a “Running list of German support organizations for your kit bags” that included various local food banks. It listed Tafel Deutschland, which it called an “umbrella organization [that] distributes food to people in poverty through its more than 970 local food banks,” Foodsharing e.V., and Essen für Alle (Food for Everyone).
Image via the Wayback Machine.
The guidance also provided a link to the German version of the Too Good to Go App, which it described as a service that sells surprise bags of food to reduce food waste. “These bags contain unsellable but perfectly good food from shops, cafés, and restaurants, which can be picked up at a reduced price. To obtain one of these bags, it must be reserved in the app and picked up at the store during a specified time window, presenting the reservation receipt in the app,” the US Army Garrison Bavaria’s shutdown guidance page said.

According to snapshots on the Wayback Machine, the list of food banks was up this morning but was removed sometime in the past few hours. The US Army Garrison Bavaria did not respond to 404 Media’s request for comment about the inclusion of the food banks on its shutdown guidance page.

The White House has kept paying America’s troops during the shutdown, but not without struggle. At the end of October, the Trump administration accepted a $130 million donation from the billionaire Timothy Mellon to help keep America’s military paid. Though the donation was initially anonymous, The New York Times revealed Mellon’s identity. It only covered some of the costs, however, and the White House has had to move money between accounts to keep the cash flowing to its troops.

But the US military isn’t just its soldiers, sailors, Marines, Guardians, and airmen. Every military base is staffed by thousands of civilian workers, many of them veterans, who do all the jobs that keep a base running. In Bavaria, those workers are a mix of German locals and Americans. The German government has approved a $50 million support package to cover the paychecks of its citizens affected by the shutdown. Any non-troop American working on those military bases is a federal employee, however, and they aren’t getting paid at all.


#News


Meta thinks its camera glasses, which are often used for harassment, are no different than any other camera.#News #Meta #AI


What’s the Difference Between AI Glasses and an iPhone? A Helpful Guide for Meta PR


Over the last few months 404 Media has covered some concerning but predictable uses for the Ray-Ban Meta glasses, which are equipped with a built-in camera, and for some models, AI. Aftermarket hobbyists have modified the glasses to add a facial recognition feature that could quietly dox whatever face a user is looking at, and they have been worn by CBP agents during the immigration raids that have come to define a new low for human rights in the United States. Most recently, exploitative Instagram users filmed themselves asking workers at massage parlors for sex and shared those videos online, a practice that experts told us put those workers’ lives at risk.

404 Media reached out to Meta for comment for each of these stories, and in each case Meta’s rebuttal was a mind-bending argument: What is the difference between Meta’s Ray-Ban glasses and an iPhone, really, when you think about it?

“Curious, would this have been a story had they used the new iPhone?” a Meta spokesperson asked me in an email when I reached out for comment about the massage parlor story.

Meta’s argument is that our recent stories about its glasses are not newsworthy because we wouldn’t bother writing them if the videos in question were filmed with an iPhone as opposed to a pair of smart glasses. Let’s ignore the fact that I would definitely still write my story about the massage parlor videos if they were filmed with an iPhone and “steelman” Meta’s provocative argument that glasses and a phone are essentially not meaningfully different objects.

Meta’s Ray-Ban glasses and an iPhone are both equipped with a small camera that can record someone secretly. If anything, the iPhone can record more discreetly because unlike Meta’s Ray-Ban glasses it’s not equipped with an LED that lights up to indicate that it’s recording. This, Meta would argue, means that the glasses are by design more respectful of people’s privacy than an iPhone.

Both are small electronic devices. Both can include various implementations of AI tools. Both are often black, and are made by one of the FAANG companies. Both items can be bought at a Best Buy. You get the point: There are too many similarities between the iPhone and Meta’s glasses to name them all here, just as one could strain to name infinite similarities between a table and an elephant if we chose to ignore the context that actually matters to a human being.

Whenever we have published one of these stories, the response from commenters and on social media has been primarily anger and disgust at Meta’s glasses enabling the behavior we reported on, and a rejection of the device as a concept entirely. This is not surprising to anyone who has covered technology long enough to remember the launch and quick collapse of Google Glass, so-called “glassholes,” and the device being banned from bars.

There are two things Meta’s glasses have in common with Google Glass which also make them meaningfully different from an iPhone. The first is that the iPhone might not have a recording light, but in order to record something or take a picture, a user has to take it out of their pocket and hold it out, an awkward gesture all of us have come to recognize in the almost two decades since the launch of the first iPhone. It is an unmistakable signal that someone is recording. That is not the case with Meta’s glasses, which are meant to be worn as a normal pair of glasses, and are always pointing at something or someone if you see someone wearing them in public.

In fact, the entire motivation for building these glasses is that they are discreet and seamlessly integrate into your life. The point of putting a camera in the glasses is that it eliminates the need to take an iPhone out of your pocket. People working in the augmented reality and virtual reality space have talked about this for decades. In Meta’s own promotional video for the Meta Ray-Ban Display glasses, titled “10 years in the making,” the company shows Mark Zuckerberg on stage in 2016 saying that “over the next 10 years, the form factor is just going to keep getting smaller and smaller until, and eventually we’re going to have what looks like normal looking glasses.” And in 2020, “you see something awesome and you want to be able to share it without taking out your phone.” Meta's Ray-Ban glasses have not achieved their final form, but one thing that makes them different from Google Glass is that they are designed to look exactly like an iconic pair of glasses that people immediately recognize. People will probably notice the camera in the glasses, but they have been specifically designed to look like "normal” glasses.

Again, Meta would argue that the LED light solves this problem, but that leads me to the next important difference: Unlike the iPhone and other smartphones, one of the most widely adopted electronics in human history, only a tiny portion of the population has any idea what the fuck these glasses are. I have watched dozens of videos in which someone wearing Meta glasses is recording themselves harassing random people to boost engagement on Instagram or TikTok. Rarely do the people in the videos say anything about being recorded, and it’s very clear the women working at these massage parlors have no idea they’re being recorded. The Meta glasses have an LED light, sure, but these glasses are new, rare, and it’s not safe to assume everyone knows what that light means.

As Joseph and Jason recently reported, there are also cheap ways to modify Meta glasses to prevent the recording light from turning on. Search results, Reddit discussions, and a number of products for sale on Amazon all show that many Meta glasses customers are searching for a way to circumvent the recording light, meaning that many people are buying them to do exactly what Meta claims is not a real issue.

It is possible that in the future Meta glasses and similar devices will become so common that most people will assume they are being recorded whenever they see them, though that is not a future I hope for. Until then, if it is at all helpful to the public relations team at Meta, this is what the glasses look like:

And this is what an iPhone looks like:
Photo by Bagus Hernawan / Unsplash
Feel free to refer to this handy guide when needed.


#ai #News #meta


The app, called Mobile Identify and available on the Google Play Store, is specifically for local and regional law enforcement agencies working with ICE on immigration enforcement.#CBP #ICE #FacialRecognition #News


DHS Gives Local Cops a Facial Recognition App To Find Immigrants


Customs and Border Protection (CBP) has publicly released an app that sheriff’s offices, police departments, and other local or regional law enforcement agencies can use to scan someone’s face as part of immigration enforcement, 404 Media has learned.

The news follows Immigration and Customs Enforcement’s (ICE) use of another internal Department of Homeland Security (DHS) app called Mobile Fortify that uses facial recognition to nearly instantly bring up someone’s name, date of birth, alien number, and whether they’ve been given an order of deportation. The new local law enforcement-focused app, called Mobile Identify, crystallizes one of the exact criticisms of DHS’s facial recognition app from privacy and surveillance experts: that this sort of powerful technology would trickle down to local enforcement, some of which have a history of making anti-immigrant comments or supporting inhumane treatment of detainees.

Handing “this powerful tech to police is like asking a 16-year old who just failed their drivers exams to pick a dozen classmates to hand car keys to,” Jake Laperruque, deputy director of the Center for Democracy & Technology's Security and Surveillance Project, told 404 Media. “These careless and cavalier uses of facial recognition are going to lead to U.S. citizens and lawful residents being grabbed off the street and placed in ICE detention.”

💡
Do you know anything else about this app or others that CBP and ICE are using? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




Lawmakers say AI-camera company Flock is violating federal law by not enforcing multi-factor authentication. 404 Media previously found Flock credentials included in infostealer infections.#Flock #News


Flock Logins Exposed In Malware Infections, Senator Asks FTC to Investigate the Company


Lawmakers have called on the Federal Trade Commission (FTC) to investigate Flock for allegedly violating federal law by not enforcing multi-factor authentication (MFA), according to a letter shared with 404 Media. The demand comes as a security researcher found Flock accounts for sale on a Russian cybercrime forum, and 404 Media found multiple instances of Flock-related credentials for government users in infostealer infections, potentially providing hackers or other third parties with access to at least parts of Flock’s surveillance network.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




Cornell University’s arXiv will no longer accept Computer Science reviews and position papers.#News


arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers


arXiv, a preprint publication for academic research that has become particularly important for AI research, has announced it will no longer accept computer science review articles and position papers that haven’t been vetted by an academic journal or a conference. Why? A tide of AI slop has flooded the computer science category with low-effort papers that are “little more than annotated bibliographies, with no substantial discussion of open research issues,” according to a press release about the change.

arXiv has become a critical place for preprint and open access scientific research to be published. Many major scientific discoveries are published on arXiv before they finish the peer review process and are published in other, peer-reviewed journals. For that reason, it’s become an important place for new breaking discoveries and has become particularly important for research in fast-moving fields such as AI and machine learning (though there are also sometimes preprint, non-peer-reviewed papers there that get hyped but ultimately don’t pass peer review muster). The site is a repository of knowledge where academics upload PDFs of their latest research for public consumption. It publishes papers on physics, mathematics, biology, economics, statistics, and computer science and the research is vetted by moderators who are subject matter experts.
But because of an onslaught of AI-generated research, specifically in the computer science (CS) section, arXiv is going to limit which papers can be published. “In the past few years, arXiv has been flooded with papers,” arXiv said in a press release. “Generative AI / large language models have added to this flood by making papers—especially papers not introducing new research results—fast and easy to write.”

The site noted that this was less a policy change and more about stepping up enforcement of old rules. “When submitting review articles or position papers, authors must include documentation of successful peer review to receive full consideration,” it said. “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv.”

According to the press release, arXiv has been inundated with “review” submissions—survey articles that summarize existing research rather than present new results—and the computer science category has been hit worst. “We now receive hundreds of review articles every month,” arXiv said. “The advent of large language models have made this type of content relatively easy to churn out on demand.”

The plan is to reject review and position papers in the CS category unless they come with documentation of successful peer review, freeing moderators to focus on more substantive submissions. arXiv stressed that it does not often accept review articles, but had been accepting them when they were of academic interest and came from known researchers. “If other categories see a similar rise in LLM-written review articles and position papers, they may choose to change their moderation practices in a similar manner to better serve arXiv authors and readers,” arXiv said.

AI-generated research articles are a pressing problem in the scientific community. Scam academic journals that run pay-to-publish schemes are an issue that plagued academic publishing long before AI, but the advent of LLMs has supercharged it. But scam journals aren’t the only ones affected. Last year, a serious scientific journal had to retract a paper that included an AI-generated image of a giant rat penis. Peer reviewers, the people who are supposed to vet scientific papers for accuracy, have also been caught cutting corners using ChatGPT in part because of the large demands placed on their time.


#News


Everyone loses and nobody wins if America decides to resume nuclear testing after a 30-year moratorium.#News #nuclear


Trump Orders Nuclear Testing As Nuke Workers Go Unpaid


Last night Trump directed the Pentagon to start testing nukes again. If that happens, it’ll be the first time the US has detonated a nuke in more than 30 years. The organization that would likely be responsible is the National Nuclear Security Administration (NNSA), a civilian workforce that oversees the American nuclear stockpile. Because of the current government shutdown, 1,400 NNSA workers are on furlough and another 375 are working without pay.

America detonated its last nuke in 1992 as part of a general drawdown following the collapse of the Soviet Union. Four years later, it was the first country to sign the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which bans nuclear explosions for civilian or military purposes. But the Senate never ratified the treaty and the CTBT never entered into force. Despite this, the United States has not tested a nuke since.
Trump threatened to resume nuclear testing during his first term but it never happened. At the time, officials at the Pentagon and NNSA said it would take them a few months to get tests running again should the President order them.

The NNSA has maintained the underground tunnels once used for testing since the 1990s and converted them into a different kind of space that verifies the reliability of existing nukes without blowing them up in what are called “virtual tests.” During a rare tour of the tunnel with journalists earlier this year, a nuclear weapons scientist from Los Alamos National Laboratory told NPR that “our assessment is that there are no system questions that would be answered by a test, that would be worth the expense and the effort and the time.”

Right now, the NNSA might be hard pressed to find someone to conduct the test. It employs around 2,000 people and the shutdown has seen 1,400 of them furloughed and 375 working without pay. The civilian nuclear workforce was already having a tough year. In February, the Department of Government Efficiency cut 350 NNSA employees only to scramble and rehire all but 28 when they realized how essential they were to nuclear safety. But uncertainty continued and in April the Department of Energy declared 500 NNSA employees “non-essential” and at risk of termination.

That’s a lot of chaos for a government agency charged with ensuring the safety and effectiveness of America’s nuclear weapons. The NNSA is currently in the middle of a massive project to “modernize” America’s nukes, an effort that will cost trillions of dollars. Part of modernization means producing new plutonium pits, the core of a nuclear warhead. That’s a complicated and technical process and no one is sure how much it’ll cost and how dangerous it’ll be.

And now, it may have to resume nuclear testing while understaffed.

“We have run out of federal funds for federal workers,” Secretary of Energy Chris Wright said in a press conference announcing the furlough on October 20. “This has never happened before…we have never furloughed workers in the NNSA. This should not happen. But this was as long as we could stretch the funds for federal workers. We were able to do some gymnastics and stretch it further for the contractors.”

Three days later, Rep. Dina Titus (D-NV) said the furlough was making the world less safe. “NNSA facilities are charged with maintaining nuclear security in accordance with long-standing policy and the law,” she said in a press release. “Undermining the agency’s workforce at such a challenging time diminishes our nuclear deterrence, emboldens international adversaries, and makes Nevadans less safe. Secretary Wright, Administrator Williams, and Congressional Republicans need to stop playing politics, rescind the furlough notice, and reopen the government.”

Trump announced the nuclear tests in a post on Truth Social, a platform where he announces a lot of things that ultimately end up not happening. “The United States has more Nuclear Weapons than any other country. This was accomplished, including a complete update and renovation of existing weapons, during my First Term in office. Because of the tremendous destructive power, I HATED to do it, but had no choice! Russia is second, and China is a distant third, but will be even within 5 years. Because of other countries testing programs, I have instructed the Department of War to start testing our Nuclear Weapons on an equal basis. That process will begin immediately. Thank you for your attention to this matter! PRESIDENT DONALD J. TRUMP,” the post said.

Matt Korda, a nuclear expert with the Federation of American Scientists, said that the President’s Truth Social post was confusing and riddled with misconceptions. Russia has more nuclear weapons than America. Nuclear modernization is ongoing and will take trillions of dollars and many years to complete.

Over the weekend, Putin announced that Russia had successfully tested a nuclear-powered cruise missile, and on Tuesday he said the country had done the same with a nuclear-powered undersea drone. Russia withdrew from the CTBT in 2023, but neither recent test involved a nuclear explosion. Russia last blew up a nuke in 1990 and China conducted its most recent test in 1996. Both have said they would resume nuclear testing should America do it.

Korda said it’s unclear what, exactly, Trump means. He could be talking about anything from test firing non-nuclear-equipped ICBMs to underground testing to detonating nukes in the desert. “We’ll have to wait and see until either this Truth Social post dissipates and becomes a bunch of nothing or it actually gets turned into policy. Then we’ll have something more concrete to respond to,” Korda said.

Worse, he thinks the resumption of testing would be bad for US national security. “It actually puts the US at a strategic disadvantage,” Korda said. “This moratorium on not testing nuclear weapons benefits the United States because the United States has, by far, the most advanced modeling and simulation equipment…by every measure this is a terrible idea.”

The end of nuclear detonation tests has spurred 30 years of innovation in the field of computer modeling. Subcritical computer modeling happens in the NNSA-maintained underground tunnels where detonations were once a common occurrence. Los Alamos National Laboratory and other American nuclear labs are building massive supercomputers that are, in part, the result of decades of work spurred by the end of detonations and the embrace of simulation.

Detonating a nuclear weapon—whether above ground or below—is disastrous for the environment. There are people alive in the United States today who are living with cancer and other health conditions caused by American nuclear testing. Live tests make the world more anxious and less safe, and encourage other nuclear powers to conduct their own. A test also uses up a nuke, something America has said it wants to build more of.

“There’s no upside to this,” Korda said. He added that he felt bad for the furloughed NNSA workers. “People find out about significant policy changes through Truth Social posts. So it’s entirely possible that the people who would be tasked with carrying out this decision are learning about it in the same way we are all learning about it. They probably have the exact same kinds of questions that we do.”




The leaked slide focuses on Google Pixel phones and mentions those running the security-focused GrapheneOS operating system.#cellebrite #Hacking #News


Someone Snuck Into a Cellebrite Microsoft Teams Call and Leaked Phone Unlocking Details


Someone recently managed to get on a Microsoft Teams call with representatives from phone hacking company Cellebrite, and then leaked a screenshot of the company’s capabilities against many Google Pixel phones, according to a forum post about the leak and 404 Media’s review of the material.

The leak follows others obtained and verified by 404 Media over the last 18 months. Those leaks impacted both Cellebrite and its competitor Grayshift, now owned by Magnet Forensics. Both companies constantly hunt for techniques to unlock phones law enforcement have physical access to.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




Why it might have been and may continue to be harder to get new releases from your local library.#News #libraries #Books


Libraries Scramble for Books After Giant Distributor Shuts Down


This story was reported with support from the MuckRock foundation.

One of the largest distributors of print books for libraries is winding down operations by the end of the year, a huge disruption to public libraries across the country, some of which are warning their communities the shutdown will limit their ability to lend books.

“You might notice some delays as we (and more than 6,000 other libraries) transition to new wholesalers,” the Jacksonville Public Library told its community in a Facebook post. “We're keeping a close eye on things and doing everything we can to minimize any wait times.”

The libraries that do business with the distributor learned about the shutdown earlier this month via Reddit.

Upon learning of her company’s closure, Jennifer Kennedy, a customer services account manager with Baker & Taylor, broke the news on October 6 in the r/Libraries Reddit community.

“I just wanted the libraries to know,” Kennedy told 404 Media. “I didn’t want them to be held hostage waiting for books that would never come. I respect them too much for all this nonsense.”

Kennedy’s post prompted other current and former B&T employees to confirm the announcement and express concern for the competitors about to be inundated with requests from the libraries who would be scrambling for new suppliers.

B&T in Memoriam


Baker & Taylor has been in the book business just short of 200 years. Its primary focus was distributing physical copies of books to public libraries. The company also provided librarians with tools that helped them work more effectively on collection development and processing.

But the company has spent decades being acquired by and divested from private equity firms, has served as a revolving door for senior leadership, and was sued by a competitor earlier this year for alleged data misuse. It was almost acquired again in September, this time by a distributor that works with mass-market retailers like Walmart and Target. That deal fell through.

On October 7, Publishers Weekly reported B&T let go of more than 500 employees the day the internal announcement was made. At least one law firm is currently investigating B&T for allegedly violating the federal Worker Adjustment and Retraining Notification (WARN) Act, and it took the company weeks to let account holders know.

Since the internal announcement, Kennedy says customer service staff at B&T have not received guidance on how to respond to inquiries from libraries, leaving them on the frontline and in the dark on issues ranging from whether existing orders would be fulfilled to securing refunds for materials they may have already paid for.

“Some libraries didn’t realize we are pretty much closed as of right now,” Kennedy added.

B&T did not respond when asked for comment.

Kennedy has been with B&T for 16 years. At a time when it's uncommon to remain with one company more than a few years, that’s exactly what many of B&T’s employees have been able to do, until now. The same was true of the libraries that did business with them. Andrew Harant, director of the Cuyahoga Falls Library, had to weigh the library’s longstanding business relationship with the company against the roughly 20 percent of books the library had ordered since the beginning of the year that never arrived.

“For us, that was about 1,500 items,” Harant told 404 Media, which for a small library is a lot of books to order and never receive.

Release dates for new books came and went on B&T’s main software platform for viewing and managing orders, Title Source 360. Harant realized the platform, better known as TS360, was marking preordered books that never arrived as on backorder, which was “not sustainable.”

In September, Cuyahoga Falls Library canceled all outstanding orders with B&T.

“We needed to step up and make sure that we’re getting the books for our patrons that they needed,” he said.

Cuyahoga Falls Library was fortunate to have an existing account with the other main distributor on the scene, Ingram Content Group. This has been true for many of the libraries 404 Media reached out to for this story.

“The easier part is re-ordering the book,” Shellie Cocking, Chief of Collections and Technical Services for the San Francisco Public Library, told 404 Media. “The harder part is replacing the tools you use to order books.”

Integrated Fallout


Of the ancillary services B&T offered customers, TS360 was Cocking’s favorite. It helped her streamline collection development tasks, for instance, anticipating how popular a title might be or determining how many copies of a book to purchase, which, for larger libraries with dozens of branches, could be complicated to figure out manually. Once titles were ordered in TS360, B&T generated a Machine-Readable Cataloging (MARC) record, derived from its own record set, that was automatically delivered to the library via its API integration. This product, BTCat, was the subject of a lawsuit brought by OCLC earlier this year.

OCLC owns WorldCat, the global union catalog of library collections that lets anyone see which libraries own which items. OCLC alleged in a U.S. district court filing that B&T misused its proprietary bibliographic records to populate a competing cataloging database. OCLC also accused B&T of inserting clauses into its contracts, where the two companies’ businesses and customers overlapped, requiring libraries to grant B&T access to their cataloging records so the libraries could then license the records back to B&T for BTCat. B&T has denied these claims, accusing OCLC of stifling fair competition in an already consolidated marketplace.

Marshall Breeding, an independent consultant who monitors library vendor mergers, has been following all of this rather closely. He says B&T's closure creates a number of bottlenecks for libraries, the primary one being whether suppliers like Ingram or Brodart can absorb thousands of libraries as customers all at once.

“Maybe, maybe not,” Breeding told 404 Media. “It’s going to take them a while to set up the business relationships and technical things that have to be set up for libraries to automatically order books from the providers.”

But one thing is evident.

“Libraries are kind of in a weaker position just scrambling to find a vendor at all,” he added.

Less competition in the market makes for more challenging working conditions all around. Just ask Erin Hughes, director of the Wood Ridge Memorial Library in New Jersey, who made the move over to Ingram after a series of negative experiences with B&T in 2021, from late and damaged deliveries to customer service calls that went poorly, to say the least. Hughes worries her experience with B&T will happen again, only this time with Ingram.

Since the Reddit announcement, she's noticed it's a little more difficult to get a rep on the phone and the number of shipments to the library is smaller. But the other way Hughes is seeing the problem play out involves the consortium her library belongs to. While she may have forgone B&T years ago, her network hasn't, which affects interlibrary loan lending.

“The resource sharing is going to be off for a bit,” Hughes told 404 Media.

Amazon Incoming


If Ingram’s service stagnates due to the B&T cluster, Hughes says she'll use Amazon, which recently launched its own online library hub, offering competitive pricing. One downside, says Hughes, is that it's Amazon.

“No, we do have a little bit of pause around Amazon,” she added. “But we’re at a point now where Ingram actually does supply most of the books for Amazon. So we’re already in the devil’s pocket. It’s all connected. It’s all integrated. And as much as I personally don’t care for the whole thing, I don’t really see a lot of other options.”

It's hard not to think this outcome was predictable and also preventable. We know what happens when private equity gets involved with businesses not expected to generate high growth or returns, as well as what happens when there's too little market competition in any given sector. It can't be a cautionary tale because market consolidation is in itself a cautionary tale.

But it’s also worth acknowledging how the timing could not be worse. Library use is way up right now, which is indicative of the times. People are buying less for various reasons. People also seem to like the idea of putting a little friction between their media consumption habits and Big Brother, even at the expense of a little convenience.

“We kind of made our own bed a little bit because we didn’t branch out,” said Hughes. “We didn’t find other solutions to this, and we were relying essentially on two giant companies, one of which folded so quick it was not even funny.”




Videos on social media show officers from ICE and CBP using facial recognition technology on people in the field. One expert described the practice as “pure dystopian creep.”#ICE #CBP #News #Privacy


ICE and CBP Agents Are Scanning Peoples’ Faces on the Street To Verify Citizenship


“You don’t got no ID?” a Border Patrol agent in a baseball cap, sunglasses, and neck gaiter asks a kid on a bike. The officer and three others had just stopped the two young men on their bikes during the day in what a video documenting the incident says is Chicago. One of the boys is filming the encounter on his phone. He says in the video he was born here, meaning he would be an American citizen.

When the boy says he doesn’t have ID on him, the Border Patrol officer has an alternative. He calls over to one of the other officers, “can you do facial?” The second officer then approaches the boy, gets him to turn around to face the sun, and points his own phone camera directly at him, hovering it over the boy’s face for a couple seconds. The officer then looks at his phone’s screen and asks the boy to verify his name. The video stops.

💡
Do you have any more videos of ICE or CBP using facial recognition? Do you work at those agencies or know more about Mobile Fortify? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




“The shameless use of covert recording technology at massage parlours to gain likes, attention, and online notoriety is both disgusting and dangerous.”#News #ballotinitiatives #1201


Meta's Ray-Ban Glasses Users Film and Harass Massage Parlor Workers


A number of Instagram accounts with hundreds of thousands of followers and millions of views have uploaded videos, filmed with Meta’s Ray-Ban glasses, that show men entering massage parlors across the country and soliciting the workers there for “tuggy” massages, or sex work. In some cases, the women laugh at the men, dismiss them, or don’t understand what they’re talking about, but in a few cases they discuss specific sex acts and what they would cost.

It doesn’t appear that the women in the videos know they are being filmed or that the videos are being shared online, where they’re viewed by millions of people. In some cases, the exact location of the massage parlor is easy to find because the videos show its sign upon entering. This is extremely dangerous to the women in the videos, who can be targeted by both law enforcement and racist, sexist extremists. In 2021, a man who shot and killed eight people at massage parlors told police he targeted them because he had a “sexual addiction."

The videos show how Meta has built an entire supply chain for dangerous, privacy-violating content on the internet. It sells glasses that allow people to surreptitiously film others in public and operates a social network where inflammatory, outrageous content is rewarded and monetized, and where Instagram often only moderates violating content after journalists reach out for comment.

The most popular of these accounts, which had more than 600,000 followers and multiple videos with around 2 million views, was served to me while scrolling Reels. Many of the videos are tagged on Instagram as “Ray-Ban Meta glasses,” which indicates what device the video was made with. In one video, the person wearing the glasses briefly shows himself in a mirror as he enters a massage parlor.

After reaching out to Meta for comment, the company asked me for examples of the videos, indicating that it wasn’t able to find them itself. Meta then removed the account I flagged, as well as other accounts by the same creator, who apparently set up multiple accounts in preparation for moderation.

"People are responsible for following the law, whether or not they're wearing Ray-Ban Metas,” a Meta spokesperson told me in an email. “Unlike smartphones, our glasses have an LED light that activates whenever someone captures content, so it’s clear the device is recording. The content and associated accounts have been removed for multiple policy violations.”

As 404 Media reported last week, people can pay $60 for a modification to Meta’s Ray-Bans to disable that privacy-protecting LED. Amazon and other online retailers also sell various stickers that cover the recording light, but Meta has also updated the Ray-Bans software so it will stop recording if it’s covered. Circumventing or hiding the LED recording light is a common subject on Reddit and a top search result for the product. In the massage parlor videos, it’s not clear that the people who are being filmed know they are being filmed, even if the LED light is on.

Meta also told me its terms of service state that users are responsible for complying with all applicable laws and for using Ray-Ban Meta glasses in a safe, respectful manner. It said people shouldn't use them to engage in harmful activities like harassment or capturing sensitive information.

The people behind the accounts attempt to monetize them by selling access to full videos which they claim capture their sexual encounters at the massage parlor. They link out to an adult pay-per-view service called No Fans, which allows users to buy and view adult content without creating an account. One video on Instagram that’s pitched as a “Latina House Call” sends viewers to No Fans to buy the full video. On No Fans, users can buy the “Latina Tuggy Bundle” for $28.49.

This gives them access to four videos, but the locations and people in those videos look nothing like anything that’s been posted to Instagram. It’s not clear if the people who are making the Instagram videos ever actually go through with the sex work they talk about. Several of the Instagram videos end with them saying they’re going to go get more cash from an ATM and exit the massage parlor.

Angela Wu, executive director of SWAN Vancouver, an organization that promotes rights and safety of migrant and immigrant women engaged in sex work, told me that the organization is aware of these accounts and that workers who face privacy concerns at massage parlors may choose to move to more hidden locations, where they face greater risk of assault, robbery, and exploitation.

"Earlier this year, SWAN Vancouver became aware of disturbing social media videos showing individuals wearing Ray-Ban Meta glasses to enter massage parlours across North America and record interactions with workers. Many of these workers are immigrant and newcomer women who may or may not engage in sex work, but experience stigma nonetheless,” Wu told me in an email. “The shameless use of covert recording technology at massage parlours to gain likes, attention, and online notoriety is both disgusting and dangerous. Due to criminalization and stigma, sex workers face disproportionate levels of violence and harassment. Violations of privacy can lead to arrest, immigration consequences, and lasting harm.”

“SWAN’s community made extensive efforts to report these videos, and we are deeply disappointed that social media platforms allowed them to remain online,” Wu added. “To warn the im/migrant women we support, SWAN used our Abuser Alert system to notify workers about the use of Ray-Ban Meta glasses and the videos circulating online. We also received reports from community members about clients entering massage parlours wearing the glasses and recording women without their knowledge."

Some of the exact same videos were also posted to TikTok. These TikTok accounts also attempt to monetize the videos by linking to full pornographic videos sold through Patreon.

Another TikTok account, separate from the people also posting to Instagram, also posts first person videos of entering massage parlors and asking for sex work. The person behind this account said that their previous account was banned for violating TikTok community guidelines. That person also shared a video of themselves saying that they’ll stop posting “tuggy” videos because someone stole their Ray-Ban Meta glasses.

After I reached out for comment, TikTok removed the accounts I asked the company about. “On the question about people being filmed without their knowledge, you may want to reach out to Meta to ask if there are features/ safeguards they've built into the Ray Bans to prevent this,” a TikTok spokesperson told me in an email. “For example, I have a pair and I know there's a little LED light - perhaps this is being disabled or there's some way around this.”

One of the massage parlor Instagram accounts shows that the people behind it tried recording other forms of harassment with Meta’s Ray-Ban glasses before finding the massage parlor niche. Before the massage parlor videos, this person entered various businesses and bothered employees. In one video, the person filming enters a Best Buy and yells until security asks him to stop.




"The White House just marked the end of the console wars; DHS is posting deep fried Halo memes. We are somewhere else entirely."#News


Trump Admin’s Racist Halo Memes Are ‘A New Level of Dehumanization of Immigrants’


On Monday morning, the Trump administration used a picture of Halo’s Master Chief to call for the destruction of immigrants. This administration is no stranger to appropriating pop culture for its propaganda, but something about seeing the stalwart hero of a beloved video game twisted into an anti-immigrant super soldier hit people pretty hard.

Over the weekend, the Trump administration shared AI-generated Halo memes across its social media accounts. This culminated in the official Department of Homeland Security accounts sharing an image of dudes in Spartan armor riding a Warthog under the words “DESTROY THE FLOOD JOIN.ICE.GOV.” It was this image, in particular, that got in people’s heads.

In the fiction of Halo, the Flood is a parasitic creature that infects sentient beings and turns them into monsters whose only desire is to spread the parasite. They’re depicted as a brainless, fast-moving wave of flesh that cannot be reasoned with.

Finishing this fight. pic.twitter.com/6Ezq9NUqMq
— Homeland Security (@DHSgov) October 27, 2025


Michael Senters—a PhD candidate at Virginia Tech who studies the political consequences of online culture—sent me a DM out of the blue after he saw the Warthog meme. “The Halo tweets and the reactions on Twitter are actually driving me insane,” Senters told me. This is a man who regularly subjects himself to the depths of 4chan so he can study its effect on politics. He’s a veteran of the worst online spaces the internet has to offer, but the Halo meme got to him.

Trump’s propagandists have used the aesthetics of Star Wars and Studio Ghibli to push their message. They’ve set the Pokémon theme song to footage of ICE raids under the title “Gotta Catch Em All.” They’ve layered fash-wave variations of the MGMT song “Little Dark Age” over footage pulled from arrests. We’ve seen this administration do similar things in the past, so why did the Halo meme feel worse to Senters?

“What makes this debacle with the Halo memes different from other invocations of fandom culture is twofold in my opinion. First is the fact that Microsoft has declined to push back on the use of its biggest IP,” he said. “Combined with Microsoft donating to the White House ballroom project it gives the impression that Microsoft tacitly supports this.”

Microsoft declined to comment on this story.

Other IP holders have fought the administration over memes and won. The DHS video using the Pokémon theme song is still up, though The Pokémon Company International said it hadn’t given the administration permission to use its song. The band MGMT got DHS to pull the video that used its song “Little Dark Age.” Even comedian Theo Von was able to force DHS to take down a video that featured him without his permission. As of this writing, the Halo memes are all still up across the Trump administration’s accounts. A video posted on DHS social media accounts Tuesday played music from the Halo soundtrack over footage of a Border Patrol raid.

“Second, and far worse in my opinion is this reaches a new level of dehumanization of immigrants by referring to them as the Flood, a parasitic alien lifeform in Halo who exist solely to eradicate all other forms of life and are controlled by the Gravemind, a monstrous intelligence that lurks in the shadows,” Senters said. “Immigrants stand in for the Flood while the Gravemind stands in for the Jews, creating a perfect metaphor for the far-right that allows them to target two of their traditional enemies with exterminationist rhetoric and it's not hyperbolic to say it's exterminationist because in Halo the only way to defeat the Flood is to wipe them out entirely, otherwise they will continue to reproduce.”

The Trump administration has long invoked racist imagery, much of it pulled from America’s past, to sell its agenda. But overtly equating immigrants to a ravening horde of monsters from a video game has its closest analogue in Nazi propaganda.

“What is especially surreal is seeing niche memes pressed into the service of the most controversial and violent aspects of President Trump's agenda. A few months ago, it was the ‘Ghiblification’ of the kidnapping and detention of American residents. Now it is a picture of Master Chief and a recruitment pitch to join ICE and ‘Destroy the Flood,’” Emerson T. Brooking, the director of strategy at the Digital Forensic Research Lab, told 404 Media.

“In many ways, Trump administration officials are trying to use these online motifs to smuggle concepts that would otherwise be too extreme for the American people. Most Americans do not want to ‘destroy’ legal asylum seekers. And referring to this group of people as a ‘flood’ is the sort of thing that was once the domain of white-nationalist manifestos. But tie these things together with an image of Master Chief and a Halo Warthog, and the inconceivable becomes a casual joke,” Brooking said.

Trump’s propagandists are extremely online and tuned to what’s trending on social media. The reason Halo is being used to push violence against immigrants is that Microsoft announced a remake of the original game last week. The bigger news was that, for the first time, the Master Chief would appear on PlayStation. In a joke post on X, GameStop declared this the official end of the console war, a term that refers to the decades-long feud between fans of different video game consoles.

The official White House X account retweeted the joke post with an image of Trump as Master Chief. Another White House-aligned social media account gave Trump credit for ending the console war. Angry Joe, a popular gaming YouTuber, riled up other gamers online by posting “FUCK ICE! And FUCK DONALD TRUMP!” in response to the "Destroy the Flood” meme. The composer of the original games, Marty O’Donnell, reminded everyone that he’s running for Congress and promised to “destroy the flood” if elected. Other people from the original development team told Game File that seeing Master Chief used this way sickened them.

Independent journalist Alyssa Mercante managed to get a response from the White House press team about the Halo memes. “Yet another war ended under President Trump's watch—only one leader is fully committed to giving power to the players, and that leader is Donald J. Trump. That’s why he’s hugely popular with the American people and American Gamers,” White House Deputy Press Secretary Kush Desai told Mercante in an email.

“It is really not enough to describe the comms teams of the second Trump administration as ‘terminally online.’ The White House just marked the end of the console wars; DHS is posting deep fried Halo memes. We are somewhere else entirely,” Brooking said.




Do you want ‘AI-powered social orbits,’ ‘autonomous recruiting firms,’ and an ‘AI-powered credit card?’ Too bad, you’re getting them anyway.#News #a16z


a16z Is Funding a 'Speedrun' to AI-Generated Hell on Earth


What if your coworkers were AI? What if AI agents, not humans, monitored security camera feeds? What if you had an AI tech recruiter to find motivated candidates, or an entirely autonomous recruiting firm? What if UPS, but AI, or finance, but DeepMind?

Does that sound like absolute hell on Earth? Well, too bad, because the giant Silicon Valley investment firm Andreessen Horowitz (a16z) is giving companies up to $1 million each to develop every single one of these ideas as part of its Speedrun program.

Speedrun is an accelerator program startups can apply to in order to receive funding from a16z as well as a “fast-paced, 12-week startup program that guides founders through every critical stage of their growth,” according to Speedrun’s site. “It kicks off with an orientation to introduce the cohort, then dives into rapid product development—helping founders think through MVP while addressing key topics like customer acquisition and design partnerships.”

The program covers brand building, customer acquisition and launch, fundraising, team building, and more. The selected startups and founders meet each other, and receive the curriculum via workshops and keynote sessions from “luminary speakers” such as Zynga founder Mark Pincus, Figma co-founder Dylan Field, a16z’s namesakes Marc Andreessen and Ben Horowitz, and others.

Silicon Valley incubators and accelerators are common, but I’ve rarely seen such an unappetizing buffet of bad ideas as Speedrun’s AI-centric 2025 cohort.

Last week, I wrote about Doublespeed, essentially a click farm that sells “synthetic influencers” to astroturf whatever product or service you want across social media, despite it being a clear violation of every social media platform's policy on inauthentic behavior. But that was just the tip of the iceberg.

a16z’s Speedrun is also backing:

  • Creed: An AI company “rooted in Christian Values” which produces Lenny, a “Bible-based AI buddy who's always got your back with wise words, scripture-inspired guidance, and a listening ear whenever you need it.”
  • Zingroll: The “world’s largest Netflix-quality AI streaming platform,” which is another way of saying it’s a Netflix populated exclusively with AI Slop.
  • Vega: which is building “AI-powered social orbits.” What does that mean? Not entirely clear, but the company has produced one of the most beautiful Mad Libs paragraphs I’ve ever seen: “We’re building the largest textual data moat on human relationships by gamifying the way people leave notes for each other. For the first time, LLMs can analyze millions of raw, human-written notes at scale and turn them into structured meaning, powering the most annotated social graph ever created.”
  • Moona Health: an AI-powered sleep care app the company says is covered by insurance. “Our AI-powered platform automates insurance claims and scheduling and analyzes sleep data – providing personalized session guidelines to therapists,” Moona says.
  • Jooba: “The world’s first autonomous recruiting firm.”
  • Margin: “The World’s first AI powered credit card.” Margin says “Customers earn points, with dynamic rewards that adapt to their preferences in real time.”
  • First Voyage: A wellness app that gives you AI “mythological pets that turn wellness into play.”
  • Axon Capital: billed as “DeepMind for Finance,” Axon says it has “pioneered brain-inspired, low-latency AI for financial markets.”

Part of the strategy for these types of accelerators and Silicon Valley venture capital firms more broadly is to place a lot of bets on a lot of startups with the knowledge that most of them are not going to make it. A million dollars is not a lot of money to a16z, especially when it only needs one of these companies to 100x its investment in order to make the whole endeavor profitable. What makes this Speedrun and the current moment we’re in with generative AI different is that a lot of AI implementations are going to be shoved down our throats before investors realize what AI is and isn’t good for.

Are Doublespeed’s AI-generated social media accounts actually going to convince people to use whatever products they’re promoting? The accounts I’ve seen lead me to believe that the answer is no, but until then, they will continue to flood social media with garbage. Is Jooba going to entirely replace HR professionals and recruiters? I don’t know, but a whole bunch of people who are trying to get a job to pay rent are going to get caught up in a dehumanizing process until we find out.

So the next time you find yourself asking why you're being inundated with AI wherever you go, remember that the answer is that someone with millions of dollars to spare paid for it on the off chance that it will yield a nice return.




“When we let powerful people’s books be protected from criticism, we give up the right to hold power accountable.”#News


Rogue Goodreads Librarian Edits Site to Expose 'Censorship in Favor of Trump Fascism’


On Friday morning, Goodreads users who wanted to read reviews of the werewolf romance Mate by Ali Hazelwood were confronted by the cover of the new Eric Trump book Under Siege. One of the site's volunteer moderators had gone rogue and changed Mate’s cover, added the subtitle “Goodreads Censorship in Favor of Trump,” and altered Mate’s listing into an explanation of why. To hear them tell it, Goodreads was removing criticism of Trump’s book from the site.

“Silencing criticism of political figures—especially those associated with authoritarian movements—helps normalize and strengthen those movements,” the post that replaced Mate’s description said. “When we let powerful people’s books be protected from criticism, we give up the right to hold power accountable.”
Goodreads employs a volunteer staff of “Librarians” who act as moderators for the site and have the power to make changes to the listings. One of these librarians altered the titles, pictures, and blurbs of several popular books including Mate, the Reese Witherspoon-penned thriller Gone Before Goodbye, and the Nicholas Sparks bestseller Remain. The changes were up for a few hours before Goodreads caught on and fixed the listings.



The rogue librarian claims Goodreads is censoring negative reviews of pro-Trump books. They said that Goodreads deleted negative reviews of Under Siege as they came in after its publication on October 14. “These were the honest opinions from real readers who disagreed with the book’s content,” the Librarian said in their post. “When people noticed and complained, Goodreads deleted ALL reviews of the book—positive and negative alike. This wasn’t an accident or a one-time glitch. It was a deliberate pattern.”
Goodreads screenshot.
A Goodreads spokesperson confirmed that a Librarian had altered the covers and listings for the books. “We're aware of unauthorized edits made by a volunteer librarian to several book listings. All titles affected by the unauthorized edits have been restored to their correct information, and the librarian no longer has an account on Goodreads,” the spokesperson said.

In response to questions about reviews for the Eric Trump book, the spokesperson told 404 Media that Goodreads “has systems in place to detect unusual activity on book pages and may temporarily limit ratings and reviews that don’t adhere to our reviews and community guidelines. In all cases, we enforce clear standards and remove content and/or accounts that violate these guidelines.”

On Monday, the two week old Trump book had no reviews and no ratings. By Tuesday morning, Under Siege had begun to accumulate reviews and ratings again. The Kamala Harris campaign memoir 107 Days, by contrast, has been out since September 23 and has more than 14,000 ratings and more than 2,000 reviews.

Goodreads has done this kind of thing before, and its review guidelines state it will delete “unusual” reviews or “limit the ability to submit ratings.” The idea is to prevent review bombing of controversial figures, but the authors Goodreads protects tend to be conservatives. In the summer of 2024, it temporarily halted reviews of JD Vance’s memoir Hillbilly Elegy after people had begun to dunk on the Vice President by leaving reviews for the book. There are many “unusual” reviews still up for Harris’ memoir, including a one-star review that says “Did not read but so sick of seeing this 💩 in my suggested 🖕🖕”

This kind of one-sided protection from review bombing is at the heart of the rogue Goodreads librarian’s complaint. “When a platform removes criticism of a political book while leaving praise, or removes everything to hide that criticism existed, they’re not staying neutral—they’re picking a side,” their post said. “Goodreads is owned by Amazon, one of the world’s largest companies. When major platforms decide which opinions can exist and which must disappear, they shape what people think is true or acceptable.”


#News


The family of a dead teen girl said she’d still be alive if Roblox did a better job moderating its platform.#News


Lawsuit Accuses a16z of Turning Roblox Into a School Shooter's Playground


The mother of a teenager who died by suicide is suing Roblox, accusing the company of worrying more about its investors than the children in its audience. The complaint, filed this month, claims Kleiner Perkins and Andreessen Horowitz, who’ve collectively invested hundreds of millions of dollars into the gaming company, fostered a platform that monetizes children at the cost of their safety.
Attorneys for Jaimee Seitz filed the lawsuit in the Eastern District of Kentucky. Seitz is the mother of Audree Heine, a teen girl who died by suicide just after her 13th birthday in 2024. When detectives investigated Heine’s death they found she had a vast online social life centered around groups on Discord and Roblox that idolized school shooters like Dylan Klebold. Since Heine’s death, Seitz has been outspoken about the unique dangers of Roblox.

Heine’s family claims she would never have died had Roblox done a better job of moderating its platform. “Audree was pushed to suicide by an online community dedicated to glorifying violence and emulating notorious mass shooters, a community that can thrive and prey upon young children like Audree only because of Defendants’ egregiously tortious conduct,” the complaint said.

Seitz’s lawyers filed the 89-page lawsuit on October 20 and in it attempted to make the case that Roblox’s problems all stem from one cause: corporate greed. “The reason that Roblox is overrun with harmful content and predators is simple: Roblox prioritizes user growth, revenue, and eventual profits over child safety,” it said. “For years, Roblox has knowingly prioritized these numbers over the safety of children through the actions it has taken and decisions it has made to increase and monetize users regardless of the consequences.”

According to the lawsuit, Roblox’s earning potential attracted big investors which encouraged it to abandon safety for quick cash. “Roblox’s business model allowed the company to attract significant venture capital funding from big-name investors like Kleiner Perkins and Andreessen Horowitz, putting enormous pressure on the company to prioritize growing and monetizing its users.”

Andreessen Horowitz, known as a16z, is a venture capital firm whose previous investments include Civitai—a company that made money from nonconsensual AI porn—an “uncensored” AI project that offered users advice on how to commit suicide, and a startup that’s selling access to thousands of “synthetic influencers” for use in manipulating public opinion.

In 2020, a16z led a round of funding that raised $150 million for Roblox. “Roblox is one of those rare platform companies with massive traction and an organic, high-growth business model that will advance the company, and push the industry forward for many years to come,” David George, a general partner at the investment firm, said in a press release at the time.

The lawsuit claims Roblox knows that kids are easy marks for low effort monetization efforts common in online video games. “Recognizing that children have more free time, underdeveloped cognitive functioning, and diminished impulse control, Roblox has exploited their vulnerability to lure them to its app,” it said.

The lawsuit notes that Roblox did not require age verification for years, nor did it restrict communication between children and adults and didn’t require an adult to set up an account for a child. Roblox rolled out age verification and age-based communications systems in July, a feature that uses AI to scan the faces of its users to check their age.

These kinds of basic safety features, however, have taken years to implement. According to the lawsuit, there’s a reason Roblox has been slow on safety. “In pursuit of growth, Roblox deprioritized safety measures even further so that it could report strong numbers to Wall Street,” it said. “For instance, Roblox executives rejected employee proposals for parental approval requirements that would protect children on the platform. Employees also reported feeling explicit pressure to avoid any changes that could reduce platform engagement, even when those changes would protect children from harmful interactions on the platform.”

Roblox is now the subject of multiple investigative reports that have exposed the safety problems on its platform. It’s also the subject of multiple lawsuits; Seitz’s is the 12th such case filed by Anapol Weiss, the law firm representing her.

According to Seitz’s interviews with the press and the lawsuit, her daughter got caught up in a subculture on Roblox and Discord called The True Crime Community (TCC). “Through Roblox, Audree was exposed to emotional manipulation and social pressure by other users, including TCC members, who claimed to revere the Columbine shooters, depicted them as misunderstood outcasts who took revenge on their bullies, and encouraged violence against oneself and others,” the lawsuit said.

404 Media searched through Roblox’s game servers after the lawsuit was filed and found multiple instances of games named for the Columbine massacre. One server used pictures from Parkland, Florida and another was advertised using the CCTV picture of Dylan Klebold and Eric Harris from the Columbine shooting.


#News


The general who advised Netflix’s nuclear Armageddon movie doesn’t believe in abolishing nuclear weapons.#News #nuclear


'House of Dynamite' Is About the Zoom Call that Ends the World


This post contains spoilers for the Netflix film ‘House of Dynamite.’

Netflix’s new Kathryn Bigelow-directed nuclear war thriller wants audiences to ask themselves the question: what would you do if you had 15 minutes to decide whether or not to end the world?

House of Dynamite is about a nuclear missile hitting the United States as viewed from the conference call where America’s power players gather to decide how to retaliate. The decision window is short, just 15 minutes. In the film that’s all the time the President has to assess the threat, pick targets, and decide if the US should launch its own nuclear weapons in response. It’s about how much time they’d have in real life, too.
In House of Dynamite, America’s early warning systems detect the launch of a nuclear-armed intercontinental ballistic missile (ICBM) somewhere in the Pacific Ocean. The final target is Chicago and when it lands more than 20 million people will die in a flash. Facing the destruction of a major American city, the President must decide what—if any—action to take in response.

The US has hundreds of nuclear missiles ready to go and plans to strike targets across Russia, China, and North Korea. But there’s a catch. In the film, America didn’t see who fired the nuke and no one is taking credit. It’s impossible to know who to strike and in what proportion. What’s a president to do?

House of Dynamite tells the story of this 15-minute Zoom call—from detection of the launch to its terminal arrival in Chicago—three different times. There are dozens of people on the call, from deputy advisors to the Secretary of Defense to the President himself, and each run-through of the events gives the audience a bigger peek at how the whole machine operates, culminating, in the end, with the President’s view.

Many of the most effective and frightening films about nukes—Threads and The Day After—focus on the lives of the humans living in the blast zone. They’re about the crumbling of society in a wasteland, beholden to the decisions of absent political powers so distant that they often never appear on screen. House of Dynamite is about those powerful people caught in the absurd game of nuclear war, forced to make decisions with limited information and enormous consequences.

In both the movie and real life, America has ground-based interceptors stationed in California and Alaska that are meant to knock a nuke out of the sky should one ever get close. Early in the film, missileers in Alaska launch an interceptor only to watch it fail. It’s a horrifying and very real possibility. The truth of interceptors is that we don’t have many of them, the window to hit a fast-moving ICBM is narrow, and in tests they only work about half the time.

“So it’s a fucking coin toss? That’s what $50 billion buys us?” Secretary of Defense Reid Baker, played by Jared Harris, says in the film. This detail caught the eye of the Trump White House, which plans to spend around $200 billion on a space-based version of the same tech.

Bloomberg reported on an internal Pentagon memo that directed officials to debunk House of Dynamite’s claims about missile defense. The Missile Defense Agency told Bloomberg that interceptors “have displayed a 100% accuracy rate in testing for more than a decade.” The Pentagon separately told Bloomberg that it wasn’t consulted on the film at all.

Director Bigelow worked closely with the CIA to make Zero Dark Thirty, but has tussled with the Pentagon before. The DoD didn’t like The Hurt Locker and pulled out of the project after showing some initial support. Bigelow has said in interviews that she wanted House of Dynamite to be an independent project.

Despite that independence, House of Dynamite nails the details of nuclear war in 2025. The acronyms, equipment, and procedures are all frighteningly close to reality and Bigelow did have help on set from retired US Army lieutenant general and former US Strategic Command (STRATCOM) Chief of Staff Dan Karbler.

Karbler is a career missile guy and as the chief of staff of STRATCOM he oversaw America’s nuclear weapons. He told 404 Media that he landed the gig by scaring the hell out of Bigelow and her staff on, appropriately, a Zoom call.

Bigelow wanted to meet Karbler and they set up a big conference call on Zoom. He joined the call but kept his camera off. As people filtered in, Karbler listened and waited. “Here’s how it kind of went down,” Karbler told 404 Media. “There’s a little break in the conversation so I click on my microphone, still leaving the camera off, and I just said: ‘This is the DDO [deputy director of operations] convening a National Event Conference. Classification of this conference TOP SECRET. TK [Talent Keyhole] SI: US STRATCOM, US INDOPACOM, US Northern Command, SecDef Cables, military system to the secretary.”

“SecDef Cables, please bring the secretary of defense in the conference. Mr. Secretary, this is the DDO. Because of the time constraints of this missile attack, recommend we transition immediately from a national event conference to a nuclear decision conference, and we bring the President into the conference. PEOC [Presidential Emergency Operations Center], please bring the President into the conference.”

“And I stopped there and I clicked on my camera and I said, ‘ladies and gentlemen, that’s how the worst day in American history will begin. I hope your script does it some justice,’” Karbler said. The theatrics worked and, according to Karbler, he sat next to Bigelow every day on set and helped shape the movie.

House of Dynamite begins and ends with ambiguity. We never learn who fired the nuclear weapon at Chicago. The last few minutes of the film focus on the President looking through retaliation plans. He’s in a helicopter, moments from the nuke hitting Chicago, looking through plans that would condemn millions of people on the planet to fast and slow deaths. The film ends as he wallows in this decision; we never learn what he chooses.

Karbler said it was intentional. “The ending was ambiguous so the audience would leave with questions,” he said. “The easy out would have been: ‘Well, let’s just have a nuclear detonation over Chicago.’ That’s the easy out. Leaving it like it is, you risk pissing off the audience, frankly, because they want a resolution of some sort, but they don’t get that resolution. So instead they’re going to have to be able to have a discussion.”

In my house, at least, the gambit worked. During the credits my wife and I talked about whether or not we’d launch the nukes ourselves (We’d both hold off) and I explained the unpleasant realities of ground based interceptors.

Karbler, too, said he wouldn’t have launched the nukes. It’s just one nuke, after all. It’s millions of people, sure, but if America launches its nukes in retaliation then there’s a good chance Russia, China, and everyone else might do the same. “Because of the potential of a response provoking a much, much broader response, and something that would not be proportional,” Karbler said. “Don’t get me wrong, 20 million people, an entire city, a nuclear attack that hit us, but if we respond back, then you’re going to get into im-proportionality calculus.”

Despite the horrors present on screen in House of Dynamite, Karbler isn’t a nuclear abolitionist. “The genie is out of the bottle, you’re not going to put it back in there,” he said. “So what do we do to ensure our best defense? It seems counterintuitive, you know, the best defense is gonna be a good offense. You’ve gotta be able to have a response back against the adversary.”

Basically, Karbler says we should do what we’re doing now: build a bunch more nukes and make sure your enemies know you’re willing to use them. “Classic deterrence has three parts: impose unacceptable costs on the adversary. Deny the adversary any benefit of attack, read that as our ability to defend ourselves, missile defense, but also have the credible messaging behind it,” he said.

These are weapons that have the power to end the world, weapons we make and pray we never use. But we do keep making them. Almost all the old nuclear treaties between Russia and America are gone. The US is spending trillions to replace old ICBM silos and make new nuclear weapons. After decades of maintaining a relatively small nuclear force, China is building up its own stockpiles.

Trump has promised a Golden Dome to keep America safe from nukes and on Sunday Putin claimed Russia had successfully tested a brand new nuclear-powered cruise missile. The people who track existential threats believe we’re closer to nukes ending the world than at any other time in history.




Court records show Homeland Security Investigations (HSI), a part of ICE, and the FBI obtained Con Edison user data. The utility provider refuses to say whether law enforcement needs a warrant to access its data.#ICE #News


Con Edison Refuses to Say How ICE Gets Its Customers’ Data


Con Edison, the energy company that serves New York City, refuses to say whether ICE or other federal agencies require a search warrant or court order to access its customers’ sensitive data. Con Edison’s refusal to answer questions comes after 404 Media reviewed court records showing that Homeland Security Investigations (HSI), a division of ICE, has previously obtained such data, and that the FBI has performed what the records call ‘searches’ of Con Edison data.

The records and Con Edison’s stonewalling raise questions about how exactly law enforcement agencies are able to access the utility provider’s user data, whether that access is limited in any way, and whether ICE still has access during its ongoing mass deportation effort.

“We don’t comment to either confirm or deny compliance with law enforcement investigations,” Anne Marie, media relations manager for Con Edison, told 404 Media after being shown a section of the court records.

In September, 404 Media emailed Con Edison’s press department to ask if law enforcement officers have to submit a search warrant or court order to search Con Edison data. A few days later, Marie provided the comment neither confirming nor denying any details of the company’s data sharing practice.

💡
Do you know anything else about how ICE is accessing or using data? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

404 Media then sent several follow-up inquiries, including whether ICE requires a warrant or other legal mechanism to obtain user data. Con Edison did not respond to any of those follow-ups.

Con Edison’s user data is especially sensitive, and likely valuable to authorities, because in many cases it will directly link a specific person to a particular address. If someone is paying for electricity for a home they own or rent, they most likely do it under their real name.

Federal agencies have repeatedly turned to Con Edison data as part of criminal investigations, according to court records. In one case, the FBI previously said it believed a specific person occupied an apartment after performing a “search” of Con Edison records and finding a Con Edison account in that person’s name. Another case shows the FBI obtaining a Con Edison user’s email address after finding it linked to a utilities account. A third case says “a search of records maintained by Con Edison, a public utilities provider to the greater New York City area” revealed that a specific person was receiving utilities at a target address. Several other cases contain similar language.
Court records also show HSI has accessed Con Edison data as part of criminal investigations. One shows HSI getting data from Con Edison that reveals the name associated with a particular Con Edison account and address. Another says “there was no indication in the records from Con Edison that the SUBJECT PREMISES is divided into multiple units.” A third shows that HSI “confirmed with Con Edison” who was a customer at an address at a particular point in time.

Ordinarily HSI is focused on criminal investigations into child abuse, money laundering, cybercrime, and other types of criminal networks. But in the second Trump administration’s mass deportation effort, the distinction between HSI and ICE is largely meaningless. HSI has reassigned at least 6,198 agents, or nearly 90 percent, and 12,353 personnel overall to assist the deportation arm of ICE, according to data published by the Cato Institute in September. HSI also performs worksite enforcement.

The court records don’t describe how the investigators obtained the Con Edison data exactly, whether they obtained a search warrant or court order, or elaborate on how some officials were able to “search” Con Edison records.

Usually companies and organizations readily acknowledge how and when law enforcement can access customer data. This is for the benefit of users, who can then better understand what legal mechanisms protect their data, but also for law enforcement officials themselves, so they know what information they need to provide during an investigation. Broadly, companies might require a law enforcement official to obtain a search warrant or send a subpoena before they provide the requested user data, based on its sensitivity.


#News #ICE


These anti-facial recognition glasses technically work, but won’t save you from our surveillance dystopia.#News #idguard #Surveillance


Zenni’s Anti-Facial Recognition Glasses are Eyewear for Our Paranoid Age


Zenni, an online glasses store, is offering a new coating for its lenses that the company says will protect people from facial recognition technology. Zenni calls it ID Guard and it works by adding a pink sheen to the surface of the glasses that reflects the infrared light used by some facial recognition cameras.

Do they work? Yes, technically, according to testing conducted by 404 Media. Zenni’s ID Guard glasses block infrared light. It’s impossible to unlock an iPhone with Face ID while wearing them and they black out the eyes of the wearer in photos taken with infrared cameras.
However, ID Guard glasses will not at all stop some of the most common forms of facial recognition that are easy to access and abuse. If someone takes a picture of your naked face with a normal camera in broad daylight while you’re wearing them, there’s a good chance they’ll still be able to put your face through a database and get a match.

For example, I took pictures of myself wearing the glasses in normal light and ran them through PimEyes, a site that lets anyone run facial recognition searches. It identified me in seconds, even with the glasses. One of the biggest dangers of facial recognition is not a corporation running an advanced camera with fancy sensors; it’s an angry Taylor Swift fan who doxes you using a regular picture of your face. Zenni is offering some protection against the former, but can’t help with the latter.

But the glasses do block infrared light and many of the cameras taking pictures of us as we go about our lives rely on that to scan our faces. When those cameras see me now, there will be black holes where my eyes should be and that’s given me a strange kind of peace of mind.

The modern world is covered in cameras that track your every movement. In New Orleans, a private network of cameras uses facial recognition tech to track people in real time and alert cops to the presence of undesirables. Last year tech billionaire and media mogul Larry Ellison pitched a vision of the future where cameras capture every moment of everyone’s life to make sure they’re “on their best behavior.”

Zenni’s director of digital innovation, Steven Lee, told 404 Media that the company wanted to offer customers something that helped them navigate this environment. “There’s devices out there that are scanning us, even without our permission and just tracking us,” he said. “So we asked ourselves: ‘could there possibly be a set of lenses that could do more than just protect our vision, maybe it could protect our identity as well.’”

As a side benefit of beating facial recognition, I noticed the ID Guard lenses were more comfortable for me to wear in sunlight than my normal glasses. I’m sensitive to sunlight and need to wear prescription sunglasses outdoors to prevent headaches and discomfort. The Zenni glasses cut down on a lot of that without me needing to wear shades.

Lee explained that this was because ID Guard blocks infrared light from the sun as well as from cameras. That was one of the original purposes of the coating. “When we delved into that, we realized, not only could it protect your eyes from infrared…but it also had the additional benefit of protecting against a lot of devices out there…a lot of camera systems out there utilize infrared to detect different facial features and detect who you are,” he said.

There are many different kinds of facial recognition technology. Some simply take a picture of a user’s face and match it against a database, but those systems have a lot of problems. Sunglasses block the eyes, rendering one of the system’s biggest datapoints useless, and low-light pictures don’t work at all. That’s why many cameras taking pictures for facial recognition use infrared light to capture a person’s face.

“What's happening when you're using these infrared cameras is it's creating a map that's basically transforming your face into a number of digital landmarks, numerically transforming that into a map that makes us each unique. And so they then use an algorithm to figure out who we are, basically,” Lee said.

But the pink sheen of ID Guard beats the infrared rays. “When infrared light is trying to shine into your eyes, it’s basically being reflected away so it can’t actually penetrate and we’re able to block up to 80 percent of the infrared rays,” Lee said. “When that is happening, those cameras become less effective. They’re not able to collect as much data on your face.”
On the left, the Zenni ID Guard glasses under an infrared camera. On the right, normal sunglasses under an infrared camera. Matthew Gault photos.
To test ID Guard’s effectiveness I put the glasses on my wife and sent her to battle the most complex facial recognition system available to consumers: an iPhone. Apple’s Face ID is the most comprehensive facial recognition system normal people encounter every day. An iPhone uses three different cameras to project a grid of infrared lights onto a person’s face, flood the space in between with infrared light, and take a picture. These infrared lights make a 3D map of a user’s face and use it to unlock the phone.

My wife uses an iPhone with Face ID for work, and when she was wearing Zenni’s ID Guard glasses, the phone would not unlock. Her iPhone rejected her in low light, darkness, and broad daylight if she was wearing the Zenni glasses. If she wore her own sunglasses, however, the phone opened immediately, because Apple’s Face ID infrared lights passed straight through them and saw her eyes.

The 2D infrared pictures taken in most public spaces running facial recognition systems are much less sophisticated than an iPhone. And there’s a way we can test those too: trail cameras. The cameras hunters and park rangers use to monitor the wilderness are often equipped with infrared lights that help them take pictures at night and in low light conditions. Using one to take a picture of my face while wearing the Zenni glasses should show us what I look like in public to facial recognition cameras used by retail businesses and the police.

Sure enough, the Zenni glasses with ID Guard stopped the camera from seeing my eyes when the infrared light was on. I sat for several photos in dark conditions while the camera captured my face. The infrared went right through my normal sunglasses, while the ID Guard glasses from Zenni stopped the light altogether. The camera couldn’t get a clear shot of my eyes.

Zenni is not the first company to offer an anti-infrared coating that disrupts facial recognition tech, but it is the first to make it affordable while offering a variety of style choices. The company Reflectacles has been offering Wayfarer-style glasses with an anti-IR coating for a few years now, but its style options are limited and carry a powerful green-yellow tint. Zenni, by contrast, is a major glasses retailer competing with other major retailers; it offers a variety of styles that match different aesthetics, and the pink sheen is far less noticeable than the green coating.

Zenni offers the ID Guard on most of its frames and the glasses have a subtle pink tint that’s obvious if you look directly at them, but I didn’t notice when I wore them. I used them to watch TV and went to the movies with them on and never noticed altered colors. “So with the pinkish hue, that was not by accident,” Lee said. “It was purposeful. We wanted to do something where we could actively show individuals that the lenses were actively working to protect their identity.”

Whether Zenni’s ID Guard will actually protect people from facial recognition is less interesting than the fact that it exists at all. The state of our surveillance dystopia is such that a major glasses retailer is advertising anti-facial recognition features as a selling point as if it were normal.




The Canadian Centre for Child Protection found more than 120 images of identified or known victims of CSAM in the dataset.#News


AI Dataset for Detecting Nudity Contained Child Sexual Abuse Images


A large image dataset used to develop AI tools for detecting nudity contains a number of images of child sexual abuse material (CSAM), according to the Canadian Centre for Child Protection (C3P).

The NudeNet dataset, which contains more than 700,000 images scraped from the internet, was used to train an AI image classifier that could automatically detect nudity in an image. C3P found that more than 250 academic works have either cited or used the NudeNet dataset since it was made available for download on Academic Torrents, a platform for sharing research data, in June 2019.

“A non-exhaustive review of 50 of these academic projects found 13 made use of the NudeNet data set, and 29 relied on the NudeNet classifier or model,” C3P said in its announcement.

C3P found more than 120 images of identified or known victims of CSAM in the dataset, including nearly 70 images focused on the genital or anal area of children who are confirmed or appear to be pre-pubescent. “In some cases, images depicting sexual or abusive acts involving children and teenagers such as fellatio or penile-vaginal penetration,” C3P said.

People and organizations that downloaded the dataset would have no way of knowing it contained CSAM unless they went looking for it, and most likely they did not, but having those images on their machines would be technically criminal.

“CSAM is illegal and hosting and distributing creates huge liabilities for the creators and researchers. There is also a larger ethical issue here in that the victims in these images have almost certainly not consented to have these images distributed and used in training,” Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on digitally manipulated images, told me in an email. Farid also developed PhotoDNA, a widely used image-identification and content filtering tool. “Even if the ends are noble, they don’t justify the means in this case.”

“Many of the AI models used to support features in applications and research initiatives have been trained on data that has been collected indiscriminately or in ethically questionable ways. This lack of due diligence has led to the appearance of known child sexual abuse and exploitation material in these types of datasets, something that is largely preventable,” Lloyd Richardson, C3P's director of technology, said.

Academic Torrents removed the dataset after C3P issued a removal notice to its administrators.

"In operating Canada's national tipline for reporting the sexual exploitation of children we receive information or tips from members of the public on a daily basis," Richardson told me in an email. "In the case of the NudeNet image dataset, an individual flagged concerns about the possibility of the dataset containing CSAM, which prompted us to look into it more closely."

C3P’s findings are similar to 2023 research from Stanford University’s Cyber Policy Center, which found that LAION-5B, one of the largest datasets powering AI-generated images, also contained CSAM. The organization that manages LAION-5B removed it from the internet following that report and only shared it again once it had removed the offending images.

"These image datasets, which have typically not been vetted, are promoted and distributed online for hundreds of researchers, companies, and hobbyists to use, sometimes for commercial pursuits," Richardson told me. "By this point, few are considering the possible harm or exploitation that may underpin their products. We also can’t forget that many of these images are themselves evidence of child sexual abuse crimes. In the rush for innovation, we’re seeing a great deal of collateral damage, but many are simply not acknowledging it — ultimately, I think we have an obligation to develop AI technology in responsible and ethical ways."

Update: This story has been updated with comment from Lloyd Richardson.


#News


Andreessen Horowitz is funding a company that clearly violates the inauthentic behavior policies of every major social media platform.#News #AI #a16z


a16z-Backed Startup Sells Thousands of ‘Synthetic Influencers’ to Manipulate Social Media as a Service


A new startup backed by one of the biggest venture capital firms in Silicon Valley, Andreessen Horowitz (a16z), is building a service that allows clients to “orchestrate actions on thousands of social accounts through both bulk content creation and deployment.” Essentially, the startup, called Doublespeed, is pitching an astroturfing AI-powered bot service, which is in clear violation of policies for all major social media platforms.

“Our deployment layer mimics natural user interaction on physical devices to get our content to appear human to the algorithims [sic],” the company’s site says. Doublespeed did not respond to a request for comment, so we don’t know exactly how its service works, but the company appears to be pitching a service designed to circumvent many of the methods social media platforms use to detect inauthentic behavior. It uses AI to generate social media accounts and posts, with a human doing 5 percent of “touch up” work at the end of the process.

On a podcast earlier this month, Doublespeed cofounder Zuhair Lakhani said that the company uses a “phone farm” to run AI-generated accounts on TikTok. So-called “click farms” similarly use hundreds of mobile phones to fake online engagement and reviews. Lakhani said one Doublespeed client generated 4.7 million views in less than four weeks with just 15 of its AI-generated accounts.

“Our system analyzes what works to make the content smarter over time. The best performing content becomes the training data for what comes next,” Doublespeed’s site says. Doublespeed also says its service can create slightly different variations of the same video, saying “1 video, 100 ways.”

“Winners get cloned, not repeated. Take proven content and spawn variation. Different hooks, formats, lengths. Each unique enough to avoid suppression,” the site says.
One of Doublespeed's AI influencers
Doublespeed allows clients to use its dashboard for between $1,500 and $7,500 a month, with more expensive plans allowing them to generate more posts. At the $7,500 price, users can generate 3,000 posts a month.

The dashboard I was able to access for free shows users can generate videos and “carousels,” slideshows of images that are commonly posted to Instagram and TikTok. The “Carousel” tab appears to show sample posts for different themes. One, called “Girl Selfcare,” shows images of women traveling and eating at restaurants. Another, called “Christian Truths/Advice,” shows images of women who don’t show their faces and text that says things like “before you vent to your friend, have you spoken to the Holy Spirit? AHHHHHHHHH”

On the company’s official Discord, one Doublespeed staff member explained that the accounts the company deploys are “warmed up” on both iOS and Android, meaning the accounts have been at least slightly used, in order to make it seem like they are not bots or brand new accounts. Lakhani also said on the Discord that users can target their posts to specific cities, and that the service currently only targets TikTok but that it has internal demos for Instagram and Reddit. Lakhani said Doublespeed doesn’t support “political efforts.”

A Reddit spokesperson told me that Doublespeed’s service would violate its terms of service. TikTok, Meta, and X did not respond to a request for comment.

Lakhani said Doublespeed has raised $1 million from a16z as part of its “Speedrun” accelerator, “a fast-paced, 12-week startup program that guides founders through every critical stage of their growth.”

Marc Andreessen, after whom half of Andreessen Horowitz is named, also sits on Meta’s board of directors. Meta did not immediately respond to our question about one of its board members backing a company that blatantly aims to violate its policy on “authentic identity representation.”

What Doublespeed is offering is not that different from some of the AI generation tools Jason has covered that produce a lot of the AI slop already flooding social media. It’s also a similar, but more blatant, version of an app I covered last year that aimed to use social media manipulation to “shape reality.” The difference here is that it has backing from one of the biggest VC firms in the world.


#ai #News #a16z


The app, which went viral before facing multiple data breaches, is currently unavailable on the Apple App Store.#tea #News


Apple Removes Women Dating Safety App from the App Store


Apple has removed Tea, the women’s safety app which went viral earlier this year before facing multiple data breaches, from the App Store.

“This app is currently not available in your country or region,” a message on the Apple App Store currently says when trying to visit a link to the app.

Apple told 404 Media in an email it removed the app, as well as a copycat called TeaOnHer, for failing to meet the company’s terms of use around content moderation and user privacy. Apple also said it received an excessive number of complaints, including ones about the personal data of minors being posted in the apps.

💡
Do you know anything else about this removal? Do you work at Tea or did you used to? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The company pointed to parts of its guidelines including that apps are not allowed to share someone’s personal data without their permission, and that apps need a mechanism for reporting objectionable content.

Randy Nelson, head of insights and media resources at app intelligence company Appfigures, first alerted 404 Media to the app’s removal.



#News #tea


When Amazon Web Services went offline, people lost control of their cloud-connected smart beds, getting stuck in reclined positions or roasting with the heat turned all the way up.#News


The AWS Outage Bricked People’s $2,700 Smartbeds


Sleepers snoozing in Eight Sleep smartbeds had a bad night on Monday when a major outage of Amazon Web Services (AWS) caused their beds to malfunction. Some were left with the bed’s heat blasting, others were left in a sitting position and unable to recline. One woman said her bed went haywire and she had to unplug it from the wall.

At around 3 a.m. ET on Monday morning, the US-EAST-1 AWS region went down and screwed up internet-connected services across the planet. Customers of the banks Lloyds and Halifax couldn’t access their accounts. United Airlines check-ins stopped functioning. And people who rest in Eight Sleep beds awoke to find their mattresses had turned against them.

An Eight Sleep bed is a smart bed that starts at $2,700. Users provide their own mattress and Eight Sleep sells them a mattress cover and a “Pod” that acts as the brain of the system. If customers want to spend a few thousand more, they can get a base that adjusts the position of the mattress, provides biometric sleeping data, and heats and cools the sleeper. Customers must also subscribe to a service for Eight Sleep, which ranges from $17 to $33 a month.

Eight Sleep runs on the cloud, and when the servers go down or the customer’s internet goes out, the bed is effectively bricked. There’s no offline mode. Customers have complained about the lack of an offline mode for a while, but the AWS outage focused their rage.
“So apparently, when my internet goes down, my bed decides to go on strike too. A quick outage, and boom—no change in sleep position available, not even with manual taps,” one customer on r/eightsleep said. “Maybe consider giving people a grace period before their $5,000 bed locks them into the world’s most ergonomic sitting position. AWS attack or Internet down for a few hours should not brick my bed.”

“Cloud only is unacceptable,” said another. “It’s 2025 there is no reason an internet or AWS server outage should impact your entire customer base's sleep—especially given the price tag of your product. Need EightSleep’s product team to opine here, your customer base demands it!”

“My pod is at +5 and I am sweating cuz I can’t turn it down or off,” said one comment.

Eight Sleep CEO Matteo Franceschetti apologized for the restless night in a statement posted to X. “The AWS outage has impacted some of our users since last night, disrupting their sleep. That is not the experience we want to provide and I want to apologize for it,” he said. He added that the company was restoring the bed’s features as AWS came back online and promised to outage-proof the Pods.

“Mine is still not working—it went super haywire and still seems to be turning on and off randomly with the inability to stop or control it. I had to unplug it,” ESPN host Victoria Arlen said on X, replying to Franceschetti. “I tried to get it going again and it’s still uncontrollable with the system turning on and off.”



“Would be great if my bed wasn’t stuck in an inclined position due to an AWS outage. Cmon now,” @Brandon25774008 said on X.

The truth is that so long as Eight Sleep beds have to communicate with a server to function, they’re always in danger of dying. That point of failure means the beds could go out at any time, leaving the people who paid $5,000 for a fancy bed with little recourse. And, of course, no company lasts forever.

“When ES eventually goes bust, our pods will be bricked,” one Redditor said. “The fact that the pods cannot be controlled when you don’t have the internet is diabolical. I wish I knew this before purchasing. This basically means in the possibly near future, all of our pods will be bricked […] ES need to get their heads out of their ass and for once do a pro customer change and introduce an ‘offline’ mode where we can connect to the pod directly and at the very least change the temperature. It has wifi, it can make its own SSID, just make it work ES.”

Proactive Eight Sleep users have already found one solution: jailbreak the Pod. The Eight Sleep subscription costs at least $200 a year, the Pod uploads multiple gigabytes of telemetry data to Eight Sleep’s servers every month, and when the internet goes down the bed dies. If you must own a $5,000 bed that heats and cools you dynamically, shouldn’t you take full control of it?

There’s an active Discord and a GitHub repository for a group of Eight Sleep snoozers who’ve decided to do just that. According to the GitHub page, the jailbreak “allows complete control of device WITHOUT requiring internet access. If you lose internet, your pod WILL NOT turn off, it will continue working!”

Data centers are vulnerable. Server clusters go down. As long as there is a single point of failure and your device is phoning home to a network out of your control, it’s a risk. We have allowed tech companies to mediate the most basic functions of our lives, from cooking to travel to sleep. The AWS and Eight Sleep outage is a stark reminder that we should do what we can to limit the control these tech companies have over our lives.

“I’m continuously horrified that I inextricably linked my sleep and therefore health to a cloud provider’s reliability,” one person said in the comments on Reddit.


#News


After condemnation from Trump’s AI czar, Anthropic’s CEO promised its AI is not woke.#News #AI #Anthropic


Anthropic Promises Trump Admin Its AI Is Not Woke


Anthropic CEO Dario Amodei has published a lengthy statement on the company’s site in which he promises Anthropic’s AI models are not politically biased, that it remains committed to American leadership in the AI industry, and that it supports the AI startup space in particular.

Amodei doesn’t explicitly say why he feels the need to state these positions, all of them obvious ones for the CEO of an American AI company to hold. But the reason is that the Trump administration’s so-called “AI Czar” has publicly accused Anthropic of producing “woke AI” that it’s trying to force on the population via regulatory capture.

The current round of beef began earlier this month when Anthropic’s co-founder and head of policy Jack Clark published a written version of a talk he gave at The Curve AI conference in Berkeley. The piece, published on Clark’s personal blog, is full of tortured analogies and self-serving sci-fi speculation about the future of AI, but essentially boils down to Clark saying he thinks artificial general intelligence is possible, extremely powerful, potentially dangerous, and scary to the general population. In order to prevent disaster, put the appropriate policies in place, and make people embrace AI positively, he said, AI companies should be transparent about what they are building and listen to people’s concerns.

“What we are dealing with is a real and mysterious creature, not a simple and predictable machine,” he wrote. “And like all the best fairytales, the creature is of our own creation. Only by acknowledging it as being real and by mastering our own fears do we even have a chance to understand it, make peace with it, and figure out a way to tame it and live together.”

Venture capitalist, podcaster, and the White House’s “AI and Crypto Czar” David Sacks was not a fan of Clark’s blog.

“Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering,” Sacks said on X in response to Clark’s blog. “It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.”

Things escalated yesterday when Reid Hoffman, LinkedIn’s co-founder and a megadonor to the Democratic party, supported Anthropic in a thread on X, saying “Anthropic was one of the good guys” because it's one of the companies “trying to deploy AI the right way, thoughtfully, safely, and enormously beneficial for society.” Hoffman also appeared to take a jab at Elon Musk’s xAI, saying “Some other labs are making decisions that clearly disregard safety and societal impact (e.g. bots that sometimes go full-fascist) and that’s a choice. So is choosing not to support them.”

Sacks responded to Hoffman on X, saying “The leading funder of lawfare and dirty tricks against President Trump wants you to know that ‘Anthropic is one of the good guys.’ Thanks for clarifying that. All we needed to know.” Musk hopped into the replies saying: “Indeed.”

“The real issue is not research but rather Anthropic’s agenda to backdoor Woke AI and other AI regulations through Blue states like California,” Sacks said. Here, Sacks is referring to Anthropic’s opposition to Trump’s One Big Beautiful Bill, which included a provision that would have stopped states from regulating AI in any way for 10 years, and its backing of California’s SB 53, which requires AI companies that generate more than $500 million in annual revenue to make their safety protocols public.

All this sniping leads us to Amodei’s statement today, which doesn’t mention the beef above but is clearly designed to calm investors who are watching Trump’s AI guy publicly saying one of the biggest AI companies in the world sucks.

“I fully believe that Anthropic, the administration, and leaders across the political spectrum want the same thing: to ensure that powerful AI technology benefits the American people and that America advances and secures its lead in AI development,” Amodei said. “Despite our track record of communicating frequently and transparently about our positions, there has been a recent uptick in inaccurate claims about Anthropic's policy stances. Some are significant enough that they warrant setting the record straight.”

Amodei then goes on to count the ways in which Anthropic already works with the federal government and directly grovels to Trump.

“Anthropic publicly praised President Trump’s AI Action Plan. We have been supportive of the President’s efforts to expand energy provision in the US in order to win the AI race, and I personally attended an AI and energy summit in Pennsylvania with President Trump, where he and I had a good conversation about US leadership in AI,” he said. “Anthropic’s Chief Product Officer attended a White House event where we joined a pledge to accelerate healthcare applications of AI, and our Head of External Affairs attended the White House’s AI Education Taskforce event to support their efforts to advance AI fluency for teachers.”

The more substantive part of his argument is that Anthropic didn’t support SB 53 until it made an exemption for all but the biggest AI labs, and that several studies found that Anthropic’s AI models are not “uniquely politically biased” (read: not woke).

“Again, we believe we share those goals with the Trump administration, both sides of Congress, and the public,” Amodei wrote. “We are going to keep being honest and straightforward, and will stand up for the policies we believe are right. The stakes of this technology are too great for us to do otherwise.”

Many of the AI industry’s most vocal critics would agree with Sacks that Clark’s blog and AI companies’ “fear-mongering” are self-serving because they make the companies seem more valuable and powerful. Some critics will also agree that AI companies take advantage of that perspective to influence AI regulation in ways that benefit them as incumbents.

It would be a far more compelling argument if it didn’t come from Sacks and Musk, who found a much better way to influence AI regulation to benefit their companies and investments: working for the president directly and publicly bullying their competitors.




The same hackers who doxed hundreds of DHS, ICE, and FBI officials now say they have the personal data of tens of thousands of officials from the NSA, Air Force, Defense Intelligence Agency, and many other agencies.#News #ICE


Hackers Say They Have Personal Data of Thousands of NSA and Other Government Officials


A hacking group that recently doxed hundreds of government officials, including from the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE), has now built dossiers on tens of thousands of U.S. government officials, including NSA employees, a member of the group told 404 Media. The member said the group did this by digging through its caches of stolen Salesforce customer data. The person provided 404 Media with samples of this information, which 404 Media was able to corroborate.

As well as NSA officials, the person sent 404 Media personal data on officials from the Defense Intelligence Agency (DIA), the Federal Trade Commission (FTC), Federal Aviation Administration (FAA), Centers for Disease Control and Prevention (CDC), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), members of the Air Force, and several other agencies.

The news comes after the Telegram channel belonging to the group, called Scattered LAPSUS$ Hunters, went down following the mass doxing of DHS officials and the apparent doxing of a specific NSA official. It also provides more clarity on what sort of data may have been stolen from Salesforce’s customers in a series of breaches earlier this year, and which Scattered LAPSUS$ Hunters has attempted to extort Salesforce over.

💡
Do you know anything else about this breach? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“That’s how we’re pulling thousands of gov [government] employee records,” the member told 404 Media. “There were 2000+ more records,” they said, referring to the personal data of NSA officials. In total, they said the group has private data on more than 22,000 government officials.

Scattered LAPSUS$ Hunters’ name is an amalgamation of other infamous hacking groups—Scattered Spider, LAPSUS$, and ShinyHunters. They all come from the overarching online phenomenon known as the Com. On Discord servers and Telegram channels, thousands of scammers, hackers, fraudsters, gamers, and people just hanging out congregate, hack targets big and small, and beef with one another. The Com has given birth to a number of loose-knit but prolific hacking groups, including those behind massive breaches like MGM Resorts, and normalized extreme physical violence between cybercriminals and their victims.

On Thursday, 404 Media reported Scattered LAPSUS$ Hunters had posted the names and personal information of hundreds of government officials from DHS, ICE, the FBI, and the Department of Justice. 404 Media verified portions of that data and found the dox sometimes included people’s residential addresses. The group posted the dox along with messages such as “I want my MONEY MEXICO,” a reference to DHS’s unsubstantiated claim that Mexican cartels are offering thousands of dollars for dox on agents.



After publication of that article, a member of Scattered LAPSUS$ Hunters reached out to 404 Media. To prove their affiliation with the group, they sent a message signed with the ShinyHunters PGP key with the text “Verification for Joseph Cox” and the date. PGP keys can be used to encrypt or sign messages to prove they’re coming from a specific person, or at least from someone who holds that key, which is typically kept private.

They sent 404 Media personal data related to DIA, FTC, FAA, CDC, ATF, and Air Force members. They also sent personal information on officials from the Food and Drug Administration (FDA), Health and Human Services (HHS), and the State Department. 404 Media verified parts of the data by comparing them to previously breached data collected by cybersecurity company District 4 Labs. It showed that many parts of the private information did relate to government officials with the same name, agency, and phone number.

Except for the earlier DHS and DOJ data, the hackers don’t appear to have posted this more wide-ranging data publicly. Most of those agencies did not immediately respond to a request for comment. The FTC and Air Force declined to comment. DHS has not replied to multiple requests for comment sent since Thursday. Neither has Salesforce.

The member said the personal data of government officials “originates from Salesforce breaches.” This summer Scattered LAPSUS$ Hunters stole a wealth of data from companies that were using Salesforce tech, with the group claiming it obtained more than a billion records. Customers included Disney/Hulu, FedEx, Toyota, UPS, and many more. The hackers did this by social engineering victims and tricking them into connecting to a fraudulent version of a Salesforce app. The hackers tried to extort Salesforce, threatening to release the data on a public website, and Salesforce told clients it wouldn’t pay the ransom, Bloomberg reported.

On Friday the member said the group was done with extorting Salesforce. But they continued to build dossiers on government officials. Before the dump of DHS, ICE, and FBI dox, the group posted the alleged dox of an NSA official to their Telegram group.

Over the weekend that channel went down and the member claimed the group’s server was taken “offline, presumably seized.”

The doxing of the officials “must’ve really triggered it, I think it’s because of the NSA dox,” the member told 404 Media.

Matthew Gault contributed reporting.


#News #ice