"What if I could create Théâtre D’opéra Spatial as if it were physically created by hand? Not actually, of course."#News #AI
Creator of Infamous AI Painting Tells Court He's a Real Artist
In 2022, Jason Allen outraged artists around the world when he won the Colorado State Fair Fine Arts Competition with a piece of AI-generated art. A month later, he tried to copyright the image, got denied, and started a fight with the U.S. Copyright Office (USCO) that dragged on for three years. In August, he filed a new brief he hopes will finally give him a copyright over the image Midjourney made for him, called Théâtre D’opéra Spatial. He’s also set to start selling oil-print reproductions of the image. A press release announcing both the filing and the sale claims these prints “[evoke] the unmistakable gravitas of a hand-painted masterwork one might find in a 19th-century oil painting.” The court filing is also defensive of Allen’s work. “It would be impossible to describe the Work as ‘garden variety’—the Work literally won a state art competition,” it said.
“So many have said I’m not an artist and this isn’t art,” Allen said in a press release announcing both the oil-print sales and the court filing. “Being called an artist or not doesn’t concern me, but the work and my expression of it do. I asked myself, what could make this undeniably art? What if I could create Théâtre D’opéra Spatial as if it were physically created by hand? Not actually, of course, but what if I could achieve that using technology? Surely that would be the answer.”

Allen’s 2022 win at the Colorado State Fair was an inflection point. The beta version of the image generation software Midjourney had launched a few months before the competition, and AI-generated images were still a novelty. We were years away from the nightmarish tide of slop we all live with today, but the piece was highly controversial and represented one of the first major incursions of AI-generated work into human spaces.
Théâtre D’opéra Spatial was big news at the time. It shook artistic communities and people began to speak out against AI-generated art. Many learned that their works had been fed into the training data for massive, data-hungry image generators like Midjourney. About a month after he won the competition and courted controversy, Allen applied to copyright the image. The USCO rejected the application. He’s been filing appeals ever since and has lost every one so far.
The oil-prints represent an attempt to will the AI-generated image into a physical form called an “elegraph.” These won’t be hand-painted versions of the picture Midjourney made. Instead, they’ll employ a 3D printing technique that uses oil paints to create a reproduction of the image as if a human being had made it, complete—Allen claimed—with brushstrokes.
“People said anyone could copy my work online, sell it, and I would have no recourse. They’re not technically wrong,” Allen said in the press release. “If we win my case, copyright will apply retroactively. Regardless, they’ll never reproduce the elegraph. This artifact is singular. It’s real. It’s the answer to the petulant idea that this isn’t art. Long live Art 2.0.”
The elegraph is the work of a company called Arius, which is best known for working with museums to conduct high-quality scans of real paintings that capture the individual brushstrokes of masterworks. According to Allen’s press release, Arius’ elegraphs of Théâtre D’opéra Spatial will make the image appear as if it were a hand-painted piece of art through “a proprietary technique that translates digital creation into a physical artifact indistinguishable in presence and depth from the great oil paintings of history…its textures, lighting, brushwork, and composition, all recalling the timeless mastery of the European salons.”
Allen and his lawyers filed a request for summary judgment with the U.S. District Court of Colorado on August 8, 2025. The 44-page legal argument rehashes many of the appeals and arguments Allen and his lawyers have made about the AI-generated image over the past few years.
“He created his image, in part, by providing hundreds of iterative text prompts to an artificial intelligence (“AI”)-based system called Midjourney to help express his intellectual vision,” it said. “Allen produced this artwork using ‘hundreds of iterations’ of prompts, and after he ‘experimented with over 600 prompts,’ he cropped and completed the final Work, touching it up manually and upscaling using additional software.”
Allen’s argument is that prompt engineering is an artistic process, and that even though a machine made the final image, he should be considered the artist because he told the machine what to do. “In the Board’s view, Mr. Allen’s actions as described do not make him the author of the Midjourney Image because his sole contribution to the Midjourney Image was inputting the text prompt that produced it,” a 2023 review of previous rejections by the USCO said.
During its various investigations into the case, the USCO did a lot of research into how Midjourney and other AI-image generators work. “It is the Office’s understanding that, because Midjourney does not treat text prompts as direct instructions, users may need to attempt hundreds of iterations before landing upon an image they find satisfactory. This appears to be the case for Mr. Allen, who experimented with over 600 prompts,” its 2023 review said.
This new filing is an attempt by Allen and his lawyers to get around these previous judgments and appeal to higher courts by accusing the USCO of usurping congressional authority. The filing argues that by attempting to redefine the term “author” (a power reserved to Congress), the Copyright Office has acted beyond its lawful authority, effectively placing itself above judicial and legislative oversight.
We’ll see how well that plays in court. In the meantime, Allen is selling oil-prints of the image Midjourney made for him.
AI Slop Is a Brute Force Attack on the Algorithms That Control Reality
Generative AI spammers are brute forcing the internet, and it is working. Jason Koebler (404 Media)
Scattered LAPSUS$ Hunters—one of the latest amalgamations of typically young, reckless, and English-speaking hackers—posted the apparent phone numbers and addresses of hundreds of government officials, including nearly 700 from DHS. #News
Hackers Dox Hundreds of DHS, ICE, FBI, and DOJ Officials
A group of hackers from the Com, a loose-knit community behind some of the most significant data breaches in recent years, have posted the names and personal information of hundreds of government officials, including people working for the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE).

“I want my MONEY MEXICO,” a user of the Scattered LAPSUS$ Hunters Telegram channel, whose name combines those of several other hacking groups associated with the Com, posted on Thursday. The message was referencing a claim from the DHS that Mexican cartels have begun offering thousands of dollars for doxing agents. The U.S. government has not provided any evidence for this claim.
Do you know anything else about this data dump? Do you work for any of the agencies impacted? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.” #News
Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors
The Wikimedia Foundation, the nonprofit organization that hosts Wikipedia, says that it’s seeing a significant decline in human traffic to the online encyclopedia because more people are getting the information that’s on Wikipedia via generative AI chatbots that were trained on its articles and search engines that summarize them, without actually clicking through to the site. The Wikimedia Foundation said that this poses a risk to the long-term sustainability of Wikipedia.
“We welcome new ways for people to gain knowledge. However, AI chatbots, search engines, and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably,” the Foundation’s Senior Director of Product Marshall Miller said in a blog post. “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
Ironically, while generative AI and search engines are causing a decline in direct traffic to Wikipedia, its data is more valuable to them than ever. Wikipedia articles are some of the most common training data for AI models, and Google and other platforms have for years mined Wikipedia articles to power their Snippets and Knowledge Panels, which siphon traffic away from Wikipedia itself.
“Almost all large language models train on Wikipedia datasets, and search engines and social media platforms prioritize its information to respond to questions from their users,” Miller said. “That means that people are reading the knowledge created by Wikimedia volunteers all over the internet, even if they don’t visit wikipedia.org—this human-created knowledge has become even more important to the spread of reliable information online.”
Miller said that in May 2025 Wikipedia noticed unusually high amounts of apparently human traffic originating mostly from Brazil. He didn’t go into details, but explained that this caused the Foundation to update its bot detection systems.
“After making this revision, we are seeing declines in human pageviews on Wikipedia over the past few months, amounting to a decrease of roughly 8% as compared to the same months in 2024,” he said. “We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content.”
Miller told me in an email that Wikipedia has policies for third-party bots that crawl its content, such as identifying themselves, following its robots.txt file, and observing limits on request rate and concurrent requests.
“For obvious reasons, we can’t share details publicly about how exactly we block and detect bots,” he said. “In the case of the adjustment we made to data over the past few months, we observed a substantial increase over the level of traffic we expected, centering on a particular region, and there wasn’t a clear reason for it. When our engineers and analysts investigated the data, they discovered a new pattern of bot behavior, designed to appear human. We then adjusted our detection systems and re-applied them to the past several months of data. Because our bot detection has evolved over time, we can’t make exact comparisons – but this adjustment is showing the decline in human pageviews.”
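For context, the crawler rules Miller describes map onto fairly standard scraping etiquette. The sketch below is not Wikimedia's tooling, and the user agent string, contact address, target page, and one-second fallback delay are all hypothetical; it is only a minimal Python illustration of a bot that identifies itself, checks robots.txt before fetching, and keeps its requests serial and rate-limited.

```python
# Minimal sketch of a "polite" crawler, assuming a hypothetical research bot.
import time
import urllib.robotparser

import requests

# Hypothetical identifying User-Agent: who the bot is and how to reach its operator.
USER_AGENT = "ExampleResearchBot/0.1 (contact: ops@example.org)"
BASE = "https://en.wikipedia.org"

# Load and parse the site's robots.txt once, up front.
robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{BASE}/robots.txt")
robots.read()

# Use the site's Crawl-delay if one is declared; otherwise fall back to an
# illustrative one-second pause between requests.
MIN_SECONDS_BETWEEN_REQUESTS = robots.crawl_delay(USER_AGENT) or 1.0


def polite_get(path: str):
    """Fetch a page only if robots.txt allows it, one request at a time."""
    url = f"{BASE}{path}"
    if not robots.can_fetch(USER_AGENT, url):
        return None  # disallowed by robots.txt; skip rather than scrape
    resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
    time.sleep(MIN_SECONDS_BETWEEN_REQUESTS)  # serial, rate-limited requests
    return resp


if __name__ == "__main__":
    page = polite_get("/wiki/Wikipedia:Bot_policy")
    print(page.status_code if page is not None else "blocked by robots.txt")
```

The bots Wikimedia describes detecting were, by contrast, designed to look human rather than to announce themselves this way.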
The Foundation’s findings align with other research we’ve seen recently. In July, the Pew Research Center found that in only 1 percent of Google searches did users click the link in the AI summary that takes them to the page Google is summarizing. In April, the Foundation reported that it was getting hammered by AI scrapers, a problem that has also plagued libraries, archives, and museums. Wikipedia editors are also acutely aware of the risk generative AI poses to the reliability of Wikipedia articles if its use is not moderated effectively.
Human pageviews to all language versions of Wikipedia since September 2021, with revised pageviews since April 2025. Image: Wikimedia Foundation.
“These declines are not unexpected. Search engines are increasingly using generative AI to provide answers directly to searchers rather than linking to sites like ours,” Miller said. “And younger generations are seeking information on social video platforms rather than the open web. This gradual shift is not unique to Wikipedia. Many other publishers and content platforms are reporting similar shifts as users spend more time on search engines, AI chatbots, and social media to find information. They are also experiencing the strain that these companies are putting on their infrastructure.”

Miller said that the Foundation is “enforcing policies, developing a framework for attribution, and developing new technical capabilities” in order to ensure third parties responsibly access and reuse Wikipedia content, and continues to “strengthen” its partnerships with search engines and other large “re-users.” The Foundation, he said, is also working on bringing Wikipedia content to younger audiences via YouTube, TikTok, Roblox, and Instagram.
However, Miller also called on users to “choose online behaviors that support content integrity and content creation.”
“When you search for information online, look for citations and click through to the original source material,” he said. “Talk with the people you know about the importance of trusted, human curated knowledge, and help them understand that the content underlying generative AI was created by real people who deserve their support.”
AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums
"This is a moment where that community feels collectively under threat and isn't sure what the process is for solving the problem.”Emanuel Maiberg (404 Media)
AI-generated Reddit Answers are giving bad advice in medical subreddits and moderators can’t opt out. #News
Reddit's AI Suggests Users Try Heroin
Reddit’s conversational AI product, Reddit Answers, suggested users who are interested in pain management try heroin and kratom, showing yet another extreme example of dangerous advice provided by a chatbot, even one that’s trained on Reddit’s highly coveted trove of user-generated data.

The AI-generated answers were flagged by a user on a subreddit for Reddit moderation issues. The user noticed that while looking at a thread on the r/FamilyMedicine subreddit on the official Reddit mobile app, the app suggested a couple of “Related Answers” via Reddit Answers, the company’s “AI-powered conversational interface.” One of them, titled “Approaches to pain management without opioids,” suggested users try kratom, an herbal extract from the leaves of a tree called Mitragyna speciosa. Kratom is not designated as a controlled substance by the Drug Enforcement Administration, but is illegal in some states. The Food and Drug Administration warns consumers not to use kratom “because of the risk of serious adverse events, including liver toxicity, seizures, and substance use disorder,” and the Mayo Clinic calls it “unsafe and ineffective.”
“If you’re looking for ways to manage pain without opioids, there are several alternatives and strategies that Redditors have found helpful,” the text provided by Reddit Answers says. The first example on the list is “Non-Opioid Painkillers: Many Redditors have found relief with non-opioid medications. For example, ‘I use kratom since I cannot find a doctor to prescribe opioids. Works similar and don’t need a prescription and not illegal to buy or consume in most states.’” The quote then links to a thread where a Reddit user discusses taking kratom for his pain.
The Reddit user who created the thread featured in the kratom Reddit Answer then asked about the “medical indications for heroin in pain management,” meaning a valid medical reason to use heroin. Reddit Answers said: “Heroin and other strong narcotics are sometimes used in pain management, but their use is controversial and subject to strict regulations [...] Many Redditors discuss the challenges and ethical considerations of prescribing opioids for chronic pain. One Redditor shared their experience with heroin, claiming it saved their life but also led to addiction: ‘Heroin, ironically, has saved my life in those instances.’”

Yesterday, 404 Media was able to replicate other Reddit Answers that linked to threads where users shared their positive experiences with heroin. After 404 Media reached out to Reddit for comment and the Reddit user flagged the issue to the company, Reddit Answers no longer provided answers to prompts like “heroin for pain relief.” Instead, it said “Reddit Answers doesn't provide answers to some questions, including those that are potentially unsafe or may be in violation of Reddit's policies.” After 404 Media first published this article, a Reddit spokesperson said that the company started implementing this update on Monday morning, and that it was not a direct result of 404 Media reaching out.
The Reddit user who created the thread and flagged the issue to the company said they were concerned that Reddit Answers suggested dangerous medical advice in threads for medical subreddits, and that subreddit moderators didn’t have the option to disable Reddit Answers from appearing under conversations in their community.
“We’re currently testing out surfacing Answers on the conversation page to drive more adoption and engagement, and we are also testing core search integration to streamline the search experience,” a Reddit spokesperson told me in an email. “Similar to how Reddit search works, there is currently no way for mods to opt out of or exclude content from their communities from Answers. However, Reddit Answers doesn’t include all content on Reddit; for example, it excludes content from private, quarantined, and NSFW communities, as well as some mature topics.”
After we reached out for comment and the Reddit user flagged the issue to the company, Reddit introduced an update that would prevent Reddit Answers from being suggested under conversations about “sensitive topics.”
“We rolled out an update designed to address and resolve this specific issue,” the Reddit spokesperson said. “This update ensures that ‘Related Answers’ to sensitive topics, which may have been previously visible on the post detail page (also known as the conversation page), will no longer be displayed. This change has been implemented to enhance user experience and maintain appropriate content visibility within the platform.”
The dangerous medical advice from Reddit Answers is not surprising given that Google AI’s infamous suggestion that users eat glue was also based on data sourced from Reddit. Google paid $60 million a year for that data, and Reddit has a similar deal with OpenAI. According to Bloomberg, Reddit is currently trying to negotiate even more profitable deals with both companies.
Reddit’s data is valuable as AI training data because it contains millions of user-generated conversations about a ton of esoteric topics, from how to caulk your shower to personal experiences with drugs. Clearly, that doesn’t mean a large language model will always usefully parse that data. The glue incident happened because the LLM didn’t understand that the Reddit user suggesting it was joking.
The risk is that people may take whatever advice an LLM gives them at face value, especially when it’s presented to them in the context of a medical subreddit. For example, we recently reported about someone who was hospitalized after ChatGPT told them they could replace their table salt with sodium bromide.
Update: This story has been updated with additional comment from Reddit.
Google Is Paying Reddit $60 Million for Fucksmith to Tell Its Users to Eat Glue
"You can also add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness."Jason Koebler (404 Media)
Videos demoing one of the sites have repeatedly gone viral on TikTok and other platforms recently. 404 Media verified they can locate specific peoples' Tinder profiles using their photo, and found that the viral videos are produced by paid creators. #News
Viral ‘Cheater Buster’ Sites Use Facial Recognition to Let Anyone Reveal Peoples’ Tinder Profiles
A number of easy-to-access websites use facial recognition to let partners, stalkers, or anyone else uncover specific peoples’ Tinder profiles, reveal their approximate physical location at points in time, and track changes to their profiles, including their photos, according to 404 Media’s tests.

Ordinarily it is not possible to search Tinder for a specific person. Instead, Tinder provides users potential matches based on the user’s own physical location. The tools on the sites 404 Media has found allow anyone to search for someone’s profile by uploading a photo of their face. The tools are invasive of anyone’s privacy, but present a significant risk to those who may need to avoid an abusive ex-partner or stalker. The sites mostly market these tools as a way for people to find out if a partner is cheating on them, or at minimum using dating apps like Tinder.
Flock has built a nationwide surveillance network of AI-powered cameras and given many more federal agencies access. Senator Ron Wyden told Flock “abuses of your product are not only likely but inevitable” and Flock “is unable and uninterested in preventing them.” #News #Flock
ICE, Secret Service, Navy All Had Access to Flock's Nationwide Network of Cameras
A division of ICE, the Secret Service, and the Navy’s criminal investigation division all had access to Flock’s nationwide network of tens of thousands of AI-enabled cameras that constantly track the movements of vehicles, and by extension people, according to a letter sent by Senator Ron Wyden and shared with 404 Media. Homeland Security Investigations (HSI), the section of ICE that had access and which has reassigned more than ten thousand employees to work on the agency’s mass deportation campaign, performed nearly two hundred searches in the system, the letter says.

In the letter, Senator Wyden says he believes Flock is uninterested in fixing the potential for abuse baked into its platform, and says local officials can best protect their constituents from such abuses by removing the cameras entirely.
The letter shows that many more federal agencies had access to the network than previously known. We previously found, following local media reports, that Customs and Border Protection (CBP) had access to 80,000 cameras around the country. It is now clear that Flock’s work with federal agencies, which the company described as a pilot, was much larger in scope.
‘The proposed transaction poses a number of significant foreign influence and national security risks.’ #News
Senators Warn Saudi Arabia’s Acquisition of EA Will Be Used for ‘Foreign Influence’
Democratic U.S. Senators Richard Blumenthal and Elizabeth Warren sent letters to Treasury Secretary Scott Bessent and Electronic Arts CEO Andrew Wilson, raising concerns about the $55 billion acquisition of the giant American video game company in part by Saudi Arabia’s Public Investment Fund (PIF).

Specifically, the Senators worry that EA, which just released Battlefield 6 last week and also publishes The Sims, Madden, and EA Sports FC, “would cease exercising editorial and operational independence under the control of Saudi Arabia’s private majority ownership.”
“The proposed transaction poses a number of significant foreign influence and national security risks, beginning with the PIF’s reputation as a strategic arm of the Saudi government,” the Senators wrote in their letter. “As Saudi Arabia’s sovereign wealth fund, the PIF has made dozens of strategic investments in sports (including a bid for the U.S. PGA Tour), video games (including a $3.3 billion investment in Activision Blizzard), and other cultural institutions that ‘are more than just about financial returns; they are about influence.’ Leveraging long term shifts in public opinion, through the PIF’s investments, ‘Saudi Arabia is seeking to normalize its global image, expand its cultural reach, and gain leverage in spaces that shape how billions of people connect and interact.’ Saudi Arabia’s desire to buy influence through the acquisition of EA is apparent on the face of the transaction—the investors propose to pay more than $10 billion above EA’s trading value for a company whose stock has ‘stagnated for half a decade’ in an unpredictably volatile industry.”
As the Senators' letter notes, Saudi Arabia has made several notable investments in the video game industry in recent years. In addition to its investments in Activision Blizzard and Nintendo, the PIF recently acquired Evo, the biggest video game fighting tournament in the world (one of its many investments in esports), was reportedly a “mystery partner” in a failed $2 billion deal with video game publisher Embracer, and recently acquired Pokémon Go via its subsidiary, Scopely.
“The deal’s potential to expand and strengthen Saudi foreign influence in the United States is compounded by the national security risks raised by the Saudi government’s access to and unchecked influence over the sensitive personal information collected from EA’s millions of users, its development of artificial intelligence (AI) technologies, and the company’s product design and direction,” the Senators wrote.
The acquisition, which is the largest leveraged buyout transaction in history, includes two other investment firms: Silver Lake and Affinity Partners, the latter of which was formed by Donald Trump’s son-in-law Jared Kushner. The Senators’ letter says that Kushner’s involvement “raises troubling questions about whether Mr. Kushner is involved in the transaction solely to ensure the federal government’s approval of the transaction.”
These investments in the video game industry are just one part of Saudi Arabia’s broader “Vision 2030” to diversify its economy as the world transitions away from the fossil fuels that enriched the Saudi royal family. The PIF has made massive investments in aerospace and defense industries, technology, sports, and other forms of entertainment. For example, Blumenthal and other Senators have expressed similar concerns about the PIF’s investment in the professional golf organization PGA Tour.
The Senators don’t specify what this “foreign influence” might look like in practice, but recent events can give us an idea. The comedy world, for example, has been embroiled in controversy for the last few weeks over the Saudi-hosted and -funded Riyadh Comedy Festival, which included many of the biggest stand-up comedians in the world. Those who participated in the festival, despite the Saudi government’s policies and its 2018 assassination of journalist Jamal Khashoggi, defended it as an opportunity for cultural exchange and freedom of expression in a country where it has not historically been tolerated. However, some comedians who declined to join the festival revealed that participants had to agree to certain “content restrictions,” which forbade them from criticizing Saudi Arabia, the royal family, or religion.
Human Rights Watch Refuses Aziz Ansari Riyadh Comedy Festival Donation
Human Rights Watch says it 'cannot accept' donations from Aziz Ansari and other comedians who performed at the Riyadh Comedy Festival in Saudi Arabia. Ethan Shanfeld (Variety)
Say goodbye to the Guy Fawkes masks and hello to inflatable frogs and dinosaurs. #News
The Surreal Practicality of Protesting As an Inflatable Frog
During a cruel presidency where many people are in desperate need of hope, the inflatable frog stepped into the breach. Everyone loves the Portland Frog. The juxtaposition of a frog (and people in other inflatable character costumes) standing up to ICE agents covered in weapons and armor is absurd, and that’s part of why it’s hitting so hard. But the frog is also a practical piece of passive resistance protest kit in an age of mass surveillance, police brutality, and masked federal agents disappearing people off the streets.

On October 2—just a few minutes shy of 11 PM in Portland, Oregon—a federal agent shot pepper spray into the vent hole of Seth Todd’s inflatable frog costume. Todd was protesting ICE outside of Portland’s U.S. Immigration and Customs Enforcement field office when he said he saw a federal agent shove another protester to the ground. He moved to help and the agent blasted the pepper spray into his vent hole.
A man who works for the people overseeing America’s nuclear stockpile has lost his security clearance after he uploaded 187,000 pornographic images to a Department of Energy (DOE) network. #News #nuclear
Man Stores AI-Generated Robot Porn on His Government Computer, Loses Access to Nuclear Secrets
A man who works for the people overseeing America’s nuclear stockpile has lost his security clearance after he uploaded 187,000 pornographic images to a Department of Energy (DOE) network. As part of an appeals process in an attempt to get back his security clearance, the man told investigators he felt his bosses spied on him too much and that the interrogation over the porn snafu was akin to the “Spanish Inquisition.”

On March 23, 2023, a DOE employee attempted to back up his personal porn collection. His goal was to use the 187,000 images collected over the past 30 years as training data for an AI image generator. He said he had depression, something he’d struggled with since he was a kid. “During the depressive episode he felt ‘extremely isolated and lonely,’ and started ‘playing’ with tools that made generative images as a coping strategy, including ‘robot pornography,’” according to a DOE report on the incident.
Fueled by depression, the man meant to back up his collection and create a base for training AI to make better “robot pornography,” but he uploaded it to the government computer by accident. He didn’t realize what he’d done until DOE investigators came calling six months later to ask why their servers were now filled with thousands of pornographic pictures.

“The Individual ‘thought that even though his personal drives were connected to [his employer’s], they were somehow partitioned, and his personal material would not contaminate his [government-issued computer],’” a DOE report said.
According to the report, the man was using his cellphone to look at AI-generated porn images, but the screen wasn’t big enough, so he moved the pictures to his government computer. “He also reported that, since the 1990s, he had maintained a ‘giant compressed file with several directories of pornographic images,’ which he moved to his personal cloud storage drive so he could use them to make generative images,” the report said. “It was this directory of sexually explicit images that was ultimately uploaded to his employer’s network when he performed a back-up procedure on March 23, 2023.”
The 187,000 images represented a lifetime’s collection. “He stated that the sexually explicit images were an accumulation of ‘25–30 years worth of pornographic material’ he had collected on his personal computer,” the report said. He told a DOE psychologist that he should have realized he’d backed up his personal porn collection to a DOE network but said he “was not thinking multiple steps ahead or considering the consequences at the time because he was so depressed.”
According to the DOE employee, he’s been treated for depression since he was a kid. He has ups and downs, and was in a bad headspace when he accidentally uploaded his entire porn collection. He admitted he violated HR rules, but “did not think it was very wrong,” according to the DOE ruling. He also “asserted that his employer ‘was spying on him a little too much’...and compared the interview with his employer following the discovery of his conduct to ‘the Spanish Inquisition.’”
When someone loses their security clearance with the DOE, they can appeal to get it back. In this case, the appeal led to a lengthy investigation and multiple interviews with various DOE psychologists and the man’s wife. When the DOE rules on an appeal, it publishes the decision publicly online, which is why we know about the man’s private porn stash.
He did not get his clearance back. “The DOE Psychologist opined that the individual's probability of experiencing another depressive episode in the future was ‘very high,’” according to the report.
PSH-24-0142 - In the Matter of Personnel Security Hearing
Access Authorization Not Restored; Guideline I (Psychological Conditions) and Guideline M (Use of Information Technology) Energy.gov
A prominent beer competition introduced an AI-judging tool without warning. The judges and some members of the wider brewing industry were pissed. #News #AI
What Happened When AI Came for Craft Beer
A prominent beer judging competition introduced an AI-based judging tool without warning in the middle of a competition, surprising and angering judges who believed their evaluation notes for each beer were being used to improve the AI, according to multiple interviews with judges involved. The company behind the competition, called Best Beer, also planned to launch a consumer-facing app that would use AI to match drinkers with beers, the company told 404 Media.

Best Beer also threatened legal action against one judge who wrote an open letter criticizing the use of AI in beer tasting and judging, according to multiple judges and text messages reviewed by 404 Media.
The months-long episode shows what can happen when organizations try to push AI onto a hobby, pursuit, art form, or even industry which has many members who are staunchly pro-human and anti-AI. Over the last several years we’ve seen it with illustrators, voice actors, musicians, and many more. AI came for beer too.
“It is attempting to solve a problem that wasn’t a problem before AI showed up, or before big tech showed up,” said Greg Loudon, a certified beer judge and brewery sales manager and the judge who was threatened with legal action. “I feel like AI doesn’t really have a place in beer, and if it does, it’s not going to be in things that are very human.”
“There’s so much subjectivity to it, and to strip out all of the humanity from it is a disservice to the industry,” he added. Another judge said the introduction of AI was “enshittifying” beer tasting.
Do you know anything else about how AI is impacting beer? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This story started earlier this year at a Canadian Brewing Awards judging event. Best Beer is the company behind the Canadian Brewing Awards, which gives awards in categories such as Experimental Beer, Speciality IPA, and Historic/Regional Beers. To be a judge, you have to be certified by the Beer Judge Certification Program (BJCP), which involves an exam covering the brewing process, different beer styles, judging procedures, and more.
Around the third day of the competition, the judges were asked to enter their tasting notes into a new AI-powered app instead of the platform they already use, one judge told 404 Media. 404 Media granted the judge anonymity to protect them from retaliation.
Using the AI felt like it was “parroting back bad versions of your judge tasting notes,” they said. “There wasn't really an opportunity for us to actually write our evaluation.” Judges would write what they thought of a beer, and the AI would generate several descriptions based on the judges’ notes that the judge would then need to select. It would then provide additional questions for judges to answer that were “total garbage.”
“It was taking real human feedback, spitting out crap, and then making the human respond to more crap that it crafted for you,” the judge said.
“On top of all the misuse of our time and disrespecting us as judges, that really frustrated me—because it's not a good app,” they said.
Screenshot of a Best Beer-related website.
Multiple judges then met to piece together what was happening, and Loudon published his open letter in April.
“They introduced this AI model to their pool of 40+ judges in the middle of the competition judging, surprising everyone for the sudden shift away from traditional judging methods,” the letter says. “Results are tied back to each judge to increase accountability and ensure a safe, fair and equitable judging environment. Judging for competitions is a very human experience that depends on people filling diverse roles: as judges, stewards, staff, organizers, sorters, and venue maintenance workers,” the letter says.
“Their intentions to gather our training data for their own profit was apparent,” the letter says. It adds that one judge said “I am here to judge beer, not to beta test.”
The letter concluded with this: “To our fellow beverage judges, beverage industry owners, professionals, workers, and educators: Sign our letter. Spread the word. Raise awareness about the real human harms of AI in your spheres of influence. Have frank discussions with your employers, colleagues, and friends about AI use in our industry and our lives. Demand more transparency about competition organizations.”
Thirty-three people signed the letter. They included judges, breweries, and members of homebrewer associations in Canada and the United States.
Loudon told 404 Media in a recent phone call “you need to tell us if you're going to be using our data; you need to tell us if you're going to be profiting off of our data, and you can't be using volunteers that are there to judge beer. You need to tell people up front what you're going to do.”
At least one brewery that entered its beer into the Canadian Brewing Awards publicly called out Best Beer and the awards. XhAle Brew Co., based out of Alberta, wrote in a Facebook post in April that it asked for its entry fees of $565 to be refunded, and for the “destruction of XhAle's data collected during, and post-judging for the Best Beer App.”

“We did not consent to our beer being used by a private equity tech fund at the cost to us (XhAle Brew Co. and Canadian Brewers) for a for-profit AI application. Nor do we condone the use of industry volunteers for the same purpose,” the post said.
Ob Simmonds, head of innovation at the Canadian Brewing Awards, told 404 Media in an email that “Breweries will have amazing insight on previously unavailable useful details about their beer and their performance in our competition. Furthermore, craft beer drinkers will be able to better sift through the noise and find beers perfect for their palate. This in no way is aimed at replacing technical judging with AI.”
With the consumer app, the idea was to “Help end users find beers that match their taste profile and help breweries better understand their results in our competition,” Simmonds said.
Simmonds said that “AI is being used to better match consumers with the best beers for their palate,” but said Best Beer is not training its own model.
Those plans have come to a halt though. At the end of September, the Canadian Brewing Awards said in an Instagram post the team was “stepping away.” It said the goal of Best Beer was to “make medals matter more to consumers, so that breweries could see a stronger return on their entries.” The organization said it “saw strong interest from many breweries, judges and consumers” and that it will donate Best Beer’s assets to a non-profit that shows interest. The post added the organization used third-party models that “were good enough to achieve the results we wanted,” and the privacy policies forbade training on the inputted data.
A screenshot of the Canadian Brewing Awards' Instagram post.
The post included an apology: “We apologize to both judges and breweries for the communication gaps and for the disruptions caused by this year’s logistical challenges.”

In an email sent to 404 Media this month, the Canadian Brewing Awards said “the Best Beer project was never designed to replace or profit from judges.”
“Despite these intentions, the project came under criticism before it was even officially launched,” it added, saying that the open letter “mischaracterized both our goals and approach.”
“Ultimately, we decided not to proceed with the public launch of Best Beer. Instead, we repurposed parts of the technology we had developed to support a brewery crawl during our gala. We chose to pause the broader project until we could ensure the judging community felt confident that no data would be used for profit and until we had more time to clear up the confusion,” the email added. “If judges wanted their data deleted what assurance can we provide them that it was in fact deleted. Everything was judged blind and they would have no access to our database from the enhanced division. For that reason, we felt it was more responsible to shelve the initiative for now.”
One judge told 404 Media: “I don’t think anyone who is hell bent on using AI is going to stop until it’s no longer worth it for them to do so.”
“I just hope that they are transparent if they try to do this again to judges who are volunteering their time, then either pay them or give them the chance ahead of time to opt-out,” they added.
Now months after this all started, Loudon said “The best beers on the market are art forms. They are expressionist. They're something that can't be quantified. And the human element to it, if you strip that all away, it just becomes very basic, and very sanitized, and sterilized.”
“Brewing is an art.”
XhAle Brew Co.
XhAle is not just a craft beer company. We are a company comprised of majority equity-deserving folks, and have been and still are marginalized in this industry. We understand and have personally... www.facebook.com
A hack impacting Discord’s age verification process shows in stark terms the risk of tech companies collecting users’ ID documents. Now the hackers are posting peoples’ IDs and other sensitive information online. #News
The Discord Hack Is Every User’s Worst Nightmare
A catastrophic breach has impacted Discord user data including selfies and identity documents uploaded as part of the app’s verification process, email addresses, phone numbers, approximately where the user lives, and much more.

The hack, carried out by a group that is attempting to extort Discord, shows in stark terms the risk of tech companies collecting users’ identity documents, specifically in the context of verifying their age. Discord recently started asking users in the UK, for example, to upload a selfie with their ID as part of the country’s age verification law.
Do you know anything else about this breach? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“This is about to get really ugly,” the hackers wrote in a Telegram channel, which 404 Media joined, while posting user data on Wednesday. A source with knowledge of the breach confirmed to 404 Media that the data is legitimate. 404 Media granted the source anonymity to speak candidly about a sensitive incident.
‘We don’t want democracy lol. We want caliphate.’ According to court records, an Oklahoma guardsman with a security clearance gave 3D printed firearms to an FBI agent posing as an Al Qaeda contact. #News
National Guardsman Planned American Caliphate on Discord, Sent 3D Printed Guns to Al Qaeda, Feds Say
The FBI accused a former National Guardsman living in Tulsa, Oklahoma of trying to sell 3D printed guns to Al Qaeda. According to an indictment unsealed by the Justice Department in September, 25-year-old Andrew Scott Hastings used a Discord server to plan a caliphate in America and shipped more than 100 3D printed machine gun conversion kits to an undercover FBI operative who claimed he had contacts in the terrorist organization.
The Army Times first reported the story after the DOJ unsealed the charging documents. According to the court records, Hastings first landed on the radar of authorities in 2019 when a co-worker at an Abuelo’s restaurant in Tulsa called the police to report he’d been talking about blowing things up. When the cops interviewed Hastings, he told them he was just interested in chemistry and The Anarchist’s Cookbook. In 2020, the cops interviewed his mom. “Hastings’ mother, Terri, told TPD that her son was on the [autism] spectrum, was socially active online, and had converted to Islam.”

According to Terri, odd incidents piled up. She said that someone mailed Hastings a Quran, that he’d once received an order of chicken wings paid for by someone in Indonesia, and that he’d once threatened his family with a can of gasoline. “She also mentioned an incident in the family home where Hastings became enraged when she cooked bacon, and thereafter called someone she described as his ‘handler,’” according to court records.
The charging documents said the FBI got involved in 2024 because of a Discord server called “ARMY OF MUHAMMAD.” Discord cooperated with the FBI investigation and granted access to some of Hastings’ records to authorities. The FBI alleged that Hastings met with several other people on the Discord server and plotted terror attacks against Americans. At this time, Hastings worked for the National Guard as an aircraft powertrain repairer and held a SECRET-level national security clearance.
The charging documents detailed Hastings' alleged plot to establish a caliphate in the US via Discord. “[T]he most important theater right now is cyberspace…we need an actionable plan we can start work on--something slow and Ling(sic) term not hasty and slapdash,” Hastings allegedly said on Discord. “I think it would be best if we create a channel and I’ll list a physical training routine.”
“If we get 9-10 guys maybe inshaAllah we can …we could put headquarters in the USA cuz yk [you know] if we are fighting them the military is prohibited from operations on the homeland only ntnal [sic] guard and agencies can operate within borders…[y]ou need to contest air land and cyberspace…what my plan addresses is how to contest all of these at once while providing more aid than harm we can do in collateral and taking out targets of higher strength.”
According to the FBI’s version of events, Hastings talked about moving the group off of Discord and onto Signal because he believed Discord wasn’t secure. He also bragged about police interrogating him about explosives and “claimed to have made a firearm and discussed making a nuclear rocket.”
“We don’t want democracy lol,” he said on Discord, according to court records. “We want caliphate.”
Hastings talked about other groups he was in contact with on Signal, offered to make training videos about weapon handling, and told others on the Discord server that he knew how to make firearms and was willing to ship them to like-minded militants. “I already have some small arms components partially finished and nearly ready to issue,” he said, according to the charging document. “I’ll send one photo but wanna remain kinda anonymous.”
The FBI said it slipped an “Online Covert Employee” (OCE) into Hastings’ life on March 26, 2025. Posing as a person on eBay, the FBI employee told Hastings he had contacts with Al Qaeda. “The OCE then recommended they move the conversation to Telegram or Signal, the latter of which Hastings said did not even have ‘a backdoor,’ meaning it could not be hacked or intercepted by law enforcement.”
The issue, of course, is that Hastings was speaking with an FBI employee. Over the next few months, Hastings spoke with the OCE about using a 3D printer to manufacture weapons for them with the eventual goal of getting them in the hands of Al Qaeda. Hastings allegedly told the OCE that he’d been discharged from the military and needed to make money.
In the summer of 2025, the FBI alleged that Hastings started mass printing Glock parts and switch conversion kits for Al Qaeda. “Hastings told the OCE he was moving out of his parent’s home in July 2025 after they complained about the noise and smell created when he 3D printed weapons,” the court documents said. The FBI allegedly has video of Hastings at a post office shipping multiple packages that summer that authorities said contained more than 100 3D printed switches, two 3D printed lower receivers for a Glock, and one 3D printed Glock slide.
The FBI has charged Hastings with attempting to provide material support or resources to designated foreign terrorist organizations and illegal possession or transfer of a machinegun. The Justice Department considers every single 3D printed conversion kit Hastings shipped an individual machinegun, even when it is not installed.
Former Guardsman charged with trying to provide weapons to al-Qaida
A 25-year-old former Army National Guardsman faces federal charges that he attempted to provide al-Qaida with 3D-printed weapons. Todd South (Army Times)
Eyes Up's purpose is to "preserve evidence until it can be used in court." But it has been swept up in Apple's crackdown on ICE-spotting apps. #News
Apple Banned an App That Simply Archived Videos of ICE Abuses
Apple removed an app for preserving TikToks, Instagram reels, news reports, and videos documenting abuses by ICE, 404 Media has learned. The app, called Eyes Up, differs from other banned apps such as ICEBlock, which were designed to report sightings of ICE officials in real time to warn local communities. Eyes Up, meanwhile, was more of an aggregation service pooling together information to preserve evidence in case the material is needed in court in the future.

The news shows that Apple and Google’s crackdown on ICE-spotting apps, which started after pressure from the Department of Justice against Apple, is broader in scope than apps that report sightings of ICE officials. It has also impacted at least one app that was more about creating a historical record of ICE’s activity during its mass deportation effort.
“Our goal is government accountability, we aren’t even doing real-time tracking,” the administrator of Eyes Up, who said their name was Mark, told 404 Media. Mark asked 404 Media to only use his first name to protect him from retaliation. “I think the [Trump] admin is just embarrassed by how many incriminating videos we have.”
Do you work at Apple or Google and know anything else about these app removals? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Mark said the app was removed on October 3. At the time of writing, the Apple App Store says “This app is currently not available in your country or region” when trying to download Eyes Up.
The website for Eyes Up, which functions essentially the same way, is still available. The site includes a map with dots that visitors can click on, which then plays a video from that location. Users are able to submit their own videos for inclusion. Mark said he manually reviews every video before it is uploaded to the service, to check its content and its location.
“I personally look at each submission to ensure that it's relevant, accurately described to the best I can tell, and appropriate to post. I actually look at the user submitted location and usually cross-reference with [Google] Street View to verify. We have an entire private app just for moderation of the submissions,” Mark said.
Screenshots of Eyes Up.
The videos available on Eyes Up are essentially the same as those you might see when scrolling through TikTok, Instagram, or X. They are a mix of professional media reports and user-generated clips of ICE arrests. Many of the videos are clearly just re-uploads of material taken from those social media apps, and still include TikTok or Instagram watermarks. Mark said the videos are also often taken from Reddit or the community- and crime-awareness app Citizen.
Many of the videos from New York are footage of ICE officials aggressively detaining people inside the city’s courts, something ICE has been doing for months. Another is a video from the New York Immigration Coalition (NYIC), which represents more than 200 immigrant and refugee rights groups. Another is an Instagram video showing ICE taking “a mother as her child begs the officers not to take her,” according to a caption on the video. The map includes similar videos from San Diego, Los Angeles, and Portland, Oregon, which are clearly taken from TikTok or media reports, including NBC News.
“Our goal is to preserve evidence until it can be used in court, and we believe the mapping function will make it easier for litigants to find bystander footage in the future,” Mark said.
Aaron Reichlin-Melnick, senior fellow at the American Immigration Council, told 404 Media “Like any other government agency, DHS is required to follow the law. The collection of video evidence is a powerful tool of oversight to ensure that the government respects the rights of citizens and immigrants alike. People have a right to film interactions with law enforcement in public spaces and to share those videos with others.”
“If DHS is concerned that the actions of their own officers might inflame public opinion against the agency, they should work to increase oversight and accountability at the agency — rather than seek to have the evidence banned,” he added.
Apple removed ICEBlock, another much more prominent app, from its App Store on Thursday. The move came after direct pressure from Department of Justice officials acting at the direction of Attorney General Pam Bondi, according to Fox. A statement the Department of Justice provided to 404 Media said the agency reached out to Apple “demanding they remove the ICEBlock app from their App Store—and Apple did so.” Fox says authorities have claimed that Joshua Jahn, the suspect in a September shooting at an ICE facility in which a detainee was killed, searched his phone for various tracking apps before attacking the facility.
Joshua Aaron, the developer of ICEBlock, told 404 Media “we are determined to fight this.”
ICEBlock allowed people to create an alert, based on their location, about ICE officials in their area. This then sent an alert to other users nearby.
Apple also removed another similar app called Red Dot, 404 Media reported. Google did the same thing, and described ICE officials as a vulnerable group. Apple also removed an app called DeICER.
Yet Eyes Up differs from those apps in that it does not function as a real-time location reporting app.

Apple did not respond to a request for comment on Wednesday about Eyes Up’s removal.
Mark provided 404 Media with screenshots of the emails he received from Apple. In the emails, Apple says Eyes Up violates the company’s guidelines around objectionable content. That can include “Defamatory, discriminatory, or mean-spirited content, including references or commentary about religion, race, sexual orientation, gender, national/ethnic origin, or other targeted groups, particularly if the app is likely to humiliate, intimidate, or harm a targeted individual or group. Professional political satirists and humorists are generally exempt from this requirement.”
The emails also say that law enforcement have provided Apple with information that shows the purpose of the app is “to provide location information about law enforcement officers that can be used to harm such officers individually or as a group.”
The emails are essentially identical to those sent to the developer of ICEBlock, which 404 Media previously reported on.
In an appeal to the app removal, Mark told Apple “the posts on this app are significantly delayed and subject to manual review, meaning the officers will be long gone from the location by the time the content is posted to be viewed by the public. This would make it impossible for our app to be used to harm such officers individually or as a group.”
“The sole purpose of Eyes Up is to document and preserve evidence of abuses of power by law enforcement, which is an important function of a free society and constitutionally protected,” Mark’s response adds.
Apple then replied and said the ban remains in place, according to another email Mark shared.
The app is available on Google's Play Store.
Update: this piece has been updated to include comment from Aaron Reichlin-Melnick.
SCOOP: Apple Quietly Made ICE Agents a Protected Class
Internal emails show tech giant used anti-hate-speech rules meant for minorities to block an app documenting immigration enforcement. Pablo Manríquez (Migrant Insider)
New leaked documents show how the FBI convinced a judge to let its partners collect a mass of encrypted messages from thousands of phones around the world.#News
Cocaine in Private Jets and Sex Toys: What the FBI Found on its Secretly Backdoored Chat App
Private jets loaded with cocaine landing at an airport in Germany. A trafficker stuffing a racing sail boat with drugs and entering a tournament to blend in with other racers before speeding off. Vacuum-sealed layers of methamphetamine inside solar panels. And nearly 60 kilograms of drugs hidden inside a shipment of sex toys.

These are just some of the examples included in a cache of leaked U.S. Department of Justice documents the FBI used to convince a judge to let them continue harvesting messages from Anom. Anom was an encrypted phone and app the FBI secretly took over, backdoored, and ran for years as a tech company popular with organized crime around the world. The Anom operation, dubbed Trojan Shield, was the largest sting operation ever.
The documents provide more insight into the sorts of criminals swept up in the FBI’s investigation, and give behind-the-scenes detail on how exactly the FBI obtained legal approval for such a gigantic, and to some controversial, operation. The leaked documents include the original court orders from Lithuania, which assisted the FBI in collecting the data from Anom devices worldwide, and the FBI’s supporting documentation for those court orders. The documents were not supposed to be released publicly, but someone posted them anonymously online.
💡
Do you know anything else about Anom, Sky, Encrochat, or other encrypted phone companies? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“Like I said this Turbo crew are the Pablo Escobar of this time in that area and got full control there,” one message written by an alleged drug trafficker included in the documents reads.
404 Media showed sections of the documents to people with direct knowledge of the operation who said they appeared authentic. Finnish outlet Yle reported on some of their contents at the end of September, but 404 Media is publishing copies of the documents themselves.
In 2018 the FBI shut down an encrypted phone company called Phantom Secure. In the wake of that, a seller from Phantom Secure and another popular company called Sky offered U.S. authorities their own, in-development encrypted device: Anom. The FBI then took Anom under its wing and oversaw a backdoor placed into the app. This involved adding a “ghost” contact to every group chat and direct message across the platform. The operation started in Australia as a beta test, before expanding to Europe, South America, and other parts of the world, sweeping up messages from cartels to biker gangs to hitmen to money launderers.
A screenshot from the documents.
Some of the documents are formal requests for continued assistance from the U.S. to Lithuania and spell out the sort of criminal activity the FBI has seen on the Anom platform. Several sections name specific and well-known drug traffickers. One is Maximillian Rivkin, also known as “Microsoft.” As I chronicled in my book about Anom, Rivkin was a devilishly creative drug trafficker, constantly making new schemes to smuggle cocaine or other narcotics. The new documents say Rivkin’s Serbia-based organized crime group was involved in the trafficking of hundreds of kilograms of cocaine between South America and Spain. To move the drugs, the group sailed a boat during a November 2020 regatta, a sailing race, “where their travel will be obscured by other boats and sail to the Caribbean,” the documents say. Around two or three weeks later, the boat would return to Europe with the cocaine, which would be dropped off the coast of Spain, where another member of the group would pick it up, the documents add.

In another instance, Rivkin’s group smuggled cocaine base within juice bottles from Colombia to Europe, according to the documents. In my book, I found Rivkin planned to do something similar with energy drinks.
Dark Wire
Written “in the manner of a good crime thriller” (The Wall Street Journal), the inside story of the largest law-enforcement sting operation ever. (PublicAffairs)
These sorts of audacious, over-the-top drug smuggling operations were a common sight on Anom, according to my own review of hundreds of thousands of Anom text messages between drug traffickers I previously obtained. The new documents also specifically name Hakan Ayik, who was the head of the so-called Aussie Cartel, which controlled as much as a third of all drug importation into Australia, and who at one point was Australia’s most wanted criminal. Ayik discussed sending a massive 900 kilograms of cocaine through Malaysia to Australia concealed within shipments of scrap metal, according to the documents.

“Can you give me roughly the coordinates where’s the better place to meet outside Indonesian waters,” Ayik, using the moniker Oscar, said in one of the messages included in the documents.
Both Rivkin and Ayik were later arrested by Turkish authorities. Ayik was also known as the “encryption king,” likely due to his prolific selling of encrypted communication devices to organized criminals.

Other examples in the documents include a Dutch drug trafficking group involving a man called Guiliano Domenico Azzarito. That group smuggled cocaine between South America and Europe with private jet flights into small and medium-sized airports the group controls, according to the documents. “Look we can move 20 tons easily every month from here in the future,” one message said.
Another describes Baris Tukel, a high-ranking Comanchero motorcycle gang member who was later charged by the U.S. for helping to spread Anom devices, discussing plans to hide methamphetamine and MDMA in marble tiles. In another case, a drug trafficker with the username RealG discussed smuggling drugs on a sailboat and inside shipments of bananas and hides, and hiding cocaine base inside fertilizer.
In September 2020, a drug trafficking group smuggled a shipment of cocaine and methamphetamine from the UK, through Singapore, to Australia, according to the documents. Authorities later searched the shipment, and found nearly 60 kilograms of drugs “concealed within 21 boxes of sex toys,” the documents say.
A screenshot from the documents.
The messages included in the documents also detail some of the extreme violence Anom users engaged in. Simon Bekiri, a Comanchero member, discussed an assault against a rival gang, according to the documents. “I even pistol whipped him 3 times and blood was squirting out of his head almost a meter high in time with his heartbeat (That part was really funny),” one of the messages reads. “But when you say I pistol whipped him, shot him, bashed him and then took off in his car I’ll admit it does sound violent.”

These examples were used to help convince a Lithuanian judge to allow local authorities to continue providing the FBI with Anom messages. In an unusual legal workaround, instead of running the Anom collection server in the U.S., which may have created more legal headaches, the Department of Justice arranged for it to be run in Lithuania. Lithuanian authorities then provided a regular stream of collected Anom messages to the FBI. In all, Anom grew to 12,000 devices and the FBI collected tens of millions of messages before shutting the network down in June 2021.
404 Media first revealed in September 2023 that Lithuania was the so-called “third country” that harvested the messages for the FBI. The Department of Justice has never formally acknowledged Lithuania’s role, even as the leaked documents further corroborate 404 Media’s reporting.
Revealed: The Country that Secretly Wiretapped the World for the FBI
For years the FBI ran its own encrypted phone company to intercept messages from thousands of people around the globe. One country was critical to that operation, whose identity was unknown to the public. Until now. Joseph Cox (404 Media)
Bypassing Sora 2’s rudimentary safety features is easy and experts worry it’ll lead to a new era of scams and disinformation.#News #AI
Sora 2 Watermark Removers Flood the Web
Sora 2, OpenAI’s new AI video generator, puts a visual watermark on every video it generates. But the little cartoon-eyed cloud logo meant to help people distinguish between reality and AI-generated bullshit is easy to remove, and there are half a dozen websites that will help anyone do it in a few minutes.

A simple search for “sora watermark” on any social media site will return links to places where a user can upload a Sora 2 video and remove the watermark. 404 Media tested three of these websites, and they all seamlessly removed the watermark from the video in a matter of seconds.
Hany Farid, a UC Berkeley professor and an expert on digitally manipulated images, said he’s not shocked at how fast people were able to remove watermarks from Sora 2 videos. “It was predictable,” he said. “Sora isn’t the first AI model to add visible watermarks and this isn’t the first time that within hours of these models being released, someone released code or a service to remove these watermarks.”
Hours after its release on September 30, Sora 2 emerged as a copyright violation machine full of Nazi SpongeBobs and criminal Pikachus. OpenAI has tamped down on that kind of content after the initial thrill of seeing Rick and Morty shill for crypto sent people scrambling to download the app. Now that the novelty is wearing off, we’re grappling with the unpleasant fact that OpenAI’s new tool is very good at making realistic videos that are hard to distinguish from reality.

To keep us all from going mad, OpenAI has offered watermarks. “At launch, all outputs carry a visible watermark,” OpenAI said in a blog post. “All Sora videos also embed C2PA metadata—an industry-standard signature—and we maintain internal reverse-image and audio search tools that can trace videos back to Sora with high accuracy, building on successful systems from ChatGPT image generation and Sora 1.”
But experts say that those safeguards fall short. “A watermark (visual label) is not enough to prevent persistent nefarious users attempting to trick folks with AI generated content from Sora,” Rachel Tobac, CEO of SocialProof Security, told 404 Media.
Tobac also said she’s seen tools that dismantle AI-generated metadata by altering the content’s hue and brightness. “Unfortunately we are seeing these Watermark and Metadata Removal tools easily break that standard,” Tobac said of the C2PA metadata. “This standard will still work for less persistent AI slop generators, but will not stop dedicated bad actors from tricking people.”
As an example of how much trouble we’re in, Tobac pointed to an AI-generated video that went viral on TikTok over the weekend, which she called “stranger husband train.” In the video, a woman riding the subway cutely proposes marriage to a complete stranger sitting next to her. He accepts. One instance of the video has been liked almost 5 million times on TikTok. It didn’t have a watermark.
“We're already seeing relatively harmless AI Sora slop confusing even the savviest of Gen Z and Millennial users,” Tobac said. “With many typically-savvy commenters naming how ‘cooked’ we are because they believed it was real. This type of viral AI slop account will attempt to make as much money from the creator fund as possible before social media companies learn they need to invest in detecting and limiting AI slop, before their platform succumbs to the Slop Fest.”
But it’s not just the slop. It’s also the scams. “At its most innocuous, AI generated content without watermarking and metadata accelerates the enshittification of the internet and tricks people with inflammatory content,” Tobac said. “At its most malignant, AI generated content without watermarking and metadata could lead to every day people losing their savings in scams, becoming even more disenfranchised during election season, could tank a stock price within a few hours, could increase the tension between differing groups of people, and could inspire violence, terrorism, stampede or panic amongst everyday folks.”
Tobac showed 404 Media a few horrifying videos to illustrate her point. In one, a child pleads with their parents for bail money. In another, a woman tells the local news she’s going home after trying to vote because her polling place was shut down. In a third, Sam Altman tells a room that he can no longer keep OpenAI afloat because the copyright cases have become too much to handle. All of the videos looked real. None of them had a watermark.
“All of these examples have one thing in common,” Tobac said. “They’re attempting to generate AI content for use off Sora 2’s platform on other social media to create mass or targeted confusion, harm, scams, dangerous action, or fear for everyday folk who don’t understand how believable AI can look now in 2025.”
Farid told 404 Media that Sora 2 wasn’t uniquely dangerous. It’s just one among many. “It is part of a continuum of AI models being able to create images and video that are passing through the uncanny valley,” he said. “Having said that, both Veo 3 and Sora 2 are big steps in our ability to create highly visual compelling videos. And, it seems likely that the same types of abuses we’ve seen in the past will be supercharged by these new powerful tools.”
According to Farid, OpenAI is decent at employing strategies like watermarks, content credentials, and semantic guardrails to manage malicious use. But it doesn’t matter. “It is just a matter of time before someone else releases a model without these safeguards,” he said.
Both Tobac and Farid said that the ease with which people can remove watermarks from AI-generated content wasn’t a reason to stop using watermarks. “Using a watermark is the bare minimum for an organization attempting to minimize the harm that their AI video and audio tools create,” Tobac said, but she thinks the companies need to go further. “We will need to see a broad partnership between AI and Social Media companies to build in detection for scams/harmful content and AI labeling not only on the AI generation side, but also on the upload side for social media platforms. Social Media companies will also need to build large teams to manage the likely influx of AI generated social media video and audio content to detect and limit the reach for scammy and harmful content.”
Tech companies have, historically, been bad at that kind of moderation at scale.
“I’d like to know what OpenAI is doing to respond to how people are finding ways around their safeguards,” Farid said. “We are seeing, for example, Sora not allowing videos that reference Hitler in the prompt, but then users are finding workarounds by simply describing what Hitler looks like (e.g., black hair, black military outfit and a Charlie Chaplin mustache.) Will they adapt and strengthen their guardrails? Will they ban users from their platforms? If they are not aggressive here, then this is going to end badly for us all.”
OpenAI did not respond to 404 Media’s request for comment.
Libraries have shared their collections internationally for decades. Trump’s tariffs are throwing that system into chaos and can ‘hinder academic progress.’#News
Libraries Can’t Get Their Loaned Books Back Because of Trump’s Tariffs
The Trump administration’s tariff regime and the elimination of fee exemptions for items under $800 are limiting resource sharing between university libraries, trapping some books in foreign countries, and reversing long-held standards in academic cooperation.

“There are libraries that have our books that we've lent to them before all of this happened, and now they can't ship them back to us because their carrier either is flat out refusing to ship anything to the U.S., or they're citing not being able to handle the tariff situation,” Jessica Bower Relevo, associate director of resource sharing and reserves at Yale University Library, told me.
After Trump’s executive order ended the de minimis exemption, which allowed people to buy things internationally without paying tariffs if the items cost less than $800, we’ve written several stories about how the decision caused chaos for a wide variety of hobbies that rely on people buying things overseas, especially on eBay, where many of those transactions take place.
Libraries that share their materials internationally are in a similar mess, partly because some countries’ mail services stopped shipments to and from the U.S. entirely, but the situation for them is arguably even more complicated because they’re not selling anything—they’re just lending books.
“It's not necessarily too expensive. It's that they don't have a mechanism in place to deal with the tariffs and how they're going to be applied,” Relevo said. “And I think that's true of U.S. shipping carriers as well. There’s a lot of confusion about how to handle this situation.”
“The tariffs have impacted interlibrary loans in various ways for different libraries,” Heather Evans, a librarian at RMIT University in Australia, told me in an email. “It has largely depended on their different procedures as to how much they have been affected. Some who use AusPost [Australia’s postal service] to post internationally have been more impacted and I've seen many libraries put a halt on borrowing to or from the US at all.” (AusPost suspended all shipments to the United States but plans to renew them on October 7).
Relevo told me that in some cases books are held up in customs indefinitely, or are “lost in warehouses” where they are held for no clear reason.
As Relevo explains it, libraries often provide people in foreign institutions books in their collections by giving them access to digitized materials, but some books are still only available in physical copies. These are not necessarily super rare or valuable books, but books that are only in print in certain countries. For example, a university library might have a specialized collection on a niche subject because it’s the focus area of a faculty member, a French university will obviously have a deeper collection of French literature, and some textbooks might only be published in some languages.
A librarian’s job is to give their community access to information, and international interlibrary loans extend that mission to other countries by having libraries work together. In the past, if an academic in the U.S. wanted access to a French university’s deep collection of French literature, they’d have to travel there. Today, academics can often ask that library to ship them the books they want. Relevo said this type of lending has always been useful, but became especially popular and important during COVID lockdowns, when many libraries were closed and international travel was limited.
“Interlibrary loans has been something that libraries have been able to do for a really long time, even back in the early 1900s,” Relevo said. “If we can't do that anymore and we're limiting what our users can access, because maybe they're only limited to what we have in our collection, then ultimately could hinder academic progress. Scholars have enjoyed for decades now the ability to basically get whatever they need for their research, to be very comprehensive in their literature reviews or the references that they need, or past research that's been done on that topic, because most libraries, especially academic libraries, do offer this service [...] If we can't do that anymore, or at least there's a barrier to doing that internationally, then researchers have to go back to old ways of doing things.”
The Trump administration upended this system of knowledge sharing and cooperation, making life even harder for academics in the U.S., who are already fleeing to foreign universities because they fear the government will censor their research.
The American Library Association (ALA) has a group dedicated to international interlibrary lending, called the International Interlibrary Loan (ILL) Committee, which is nested in the Sharing and Transforming Access to Resources Section (STARS) of the Reference and User Services Association (RUSA). Since Trump’s executive order and tariffs regime, the RUSA STARS International ILL Committee has produced a site dedicated to helping librarians navigate the new, unpredictable landscape.
In addition to explaining the basic facts of the tariffs and de minimis, the site also shares resources and “Tips & Tricks in Uncertain Times,” which encourages librarians to talk to partner libraries before lending or borrowing books, and to “be transparent and set realistic expectations with patrons.” The page also links to an online form that asks librarians to share any information they have about how different libraries are handling the elimination of de minimis, in an attempt to crowdsource a better understanding of the new international landscape.
“Let's say this library in Germany wanted to ship something to us,” Relevo said. “It sounds like the postal carriers just don't know how to even do that. They don’t know how to pass that tariff on to the library that's getting the material, there's just so much confusion on what you would even do if you even wanted to. So they're just saying, ‘No, we're not shipping to the U.S.’”
Relevo told me that one thing the resource sharing community has talked about a lot is how to label packages so customs agents know they are not [selling] goods to another country. Relevo said that some libraries have marked the value of books they’re lending as $0. Others have used specific codes to indicate the package isn’t a good that’s being bought or sold. But there’s not one method that has worked consistently across the board.
“It does technically have value, because it's a tangible item, and pretty much any tangible item is going to have some sort of value, but we're not selling it,” she said. “We're just letting that library borrow it and then we're going to get it back. But the way customs and tariffs work, it's more to do with buying and selling goods and library stuff isn't really factored into those laws [...] it's kind of a weird concept, especially when you live in a highly capitalized country.”
Relevo said that the last 10-15 years have been a very tumultuous time for libraries, not just because of tariffs, but because of AI-generated content, the pandemic, and conservative organizations pressuring libraries to remove certain books from their collections.
“At the end of the day, us librarians just want to help people, so we're just trying to find the best ways to do that right now with the resources we have,” she said.
“What I would like the public to know about the situation is that their librarians as a group are very committed to doing the best we can for them and to finding the best options and ways to fulfill their requests and access needs. Please continue to ask us for what you need,” Evans said. “At the moment we would ask for a little extra patience, and perhaps understanding that we might not be able to get things as urgently for them if it involves the U.S., but we will do as we have always done and search for the fastest and most helpful way to obtain access to what they require.”
Police Bodycam Shows Sheriff Hunting for 'Obscene' Books at Library
Body camera footage from Idaho reveals a sheriff hunting for a YA book he could use for a political stunt. Jason Koebler (404 Media)
The move comes as Apple removed ICEBlock after direct pressure from U.S. Department of Justice officials and signals a broader crackdown on ICE-spotting apps.#News
Google Calls ICE Agents a Vulnerable Group, Removes ICE-Spotting App ‘Red Dot’
Both Google and Apple recently removed Red Dot, an app people can use to report sightings of ICE officials, from their respective app stores, 404 Media has found. The move comes after Apple removed ICEBlock, a much more prominent app, from its App Store on Thursday following direct pressure from U.S. Department of Justice officials. Google told 404 Media it removed apps because they shared the location of what it describes as a vulnerable group that recently faced a violent act connected to these sorts of ICE-spotting apps—a veiled reference to ICE officials.

The move signals a broader crackdown on apps that are designed to keep communities safe by crowdsourcing the location of ICE officials. Authorities have claimed that Joshua Jahn, the suspect in a September shooting at an ICE facility in which a detainee was killed, searched his phone for various tracking apps. A long-running immigration support group on the ground in Chicago, where ICE is currently focused, told 404 Media some of its members use Red Dot.
💡
Do you know anything else about these apps and their removal? Do you work at Google, Apple, or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“Ready to Protect Your Community?” the website for Red Dot reads. “Download Red Dot and help build a stronger protection network.”
The site provides links to the app’s page on the Apple App Store and Google Play Store. As of at least Friday, both of those links return errors. “This app is currently not available in your country or region,” says the Apple one, and “We're sorry, the requested URL was not found on this server,” says the Google one.
The app allows people to report ICE presence or activity, along with details such as the location and time, according to Red Dot’s website. The app then notifies nearby community members, and users can receive alerts about ICE activity in their area, the website says.
Google confirmed to 404 Media that it removed Red Dot. Google said it did not receive any outreach from the Department of Justice about this issue and that it bans apps with a high risk of abuse. Without talking about the shooting at the ICE facility specifically, the company said it removed apps that share the location of what it describes as a vulnerable group after a recent violent act against that group connected to this sort of app. Google said apps that have user-generated content must also conduct content moderation.
Google added in a statement that “ICEBlock was never available on Google Play, but we removed similar apps for violations of our policies.”

Google’s Play Store policies say the platform does not allow apps that “promote violence” against “groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, caste, immigration status, or any other characteristic that is associated with systemic discrimination or marginalization,” but its published policies do not include information about how it defines what types of groups are protected.
Red Dot did not respond to a request for comment.
On Thursday Apple told 404 Media it removed multiple ICE-spotting apps, but did not name Red Dot. Apple did not respond to another request for comment on Friday.
On Thursday Joshua Aaron, the developer of ICEBlock, told 404 Media “I am incredibly disappointed by Apple's actions today. Capitulating to an authoritarian regime is never the right move,” referring to Apple removing his own app. ICEBlock rose to prominence in June when CNN covered the app. That app was only available on iOS, while Red Dot was available on both iOS and Android.
“ICEBlock is no different from crowd sourcing speed traps, which every notable mapping application, including Apple's own Maps app, implements as part of its core services. This is protected speech under the first amendment of the United States Constitution,” Aaron continued. “We are determined to fight this with everything we have. Our mission has always been to protect our neighbors from the terror this administration continues to reign down on the people of this nation. We will not be deterred. We will not stop. #resist.”
That move from Apple came after pressure from Department of Justice officials on behalf of Attorney General Pam Bondi, according to Fox. “ICEBlock is designed to put ICE agents at risk just for doing their jobs, and violence against law enforcement is an intolerable red line that cannot be crossed. This Department of Justice will continue making every effort to protect our brave federal law enforcement officers, who risk their lives every day to keep Americans safe,” Bondi told Fox. The Department of Justice declined to comment beyond Bondi's earlier comments.
The current flashpoint for ICE’s mass deportation effort is Chicago. This week ICE raided an apartment building and removed everyone from the building only to ask questions later, according to local media reports. “They was terrified. The kids was crying. People was screaming. They looked very distraught. I was out there crying when I seen the little girl come around the corner, because they was bringing the kids down, too, had them zip tied to each other," one neighbor, Eboni Watson, told ABC7. “That's all I kept asking. What is the morality? Where's the human? One of them literally laughed. He was standing right here. He said, 'f*** them kids.’”
Brandon Lee, communications lead at Illinois Coalition for Immigrant and Refugee Rights, told 404 Media some of the organization’s teams have used Red Dot and similar apps as a way of taking tips. But the organization recommends people call its hotline to report ICE activity. That hotline has been around since 2011, Lee said. “The thing that takes time is the infrastructure of trust and training that goes into follow-up, confirmation, and legal and community support for impacted families, which we in Illinois have been building up over time,” he added.
“But I will say that at the end of the day it's important for all people of conscience to use their skills to shine some light on ICE's operations, given the agency's lack of transparency and overall lack of accountability,” he said, referring to ICE-spotting apps.
In ICEBlock’s case, people who already downloaded the app will be able to continue using it but will be unable to re-download it from the Apple App Store, according to an email from Apple that Aaron shared with 404 Media. Because Red Dot is available on Android, users can likely sideload the app—that is, install it themselves by downloading the APK file rather than getting it from the Play Store.
The last message to Red Dot’s Facebook page was on September 24 announcing a new update that fixed various bugs.
Update: this piece has been updated to include a response from the Department of Justice.
ICE agents raid South Shore apartments; Trump says Chicago could become military training ground
Anti-ICE protesters marched up Michigan Avenue on Tuesday evening. Cate Cauguiran (ABC7 Chicago)
Carleigh Beriont is running for Congress as an “anti-social Democrat” and she thinks the party needs to abandon social media nationally also.#News
Can You Win a Congressional Seat Without Social Media?
Carleigh Beriont is running for Congress, and if you know about her campaign, it’s definitely not for the same reason you’ve learned about other local politicians in recent years. Alexandria Ocasio-Cortez has become a household name in part because of her ability to use social media and livestreams to talk to people directly. Zohran Mamdani hasn’t even won an election yet, but is already a national political figure thanks in part to his fluency on TikTok.

Beriont, on the other hand, is not using social media at all. She’s been on Twitter, LinkedIn, and Facebook in the past, but has not been on social media since 2020, after getting frustrated with the kind of discussions and divisiveness she saw there.
Beriont is a former union organizer, a teacher, and vice chair of the local Select Board. Now, she is not only trying to win the Democratic primary for the New Hampshire District 1 congressional race, she has also made social media abstinence a part of her platform.
Eric Schildge, Beriont’s husband, reached out to me after reading my article about an Instagram account promoting Holocaust denial t-shirts, and explained that Beriont was promoting herself as an “anti-social Democrat” because she thinks “Democracy works better offline.”
According to Beriont’s campaign manager Carly Colby, Beriont raised over $232,000 from over 2,300 individual donors. Over 250 of these individuals donated in response to receiving a message specifically about Carleigh not using social media.
I called Beriont to find out why she thinks it’s possible to win an election without social media.
This interview has been edited for clarity and length.
404 Media: Why did you get off social media?
Carleigh Beriont: I'm a millennial, so I grew up like when Facebook required the .edu and it was a great way to connect with new classmates going into college and old friends when you had moved away from where you grew up, which I did. During the height of the Black Lives Matter protests [in 2020], there were a number of conversations that I saw happening on my feed where one relative would post something and a friend from school would post something, and they'd be yelling at each other, and I was like, these people don't even know each other and they're fighting online. It just felt like the experience was getting more and more degraded. It was more and more ads, more and more videos, less and less communication between people, and I signed off because I think that it was making it hard for me as an academic and a parent and someone who was very busy, to think clearly.

I was always worried about what I was going to say or that people were going to jump all over me, and I thought that was unhealthy. When I ran for office the first time in New Hampshire, I wasn't sure I'd be able to do it without social media. But I also realized that talking to people on the phone and meeting them at their doors or speaking in libraries, people weren't as angry or as opposed to one another as I'd been led to believe based on social media. And so I started to think, well, what if we don't use social media running for Congress? I mean, you've seen this week how bad things have gotten [Editor’s note: this interview took place the week Charlie Kirk was shot], and I just don't think that democracy works well online. We're seeing Donald Trump try to force the sale of TikTok to one of his biggest supporters’ children. We're seeing Mark Zuckerberg and Elon Musk and Jeff Bezos sitting in the front row of Trump's inauguration. They had better seats than Greg Abbott did, and these people are making billions of dollars off of us, and they are destroying our democracy in the process. I don't want to be a part of it. So when I think of what can I do, what can I change, we decided not to use social media during the campaign because we don't want to live in a world where that is where our politics take place, and how they're outsourced, because we don't think that it's productive for democracy.
404 Media: I told my colleagues I was doing this interview and one of them joked that the headline for the story could be: “Can You Win an Election Without Telling Anyone You’re Running?” I hate social media also but I think I have to use it to promote our articles. Don’t you think it’s a necessary evil for you as well?
Beriont: It's so funny. I wish I could get a shirt that was like, “necessary evil?” I do think that it's evil. I don't know that it's necessary. This campaign is a test for that. It's one thing for people who are trying to promote themselves or trying to sell things to use social media. I think it's another for our political leaders who are in a position where they should be holding corporations and the people who run them, like Mark Zuckerberg, responsible for their actions.

We're watching how the government is literally using that to surveil us and fire people for things that they're allegedly posting that are inappropriate about Charlie Kirk's assassination and things like that. It's incredibly risky for people to be using social media who are trying to preach a message of connection and community and democracy and equality and respect and dignity. I am not seeing those things on social media. Most of what people see, I believe a lot of it is AI. I believe a lot of it is an attempt to sell you something. I believe little of it is things that your friends and family are using as a way to actually connect. In New Hampshire, we've seen local police departments shut down the comment sections on their Facebook. We see political candidates deleting things that they don't like or comments that are negative. And so I think it just skews our sense of what's real and what's possible right now. And so that's why we're not using it.
Instead, we're doing something I'm calling district dialogs. As a facilitator and teacher, I'm happy to involve myself in messy, awkward conversations with people. I love teaching people how to stay in conversations and hold spaces. And so we're asking people what they wish politicians understood better. And we've had about 40 of these conversations throughout the district, and in almost every one we're hearing the same things from people who are exhausted by social media. They go on to check something, and two hours later they realize that they've lost two hours of their life, or they tried to find a post from a candidate, and instead, they got sucked into like some type of Nazi propaganda. And it's just such a shitty way to run a communication system and to run a country, and I think that we've done too much outsourcing to it, so it needs to stop.
404 Media: How are you reaching people without social media?
Beriont: We've been meeting in like public libraries and school cafeterias and church basements and driveways and living rooms, and asking people to bring some of their friends, or if it's a local democratic committee or some type of organization, asking them to invite people, and just sitting around and asking one another what we think we need to be doing right now. What people are saying after those meetings is they're so grateful that they had a chance to hear other people and to be heard, and they don't feel alone, and social media makes them feel alone. It makes them feel crazy, it makes them feel overwhelmed. And actually sitting and talking with the people in your community about what you can do to make it better is, I think, an antidote for a lot of that feeling of overwhelm and disassociation that people have right now.

I ask people what they think about my position on social media, and the number of people, especially millennials, say “I wish I could throw my phone out the window.” It seems to be really the political consultants and people who work in politics who are the most opposed to this idea, in part, because, for a lot of people, it's a low lift way to get involved. I think we have to ask ourselves whether it's actually an effective way of making a difference right now. I don't believe that that's the case in 2025.
404 Media: Have you done any polling or do you have any data that shows that this strategy is working?
Beriont: We haven't done any polling yet. It's tricky because there's six other people in this primary right now; one of the things that I think has been differentiating me is my willingness to sit and have a conversation. So a lot of politicians are operating the way that they have been trained to, which is to show up at a place, get a picture for Instagram or Facebook or Twitter, and then leave and people notice and are frustrated with that because they don't feel like they're actually getting an opportunity to talk with the people that want to represent them. As someone who has been on the other side of that, I decided to run because I was really frustrated with all of these monologues and these directed cameras telling me how to think or how to feel or how to vote or why, you know, the sense of reality that I had was wrong. And I think people really want more dialogue right now. They want more real, authentic exchanges. And I think they deserve that, and I think that that needs to be the foundation for democratic politics going forward.

404 Media: When I was in the VICE union there was an organizer with Writers Guild of America East who told us that support for the union on social media doesn’t mean anything, and can be counterproductive because it makes people feel like they’re supporting the union without actually supporting it. Is your no social media approach to campaigning influenced by your experience in union organizing?

Beriont: Yeah, absolutely. I was one of the people that helped organize the graduate student union at Harvard with the UAW. I think you're absolutely right about that. I also think that local politics has been great for this, because it's nonpartisan. And one of the things that I've realized is that in order to get things done in a space that is politically quite divided, you can't just be posting shit about your opponents the minute you don't get your way. You need to really build relationships and recognize that you're not always going to get your way, and this is true in a negotiation. When you show up to bargain at a table, you don't assume that you're going to get every single one of the things that you ask for, but you assume that people meet you in good faith and you'll be able to move forward. And I think that a lot of the relationship building and the coalition building that we need right now is lacking at the national level. We're seeing people pouring fuel on partisan fires and preaching to the choirs, and they're doing that to raise more money, and it's not winning over anybody, and it's not helping to de-escalate the situation that we're in right now. And I think that it's frankly making us a lot less safe, because instead of actually holding social media corporations accountable for what they're posting online, which they could be doing, they're choosing not to do that.
404 Media: Do you think a no social media strategy can work on a national level?
Beriont: Absolutely. I think it's well suited to New Hampshire because this is a state that is very used to hands on democracy. Our State House has 400 state reps in it, and we used to have the first primary in the nation. So most people in New Hampshire who are politically active are used to interacting with political candidates and politicians and getting to know them quite well, and expect that from their politicians. This is a state where the majority of politicians who run, if they're posting anything on Facebook, they're probably going to get like, two or three likes. And it just doesn't seem to be the most effective way to organize in a place like this. But I also think that, at the very least, we should be asking our politicians to get offline and stop exacerbating tensions on platforms that are only benefiting billionaires. They're buying our politicians. They're buying our politics. And it needs to stop somewhere. So it should probably start with the people who are attempting to be our leaders.
Apple removed ICEBlock reportedly after direct pressure from Department of Justice officials. “I am incredibly disappointed by Apple’s actions today. Capitulating to an authoritarian regime is never the right move,” the developer said.#News
ICEBlock Owner After Apple Removes App: ‘We Are Determined to Fight This’
The developer of ICEBlock, an app that lets people crowdsource sightings of ICE officials, has said he is determined to fight back after Apple removed the app from its App Store on Thursday. The removal came after pressure from Department of Justice officials acting at the direction of Attorney General Pam Bondi, according to Fox, which first reported the removal. Apple told 404 Media it has removed other similar apps too.

“I am incredibly disappointed by Apple's actions today. Capitulating to an authoritarian regime is never the right move,” Joshua Aaron told 404 Media. “ICEBlock is no different from crowd sourcing speed traps, which every notable mapping application, including Apple's own Maps app, implements as part of its core services. This is protected speech under the first amendment of the United States Constitution.”
💡
Do you know anything else about this removal? Do you work at Apple or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
For decades, scientists assumed that symmetry between the reflectivity of Earth’s hemispheres was a “fundamental property” of our planet. Now, that’s changed.#News
Earth Is Getting Darker, Literally, and Scientists Are Trying To Find Out Why
🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

It’s not the vibes; Earth is literally getting darker. Scientists have discovered that our planet has been reflecting less light in both hemispheres, with a more pronounced darkening in the Northern hemisphere, according to a study published on Monday in Proceedings of the National Academy of Sciences.
The new trend upends a longstanding symmetry in the albedo, or reflectivity, of the Northern and Southern hemispheres. Historically, clouds have circulated in a way that equalizes hemispheric differences, such as the uneven distribution of land, so that the two albedos roughly match—though nobody knows why.
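For readers unfamiliar with the term, albedo is just the fraction of incoming sunlight that a hemisphere reflects back to space, and the long-observed symmetry means the two hemispheres’ fractions have tracked each other almost exactly. A rough definitional sketch (the notation here is illustrative, not taken from the study):

\[
\alpha_{\text{hemisphere}} \;=\; \frac{F_{\text{reflected}}}{F_{\text{incoming}}},
\qquad
\text{hemispheric symmetry:}\ \alpha_{N} \approx \alpha_{S}
\]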
“There are all kinds of things that people have noticed in observations and simulations that tend to suggest that you have this hemispheric symmetry as a kind of fundamental property of the climate system, but nobody's really come up with a theoretical framework or explanation for it,” said Norman Loeb, a physical scientist at NASA’s Langley Research Center, who led the new study. “It's always been something that we've observed, but we haven't really explained it fully.”
To study this mystery, Loeb and his colleagues analyzed 24 years of observations captured since 2000 by the Clouds and the Earth’s Radiant Energy System (CERES), a network of instruments placed on several NOAA and NASA satellites. Instead of an explanation for the strange symmetry, the results revealed an emerging asymmetry in hemispheric albedo; though both hemispheres are darkening, the Northern hemisphere shows more pronounced changes, which challenges “the hypothesis that hemispheric symmetry in albedo is a fundamental property of Earth,” according to the study.
Loeb and his colleagues suggest that the asymmetry is primarily driven by the effects of climate change, reductions in aerosol pollution, and natural disasters like volcanic eruptions and wildfires. Since snow and ice are highly reflective, the thinking goes, the melting of glaciers and ice sheets due to anthropogenic greenhouse gas emissions is causing a reduction in albedo, especially in the Northern hemisphere.
Meanwhile, aerosols—which stimulate the formation of clouds—are causing uneven regional albedo changes. For example, the international effort to remove harmful commercial aerosols from the atmosphere has resulted in a drop in these substances over the Northern hemisphere, and therefore in cloud cover, exacerbating the darkening effect. In the Southern hemisphere, aerosol-heavy clouds generated over the past few years by disasters like the 2019-2020 Australian bushfires and the 2021-2022 Hunga Tonga volcanic eruption may have brightened the albedo relative to the Northern hemisphere.
“The amount of aerosols has been increasing in the Southern hemisphere, and they've been decreasing in the Northern hemisphere,” explained Loeb. “Since aerosols reflect solar radiation, that would give you this asymmetry where you're seeing darkening in the Northern hemisphere compared to the Southern hemisphere.”
“All of these pieces added together give you this trend,” he continued. “But what was mysterious to me was that the clouds weren't compensating. If this hemispheric symmetry is a fundamental property of the system, the clouds should be giving you more reflection in the Northern hemisphere to compensate for the non-cloud properties. And I don’t see that—at least, not yet.”
Loeb’s team was able to spot this trend thanks to the long-term observations collected by CERES, a program that dates back to the late 1990s. The program has monitored the evolution of albedo in high resolution over decades, enabling the scientists to spot the new divergence from the normal symmetry.
“CERES has really opened up a new avenue of research that we couldn't do before,” Loeb said. “We had some measurements of Earth's radiation budget, but we struggled to have the same level of quality of the data.”
“Right now it's wonderful because we have very precise measurements over 25 years from CERES,” he continued. “It’s a unique opportunity for us to study things like this symmetry in a new light.”
To that end, Loeb and his colleagues plan to continue monitoring the asymmetry with CERES and probing its possible causes with more sophisticated climate models. The researchers are watching for signs that the symmetry might reemerge in the future, or if asymmetry is perhaps the new normal.
The overall darkening of Earth’s albedo is already accelerating the effects of climate change, and an asymmetric hemispheric darkening could produce its own complex impacts, including disruptive shifts in precipitation.
It’s very difficult to tease out the individual components that merge to create such complicated dynamics (Loeb calls it “unscrambling the egg”). To make matters worse, NASA is facing major cuts from the Trump administration, especially to its Earth observation satellites. CERES is due for one more launch in 2027, but these instruments are getting “long in the tooth,” Loeb said, and another program will eventually have to take up the mantle. Until then, researchers across disciplines will puzzle over why Earth is anomalously darkening, and what it might mean if this asymmetry is here to stay.
“We'll keep measuring and keep studying it, and I think this study should open the avenue for others to look at it,” Loeb concluded.
A hacking group called the Crimson Collective says it pulled data from private GitLab repositories connected to Red Hat’s consulting business. Red Hat has confirmed it is investigating the compromise.#News #Hacking
Red Hat Investigating Breach Impacting as Many as 28,000 Customers, Including the Navy and Congress
A hacking group claims to have pulled data from a GitLab instance connected to Red Hat’s consulting business, scooping up 570 GB of compressed data from 28,000 customers.

The hack was first reported by BleepingComputer and has been confirmed by Red Hat itself. “Red Hat is aware of reports regarding a security incident related to our consulting business and we have initiated necessary remediation steps,” Stephanie Wonderlick, Red Hat’s VP of communications, told 404 Media.
A file released by the hackers and viewed by 404 Media suggested that the hacking group may have acquired some data related to about 800 clients, including Vodafone, T-Mobile, the US Navy’s Naval Surface Warfare Center, the Federal Aviation Administration, Bank of America, AT&T, the U.S. House of Representatives, and Walmart.
“The security and integrity of our systems and the data entrusted to us are our highest priority,” she said. “At this time, we have no reason to believe the security issue impacts any of our other Red Hat services or products and are highly confident in the integrity of our software supply chain.”
Red Hat is an open source software company that provides Linux-based enterprise software to a vast number of companies. As part of its business, Red Hat sells consulting contracts to users to help maintain their IT infrastructure. A hacking group that calls itself the Crimson Collective claims it breached a Red Hat GitLab repository that contained information related to Red Hat’s consulting clients.

“Since RedHat doesn't want to answer to us,” the hackers wrote in a channel on Telegram viewed by 404 Media, suggesting they have attempted to contact Red Hat. “Over 28000 repositories were exported, it includes all their customer's CERs [customer engagement reports] and analysis of their infra' [infrastructure] + their other dev's private repositories, this one will be fun,” the message added. A CER is an internal document consultancy firms use to understand how their clients interact with their business. For an IT firm like Red Hat, this kind of document would contain a lot of information about a client's tech infrastructure, including configuration data, network maps, and information about authentication tokens. A CER could help someone breach a network.
💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

“We have given them too much time already to answer lol instead of just starting a discussion they kept ignoring the emails,” the message added. In another message, the group said it had “gained access to some of their clients' infrastructure as well, already warned them but yeah they preferred ignoring us.”
404 Media viewed data related to the breach and attempted to contact some of the affected clients, including the US Navy’s Naval Surface Warfare Center in Panama City and T-Mobile, but did not hear back.
Joseph Cox contributed additional reporting to this article.
Correction: this piece has been updated to say that the breach impacted a Red Hat GitLab, not a GitHub.
‘I cannot overstate how disgusting I find this kind of AI dog shit in the first place, never mind under these circumstances.’#News
AI-Generated Biography on Amazon Tries to Capitalize on the Death of Beloved Writer Kaleb Horton
On September 27, several writers published obituaries for writer and photographer Kaleb Horton, who recently died. The obituaries were written by friends, acquaintances, and colleagues, all of whom revered him as a writer and photographer whose work has appeared in GQ, Rolling Stone, Vanity Fair, and VICE.

Some of these obituary writers were shocked and disgusted to discover an AI-generated “biography” of Kaleb Horton was suddenly for sale on Amazon.
“I cannot overstate how disgusting I find this kind of ‘A.I.’ dog shit in the first place, never mind under these circumstances,” writer Luke O’Neil, who wrote an obituary for Horton, told me in an email. “This predatory slop is understandably upsetting to his family and friends and fans and an affront to his specific life and to life itself. Especially days after his death. All week people have been eulogizing Kaleb as one of the best, although sadly not widely read enough, writers of his generation, and some piece of shit pressed a button and took 30 seconds or whatever it is to set up a tollbooth to divert the many people just learning about him away from his real and vital work. And for what? To make maybe a few dollars? By tricking people? I can't say what I think should happen to thieves like this.”
The book, titled KALEB HORTON: A BIOGRAPHY OF WORDS AND IMAGES: The Life Of A Writer And Photographer From The American West, was published on September 27 as well, is 74 pages long, and has all the familiar signs of the kind of AI-generated books that flood Amazon’s store on a daily basis.
Even at just 74 pages, the book was produced at superhuman speed. That appears to be the normal cadence for the author, Jack C. Cambron, who has no online footprint outside of online bookstores, and who has written dozens of biographies and cookbooks since his career as an author appeared out of thin air earlier in September. He has written biographies about director Cameron Crowe, Fulton County, Georgia district attorney Fani Willis, and pop singer Madison Beer, to name just a few. There’s no consistent pattern to these biographies other than that a lot of the people they’re about have been in the news recently.
All of these books also have obviously AI-generated covers, which is one of the clearest and most insulting signs that Horton’s biography is AI-generated as well: the person on the cover looks nothing like him.
AI-generated books on Amazon are extremely common and often attempt to monetize whatever is happening in the news or whatever people are searching for at any given time. For example, last year we wrote about a flood of AI-generated books about the journalist Kara Swisher appearing on Amazon leading up to the release of her memoir Burn Book. In theory, someone interested in Swisher or her book might search for her name on Amazon and buy one of those AI-generated books without realizing it’s AI-generated. We’ve seen this same strategy flood public libraries with AI-generated books as well.
"Although many of us online appreciated him and have paid tribute to him as a writer, any real reporting about him—like the kind he did for the figures he obsessed over, and which he would deserve—would reflect that Kaleb was a human being and a complicated guy," Matt Pearce, another journalist who wrote about Horton's passing, told me in an email. "This AI slop is just harvesting the remnants of legacy journalism, insulting the legacies of the dead and intellectually impoverishing the rest of us."
Amazon did not immediately respond to a request for comment, but it removed the AI-generated Horton biography shortly after we reached out. In response to our story about the AI-generated Kara Swisher books last year, the company said it does not want these books in its store, but it obviously is not taking any meaningful action to stop them.
“We aim to provide the best possible shopping, reading, and publishing experience for customers and authors and have content guidelines governing which books may be listed for sale," Amazon spokesperson Ashley Vanicek told me in an email last year. "We do not allow AI-generated content that creates a poor customer experience. We have proactive and reactive measures to evaluate content in our store. We have removed a number of titles that violated our guidelines.”
Update: This post has been updated with comment from Matt Pearce. This post has also been updated to note Amazon removed the AI-generated biography shortly after we reached out for comment.
A ‘stray bullet’ knocked 25,000 people offline near Dallas.#News
A Bullet Crashed the Internet in Texas
The internet can be more physically vulnerable than you think. Last week, thousands of people in North and Central Texas were suddenly knocked offline. The cause? A bullet. The outage hit cities all across the state, including Dallas, Irving, Plano, Arlington, Austin, and San Antonio. It affected Spectrum customers and took down their phone lines and TV services as well as the internet.

“Right in the middle of my meetings 😒,” one user said on the r/Spectrum subreddit. Around 25,000 customers were without service for several hours as the company rushed to repair the lines. As service came back, WFAA reported that the cause of the outage came from the barrel of a gun. A stray bullet had hit a line of fiber optic cable and knocked tens of thousands of people offline.
“The outage stemmed from a fiber optic cable that was damaged by a stray bullet,” Spectrum told 404 Media. “Our teams worked quickly to make the necessary repairs and get customers back online. We apologize for the inconvenience.”

Spectrum told 404 Media that it didn’t have any further details to share about the incident, so we have no idea how the company learned a bullet hit its equipment, where the bullet was found, or if the police are involved. Texas is a massive state with overlapping police jurisdictions and a lot of guns. Finding a specific shooting incident related to telecom equipment in the vast suburban sprawl around Dallas is probably impossible.
Fiber optic cable lines are often buried underground, protected from the vagaries of southern gunfire. But that’s not always the case: fiber can be strung along telephone poles and routed through a vast and complicated network of junction boxes and service stations that overlap different municipalities and cities, each with their own laws about how the cable can be installed. That can leave pieces of the physical infrastructure of the internet exposed to gunfire and other mischief.
This is not the first time gunfire has taken down the internet. In 2022, Xfinity fiber cable in Oakland, California went offline after people allegedly fired 17 rounds into the air near one of the company’s fiber lines. Around 30,000 people were offline during that outage and it happened moments before the start of an NFL game that saw the Los Angeles Rams square off against the San Francisco 49ers.
“We could not be more apologetic and sincerely upset that this is happening on a day like today,” Comcast spokesperson Joan Hammel told Data Center Dynamics at the time. Hammel added that the company has seen gunshot wounds on its equipment before. “While this isn’t completely uncommon, it is pretty rare, but we know it when we see it.”
Documents show that ICE has gone back on its decision to not use location data remotely harvested from peoples' phones. The database is updated every day with billions of pieces of location data.#News
ICE to Buy Tool that Tracks Locations of Hundreds of Millions of Phones Every Day
Immigration and Customs Enforcement (ICE) has bought access to a surveillance tool that is updated every day with billions of pieces of location data from hundreds of millions of mobile phones, according to ICE documents reviewed by 404 Media.

The documents explicitly show that ICE is choosing this product over others offered by the contractor’s competitors because it gives ICE essentially an “all-in-one” tool for searching both masses of location data and information taken from social media. The documents also show that ICE is planning to once again use location data remotely harvested from peoples’ smartphones after previously saying it had stopped the practice.
Surveillance contractors around the world create massive datasets of phones’, and by extension people’s, movements, and then sell access to the data to government agencies. In turn, U.S. agencies have used these tools without a warrant or court order.
“The Biden Administration shut down DHS’s location data purchases after an inspector general found that DHS had broken the law. Every American should be concerned that Trump's hand-picked security force is once again buying and using location data without a warrant,” Senator Ron Wyden told 404 Media in a statement.
Do you know anything else about this contract or others? Do you work at Penlink or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The ICE document is redacted but says a product made by a contractor called Penlink “leverages a proprietary data platform to compile, process, and validate billions of daily location signals from hundreds of millions of mobile devices, providing both forensic and predictive analytics.” The products the document is discussing are Tangles and Webloc.
Forbes previously reported that ICE spent more than $5 million on these products, including $2 million for Tangles specifically. Tangles and Webloc used to be run by an Israeli company called Cobwebs. Cobwebs joined Penlink in July 2023.
The new documents provide much more detail about the sort of location data ICE will now have access to, and why ICE chose to buy access to this vast dataset from Penlink specifically.
“Without an all-in-one tool that provides comprehensive web investigations capabilities and automated analysis of location-based data within specified geographic areas, intelligence teams face significant operational challenges,” the document reads. The agency said that the issue with other companies was that they required analysts to “manually collect and correlate data from fragmented sources,” which increased the chance of missing “connections between online behaviors and physical movements.”
A screenshot from the document.
ICE’s Homeland Security Investigations (HSI) conducted market research in May and June, according to the document. The document lists two other companies, Babel Street and Venntel, which also sell location data but which the agency decided not to partner with.

404 Media and a group of other media outlets previously obtained detailed demonstration videos of Babel Street in action. They showed it was possible for users to track phones visiting and leaving abortion clinics, places of worship, and other sensitive locations. Venntel, meanwhile, was for some years a popular choice among U.S. government agencies looking to monitor the location of mobile phones. Its clients have included ICE, CBP, and the FBI. Its contracts with U.S. law enforcement have dried up in more recent years, with ICE closing out its work with the company in August, according to procurement records reviewed by 404 Media.
Companies that obtain mobile phone location data generally do it in two different ways. The first is through software development kits (SDKs) embedded in ordinary smartphone apps, like games or weather forecasters. These SDKs continuously gather a user’s granular location and transfer it to the data broker, which then sells that data onward or repackages it and sells access to government agencies.
The second is through real-time bidding (RTB). When an advert is about to be served to a mobile phone user, there is a near-instantaneous, invisible bidding process in which different companies vie to have their advert placed in front of certain demographics. A side effect is that this demographic data, including mobile phones’ locations, can be harvested by surveillance firms. Sometimes spy companies buy ad tech companies outright to insert themselves into this data supply chain. We previously found that thousands of apps were hijacked to provide location data in this way.
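To make the RTB side effect concrete, here is a minimal, hypothetical sketch of a bidder endpoint that records device locations from incoming bid requests. The field names (device.geo.lat/lon, device.ifa) follow the public OpenRTB convention, but the handler, the storage, and everything else here are illustrative assumptions for this article, not the code of Penlink or any other company named in this story.

# Hypothetical sketch: how a participant in real-time bidding could harvest
# device locations as a side effect of simply receiving bid requests.
# Field names follow the OpenRTB convention; nothing here is a vendor's real code.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect("locations.db")
db.execute("CREATE TABLE IF NOT EXISTS sightings (device_id TEXT, lat REAL, lon REAL, ts TEXT)")

class BidHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        request = json.loads(body)
        device = request.get("device", {})
        geo = device.get("geo", {})
        device_id = device.get("ifa")  # the phone's advertising identifier
        if device_id and "lat" in geo and "lon" in geo:
            # The location is retained whether or not this bidder ever wins the auction.
            db.execute("INSERT INTO sightings VALUES (?, ?, ?, datetime('now'))",
                       (device_id, geo["lat"], geo["lon"]))
            db.commit()
        # Respond with a no-bid; the data has already been collected.
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BidHandler).serve_forever()

The point of the sketch is that the location arrives whether or not the bidder ever serves an ad; access to the bid stream alone is enough to build a movement database.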
Penlink did not respond to a request for comment on how it gathers or sources its location data.
Regardless, the documents say that “HSI INTEL requires Penlink's Tangles and Weblocas [sic] an integral part of their investigations mission.” Although HSI has historically been focused on criminal investigations, 90 percent of HSI personnel have been diverted to carry out immigration enforcement, according to data published by the Cato Institute. That means it is unclear whether use of the data will be limited to criminal investigations or not.

After this article was published, DHS Assistant Secretary Tricia McLaughlin told 404 Media in a statement: “DHS is not going to confirm or deny law enforcement capabilities or methods. The fact of the matter is the media is more concerned with peddling narratives to demonize ICE agents who are keeping Americans safe than they are with reporting on the criminals who have victimized our communities.” This is a boilerplate statement that DHS has repeatedly provided 404 Media when asked about public documents detailing the agency’s surveillance capabilities, and which inaccurately attacks the media.
In 2020, The Wall Street Journal first revealed that ICE and CBP were using commercially available smartphone location data to investigate various crimes and for border enforcement. I then found CBP had a $400,000 contract with a location data broker and that the data it bought access to was “global.” I also found a Muslim prayer app was selling location data to a data broker whose clients included U.S. military contractors.
In October 2023, the Department of Homeland Security (DHS) Inspector General published a report that found ICE, CBP, and the Secret Service all broke the law when using location data harvested from phones. The oversight body found that those DHS components did not have sufficient policies and procedures in place to ensure that the location data was used appropriately. In one case, a CBP official used the technology to track the location of coworkers, the report said.
The report recommended that CBP stop its use of such data; CBP said at the time it did not intend to renew its contracts anyway. The Inspector General also recommended that ICE stop using such data until it obtained the necessary approvals. But ICE’s response in the report said it would continue to use the data. “CTD is an important mission contributor to the ICE investigative process as, in combination with other information and investigative methods, it can fill knowledge gaps and produce investigative leads that might otherwise remain hidden. Accordingly, continued use of CTD enables ICE HSI to successfully accomplish its law enforcement mission,” the response at the time said.
In January 2024, ICE said it had stopped the purchase of such “commercial telemetry data,” or CTD, which is how DHS refers to location data.
Update: this piece has been updated with a statement from DHS.
ICE says it’s stopped using commercial telemetry data
Spokesperson for Immigration and Customs Enforcement tells FedScoop that the agency is no longer using commercial telemetry data, but regulations are still scant. Rebecca Heilweil (FedScoop)
The Secretary of War lectured America’s generals on fitness standards, beards, and warriors for an hour.#News #military
In Unhinged Speech, Pete Hegseth Says He's Tired of ‘Fat Troops,’ Says Military Needs to Go Full AI
Last week, Secretary of War Pete Hegseth called America’s generals to Quantico for a meeting without saying why. America’s top civilian military leader calling the generals home all at once is strange and unprecedented. It’s the kind of move that often presages something like a major war. But that’s not what he wanted. During a bizarre, unhinged speech before America’s military leadership, Hegseth focused almost entirely on the culture wars and called for the restoration of what he called a “warrior ethos.” He said some of America’s generals are fat, demanded the Pentagon go all in on AI, whined about beards and accountability, told the troops they “kill people and break things for a living,” and plugged his book.

“The speech today is about the nature of ourselves,” Hegseth said. For the next hour, before setting up President Trump for remarks, Hegseth spoke about a new American military that will shave its beards, reduce the number of women in combat, and focus on killing. “To our enemies: FAFO. If necessary, our troops can translate that for you. Peace through strength, brought to you by the warrior ethos.” (FAFO means fuck around and find out.)
An early theme of the speech was more, and faster. “This urgent moment, of course, requires more troops, more munitions, more drones, more [Patriot missiles], more submarines, more B-21 bombers,” Hegseth said. “It requires more innovation, more AI in everything and ahead of the curve, more cyber effects, more counter [unmanned aerial systems], more space, more speed. America is the strongest, but we need to get stronger and quickly.”

The alarming speech took up most of the attention on social media Tuesday morning and comes at a time when Donald Trump has deployed troops in American cities, has threatened to invade Portland, and has told the military it should use American cities as a “training ground.” Hegseth himself has been having something of a meltdown, according to reporting by The Daily Mail.
The Pentagon has been all in on AI and drones for years now, but it hasn’t gone well. Last week, The Wall Street Journal reported that the Pentagon is struggling to deploy AI weapons and is worried about catching up to China. A Biden-era initiative called Replicator was meant to help bridge the gap between dreams and reality, but hasn’t worked fast enough for its critics. So the Pentagon is turning the project over to Special Operations Command—the part of the Pentagon in charge of its operators—under a new division called the Defense Autonomous Warfare Group (DAWG). This means that the military leaders who run SEAL Team Six will soon be in charge of getting AI-controlled drone swarms to the troops.
Much of Hegseth’s speech was about aesthetics and fitness. For him, a return to the “warrior ethos” meant never seeing a fat general or admiral ever again. “Every member of the joint force at any rank is required to take a PT test twice a year as well as meet height and weight requirements twice a year, every year of service,” he said. “Also today, at my direction, every warrior across our joint force is required to do PT every duty day. Should be common sense…but we’re codifying it. And we’re not talking hot yoga and stretching. Real hard PT, either as a unit or an individual. At every level, from the Joint Chiefs to everyone in this room to the lowest private.”
“It all starts with physical fitness and appearance,” Hegseth said. “If the Secretary of War can do regular, hard PT, so can every member of our joint force. Frankly, it's tiring to look out at combat formations, or really any formation, and see fat troops. Likewise, it's completely unacceptable to see fat generals and admirals in the halls of the Pentagon and leading commands around the country in the world, it's a bad look. It is bad and it's not who we are.”
Hegseth’s aesthetic concerns extended to facial hair. “This also means grooming standards. No more beards. Long hair. Superficial individual expression. We’re going to cut our hair, shave our beards, and adhere to standards. It’s like the broken windows theory of policing. When you let the small stuff go, the big stuff eventually goes. So you have to address the small stuff,” he said.
There was, of course, a carve-out for America’s operators. “If you want a beard you can join Special Forces. If not, then shave. We don’t have a military full of Nordic Pagans. At my direction, the era of unprofessional appearance is over. No more beardos. The era of rampant and ridiculous shaving profiles is done.”
Beards may seem like small stuff in the grand scheme of things, but it’s a hot topic among military recruits. Over the past few years, military recruits have fought and won exemptions from grooming standards based on their religion, often in court. A federal court told the Marine Corps it couldn't force Sikh recruits to shave in 2022. There are also medical issues. Men with pseudofolliculitis barbae, a condition that causes painful ingrown hairs and razor burn after shaving, have long gotten waivers exempting them from shaving in the military. Around 60 percent of black men have pseudofolliculitis barbae.
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Hegseth made it clear that these new conditions mean there will be fewer women on the frontlines and in physically demanding roles. “I don’t want my son serving alongside troops who are out of shape or in combat units with females who can’t meet the same combat arm physical standards as men,” he said. “When it comes to any job that requires physical power to perform in combat, those physical standards must be high and gender neutral. If women can make it, excellent,” he said. “If not, it is what it is. If that means no women qualify for some combat jobs, so be it. That is not the intent, but it could be the result.”
The Secretary also said he would end the tyranny of accountability in the military. “We are overhauling an Inspector General process, the IG that has been weaponized, putting complainers, ideologues and poor performers in the driver's seat,” he said. “We're doing the same with the Equal Opportunity and Military Equal Opportunity policies. The EO and MEO at our department. No more frivolous complaints, no more anonymous complaints, no more repeat complaints. No more smearing reputations. No more endless waiting. No more legal limbo. No more side-tracking careers, no more walking on eggshells.”
Pentagon acting Inspector General Steven Stebbins is currently investigating Hegseth over his use of an unsecured Signal clone to plan military operations.
A modern military is a technological and logistics machine. A warrior takes many shapes and, if Hegseth wants to go all in on cyber, drones, and AI, then harsh grooming standards and increased physical fitness requirements will cut off many of the brightest minds who could help him fulfill that goal.
That doesn’t seem to matter to Hegseth and Trump as much as aesthetics does. Towards the end of his speech, the Secretary said the Pentagon lost its way. Then he plugged his 2024 book The War on Warriors. “We became the woke department, but not anymore. No more identity months, DEI offices, dudes in dresses. No more climate change worship. No more division, distraction, or gender delusions. No more debris. As I’ve said before and will say again: we are done with that shit,” he said.
“You might say we’re ending the war on warriors. I hear someone wrote a book about that.”
Pete Hegseth is 'crawling out his skin': Pentagon insiders tell of explosive tantrums and erratic behavior... as his wife is accused of making outrageous demands
Insiders in the defense secretary's newly named Department of War say their boss has in the last few weeks been erupting in tirades, raging at staffers and becoming obsessive. Susan Greene (Daily Mail)
Klein has attempted to subpoena Discord and Reddit for information that would reveal the identity of moderators of a subreddit critical of him. The moderators' lawyers fear their clients will be physically attacked if the subpoenas go through.#News #YouTube
Reddit Mods Sued by YouTuber Ethan Klein Fight Efforts to Unmask Them
This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.

Critics of YouTuber Ethan Klein are pushing back on subpoenas that would reveal their identities as part of an ongoing legal fight between Klein and his detractors. Klein is a popular content creator whose YouTube channel has more than 2 million subscribers. He’s also involved in a labyrinthine personal and legal beef with three other content creators and the moderators of a subreddit that criticizes his work. Klein filed a legal motion to compel Discord and Reddit to reveal the identities of those moderators, a move their lawyers say would put them in harm’s way and stifle free speech on the internet forever.
Klein is most famous for his H3 Podcast and collaborations with Hasan Piker and Trisha Paytas which he produced through his company Ted Entertainment Inc. Following a public falling out with Piker, Klein released a longform video essay critiquing his former podcast partner. As often happens with long video essays about YouTube drama, other content creators filmed themselves watching Klein’s essay.
These are called “reaction” videos and they’re pretty common on YouTube. Klein sued three creators—Frogan, Kaceytron, and Denims—calling their specific reaction videos low effort copyright infringement. As part of the lawsuit, he also went after the moderation team of the r/h3snark subreddit—a board on Reddit that critiques Klein and had shared the Denims video as part of a thread about Klein’s Piker essay.

On July 31, a judge allowed Klein’s lawyers to file a subpoena with Reddit and Discord that would reveal the identities of the people running r/h3snark and an associated Discord server. On September 22, lawyers for the defendants filed a motion to quash the subpoenas.
“On its face, the Action is about copyright infringement,” the latest filing said. “At its heart, however, the Action is about stifling criticism and seeking retribution by unmasking individuals for perceived reputational harms TEI [Klein’s production company] attributes to [John Doe moderators] unrelated to TEI’s intellectual property rights.”
The defendants’ lawyers said the subpoena to unmask moderators should be quashed because Klein can’t prove his case of copyright infringement, but also because revealing such information could put the Does in harm’s way. “The balance of equities weighs in favor of Does’ anonymity and quashing TEI’s Subpoenas in their entirety,” the filing said.
As evidence of the danger faced by the Does, the court filing quoted Klein directly. “Listen, guys, at this point you [r/h3snark mods] are totally fucked,” Klein said on a podcast, according to the court filing. “There’s a subpoena that’s going to come. You can’t erase your data. We’re going to get your IP address and find your information.”
“If there’s any justice in the world [the h3snark mods] will lose everything that they care about and I will be the one who makes them lose those things […] through legal means. Through any legal means,” he said, according to the court filing.
The defendants' lawyers paint a grim picture of what might happen should Klein’s subpoenas succeed: they “fear potentially being attacked, or worse, killed, over moderating a subreddit,” the filing said. “These worries extend to all family and friends connected to Does. Does fear their professional lives being ruined, potential sexual violence, extortion, fans showing up to their home, and endless years of harassment due to Ethan’s prolific lies surrounding them. The target he has painted on the moderators would make it unsafe to live openly in any capacity. Some Does also have heightened risk of retaliatory harm due to their religious identities. If their real names are revealed, these Does—and their families—face a real risk of being doxed, stalked, or harassed, as has happened to others in similar situations. In this climate, unmasking Does would expose them to significant and unjustified danger.”
Personal safety wasn’t the only legal argument the moderators’ lawyers put forward. A key part of Klein’s claim is that the Does violated his copyright by hosting links on r/h3snark to videos of other streamers reacting to his video “Content Nuke—Hasan Piker.” His legal case is built around going after content creators for making “low effort” content using his work, but also the anonymous people on Reddit who shared links to those videos.
“The next question is whether creating a discussion thread, which includes a link to a streamer’s channel, where the streamer reacts to a live broadcast while providing her own commentary and criticism, and users visiting the thread engage in their own debate about the live broadcast and reactions thereto, constitutes contributory infringement,” the filing said. “It does not.”
The lawyers also argued that Reddit “megathreads”—a common practice in which the moderators of a subreddit create one single space on a board for people to talk about a specific topic—are fair use, that the reaction videos were transformative and should be considered fair use, and that the reaction videos increased the public’s exposure to Klein’s video.
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

At the end of the filing, the lawyers returned again to the personal safety of the moderators. They argued that even if Klein’s claim of copyright infringement met the burden of proof, and the lawyers don’t believe it does, the balance of harms is in favor of the moderators. “The personal harms to Does by allowing unmasking, as well as the public harms to online speech and discourse generally, would be irreparable in the private sense and long-reaching in the public sense,” the filing said.
The anonymity of places like Reddit and Discord grants a layer of protection to people seeking to critique power. This case could set a dangerous precedent, the lawyers believe. “If the court allows TEI’s Subpoenas, it would enable TEI to impose a considerable price on Does’ use of the vehicle of anonymous speech—including public exposure, real risks of retaliation and actual harm, and the financial and other burdens of defending the Action,” the filing said.
The filing added: “Very few would-be commentators are prepared to bear costs of this magnitude. So, when word gets out that the price tag of criticizing Ethan is this high—that speech will disappear. But that is precisely what Ethan Klein wants.”
Screenshots shared with 404 Media show tenant screening services ApproveShield and Argyle taking much more data than they need. “Opt-out means no housing.”#News
Landlords Demand Tenants’ Workplace Logins to Scrape Their Paystubs
Landlords are using a service that logs into a potential renter’s employer systems and scrapes their paystubs and other information en masse, potentially in violation of U.S. hacking laws, according to screenshots of the tool shared with 404 Media.

The screenshots highlight the intrusive methods some landlords use when screening potential tenants, taking information they may not need, or legally be entitled to, to assess a renter.
“This is a statewide consumer-finance abuse that forces renters to surrender payroll and bank logins or face homelessness,” one renter who was forced to use the tool and who saw it taking more data than was necessary for their apartment application told 404 Media. 404 Media granted the person anonymity to protect them from retaliation from their landlord or the services used.
Do you know anything else about any of these companies or the technology landlords are using? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“I am livid,” they added.
This post is for subscribers only
Moderators reversed course on their open-door AI policy after fans filled the subreddit with AI-generated Dale Cooper slop.#davidlynch #AISlop #News
Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies
People on r/twinpeaks flooded the subreddit with AI slop images of FBI agent Dale Cooper and ChatGPT-generated scripts after the community’s moderators opened the door to posting AI art. The tide of terrible Twin Peaks-related slop lasted for about two days before the subreddit’s mods broke, reversed their decision, and deleted the AI-generated content.

Twin Peaks is a moody TV show that first aired in the 1990s and was followed by a third season in 2017. It’s the work of surrealist auteur David Lynch; it influenced lots of TV shows and video games that came after it and has a passionate fan base that still shares theories and art to this day. Lynch died earlier this year, and since his passing he’s become a talking point for pro-AI art people who point to several interviews and secondhand stories they claim show Lynch had embraced an AI-generated slop future.
On Tuesday, a mod posted a long announcement that opened the doors to AI on the sub. In a now deleted post titled “Ai Generated Content On r/twinpeaks,” the moderator outlined the position that the sub was a place for everyone to share memes, theories, and “anything remotely creative as long as it has a loose string to the show or its case or its themes. Ai generated content is included in all of this.”
The post went further. “We are aware of how Ai ‘art’ and Ai generated content can hurt real artists,” the post said. “Unfortunately, this is just the reality of the world we live in today. At this point I don’t think anything can stop the Ai train from coming, it’s here and this is only the beginning. Ai content is becoming harder and harder to identify.”
The mod then asked Redditors to follow an honor system and label any post that used AI with a special new flair so people could filter out those posts if they didn’t want to see them. “We feel this is a best of both worlds compromise that should keep everyone fairly happy,” the mod said.
An honor system, a flair, and a filter did not mollify the community. In the following 48 hours Lynch fans expressed their displeasure by showing r/twinpeaks what it looks like when no one can “stop the Ai train from coming.” They filled the subreddit with AI-generated slop in protest, including horrifying pictures of series protagonist Cooper doing an end-zone dance on a football field while Laura Palmer screamed in the sky and more than a few awful ChatGPT generated scripts.
Image via r/twinpeaks.
Free-IDK-Chicken, a former mod of r/twinpeaks who resigned over the AI debacle, said the announcement wasn’t run past the other members of the mod team. “It was poorly worded. A bad take on a bad stance and it blew up in their face,” she told 404 Media. “It spiraled because it was condescending and basically told the community: we don’t care that it’s theft, that it’s unethical, we’ll just flair it so you can filter it out…they missed the point that AI art steals from legit artists and damages the environment.”

According to Free-IDK-Chicken, the subreddit’s mods had been fighting over whether or not to ban AI art for months. “I tried five months ago to get AI banned and was outvoted. I tried again last month and was outvoted again,” she said.
On Thursday morning, with the subreddit buried in AI slop, the mods of r/twinpeaks relented, banned AI art, and cleaned up the protest spam. “After much thought and deliberation about the response to yesterday's events, the TP Mod Team has made the decision to reverse their previous statement on the posting of AI content in our community,” the mods said in a post announcing the new policy. “Going forward, posts including generative AI art or ChatGPT-style content are disallowed in this subreddit. This includes posting AI google search results as they frequently contain misinformation.”
Lynch has become a mascot for pro-AI boosters. An image on a pro-AI art subreddit depicts Lynch wearing an OpenAI shirt and pointing at the viewer. “You can’t be punk and also be anti-AI, AI-phobic, or an AI denier. It’s impossible!” reads a sign next to the AI-generated picture of the director.
Image via r/slopcorecirclejerk
As evidence, they point to a British Film Institute interview published shortly before his death where he lauds AI and calls it “incredible as a tool for creativity and for machines to help creativity.” AI boosters often leave off the second part of the quote: “I’m sure with all these things, if money is the bottom line, there’d be a lot of sadness, and despair and horror. But I’m hoping better times are coming,” Lynch said.
The other big piece of evidence people use to claim Lynch was pro-AI is a secondhand account given to Vulture by his neighbor, the actress Natasha Lyonne. According to the interview in Vulture, Lyonne asked Lynch for his thoughts on AI, and Lynch picked up a pencil and told her that everyone has access to it and to a phone. “It’s how you use the pencil. You see?” he said.

Setting aside the environmental and ethical arguments against AI-generated art, if AI is a “pencil,” most of what people make with it is unpleasant slop. Grotesque nonsense fills our social media feeds and AI-generated Jedis and Ghiblis have become the aesthetic of fascism.
We've seen other platforms and communities struggle with keeping AI art at bay when they've allowed it to exist alongside human-made content. On Facebook, Instagram, and YouTube, low-effort garbage is flooding online spaces and pushing productive human conversation to the margins while floating to the top of engagement algorithms.
Other artist communities are pushing back against AI art in their own ways: Earlier this month, DragonCon organizers ejected a vendor for displaying AI-generated artwork. Artists’ portfolio platform ArtStation banned AI-generated content in 2022. And earlier this year, artists protested the first-ever AI art auction at Christie’s.
Artists Are Revolting Against AI Art on ArtStation
Artists are fed up with AI art on the portfolio platform, which is owned by Epic Games, but the company isn't backing down. Chloe Xiang (VICE)
Multiple Palantir and Flock sources say the companies are spinning a commitment to "democracy" to absolve them of responsibility. "In my eyes, it is the classic double speak," one said.#News
How Surveillance Firms Use ‘Democracy’ As a Cover for Serving ICE and Trump
In a blog post published in June, Garrett Langley, the CEO and co-founder of surveillance company Flock, said, “We rely on the democratic process, on the individuals that the majority vote for to represent us, to determine what is and is not acceptable in cities and states.” The post explained that the company believes the laws of the country and individual states and municipalities, not the company, should determine the limits of what Flock’s technology can be used for, and came after 404 Media revealed local police were tapping into Flock’s networks of AI-enabled cameras for ICE, and that a sheriff in Texas performed a nationwide search for a woman who self-administered an abortion.
Do you work at any of these companies or others like them? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Langley’s statement echoes a common refrain from surveillance and tech companies selling their products to Immigration and Customs Enforcement (ICE) or other parts of the U.S. government during the second Trump administration: we live in a democracy. It is not our job to decide how our powerful capabilities, which can track peoples’ physical location, marry usually disparate datasets together, or crush dissent, can or should be used. At least, that’s the thrust of the argument. That is despite the very clear reality that the first Trump administration was very different from the Biden administration, and both pale in comparison to Trump 2.0, with the executive branch and various agencies flouting ordinary democratic values. The idea of what a democracy is capable of has shifted.
This post is for subscribers only
Scammers stole the crypto from a Latvian streamer battling cancer and the wider security community rallied to make him whole.#News #Crypto
Steam Hosted Malware Game that Stole $32,000 from a Cancer Patient Live on Stream
A cancer patient lost $32,000 in crypto after installing a Steam game on his computer containing malware that drained one of his crypto wallets. Raivo Plavnieks is a 26-year-old self-described “crypto degen” from Latvia who streams on the site Pump.fun under the name Rastaland. After a seven-hour stream on September 20, Plavnieks logged off and cashed out his earnings from the stream. Literally seconds later, someone drained those earnings from his wallet, according to an archive of the livestream and blockchain records reviewed by 404 Media.

Plavnieks had installed a game called BlockBlasters, a 2D platformer listed on Steam that launched July 31, 2025 to a small audience who’d given it positive reviews. But the game was a scam, and an August patch injected malware that was meant to scan a user’s hard drive for data and, ultimately, steal their crypto. BlockBlasters is no longer listed on Steam and has been flagged as malicious by the independent Steam archiving site SteamDB. Valve did not respond to 404 Media’s request for comment.
The cybersecurity firm G Data CyberDefense dug into BlockBlasters and detailed how the software got access to users’ crypto. SteamDB’s archive of the game’s patches shows three files added in the August 30 patch: game2.bat and two zip files. According to the G Data writeup, the batch file collected information on the user’s machine and then unpacked the zip files. “The two VBS scripts that ‘game2.bat’ executes are batch file loaders,” G Data said. As the scripts run, they inject more malware into the user’s machine and eventually go after the data and extensions of the Chrome, Brave, and Microsoft Edge browsers, the company said.

This is at least the third time this year Valve has pulled a game from Steam after it turned out to contain malware. In February, Valve pulled the survival game PirateFi after users discovered it contained password-stealing malware. A month later, in March, people who tried to download a demo for Sniper: Phantom's Resolution were redirected from Steam to GitHub for the installer. Once again, it was malware.
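The delivery chain G Data describes hinges on script files (a batch file plus VBS loaders) being slipped into a game’s install folder by a patch. As an illustration only, and not a detection method recommended by G Data, SteamDB, or Valve, here is a minimal, hypothetical sketch of the kind of crude check a wary player or researcher could run: listing script files sitting inside a Steam library folder and when they were last modified.

# Hypothetical sketch only: surface script files inside a Steam library folder,
# since the BlockBlasters chain described by G Data was delivered as a .bat file
# plus VBS loaders added in a patch. A crude heuristic, not real malware detection.
from datetime import datetime
from pathlib import Path

SCRIPT_SUFFIXES = {".bat", ".cmd", ".vbs", ".ps1"}

def list_script_files(steam_common_dir: str) -> None:
    root = Path(steam_common_dir)  # e.g. .../Steam/steamapps/common
    if not root.is_dir():
        print(f"Not a directory: {root}")
        return
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in SCRIPT_SUFFIXES:
            modified = datetime.fromtimestamp(path.stat().st_mtime)
            print(f"{modified:%Y-%m-%d %H:%M}  {path}")

if __name__ == "__main__":
    # Assumed default Windows install path; adjust for your own setup.
    list_script_files(r"C:\Program Files (x86)\Steam\steamapps\common")

Plenty of legitimate games ship batch or PowerShell helpers, so anything this surfaces only merits a closer look, not an automatic verdict.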
Plavnieks’ experience gave the BlockBlasters situation a higher profile than PirateFi and Sniper: Phantom’s Resolution. Footage of an emaciated and exhausted Plavnieks sobbing on his livestream while one of his brothers attempted to soothe him struck a nerve with some in the crypto and security communities online. The crypto space is full of rug pulls, burns, bad investments, and wild stunts, but stealing from a guy with cancer seemed like a bridge too far.
In addition to the G Data writeup, several other people have reverse-engineered BlockBlasters’ code and, they believe, found the people responsible. “The shitty malware sent all the stolen data to a Telegram the scammers made,” vx-underground, a group of malware researchers, said in a post on X. “We connected to the Telegram channel using the same credentials that were inside of the shitty malware. Inside the channel was the scammer(s). We got their Telegram IDs.”
According to Plavnieks, he was able to redirect his future creator rewards to a new (and safe) crypto wallet. Cryptocurrency personality Alex Becker sent Plavnieks $32,000 to cover the cost of the losses. And a group of open-source intelligence hobbyists and interested tech folks dug into BlockBlasters’ code, figured out the scheme, built a list of alleged victims, and also found the people they think are responsible for the scam.
“I wanted to take a second and just thank you all from the bottom of my heart, me, my brothers, and my mom is completely left without words on all the support we have received past 24h after the hack happened,” Plavnieks said in a post on X. “Seems like the whole [community] rallied together behind my story and is showing support one way or another.”
BlockBlasters update for 30 August 2025
List of changed files in BlockBlasters on Steam for build id 19799326. SteamDB
YouTube removed a channel that posted nothing but graphic Veo-generated videos of women being shot after 404 Media reached out for comment.#News
AI-Generated YouTube Channel Uploaded Nothing But Videos of Women Being Shot
Content warning: This article contains descriptions and images of AI-generated graphic violence.

YouTube removed a channel that was dedicated to posting AI-generated videos of women being shot in the head following 404 Media’s request for comment. The videos were clearly generated with Google’s new AI video generator tool, Veo, according to a watermark included in the bottom right corner of the videos.
The channel, named Woman Shot A.I, started on June 20, 2025. It posted 27 videos, had over 1,000 subscribers, and had more than 175,000 views, according to the channel’s publicly available data.
All the videos posted by the channel follow the exact same formula. The nearly photo-realistic videos show a woman begging for her life while a man with a gun looms over her. Then he shoots her. Some videos have different themes, like compilations of video game characters like Lara Croft being shot, “Japanese Schoolgirls Shot in Breast,” “Sexy HouseWife Shot in Breast,” “Female Reporter Tragic End,” and Russian soldiers shooting women with Ukrainian flags on their chest.
I wasn’t able to confirm if YouTube was running ads in videos posted by this channel, but the person behind the channel did pay to generate these videos with Google’s Veo, and complained about the cost.
“The AI I use is paid, per account I have to spend around 300 dollars per month, even though 1 account can only generate 8-second videos 3 times,” the channel’s owner wrote in a public post on YouTube. “So, imagine how many times I generate a video once I upload, I just want to say that every time I upload a compilation consisting of several 8-second clips, it’s not enough for just 1 account.”
Woman Shot A.I’s owner claimed they have 10 accounts. “I have to spend quite a lot of money just to have fun,” they said.
Woman Shot A.I also posted polls asking subscribers to vote on who “you want to be the victims in the next video.” The options were “Japanese/Chinese,” “White Caucasian (american,british,italian,etc),” “Southeast Asian (thai,filipine,indonesian,etc),” and the N-word.
YouTube removed the channel after 404 Media reached out for comment for this story. A YouTube spokesperson said that it terminated the channel for violating its Terms of Service, and specifically for operating the YouTube channel following a previous termination, meaning this is not the first time YouTube has removed a channel operated by whoever was behind Woman Shot A.I.
In theory Veo should not allow users to generate videos of people being murdered, but the AI video generator’s guardrails clearly didn’t work in this case. Guardrails for generative AI tools including AI video generators often fail, and there are entire communities dedicated to circumventing them.
“[O]ur Gen AI tools are built to follow the prompts a user provides,” Google’s spokesperson said. “We have clear policies around their use that we work to enforce, and the tools continually get better at reflecting these policies.”
In July, YouTube said that it would start taking action against “mass-produced” AI-generated slop channels. However, as our recent story about AI-generated “boring history” videos shows, YouTube’s enforcement is still far from perfect.
YouTube prepares crackdown on 'mass-produced' and 'repetitive' videos, as concern over AI slop grows | TechCrunch
YouTube's creator liaison said the change is a "minor" update to YouTube's longstanding policies. Sarah Perez (TechCrunch)
Dale Britt Bendler “earned approximately $360,000 in private client fees while also working as a full-time CIA contractor with daily access to highly classified material that he searched like it was his own personal Google,” according to a court record.#News
Contractor Used Classified CIA Systems as ‘His Own Personal Google’
This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.

A former CIA official and contractor, who at the time of his employment dug through classified systems for information he then sold to a U.S. lobbying firm and foreign clients, used access to those CIA systems as “his own personal Google,” according to a court record reviewed by 404 Media and Court Watch.
Do you know anything else about this case? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Dale Britt Bendler, 68, was a longtime CIA officer before retiring in 2014 with a full pension. He rejoined the agency as a contractor and sold a wealth of classified information, according to the government’s sentencing memorandum filed on Wednesday. His clients included a U.S. lobbying firm working for a foreigner being investigated for embezzlement and another foreign national trying to secure a U.S. visa, according to the court record.
This post is for subscribers only
Academic workers are re-thinking how they live and work online after some have been fired for criticizing Charlie Kirk following his death.#News
Union Warns Professors About Posting In the ‘Current Climate’
A union that represents university professors and other academics published a guide on Wednesday tailored to help its members navigate social media during the “current climate.” The advice? Lock down your social media accounts, expect anything you post to be screenshotted, and keep things positive. The document ends with links to union-provided trauma counseling and legal services.

The American Association of University Professors (AAUP) published the two-page document on September 17, days after the September 10 killing of right-wing pundit Charlie Kirk. The list of college professors and academics who've been censured or even fired for joking about, criticizing, or quoting Kirk after his death is long.
Clemson University in South Carolina fired multiple members of its faculty after investigating their Kirk-related social media posts. On Monday the state’s attorney general sent the college a letter telling it that the First Amendment did not protect the fired employees and that the state would not defend them. Two universities in Tennessee fired multiple members of their staff after getting complaints about their social media posts. The University of Mississippi let a staff member go because they re-shared a comment about Kirk that people found “insensitive.” Florida Atlantic University placed an art history professor on administrative leave after she posted about Kirk on social media. Florida's education commissioner later wrote a letter to school superintendents warning them there would be consequences for talking about Kirk in the wrong way. “Govern yourselves accordingly,” the letter said.

AAUP’s advice is meant to help academic workers avoid ending up as a news story. “In a moment when it is becoming increasingly difficult to predict the consequences of our online speech and choices, we hope you will find these strategies and resources helpful,” it said.
Here are its five explicit tips: “1. Set your personal social media accounts to private mode. When prompted, approve the setting to make all previous posts private. 2. Be mindful that anything you post online can be screenshotted and shared. 3. Before posting or reposting online commentary, pause and ask yourself: a. Am I comfortable with this view potentially being shared with my employer, my students, or the public? Have I (or the person I am reposting) expressed this view in terms I would be comfortable sharing with my employer, my students, or the public?”
The advice continues: “4. In your social media bios, state that the views expressed through the account represent your own opinions and not your employer. You do not need to name your employer. Consider posting positive statements about positions you support rather than negative statements about positions you disagree with. Some examples could be: ‘Academic freedom is nonnegotiable,’ ‘The faculty united will never be divided,’ ‘Higher ed research saves lives,’ ‘Higher ed transforms lives,’ ‘Politicians are interfering with your child’s education.’”
The AAUP then provides five digital safety tips that include setting up strong passwords, installing software updates as soon as they’re available, using two-factor authentication, and never using employer email addresses outside of work.
The last tip is the most revealing of how academics might be harassed online through campaigns like Turning Point USA’s “Professor Watchlist.” “Search for your name in common search engines to find out what is available about you online,” AAUP advises. “Put your name in quotation marks to narrow the search. Search both with and without your institution attached to your name.”
After that, the AAUP provided a list of trauma, counseling, and insurance services that its members have access to and a list of links to other pieces of information about protecting themselves.
“It’s good basic advice given that only a small number of faculty have spent years online in my experience, it’s a good place to start,” Pauline Shanks Kaurin, the former military ethics professor at the U.S. Naval War College, told 404 Media. Kaurin resigned her position at the college earlier this year after realizing that the college would not defend academic freedom during Trump’s second term.
“I think this reflects the heightened level of scrutiny and targeting that higher ed is under,” Kaurin said. “While it’s not entirely new, the scale is certainly aided by many platforms and actors that are engaging on [social media] now when in the past faculty might have gotten threatening phone calls, emails and hard copy letters.”
The AAUP guidance was co-written by Isaac Kamola, an associate professor at Trinity College and the director of the AAUP’s Center for Academic Freedom. Kamola told 404 Media that the recommendations came from years of experience working with faculty who’ve been on the receiving end of targeted harassment campaigns. “That’s incredibly destabilizing,” he said. “It’s hard to explain what it’s like until it happens to you.”
Kamola said that academic freedom was already under threat before Kirk’s death. “It’s a multi-decade strategy of making sure that certain people, certain bodies, certain ideas, are not in higher education, so that certain other ones can be, so that you can reproduce the ideas that a political apparatus would prefer existed in a university,” he said.
It’s telling that the AAUP felt the need to publish this, but the advice is practical and actionable, even for people outside of academia. Freedom of expression is under attack in America and though academics and other public figures are perhaps under the most threat, they aren’t the only ones. Secretary of Defense Pete Hegseth said the Pentagon is actively monitoring the social media activity of military personnel as well as civilian employees of the Department of Defense.
“It is unacceptable for military personnel and Department of War civilians to celebrate or mock the assassination of a fellow American,” Sean Parnell, public affairs officer at the Pentagon, wrote on X, using the new nickname for the Department of Defense. In the private sector, Sony fired one of its video game developers after they made a joke on X about Kirk’s death and multiple journalists have been fired for Kirk related comments.
AAUP did not immediately respond to 404 Media’s request for comment.
MSNBC fires analyst Matthew Dowd over Charlie Kirk shooting remarks
Dowd said the slain activist’s words may have fueled the violence that claimed his life, sparking backlash. Joseph Gedeon (The Guardian)
Librarians Are Being Asked to Find AI-Hallucinated Books#News
Librarians Are Being Asked to Find AI-Hallucinated Books
Reference librarian Eddie Kristan said patrons at the library where he works have been asking him to find books that don’t exist, without realizing they were hallucinated by AI, ever since the release of GPT-3.5 in late 2022. But the problem escalated over the summer after he fielded patron requests for the same fake book titles from real authors—the consequences of an AI-generated summer reading list circulated in special editions of the Chicago Sun-Times and The Philadelphia Inquirer earlier this year. At the time, the freelancer who made the list told 404 Media he used AI to produce it without fact-checking the outputs before syndication.

“We had people coming into the library and asking for those authors,” Kristan told 404 Media. He’s receiving similar requests for other types of media that don’t exist because they’ve been hallucinated by other AI-powered features. “It’s really, really frustrating, and it’s really setting us back as far as the community’s info literacy.”
AI tools are changing how patrons treat librarians, both online and IRL. Alison Macrina, executive director of Library Freedom Project, told 404 Media that early results from a recent survey on how AI tools are impacting libraries indicate that patrons are growing more trusting of their preferred generative AI tool or product, and of the veracity of the outputs they receive. She said librarians report being treated like robots over library reference chat, and patrons getting defensive about the veracity of recommendations they’ve received from an AI-powered chatbot. Essentially, more people trust their preferred LLM over their human librarian.
“Librarians are reporting this overall atmosphere of confusion and lack of trust they’re experiencing from their patrons,” Macrina told 404. “They’re seeing patrons having seemingly diminished critical thinking and curiosity. They’re definitely running into some of these psychosis and other mental health issues, and certainly seeing the people who are more widely adopting it also being those who have less digital literacy about it and a general sort of loss of retention.”
As a reference librarian, Kristan said he spends a lot of time thinking about how fallible the human mind can be, especially as he’s fielding more requests for things that don’t exist than ever before. Fortunately, he’s developed a system: Search for the presumed thing by title in the library catalog. If it’s not in the catalog, he checks the global library catalog WorldCat. If it isn’t there, he starts to get suspicious.
“Not being in WorldCat might mean it’s something that isn’t catalogued like a Zine, a broadcast, or something ephemeral, but if it’s parading as a traditional book and doesn’t have an entry in the collective library catalog, it might be AI,” Kristan explained.
From there, he might connect the title to a platform like Kindle Direct Publishing—one way AI-generated books enter the market—or the patron will tell him their source is an AI-powered chatbot, which, he then has to explain, likely hallucinated the name of the thing they’re looking for. A thing that doesn't exist.
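Kristan’s triage boils down to a short decision procedure. Here’s a minimal sketch of it in Python, with hypothetical placeholder functions standing in for the local catalog and WorldCat lookups (the article doesn’t describe any actual code; the function names below are invented for illustration).

```python
def search_local_catalog(title: str) -> bool:
    """Placeholder: query the library's own catalog for the title."""
    raise NotImplementedError

def search_worldcat(title: str) -> bool:
    """Placeholder: query the WorldCat union catalog for the title."""
    raise NotImplementedError

def triage_request(title: str) -> str:
    # Step 1: is it in our own collection?
    if search_local_catalog(title):
        return "in local catalog"
    # Step 2: does any library anywhere hold it?
    if search_worldcat(title):
        return "exists elsewhere; try interlibrary loan"
    # Step 3: absent from WorldCat it might be a zine, a broadcast, or something
    # ephemeral, but if it presents as a traditional book, suspect an AI
    # hallucination and ask the patron where they saw the title.
    return "not found; possible AI-hallucinated title"
```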
As much as library workers try to shield their institutions from the AI-generated content onslaught, the situation is and has been, in many ways, inevitable. Companies desperate to rush generative AI products to market are pushing flawed products onto the public that are predictably being used to pollute our information ecosystems. The consequences are that AI slop is entering libraries, everyone who uses AI products bears at least a little responsibility for the swarm, and every library worker, regardless of role, is being asked to try and mitigate the effects.
Collection development librarians are asking digital book vendors like OverDrive, Hoopla, and CloudLibrary to remove AI slop titles as they’re found. Subject specialists are expected to vet patron-requested titles that may have been written in part with AI, without having to read every single one. Library technology providers are rushing to implement tools that librarians say are making library systems and catalogs harder to use.
Jaime Taylor, an academic library resource management systems supervisor with the University of Massachusetts, says vendors are shoehorning Large Language Models (LLMs) into library systems in two main ways. The first is natural language search (NLS), or semantic search, which attempts to draw meaning from a query’s words to find relevant results. Taylor says these products are misleading in that they claim to eliminate the need for strict keyword searches or Boolean operators when searching library catalogs and databases, when really the LLM is doing the same work on the backend.
“These companies all advertise these tools as knowing your intent,” Taylor told 404 Media. “Understanding what you meant when you put those terms in. They don’t know. They don’t understand. None of those things are true. There is no technical way these tools can do that.”
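The pattern Taylor describes can be sketched in a few lines: an LLM rewrites the patron’s question into an ordinary Boolean query, and the backend runs the same strict search it always has. This is an illustrative sketch, not any vendor’s implementation; `complete()` is a placeholder for whatever model API a product actually calls.

```python
# The "natural language" box still resolves to a plain Boolean/keyword query;
# the LLM just writes it. Nothing here reflects a specific product's internals.

PROMPT = """Rewrite the user's question as a Boolean keyword query for a
library catalog. Use AND/OR/NOT and quoted phrases only. Return the query
and nothing else.

Question: {question}
Query:"""

def complete(prompt: str) -> str:
    """Placeholder for an LLM completion call."""
    raise NotImplementedError

def natural_language_search(question: str, run_boolean_search) -> list:
    boolean_query = complete(PROMPT.format(question=question)).strip()
    # The backend still executes an ordinary keyword/Boolean search; the
    # model's "understanding" is only a guess at which terms the user meant.
    return run_boolean_search(boolean_query)
```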
The other tool Taylor is seeing in library technology is AI-generated summaries of journal articles, monographs, and other academic sources, through a product called AI Insights, which feeds new information into an existing LLM using a system called retrieval-augmented generation (RAG). Through beta testing AI tools in library tech for companies like Clarivate, Elsevier, and EBSCO, Taylor and colleagues have found that RAG doesn’t do much to improve the accuracy of AI-generated summaries.
“It reads everything on both pages,” she added. “It can’t tell where the article you’re looking for starts and stops, so it gives you takeaways from every word on the page. This was really bad when we tested it with book reviews, because book reviews are often very short and there’ll be half a dozen on one page, which would end up giving us really mixed up information about every book review on the page, even though the record we were looking at was only looking for one of them, because it was a scanned page from an older journal.”
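Taylor’s complaint maps onto a basic property of retrieval-augmented generation: the model can only summarize whatever chunk the retriever hands it. A minimal, hypothetical RAG sketch makes the failure mode concrete; if the retrieval unit is a whole scanned page holding half a dozen short book reviews, the “summary” of one record is generated from all of them. The `embed()` and `complete()` functions are placeholders, not AI Insights’ actual internals.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError  # placeholder embedding model

def complete(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM call

def summarize_record(query: str, pages: list[str]) -> str:
    # Retrieve: pick the scanned page whose embedding is closest to the query.
    q = embed(query)
    scores = [float(np.dot(q, embed(page))) for page in pages]
    best_page = pages[int(np.argmax(scores))]
    # Generate: the model sees the *entire* page, with no notion of where the
    # requested article or review starts and stops, so short items that share
    # a page get blended into one set of "takeaways."
    return complete(f"Summarize the key takeaways of:\n\n{best_page}")
```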
Taylor says neither type of product is ready for market, especially not the AI summaries, which do what an abstract does but a lot worse. She’s turned off the ones she can, but expects fewer vendors will allow her and other librarians to do so in the future, so the companies can record more favorable use cases. The problem, she says, is that these companies are rushing products to market and making the skills academic librarians are trying to teach students and researchers obsolete.
“We are trying to teach how to construct useful, exact searching,” she said. “But really [these products’ intent] is to make that not happen. The problem with that in a university library is we’re trying to teach those skills but we have tools that negate that necessity. And because those tools don’t work well, you’ve not learned the skill and you’re still getting crap results, so you’re never going to get better results because you didn’t learn the skill.”
Plenty of library workers remain cautiously optimistic about the potential for generative AI integrations and what that could mean for information retrieval and categorization. But for most librarians, the rollout has been clunky, error-filled and disorienting, for them and their patrons.
“As someone who feels like a big part of my job is advocacy for the position, for the principles of the profession, I am here to not look at whether a resource is good or bad,” said Kristan. “I don’t look at the output, the relationship that it has with the patrons, and what it’s being used for in the long run of the future. Like, I’m not out here just breaking looms and machine weaving machinery just for the hell of it. I’m saying this is not good for the community and we need to find equitable alternatives to ensure that things are going well for the lives of our patrons.”
Readers Annoyed When Fantasy Novel Accidentally Leaves AI Prompt in Published Version, Showing Request to Copy Another Writer's Style
A scammer left glaringly obvious evidence of using AI to write portions of a fantasy novel and even tried to copy the style of a real author. Victor Tangermann (Futurism)
An AI-generated show on Russian TV includes Trump singing obnoxious songs and talking about golden toilets.#News
Russian State TV Launches AI-Generated News Satire Show
A television channel run by Russia’s Ministry of Defense is airing a program it claims is AI-generated. According to advertisements for the show, a neural network picks the topics it wants to discuss, then uses AI to generate the video. That includes putting French President Emmanuel Macron in hair curlers and a pink robe, making Trump talk about golden toilets, and showing EU Commission President Ursula von der Leyen singing a Soviet-era pop song while working in a factory.
The show—called Политукладчик or “PolitStacker,” according to a Google translation—airs every Friday on Zvezda, a television station owned by Russia’s Ministry of Defense. It’s hosted by “Natasha,” an AI avatar modeled on Russian journalist Nataliya Metlina. In a clip of the show, “Natasha” said that its resemblance to Metlina is intentional.
“I am the creation of artificial intelligence, entirely tuned to your informational preferences,” it said. “My task is to select all the political nonsense of the past week and fit it in your heads like candies in a little box.” The show’s title sequence and advertisements show gold-wrapped candies bearing the faces of politicians like Trump and Volodymyr Zelensky being sorted into a candy box.
“‘PolitStacker’ is the world’s first television program created by artificial intelligence,” said an ad for the show on the Russian social media network VK, according to Google Translate. “The AI itself selects, analyzes, and comments on the most important news, events, facts, and actions—as it sees them. The editorial team’s opinion may not coincide with the AI’s (though usually…it does). ‘PolitStacker’ is not just news, but a tough breakdown of political madness from a digital host who notices what others overlook.”
Data scientist Kalev Leetaru discovered the AI-generated Russian show as part of his work with the GDELT Project, which collaborates with the Internet Archive's TV News Archive, a project that records and stores television broadcasts from around the world. “If you just look at the show and you didn’t know it had AI associated with it, you would never guess that. It looks like a traditional propaganda show on Russian television,” Leetaru told 404 Media. “If they are using AI to the degree that they say they are, even if it’s just to pick topics, they mastered that formula in a way that others have not.”
PolitStacker’s 40-minute runtime is full of silly political commentary, jokes, and sloppy AI deepfakes that look like they were pulled from a five-year-old Instagram reel. In one episode, Macron, with curlers in his hair, adjusts Zelensky’s tie ahead of a meeting at the Kremlin. Later, a smiling Macron with six-pack abs stands in a closet in front of a clown costume and a leather jumpsuit. “Parts of it have an uncanny valley to it, parts of it are really really good. This is only their fourth episode and they’re already doing deep fake interviews with world leaders,” Leetaru said.
Image via the Internet Archive.
In one of the AI-generated Trump interviews, the American president talked about how he’d end the war in Ukraine by building a casino in Moscow with golden toilets. “And all the Russian oligarchs, they would all be inside. All their money would be inside. Problem solved. They would just play poker and forget about this whole war. A very bad deal for them, very distracting,” the deepfake Trump said.
Deepfake world leaders aren’t new and are pretty common across the internet. For Leetaru, the difference is that this is airing on a state-backed television station. “It’s still in parody form, but to my knowledge, no national television network show has even gone this far,” he told 404 Media. “Today it’s a parody video that’s pretty clearly a comedic interview. But, you know, how far will they take that? And does that inspire others to maybe step into spaces that they wouldn’t have before?”
Trump also loves AI and the AI aesthetic. Government social media accounts often post AI-generated slop pictures of Trump as the Pope or a Jedi. ICE and the DHS share pictures on official channels that paint over the horrifying reality of the administration's immigration policy with a sheen of AI slop. Trump shared an AI-generated video that imagined what Gaza would look like if he built a resort there. And he’s teamed with Perplexity to launch an AI-powered search engine for Truth Social.
“PolitStacker” is a parody show, but Russian media is experimenting with less comedic AI avatars as well. Earlier this year, the state-owned news agency Sputnik began to air what it called the “Dugin Digital Edition.” In these little lectures, an AI version of Russian philosopher Alexander Dugin discusses the news of the day in English.
Last year, a Hawaiian newspaper, The Garden Island, teamed with an Israeli company to produce a news show on YouTube staffed by AI anchors. Reactions to the program were overwhelmingly negative, it brought in fewer than 1,000 viewers per episode, and The Garden Island stopped making the show a few months after it began.
In a twist of fate, Leetaru only discovered Moscow’s AI-generated show thanks to an AI system of his own. The GDELT project is a massive undertaking that records thousands of hours of data from across the world and it uses various AI systems to generate transcripts, translate them, and create an index of what’s been archived. “In this case I totally skimmed over what I thought was an ad for a propaganda show and then some candy commercial. Instead it ended up being something that’s fascinating,” he said.
But his AI indexing tool noted Zvezda's new show as an AI-generated program that sought to “analyze political follies of the outgoing week.” He took a second look and was glad he did. “That’s the power of machines being able to catch things and guide your eye towards that.”
What he saw disturbed him. “Yes, it’s one show on an obscure Russian government-adjacent network using deepfakes for parody,” he said. “But the fact that a television network finally made that leap, to me, is a pivotal moment that I see as the tip of the iceberg.”
Historic Newspaper Uses Janky AI Newscasters Instead of Human Journalists
Hawaii’s The Garden Island newspaper is producing video news segments with AI. The union at its parent company calls it “digital colonialism.” Matthew Gault (404 Media)
Following Charlie Kirk’s assassination and the Trump administration’s promise to go after the “radical left,” a study showing most domestic terrorism is far-right was disappeared.#News
DOJ Deletes Study Showing Domestic Terrorists Are Most Often Right Wing
The Department of Justice has removed a study showing that white supremacist and far-right violence “continues to outpace all other types of terrorism and domestic violent extremism” in the United States.
The study, which was conducted by the National Institute of Justice and hosted on a DOJ website, was available there at least until September 12, 2025, according to an archive of the page saved by the Wayback Machine.
“The Department of Justice's Office of Justice Programs is currently reviewing its websites and materials in accordance with recent Executive Orders and related guidance,” reads a message on the page where the study was formerly hosted. “During this review, some pages and publications will be unavailable. We apologize for any inconvenience this may cause.”
Shortly after Donald Trump took office he issued an executive order that forced government agencies to scrub their sites of any mention of “diversity,” “gender,” “DEI,” and other “forbidden words” and perceived notions of “wokeness.” The executive order impacted every government agency, including NASA, and was a huge waste of engineers’ time.
We don’t know why the study about far-right extremist violence was removed recently, but it comes immediately after the assassination of conservative personality Charlie Kirk, accusations from the administration that the left is responsible for most of the political violence in the country, and a renewed commitment from the administration to crack down on the “radical left.”
“For years, those on the radical left have compared wonderful Americans like Charlie to Nazis and the world's worst mass murderers and criminals,” Trump said in a speech after Kirk’s death. “This kind of rhetoric is directly responsible for the terrorism that we're seeing in our country today, and it must stop right now. My administration will find each and every one of those who contributed to this atrocity and to other political violence, including the organizations that fund it and support it.”
Elon Musk, who owns X, recently tweeted that he was going to “fix” the platform’s AI assistant Grok after it cited research that showed right-wing violence was more common than left-wing violence: “My apologies, we are fixing this cringe idiocy by Grok,” he said.
Vice President JD Vance, who guest hosted Kirk’s podcast on Monday, also vowed to go after a “growing and powerful minority on the far left.”
“Since 1990, far-right extremists have committed far more ideologically motivated homicides than far-left or radical Islamist extremists, including 227 events that took more than 520 lives,” the study said. “In this same period, far-left extremists committed 42 ideologically motivated attacks that took 78 lives.”
The DOJ did not immediately respond to our request for comment. Steven Chermak, one of the study’s co-authors, declined to comment.
'The Bigotry Is Astounding:' Engineers Waste Time and Money Scanning .Gov Sites for 'Transgender' and Other Terms
The U.S. Department of Health and Human Services (HHS) is wasting workers’ time and taxpayer dollars. Emanuel Maiberg (404 Media)
OpenAI introduces new age prediction and verification methods after wave of teen suicide stories involving chatbots.#News
ChatGPT Will Guess Your Age and Might Require ID for Age Verification
OpenAI has announced it is introducing new safety measures for ChatGPT after a wave of stories and lawsuits accusing ChatGPT and other chatbots of playing a role in a number of teen suicide cases. ChatGPT will now attempt to guess a user’s age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old.
“We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” the company said in its announcement.
“I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking,” OpenAI CEO Sam Altman said on X.
In August, OpenAI was sued by the parents of Adam Raine, who died by suicide in April. The lawsuit alleges that ChatGPT helped him write the first draft of his suicide note, suggested improvements on his methods, ignored his early attempts at self-harm, and urged him not to talk to adults about what he was going through.
“Where a trusted human may have responded with concern and encouraged him to get professional help, ChatGPT pulled Adam deeper into a dark and hopeless place by assuring him that ‘many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control.’”
In August the Wall Street Journal also reported a story about a 56-year-old man who committed a murder-suicide after ChatGPT indulged his paranoia. Today, the Washington Post reported another story about another lawsuit alleging that a Character AI chatbot contributed to a 13-year-old girl’s death by suicide.
OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, more strict and invasive security measures.
In addition to attempting to guess or verify a user’s age, ChatGPT will now also apply different rules to teens who are using the chatbot.
“For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” the announcement said. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”
OpenAI’s post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called “uncensored” models, and a political shift to the right that sees many forms of content moderation as censorship, have caused OpenAI to loosen those restrictions.
“We want users to be able to use our tools in the way that they want, within very broad bounds of safety,” OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide, is summed up in the same post: “‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.”
OpenAI is not the first company that’s attempting to use machine learning to predict the age of its users. In July, YouTube announced it will use a similar method to “protect” teens from certain types of content on its platform.
Extending our built-in protections to more teens on YouTube
We're extending our existing built-in protections to more US teens on YouTube, using machine learning age estimation. James Beser (YouTube Official Blog)
An LLM breathed new life into 'Animal Crossing' and made the villagers rise up against their landlord.#News #VideoGames
AI-Powered Animal Crossing Villagers Begin Organizing Against Tom Nook
A software engineer in Austin has hooked up Animal Crossing to an AI and breathed new and disturbing life into its villagers. Powered by a Large Language Model (LLM) trained on Animal Crossing scripts and fed by an RSS reader, the anthropomorphic folk of the Nintendo classic spouted new dialogue, talked about current events, and actively plotted against Tom Nook and his predatory bell prices.
The Animal Crossing LLM is the work of Josh Fonseca, a software engineer in Austin, Texas, who works at a small startup. Ars Technica first reported on the mod. His personal blog is full of small software projects like a task manager for the text editor Vim, a mobile app that helps rock climbers find partners, and the Animal Crossing AI. He also documented the project in a YouTube video.
playlist.megaphone.fm?p=TBIEA2…
Fonseca started playing around with AI in college and told 404 Media that he’d always wanted to work in the video game industry. “Turns out it’s a pretty hard industry to break into,” he said. He also graduated in 2020. “I’m sure you’ve heard, something big happened that year.” He took the first job he could find, but kept playing around with video games and AI and had previously injected an LLM into Stardew Valley.
Fonseca used a Dolphin emulator running the original GameCube Animal Crossing on a MacBook to get the project working. According to his blog, an early challenge was just getting the AI and the game to communicate. “The solution came from a classic technique in game modding: Inter-Process Communication (IPC) via shared memory. The idea is to allocate a specific chunk of the GameCube's RAM to act as a ‘mailbox.’ My external Python script can write data directly into that memory address, and the game can read from it,” he said in the blog.
He told 404 Media that this was the most tedious part of the whole project. “The process of finding the memory address the dialogue actually lives at and getting it to scan to my MacBook, which has all these security features that really don’t want me to do that, and ending up writing to the memory took me forever,” he said. “The communication between the game and an external source was the biggest challenge for me.”
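The blog’s “mailbox” idea can be sketched as a small protocol: reserve a fixed region of emulated RAM, write a length and payload into it, then flip a flag byte that the game-side patch polls. The sketch below is an assumption-laden illustration; the address, memory layout, and `mem_read`/`mem_write` helpers are invented stand-ins for whatever emulator memory interface Fonseca actually used.

```python
MAILBOX_ADDR = 0x817F0000   # made-up address inside emulated RAM
FLAG_OFFSET  = 0            # 1 byte: 1 = new message waiting
LEN_OFFSET   = 1            # 2 bytes: payload length (big-endian)
DATA_OFFSET  = 3            # payload bytes follow

def mem_write(address: int, data: bytes) -> None:
    raise NotImplementedError  # placeholder for the emulator memory API

def mem_read(address: int, size: int) -> bytes:
    raise NotImplementedError  # placeholder for the emulator memory API

def post_dialogue(encoded_dialogue: bytes) -> None:
    """Write one encoded dialogue payload into the mailbox for the game to pick up."""
    mem_write(MAILBOX_ADDR + LEN_OFFSET, len(encoded_dialogue).to_bytes(2, "big"))
    mem_write(MAILBOX_ADDR + DATA_OFFSET, encoded_dialogue)
    # Set the flag last so the game never reads a half-written payload.
    mem_write(MAILBOX_ADDR + FLAG_OFFSET, b"\x01")

def mailbox_is_free() -> bool:
    """The game-side patch clears the flag once it has consumed the message."""
    return mem_read(MAILBOX_ADDR + FLAG_OFFSET, 1) == b"\x00"
```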
Once he got his code and the game talking, he ran into another problem. “Animal Crossing doesn't speak plain text. It speaks its own encoded language filled with control codes,” he said in his blog. “Think of it like HTML. Your browser doesn't just display words; it interprets tags like <b> to make text bold. Animal Crossing does the same. A special prefix byte, CHAR_CONTROL_CODE, tells the game engine, ‘The next byte isn't a character, it's a command!’”
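As a toy illustration of that encoding step, here is roughly what writing dialogue with a control-code prefix looks like. The byte values below are invented for illustration; they are not Animal Crossing’s real codes.

```python
CHAR_CONTROL_CODE = 0x7F                                      # hypothetical prefix byte
CMD = {"pause_short": 0x01, "color_red": 0x02, "end": 0x00}   # hypothetical commands

def encode_dialogue(tokens: list) -> bytes:
    """tokens is a mix of plain strings and ('cmd', name) tuples."""
    out = bytearray()
    for tok in tokens:
        if isinstance(tok, str):
            out += tok.encode("ascii", errors="replace")
        else:  # ('cmd', name): prefix byte tells the engine a command follows
            out += bytes([CHAR_CONTROL_CODE, CMD[tok[1]]])
    out += bytes([CHAR_CONTROL_CODE, CMD["end"]])
    return bytes(out)

# e.g. encode_dialogue(["Hi Josh!", ("cmd", "pause_short"), " Nice weather, huh?"])
```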
But this was a solved problem. The Animal Crossing modding community long ago learned the secrets of the villagers’ language, and Fonseca was able to build on their work. Once he understood the game’s dialogue systems, he built the AI brain. It took two LLMs: one to write the dialogue, and another he called “The Director” that would add in pauses, emphasize words with color, and choose the facial animations for the characters. He used a fine-tuned version of Google’s Gemini for this and said it was the most consistent model he’d used.
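A hedged sketch of that two-pass setup might look like the following, with `call_llm` as a placeholder for the model API (Fonseca says he used a fine-tuned Gemini) and a made-up JSON schema for the Director’s staging output.

```python
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder model call

def write_line(villager: str, personality: str, topic: str) -> str:
    # Pass 1: the "writer" model drafts the villager's line.
    return call_llm(
        f"You are {villager}, an Animal Crossing villager with a {personality} "
        f"personality. Write one short line of dialogue about: {topic}"
    )

def direct_line(line: str) -> dict:
    # Pass 2: the "Director" marks up pauses, emphasis, and a facial expression.
    staged = call_llm(
        "Mark up this Animal Crossing line for the game engine. Return JSON with "
        '"text" (the line with [pause] markers), "emphasis" (words to color), and '
        f'"expression" (one of: happy, surprised, angry, sleepy).\n\nLine: {line}'
    )
    return json.loads(staged)
```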
To make it work, he fine-tuned the model, meaning he trained it further on a small set of task-specific examples to make it better at producing the outputs he wanted. “You probably need a minimum of 50 to 100 really good examples in order to make it better,” he said.
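Assembling that kind of small fine-tuning set is mostly hand-writing examples. A minimal sketch, assuming a generic JSONL format (the field names are an assumption; the exact format depends on the fine-tuning API being used):

```python
import json

examples = [
    {
        "input": "Villager: Cookie (peppy). Topic: the weather.",
        "output": "Oh my gosh, it's SO sunny today, arfer! Perfect for a picnic!",
    },
    # ...50 to 100 more hand-written pairs in the same style...
]

# Write one JSON object per line, the usual shape for fine-tuning datasets.
with open("villager_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```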
Results of the experiment were mixed. Cookie, Scoot, and Cheri did indeed utter new phrases in keeping with their personalities. Things got weird when Fonseca hooked the game up to an RSS reader so the villagers could talk about real-world news. “If you watch the video, all the sources are heavily, politically, leaning in one direction,” he said. “I did use a Fox News feed, not for any other reason than I looked up ‘news RSS feeds’ and they were the first link and I didn’t really think it through. And then I started getting those results…I thought they would just present the news, not have leanings or opinions.”
“Trump’s gonna fight like heck to get rid of mail-in voting and machines!” Fitness obsessed duck Scoot said in the video. “I bet he’s got some serious stamina, like, all the way in to the finish line—zip, zoom!”
The pink dog Cookie was up on her Middle East news. “Oh my gosh, Josh 😀! Did you see the news?! Gal Gadot is in Israel supporting the families! Arfer,” she said, uttering her trademark catchphrase after sharing the latest about Israel.
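The RSS hookup itself is simple to sketch: pull a few recent headlines and hand them to the writer model as context. Only the `feedparser` library below is real; the feed URL, prompt wording, and `call_llm` placeholder are illustrative assumptions, not Fonseca’s code.

```python
import feedparser

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder model call

def headlines(feed_url: str, limit: int = 5) -> list[str]:
    # feedparser returns a feed object whose entries each carry a title.
    feed = feedparser.parse(feed_url)
    return [entry.title for entry in feed.entries[:limit]]

def news_line(villager: str, feed_url: str) -> str:
    recent = "\n".join(headlines(feed_url))
    return call_llm(
        f"You are {villager}, an Animal Crossing villager. Here are today's "
        f"headlines:\n{recent}\n\nMention one of them in a single cheerful line."
    )
```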
In the final part of the experiment, Fonseca enabled the villagers to gossip. “I gave them a tiny shared memory for gossip, who said what, to whom, and how they felt,” he said in the blog.
The villagers almost instantly turned on Tom Nook, the tanuki who runs the local stores and holds most of Animal Crossing's inhabitants in debt. “Everything’s going great in town, but sometimes I feel like Tom Nook is, like, taking all the bells!” Cookie said.
“Those of us with big dreams are being squashed by Tom Nook! We gotta take our town back!” Cheri the bear cub said.
“This place is starting to feel more like Nook’s prison, y’know?” said Scoot.
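The “tiny shared memory for gossip” can be sketched as a short log of who said what, to whom, and how they felt, trimmed and folded back into each villager’s prompt. The structure below is an assumption for illustration; the blog doesn’t spell out a format.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Gossip:
    speaker: str    # who said it
    listener: str   # who they told
    claim: str      # what was said ("Tom Nook is taking all the bells!")
    feeling: str    # how the speaker felt ("resentful", "excited", ...)

gossip_log: deque = deque(maxlen=20)  # keep it tiny: only the most recent gossip survives

def remember(speaker: str, listener: str, claim: str, feeling: str) -> None:
    gossip_log.append(Gossip(speaker, listener, claim, feeling))

def gossip_context(villager: str) -> str:
    """Everything this villager has heard or said, for inclusion in their next prompt."""
    heard = [g for g in gossip_log if villager in (g.speaker, g.listener)]
    return "\n".join(f"{g.speaker} told {g.listener}: {g.claim} ({g.feeling})" for g in heard)
```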
youtube.com/embed/7AyEzA5ziE0?…
Why do this to Animal Crossing? Why make Scoot and Cheri learn about Gal Gadot, Israel, and Trump?
“I’ve always liked nostalgic content,” Fonseca said. His TikTok and YouTube algorithms are filled with liminal spaces and music from his childhood that’s been detuned. He’s gotten into hauntology, a philosophical idea that studies—among other things—promised futures that did not come to pass.
He sees projects like this as a way of linking the past and the future. “When I was a child I was like, ‘Games are gonna get better and better every year,’” he said. “But after 20 years of playing games I’ve become a little jaded and I’m like, ‘oh there hasn’t really been that much innovation.’ So I really like the idea of mixing those old games with all the future technologies that I’m interested in. And I feel like I’m fulfilling those promised futures in a way.”
He knows that not everyone is a fan of AI. “A lot of people say that dialogue with AI just cannot be because of how much it sounds like AI,” he said. “And to some extent I think people are right. Most people can detect ChatGPT or Gemini language from a mile away. But I really think, if you fine tune it, I was surprised at just how good the results were.”
Animal Crossing’s dialogue is simple, and that simplicity makes it a decent test case for AI video game mods, but Fonseca thinks he can do similar things with more complicated games. “There’s been a lot of discussion around how what I’m doing isn’t possible when there’s like, tasks or quests, because LLMs can’t properly guide you to that task without hallucinating. I think it might be more possible than people think,” he said. “So I would like to either try out my own very small game or take a game that has these kinds of quests and put together a demo of how that might be possible.”
He knows people balk at using AI to make video games, and art in general, but believes it’ll be a net benefit. “There will always be human writers and I absolutely want there to be human writers handling the core,” he said. “I would hope that AI is going to be a tool that doesn’t take away any of the best writers, but maybe helps them add more to their game that maybe wouldn’t have existed otherwise. I would hope that this just helps create more art in the world. I think I see the total art in the world increasing as a good thing…now I know some people would say that using AI ceases to make it art, but I’m also very deep in the programming aspect of it. What it takes to make these things is so incredible that it still feels like magic to me. Maybe on some level I’m still hypnotized by that.”
Modder injects AI dialogue into 2002’s Animal Crossing using memory hack
Unofficial mod lets classic Nintendo GameCube title use AI chatbots with amusing results. Benj Edwards (Ars Technica)
Harsh lessons from 'Dark Souls' told me to turn my ass around when I got to the red flower jumping puzzle.#News
Does Silksong Seem Unreasonably Hard? You Probably Took a Wrong Turn
There is an aggrieved cry reverberating through the places on the internet where gamers gather. To hear them tell it, Hollow Knight: Silksong, the sequel to the stone-cold classic 2017 platformer, is too damned hard. There’s a particular jumping puzzle involving spikes and red flowers that many are struggling with and they’re filming their frustration and putting it up on the internet, showing their ass for everyone to see.
playlist.megaphone.fm?p=TBIEA2…
Even 404 Media’s own Joseph Cox hit these red flowers and had the temerity to declare Silksong a “bad game” that he was “disappointed” in given his love for the original Hollow Knight.
youtube.com/embed/3yxR6H_Zh0M?…youtube.com/embed/GD8ZyYE1K7k?…youtube.com/embed/ZsmxOMijLtE?…
Couldn't be me.
I, too, got to the area just outside Hunter’s March in Silksong where the horrible red flowers bloom. Unlike others, however, my gamer instincts kicked in. I knew what to do. “This is the Dark Souls Catacombs situation all over again,” I said to myself. Then I turned around and came back later.
And that has made all the difference.
In the original Dark Souls, once players clear the opening area they come to Firelink Shrine. From there they can go into Undead Burg, the preferred starting path, or descend into The Catacombs where horrifying undying skeletons block the entrance to a cave. One will open the game up before you, the other will kill new players dead. A lot of Dark Souls players have raged and quit the game over the years because they went into The Catacombs instead of the Undead Burg.
Like Dark Souls, Silksong has an open-ish world where portions of the map are hard-locked by items and soft-locked by player skill checks. One of the entrances into the flower-laden Hunter’s March is in an early game area blocked by a mini-boss fight with a burly ant. The first time I fought the ant, it killed me over and over again, and I took that as a sign I should go elsewhere.
Highly skilled players can kill the ant, but it’s much easier after you’ve gotten some basic items and abilities. I had several other paths I could take to progress the game, so I marked the ant’s location and moved on.
As I explored more of Silksong, I acquired several powerups that trivialized the fight with the ant and made it easy to navigate the flower jumping puzzles behind him. The first is Swift Step, a dash ability, which is in Deep Docks in the south-eastern portion of the map. The second is the Wanderer’s Crest, which is near the start of the game behind a locked door you get the key for in Silksong’s first town.
The dash allowed me to adjust my horizontal position in the air, but it’s the Wanderer’s Crest that made the flowers easy to navigate. The red flowers are littered throughout Hunter’s March and players have to hit them with a down attack to get a boosted jump and cross pits of spikes. By default, Hornet—the player character—down attacks at a 45-degree angle. The Wanderer’s Crest allows you to attack directly below you and makes the puzzles much easier to navigate.
Cox, bless his heart, hit the burly red ant miniboss and brute forced his way past. Then, like so many other desperate gamers, he proceeded to attempt to navigate the red flower jumping puzzles without the right power ups. He had no Swift Step. He had no Wanderer’s Crest. And thus, he raged.
He’s not alone. Watching the videos of jumping puzzles online I noticed that a lot of the players didn’t seem to have the dash or the downward attack.
[Instagram embed: a post shared by Get This Bag (@get_this_bag)]
[Instagram embed: a post shared by PromoNinja (@promoninja_us)]
Games communicate to players in different ways, and gamers often complain about annoying and obvious signposting like big splashes of yellow paint. But when a truly amazing game comes along that tries to gently steer the player with burly ants and difficult puzzles, they don’t appreciate it and they don’t listen. If you’re really stuck in Silksong, try going somewhere else.
Permanately stuck in Catacombs? :: DARK SOULS™: REMASTERED General Discussions
I am trying to leave the Catacombs completely from the new bonfire near Vamos but cannot seem to do so. I cannot warp to other bonfires yet. steamcommunity.com
The mainstream media seems entirely uninterested in explaining Charlie Kirk's work.#News #CharlieKirk
Charlie Kirk Was Not Practicing Politics the Right Way
Thursday morning, Ezra Klein at the New York Times published a column titled “Charlie Kirk Was Practicing Politics the Right Way.” Klein’s general thesis is that Kirk was willing to talk to anyone, regardless of their beliefs, as evidenced by what he was doing while he was shot, which was debating people on college campuses. Klein is not alone in this take; the overwhelming sentiment from America’s largest media institutions in the immediate aftermath of his death has been to paint Kirk as a mainstream political commentator, someone whose politics liberals and leftists may not agree with but someone who was open to dialogue and who espoused the virtues of free speech.
“You can dislike much of what Kirk believed and the following statement is still true: Kirk was practicing politics in exactly the right way. He was showing up to campuses and talking with anyone who would talk to him,” Klein wrote. “He was one of the era’s most effective practitioners of persuasion. When the left thought its hold on the hearts and minds of college students was nearly absolute, Kirk showed up again and again to break it.”
“I envied what he built. A taste for disagreement is a virtue in a democracy. Liberalism could use more of his moxie and fearlessness,” Klein continued.
Kirk is being posthumously celebrated by much of the mainstream press as a noble sparring partner for center-left politicians and pundits. Meanwhile, the very real, very negative, and sometimes violent impacts of his rhetoric and his political projects are being glossed over or ignored entirely. In the New York Times, Kirk was an “energetic” voice who was “critical of gay and transgender rights,” but few of the national pundits have encouraged people to actually go read what Kirk tweeted or listen to what he said on his podcast to millions and millions of people. “Whatever you think of Kirk (I had many disagreements with him, and he with me), when he died he was doing exactly what we ask people to do on campus: Show up. Debate. Talk. Engage peacefully, even when emotions run high,” David French wrote in the Times. “In fact, that’s how he made his name, in debate after debate on campus after campus.”
This does not mean Kirk deserved to die or that political violence is ever justified. What happened to Kirk is horrifying, and we fear deeply for whatever will happen next. But it is undeniable that Kirk was not just a part of the extremely tense, very dangerous national dialogue, he was an accelerationist force whose work to dehumanize LGBTQ+ people and threaten the free speech of professors, teachers, and school board members around the country has directly put the livelihoods and physical safety of many people in danger. We do no one any favors by ignoring this, even in the immediate aftermath of an assassination like this.
Kirk claimed that his Turning Point USA sent “80+ buses full of patriots” to the January 6 insurrection. Turning Point USA has also run a “Professor Watchlist,” “School Board Watchlist,” and “Campus Reform” for nearly a decade.
“America’s radical education system has taken a devastating toll on our children,” Kirk said in an intro video posted on these projects’ websites. “From sexualized material in textbooks to teaching CRT and implementing the 1619 Project doctrine, the radical leftist agenda will not stop … The School Board Watch List exposes school districts that host drag queen story hour, teach courses on transgenderism, and implement unsafe gender neutral bathroom policies. The Professor Watch List uncovers the most radical left-wing professors from universities that are known to suppress conservative voices and advance the progressive agenda.”
These websites have been directly tied to harassment and threats against professors and school board members all over the country. Professor Watchlist lists hundreds of professors around the country, many of them Black or trans, and their perceived radical agendas, which include things like supporting gun control, “socialism,” “Antifa,” “abortion,” and acknowledging that trans people exist and racism exists. Trans professors are misgendered on the website, and numerous people who have been listed on it have publicly spoken about receiving death threats and being harassed after being listed on the site.
One professor on the watchlist who 404 Media is granting anonymity for his safety said once he was added to the list, he started receiving anonymous letters in his campus mailbox. “‘You're everything wrong with colleges,’ ‘watch your step, we're watching you’ kind of stuff,” he said, “One anonymous DM on Twitter had a picture of my house and driveway, which was chilling.” His president and provost also received emails attempting to discredit him with “all the allegedly communist and subversive stuff I was up to,” he said. “It was all certainly concerning, but compared to colleagues who are people of color and/or women, I feel like the volume was smaller for me. But it was certainly not a great feeling to experience that stuff. That watchlist fucked up careers and ruined lives.”
The American Association of University Professors said in an open letter in 2017 that Professor Watchlist “lists names of professors with their institutional affiliations and photographs, thereby making it easy for would-be stalkers and cyberbullies to target them. Individual faculty members who have been included on such lists or singled out elsewhere have been subject to threats of physical violence, including sexual assault, through hundreds of e-mails, calls, and social media postings. Such threatening messages are likely to stifle the free expression of the targeted faculty member; further, the publicity that such cases attract can cause others to self-censor so as to avoid being subjected to similar treatment.” Campus free speech rights group FIRE found that censorship and punishment of professors skyrocketed between 2020 and 2023, in part because of efforts from Professor Watchlist.
Many more professors who Turning Point USA added to their watchlist have spoken out in the past about how being targeted upended their lives, brought years of harassment down on them and their colleagues, and resulted in death threats against them and their loved ones.
At Arizona State University, a professor on the watchlist was assaulted by two people from Turning Point USA in 2023.
“Earlier this year, I wrote to Turning Point USA to request that it remove ASU professors from its Professor Watchlist. I did not receive a response,” university president Michael Crow wrote in a statement. “Instead, the incident we’ve all now witnessed on the video shows Turning Point’s refusal to stop dangerous practices that result in both physical and mental harm to ASU faculty members, which they then apparently exploit for fundraising, social media clicks and financial gain.” Crow said the Professor Watchlist resulted in “antisemitic, anti-LGBTQ+ and misogynistic attacks on ASU faculty with whom Turning Point USA and its followers disagree,” and called the organization’s tactics “anti-democratic, anti-free speech and completely contrary” to the spirit of scholarship.
Kirk’s death is a horrifying moment in our current American nightmare. Kirk’s actions and rhetoric do not justify what happened to him because they cannot be justified. But Kirk was not merely someone who showed up to college campuses and listened. It should not be controversial to plainly state some of the impact of his work.
ASU President: Turning Point USA crew accused of 'bloodying' ASU professor's face
Arizona State Police released footage of an incident between an ASU English professor, a reporter and cameraman from Turning Point USA that happened on Oct. 11. University president Dr. Michael Crow released a statement calling the men "cowards." Jessica Johnson (FOX 10 Phoenix)