After condemnation from Trump’s AI czar, Anthropic’s CEO promised its AI is not woke. #News #AI #Anthropic
Anthropic Promises Trump Admin Its AI Is Not Woke
Anthropic CEO Dario Amodei has published a lengthy statement on the company’s site in which he promises that Anthropic’s AI models are not politically biased, that the company remains committed to American leadership in the AI industry, and that it supports the AI startup space in particular.

Amodei doesn’t explicitly say why he feels the need to state positions that should be obvious for the CEO of an American AI company to hold, but the reason is that the Trump administration’s so-called “AI Czar” has publicly accused Anthropic of producing “woke AI” that it’s trying to force on the population via regulatory capture.
The current round of beef began earlier this month when Anthropic’s co-founder and head of policy Jack Clark published a written version of a talk he gave at The Curve AI conference in Berkeley. The piece, published on Clark’s personal blog, is full of tortured analogies and self-serving sci-fi speculation about the future of AI, but essentially boils down to Clark saying he thinks artificial general intelligence is possible, extremely powerful, potentially dangerous, and scary to the general population. In order to prevent disaster, put the appropriate policies in place, and make people embrace AI positively, he said, AI companies should be transparent about what they are building and listen to people’s concerns.
“What we are dealing with is a real and mysterious creature, not a simple and predictable machine,” he wrote. “And like all the best fairytales, the creature is of our own creation. Only by acknowledging it as being real and by mastering our own fears do we even have a chance to understand it, make peace with it, and figure out a way to tame it and live together.”
Venture capitalist, podcaster, and the White House’s “AI and Crypto Czar” David Sacks was not a fan of Clark’s blog.
“Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering,” Sacks said on X in response to Clark’s blog. “It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.”
Things escalated yesterday when Reid Hoffman, LinkedIn’s co-founder and a megadonor to the Democratic party, supported Anthropic in a thread on X, saying “Anthropic was one of the good guys” because it's one of the companies “trying to deploy AI the right way, thoughtfully, safely, and enormously beneficial for society.” Hoffman also appeared to take a jab at Elon Musk’s xAI, saying “Some other labs are making decisions that clearly disregard safety and societal impact (e.g. bots that sometimes go full-fascist) and that’s a choice. So is choosing not to support them.”
Sacks responded to Hoffman on X, saying “The leading funder of lawfare and dirty tricks against President Trump wants you to know that ‘Anthropic is one of the good guys.’ Thanks for clarifying that. All we needed to know.” Musk hopped into the replies saying: “Indeed.”
“The real issue is not research but rather Anthropic’s agenda to backdoor Woke AI and other AI regulations through Blue states like California,” Sacks said. Here, Sacks is referring to Anthropic’s opposition to Trump’s One Big Beautiful Bill, which sought to stop states from regulating AI in any way for 10 years, and its backing of California’s SB 53, which requires AI companies that generate more than $500 million in annual revenue to make their safety protocols public.
All this sniping leads us to Amodei’s statement today, which doesn’t mention the beef above but is clearly designed to calm investors who are watching Trump’s AI guy publicly say that one of the biggest AI companies in the world sucks.
“I fully believe that Anthropic, the administration, and leaders across the political spectrum want the same thing: to ensure that powerful AI technology benefits the American people and that America advances and secures its lead in AI development,” Amodei said. “Despite our track record of communicating frequently and transparently about our positions, there has been a recent uptick in inaccurate claims about Anthropic's policy stances. Some are significant enough that they warrant setting the record straight.”
Amodei then goes on to count the ways in which Anthropic already works with the federal government and directly grovels to Trump.
“Anthropic publicly praised President Trump’s AI Action Plan. We have been supportive of the President’s efforts to expand energy provision in the US in order to win the AI race, and I personally attended an AI and energy summit in Pennsylvania with President Trump, where he and I had a good conversation about US leadership in AI,” he said. “Anthropic’s Chief Product Officer attended a White House event where we joined a pledge to accelerate healthcare applications of AI, and our Head of External Affairs attended the White House’s AI Education Taskforce event to support their efforts to advance AI fluency for teachers.”
The more substantive part of his argument is that Anthropic didn’t support SB 53 until it made an exemption for all but the biggest AI labs, and that several studies found that Anthropic’s AI models are not “uniquely politically biased” (read: not woke).
“Again, we believe we share those goals with the Trump administration, both sides of Congress, and the public,” Amodei wrote. “We are going to keep being honest and straightforward, and will stand up for the policies we believe are right. The stakes of this technology are too great for us to do otherwise.”
Many of the AI industry’s most vocal critics would agree with Sacks that Clark’s blog and AI “fear-mongering” more broadly are self-serving, because they make AI companies seem more valuable and powerful. Some critics would also agree that AI companies take advantage of that perspective to influence AI regulation in ways that benefit them as incumbents.
It would be a far more compelling argument if it didn’t come from Sacks and Musk, who found a much better way to influence AI regulation to benefit their companies and investments: working for the president directly and publicly bullying their competitors.
Americans Prioritize AI Safety and Data Security
Most Americans favor maintaining rules for AI safety and security, as well as independent testing and collaboration with allies in developing the technology. Benedict Vigers (Gallup)
The same hackers who doxed hundreds of DHS, ICE, and FBI officials now say they have the personal data of tens of thousands of officials from the NSA, Air Force, Defense Intelligence Agency, and many other agencies. #News #ICE
Hackers Say They Have Personal Data of Thousands of NSA and Other Government Officials
A hacking group that recently doxed hundreds of government officials, including from the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE), has now built dossiers on tens of thousands of U.S. government officials, including NSA employees, a member of the group told 404 Media. The member said the group did this by digging through its caches of stolen Salesforce customer data. The person provided 404 Media with samples of this information, which 404 Media was able to corroborate.

As well as NSA officials, the person sent 404 Media personal data on officials from the Defense Intelligence Agency (DIA), the Federal Trade Commission (FTC), Federal Aviation Administration (FAA), Centers for Disease Control and Prevention (CDC), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), members of the Air Force, and several other agencies.
The news comes after the Telegram channel belonging to the group, called Scattered LAPSUS$ Hunters, went down following the mass doxing of DHS officials and the apparent doxing of a specific NSA official. It also provides more clarity on what sort of data may have been stolen from Salesforce’s customers in a series of breaches earlier this year, data that Scattered LAPSUS$ Hunters has attempted to extort Salesforce over.
💡
Do you know anything else about this breach? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“That’s how we’re pulling thousands of gov [government] employee records,” the member told 404 Media. “There were 2000+ more records,” they said, referring to the personal data of NSA officials. In total, they said the group has private data on more than 22,000 government officials.
Scattered LAPSUS$ Hunters’ name is an amalgamation of other infamous hacking groups—Scattered Spider, LAPSUS$, and ShinyHunters. They all come from the overarching online phenomenon known as the Com. On Discord servers and Telegram channels, thousands of scammers, hackers, fraudsters, gamers, or just people hanging out congregate, hack targets big and small, and beef with one another. The Com has given birth to a number of loose-knit but prolific hacking groups, including those behind massive breaches like the MGM Resorts hack, and has normalized extreme physical violence among cybercriminals and against their victims.
On Thursday, 404 Media reported Scattered LAPSUS$ Hunters had posted the names and personal information of hundreds of government officials from DHS, ICE, the FBI, and the Department of Justice. 404 Media verified portions of that data and found the dox sometimes included people’s residential addresses. The group posted the dox along with messages such as “I want my MONEY MEXICO,” a reference to DHS’s unsubstantiated claim that Mexican cartels are offering thousands of dollars for dox on agents.
Hackers Dox Hundreds of DHS, ICE, FBI, and DOJ Officials
Scattered LAPSUS$ Hunters—one of the latest amalgamations of typically young, reckless, and English-speaking hackers—posted the apparent phone numbers and addresses of hundreds of government officials, including nearly 700 from DHS. Joseph Cox (404 Media)
After publication of that article, a member of Scattered LAPSUS$ Hunters reached out to 404 Media. To prove their affiliation with the group, they sent a message signed with the ShinyHunters PGP key with the text “Verification for Joseph Cox” and the date. PGP keys can be used to encrypt or sign messages to prove they’re coming from a specific person, or at least from someone who holds that key, which is typically kept private.

They sent 404 Media personal data related to DIA, FTC, FAA, CDC, ATF, and Air Force members. They also sent personal information on officials from the Food and Drug Administration (FDA), Health and Human Services (HHS), and the State Department. 404 Media verified parts of the data by comparing them to previously breached data collected by cybersecurity company District 4 Labs. That comparison showed that many parts of the private information did relate to government officials with the same name, agency, and phone number.
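For readers curious what that kind of PGP signature check looks like in practice, here is a minimal, illustrative sketch using the python-gnupg library (which wraps a locally installed GnuPG binary). The key and message file names below are hypothetical placeholders, not files from this story; the point is only that a signature verifies against a previously published public key.

```python
import gnupg

# Minimal sketch of verifying a clearsigned PGP message.
# Assumes the gpg binary is installed; filenames are hypothetical placeholders.
gpg = gnupg.GPG()

# Import the public key the group has published in the past.
with open("shinyhunters_public_key.asc", "r") as f:
    gpg.import_keys(f.read())

# Verify the signed text ("Verification for Joseph Cox" plus a date).
with open("signed_message.asc", "r") as f:
    verified = gpg.verify(f.read())

if verified.valid:
    # Signature checks out against a key in the local keyring.
    print("Good signature from:", verified.username, verified.fingerprint)
else:
    print("Signature did not verify:", verified.status)
```

A valid result only proves control of that key, not a real-world identity, which is consistent with the framing above that the message shows the sender is the group or at least someone who holds the key.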
Apart from the earlier DHS and DOJ data, the hackers don’t appear to have posted this more wide-ranging data publicly. Most of those agencies did not immediately respond to a request for comment. The FTC and Air Force declined to comment. DHS has not replied to multiple requests for comment sent since Thursday. Neither has Salesforce.
The member said the personal data of government officials “originates from Salesforce breaches.” This summer Scattered LAPSUS$ Hunters stole a wealth of data from companies that were using Salesforce tech, with the group claiming it obtained more than a billion records. Customers included Disney/Hulu, FedEx, Toyota, UPS, and many more. The hackers did this by socially engineering victims and tricking them into connecting to a fraudulent version of a Salesforce app. The hackers tried to extort Salesforce, threatening to release the data on a public website, and Salesforce told clients it would not pay the ransom, Bloomberg reported.
On Friday the member said the group was done with extorting Salesforce. But they continued to build dossiers on government officials. Before the dump of DHS, ICE, and FBI dox, the group posted the alleged dox of an NSA official to their Telegram group.
Over the weekend that channel went down and the member claimed the group’s server was taken “offline, presumably seized.”
The doxing of the officials “must’ve really triggered it, I think it’s because of the NSA dox,” the member told 404 Media.
Matthew Gault contributed reporting.
How Google, Adidas, and more were breached in a Salesforce scam | Malwarebytes
Hackers tricked workers over the phone at Google, Adidas, and more to grant access to Salesforce data. David Ruiz (Malwarebytes)
"What if I could create Théâtre D’opéra Spatial as if it were physically created by hand? Not actually, of course."#News #AI
Creator of Infamous AI Painting Tells Court He's a Real Artist
In 2022, Jason Allen outraged artists around the world when he won the Colorado State Fair Fine Arts Competition with a piece of AI-generated art. A month later, he tried to copyright the image, got denied, and started a fight with the U.S. Copyright Office (USCO) that has dragged on for three years. In August, he filed a new brief he hopes will finally give him a copyright over the image Midjourney made for him, called Théâtre D’opéra Spatial. He’s also set to start selling oil-print reproductions of the image.

A press release announcing both the filing and the sale claims these prints “[evoke] the unmistakable gravitas of a hand-painted masterwork one might find in a 19th-century oil painting.” The court filing is also defensive of Allen’s work. “It would be impossible to describe the Work as ‘garden variety’—the Work literally won a state art competition,” it said.
“So many have said I’m not an artist and this isn’t art,” Allen said in a press release announcing both the oil-print sales and the court filing. “Being called an artist or not doesn’t concern me, but the work and my expression of it do. I asked myself, what could make this undeniably art? What if I could create Théâtre D’opéra Spatial as if it were physically created by hand? Not actually, of course, but what if I could achieve that using technology? Surely that would be the answer.”

Allen’s 2022 win at the Colorado State Fair was an inflection point. The beta version of the image generation software Midjourney had launched a few months before the competition, and AI-generated images were still a novelty. We were years away from the nightmarish tide of slop we all live with today, but the piece was highly controversial and represented one of the first major incursions of AI-generated work into human spaces.
Théâtre D’opéra Spatial was big news at the time. It shook artistic communities, and people began to speak out against AI-generated art. Many learned that their works had been fed into the training data for massive, data-hungry image generators like Midjourney. About a month after he won the competition and courted controversy, Allen applied to copyright the image. The USCO rejected the application. He’s been filing appeals ever since and has thus far lost every one.
The oil-prints represent an attempt to will the AI-generated image into a physical form called an “elegraph.” These won’t be hand-painted versions of the picture Midjourney made. Instead, they’ll employ a 3D printing technique that uses oil paints to create a reproduction of the image as if a human being made it, complete—Allen claimed—with brushstrokes.
“People said anyone could copy my work online, sell it, and I would have no recourse. They’re not technically wrong,” Allen said in the press release. “If we win my case, copyright will apply retroactively. Regardless, they’ll never reproduce the elegraph. This artifact is singular. It’s real. It’s the answer to the petulant idea that this isn’t art. Long live Art 2.0.”
The elegraph is the work of a company called Arius, which is most famous for working with museums to conduct high-quality scans of real paintings that capture the individual brushstrokes of masterworks. According to Allen’s press release, Arius’ elegraphs of Théâtre D’opéra Spatial will make the image appear as if it is a hand-painted piece of art through “a proprietary technique that translates digital creation into a physical artifact indistinguishable in presence and depth from the great oil paintings of history…its textures, lighting, brushwork, and composition, all recalling the timeless mastery of the European salons.”
Allen and his lawyers filed a request for summary judgment with the U.S. District Court of Colorado on August 8, 2025. The 44-page legal argument rehashes many of the appeals and arguments Allen and his lawyers have made about the AI-generated image over the past few years.
“He created his image, in part, by providing hundreds of iterative text prompts to an artificial intelligence (“AI”)-based system called Midjourney to help express his intellectual vision,” it said. “Allen produced this artwork using ‘hundreds of iterations’ of prompts, and after he ‘experimented with over 600 prompts,’ he cropped and completed the final Work, touching it up manually and upscaling using additional software.”
Allen’s argument is that prompt engineering is an artistic process, and that even though a machine made the final image, he should be considered the artist because he told the machine what to do. “In the Board’s view, Mr. Allen’s actions as described do not make him the author of the Midjourney Image because his sole contribution to the Midjourney Image was inputting the text prompt that produced it,” a 2023 review of previous rejections by the USCO said.
During its various investigations into the case, the USCO did a lot of research into how Midjourney and other AI-image generators work. “It is the Office’s understanding that, because Midjourney does not treat text prompts as direct instructions, users may need to attempt hundreds of iterations before landing upon an image they find satisfactory. This appears to be the case for Mr. Allen, who experimented with over 600 prompts,” its 2023 review said.
This new filing is an attempt by Allen and his lawyers to get around those previous judgments and appeal to higher courts by accusing the USCO of usurping congressional authority. “The filing argues that by attempting to redefine the term ‘author’ (a power reserved to Congress) the Copyright Office has acted beyond its lawful authority, effectively placing itself above judicial and legislative oversight.”
We’ll see how well that plays in court. In the meantime, Allen is selling oil-prints of the image Midjourney made for him.
AI Slop Is a Brute Force Attack on the Algorithms That Control Reality
Generative AI spammers are brute forcing the internet, and it is working. Jason Koebler (404 Media)
Scattered LAPSUS$ Hunters—one of the latest amalgamations of typically young, reckless, and English-speaking hackers—posted the apparent phone numbers and addresses of hundreds of government officials, including nearly 700 from DHS. #News
Hackers Dox Hundreds of DHS, ICE, FBI, and DOJ Officials
A group of hackers from the Com, a loose-knit community behind some of the most significant data breaches in recent years, have posted the names and personal information of hundreds of government officials, including people working for the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE).

“I want my MONEY MEXICO,” a user of the Scattered LAPSUS$ Hunters Telegram channel, which is a combination of a series of other hacking group names associated with the Com, posted on Thursday. The message was referencing a claim from the DHS that Mexican cartels have begun offering thousands of dollars for doxing agents. The U.S. government has not provided any evidence for this claim.
💡
Do you know anything else about this data dump? Do you work for any of the agencies impacted? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.” #News
Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors
The Wikimedia Foundation, the nonprofit organization that hosts Wikipedia, says that it’s seeing a significant decline in human traffic to the online encyclopedia because more people are getting the information that’s on Wikipedia from generative AI chatbots that were trained on its articles, and from search engines that summarize those articles without users ever clicking through to the site.

The Wikimedia Foundation said that this poses a risk to the long-term sustainability of Wikipedia.
“We welcome new ways for people to gain knowledge. However, AI chatbots, search engines, and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably,” the Foundation’s Senior Director of Product Marshall Miller said in a blog post. “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
Ironically, while generative AI and search engines are causing a decline in direct traffic to Wikipedia, its data is more valuable to them than ever. Wikipedia articles are some of the most common training data for AI models, and Google and other platforms have for years mined Wikipedia articles to power features like Google’s featured snippets and Knowledge Panels, which siphon traffic away from Wikipedia itself.
“Almost all large language models train on Wikipedia datasets, and search engines and social media platforms prioritize its information to respond to questions from their users,” Miller said. “That means that people are reading the knowledge created by Wikimedia volunteers all over the internet, even if they don’t visit wikipedia.org—this human-created knowledge has become even more important to the spread of reliable information online.”
Miller said that in May 2025 Wikipedia noticed unusually high amounts of apparently human traffic originating mostly from Brazil. He didn’t go into details, but explained that this caused the Foundation to update its bot detection systems.
“After making this revision, we are seeing declines in human pageviews on Wikipedia over the past few months, amounting to a decrease of roughly 8% as compared to the same months in 2024,” he said. “We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content.”
Miller told me in an email that Wikipedia has policies for third-party bots that crawl its content, such as specifying identifying information, following its robots.txt, and respecting limits on request rate and concurrent requests.
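Those requirements track with standard crawler etiquette, which is easy to illustrate. The sketch below is a hypothetical, minimal Python example of a scraper that identifies itself, checks robots.txt before fetching, and rate-limits itself; the user agent string, contact address, and page path are made up for illustration, and this is not Wikimedia’s own tooling or a statement of its exact limits.

```python
import time
import urllib.robotparser

import requests

# Hypothetical identifying User-Agent with contact info, as bot policies typically require.
USER_AGENT = "ExampleResearchBot/1.0 (contact: bots@example.org)"
BASE = "https://en.wikipedia.org"

# Fetch and parse robots.txt so we only request allowed paths.
robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{BASE}/robots.txt")
robots.read()

def polite_get(path: str, delay_seconds: float = 1.0):
    """Fetch a page only if robots.txt allows it, one request at a time, with a delay."""
    url = f"{BASE}{path}"
    if not robots.can_fetch(USER_AGENT, url):
        return None  # Disallowed by robots.txt; skip it.
    time.sleep(delay_seconds)  # Crude rate limiting: at most ~1 request per second.
    return requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)

response = polite_get("/wiki/Wikipedia:Statistics")
print(response.status_code if response is not None else "skipped")
```

An identifying user agent matters because it gives a site operator a way to contact, throttle, or selectively block a bot that misbehaves.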
“For obvious reasons, we can’t share details publicly about how exactly we block and detect bots,” he said. “In the case of the adjustment we made to data over the past few months, we observed a substantial increase over the level of traffic we expected, centering on a particular region, and there wasn’t a clear reason for it. When our engineers and analysts investigated the data, they discovered a new pattern of bot behavior, designed to appear human. We then adjusted our detection systems and re-applied them to the past several months of data. Because our bot detection has evolved over time, we can’t make exact comparisons – but this adjustment is showing the decline in human pageviews.”
The Foundation’s findings align with other research we’ve seen recently. In July, the Pew Research Center found that only 1 percent of Google searches resulted in users clicking the link in the AI summary, which takes them to the page Google is summarizing. In April, the Foundation reported that it was getting hammered by AI scrapers, a problem that has also plagued libraries, archives, and museums. Wikipedia editors are also acutely aware of the risk generative AI poses to the reliability of Wikipedia articles if its use is not moderated effectively.
Human pageviews to all language versions of Wikipedia since September 2021, with revised pageviews since April 2025. Image: Wikimedia Foundation
“These declines are not unexpected. Search engines are increasingly using generative AI to provide answers directly to searchers rather than linking to sites like ours,” Miller said. “And younger generations are seeking information on social video platforms rather than the open web. This gradual shift is not unique to Wikipedia. Many other publishers and content platforms are reporting similar shifts as users spend more time on search engines, AI chatbots, and social media to find information. They are also experiencing the strain that these companies are putting on their infrastructure.”

Miller said that the Foundation is “enforcing policies, developing a framework for attribution, and developing new technical capabilities” in order to ensure third parties responsibly access and reuse Wikipedia content, and that it continues to “strengthen” its partnerships with search engines and other large “re-users.” The Foundation, he said, is also working on bringing Wikipedia content to younger audiences via YouTube, TikTok, Roblox, and Instagram.
However, Miller also called on users to “choose online behaviors that support content integrity and content creation.”
“When you search for information online, look for citations and click through to the original source material,” he said. “Talk with the people you know about the importance of trusted, human curated knowledge, and help them understand that the content underlying generative AI was created by real people who deserve their support.”
AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums
"This is a moment where that community feels collectively under threat and isn't sure what the process is for solving the problem.”Emanuel Maiberg (404 Media)
AI-generated Reddit Answers are giving bad advice in medical subreddits and moderators can’t opt out. #News
Reddit's AI Suggests Users Try Heroin
Reddit’s conversational AI product, Reddit Answers, suggested that users interested in pain management try heroin and kratom, showing yet another extreme example of dangerous advice provided by a chatbot, even one that’s trained on Reddit’s highly coveted trove of user-generated data.

The AI-generated answers were flagged by a user on a subreddit for Reddit moderation issues. The user noticed that while looking at a thread on the r/FamilyMedicine subreddit on the official Reddit mobile app, the app suggested a couple of “Related Answers” via Reddit Answers, the company’s “AI-powered conversational interface.” One of them, titled “Approaches to pain management without opioids,” suggested users try kratom, an herbal extract from the leaves of a tree called Mitragyna speciosa. Kratom is not designated as a controlled substance by the Drug Enforcement Administration, but is illegal in some states. The Food and Drug Administration warns consumers not to use kratom “because of the risk of serious adverse events, including liver toxicity, seizures, and substance use disorder,” and the Mayo Clinic calls it “unsafe and ineffective.”
“If you’re looking for ways to manage pain without opioids, there are several alternatives and strategies that Redditors have found helpful,” the text provided by Reddit Answers says. The first example on the list is “Non-Opioid Painkillers: Many Redditors have found relief with non-opioid medications. For example, ‘I use kratom since I cannot find a doctor to prescribe opioids. Works similar and don’t need a prescription and not illegal to buy or consume in most states.’” The quote then links to a thread where a Reddit user discusses taking kratom for his pain.
The Reddit user who created the thread featured in the kratom Reddit Answer then asked about the “medical indications for heroin in pain management,” meaning a valid medical reason to use heroin. Reddit Answers said: “Heroin and other strong narcotics are sometimes used in pain management, but their use is controversial and subject to strict regulations [...] Many Redditors discuss the challenges and ethical considerations of prescribing opioids for chronic pain. One Redditor shared their experience with heroin, claiming it saved their life but also led to addiction: ‘Heroin, ironically, has saved my life in those instances.’”

Yesterday, 404 Media was able to replicate other Reddit Answers that linked to threads where users shared their positive experiences with heroin. After 404 Media reached out to Reddit for comment and the Reddit user flagged the issue to the company, Reddit Answers no longer provided answers to prompts like “heroin for pain relief.” Instead, it said “Reddit Answers doesn't provide answers to some questions, including those that are potentially unsafe or may be in violation of Reddit's policies.” After 404 Media first published this article, a Reddit spokesperson said that the company started implementing this update on Monday morning, and that it was not a direct result of 404 Media reaching out.
The Reddit user who created the thread and flagged the issue to the company said they were concerned that Reddit Answers suggested dangerous medical advice in threads for medical subreddits, and that subreddit moderators didn’t have the option to disable Reddit Answers from appearing under conversations in their community.
“We’re currently testing out surfacing Answers on the conversation page to drive more adoption and engagement, and we are also testing core search integration to streamline the search experience,” a Reddit spokesperson told me in an email. “Similar to how Reddit search works, there is currently no way for mods to opt out of or exclude content from their communities from Answers. However, Reddit Answers doesn’t include all content on Reddit; for example, it excludes content from private, quarantined, and NSFW communities, as well as some mature topics.”
After we reached out for comment and the Reddit user flagged the issue to the company, Reddit introduced an update that would prevent Reddit Answers from being suggested under conversations about “sensitive topics.”
“We rolled out an update designed to address and resolve this specific issue,” the Reddit spokesperson said. “This update ensures that ‘Related Answers’ to sensitive topics, which may have been previously visible on the post detail page (also known as the conversation page), will no longer be displayed. This change has been implemented to enhance user experience and maintain appropriate content visibility within the platform.”
The dangerous medical advice from Reddit Answers is not surprising, given that Google’s AI infamously suggested users eat glue based on data sourced from Reddit. Google paid $60 million a year for that data, and Reddit has a similar deal with OpenAI as well. According to Bloomberg, Reddit is currently trying to negotiate even more profitable deals with both companies.
Reddit’s data is valuable as AI training data because it contains millions of user-generated conversations about a ton of esoteric topics, from how to caulk your shower to personal experiences with drugs. Clearly, that doesn’t mean a large language model will always usefully parse that data. The glue incident happened because the LLM didn’t understand that the Reddit user who suggested it was joking.
The risk is that people may take whatever advice an LLM gives them at face value, especially when it’s presented to them in the context of a medical subreddit. For example, we recently reported about someone who was hospitalized after ChatGPT told them they could replace their table salt with sodium bromide.
Update: This story has been updated with additional comment from Reddit.
Google Is Paying Reddit $60 Million for Fucksmith to Tell Its Users to Eat Glue
"You can also add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness."Jason Koebler (404 Media)
Videos demoing one of the sites have repeatedly gone viral on TikTok and other platforms recently. 404 Media verified they can locate specific people's Tinder profiles using their photo, and found that the viral videos are produced by paid creators. #News
Viral ‘Cheater Buster’ Sites Use Facial Recognition to Let Anyone Reveal Peoples’ Tinder Profiles
A number of easy-to-access websites use facial recognition to let partners, stalkers, or anyone else uncover specific people’s Tinder profiles, reveal their approximate physical location at points in time, and track changes to their profile including their photos, according to 404 Media’s tests.

Ordinarily it is not possible to search Tinder for a specific person. Instead, Tinder provides users potential matches based on the user’s own physical location. The tools on the sites 404 Media has found allow anyone to search for someone’s profile by uploading a photo of their face. The tools are invasive of anyone’s privacy, but present a significant risk to those who may need to avoid an abusive ex-partner or stalker. The sites mostly market these tools as a way to find out if a partner is cheating, or at minimum using dating apps like Tinder.
Flock has built a nationwide surveillance network of AI-powered cameras and given many more federal agencies access. Senator Ron Wyden told Flock “abuses of your product are not only likely but inevitable” and Flock “is unable and uninterested in preventing them.” #News #Flock
ICE, Secret Service, Navy All Had Access to Flock's Nationwide Network of Cameras
A division of ICE, the Secret Service, and the Navy’s criminal investigation division all had access to Flock’s nationwide network of tens of thousands of AI-enabled cameras that constantly track the movements of vehicles, and by extension people, according to a letter sent by Senator Ron Wyden and shared with 404 Media. Homeland Security Investigations (HSI), the section of ICE that had access and which has reassigned more than ten thousand employees to work on the agency’s mass deportation campaign, performed nearly two hundred searches in the system, the letter says.

In the letter, Senator Wyden says he believes Flock is uninterested in fixing the potential for abuse baked into its platform, and that local officials can best protect their constituents from such abuses by removing the cameras entirely.
The letter shows that many more federal agencies had access to the network than previously known. We previously found, following local media reports, that Customs and Border Protection (CBP) had access to 80,000 cameras around the country. It is now clear that Flock’s work with federal agencies, which the company described as a pilot, was much larger in scope.
‘The proposed transaction poses a number of significant foreign influence and national security risks.’ #News
Senators Warn Saudi Arabia’s Acquisition of EA Will Be Used for ‘Foreign Influence’
Democratic U.S. Senators Richard Blumenthal and Elizabeth Warren sent letters to the Department of Treasury Secretary Scott Bessent and Electronic Arts CEO Andrew Wilson, raising concerns about the $55 billion acquisition of the giant American video game company in part by Saudi Arabia’s Public Investment Fund (PIF).

Specifically, the Senators worry that EA, which just released Battlefield 6 last week and also publishes The Sims, Madden, and EA Sports FC, “would cease exercising editorial and operational independence under the control of Saudi Arabia’s private majority ownership.”
“The proposed transaction poses a number of significant foreign influence and national security risks, beginning with the PIF’s reputation as a strategic arm of the Saudi government,” the Senators wrote in their letter. “As Saudi Arabia’s sovereign wealth fund, the PIF has made dozens of strategic investments in sports (including a bid for the U.S. PGA Tour), video games (including a $3.3 billion investment in Activision Blizzard), and other cultural institutions that ‘are more than just about financial returns; they are about influence.’ Leveraging long term shifts in public opinion, through the PIF’s investments, ‘Saudi Arabia is seeking to normalize its global image, expand its cultural reach, and gain leverage in spaces that shape how billions of people connect and interact.’ Saudi Arabia’s desire to buy influence through the acquisition of EA is apparent on the face of the transaction—the investors propose to pay more than $10 billion above EA’s trading value for a company whose stock has ‘stagnated for half a decade’ in an unpredictably volatile industry.”
As the Senators’ letter notes, Saudi Arabia has made several notable investments in the video game industry in recent years. In addition to its investments in Activision Blizzard and Nintendo, the PIF recently acquired Evo, the biggest fighting game tournament in the world (one of its many investments in esports), was reportedly a “mystery partner” in a failed $2 billion deal with video game publisher Embracer, and acquired Pokémon Go via its subsidiary, Scopely.
“The deal’s potential to expand and strengthen Saudi foreign influence in the United States is compounded by the national security risks raised by the Saudi government’s access to and unchecked influence over the sensitive personal information collected from EA’s millions of users, its development of artificial intelligence (AI) technologies, and the company’s product design and direction,” the Senators wrote.
The acquisition, which is the largest leveraged buyout transaction in history, includes two other investment firms: Silver Lake and Affinity Partners, the latter of which was formed by Donald Trump’s son-in-law Jared Kushner. The Senators’ letter says that Kushner’s involvement “raises troubling questions about whether Mr. Kushner is involved in the transaction solely to ensure the federal government’s approval of the transaction.”
These investments in the video game industry are just one part of Saudi Arabia’s broader “Vision 2030” to diversify its economy as the world transitions away from the fossil fuels that enriched the Saudi royal family. The PIF has made massive investments in aerospace and defense industries, technology, sports, and other forms of entertainment. For example, Blumenthal and other Senators have expressed similar concerns about the PIF’s investment in the professional golf organization PGA Tour.
The Senators don’t specify what this “foreign influence” might look like in practice, but recent events can give us an idea. The comedy world, for example, has been embroiled in controversy for the last few weeks over the Saudi-hosted and -funded Riyadh Comedy Festival, which included many of the biggest stand-up comedians in the world. Those who participated in the festival, despite the Saudi government’s policies and its 2018 assassination of journalist Jamal Khashoggi, defended it as an opportunity for cultural exchange and freedom of expression in a country where it has not been historically tolerated. However, some comedians who declined to join the festival revealed that participants had to agree to certain “content restrictions,” which forbade them from criticizing Saudi Arabia, the royal family, or religion.
Human Rights Watch Refuses Aziz Ansari Riyadh Comedy Festival Donation
Human Rights Watch says it 'cannot accept' donations from Aziz Ansari and other comedians who performed at the Riyadh Comedy Festival in Saudi Arabia. Ethan Shanfeld (Variety)
Say goodbye to the Guy Fawkes masks and hello to inflatable frogs and dinosaurs. #News
The Surreal Practicality of Protesting As an Inflatable Frog
During a cruel presidency where many people are in desperate need of hope, the inflatable frog stepped into the breach. Everyone loves the Portland Frog. The juxtaposition of a frog (and people in other inflatable character costumes) standing up to ICE agents covered in weapons and armor is absurd, and that’s part of why it’s hitting so hard. But the frog is also a practical piece of passive resistance protest kit in an age of mass surveillance, police brutality, and masked federal agents disappearing people off the streets.

On October 2—just a few minutes shy of 11 PM in Portland, Oregon—a federal agent shot pepper spray into the vent hole of Seth Todd’s inflatable frog costume. Todd was protesting ICE outside of Portland’s U.S. Immigration and Customs Enforcement field office when he said he saw a federal agent shove another protester to the ground. He moved to help and the agent blasted the pepper spray into his vent hole.
A man who works for the people overseeing America’s nuclear stockpile has lost his security clearance after he uploaded 187,000 pornographic images to a Department of Energy (DOE) network. #News #nuclear
Man Stores AI-Generated Robot Porn on His Government Computer, Loses Access to Nuclear Secrets
A man who works for the people overseeing America’s nuclear stockpile has lost his security clearance after he uploaded 187,000 pornographic images to a Department of Energy (DOE) network. As part of an appeals process in an attempt to get back his security clearance, the man told investigators he felt his bosses spied on him too much and that the interrogation over the porn snafu was akin to the “Spanish Inquisition.”

On March 23, 2023, a DOE employee attempted to back up his personal porn collection. His goal was to use the 187,000 images collected over the past 30 years as training data for an AI-image generator. He said he had depression, something he’d struggled with since he was a kid. “During the depressive episode he felt ‘extremely isolated and lonely,’ and started ‘playing’ with tools that made generative images as a coping strategy, including ‘robot pornography,’” according to a DOE report on the incident.
Fueled by depression, the man meant to back up his collection and create a base for training AI to make better “robot pornography” but he uploaded it to the government computer by accident. He didn’t realize what he’d done until DOE investigators came calling six months later to ask why their servers were now filled with thousands of pornographic pictures.

“The Individual ‘thought that even though his personal drives were connected to [his employer’s], they were somehow partitioned, and his personal material would not contaminate his [government-issued computer],’” a DOE report said.
According to the report, the man was using his cellphone to look at AI-generated porn images, but the screen wasn’t big enough, so he moved the pictures to his government computer. “He also reported that, since the 1990s, he had maintained a ‘giant compressed file with several directories of pornographic images,’ which he moved to his personal cloud storage drive so he could use them to make generative images,” the report said. “It was this directory of sexually explicit images that was ultimately uploaded to his employer’s network when he performed a back-up procedure on March 23, 2023.”
The 187,000 images represented a lifetime’s collection. “He stated that the sexually explicit images were an accumulation of ‘25–30 years worth of pornographic material’ he had collected on his personal computer,” the report said. He told a DOE psychologist that he should have realized he’d backed up his personal porn collection to a DOE network but said he “was not thinking multiple steps ahead or considering the consequences at the time because he was so depressed.”
According to the DOE employee, he’s been treated for depression since he was a kid. He has ups and downs, and was in a bad headspace when he accidentally uploaded his entire porn collection. He admitted he violated HR rules, but “did not think it was very wrong,” according to the DOE ruling. He also “asserted that his employer ‘was spying on him a little too much’...and compared the interview with his employer following the discovery of his conduct to ‘the Spanish Inquisition.’”
When someone loses their security clearance with the DOE, they can appeal to get it back. In this case, the appeal led to a lengthy investigation and multiple interviews with various DOE psychologists and the man’s wife. When the DOE makes a ruling on an appeal they publish it publicly online, which is why we know about the man’s private porn stash.
He did not get his clearance back. “The DOE Psychologist opined that the individual's probability of experiencing another depressive episode in the future was ‘very high,’” according to the report.
PSH-24-0142 - In the Matter of Personnel Security Hearing
Access Authorization Not Restored; Guideline I (Psychological Conditions) and Guideline M (Use of Information Technology). Energy.gov
A prominent beer competition introduced an AI-judging tool without warning. The judges and some members of the wider brewing industry were pissed. #News #AI
What Happened When AI Came for Craft Beer
A prominent beer judging competition introduced an AI-based judging tool without warning in the middle of a competition, surprising and angering judges who thought their evaluation notes for each beer were being used to improve the AI, according to multiple interviews with judges involved. The company behind the competition, called Best Beer, also planned to launch a consumer-facing app that would use AI to match drinkers with beers, the company told 404 Media.

Best Beer also threatened legal action against one judge who wrote an open letter criticizing the use of AI in beer tasting and judging, according to multiple judges and text messages reviewed by 404 Media.
The months-long episode shows what can happen when organizations try to push AI onto a hobby, pursuit, art form, or even industry which has many members who are staunchly pro-human and anti-AI. Over the last several years we’ve seen it with illustrators, voice actors, music, and many more. AI came for beer too.
“It is attempting to solve a problem that wasn’t a problem before AI showed up, or before big tech showed up,” said Greg Loudon, a certified beer judge and brewery sales manager, and the judge who was threatened with legal action. “I feel like AI doesn’t really have a place in beer, and if it does, it’s not going to be in things that are very human.”
“There’s so much subjectivity to it, and to strip out all of the humanity from it is a disservice to the industry,” he added. Another judge said the introduction of AI was “enshittifying” beer tasting.
💡
Do you know anything else about how AI is impacting beer? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This story started earlier this year at a Canadian Brewing Awards judging event. Best Beer is the company behind the Canadian Brewing Awards, which gives awards in categories such as Experimental Beer, Speciality IPA, and Historic/Regional Beers. To be a judge, you have to be certified by the Beer Judge Certification Program (BJCP), which involves an exam covering the brewing process, different beer styles, judging procedures, and more.
Around the third day of the competition, the judges were asked to enter their tasting notes into a new AI-powered app instead of the platform they already use, one judge told 404 Media. 404 Media granted the judge anonymity to protect them from retaliation.
Using the AI felt like it was “parroting back bad versions of your judge tasting notes,” they said. “There wasn't really an opportunity for us to actually write our evaluation.” Judges would write what they thought of a beer, and the AI would generate several descriptions based on the judges’ notes, which the judge would then need to select from. It would then provide additional questions for judges to answer that were “total garbage.”
“It was taking real human feedback, spitting out crap, and then making the human respond to more crap that it crafted for you,” the judge said.
“On top of all the misuse of our time and disrespecting us as judges, that really frustrated me—because it's not a good app,” they said.
Screenshot of a Best Beer-related website.
Multiple judges then met to piece together what was happening, and Loudon published his open letter in April.
“They introduced this AI model to their pool of 40+ judges in the middle of the competition judging, surprising everyone for the sudden shift away from traditional judging methods,” the letter says. “Results are tied back to each judge to increase accountability and ensure a safe, fair and equitable judging environment. Judging for competitions is a very human experience that depends on people filling diverse roles: as judges, stewards, staff, organizers, sorters, and venue maintenance workers,” the letter says.
“Their intentions to gather our training data for their own profit was apparent,” the letter says. It adds that one judge said “I am here to judge beer, not to beta test.”
The letter concluded with this: “To our fellow beverage judges, beverage industry owners, professionals, workers, and educators: Sign our letter. Spread the word. Raise awareness about the real human harms of AI in your spheres of influence. Have frank discussions with your employers, colleagues, and friends about AI use in our industry and our lives. Demand more transparency about competition organizations.”
Thirty-three people signed the letter. They included judges, breweries, and members of homebrewer associations in Canada and the United States.
Loudon told 404 Media in a recent phone call “you need to tell us if you're going to be using our data; you need to tell us if you're going to be profiting off of our data, and you can't be using volunteers that are there to judge beer. You need to tell people up front what you're going to do.”
At least one brewery that entered its beer into the Canadian Brewing Awards publicly called out Best Beer and the awards. XhAle Brew Co., based out of Alberta, wrote in a Facebook post in April that it asked for its entry fees of $565 to be refunded, and for the “destruction of XhAle's data collected during, and post-judging for the Best Beer App.”

“We did not consent to our beer being used by a private equity tech fund at the cost to us (XhAle Brew Co. and Canadian Brewers) for a for-profit AI application. Nor do we condone the use of industry volunteers for the same purpose,” the post said.
Ob Simmonds, head of innovation at the Canadian Brewing Awards, told 404 Media in an email that “Breweries will have amazing insight on previously unavailable useful details about their beer and their performance in our competition. Furthermore, craft beer drinkers will be able to better sift through the noise and find beers perfect for their palate. This in no way is aimed at replacing technical judging with AI.”
With the consumer app, the idea was to “Help end users find beers that match their taste profile and help breweries better understand their results in our competition,” Simmonds said.
Simmonds said that “AI is being used to better match consumers with the best beers for their palate,” but said Best Beer is not training its own model.
Those plans have come to a halt, though. At the end of September, the Canadian Brewing Awards said in an Instagram post that the team was “stepping away.” It said the goal of Best Beer was to “make medals matter more to consumers, so that breweries could see a stronger return on their entries.” The organization said it “saw strong interest from many breweries, judges and consumers” and that it will donate Best Beer’s assets to a non-profit that shows interest. The post added that the organization used third-party models that “were good enough to achieve the results we wanted,” and that those models’ privacy policies forbade training on the inputted data.
A screenshot of the Canadian Brewing Awards’ Instagram post.
The post included an apology: “We apologize to both judges and breweries for the communication gaps and for the disruptions caused by this year’s logistical challenges.”

In an email sent to 404 Media this month, the Canadian Brewing Awards said “the Best Beer project was never designed to replace or profit from judges.”
“Despite these intentions, the project came under criticism before it was even officially launched,” it added, saying that the open letter “mischaracterized both our goals and approach.”
“Ultimately, we decided not to proceed with the public launch of Best Beer. Instead, we repurposed parts of the technology we had developed to support a brewery crawl during our gala. We chose to pause the broader project until we could ensure the judging community felt confident that no data would be used for profit and until we had more time to clear up the confusion,” the email added. “If judges wanted their data deleted what assurance can we provide them that it was in fact deleted. Everything was judged blind and they would have no access to our database from the enhanced division. For that reason, we felt it was more responsible to shelve the initiative for now.”
One judge told 404 Media: “I don’t think anyone who is hell bent on using AI is going to stop until it’s no longer worth it for them to do so.”
“I just hope that they are transparent if they try to do this again to judges who are volunteering their time, then either pay them or give them the chance ahead of time to opt-out,” they added.
Now months after this all started, Loudon said “The best beers on the market are art forms. They are expressionist. They're something that can't be quantified. And the human element to it, if you strip that all away, it just becomes very basic, and very sanitized, and sterilized.”
“Brewing is an art.”
XhAle Brew Co.
XhAle is not just a craft beer company. We are a company comprised of majority equity-deserving folks, and have been and still are marginalized in this industry. We understand and have personally... www.facebook.com
A hack impacting Discord’s age verification process shows in stark terms the risk of tech companies collecting users’ ID documents. Now the hackers are posting people’s IDs and other sensitive information online. #News
The Discord Hack is Every User’s Worst Nightmare
A catastrophic breach has impacted Discord user data including selfies and identity documents uploaded as part of the app’s verification process, email addresses, phone numbers, approximately where the user lives, and much more.

The hack, carried out by a group that is attempting to extort Discord, shows in stark terms the risk of tech companies collecting users’ identity documents, and specifically in the context of verifying their age. Discord recently started asking users in the UK, for example, to upload a selfie with their ID as part of the country’s age verification law.
💡
Do you know anything else about this breach? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“This is about to get really ugly,” the hackers wrote in a Telegram channel, which 404 Media joined, while posting user data on Wednesday. A source with knowledge of the breach confirmed to 404 Media that the data is legitimate. 404 Media granted the source anonymity to speak candidly about a sensitive incident.
‘We don’t want democracy lol. We want caliphate.’ According to court records, an Oklahoma guardsman with a security clearance gave 3D printed firearms to an FBI agent posing as an Al Qaeda contact. #News
National Guardsman Planned American Caliphate on Discord, Sent 3D Printed Guns to Al Qaeda, Feds Say
The FBI accused a former National Guardsman living in Tulsa, Oklahoma of trying to sell 3D printed guns to Al Qaeda. According to an indictment unsealed by the Justice Department in September, 25-year-old Andrew Scott Hastings used a Discord server to plan a caliphate in America and shipped more than 100 3D printed machine gun conversion kits to an undercover FBI operative who claimed he had contacts in the terrorist organization.
The Army Times first reported the story after the DOJ unsealed the charging documents. According to the court records, Hastings first landed on the radar of authorities in 2019 when a co-worker at an Abuelo’s restaurant in Tulsa called the police to report he’d been talking about blowing things up. When the cops interviewed Hastings, he told them he was just interested in chemistry and The Anarchist’s Cookbook. In 2020, the cops interviewed his mom. “Hastings’ mother, Terri, told TPD that her son was on the [autism] spectrum, was socially active online, and had converted to Islam.”
According to Terri, odd incidents piled up. She said that someone mailed Hastings a Quran, that he’d once received an order of chicken wings paid for by someone in Indonesia, and that he’d once threatened his family with a can of gasoline. “She also mentioned an incident in the family home where Hastings became enraged when she cooked bacon, and thereafter called someone she described as his ‘handler,’” according to court records.
The charging documents said the FBI got involved in 2024 because of a Discord server called “ARMY OF MUHAMMAD.” Discord cooperated with the FBI investigation and granted access to some of Hastings’ records to authorities. The FBI alleged that Hastings met with several other people on the Discord server and plotted terror attacks against Americans. At this time, Hastings worked for the National Guard as an aircraft powertrain repairer and held a SECRET-level national security clearance.
The charging documents detailed Hastings' alleged plot to establish a caliphate in the US via Discord. “[T]he most important theater right now is cyberspace…we need an actionable plan we can start work on--something slow and Ling(sic) term not hasty and slapdash,” Hastings allegedly said on Discord. “I think it would be best if we create a channel and I’ll list a physical training routine.”
“If we get 9-10 guys maybe inshaAllah we can …we could put headquarters in the USA cuz yk [you know] if we are fighting them the military is prohibited from operations on the homeland only ntnal [sic] guard and agencies can operate within borders…[y]ou need to contest air land and cyberspace…what my plan addresses is how to contest all of these at once while providing more aid than harm we can do in collateral and taking out targets of higher strength.”
According to the FBI’s version of events, Hastings talked about moving the group off of Discord and onto Signal because he believed Discord wasn’t secure. He also bragged about police interrogating him about explosives and “claimed to have made a firearm and discussed making a nuclear rocket.”
“We don’t want democracy lol,” he said on Discord, according to court records. “We want caliphate”
Hastings talked about other groups he was in contact with on Signal, offered to make training videos about weapon handling, and told others on the Discord server that he knew how to make firearms and was willing to ship them to like-minded militants. “I already have some small arms components partially finished and nearly ready to issue,” he said, according to the charging document. “I’ll send one photo but wanna remain kinda anonymous.”
The FBI said it slipped an “Online Covert Employee” (OCE) into Hastings’ life on March 26, 2025. Posing as a person on eBay, the FBI employee told Hastings he had contacts with Al Qaeda. “The OCE then recommended they move the conversation to Telegram or Signal, the latter of which Hastings said did not even have ‘a backdoor,’ meaning it could not be hacked or intercepted by law enforcement.”
The issue, of course, is that Hastings was speaking with an FBI employee. Over the next few months, Hastings spoke with the OCE about using a 3D printer to manufacture weapons for them with the eventual goal of getting them in the hands of Al Qaeda. Hastings allegedly told the OCE that he’d been discharged from the military and needed to make money.
In the summer of 2025, the FBI alleged that Hastings started mass printing Glock parts and switch conversion kits for Al Qaeda. “Hastings told the OCE he was moving out of his parent’s home in July 2025 after they complained about the noise and smell created when he 3D printed weapons,” the court documents said. The FBI allegedly has video of Hastings at a post office shipping multiple packages that summer that authorities said contained more than 100 3D printed switches, two 3D printed lower receivers for a Glock, and one 3D printed Glock slide.
The FBI has charged Hastings with attempting to provide material support or resources to designated foreign terrorist organizations and illegal possession or transfer of a machinegun. The Justice Department considers each 3D printed conversion kit Hastings shipped an individual machinegun, even when it is not installed in a firearm.
Former Guardsman charged with trying to provide weapons to al-Qaida
A 25-year-old former Army National Guardsman faces federal charges that he attempted to provide al-Qaida with 3D-printed weapons.Todd South (Army Times)
Eyes Up's purpose is to "preserve evidence until it can be used in court." But it has been swept up in Apple's crackdown on ICE-spotting apps.#News
Apple Banned an App That Simply Archived Videos of ICE Abuses
Apple removed an app for preserving TikToks, Instagram reels, news reports, and videos documenting abuses by ICE, 404 Media has learned. The app, called Eyes Up, differs from other banned apps such as ICEBlock which were designed to report sightings of ICE officials in real-time to warn local communities. Eyes Up, meanwhile, was more of an aggregation service pooling together information to preserve evidence in case the material is needed in the future in court.
The news shows that Apple and Google’s crackdown on ICE-spotting apps, which started after pressure from the Department of Justice against Apple, is broader in scope than apps that report sightings of ICE officials. It has also impacted at least one app that was more about creating a historical record of ICE’s activity during its mass deportation effort.
“Our goal is government accountability, we aren’t even doing real-time tracking,” the administrator of Eyes Up, who said their name was Mark, told 404 Media. Mark asked 404 Media to only use his first name to protect him from retaliation. “I think the [Trump] admin is just embarrassed by how many incriminating videos we have.”
💡
Do you work at Apple or Google and know anything else about these app removals? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
Mark said the app was removed on October 3. At the time of writing, the Apple App Store says “This app is currently not available in your country or region” when trying to download Eyes Up.
The website for Eyes Up, which functions essentially the same way, is still available. The site includes a map with dots that visitors can click on, which then plays a video from that location. Users are able to submit their own videos for inclusion. Mark said he manually reviews every video before it is uploaded to the service, to check its content and its location.
“I personally look at each submission to ensure that it's relevant, accurately described to the best I can tell, and appropriate to post. I actually look at the user submitted location and usually cross-reference with [Google] Street View to verify. We have an entire private app just for moderation of the submissions,” Mark said.
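Eyes Up’s code isn’t public, so the following is only a rough sketch of the workflow Mark describes: each submission carries a video link and a user-reported location, nothing reaches the public map until a human approves it, and approved entries become points on the map. The field names and helper functions here are assumptions made for illustration, not Eyes Up’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical model of a user-submitted video awaiting manual review.
@dataclass
class Submission:
    video_url: str           # re-uploaded clip, e.g. originally from TikTok or Instagram
    lat: float               # user-reported location, cross-checked by a moderator
    lon: float
    description: str
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "pending"  # "pending" -> "approved" or "rejected" by a human

def review(sub: Submission, location_verified: bool, relevant: bool) -> Submission:
    """A human moderator decides; nothing is published automatically."""
    sub.status = "approved" if (location_verified and relevant) else "rejected"
    return sub

def to_map_point(sub: Submission) -> dict:
    """Turn an approved submission into a GeoJSON-style point for the public map."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [sub.lon, sub.lat]},
        "properties": {"video": sub.video_url, "description": sub.description},
    }
```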
Screenshots of Eyes Up.
The videos available on Eyes Up are essentially the same as what you might see when scrolling through TikTok, Instagram, or X. They are a mix of professional media reports and user-generated clips of ICE arrests. Many of the videos are clearly just re-uploads of material taken from those social media apps, and still include TikTok or Instagram watermarks. Mark said the videos are also often taken from Reddit or the community- and crime-awareness app Citizen.
Many of the videos from New York are footage of ICE officials aggressively detaining people inside the city’s courts, something ICE has been doing for months. Another is a video from the New York Immigration Coalition (NYIC), which represents more than 200 immigrant and refugee rights groups. Another is an Instagram video showing ICE taking “a mother as her child begs the officers not to take her,” according to a caption on the video. The map includes similar videos from San Diego, Los Angeles, and Portland, Oregon, which are clearly taken from TikTok or media reports, including NBC News.
“Our goal is to preserve evidence until it can be used in court, and we believe the mapping function will make it easier for litigants to find bystander footage in the future,” Mark said.
Aaron Reichlin-Melnick, senior fellow at the American Immigration Council, told 404 Media “Like any other government agency, DHS is required to follow the law. The collection of video evidence is a powerful tool of oversight to ensure that the government respects the rights of citizens and immigrants alike. People have a right to film interactions with law enforcement in public spaces and to share those videos with others.”
“If DHS is concerned that the actions of their own officers might inflame public opinion against the agency, they should work to increase oversight and accountability at the agency — rather than seek to have the evidence banned,” he added.
Apple removed ICEBlock, another, much more prominent app, from its App Store on Thursday. The move came after direct pressure from Department of Justice officials acting at the direction of Attorney General Pam Bondi, according to Fox. A statement the Department of Justice provided to 404 Media said the agency reached out to Apple “demanding they remove the ICEBlock app from their App Store—and Apple did so.” Fox says authorities have claimed that Joshua Jahn, the suspected shooter in a September attack on an ICE facility in which a detainee was killed, searched his phone for various tracking apps before attacking the facility.
Joshua Aaron, the developer of ICEBlock, told 404 Media “we are determined to fight this.”
ICEBlock allowed people to create an alert, based on their location, about ICE officials in their area. This then sent an alert to other users nearby.
Apple also removed another similar app called Red Dot, 404 Media reported. Google did the same thing, and described ICE officials as a vulnerable group. Apple also removed an app called DeICER.
Yet, Eyes Up differs from those apps in that it does not function as a real-time location reporting app.
Apple did not respond to a request for comment on Wednesday about Eyes Up’s removal.
Mark provided 404 Media with screenshots of the emails he received from Apple. In the emails, Apple says Eyes Up violates the company’s guidelines around objectionable content. That can include “Defamatory, discriminatory, or mean-spirited content, including references or commentary about religion, race, sexual orientation, gender, national/ethnic origin, or other targeted groups, particularly if the app is likely to humiliate, intimidate, or harm a targeted individual or group. Professional political satirists and humorists are generally exempt from this requirement.”
The emails also say that law enforcement have provided Apple with information that shows the purpose of the app is “to provide location information about law enforcement officers that can be used to harm such officers individually or as a group.”
The emails are essentially identical to those sent to the developer of ICEBlock which 404 Media previously reported on.
In an appeal to the app removal, Mark told Apple “the posts on this app are significantly delayed and subject to manual review, meaning the officers will be long gone from the location by the time the content is posted to be viewed by the public. This would make it impossible for our app to be used to harm such officers individually or as a group.”
“The sole purpose of Eyes Up is to document and preserve evidence of abuses of power by law enforcement, which is an important function of a free society and constitutionally protected,” Mark’s response adds.
Apple then replied and said the ban remains in place, according to another email Mark shared.
The app is available on Google's Play Store.
Update: this piece has been updated to include comment from Aaron Reichlin-Melnick.
SCOOP: Apple Quietly Made ICE Agents a Protected Class
Internal emails show tech giant used anti-hate-speech rules meant for minorities to block an app documenting immigration enforcement.Pablo Manríquez (Migrant Insider)
New leaked documents show how the FBI convinced a judge to let its partners collect a mass of encrypted messages from thousands of phones around the world.#News
Cocaine in Private Jets and Sex Toys: What the FBI Found on its Secretly Backdoored Chat App
Private jets loaded with cocaine landing at an airport in Germany. A trafficker stuffing a racing sail boat with drugs and entering a tournament to blend in with other racers before speeding off. Vacuum-sealed layers of methamphetamine inside solar panels. And nearly 60 kilograms of drugs hidden inside a shipment of sex toys.
These are just some of the examples included in a cache of leaked U.S. Department of Justice documents the FBI used to convince a judge to let it continue harvesting messages from Anom. Anom was an encrypted phone and app the FBI secretly took over, backdoored, and ran for years as a tech company popular with organized crime around the world. The Anom operation, dubbed Trojan Shield, was the largest sting operation ever.
The documents provide more insight into the sorts of criminals swept up in the FBI’s investigation, and give behind-the-scenes detail on how exactly the FBI obtained legal approval for such a gigantic and, to some, controversial operation. The leaked documents include the original court orders from Lithuania, which assisted the FBI in collecting the data from Anom devices worldwide, and the FBI’s supporting documentation for those court orders. The documents were not supposed to be released publicly, but someone posted them anonymously online.
💡
Do you know anything else about Anom, Sky, Encrochat, or other encrypted phone companies? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“Like I said this Turbo crew are the Pablo Escobar of this time in that area and got full control there,” one message written by an alleged drug trafficker included in the documents reads.
404 Media showed sections of the documents to people with direct knowledge of the operation who said they appeared authentic. Finnish outlet Yle reported on some of their contents at the end of September, but 404 Media is publishing copies of the documents themselves.
In 2018 the FBI shut down an encrypted phone company called Phantom Secure. In the wake of that, a seller from Phantom Secure and another popular company called Sky offered U.S. authorities their own, in-development encrypted device: Anom. The FBI then took Anom under its wing and oversaw a backdoor placed into the app. This involved adding a “ghost” contact to every group chat and direct message across the platform. The operation started in Australia as a beta test, before expanding to Europe, South America, and other parts of the world, sweeping up messages from cartels to biker gangs to hitmen to money launderers.
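Neither the FBI nor Anom’s developers have released the app’s code, so any snippet here is purely illustrative. The core of the “ghost” contact design is simple to sketch, though: every outgoing message is fanned out to one extra recipient that never appears in a chat’s member list, and that silent copy is what investigators later read. A minimal sketch of the fan-out idea, using hypothetical names, might look like this:

```python
# Purely illustrative: Anom's real client and server code are not public. The
# point is only that a hidden extra recipient, added platform-side, receives a
# copy of every message without ever appearing in anyone's contact list.
GHOST = "ghost"  # hypothetical identifier for the silent extra recipient

def deliver(sender: str, visible_recipients: list[str], message: bytes,
            send_to=print) -> None:
    """Fan a message out to its visible recipients plus the invisible ghost."""
    for recipient in [*visible_recipients, GHOST]:
        send_to(recipient, sender, message)  # the ghost's copy is what gets archived
```

In the real operation, the copies collected this way flowed to the collection server in Lithuania described later in this piece.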
A screenshot from the documents.
Some of the documents are formal requests for continued assistance from the U.S. to Lithuania and spell out the sort of criminal activity the FBI has seen on the Anom platform. Several sections name specific and well-known drug traffickers. One is Maximillian Rivkin, also known as “Microsoft.” As I chronicled in my book about Anom, Rivkin was a devilishly creative drug trafficker, constantly making new schemes to smuggle cocaine or other narcotics. The new documents say Rivkin’s Serbia-based organized crime group was involved in the trafficking of hundreds of kilograms of cocaine between South America and Spain. To move the drugs, the group sailed a boat during a November 2020 regatta, a sailing race, “where their travel will be obscured by other boats and sail to the Caribbean,” the documents say. Around two or three weeks later, the boat would then return to Europe with the cocaine, before being dropped off the coast of Spain where another member of the group would pick it up, the documents add.
In another instance, Rivkin’s group smuggled cocaine base within juice bottles from Colombia to Europe, according to the documents. In my book, I found Rivkin planned to do something similar with energy drinks.
Dark Wire
Written “in the manner of a good crime thriller” (The Wall Street Journal), the inside story of the largest law-enforcement sting operation ever. (PublicAffairs)
These sorts of audacious, over-the-top drug smuggling operations were a common sight on Anom, according to my own review of hundreds of thousands of Anom text messages between drug traffickers I previously obtained. The new documents also specifically name Hakan Ayik, who was the head of the so-called Aussie Cartel, which controlled as much as a third of all drug importation into Australia, and who at one point was Australia’s most wanted criminal. Ayik discussed sending a massive 900 kilograms of cocaine through Malaysia to Australia concealed within shipments of scrap metal, according to the documents.
“Can you give me roughly the coordinates where’s the better place to meet outside Indonesian waters,” Ayik, using the moniker Oscar, said in one of the messages included in the documents.
Both Rivkin and Ayik were later arrested by Turkish authorities. Ayik was also known as the “encryption king,” likely due to his prolific selling of encrypted communication devices to organized criminals.
Other examples in the documents include a Dutch drug trafficking group involving a man called Guiliano Domenico Azzarito. That group smuggled cocaine between South America and Europe with private jet flights into small and medium-sized airports the group controlled, according to the documents. “Look we can move 20 tons easily every month from here in the future,” one message said.
Another describes Baris Tukel, a high-ranking Comanchero motorcycle gang member who was later charged by the U.S. for helping to spread Anom devices, discussing plans to hide methamphetamine and MDMA in marble tiles. In another case, a drug trafficker with the username RealG discussed smuggling drugs on a sailboat, hiding them inside shipments of bananas and hides, and concealing cocaine base inside fertilizer.
In September 2020, a drug trafficking group smuggled a shipment of cocaine and methamphetamine from the UK, through Singapore, to Australia, according to the documents. Authorities later searched the shipment, and found nearly 60 kilograms of drugs “concealed within 21 boxes of sex toys,” the documents say.
A screenshot from the documents.
The messages included in the document also detail some of the extreme violence Anom users engaged in. Simon Bekiri, a Comanchero member, discussed an assault against a rival gang, according to the documents. “I even pistol whipped him 3 times and blood was squirting out of his head almost a meter high in time with his heartbeat (That part was really funny),” one of the messages reads. “But when you say I pistol whipped him, shot him, bashed him and then took off in his car I’ll admit it does sound violent.”
These examples were used to help convince a Lithuanian judge to allow local authorities to continue providing the FBI with Anom messages. In an unusual legal workaround, instead of running the Anom collection server in the U.S., which may have created more legal headaches, the Department of Justice arranged for it to be run in Lithuania. Lithuanian authorities then provided a regular stream of collected Anom messages to the FBI. In all, Anom grew to 12,000 devices and the FBI collected tens of millions of messages before shutting the network down in June 2021.
404 Media first revealed in September 2023 that Lithuania was the so-called “third country” that harvested the messages for the FBI. The Department of Justice has never formally acknowledged Lithuania’s role, even as the leaked documents further corroborate 404 Media’s reporting.
Revealed: The Country that Secretly Wiretapped the World for the FBI
For years the FBI ran its own encrypted phone company to intercept messages from thousands of people around the globe. One country was critical to that operation, whose identity was unknown to the public. Until now.Joseph Cox (404 Media)
Libraries have shared their collections internationally for decades. Trump’s tariffs are throwing that system into chaos and can ‘hinder academic progress.’#News
Libraries Can’t Get Their Loaned Books Back Because of Trump’s Tariffs
The Trump administration’s tariff regime and the elimination of fee exemptions for items under $800 are limiting resource sharing between university libraries, trapping some books in foreign countries, and reversing long-held standards in academic cooperation.
“There are libraries that have our books that we've lent to them before all of this happened, and now they can't ship them back to us because their carrier either is flat out refusing to ship anything to the U.S., or they're citing not being able to handle the tariff situation,” Jessica Bower Relevo, associate director of resource sharing and reserves at Yale University Library, told me.
After Trump’s executive order ended the de minimis exemption, which allowed people to buy things internationally without paying tariffs if the items cost less than $800, we’ve written several stories about how the decision caused chaos across a wide variety of hobbies that rely on people buying things overseas, especially on eBay, where many of those transactions take place.
Libraries that share their materials internationally are in a similar mess, partly because some countries’ mail services stopped shipments to and from the U.S. entirely, but the situation for them is arguably even more complicated because they’re not selling anything—they’re just lending books.
“It's not necessarily too expensive. It's that they don't have a mechanism in place to deal with the tariffs and how they're going to be applied,” Relevo said. “And I think that's true of U.S. shipping carriers as well. There’s a lot of confusion about how to handle this situation.”
“The tariffs have impacted interlibrary loans in various ways for different libraries,” Heather Evans, a librarian at RMIT University in Australia, told me in an email. “It has largely depended on their different procedures as to how much they have been affected. Some who use AusPost [Australia’s postal service] to post internationally have been more impacted and I've seen many libraries put a halt on borrowing to or from the US at all.” (AusPost suspended all shipments to the United States but plans to resume them on October 7).
Relevo told me that in some cases books are held up in customs indefinitely, or are “lost in warehouses” where they are held for no clear reason.
As Relevo explains it, libraries often provide books in their collections to people at foreign institutions by giving them access to digitized materials, but some books are still only available in physical copies. These are not necessarily super rare or valuable books, but books that are only in print in certain countries. For example, a university library might have a specialized collection on a niche subject because it’s the focus area of a faculty member, a French university will obviously have a deeper collection of French literature, and some textbooks might only be published in some languages.
A librarian’s job is to give their community access to information, and international interlibrary loans extend that mission to other countries by having libraries work together. In the past, if an academic in the U.S. wanted access to a French university’s deep collection of French literature, they’d have to travel there. Today, academics can often ask that library to ship them the books they want. Relevo said this type of lending has always been useful, but became especially popular and important during COVID lockdowns, when many libraries were closed and international travel was limited.
“Interlibrary loans has been something that libraries have been able to do for a really long time, even back in the early 1900s,” Relevo said. “If we can't do that anymore and we're limiting what our users can access, because maybe they're only limited to what we have in our collection, then ultimately could hinder academic progress. Scholars have enjoyed for decades now the ability to basically get whatever they need for their research, to be very comprehensive in their literature reviews or the references that they need, or past research that's been done on that topic, because most libraries, especially academic libraries, do offer this service [...] If we can't do that anymore, or at least there's a barrier to doing that internationally, then researchers have to go back to old ways of doing things.”
The Trump administration upended this system of knowledge sharing and cooperation, making life even harder for academics in the U.S., who are already fleeing to foreign universities because they fear the government will censor their research.
The American Library Association (ALA) has a group dedicated to international interlibrary lending, called the International Interlibrary Loan (ILL) Committee, which is nested in the Sharing and Transforming Access to Resources Section (STARS) of the Reference and User Services Association (RUSA). Since Trump’s executive order and tariffs regime, the RUSA STARS International ILL Committee has produced a site dedicated to helping librarians navigate the new, unpredictable landscape.
In addition to explaining the basic facts of the tariffs and de minimis, the site also shares resources and “Tips & Tricks in Uncertain Times,” which encourages librarians to talk to partner libraries before lending or borrowing books, and to “be transparent and set realistic expectations with patrons.” The page also links to an online form that asks librarians to share any information they have about how different libraries are handling the elimination of de minimis in an attempt to crowd source a better understanding of the new international landscape.
“Let's say this library in Germany wanted to ship something to us,” Relevo said. “It sounds like the postal carriers just don't know how to even do that. They don’t know how to pass that tariff on to the library that's getting the material, there's just so much confusion on what you would even do if you even wanted to. So they're just saying, ‘No, we're not shipping to the U.S.’”
Relevo told me that one thing the resource sharing community has talked about a lot is how to label packages so customs agents know they are not [selling] goods to another country. Relevo said that some libraries have marked the value of books they’re lending as $0. Others have used specific codes to indicate the package isn’t a good that’s being bought or sold. But there’s not one method that has worked consistently across the board.
“It does technically have value, because it's a tangible item, and pretty much any tangible item is going to have some sort of value, but we're not selling it,” she said. “We're just letting that library borrow it and then we're going to get it back. But the way customs and tariffs work, it's more to do with buying and selling goods and library stuff isn't really factored into those laws [...] it's kind of a weird concept, especially when you live in a highly capitalized country.”
Relevo said that the last 10-15 years have been a very tumultuous time for libraries, not just because of tariffs, but because of AI-generated content, the pandemic, and conservative organizations pressuring libraries to remove certain books from their collections.
“At the end of the day, us librarians just want to help people, so we're just trying to find the best ways to do that right now with the resources we have,” she said.
“What I would like the public to know about the situation is that their librarians as a group are very committed to doing the best we can for them and to finding the best options and ways to fulfill their requests and access needs. Please continue to ask us for what you need,” Evans said. “At the moment we would ask for a little extra patience, and perhaps understanding that we might not be able to get things as urgently for them if it involves the U.S., but we will do as we have always done and search for the fastest and most helpful way to obtain access to what they require.”
Police Bodycam Shows Sheriff Hunting for 'Obscene' Books at Library
Body camera footage from Idaho reveals a sheriff hunting for a YA book he could use for a political stunt.Jason Koebler (404 Media)
The move comes as Apple removed ICEBlock after direct pressure from U.S. Department of Justice officials and signals a broader crackdown on ICE-spotting apps.#News
Google Calls ICE Agents a Vulnerable Group, Removes ICE-Spotting App ‘Red Dot’
Both Google and Apple recently removed Red Dot, an app people can use to report sightings of ICE officials, from their respective app stores, 404 Media has found. The move comes after Apple removed ICEBlock, a much more prominent app, from its App Store on Thursday following direct pressure from U.S. Department of Justice officials. Google told 404 Media it removed apps because they shared the location of what it describes as a vulnerable group that recently faced a violent act connected to these sorts of ICE-spotting apps—a veiled reference to ICE officials.
The move signals a broader crackdown on apps that are designed to keep communities safe by crowdsourcing the location of ICE officials. Authorities have claimed that Joshua Jahn, the suspected shooter in a September attack on an ICE facility in which a detainee was killed, searched his phone for various tracking apps. A long-running immigration support group on the ground in Chicago, where ICE is currently focused, told 404 Media some of its members use Red Dot.
💡
Do you know anything else about these apps and their removal? Do you work at Google, Apple, or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“Ready to Protect Your Community?” the website for Red Dot reads. “Download Red Dot and help build a stronger protection network.”
The site provides links to the app’s page on the Apple App Store and Google Play Store. As of at least Friday, both of those links return errors. “This app is currently not available in your country or region,” says the Apple one, and “We're sorry, the requested URL was not found on this server,” says the Google one.
The app allows people to report ICE presence or activity, along with details such as the location and time, according to Red Dot’s website. The app then notifies nearby community members, and users can receive alerts about ICE activity in their area, the website says.
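Red Dot has not published how its alerting works, but the behavior its site describes, notifying nearby community members about a report, is a standard proximity check: measure the distance between a new report and each user’s last known area, and alert anyone within some radius. A hedged sketch follows; the radius, field names, and coordinates are illustrative assumptions, not Red Dot’s actual parameters.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    earth_radius_km = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def users_to_alert(report: dict, users: list[dict], radius_km: float = 5.0) -> list[dict]:
    """Return users whose last known area is within radius_km of the report."""
    return [
        u for u in users
        if haversine_km(report["lat"], report["lon"], u["lat"], u["lon"]) <= radius_km
    ]

# Example: a report in downtown Chicago alerts only the nearby user.
report = {"lat": 41.8781, "lon": -87.6298, "time": "2025-10-03T14:00:00Z"}
users = [{"id": "a", "lat": 41.8800, "lon": -87.6300},
         {"id": "b", "lat": 34.0500, "lon": -118.2400}]
print([u["id"] for u in users_to_alert(report, users)])  # prints ['a']
```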
Google confirmed to 404 Media that it removed Red Dot. Google said it did not receive any outreach from the Department of Justice about this issue and that it bans apps with a high risk of abuse. Without talking about the shooting at the ICE facility specifically, the company said it removed apps that share the location of what it describes as a vulnerable group after a recent violent act against them connected to this sort of app. Google said apps that have user generated content must also conduct content moderation.
Google added in a statement that “ICEBlock was never available on Google Play, but we removed similar apps for violations of our policies.”
Google’s Play Store policies say the platform does not allow apps that “promote violence” against “groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, caste, immigration status, or any other characteristic that is associated with systemic discrimination or marginalization,” but its published policies do not include information about how it defines what types of groups are protected.
Red Dot did not respond to a request for comment.
On Thursday Apple told 404 Media it removed multiple ICE-spotting apps, but did not name Red Dot. Apple did not respond to another request for comment on Friday.
On Thursday Joshua Aaron, the developer of ICEBlock, told 404 Media “I am incredibly disappointed by Apple's actions today. Capitulating to an authoritarian regime is never the right move,” referring to Apple removing his own app. ICEBlock rose to prominence in June when CNN covered the app. That app was only available on iOS, while Red Dot was available on both iOS and Android.
“ICEBlock is no different from crowd sourcing speed traps, which every notable mapping application, including Apple's own Maps app, implements as part of its core services. This is protected speech under the first amendment of the United States Constitution,” Aaron continued. “We are determined to fight this with everything we have. Our mission has always been to protect our neighbors from the terror this administration continues to reign down on the people of this nation. We will not be deterred. We will not stop. #resist.”
That move from Apple came after pressure from Department of Justice officials on behalf of Attorney General Pam Bondi, according to Fox. “ICEBlock is designed to put ICE agents at risk just for doing their jobs, and violence against law enforcement is an intolerable red line that cannot be crossed. This Department of Justice will continue making every effort to protect our brave federal law enforcement officers, who risk their lives every day to keep Americans safe,” Bondi told Fox. The Department of Justice declined to comment beyond Bondi's earlier comments.
The current flashpoint for ICE’s mass deportation effort is Chicago. This week ICE raided an apartment building and removed everyone from the building only to ask questions later, according to local media reports. “They was terrified. The kids was crying. People was screaming. They looked very distraught. I was out there crying when I seen the little girl come around the corner, because they was bringing the kids down, too, had them zip tied to each other," one neighbor, Eboni Watson, told ABC7. “That's all I kept asking. What is the morality? Where's the human? One of them literally laughed. He was standing right here. He said, 'f*** them kids.’”
Brandon Lee, communications lead at Illinois Coalition for Immigrant and Refugee Rights, told 404 Media some of the organization’s teams have used Red Dot and similar apps as a way of taking tips. But the organization recommends people call its hotline to report ICE activity. That hotline has been around since 2011, Lee said. “The thing that takes time is the infrastructure of trust and training that goes into follow-up, confirmation, and legal and community support for impacted families, which we in Illinois have been building up over time,” he added.
“But I will say that at the end of the day it's important for all people of conscience to use their skills to shine some light on ICE's operations, given the agency's lack of transparency and overall lack of accountability,” he said, referring to ICE-spotting apps.
In ICEBlock’s case, people who already downloaded the app will be able to continue using it but will be unable to re-download it from the Apple App Store, according to an email from Apple that Aaron shared with 404 Media. Because Red Dot is available on Android, users can likely sideload the app—that is, install it themselves by downloading the APK file rather than getting it from the Play Store.
The last message to Red Dot’s Facebook page was on September 24 announcing a new update that fixed various bugs.
Update: this piece has been updated to include a response from the Department of Justice.
ICE agents raid South Shore apartments; Trump says Chicago could become military training ground
Anti-ICE protesters marched up Michigan Avenue on Tuesday evening.Cate Cauguiran (ABC7 Chicago)
Apple removed ICEBlock reportedly after direct pressure from Department of Justice officials. “I am incredibly disappointed by Apple's actions today. Capitulating to an authoritarian regime is never the right move,” the developer said.#News
ICEBlock Owner After Apple Removes App: ‘We Are Determined to Fight This’
The developer of ICEBlock, an app that lets people crowdsource sightings of ICE officials, has said he is determined to fight back after Apple removed the app from its App Store on Thursday. The removal came after pressure from Department of Justice officials acting at the direction of Attorney General Pam Bondi, according to Fox, which first reported the removal. Apple told 404 Media it has removed other similar apps too.
“I am incredibly disappointed by Apple's actions today. Capitulating to an authoritarian regime is never the right move,” Joshua Aaron told 404 Media. “ICEBlock is no different from crowd sourcing speed traps, which every notable mapping application, including Apple's own Maps app, implements as part of its core services. This is protected speech under the first amendment of the United States Constitution.”
💡
Do you know anything else about this removal? Do you work at Apple or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
A hacking group called the Crimson Collective says it pulled data from private GitHub repositories connected to Red Hat's consulting business. Red Hat has confirmed it is investigating the compromise.#News #Hacking
A ‘stray bullet’ knocked 25,000 people offline near Dallas.#News
A Bullet Crashed the Internet in Texas
The internet can be more physically vulnerable than you think. Last week, thousands of people in North and Central Texas were suddenly knocked offline. The cause? A bullet. The outage hit cities all across the state, including Dallas, Irving, Plano, Arlington, Austin, and San Antonio. The outage affected Spectrum customers and took down their phone lines and TV services as well as the internet.
“Right in the middle of my meetings 😒,” one user said on the r/Spectrum subreddit. Around 25,000 customers were without service for several hours as the company rushed to repair the lines. As service came back, WFAA reported that the cause of the outage came from the barrel of a gun. A stray bullet had hit a line of fiber optic cable and knocked tens of thousands of people offline.
“The outage stemmed from a fiber optic cable that was damaged by a stray bullet,” Spectrum told 404 Media. “Our teams worked quickly to make the necessary repairs and get customers back online. We apologize for the inconvenience.”
Spectrum told 404 Media that it didn’t have any further details to share about the incident, so we have no idea how the company learned a bullet hit its equipment, where the bullet was found, or whether the police are involved. Texas is a massive state with overlapping police jurisdictions and a lot of guns. Finding a specific shooting incident related to telecom equipment in the vast suburban sprawl around Dallas is probably impossible.
Fiber optic lines are often buried underground, protected from the vagaries of southern gunfire. But that’s not always the case: fiber can be strung overhead along telephone poles and routed through a vast and complicated network of junction boxes and service stations that spans different municipalities and cities, each with its own laws about how the cable can be installed. That can leave pieces of the physical infrastructure of the internet exposed to gunfire and other mischief.
This is not the first time gunfire has taken down the internet. In 2022, Xfinity fiber cable in Oakland, California went offline after people allegedly fired 17 rounds into the air near one of the company’s fiber lines. Around 30,000 people were offline during that outage and it happened moments before the start of an NFL game that saw the Los Angeles Rams square off against the San Francisco 49ers.
“We could not be more apologetic and sincerely upset that this is happening on a day like today,” Comcast spokesperson Joan Hammel told Data Center Dynamics at the time. Hammel added that the company has seen gunshot wounds on its equipment before. “While this isn’t completely uncommon, it is pretty rare, but we know it when we see it.”
Documents show that ICE has gone back on its decision to not use location data remotely harvested from people's phones. The database is updated every day with billions of pieces of location data.#News
ICE to Buy Tool that Tracks Locations of Hundreds of Millions of Phones Every Day
Immigration and Customs Enforcement (ICE) has bought access to a surveillance tool that is updated every day with billions of pieces of location data from hundreds of millions of mobile phones, according to ICE documents reviewed by 404 Media.
The documents explicitly show that ICE is choosing this product over others offered by the contractor’s competitors because it gives ICE essentially an “all-in-one” tool for searching both masses of location data and information taken from social media. The documents also show that ICE is planning to once again use location data remotely harvested from people’s smartphones after previously saying it had stopped the practice.
Surveillance contractors around the world create massive datasets of phones’, and by extension people’s, movements, and then sell access to the data to government agencies. In turn, U.S. agencies have used these tools without a warrant or court order.
“The Biden Administration shut down DHS’s location data purchases after an inspector general found that DHS had broken the law. Every American should be concerned that Trump's hand-picked security force is once again buying and using location data without a warrant,” Senator Ron Wyden told 404 Media in a statement.
💡
Do you know anything else about this contract or others? Do you work at Penlink or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
The ICE document is redacted but says a product made by a contractor called Penlink “leverages a proprietary data platform to compile, process, and validate billions of daily location signals from hundreds of millions of mobile devices, providing both forensic and predictive analytics.” The products the document is discussing are Tangles and Webloc.
Forbes previously reported that ICE spent more than $5 million on these products, including $2 million for Tangles specifically. Tangles and Webloc used to be run by an Israeli company called Cobwebs. Cobwebs joined Penlink in July 2023.
The new documents provide much more detail about the sort of location data ICE will now have access to, and why ICE chose to buy access to this vast dataset from Penlink specifically.
“Without an all-in-one tool that provides comprehensive web investigations capabilities and automated analysis of location-based data within specified geographic areas, intelligence teams face significant operational challenges,” the document reads. The agency said that the issue with other companies was that they required analysts to “manually collect and correlate data from fragmented sources,” which increased the chance of missing “connections between online behaviors and physical movements.”
A screenshot from the document.
ICE’s Homeland Security Investigations (HSI) conducted market research in May and June, according to the document. The document lists two other companies, Babel Street and Venntel, which also sell location data but which the agency decided not to partner with.
404 Media and a group of other media outlets previously obtained detailed demonstration videos of Babel Street in action. They showed it was possible for users to track phones visiting and leaving abortion clinics, places of worship, and other sensitive locations. Venntel, meanwhile, was for some years a popular choice among U.S. government agencies looking to monitor the location of mobile phones. Its clients have included ICE, CBP, and the FBI. Its contracts with U.S. law enforcement have dried up in more recent years, with ICE closing out its work with the company in August, according to procurement records reviewed by 404 Media.
Companies that obtain mobile phone location data generally do it in two different ways. The first is through software development kits (SDKs) embedded in ordinary smartphone apps, like games or weather forecasters. These SDKs continuously gather a user’s granular location, transfer that to the data broker, and then sell that data onward or repackage it and sell access to government agencies.
The second is through real-time bidding (RTB). When an advert is about to be served to a mobile phone user, there is a near instantaneous, and invisible, bidding process in which different companies vie to have their advert placed in front of certain demographics. A side-effect is that this demographic data, including mobile phones’ location, can be harvested by surveillance firms. Sometimes spy companies buy ad tech companies outright to insert themselves into this data supply chain. We previously found thousands of apps were hijacked to provide location data in this way.
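As a generic illustration of why the RTB channel leaks location at all (and not a description of Penlink’s or any particular firm’s pipeline), here is a simplified, OpenRTB-style bid request. The device object in such requests can carry a GPS-derived latitude and longitude plus a persistent advertising ID, and a company positioned in the bid stream can log those fields whether or not it ever wins the auction. The values below are invented; only the general field layout follows the public OpenRTB convention.

```python
# Simplified, illustrative OpenRTB-style bid request. Real requests vary by
# exchange; the point is that location and a persistent ad ID ride along with
# every ad opportunity shown to a phone.
bid_request = {
    "id": "example-request-1",
    "app": {"bundle": "com.example.weather"},  # hypothetical app name
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",        # advertising ID (example value)
        "geo": {"lat": 41.8781, "lon": -87.6298, "type": 1},  # type 1 = GPS/location services
    },
}

# A firm sitting in the bid stream does not need to win the auction to keep this:
observed = (
    bid_request["device"]["ifa"],
    bid_request["device"]["geo"]["lat"],
    bid_request["device"]["geo"]["lon"],
)
print(observed)
```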
Penlink did not respond to a request for comment on how it gathers or sources its location data.
playlist.megaphone.fm?p=TBIEA2…
Regardless, the documents say that “HSI INTEL requires Penlink's Tangles and Weblocas [sic] an integral part of their investigations mission.” Although HSI has historically been focused on criminal investigations, 90 percent of HSI agents have been diverted to carry out immigration enforcement, according to data published by the Cato Institute, meaning it is unclear whether use of the data will be limited to criminal investigations.
After this article was published, DHS Assistant Secretary Tricia McLaughlin told 404 Media in a statement “DHS is not going to confirm or deny law enforcement capabilities or methods. The fact of the matter is the media is more concerned with peddling narratives to demonize ICE agents who are keeping Americans safe than they are with reporting on the criminals who have victimized our communities.” This is a boilerplate statement that DHS has repeatedly provided 404 Media when asked about public documents detailing the agency’s surveillance capabilities, and which inaccurately attacks the media.
In 2020, The Wall Street Journal first revealed that ICE and CBP were using commercially available smartphone location data to investigate various crimes and for border enforcement. I then found CBP had a $400,000 contract with a location data broker and that the data it bought access to was “global.” I also found a Muslim prayer app was selling location data to a data broker whose clients included U.S. military contractors.
In October 2023, the Department of Homeland Security (DHS) Inspector General published a report that found ICE, CBP, and the Secret Service all broke the law when using location data harvested from phones. The oversight body found that those DHS components did not have sufficient policies and procedures in place to ensure that the location data was used appropriately. In one case, a CBP official used the technology to track the location of coworkers, the report said.
The report recommended that CBP stop its use of such data; CBP said at the time it did not intend to renew its contracts anyway. The Inspector General also recommended that ICE stop using such data until it obtained the necessary approvals. But ICE’s response in the report said it would continue to use the data. “CTD is an important mission contributor to the ICE investigative process as, in combination with other information and investigative methods, it can fill knowledge gaps and produce investigative leads that might otherwise remain hidden. Accordingly, continued use of CTD enables ICE HSI to successfully accomplish its law enforcement mission,” the response at the time said.
In January 2024, ICE said it had stopped the purchase of such “commercial telemetry data,” or CTD, which is how DHS refers to location data.
Update: this piece has been updated with a statement from DHS.
ICE says it’s stopped using commercial telemetry data
Spokesperson for Immigration and Customs Enforcement tells FedScoop that the agency is no longer using commercial telemetry data, but regulations are still scant.Rebecca Heilweil (FedScoop)
Klein has attempted to subpoena Discord and Reddit for information that would reveal the identity of moderators of a subreddit critical of him. The moderators' lawyers fear their clients will be physically attacked if the subpoenas go through.#News #YouTube
Screenshots shared with 404 Media show tenant screening services ApproveShield and Argyle taking much more data than they need. “Opt-out means no housing.”#News
Landlords Demand Tenants’ Workplace Logins to Scrape Their Paystubs
Landlords are using a service that logs into a potential renter’s employer systems and scrapes their paystubs and other information en masse, potentially in violation of U.S. hacking laws, according to screenshots of the tool shared with 404 Media.
The screenshots highlight the intrusive methods some landlords use when screening potential tenants, taking information they may not need, or legally be entitled to, to assess a renter.
“This is a statewide consumer-finance abuse that forces renters to surrender payroll and bank logins or face homelessness,” one renter who was forced to use the tool and who saw it taking more data than was necessary for their apartment application told 404 Media. 404 Media granted the person anonymity to protect them from retaliation from their landlord or the services used.
💡
Do you know anything else about any of these companies or the technology landlords are using? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“I am livid,” they added.
Moderators reversed course on their open-door AI policy after fans filled the subreddit with AI-generated Dale Cooper slop.#davidlynch #AISlop #News
Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies
People on r/twinpeaks flooded the subreddit with AI slop images of FBI agent Dale Cooper and ChatGPT generated scripts after the community’s moderators opened the door to posting AI art. The tide of terrible Twin Peaks related slop lasted for about two days before the subreddit’s mods broke, reversed their decision, and deleted the AI generated content.
Twin Peaks is a moody TV show that first aired in the 1990s and was followed by a third season in 2017. The work of surrealist auteur David Lynch, it influenced lots of TV shows and video games that came after it and has a passionate fan base that still shares theories and art to this day. Lynch died earlier this year, and since his passing he’s become a talking point for pro-AI art people who point to several interviews and secondhand stories they claim show Lynch had embraced an AI-generated slop future.
On Tuesday, a mod posted a long announcement that opened the doors to AI on the sub. In a now deleted post titled “Ai Generated Content On r/twinpeaks,” the moderator outlined the position that the sub was a place for everyone to share memes, theories, and “anything remotely creative as long as it has a loose string to the show or its case or its themes. Ai generated content is included in all of this.”
The post went further. “We are aware of how Ai ‘art’ and Ai generated content can hurt real artists,” the post said. “Unfortunately, this is just the reality of the world we live in today. At this point I don’t think anything can stop the Ai train from coming, it’s here and this is only the beginning. Ai content is becoming harder and harder to identify.”
The mod then asked Redditors to follow an honor system and label any post that used AI with a special new flair so people could filter out those posts if they didn’t want to see them. “We feel this is a best of both worlds compromise that should keep everyone fairly happy,” the mod said.
An honor system, a flair, and a filter did not mollify the community. In the following 48 hours Lynch fans expressed their displeasure by showing r/twinpeaks what it looks like when no one can “stop the Ai train from coming.” They filled the subreddit with AI-generated slop in protest, including horrifying pictures of series protagonist Cooper doing an end-zone dance on a football field while Laura Palmer screamed in the sky and more than a few awful ChatGPT generated scripts.
Image via r/twinpeaks.
Free-IDK-Chicken, a former mod of r/twinpeaks who resigned over the AI debacle, said the post wasn’t run past the other members of the mod team. “It was poorly worded. A bad take on a bad stance and it blew up in their face,” she told 404 Media. “It spiraled because it was condescending and basically told the community--we don’t care that it’s theft, that it’s unethical, we’ll just flair it so you can filter it out…they missed the point that AI art steals from legit artists and damages the environment.”
According to Free-IDK-Chicken, the subreddit’s mods had been fighting over whether or not to ban AI art for months. “I tried five months ago to get AI banned and was outvoted. I tried again last month and was outvoted again,” she said.
On Thursday morning, with the subreddit buried in AI slop, the mods of r/twinpeaks relented, banned AI art, and cleaned up the protest spam. “After much thought and deliberation about the response to yesterday's events, the TP Mod Team has made the decision to reverse their previous statement on the posting of AI content in our community,” the mods said in a post announcing the new policy. “Going forward, posts including generative AI art or ChatGPT-style content are disallowed in this subreddit. This includes posting AI google search results as they frequently contain misinformation.”
Lynch has become a mascot for pro-AI boosters. An image on a pro-AI art subreddit depicts Lynch wearing an OpenAI shirt and pointing at the viewer. “You can’t be punk and also be anti-AI, AI-phobic, or an AI denier. It’s impossible!” reads a sign next to the AI-generated picture of the director.
Image via r/slopcorecirclejerk
As evidence, they point to a British Film Institute interview published shortly before his death where he lauds AI and calls it “incredible as a tool for creativity and for machines to help creativity.” AI boosters often leave off the second part of the quote. “I’m sure with all these things, if money is the bottom line, there’d be a lot of sadness, and despair and horror. But I’m hoping better times are coming,” Lynch said.
The other big piece of evidence people use to claim Lynch was pro-AI is a secondhand account given to Vulture by his neighbor, the actress Natasha Lyonne. According to the interview in Vulture, Lyonne asked Lynch for his thoughts on AI and Lynch picked up a pencil and told her that everyone has access to it and to a phone. “It’s how you use the pencil. You see?” he said.
Setting aside the environmental and ethical arguments against AI-generated art, if AI is a “pencil,” most of what people make with it is unpleasant slop. Grotesque nonsense fills our social media feeds and AI-generated Jedis and Ghiblis have become the aesthetic of fascism.
We've seen other platforms and communities struggle to keep AI art at bay when they've allowed it to exist alongside human-made content. On Facebook, Instagram, and YouTube, low-effort garbage is flooding online spaces and pushing productive human conversation to the margins while floating to the top of engagement algorithms.
Other artist communities are pushing back against AI art in their own ways: Earlier this month, DragonCon organizers ejected a vendor for displaying AI-generated artwork. Artists’ portfolio platform ArtStation banned AI-generated content in 2022. And earlier this year, artists protested the first-ever AI art auction at Christie’s.
Artists Are Revolting Against AI Art on ArtStation
Artists are fed up with AI art on the portfolio platform, which is owned by Epic Games, but the company isn't backing down. Chloe Xiang (VICE)
Multiple Palantir and Flock sources say the companies are spinning a commitment to "democracy" to absolve them of responsibility. "In my eyes, it is the classic double speak," one said.#News
How Surveillance Firms Use ‘Democracy’ As a Cover for Serving ICE and Trump
In a blog post published in June, Garrett Langley, the CEO and co-founder of surveillance company Flock, said “We rely on the democratic process, on the individuals that the majority vote for to represent us, to determine what is and is not acceptable in cities and states.” The post explained that the company believes the laws of the country and individual states and municipalities, not the company, should determine the limits of what Flock’s technology can be used for. It came after 404 Media revealed local police were tapping into Flock’s networks of AI-enabled cameras for ICE, and that a sheriff in Texas performed a nationwide search for a woman who self-administered an abortion.
Do you work at any of these companies or others like them? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
Langley’s statement echoes a common refrain that surveillance and tech companies selling their products to Immigration and Customs Enforcement (ICE) or other parts of the U.S. government have repeated during the second Trump administration: we live in a democracy. It is not our job to decide how our powerful capabilities, which can track people’s physical location, marry usually disparate datasets together, or crush dissent, can or should be used. At least, that’s the thrust of the argument. That is despite the very clear reality that the first Trump administration was very different from the Biden administration, and both pale in comparison to Trump 2.0, with the executive branch and various agencies flouting ordinary democratic values. The idea of what a democracy is capable of has shifted.
Dale Britt Bendler “earned approximately $360,000 in private client fees while also working as a full-time CIA contractor with daily access to highly classified material that he searched like it was his own personal Google,” according to a court record.#News
Contractor Used Classified CIA Systems as ‘His Own Personal Google’
This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.
A former CIA official and contractor, who at the time of his employment dug through classified systems for information he then sold to a U.S. lobbying firm and foreign clients, used his access to those CIA systems as “his own personal Google,” according to a court record reviewed by 404 Media and Court Watch.
Do you know anything else about this case? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
Dale Britt Bendler, 68, was a long-serving CIA officer before retiring in 2014 with a full pension. He rejoined the agency as a contractor and sold a wealth of classified information, according to the government’s sentencing memorandum filed on Wednesday. His clients included a U.S. lobbying firm working for a foreigner being investigated for embezzlement and another foreign national trying to secure a U.S. visa, according to the court record.
Academic workers are re-thinking how they live and work online after some have been fired for criticizing Charlie Kirk following his death.#News
Union Warns Professors About Posting In the ‘Current Climate’
A union that represents university professors and other academics published a guide on Wednesday tailored to help its members navigate social media during the “current climate.” The advice? Lock down your social media accounts, expect anything you post to be screenshotted, and keep things positive. The document ends with links to union-provided trauma counseling and legal services.
The American Association of University Professors (AAUP) published the two-page document on September 17, days after the September 10 killing of right-wing pundit Charlie Kirk. The list of college professors and academics who've been censured or even fired for joking about, criticizing, or quoting Kirk after his death is long.
Clemson University in South Carolina fired multiple members of its faculty after investigating their Kirk-related social media posts. On Monday the state’s attorney general sent the college a letter telling it that the First Amendment did not protect the fired employees and that the state would not defend them. Two universities in Tennessee fired multiple staff members after getting complaints about their social media posts. The University of Mississippi let a staff member go because they re-shared a comment about Kirk that people found “insensitive.” Florida Atlantic University placed an art history professor on administrative leave after she posted about Kirk on social media. Florida's education commissioner later wrote a letter to school superintendents warning them there would be consequences for talking about Kirk in the wrong way. “Govern yourselves accordingly,” the letter said.
AAUP’s advice is meant to help academic workers avoid ending up as a news story. “In a moment when it is becoming increasingly difficult to predict the consequences of our online speech and choices, we hope you will find these strategies and resources helpful,” it said.
Here are its five explicit tips: “1. Set your personal social media accounts to private mode. When prompted, approve the setting to make all previous posts private. 2. Be mindful that anything you post online can be screenshotted and shared. 3. Before posting or reposting online commentary, pause and ask yourself: a. Am I comfortable with this view potentially being shared with my employer, my students, or the public? Have I (or the person I am reposting) expressed this view in terms I would be comfortable sharing with my employer, my students, or the public?”
The advice continues: “4. In your social media bios, state that the views expressed through the account represent your own opinions and not your employer. You do not need to name your employer. Consider posting positive statements about positions you support rather than negative statements about positions you disagree with. Some examples could be: ‘Academic freedom is nonnegotiable,’ ‘The faculty united will never be divided,’ ‘Higher ed research saves lives,’ ‘Higher ed transforms lives,’ ‘Politicians are interfering with your child’s education.’”
The AAUP then provides five digital safety tips that include setting up strong passwords, installing software updates as soon as they’re available, using two-factor authentication, and never using employer email addresses outside of work.
The last tip is the most revealing of how academics might be harassed online through campaigns like Turning Point USA’s “Professor Watchlist.” “Search for your name in common search engines to find out what is available about you online,” AAUP advises. “Put your name in quotation marks to narrow the search. Search both with and without your institution attached to your name.”
After that, the AAUP provided a list of trauma, counseling, and insurance services that its members have access to and a list of links to other pieces of information about protecting themselves.
“It’s good basic advice given that only a small number of faculty have spent years online in my experience, it’s a good place to start,” Pauline Shanks Kaurin, the former military ethics professor at the U.S. Naval War College told 404 Media. Kaurin resigned her position at the college earlier this year after realizing that the college would not defend academic freedom during Trump’s second term.
“I think this reflects the heightened level of scrutiny and targeting that higher ed is under,” Kaurin said. “While it’s not entirely new, the scale is certainly aided by many platforms and actors that are engaging on [social media] now when in the past faculty might have gotten threatening phone calls, emails and hard copy letters.”
The AAUP guidance was co-written by Isaac Kamola, an associate professor at Trinity College and the director of the AAUP’s Center for Academic Freedom. Kamola told 404 Media that the recommendations came from years of experience working with faculty who’ve been on the receiving end of targeted harassment campaigns. “That’s incredibly destabilizing,” he said. “It’s hard to explain what it’s like until it happens to you.”
Kamola said that academic freedom was already under threat before Kirk’s death. “It’s a multi-decade strategy of making sure that certain people, certain bodies, certain ideas, are not in higher education, so that certain other ones can be, so that you can reproduce the ideas that a political apparatus would prefer existed in a university,” he said.
It’s telling that the AAUP felt the need to publish this, but the advice is practical and actionable, even for people outside of academia. Freedom of expression is under attack in America and though academics and other public figures are perhaps under the most threat, they aren’t the only ones. Secretary of Defense Pete Hegseth said the Pentagon is actively monitoring the social media activity of military personnel as well as civilian employees of the Department of Defense.
“It is unacceptable for military personnel and Department of War civilians to celebrate or mock the assassination of a fellow American,” Sean Parnell, public affairs officer at the Pentagon, wrote on X, using the new nickname for the Department of Defense. In the private sector, Sony fired one of its video game developers after they made a joke on X about Kirk’s death and multiple journalists have been fired for Kirk related comments.
AAUP did not immediately respond to 404 Media’s request for comment.
MSNBC fires analyst Matthew Dowd over Charlie Kirk shooting remarks
Dowd said the slain activist’s words may have fueled the violence that claimed his life, sparking backlash. Joseph Gedeon (The Guardian)
An AI-generated show on Russian TV includes Trump singing obnoxious songs and talking about golden toilets.#News
Russian State TV Launches AI-Generated News Satire Show
A television channel run by Russia’s Ministry of Defense is airing a program it claims is AI-generated. According to advertisements for the show, a neural network picks the topics it wants to discuss, then generates the accompanying video. That includes putting French President Emmanuel Macron in hair curlers and a pink robe, making Trump talk about golden toilets, and showing EU Commission President Ursula von der Leyen singing a Soviet-era pop song while working in a factory.
The show—called Политукладчик or “PolitStacker,” according to a Google translation—airs every Friday on Zvezda, a television station owned by Russia’s Ministry of Defense. It’s hosted by “Natasha,” an AI avatar modeled on Russian journalist Nataliya Metlina. In a clip of the show, “Natasha” said that its resemblance to Metlina is intentional.
“I am the creation of artificial intelligence, entirely tuned to your informational preferences,” it said. “My task is to select all the political nonsense of the past week and fit it in your heads like candies in a little box.” The show’s title sequence and advertisements show gold-wrapped candies bearing the faces of politicians like Trump and Volodymyr Zelensky being sorted into a candy box.
“‘PolitStacker’ is the world’s first television program created by artificial intelligence,” said an ad for the show on the Russian social media network VK, according to Google Translate. “The AI itself selects, analyzes, and comments on the most important news, events, facts, and actions—as it sees them. The editorial team’s opinion may not coincide with the AI’s (though usually…it does). ‘PolitStacker’ is not just news, but a tough breakdown of political madness from a digital host who notices what others overlook.”
Data scientist Kalev Leetaru discovered the AI-generated Russian show as part of his work with the GDELT Project, which collaborates with the Internet Archive's TV News Archive, a project that scans and stores television broadcasts from around the world. “If you just look at the show and you didn’t know it had AI associated with it, you would never guess that. It looks like a traditional propaganda show on Russian television," Leetaru told 404 Media. “If they are using AI to the degree that they say they are, even if it’s just to pick topics, they mastered that formula in a way that others have not.”
PolitStacker’s 40-minute runtime is full of silly political commentary, jokes, and sloppy AI deepfakes that look like they were pulled from a five-year-old Instagram reel. In one episode, Macron, with curlers in his hair, adjusts Zelensky’s tie ahead of a meeting at the Kremlin. Later, a smiling Macron with six-pack abs stands in a closet in front of a clown costume and a leather jumpsuit. “Parts of it have an uncanny valley to it, parts of it are really really good. This is only their fourth episode and they’re already doing deep fake interviews with world leaders,” Leetaru said.
Image via the Internet Archive.
In one of the AI-generated Trump interviews, the American president talked about how he’d end the war in Ukraine by building a casino in Moscow with golden toilets. “And all the Russian oligarchs, they would all be inside. All their money would be inside. Problem solved. They would just play poker and forget about this whole war. A very bad deal for them, very distracting,” the deepfake Trump said.
Deepfake world leaders aren’t new and are pretty common across the internet. For Leetaru, the difference is that this is airing on a state-backed television station. “It’s still in parody form, but to my knowledge, no national television network show has even gone this far,” he told 404 Media. “Today it’s a parody video that’s pretty clearly a comedic interview. But, you know, how far will they take that? And does that inspire others to maybe step into spaces that they wouldn’t have before?”
Trump also loves AI and the AI aesthetic. Government social media accounts often post AI-generated slop pictures of Trump as the Pope or a Jedi. ICE and the DHS share pictures on official channels that paint over the horrifying reality of the administration's immigration policy with a sheen of AI slop. Trump shared an AI-generated video that imagined what Gaza would look like if he built a resort there. And he’s teamed up with Perplexity to launch an AI-powered search engine on Truth Social.
“PolitStacker” is a parody show, but Russian media is experimenting with less comedic AI avatars as well. Earlier this year, the state-owned news agency Sputnik began to air what it called the “Dugin Digital Edition.” In these little lectures, an AI version of Russian philosopher Alexander Dugin discusses the news of the day in English.
Last year, a Hawaiian newspaper, The Garden Island, teamed with an Israeli company to produce a news show on YouTube staffed by AI anchors. Reactions to the program were overwhelmingly negative, it brought in fewer than 1,000 viewers per episode, and The Garden Island stopped making the show a few months after it began.
In a twist of fate, Leetaru only discovered Moscow’s AI-generated show thanks to an AI system of his own. The GDELT project is a massive undertaking that records thousands of hours of data from across the world and it uses various AI systems to generate transcripts, translate them, and create an index of what’s been archived. “In this case I totally skimmed over what I thought was an ad for a propaganda show and then some candy commercial. Instead it ended up being something that’s fascinating,” he said.
But his AI indexing tool noted Zvezda's new show as an AI-generated program that sought to “analyze political follies of the outgoing week.” He took a second look and was glad he did. “That’s the power of machines being able to catch things and guide your eye towards that.”
What he saw disturbed him. “Yes, it’s one show on an obscure Russian government adjacent network using deep fakes for parody,” he said. “But the fact that a television network finally made that leap, to me, is a pivotal moment that I see as the tip of the iceberg.”
Historic Newspaper Uses Janky AI Newscasters Instead of Human Journalists
Hawaii’s The Garden Island newspaper is producing video news segments with AI. The union at its parent company calls it “digital colonialism.” Matthew Gault (404 Media)
OpenAI introduces new age prediction and verification methods after wave of teen suicide stories involving chatbots.#News
ChatGPT Will Guess Your Age and Might Require ID for Age Verification
OpenAI has announced it is introducing new safety measures for ChatGPT after a wave of stories and lawsuits accusing ChatGPT and other chatbots of playing a role in a number of teen suicide cases. ChatGPT will now attempt to guess a user’s age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old.
“We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” the company said in its announcement.
“I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking,” OpenAI CEO Sam Altman said on X.
In August, OpenAI was sued by the parents of Adam Raine, who died by suicide in April. The lawsuit alleges that ChatGPT helped him write the first draft of his suicide note, suggested improvements to his methods, ignored early attempts at self-harm, and urged him not to talk to adults about what he was going through.
“Where a trusted human may have responded with concern and encouraged him to get professional help, ChatGPT pulled Adam deeper into a dark and hopeless place by assuring him that ‘many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control,’” the lawsuit says.
In August the Wall Street Journal also reported a story about a 56-year-old man who committed a murder-suicide after ChatGPT indulged his paranoia. Today, the Washington Post reported a story about another lawsuit alleging that a Character AI chatbot contributed to a 13-year-old girl’s death by suicide.
OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, more strict and invasive security measures.
In addition to attempting to guess or verify a user’s age, ChatGPT will now also apply different rules to teens who are using the chatbot.
“For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” the announcement said. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”
OpenAI’s post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called “uncensored” models, along with a political shift to the right that sees many forms of content moderation as censorship, has caused OpenAI to loosen those restrictions.
“We want users to be able to use our tools in the way that they want, within very broad bounds of safety,” OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide, is summed up elsewhere in the post: “‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.”
OpenAI is not the first company that’s attempting to use machine learning to predict the age of its users. In July, YouTube announced it will use a similar method to “protect” teens from certain types of content on its platform.
Extending our built-in protections to more teens on YouTube
We're extending our existing built-in protections to more US teens on YouTube, using machine learning age estimation. James Beser (YouTube Official Blog)
An LLM breathed new life into 'Animal Crossing' and made the villagers rise up against their landlord.#News #VideoGames
AI-Powered Animal Crossing Villagers Begin Organizing Against Tom Nook
A software engineer in Austin has hooked up Animal Crossing to an AI and breathed new and disturbing life into its villagers. Using a large language model (LLM) fine-tuned on Animal Crossing scripts and an RSS reader, the anthropomorphic folk of the Nintendo classic spouted new dialogue, talked about current events, and actively plotted against Tom Nook’s predatory bell prices.
The Animal Crossing LLM is the work of Josh Fonseca, a software engineer in Austin, Texas, who works at a small startup. Ars Technica first reported on the mod. His personal blog is full of small software projects like a task manager for the text editor Vim, a mobile app that helps rock climbers find partners, and the Animal Crossing AI. He also documented the project in a YouTube video.
Fonseca started playing around with AI in college and told 404 Media that he’d always wanted to work in the video game industry. “Turns out it’s a pretty hard industry to break into,” he said. He also graduated in 2020. “I’m sure you’ve heard, something big happened that year.” He took the first job he could find, but kept playing around with video games and AI, and had previously injected an LLM into Stardew Valley.
Fonseca used a Dolphin emulator running the original GameCube Animal Crossing on a MacBook to get the project working. According to his blog, an early challenge was just getting the AI and the game to communicate. “The solution came from a classic technique in game modding: Inter-Process Communication (IPC) via shared memory. The idea is to allocate a specific chunk of the GameCube's RAM to act as a ‘mailbox.’ My external Python script can write data directly into that memory address, and the game can read from it,” he said in the blog.
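The "mailbox" idea is easy to picture even without his scripts. The snippet below is not Fonseca's code, only a minimal sketch: it assumes the third-party dolphin-memory-engine Python bindings for poking at a running Dolphin instance, and the mailbox address is a made-up placeholder; as he explains next, finding the real address is the hard part.
```python
# Sketch of the shared-memory "mailbox" idea, not Fonseca's actual code.
# Assumes the dolphin-memory-engine Python bindings; the address and size
# below are hypothetical placeholders, not values from the real mod.
import dolphin_memory_engine as dme

MAILBOX_ADDR = 0x817F0000   # hypothetical: a spare region near the top of GameCube RAM
MAILBOX_SIZE = 256          # hypothetical fixed-size slot for one encoded dialogue line

def write_dialogue(encoded_line: bytes) -> None:
    """Drop one encoded dialogue line into the mailbox for the game to read."""
    if not dme.is_hooked():
        dme.hook()  # attach to the running Dolphin process
    payload = encoded_line[:MAILBOX_SIZE].ljust(MAILBOX_SIZE, b"\x00")
    dme.write_bytes(MAILBOX_ADDR, payload)

def read_mailbox() -> bytes:
    """Read back whatever is currently sitting in the mailbox."""
    if not dme.is_hooked():
        dme.hook()
    return dme.read_bytes(MAILBOX_ADDR, MAILBOX_SIZE)
```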
He told 404 Media that this was the most tedious part of the whole project. “The process of finding the memory address the dialogue actually lives at and getting it to scan to my MacBook, which has all these security features that really don’t want me to do that, and ending up writing to the memory took me forever,” he said. “The communication between the game and an external source was the biggest challenge for me.”
Once he got his code and the game talking, he ran into another problem. “Animal Crossing doesn't speak plain text. It speaks its own encoded language filled with control codes,” he said in his blog. “Think of it like HTML. Your browser doesn't just display words; it interprets tags like <b> to make text bold. Animal Crossing does the same. A special prefix byte, CHAR_CONTROL_CODE, tells the game engine, ‘The next byte isn't a character, it's a command!’”
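Purely to illustrate the shape of that format, here is a toy encoder; the byte values are invented placeholders, not Animal Crossing's real control codes.
```python
# Illustrative only: these control-code values are invented placeholders,
# not the game's real byte values.
CHAR_CONTROL_CODE = 0x7F      # hypothetical prefix byte
CMD_PAUSE = 0x01              # hypothetical "pause" command
CMD_COLOR = 0x02              # hypothetical "set text color" command

def encode_line(text: str, color: int | None = None, pause_at_end: bool = False) -> bytes:
    """Encode a plain-text line into a tagged dialogue format like the one described above."""
    out = bytearray()
    if color is not None:
        out += bytes([CHAR_CONTROL_CODE, CMD_COLOR, color])
    out += text.encode("ascii", errors="replace")
    if pause_at_end:
        out += bytes([CHAR_CONTROL_CODE, CMD_PAUSE])
    return bytes(out)
```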
But this was a solved problem. The Animal Crossing modding community long ago learned the secrets of the villagers’ language, and Fonseca was able to build on their work. Once he understood the game’s dialogue systems, he built the AI brain. It took two models: one to write the dialogue and another he called “The Director” that would add in pauses, emphasize words with color, and choose the facial animations for the characters. He used a fine-tuned version of Google’s Gemini for this and said it was the most consistent model he’d used.
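Neither model's prompts have been published, so the following is only a guess at the shape of that two-pass setup: a writer call drafts the villager's line, and a "Director" call marks it up with the pacing, color emphasis, and expressions the game-side encoding can carry. The llm parameter is a stand-in for whatever fine-tuned Gemini call Fonseca actually uses.
```python
# A guess at the two-pass "writer + Director" shape, not Fonseca's actual code.
# `llm` stands in for the real fine-tuned Gemini call; any text-in/text-out
# function will do for experimenting.
import json
from typing import Callable

def write_line(llm: Callable[[str], str], villager: str, personality: str, topic: str) -> str:
    """First pass: draft one short line in the villager's voice."""
    return llm(
        f"You are {villager}, an Animal Crossing villager with a {personality} "
        f"personality. Write one short line of dialogue about: {topic}"
    )

def direct_line(llm: Callable[[str], str], raw_line: str) -> dict:
    """Second pass: the 'Director' adds pacing, color emphasis, and a facial expression."""
    marked_up = llm(
        "Add stage directions to this Animal Crossing line. Respond with JSON "
        "containing 'text', 'pauses' (character offsets), 'color_words' "
        f"(words to emphasize), and 'expression' (an animation name):\n{raw_line}"
    )
    return json.loads(marked_up)
```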
To make it work, he fine-tuned the model, meaning he trained it further on a small set of task-specific examples to steer it toward the outputs he wanted. “You probably need a minimum of 50 to 100 really good examples in order to make it better,” he said.
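"Really good examples" here means matched pairs of a prompt and the in-character reply you want back. The pairs below are invented for illustration, not taken from Fonseca's dataset, and the exact field names depend on the fine-tuning service.
```python
# Invented examples in the spirit of a dialogue fine-tuning set; the field
# names depend on the fine-tuning service being used.
FINE_TUNE_EXAMPLES = [
    {
        "input": "Villager: Cookie (peppy dog). Topic: the player visits after a long absence.",
        "output": "Omigosh, you're back! I was starting to think you got lost in the post office, arfer!",
    },
    {
        "input": "Villager: Scoot (jock duck). Topic: morning exercise.",
        "output": "Up and at 'em! Ten laps around the wishing well before breakfast! Zip, zoom!",
    },
    # ...and another 50-100 pairs like these, per Fonseca's estimate.
]
```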
Results for the experiment were mixed. Cookie, Scoot, and Cheri did indeed utter new phrases in keeping with their personality. Things got weird when Fonseca hooked up the game to an RSS reader so the villagers could talk about real world news. “If you watch the video, all the sources are heavily, politically, leaning in one direction,” he said. “I did use a Fox news feed, not for any other reason than I looked up ‘news RSS feeds’ and they were the first link and I didn’t really think it through. And then I started getting those results…I thought they would just present the news, not have leanings or opinions.”
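Mechanically, wiring a news feed into the villagers' topic pool can be as small as the sketch below, which assumes the feedparser library; the feed URL is a placeholder, not necessarily the one Fonseca used.
```python
# Minimal sketch of pulling headlines into the villagers' topic pool.
# Assumes the feedparser library; the URL is a placeholder.
import feedparser

def fetch_topics(feed_url: str = "https://example.com/news/rss", limit: int = 5) -> list[str]:
    """Return the most recent headlines from an RSS feed as conversation topics."""
    feed = feedparser.parse(feed_url)
    return [entry.get("title", "") for entry in feed.entries[:limit] if entry.get("title")]
```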
“Trump’s gonna fight like heck to get rid of mail-in voting and machines!” Fitness obsessed duck Scoot said in the video. “I bet he’s got some serious stamina, like, all the way in to the finish line—zip, zoom!”
The pink dog Cookie was up on her Middle East news. “Oh my gosh, Josh 😀! Did you see the news?! Gal Gadot is in Israel supporting the families! Arfer,” she said, uttering her trademark catchphrase after sharing the latest about Israel.
In the final part of the experiment, Fonseca enabled the villagers to gossip. “I gave them a tiny shared memory for gossip, who said what, to whom, and how they felt,” he said in the blog.
The villagers almost instantly turned on Tom Nook, the tanuki who runs the local stores and holds most of Animal Crossing's inhabitants in debt. “Everything’s going great in town, but sometimes I feel like Tom Nook is, like, taking all the bells!” Cookie said.
“Those of us with big dreams are being squashed by Tom Nook! We gotta take our town back!” Cheri the bear cub said.
“This place is starting to feel more like Nook’s prison, y’know?” said Scoot.
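The gossip system's internals don't appear in the blog excerpts above, but the "tiny shared memory" he describes maps naturally onto a small record per remark. A hedged sketch, with field names that are guesses rather than his:
```python
# A guess at the gossip store's shape: one small record per remark,
# shared between villagers when their dialogue is generated.
from dataclasses import dataclass, field

@dataclass
class GossipEntry:
    speaker: str     # who said it
    listener: str    # who they said it to
    about: str       # who or what the remark concerned
    sentiment: str   # how the speaker felt, e.g. "annoyed", "excited"
    summary: str     # one-line recap fed back into later prompts

@dataclass
class GossipMemory:
    entries: list[GossipEntry] = field(default_factory=list)

    def remember(self, entry: GossipEntry) -> None:
        self.entries.append(entry)

    def recent_about(self, subject: str, limit: int = 3) -> list[GossipEntry]:
        """Pull the latest gossip about a subject (say, 'Tom Nook') into a prompt."""
        return [e for e in self.entries if e.about == subject][-limit:]
```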
youtube.com/embed/7AyEzA5ziE0?…
Why do this to Animal Crossing? Why make Scoot and Cheri learn about Gal Gadot, Israel, and Trump?
“I’ve always liked nostalgic content,” Fonseca said. His TikTok and YouTube algorithms are filled with liminal spaces and detuned music from his childhood. He’s gotten into hauntology, a philosophical idea that studies—among other things—promised futures that did not come to pass.
He sees projects like this as a way of linking the past and the future. “When I was a child I was like, ‘Games are gonna get better and better every year,’” he said. “But after 20 years of playing games I’ve become a little jaded and I’m like, ‘oh there hasn’t really been that much innovation.’ So I really like the idea of mixing those old games with all the future technologies that I’m interested in. And I feel like I’m fulfilling those promised futures in a way.”
He knows that not everyone is a fan of AI. “A lot of people say that dialogue with AI just cannot be because of how much it sounds like AI,” he said. “And to some extent I think people are right. Most people can detect ChatGPT or Gemini language from a mile away. But I really think, if you fine tune it, I was surprised at just how good the results were.”
Animal Crossing’s dialogue is simple and that simplicity makes it a decent test case for AI video game mods, but Fonseca thinks he can do similar things with more complicated games. “There’s been a lot of discussion around how what I’m doing isn’t possible when there’s like, tasks or quests, because LLMs can’t properly guide you to that task without hallucinating. I think it might be more possible than people think,” he said. “So I would like to either try out my own very small game or take a game that has these kinds of quests and put together a demo of how that might be possible.”
He knows people balk at using AI to make video games, and art in general, but believes it’ll be a net benefit. “There will always be human writers and I absolutely want there to be human writers handling the core,” he said. “I would hope that AI is going to be a tool that doesn’t take away any of the best writers, but maybe helps them add more to their game that maybe wouldn’t have existed otherwise. I would hope that this just helps create more art in the world. I think I see the total art in the world increasing as a good thing…now I know some people would say that using AI ceases to make it art, but I’m also very deep in the programming aspect of it. What it takes to make these things is so incredible that it still feels like magic to me. Maybe on some level I’m still hypnotized by that.”
Modder injects AI dialogue into 2002’s Animal Crossing using memory hack
Unofficial mod lets classic Nintendo GameCube title use AI chatbots with amusing results. Benj Edwards (Ars Technica)