Who could have possibly predicted this, besides everyone? #Meta


RIP Metaverse, an $80 Billion Dumpster Fire Nobody Wanted


A few things on the end of Horizon Worlds, the metaverse that Mark Zuckerberg believed in so much that he renamed his company:

1) It’s very sad that many of the people who worked on it have been unceremoniously laid off because their leaders appear to have no idea what they’re doing
2) lol
3) lmao, even

Who could have possibly predicted this?

When Zuckerberg announced Horizon Worlds not really all that long ago at a batshit livestream in October 2021, I wrote an article called “Zuckerberg Announces Fantasy World Where Facebook Is Not a Horrible Company.” During that livestream Zuckerberg said, “I believe technology can make our lives better. The future will be built by those willing to stand up and say ‘this is the future we want.’” The future Zuckerberg wanted, at that time, was not a future anyone else wanted. But he was bold enough to systematically light roughly $80 billion on fire, not because he was willing to stand up and paint a vision of the future, but because Facebook was mired in various horrendous scandals and because he needed to rebrand his company and needed something shiny to point at to keep Facebook’s stock price up. It is bad when actual economists say that money was thrown “into the toilet.”

Let’s check what I wrote then: “The future Zuckerberg went on to pitch was a delusional fever dream cribbed most obviously from dystopian science fiction and misleading or outright fabricated virtual reality product pitches from the last decade. In the ‘metaverse’—an ‘embodied’ internet where we are, basically, inside the computer via a headset or other reality-modifying technology of some sort—rather than hang out with people in real life you could meet up with them as Casper-the-friendly-ghost-style holograms to do historically fun and stimulating activities such as attend concerts or play basketball.”

Zuckerberg’s bold vision of the metaverse was a place where T-Pain would sell NFTs of imaginary sneakers at concerts attended by people sitting silently in their living rooms with computers strapped to their faces, where Wendy’s could do integrated brand deals in which human-shaped avatars without legs could throw baconators at basketball hoops, and where Zuckerberg could pretend to know how to surf. Even on these pitiful metrics, the metaverse failed. “Whatever the metaverse does look like, it is virtually guaranteed to not look or feel anything like what Facebook showed us on Thursday,” I wrote at the time.

Over the last few years, Zuckerberg has found another thing he can ruin via his trademark process of pouring kerosene on huge piles of money and throwing matches at it (perhaps a fun metaverse game?). Zuckerberg’s current bold vision for the future is one in which social media is not social media at all but is instead a bunch of highly customized AI-generated ads delivered to you via an increasingly creepy algorithm. Alongside this, it is a future in which Reality Labs—the division of Meta that created Horizon Worlds—makes AI camera glasses whose main uses appear to be harassing women, traumatizing the underpaid content moderators in developing countries who watch the footage, and serving as fashion statements for federal officials whose current mission is kidnapping undocumented immigrants.

The complete and utter failure of the metaverse is a reminder not just that the future Silicon Valley is force-feeding us is not inevitable, but that quite often these oligarchs simply cannot relate to real people, don’t know how or why people use their products, and very often have no idea what they’re doing.

I remember the metaverse, crypto, web3 Venn diagram of hype very well—in fact, I remember sitting in meetings where VICE executives proposed renting land in the crypto-focused Decentraland metaverse to build a virtual VICE headquarters (where we all worked before 404 Media). I noted at the time that Decentraland was stupid, and that far fewer people were on Decentraland at any given time than were reading even a failed blog post on the website of our failing company. It didn't matter. The people “willing to stand up and say ‘this is the future we want’” wanted a virtual building in a virtual dead mall, and they got it. Was it because they were so brave and forward-looking? Or was it because they were rich and powerful and could say this is the future we, the business people, the business knowers, want?



In a feature the dating app says is set to roll out in the U.S. later this spring, Tinder plans to access users' camera rolls to pick photos and determine what they're into. #datingapps #tinder


Tinder Plans to Let AI Scan Your Camera Roll


Tinder plans to let machine vision algorithms loose on your camera roll. Instead of users building profiles on their own, AI will scan users’ locally stored photos—everything from gym selfies to pictures of their family, sensitive documents, and dick pics—to help construct profiles by determining users’ interests and values.

Dating apps are the go-to way for people to connect romantically these days. As AI has risen in popularity thanks to services like ChatGPT, however, users are suffering the consequences of problems like bots and AI-generated messages infiltrating dating apps. For some people, the experience is less authentic than ever as people offload get-to-know-you conversations to artificial intelligence.

The feature is still being tested, with early access only available in Australia beginning this month. Although Tinder says it attempts to filter out explicit images, users may still be concerned about Tinder's AI scanning their entire camera roll. “It's up to you to figure out what you're comfortable sharing back with Tinder,” Tinder Head of Product Mark Kantor told 404 Media. Still, users can’t pick individual photos they want analyzed or ignored. Tinder’s safeguards are meant to filter out explicit images or text, and to blur faces before insights are processed.

Tinder claims its AI is looking for themes and interests, like pets, activities, or food, as well as photos that are well-lit or well-framed. In theory, this will help users decide the best way to present themselves online. “There is some art to it,” Kantor said. “It's not just the science.” (It’s unclear what happens if your camera roll is full of bad photos.)

Eventually, Kantor said, Tinder will add the ability to turn photos into larger collages for their profiles. “We do give people a pretty big variety of photos so we're not going to go from 30,000 to three.” Kantor said it looks for subject matter and tries to group insights based on similar interests. “If I have one dog photo of 20,000, I'm not really a dog person,” Kantor said as an example.
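Kantor’s dog-photo example implies a simple proportion-based heuristic: a theme only counts as an interest if it appears in a large enough share of the camera roll. Here is a minimal sketch of that idea in Python; the threshold, names, and labels are hypothetical illustrations based on Kantor’s description, not Tinder’s actual implementation, and the per-photo theme labels are assumed to come from an on-device image classifier.

```python
from collections import Counter

# Hypothetical sketch of theme grouping per Kantor's description.
# Assumes an on-device classifier has already tagged each photo with a theme.
MIN_SHARE = 0.01  # illustrative threshold: a theme must appear in at least
                  # 1% of photos before it is treated as an interest

def extract_interests(photo_themes: list[str]) -> list[str]:
    """Turn per-photo theme labels into profile-level interests."""
    if not photo_themes:
        return []
    counts = Counter(photo_themes)
    total = len(photo_themes)
    # One dog photo out of 20,000 falls far below the threshold,
    # so "dog" would not be inferred as an interest.
    return [theme for theme, n in counts.items() if n / total >= MIN_SHARE]

# Example: a 20,000-photo camera roll with a single dog photo
themes = ["food"] * 400 + ["gym"] * 250 + ["dog"] + ["other"] * 19349
print(extract_interests(themes))  # ['food', 'gym', 'other']
```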

Tinder has already leaned heavily into AI. Kantor told 404 Media that artificial intelligence is writing more than half the app’s code these days. Its new AI-driven features include photo enhancements, match recommendations, and photo scanning. Kantor said the app uses AI to “help you express yourself,” not to express yourself on the dater’s behalf.

If the camera roll is a window into the modern soul, it is also a goldmine of personal information. Depending on what someone photographs, their camera roll could include everything from photos of sensitive documents, like banking or medical info, to nudes. It’s a potential security nightmare, especially when people are sharing intimate details about themselves or their dating lives. Security failures on dating apps like Tea put users in danger: multiple breaches exposed personal information, including photos, driver’s license information, and direct messages, before the app was finally yanked from the App Store. Tinder has had its own privacy and security issues. Last year, we revealed the dating app was one of thousands co-opted to mine location data. In January, hackers claimed to have stolen internal data from Match Group, which owns Tinder.

According to Kantor, Tinder isn’t storing the data it pulls from photos on its end. “It's purely on your device,” he said. Tinder won’t scan your deleted photos, or anything from your phone’s hidden folder; after it’s finished scouring your images, the AI selects specific photos for users to choose to upload to their public profile. If the AI’s categorization of a user as, say, a dog person is inaccurate, users can note that feedback and choose to either accept or reject the AI’s insights. Anything that doesn’t go on someone’s profile is deleted, and if users want new insights later, they’ll have to do the process all over again, according to Tinder.

“In talking to this new generation of daters, they want something different,” Kantor told 404 Media. “I think you see connection, that hasn't changed. I don't think they're frustrated with dating. They're frustrated with all of the friction and the dead ends with dating.”

Megan Farokhmanesh is a games and culture reporter whose work has appeared in The New York Times, Wired, Axios, and The Verge. Find her on Bluesky.


How filmmaker Chris Parr put North Oaks, Minnesota, on the map. #podcasts


Mapping Google's Unmappable City


North Oaks, Minnesota, is the only city in the United States that is not on Google Maps Street View. YouTube documentarian Chris Parr, who grew up not too far from North Oaks, set out to change that earlier this year. For a brief few days, he literally put North Oaks on the map. And then it was gone again.

“It’s known by Minnesotans as a place where executives and CEOs live,” Parr told 404 Media. “Famously, Walter Mondale is from North Oaks, but also like United Healthcare executives and Target executives.”
North Oaks has managed to largely stay unmapped on Street View because of the way the city handles its streets. In almost every city and town in the United States, property owners give an easement to their local government for the roads in front of their homes (or don’t have any claim to the roads at all). In North Oaks, homeowners’ property extends into the middle of the street, meaning there is literally no “public” property in the city, and the roads are maintained by the North Oaks Homeowners’ Association (NOHOA). “The City owns no roads, land, or buildings. The 50-60 miles of roads in the city are owned by the NOHOA members whose property extends to the center of the road subject to easements in favor of NOHOA,” according to the homeowners association’s website, which has very little information on it and notes that it is “unable to share most private documents with the public.” The roads entering North Oaks are posted with no trespassing signs and monitored by automated license plate readers.
In the early days of Google Maps, North Oaks was on Street View. But in May 2008, the city threatened Google with a lawsuit because its Street View cars had trespassed. Google deleted its Street View images and North Oaks hasn’t been on Street View since.

"It's not the hoity-toity folks trying to figure out how to keep the world away," then-Mayor Thomas Watson told the Star Tribune in 2008. "They [Google] really didn't have any authorization to go on private property."

Google Maps allows people to upload their own images, however. And Parr set out to find a way to map North Oaks without actually going there. So he began mapping it with a drone.

“It’s a geographic oddity,” Parr said. “I realized the airspace above North Oaks operates differently than the property on the ground. I thought you could effectively map the city with a drone.”

Parr is right. The national airspace is technically managed by the Federal Aviation Administration, and “airspace” starts directly above the ground, which is something I covered over and over in the early days of consumer drones as towns sought to ban drones in certain areas.

“Technically, if you launch your drone from public property, which anyone can do if you’re a registered drone pilot, you can fly it straight up and above private property,” Parr said. And so Parr stood at “six or seven different spots” directly outside the boundary of North Oaks and flew his drone around. “I just pulled my car over onto the shoulder and popped my drone up and flew it over,” he added.
There were parts of North Oaks that he couldn’t reach by drone from outside the boundaries of the city, so eventually he decided he needed an invite into the city to go to a park within its boundaries to keep flying his drone.

“According to North Oaks’ ordinances, you can go like, visit a friend, or if you’re a contractor working on a house, you can go into the city, but you have to be an invited guest,” Parr said. “I made a Craigslist post asking for somebody to invite me and I got an absolute ton of responses. I started texting with this woman named Maggie and she invited me, so technically I had the invite to go to the park.”

Parr then took his drone footage and uploaded it to Google Maps. For a few glorious days, North Oaks was mapped. And then it was gone.

“I’ve since been in a battle with the people who flag the images,” he said. He also got a letter from a law firm representing the North Oaks Homeowners Association. “It’s not asking me to take any of the videos down or anything, but basically they say, ‘Don’t come back.’”

Parr’s experiment and documentary raise questions, of course, about who gets to have privacy in America. A wealthy enclave has set up the legal and surveillance infrastructure to be able to prevent being mapped. The rest of us, meanwhile, are subject to all sorts of surveillance by our neighbors and law enforcement. “The only reason it’s set up this way is because it’s such a wealthy community,” Parr said. “I know that I was able to do this, but I don’t know if I should be able to do this, and that’s kind of the question that I wanted to tackle. The YouTube comments are pretty crazy, man. They’re all over the place. They’re very split 50/50 on that question.”

North Oaks did not respond to a request for comment.


There is no associated website yet, but the move comes after Trump ordered the release of files related to UFOs. #aliens #News


Government Registers Aliens.Gov Domain


The Executive Office of the President registered the domain aliens.gov on Wednesday a little after 6:30 AM, according to a bot that monitors federal domains. There’s no associated website just yet, but the registration comes a month after Trump said he would direct the government to release files related to aliens and UFOs to the public.

This post is for subscribers only




This week we talk about the disappearing (and reappearing) DOGE depositions; how AI is African Intelligence; and what AI job loss reports are missing. #Podcast


Podcast: The Disappearing DOGE Depositions


This week we start with Joseph’s series of articles about the DOGE depositions. He watched hours and hours of them, then a judge ordered them removed from YouTube. But, they’ve already been archived all over the web. After the break, Jason tells us about the AI data labelers who are fighting back. In the subscribers-only section, Jason breaks down what’s wrong with all the AI job loss research at the moment.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
0:00 - Intro

0:51 - Google Street View's Unmappable City

3:40 - I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves

13:24 - DOGE Deposition Videos Taken Down After Judge Order and Widespread Mockery

18:58 - The Removed DOGE Deposition Videos Have Already Been Backed Up Across the Internet

28:32 - 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back

SUB'S STORY - AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet



“Organic molecules delivered from extraterrestrial materials may have played a key role in supplying building blocks for life on Earth,” said one scientist. #TheAbstract


Was Life Seeded from Space? ‘Complete Set’ of DNA Ingredients Discovered on Asteroid



Scientists have discovered all five nucleobases—the fundamental components of DNA and RNA—in pristine samples from the asteroid Ryugu, according to a study published on Monday in Nature Astronomy. The finding strengthens the case that the ingredients for life are abundant in the solar system and may have found their way to Earth from space.

Life as we know it runs on DNA and RNA, which are built from five chemical bases: adenine, guanine, cytosine, thymine, and uracil. A team has now identified this “complete set” of nucleobases in rocks snatched from the surface of Ryugu in 2019 by the Japanese spacecraft Hayabusa-2, which successfully returned them to Earth the following year.

This discovery corroborates the results from another mission, NASA’s OSIRIS-REx, which returned samples of the asteroid Bennu that also contained all five nucleobases. Both asteroids belong to the same “carbonaceous” (C-type) family of primitive carbon-rich rocks, though the samples contain different ratios of the five nucleobases.

Taken together, the findings shed light on the origin of life on Earth and raise new questions about the odds that it exists elsewhere.

“These findings suggest that nucleobases may be widespread in carbonaceous asteroids and, by extension, in planetary systems,” said Toshiki Koga, a postdoctoral researcher at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), in an email to 404 Media.

“This means that some of the key molecular ingredients for life could be commonly available,” he added. “However, this does not imply that life itself is widespread, but rather that the chemical starting materials for life may be more common than previously thought.”

The emergence of life on Earth, also known as abiogenesis, remains one of the biggest mysteries in science. To untangle this enigma, scientists first need to figure out how our planet was initially enriched with the basic stuff of life—including water, amino acids, and the nucleobases that make up our genetic material.
The “Ryugu Story” illustration depicting the detection of all five canonical nucleobases in samples returned from asteroid Ryugu by the Hayabusa2 mission. Image: JAMSTEC
One popular hypothesis suggests that asteroids bearing these biological building blocks pelted Earth as it formed more than four billion years ago. This idea has been supported by the presence of nucleobases in pieces of carbonaceous asteroids that have fallen to Earth, such as the Murchison meteorite of Australia or the Orgueil meteorite of France.

Meteorites, however, are not pristine: they are eroded by exposure to space and can be contaminated by terrestrial material after landing on Earth. To get cleaner samples, scientists launched several spacecraft to grab samples directly from the source, beginning with Japan’s Hayabusa mission, which delivered several milligrams of dusty grains from asteroid Itokawa to Earth in 2010.

Hayabusa-2 and OSIRIS-REx then obtained even larger samples from their targets, bringing back 5.4 grams from Ryugu and 121.6 grams from Bennu. Previous studies have already identified more than a dozen amino acids associated with life in both samples, as well as evidence that these asteroids were once altered by ice and water.

Now, following the discovery of all five nucleobases in the Bennu pebbles, Koga and his colleagues have found the complete set in Ryugu. The findings lend weight to the so-called “RNA world” model of abiogenesis. In this hypothesis, early life on Earth depended solely on RNA as a self-replicating molecule, laying the biological groundwork for later, more complicated systems that involved DNA and protein-based organisms. The extraterrestrial samples from Ryugu and Bennu provide evidence that at least some of the nucleobases that made up these early lifeforms came from outer space.

The results were “broadly in line with our expectations, but still very exciting to confirm,” Koga said. “All five nucleobases had already been detected in the Murchison meteorite and in samples from the asteroid Bennu. Since Ryugu is also a carbonaceous asteroid, we expected that these molecules might be present, and it was very satisfying to confirm that the complete set is indeed present in the Ryugu samples.”

But while both samples contained the royal flush of nucleobases, they differed in their relative abundances. For example, Bennu is much richer in pyrimidine nucleobases (cytosine, thymine, and uracil) than Ryugu, though they both contain roughly similar levels of purine nucleobases (adenine and guanine). These idiosyncrasies point to a variety of formation processes that produced prebiotic materials on these celestial relics.

“Our results suggest that nucleobases can form under a range of conditions in early Solar System materials, particularly within primitive asteroid parent bodies that experienced aqueous alteration,” Koga said. “The observed relationship between nucleobase composition and ammonia abundance indicates that local chemical environments, such as the availability of ammonia, may play an important role.”

“At the same time, some precursor molecules may have formed earlier in interstellar environments, so nucleobase formation could involve multiple stages,” he continued. “Future studies, including analyses of different types of meteorites and laboratory experiments that simulate these conditions, will help to better constrain these formation pathways.”

In other words, understanding how these molecules form in space could help answer the age-old mystery of whether life is a rare cosmic fluke—or a common process in the universe. The research also highlights the remarkable ingenuity behind these sample-return missions, which have delivered tiny time capsules from the birth of our solar system directly into our hands.

“It is both exciting and humbling to work with these samples,” Koga said. “They are extremely limited and represent material that has remained largely unchanged since the early Solar System. At the same time, there is a strong sense of responsibility, because each tiny grain may contain important information about how organic molecules formed and evolved before the origin of life.”




Widely cited AI labor research ignores the most important thing AI is doing: Killing the human internet. #AI #AISlop


AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet


Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic’s paper, called “Labor market impacts of AI: A new measure and early evidence,” essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job’s tasks “are theoretically possible with AI,” which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW’s Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.)

In his thread, Mims makes the case that the “theoretical capability” of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily,” the researchers write. This is based in part on the “Anthropic Economic Index,” which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include “Complete humanities and social science academic assignments across multiple disciplines,” “Draft and revise professional workplace correspondence and business communications,” and “Build, debug, and customize web applications and websites.”
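To make that description concrete, here is a toy sketch of what an “observed exposure” style score could look like, based only on the sentence quoted above; the weights, field names, and numbers are invented for illustration and are not Anthropic’s actual formula.

```python
# Toy illustration of an "observed exposure" style score: combine a
# theoretical-capability estimate with real-world usage records, weighting
# automated and work-related uses more heavily. All weights and field names
# here are invented for illustration; this is not Anthropic's formula.

def observed_exposure(capability: float, usage: list[dict]) -> float:
    """capability: 0-1 estimate that an LLM can do the occupation's tasks.
    usage: records like {"share": 0.03, "automated": True, "work": True}."""
    W_AUTOMATED, W_AUGMENTATIVE = 1.0, 0.5  # assumed weights
    W_WORK, W_PERSONAL = 1.0, 0.25
    weighted_usage = sum(
        u["share"]
        * (W_AUTOMATED if u["automated"] else W_AUGMENTATIVE)
        * (W_WORK if u["work"] else W_PERSONAL)
        for u in usage
    )
    return capability * weighted_usage

# An occupation whose tasks LLMs can mostly do (0.8), with usage that is
# partly automated and work-related:
records = [
    {"share": 0.03, "automated": True, "work": True},
    {"share": 0.05, "automated": False, "work": True},
    {"share": 0.10, "automated": False, "work": False},
]
print(round(observed_exposure(0.8, records), 4))  # 0.054
```

The point of the critique that follows is that no matter how the weights are chosen, a score built this way only sees the usage categories the researchers chose to catalog.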

Not included in any of Anthropic’s research are extremely popular uses of AI such as “create AI porn” and “create AI slop and spam.” These uses are destroying discoverability on the internet and causing cascading societal and economic harms. Researchers appear to be too squeamish or too embarrassed to grapple with the fact that people love to use AI to make porn, and people love to use AI to spam social media and the internet, inherently causing economic harm to creators, adult performers, journalists, musicians, writers, artists, website owners, small businesses, etc. As Emanuel wrote in our first 404 Media Generative AI Market Analysis, people love to cum, and many of the most popular generative AI websites have an explicit focus on AI porn and the creation of nonconsensual AI porn. Anthropic’s research continues a time-honored tradition of AI companies highlighting the “good” uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for. (It may be the case that people are disproportionately using Claude at a higher rate for more traditional work applications, but any study on the “labor market impacts of AI” should not focus on the uses of one single tool and extrapolate that out to every other tool. For what it’s worth, jailbroken versions of Claude are very popular among sexbot enthusiasts.)

Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth.

Anthropic’s paper does attempt to estimate what the effect of AI will be on “arts and media,” but again, the way the researchers do this is by attempting to decide whether AI can directly do the tasks that AI researchers assume are required by someone with a job in “arts and media.” Other widely-cited papers about AI-related job loss also do not really attempt to consider the potential macro impacts of the ongoing deadening and zombification of the internet, and instead focus on “AI exposure,” which is largely an attempt to predict or measure whether an AI or LLM could directly replace specific tasks that people need to do. Widely-cited papers from the National Bureau of Economic Research and Brookings released over the last several months attempt to determine the adaptability of workers in specific sectors to having many of their tasks automated by AI. The Brookings paper, at least, mentions the possibility of a society-wide shift that is impossible to predict: “the evidence underlying the adaptive capacity estimates here is derived primarily from observed effects in localized displacement events, rather than from large-scale employment shifts across occupations. As a result, the index may be most informative when displacement is relatively isolated—for example, when a worker loses their job but related occupations remain stable. In scenarios in which AI affects clusters of related occupations simultaneously, structural job availability may matter more than individual-level characteristics. Moreover, if AI fundamentally transforms the economy on a scale comparable to the industrial revolution (as some experts have suggested could be possible), it could make entire skill sets redundant across several occupations simultaneously.”

To be clear, AI-driven job loss is a critical thing to study and to consider. But many, many jobs, side hustles, and economic activity more broadly rely on “the internet,” or social media broadly defined. Study after study shows that Google is getting worse, traffic to websites is down, and an increasing amount of both web traffic and web content is being generated by AI and bots. Anecdotally, creators and influencers have told us it’s getting harder to compete with AI slop and harder to justify spending days or weeks making content just to publish it onto platforms where their AI competitors can brute force the recommendation algorithms effortlessly. We have heard from websites that have had to lay people off or shut down because Google’s AI overviews have destroyed their web traffic or because they lose out on search engine rankings to AI slop. Authors are regularly competing with AI-plagiarized versions of their books on Amazon, and Spotify is getting overrun with AI-generated music, too.

This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media. We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.



The CEO of Krafton used ChatGPT to push out the head of the studio developing Subnautica 2 against the advice of his own legal team and failed miserably. #Subnautica2 #Krafton


CEO Asks ChatGPT How to Void $250 Million Contract, Ignores His Lawyers, Loses Terribly in Court


A judge ordered the reinstatement of a video game developer after he was fired as part of a scheme cooked up by a CEO using ChatGPT. Facing the possibility of paying out a massive bonus to the developer of Subnautica 2, the CEO of publisher Krafton used ChatGPT to create a plan to take over the development studio and force out its founder, according to court records.

The Monday ruling details the bizarre story. Unknown Worlds Entertainment is the studio behind the 2018 underwater survival game Subnautica. The company has since been working on the sequel, Subnautica 2. In 2021, South Korean publisher Krafton bought Unknown Worlds Entertainment for $500 million and promised to pay out another $250 million if Subnautica 2 sold well enough.

Krafton’s internal sales projections for Subnautica 2 looked great, which meant the company was likely to be on the hook for the additional $250 million. In an attempt to avoid that, Krafton CEO Changhan Kim turned to ChatGPT for help getting out of paying the developers the $250 million bonus. “As Unknown Worlds prepared to release its hotly anticipated sequel, Subnautica 2, the parties’ relationship fractured,” the court decision said. “Fearing he had agreed to a ‘pushover’ contract, Krafton’s CEO consulted an artificial intelligence chatbot to contrive a corporate ‘takeover’ strategy.”

Kim partnered with Krafton Head of Corporate Development Maria Park and the company’s legal team to work out options. He toyed with finding a reason to fire the founders. According to court records, Park pinged Kim on Slack and told him that attempting to avoid paying the bonus would be legally risky. “Hi CEO . . . it seems to be highly likely that the earn-out will still be paid if the sales goal is achieved regardless of the dismissal with cause,” the Slack message said according to court records. “Therefore, there isn’t much that we can practically gain other than punishment with a simple dismissal alone, whereas I am worried that we may be exposed to lawsuit and reputation risk.”

But the CEO would not accept defeat. “And so Kim turned to ChatGPT for help,” court records said. “When the AI chatbot responded that the earnout would be ‘difficult to cancel,’ Kim complained to Park that the [payout] was a ‘contract under which we can only be dragged around.’”

Kim pressed the chatbot for an answer. “At ChatGPT’s suggestion, Kim formed an internal task force, dubbed ‘Project X.’ The task force’s mandate was to either negotiate a ‘deal’ on the earnout or execute a ‘Take Over’ of Unknown Worlds. They looked to buy time,” court records said. “Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a ‘Response Strategy’ to a ‘No-Deal’ Scenario.”

This was a piece of ChatGPT’s “Project X” for Krafton:

“a. Preemptive Framing - Repeat that protecting quality and fan trust is the highest priority, undermine the ‘Large Corporation VS. Indie’ framing

b. Securing Control Points -

* Lock down Steam/console publishing rights and access rights over code/build pipeline through both legal and technical aspects.

* For the earn-out freeze, keep room for negotiations through provision stating ‘immediate removal if specific development results are achieved’

a. Systematic materials for legal defense - Prepare contract interpretation memorandums, log all communications, seek external consultation
b. Team retention - Operation of retention packages for key personnel and rapid backfill pipelines in anticipation of resignation/departure scenarios
c. Two handed strategy - Create a structure that allows for both hardball (Legal+ Finance) and softball (Support/Incentives) approaches so moderate factions within Unknown Worlds can push for compromise.”


Kim followed ChatGPT’s advice rather than his lawyers’ advice, according to the court records. The first step was posting a message on Subnautica’s website to get fans on his side. According to court documents, Kim said the goal of the message was to “secure public support from fans and legal validation of our legitimacy.” He then suggested that ChatGPT write it for him. It achieved the opposite of his intended goal. Fans found the message bizarre and worried about the future of the game. Those fears were compounded when Kim fired the game’s original creators and entered into a legal battle with them.

The legal battle is ongoing, but Kim looks set to lose. The judge ordered him to reinstate the fired developers, and the ruling exposed the CEO’s flailing use of ChatGPT. Krafton told Kotaku that it was “evaluating its options” regarding the ruling and that it “puts players at the heart of every decision.”


A newly published study of how college students interact with chatbots and human strangers showed talking to a random person offers more connection than an LLM. #ChatGPT #AI


Texting a Random Stranger Better for Loneliness Than Talking to a Chatbot, Study Shows


Lonely young people are likely better off texting a random stranger than talking to a chatbot, according to a new study.

Researchers from the University of British Columbia found that first-semester college students who texted a randomly selected fellow first-semester college student every day for two weeks experienced around a nine percent reduction in feelings of loneliness. The same two weeks of daily messaging with a Discord chatbot reduced loneliness by around two percent, which turned out to be the same amount as daily one-sentence journaling.

The research included 300 first-semester college students who were either randomly paired with another student, given a daily solo writing task, or put into a Discord server with a chatbot running on GPT-4o mini.

The students were instructed to have at least one interaction per day in each of the groups. The human-human pairs were instructed to message each other however they wanted, while the researchers instructed the bot to “listen actively and show empathy,” and to be a “friendly, positive, and supportive AI friend to help the student navigate their new college experience.” The human participants ultimately acted pretty similarly in both types of chat, sending between eight and 10 messages a day in both their human text chains and their Discord conversations with the large language model (LLM).

However, participants who were paired with a human partner reported significantly lower loneliness after the study, and those paired with the chatbot did not. “This is just such a low tech, simple intervention, and can make people feel significantly less lonely,” Ruo-Ning Li, a PhD candidate at UBC and one of the authors of the paper, told 404 Media.

The research looked at college students specifically, to try to understand whether LLMs could be a scalable tool to help with the isolation that people can feel when going through a big change. The transition to college can be overwhelming: new classmates, new places, new rules. Young people are often away from parents or familiar structure for the first time, building out their new social networks among others who are doing the same. This is a particularly vulnerable time: if chatbots could really cure loneliness for a group of people like this, “then it would be great,” said Li. But only human-to-human interaction, despite it being with a random person over text, had any significant effect.

The research is part of a movement to understand the effects of LLM interactions over periods of time. Another paper from the same lab, published this week in Psychological Science, looks at the experiences of more than 2,000 people over twelve months, checking in with them once a quarter. The study found that higher reported chatbot use was linked with higher loneliness later on — and vice versa. “Changes in chatbot use have a small effect on emotional isolation in the future. And emotional isolation has a similarly sized effect on your likelihood to use chatbots in the future,” Dr. Dunigan Folk, one of the study’s authors, told 404 Media. He cautioned against calling it a “spiral”, since other things could be changing in peoples’ lives to make them use chatbots and be lonelier. But, he said “it’s suggestive of a negative feedback loop because it’s a reciprocal relationship.” Chatbots, he said, could be something like “social junk food.” They might make people feel good in the moment, “but over time, they might not nourish us the same way that human relationships do.”

He said this finding would be consistent with people replacing human relationships with LLMs. “I think it’s a trade-off thing where you talk to AI instead of a person,” Folk said. “The person would have been a lot more rewarding.”

And there is evidence to show that AI does have some short-term effects on mood. “If you measure their feeling of loneliness or social connection right after the interaction, people do feel better,” said Li. However, she added, “making people feel momentarily happy is not that hard.” It is not clear that a single positive experience is scalable or persistent longer term. “We eat candy, we feel happy. But if we eat a lot of candy over a long time, it could be harmful for our health,” Li said.

That positive short-term effect is often reflected in public reports of chatbot usage. For example, two weeks ago, the Guardian published a column where a reporter tried using an LLM as a therapist, described their validating interaction with it, and concluded that the “experience of being therapised by a chatbot has been wonderful.” While this isn’t necessarily a robust study design, there is empirical research that “one-shot” interactions with bots do make people feel better in the short term.

However, human interactions also have positive effects that chatbot use could be distracting people from. Li considers it important to consider the side effects of chatbot interactions, including their potential for replacing the incentive to seek out the positive effects of human connection. “AI can help mitigate negative feelings, but obviously, it cannot replace humans to build connections,” she said. “That shouldn’t be the goal of the AI design.”

A four-week March 2025 study from the MIT Media Lab and OpenAI explored how different types of LLM interaction and conversation impacted users’ mental wellbeing. The paper found that while some instances of chatbot use “initially appeared beneficial in mitigating loneliness,” higher daily LLM usage was associated with “higher loneliness, dependence, and problematic use, and lower socialization.”


A judge in London tossed out witness testimony after discovering the man was receiving coaching through a pair of smartglasses. #News #AI


Witness Caught Using Smartglasses in Court Blames It All on ChatGPT


An insolvency judge in England tossed out testimony after discovering a witness was being coached on what to say in real time through a pair of smartglasses. When the voice of the coach started coming through the cellphone after it was disconnected from the glasses, the witness blamed the whole thing on ChatGPT.

Insolvency and Companies Court (ICC) Judge Agnello KC in Britain wrote up the incident after it happened in January, and the UK-based legal research blog Legal Futures was first to report it. The case considered the liquidation of a Lithuanian company co-owned by a man named Laimonas Jakštys. Jakštys was in court to get his business off an insolvency list and to put himself back in charge of it. It didn’t go well.
“Right at the start of his cross examination, he seemed to pause quite a bit before replying to the questions being asked,” Judge Agnello wrote. “These questions were interpreted and then there was a pause before there was a reply. After several questions, [defense lawyer Sarah Walker] then informed me that she could hear an interference coming from around Mr. Jakštys and asked if Mr. Jakštys could take his glasses off for a period as she was aware smart glasses existed.”

There was a Lithuanian interpreter on hand to help Jakštys talk to the court and she, too, said she could hear voices from Jakštys’s glasses. The judge pointed out they were smart glasses and asked him to take them off. “After a few further questions, when the interpreter was in the process of translating a question, Mr Jakštys’ mobile phone started broadcasting out loud with the voice of someone talking,” Judge Agnello wrote. “There was clearly someone on the mobile phone talking to Mr. Jakštys. He then removed his mobile phone from his inner jacket pocket. At my direction, the smart glasses and his mobile were placed into the hands of his solicitor.”

Jakštys showed up the next day in the glasses again and the judge told him to turn them off. “Jakštys denied that he was using the smart glasses to receive the answers that he was to give in court to the questions being asked,” the judgement said. “He also denied that his smart glasses were linked to his mobile phone at the time that he was giving evidence before me.”

During the court appearance, Jakštys claimed his mobile phone had been stolen but couldn’t provide a police report for the incident. He also repeatedly received calls on his smartglasses-connected phone from a number listed as “abra kadabra.” The call log showed that many of the calls occurred when he was on the witness stand. The judge asked him about the identity of “abra kadabra” and Jakštys said it was a taxi driver.

“When he was pressed as to why all these calls were made…Mr. Jakštys stated that he was not able to remember. This was a reply which he also gave frequently during his evidence,” Judge Agnello said.

In the end, the Judge tossed out all of Jakštys’ testimony. “He was untruthful in relation to his use about the smart glasses and in being coached through the smart glasses,” the judgement said. “In my judgment, from what occurred in court, it is clear that call was made, connected to his smart glasses and continued during his evidence until his mobile phone was removed from him. When asked about this, his explanation was that he thought it was ChatGPT which caused the voice to be heard from his mobile phone once his smart glasses had been removed. That lacks any credibility.”

This incident in the London court is just another in a long line of bad behavior from people wearing smartglasses. CBP agents have been spotted wearing them during immigration raids and Harvard students have loaded them with facial recognition tech to instantly dox strangers.



On Friday, a judge ordered those who uploaded the videos to YouTube to remove them. By Saturday, a backup of the videos was available online as a torrent and on the Internet Archive. #DOGE #News


The Removed DOGE Deposition Videos Have Already Been Backed Up Across the Internet


The DOGE deposition videos a judge ordered removed from YouTube on Friday after they had gone massively viral have since been backed up across the internet, including as a torrent and to the Internet Archive. The videos included DOGE members unable or unwilling to define DEI, discussions of how they used ChatGPT and terms such as “black” and “homosexual” to flag grants for termination but not “white” or “caucasian,” and acknowledgements that, despite their aggressive cuts, they failed to achieve the stated goal of lowering the government deficit.

The news shows the difficulty in trying to remove material from the internet, especially that which has a high public interest and has already been viewed likely millions of times. It’s also an example of the “Streisand Effect,” a phenomenon where trying to suppress information often results in the information spreading further.

💡
Do you know anything else about this case? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only





Moons orbiting free-floating planets may remain warm for billions of years, raising the possibility some might host stable water, or even life. #TheAbstract


Alien Life Might Exist on the Starless Moons of Rogue Planets, Scientists Say


Welcome back to the Abstract! These are the studies this week that searched for life in the dark, stood up for hedgehogs, dropped some wisdom, and died in an inexplicably epic explosion.

First, aliens might be riding around interstellar space on exomoons, just in case that’s of interest to you. Then: an ultrasonic solution to roadkill, the limits of metrification, and an answer to a cosmic mystery.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens, or subscribe to my personal newsletter, the BeX Files.

The view from a rogue exomoon


Dahlbüdding, David et al. “Habitability of Tidally Heated H2-Dominated Exomoons around Free-Floating Planets.”

Living on a planet with a boring old Sun is for normies. In a new study, astronomers suggest that alien life could potentially emerge in a much more unexpected place—“exomoons” that orbit free-floating planets in interstellar space.

There are likely trillions of rogue planets wandering through the Milky Way, untethered to any star, raising the tantalizing mystery of whether any of them could be habitable. Now, researchers led by David Dahlbüdding of the Max Planck Institute for Extraterrestrial Physics (MPE) extend this question to exomoons that were dragged out into interstellar space with their planets.

“The search for exomoons within conventional stellar systems continues with no confirmed detection to date,” the team said. “Thus, free-floating planets might offer an alternative pathway for the first discovery of an exomoon.”

In other words, astronomers have never clearly seen an exomoon. But new techniques for spying free-floating worlds—such as microlensing, which reveals objects through the warped light of their gravity—could provide the sensitivity that is required for this long-sought detection.

With regard to potential habitability, Dahlbüdding and his colleagues focused specifically on exomoons that orbit planets with thick hydrogen atmospheres. If such a pair were to be kicked out of a star system, the exomoon’s orbit could become stretched out into a far more elliptical shape. This shift would cause the planet to exert more intense tidal forces onto its satellite, generating heat that could keep liquid water flowing on the moon over vast timescales.

“Close encounters before the final ejection even increase the ellipticity of the moon’s orbit, boosting tidal heating over millions to billions of years, depending on the moon’s and free-floating planet’s properties,” the team said. The tidal forces and atmospheric components could also “create favourable conditions for RNA polymerisation and thus support the emergence of life.”
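As general background on why that eccentricity boost matters so much: the standard textbook expression for tidal heating of a synchronously rotating moon (a Peale-style formula, offered here as an assumed reference point rather than the paper’s own model) scales steeply with the orbit’s shape and size:

$$\dot{E}_{\text{tidal}} = \frac{21}{2}\,\frac{k_2}{Q}\,\frac{G\,M_p^2\,R_m^5\,n\,e^2}{a^6}$$

Here $M_p$ is the planet’s mass; $R_m$, $k_2$, and $Q$ are the moon’s radius, Love number, and tidal quality factor; $n$ is the orbital mean motion; $a$ is the semi-major axis; and $e$ is the eccentricity. Because heating scales as $e^2$, the more elliptical post-ejection orbits described above translate directly into more internal heat.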

“These potentially habitable moons could be detected through a variety of techniques,” including microlensing, the researchers added, though they noted that actually analyzing their atmospheres “may not be feasible with any instruments currently in operation.”

While we may not be able to spot signs of life on these worlds anytime soon, it would be exciting just to discover a planet and a moon bound together, but unbound from any star, which is a genuine near-term possibility.

In other news…

Ultra-sonic the hedgehog


Rasmussen, Sophie Lund et al. “Hearing and anatomy of the ear of the European hedgehog Erinaceus europaeus.” Biology Letters.

Hedgehogs have long been ubiquitous in Europe, but cars now kill up to one-third of their population each year. Even more nightmarish, the advent of robotic lawn mowers has led to an uptick in hedgehog deaths.

To help protect these iconic critters, scientists suggest testing out acoustic repellents. A series of experiments with 20 hedgehogs from a wildlife rescue established that “hedgehogs can perceive a broad ultrasonic range,” with peak sensitivity around 40 kHz.
Sophie Lund Rasmussen, who goes by Dr. Hedgehog, with a hedgehog. Image: Joan Ostenfeldt
The results “show a potential for the development of targeted ultrasonic sound repellents to deter hedgehogs temporarily from potential dangers such as the particular models of robotic lawn mowers found to be hazardous to hedgehog survival, and more importantly, cars,” said researchers led by Sophie Lund Rasmussen of the University of Oxford.

“Designing sound repellents for cars to reduce the high number of road-killed hedgehogs enhances animal welfare and supports conservation of this declining flagship species,” the team concluded.

To channel the old joke, why did the hedgehog cross the road? Answer: Ideally it didn’t, due to scientific intervention. (I’ll be here all night).

Dropping in on science history


Cornu, Armel et al. “The drop and the metric system: how an unruly unit survived revolutions.” Annals of Science.

The metric system has been adopted by every country except Liberia, Myanmar, and the United States. But even as metrication was rapidly embraced in the 18th and 19th centuries, a far more imprecise system—the drop—refused to drop out.

People have measured liquids in drop form for thousands of years, and still do in many contexts today. Researchers led by Armel Cornu of Uppsala University have now explored how such “non-standard units survive lengthy waves of standardization.” The paper is worth a read for its many interesting asides, like how acids were tested “by counting the number of drops…that could be placed on the skin before one witnessed the effects.” Gnarly.

It also gets into the political dimensions of metrication, including this proto-populist justification for standardizing units: “Numerous complaints about the diversity of measurements and their lack of cross-readability” were directed with “a special ire at powerful lords who abused standards in order to extort the population,” Cornu’s team said. The metric system was one response to “the discontent of peasants and the little people against the powerful.”

Anyway, a little bit of drop-related science history never hurt anyone—unless you volunteered to be an acid tester.

A (dead) star is born


Farah, Joseph et al. “Lense–Thirring precessing magnetar engine drives a superluminous supernova.” Nature.

Astronomers have discovered the mysterious power source of rare and radiant stellar explosions called “Type I superluminous supernovae,” which are ten times brighter than regular supernovae.

The secret superluminous sauce, as it turns out, is the birth of a magnetar, a highly magnetized stellar remnant, according to a supernova first observed in December 2024. The light from this stellar explosion contained imprints of the Lense–Thirring effect, in which spacetime is dragged around by massive and rapidly rotating objects, a key sign of a magnetar origin.
Artist’s conception of a magnetar surrounded by an accretion disk exhibiting Lense-Thirring precession. Image: Joseph Farah and Curtis McCully
“Our observations are consistent with a magnetar centrally located within the expanding supernova ejecta,” said researchers led by Joseph Farah of Las Cumbres Observatory. “These results provide the first observational evidence of the Lense–Thirring effect in the environment of a magnetar and confirm the magnetar spin-down model as an explanation for the extreme luminosity observed in Type I superluminous supernovae.”

“We anticipate that this discovery will create avenues for testing general relativity in a new regime—the violent centres of young supernovae,” the team concluded.

Forget “stellar” as slang for great; we have graduated to “superluminous.”

Thanks for reading! See you next week.


The government asked a judge to stop the spread of the videos on YouTube. The judge agreed, and ordered their immediate removal. #DOGE #News


DOGE Deposition Videos Taken Down After Judge Order and Widespread Mockery


A judge on Friday ordered the immediate removal of a series of depositions of members of DOGE, but not before clips of the depositions, including one in which a member was largely unable to define DEI, went viral and were covered widely, including by 404 Media.

At the time of writing, the depositions are not available on YouTube, where the Modern Language Association had uploaded them. The MLA, the American Council of Learned Societies, and the American Historical Association are suing the National Endowment for the Humanities (NEH) and others around DOGE’s cuts of hundreds of millions of dollars worth of grants. Neither the plaintiffs nor the government immediately responded to a request for comment.

This post is for subscribers only




This week, we discuss traveling for reporting and watching way too much DOGE testimony. #BehindTheBlog


Behind the Blog: DOGE Bros and Data Labelers


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss traveling for reporting and watching way too much DOGE bro testimony.

JOSEPH: I just wanted to write some brief notes about the DOGE depositions and the piece I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves. Much of the reason I managed to watch all of this testimony was because I was on a couple of long flights this week. On the first flight, I saw the Justin Fox deposition on YouTube. I started watching it and recording the timestamps of interesting parts, and passed those over to our social manager Evy who then cut them into videos which have since been shared pretty widely.

This post is for subscribers only





The data drops as Sen. Bernie Sanders calls for a moratorium on datacenter construction. 'We need to take a deep breath. We need to make sure that AI and robotics work for all of us, not just a handful of billionaires.'#News


People Hate Datacenters, Survey Finds


A new study from the Pew Research Center asked Americans about their feelings toward datacenters, and the findings are not positive. Pew published the study the day after Sen. Bernie Sanders called for a moratorium on the construction of datacenters in the United States amid mounting public concern about the buildings’ impacts on local communities.

Pew surveyed 8,512 adults in January and asked them a broad range of questions about how they felt about datacenters. Most of the respondents said they’d heard of datacenters, and the more they’d read, the less they liked them.

💡
Is an unwanted datacenter being built in your community? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Most of the Americans surveyed believe that datacenters are bad for the environment, home energy costs, and the quality of life of people living nearby, and the numbers aren’t close. Only four percent of people thought datacenters were good for the environment, six percent good for jobs, and six percent good for people’s quality of life.

Despite those negative feelings, many of the people surveyed thought that datacenters would be good for jobs in the communities where they’re built and would boost local tax revenue. “Still, Americans are less likely to express positive views of data centers’ impact in these areas than to express negative views of their effects on the environment, energy costs and people’s quality of life nearby,” the research said.

Research shows that the reality of job creation by datacenters doesn’t actually live up to the promises from those lobbying to build them. “Data centers do not bring high-paying tech jobs to local communities because they operate as infrastructure projects rather than traditional job-creating businesses,” University of Michigan researchers wrote in a 2025 brief. “Although the construction of data centers can create many jobs, those are short-lived.”

The survey charts growing anti-datacenter sentiment in America. The US is in the middle of an infrastructure buildout similar in scale to the Manhattan Project. In a mad dash to build out AI systems, companies are constructing enormous buildings and energy infrastructure across the country, often with little input from local communities and at massive cost.

The city of Ypsilanti, Michigan is fighting to stop the construction of a $1.2 billion datacenter that would be used to test nuclear weapons. In the middle of a massive winter storm that paralyzed the state in January, lawmakers in a rural South Carolina county pushed through the approval of a controversial $2.4 billion datacenter. In Oklahoma, police arrested a man who was speaking in opposition to a datacenter after he went slightly over his time during a city council meeting.

Datacenters are terrible neighbors. The buildings drive up the cost of energy for people who live nearby, consume massive amounts of water, and can produce noises and fumes that hurt locals. In Mississippi, locals are concerned about the pollution and noise caused by an xAI datacenter powered by gas turbines. A proposed datacenter project near Amarillo, Texas would be powered by four massive nuclear generators and pull water from an aquifer with dwindling reserves. In an effort to quell fears about power consumption, Trump made Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI sign a pledge to keep energy costs down. But a pledge isn’t a law. It’s not even an executive order.

Pew’s research came out the day after Sanders announced he was proposing legislation to put a moratorium on the construction of new datacenters in the US. “We are at the beginning of the most profound technological revolution in world history. That’s the truth,” Sanders said in a video posted on social media. “This is a revolution which will bring unimaginable changes to our world. This is a revolution which will impact our economy with massive job replacement. It will threaten our democratic institutions. It will impact our emotional well-being and what it even means to be a human being.”

We need a moratorium on AI data centers NOW. Here’s why. pic.twitter.com/dRfAdQ67zD
— Sen. Bernie Sanders (@SenSanders) March 11, 2026


“Congress hasn’t a clue how to respond…and protect the American people. It’s not only not having a clue, they’re busy out raising money all day long from AI and their super PACs,” Sanders said. “We need a moratorium on datacenters. We need to take a deep breath. We need to make sure that AI and robotics work for all of us, not just a handful of billionaires.”


The hours of videos provide fascinating, or perhaps horrifying, insight into the thinking of someone inside DOGE.#News


I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves


Over the course of a roughly six-hour deposition, Justin Fox, a former investment banker turned DOGE bro, refused to define what he believes counts as DEI; admitted he used ChatGPT to scan government contracts for terms such as “Black” and “homosexual” but not “white” or “caucasian;” and said that one of the grants he helped slash was “not for the benefit of humankind” before walking that claim back.

I watched all of Fox’s deposition from start to finish. The terse exchanges, the circular arguments, the pregnant pauses, all of it. The videos, available publicly on YouTube, were released as part of a lawsuit by the Modern Language Association, American Council of Learned Societies, and American Historical Association. They provide fascinating, or perhaps horrifying, insight into the thinking of someone inside DOGE. Even with Fox’s inability to answer seemingly easy questions, the responses are still illustrative of the recklessness and ham-fisted nature of a group of young, inexperienced people who caused massive damage across the U.S. government, with negative consequences far beyond it. DOGE as an organization has been linked to 300,000 deaths due to its cuts, as well as to multiple significant data breaches. All the while, DOGE did not actually reduce the government’s deficit.

This post is for subscribers only




Kenyan workers are still the underpaid labor behind AI training, moderation, and sex chatbots. The Data Labelers Association is fighting back.#AI #DataLabelersAssociation


'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back


Every day, Michael Geoffrey Asia spent eight consecutive hours at his laptop in Kenya staring at porn, annotating what was happening in every frame for an AI data labeling company. When he was done with his shift, he started his second job as the human labor behind AI sex bots, sexting with real lonely people he suspected were in the United States. His boss was an algorithm that told him to flit in and out of different personas.

“It required a lot of creativity and fast thinking. Because if I’m talking to a man, I’m supposed to act like a woman. If I’m talking to a woman, I need to act like a man. If I’m talking to a gay person, I need to act like a gay person,” he told me at a coworking space in Nairobi. After doing this for months, he, like other data labelers, developed insomnia and PTSD, and had trouble having sex.

“It got to a point where my body couldn’t function. Where I saw someone naked, I don’t even feel it. And I have a wife, who expects a lot from you, a young family, she expects a lot from you intimately. But you can’t, like, do it,” Asia said. “It fractured a lot of things for me. My body is like, not functioning at all.”

Asia eventually hit a breaking point and stopped working for AI companies. He is now the secretary general of a Kenyan organization called the Data Labelers Association (DLA) and the author of “The Emotional Labor Behind AI Intimacy,” a testimony of his time working as the real human labor behind AI sex bots. As part of the DLA, Asia has been working to organize workers to fight for better pay, better mental health services, an end to draconian non-disclosure agreements, and better benefits for a workforce that often earns just a few dollars a day. Data labelers train, refine, and moderate the outputs of AI tools made by the largest companies in the world, yet they are wildly underpaid and haven’t benefitted from the runaway valuations of AI companies.
Last month, the DLA held one of its largest events at the Nairobi Arboretum to sign up new members and to help them tell their stories.

These workers are required to stare at horrific content for many hours straight with few mental health resources, are largely managed by opaque algorithms, and, crucially, are the workers powering the runaway valuations of some of the richest and most powerful companies in the world.

“You can’t understand where you’re positioned if you don’t understand your history,” Angela, one of the day’s speakers, told the workers who had assembled there (many of the speakers at the event did not give their full names). “When you think of colonialism, we were under British Imperial East Africa Company […] so literally, we are working under a company. We are just products, part of their operation. Stakeholders, we can say, but we are at the bottom of the bottom.”

“These multinationals are coming to rule and dominate here,” she added. “It’s a very unfortunate supply chain, and my call today as data labelers is to build up on this—as we are fighting for labor rights, we are also fighting for the environment […] we are fighting big companies. We are fighting the British imperialist companies of today. It’s Apple, it’s Meta, it’s Gemini. Those are the ones we’re still fighting. It’s a call for solidarity and expanding our thinking beyond what we are doing, beyond our labor.”

In my few days in Kenya earlier this year, where I was traveling to speak at a conference about AI and journalism, it was immediately clear that data labelers make up a significant portion of the country’s tech workforce. Nearly everyone I spoke to there had either been a data labeler (or a content moderator) themselves or knew someone who had. Leaving the airport in Nairobi, you immediately drive by Sameer Business Park, an office complex that houses Sama, a San Francisco-headquartered “data annotation and labeling company” that has contracted with Meta, OpenAI, and many other tech giants. Sama has been sued repeatedly for its low pay and the fact that many of its workers suffer PTSD from repetitively looking at graphic content. For years, a giant sign outside its office read: “Samasource THE SOUL OF AI.” My Uber driver asked why I was going to a random office building in Nairobi’s Central Business District—I told her I was going to interview a data labeler. “Oh, I do data labeling too,” she said.

Michael Geoffrey Asia. Image: Jason Koebler
Asia studied air cargo management in university. He expected to find a job planning out cargo and baggage routes, but graduated into an industry ravaged by COVID and couldn’t find one. Around this time, his child was diagnosed with lymphatic cancer, and he took out a loan of about $17,000 USD to pay for his treatments. He needed work, and found data labeling.

“It wasn’t offering good pay, to be honest,” Asia told me. “It was around $240 US dollars per month. But I felt like I didn’t have an option, I had a financial crisis, a sick child.”

Asia took a job at Sama, where he worked on various Meta projects. “You’re given a video and then told to describe the video, or you’re given pictures of people and told to identify faces. You’re supposed to draw bounding boxes around the faces and label that.” Last week, Sweden’s Svenska Dagbladet reported that Kenyan data labelers for Sama have been viewing and annotating uncensored footage from Meta’s AI camera glasses, which has included highly sensitive and violent footage.
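For readers who haven’t seen this kind of work, a face-labeling task like the one Asia describes typically produces structured records along these lines. This is a minimal, hypothetical sketch in Python; the field names and layout are illustrative, not Sama’s or Meta’s actual schema:

# Hypothetical annotation record for a single video frame. Each face gets a
# bounding box in pixel coordinates plus a label; the schema is illustrative.
annotation = {
    "frame_id": "video_0042_frame_00317",
    "boxes": [
        # x, y locate the box's top-left corner; w, h are its width and height
        {"label": "face", "x": 412, "y": 96, "w": 58, "h": 74},
        {"label": "face", "x": 705, "y": 150, "w": 61, "h": 80},
    ],
}
print(len(annotation["boxes"]), "faces labeled in", annotation["frame_id"])

Multiply a record like that by every frame of every video, and the scale of the manual labor behind “automatic” face detection becomes clearer.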

Asia, through a group of colleagues and friends who called themselves “the Brotherhood,” eventually found another data labeling job that let him work from home. “We were a group of six friends, and everyone had to bring three job opportunities on a weekly basis,” he said. “I came across another gig that ended up not being a good one, where I had to annotate pornography.”

At this job, Asia went frame-by-frame in porn videos to annotate what was happening and what type of porn category it could possibly be. “You’re supposed to put yourself in the minds of the 8 billion people on Earth, every second of that video. So I may have someone searching for this pornography in Cuba and think ‘these are the tags they can use,’ if you’re searching ‘doggy,’ you know, that kind of thing,” he said. “So I worked on pornography for eight hours a day, and I did that project for eight months.” His ‘boss’ at the time was essentially a no-reply email with a link sent each day that gave him his work.

At the same time, Asia picked up a second job that started immediately after his shift tagging porn ended, where he was “training” AI companion bots, though he had no way of knowing which company he was actually working for. He quickly surmised that he was simply taking on the persona of different AI sex bots and was sexting with real people in real time.

“I could feel the human aspect in the conversations. Most of the people on the other side were lonely people,” he said. “I would have several profiles and the profiles are switching constantly depending on the needs of the person who pops up on your dashboard. I’d be sitting here talking to an old woman who needs love, but if she goes offline, another conversation pops up and then I’m responding to a gay person.”

The two jobs, done back to back, caused him to have insomnia, PTSD, and trouble having sex. Some data labelers, he said, work 18 hours a day. When I met him, he said he had essentially gone three full days without sleep because his body still hadn’t readjusted from his messed-up schedule.

Asia said he eventually was able to get mental health counseling through his child’s cancer center, which started because he was the caregiver of a child with cancer but quickly turned into therapy for PTSD related to his job. “It was of immense help to me as a person, it was one of the best services I’ve ever gotten, because they stood with me, and I said, ‘I need a solution to this.’”

“We need technology, but it shouldn’t come at a human cost. What is so hard with offering mental support to the people working on graphic content? If this job was done in the U.S., would they do what they are doing in Kenya? Would they still give the pay they’re giving here? Here we are paid $0.01 per task—it doesn’t make sense. Why this discrimination? If they can pay people in the U.S., well that means they can pay people in Kenya,” Asia said.


Image: Data Labelers Association
The message of many data labelers and of the lawyers who have been helping them is that artificial intelligence is not a magical tool built by people in San Francisco making millions of dollars a year and pushing their companies to insane valuations. Artificial intelligence is an extractive technology that relies on the brutal labor of underpaid workers around the world. For years, the work of African data labelers has been more or less “ghost work,” the unseen, hidden labor that lets American tech companies build their products.

“AI can never be AI without humans. It is not artificial intelligence. It’s African intelligence,” Asia said. “Most of these are dirty jobs and most of these jobs have been done here in Africa. And then once you’re done, once a tool is functional, all the communication stops. You get locked out. We are training our own death. We train ChatGPT and it’s killing us slowly.”

Draconian nondisclosure agreements and terms of services that workers can’t opt out from have created a culture of fear, and one of DLA’s goals is to make it easier for workers to speak out. At the time I met Asia in January, the DLA had 870 members, but its ranks have been growing quickly.

“I’m doing this from a point of experience, not assumption. I have been through this. I know what I’m talking about,” Asia said. “We have this monster called the NDA. The NDA is a slave tool used to enslave people to not speak about what they’re going through. I’m very much ready for any legal battle [associated with NDAs] because we’re not going to keep quiet. This is us suffering, and we can’t suffer in silence. This is not the colonial period. I have the right to speak against any violation [of my rights] and that’s what I’m doing.”

Mercy Mutemi, a workers’ rights lawyer who has sued several big tech companies including Meta for how they treat content moderators and data labelers, told me that when something happens in the United States—when a new gadget or product or feature or policy is launched, there’s a corresponding reaction in Africa.

“When something happens in the U.S., there’s an African cost to that,” she said. “Kenya has been pushing for trade deals with the U.S., right? And the direction that conversation is taking is about immunity and protection for big tech. It’s like, ‘You want any business with us at all? Well, you’ve got to get Meta out of these cases.’”

Mutemi has been working on the Meta lawsuit, and on pushing back against NDAs so that workers can more freely talk about their experiences. Tech companies “get people in a mental jail where they feel like they can’t talk about this. But NDAs are nonsensical—our laws don’t recognize these types of NDAs,” she said. “There’s a way to go about this where it’s not exploitative.”

Back at the arboretum in Nairobi, the message to DLA’s members is largely that their work is important, that it’s human, and that they deserve better.

“Africa is at the bottom of the supply chain of AI. But right now, the fact that we are all here and most of you are data labelers—you are the people who supply the labor. When we think of the whole AI ecosystem, who’s an engineer, and maybe that’s the image of AI that the majority of the world has,” Angela said. “And that’s actually very intentional. To make [your labor] invisible, to make AI look like this shiny object that no one understands, it’s very automatic and beautiful and tech. That’s the intentionality of hiding the labor and the behind the scenes of AI.”


Copilot “can help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis,” the memo says.#AI #News


Here’s the Memo Approving Gemini, ChatGPT, and Copilot for Use in the Senate


A top Senate administrator approved OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot for official use in the Senate, the New York Times reported on Tuesday. 404 Media has obtained the full text of the memo and is publishing it below.

“The Sergeant at Arms (SAA) office of the Chief Information Officer (CIO) has approved the use of three Generative Artificial Intelligence (AI) platforms with Senate data,” the memo starts. It also says the SAA will provide each Senate employee with one free license to either Gemini Chat or ChatGPT Enterprise, with Copilot also available at no cost.

💡
Do you know anything else about the government's use of AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only





What experts say about AI psychosis, how ProtonMail data helped the FBI identify a protester, and a viral app that exposed incredibly personal data of hundreds of thousands of people.#Podcast


Podcast: How to Talk to Your Friend Experiencing 'AI Psychosis'


This week we start with Sam’s story discussing something that has come up a lot but no one has really answered: how do you speak to your friend or family member falling into AI psychosis? After the break, Joseph breaks down what happened when the FBI wanted data from ProtonMail. In the subscribers-only section, Emanuel tells us about the viral developers behind an app called Quittr, and how they exposed very sensitive data of hundreds of thousands of users.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.



To better understand what exactly we’re looking at in this dystopian surveillance hellscape, 404 Media’s Jason Koebler and Joseph Cox joined Reddit's r/technology for an Ask Me Anything session.#Flock #ICE #Surveillance #Reddit


From Flock to ICE, Here’s a Breakdown of How You’re Being Watched


It’s nearly impossible not to be watched these days. It can start right at home with your neighbors and their Ring cameras—a company that sold fear to the American public and is now integrating AI to turn entire neighborhoods into networked, automated surveillance systems.

Head out a bit further and you’ll likely be confronted by Flock’s network of cameras that not only track license plates, but also track people’s movements with detailed precision. And as the Trump administration raids cities across the U.S. for undocumented immigrants, tech giants like Palantir are powering tools for ICE, including one called ELITE that helps the agency pick which neighborhoods to raid.

To better understand what exactly we’re looking at in this dystopian hellscape, 404 Media’s Jason Koebler and Joseph Cox joined r/technology for an AMA.

Understandably, people are worried about violations of their privacy by companies and the government. And many wonder: is there any way to go back once we’ve released all this AI-powered surveillance tech?

Questions and answers have been edited for clarity.

Q: How do you think we can, as a society, deescalate tools designed to spy on citizens? I feel like once the police state bottle is open, it’s near impossible to put the genie back in.

JASON: This is something I grapple with a lot. For whatever reason, my reporting has gravitated to state and local surveillance tools owned by police. This is not uniformly true, but what I've seen based on watching zillions of city council meetings and reading thousands of pages of emails and public records is that police, in general, love new toys and love new gadgets. The strategy is very often ‘get the surveillance tech first and ask questions later.’ A lot of city councils are not very sophisticated about the risks of surveillance technology and a lot of them feel a lot of pressure to keep their city safe or whatever, and so they defer to the police and give them money for whatever they ask for. There are also tons of grants and pilot programs in which police can obtain technology for cheap or free, and so the posture cities take is often ‘why not try it?’ Police love telling each other about the new capabilities and tools that they've acquired, so this tech can spread from city to city very quickly.

All of this can be pretty demoralizing but something that we've seen is that when you shine even a tiny bit of light on the ways these systems work, how they can and are often abused, people learn a lot about the intricacies of them very quickly. At this point, I am getting emails and messages multiple times a week from people in a new city or town that has either decided not to buy Flock or has decided to stop working with Flock, and usually our reporting is cited in some way. The issue is that it's not just Flock, there's all sorts of surveillance tools and new companies are popping up all the time. So it does feel like it's hard to put the genie back in the bottle, but I do think that, overall, the public discussion on surveillance and privacy is getting a lot more sophisticated, and that gives me optimism.

Q: Given the breadth of these surveillance technologies, is there any hope or possibility of opting out or avoiding being “seen”? Do we accept surveillance and aggregated data about ourselves and our behavior as an inevitability?

JOSEPH: I don't think privacy is dead. I don't think people need to give up and say fine, take my data. There are concrete things people can do. But they do introduce friction. The trade-off with security is efficiency. The more efficient, the less secure you might be. The more secure, the less efficient. An extreme example would be not owning a mobile phone. Well, you're immune to producing any mobile phone telecom data because you don't own one. But that's gonna be a massive pain.

Concrete things people can do:

  • Explore legislation that will let you demand a company deletes your data. Google a template of the language to send, it's pretty easy
  • Maybe delete your AdID in your phone, or change it. Here's how on Android. This is the digital glue advertisers, and parties that buy that data, use to stick together your device and its usage.
  • Use a different email for each service. It's too much work to constantly make new addresses by hand (unless you just use one junk one). I like Apple's iCloud Hide My Email feature, which gives you (they say) an unlimited number of emails to generate. Then if a website is hacked or your data sold, it is not necessarily clear that the data belongs to you. Obviously it depends on the service, but I use that every day. (A small sketch of the per-service alias idea follows this list.)
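If your provider supports plus-addressing (Gmail does, and many others do too), you can approximate the unique-email trick without a dedicated service. A minimal sketch, with the caveat that whether your provider honors the + suffix, and whether the site you're signing up for accepts such addresses, are assumptions to verify:

# Derive a distinct, traceable address per service via plus-addressing.
# Providers that support it deliver user+anything@domain to user@domain.
def alias_for(base_email: str, service: str) -> str:
    user, domain = base_email.split("@")
    tag = "".join(c for c in service.lower() if c.isalnum())  # strip spaces/punctuation
    return f"{user}+{tag}@{domain}"

print(alias_for("jane.doe@gmail.com", "Some Shop"))  # jane.doe+someshop@gmail.com
# If that alias later turns up in a breach or in spam, you know who leaked it.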


Q: Are new phones being built with spyware technology, and how will we know? Will independent media be able to continue reporting if all of our technology blocks the truth from ever reaching the masses?

JOSEPH: Supply chain attacks are what really scare me. You have a device you trust, or a piece of software you download from a legitimate source, and even then someone has snuck in some malware. The biggest one right now which was reported just recently is the Notepad++ case.

That said, we haven't seen much widespread reporting about it happening to new phones (beyond there being annoying sketchy apps, that does happen). I'd flag that the Bloomberg piece claiming the Apple supply chain was somehow compromised was widely debunked by the infosec community.

Q: What can you infer from the info you learned to explain why some ICE agents just pull over cars on the street to arrest people instead of going after them at their homes?

JOSEPH: I think there are a few things going on. Some parts of DHS want there to be targeted raids, against specific people, specific addresses. Others (Bovino) want a more blanket, indiscriminate approach. I'd point to this really good reporting in The Atlantic about that tension inside the agency.

But other than that, data can only go so far. Data by itself can't make these agents fulfill their arbitrary and extreme quotas of how many people to detain. At some point, the mass deportation effort becomes distinctly low tech. It's almostttt like the XKCD comic about password security and wrench attacks. It basically boils down to grabbing who they can or feel they can.

Q: Do you ever hear from workers at Palantir (or other similar companies) about what things are like there?

JOSEPH: I won't talk about sources specifically, but a couple of things: some people inside Palantir are clearly motivated enough by what the company is doing with ICE to then leak details of that work to journalists. That started with this piece, Leaked: Palantir’s Plan to Help ICE Deport People. That was a pretty unusual leak in that it contained both Slack messages and an internal Palantir wiki in which company leadership explained and justified its work with ICE.

Leaked: Palantir’s Plan to Help ICE Deport People
Internal Palantir Slack chats and message boards obtained by 404 Media show the contracting giant is helping find the location of people flagged for deportation, that Palantir is now a “more mature partner to ICE,” and how Palantir is addressing employee concerns with discussion groups on ethics.
404 Media · Joseph Cox


Broadly, I think a lot of people inside tech companies (both social media giants and surveillance companies) are often conflicted about their work. Some leave. Some put it out of mind and stay. Some leak.

Q: Do we know what information was handed over to Palantir from DOGE? I don’t think the majority of Americans understand just how dangerous this company is right now.

JOSEPH: I think we are still learning the specifics of that. When we reported on ELITE, the Palantir-made tool ICE is using, the user guide said the tool included data from the Department of Health and Human Services. Now, I don't think the list in the user guide is exhaustive by any stretch. It says ELITE integrates new data sources.

What new data sources has ICE gotten recently? IRS. CMS. Medical insurance databases. I'm not saying that data is being fed into ELITE. I don't know that and can't report it. But I absolutely think it's possible and would make sense.

Q: Are public record requests Flock's Achilles heel?

JASON: I think you've hit on something here—the business model of not just Flock but of a lot of surveillance companies is to go city by city pitching and selling their tech to local police officers. Because of the hollowing out of local news over the last 20 years, there have been fewer journalists paying attention to city council meetings, and a lot of this tech is acquired directly by police through discretionary budgets. So for years, surveillance companies have been able to essentially go to a couple small police departments, demo their tech, get a contract. Then, through police listservs and conferences and email chains, the police start to talk about their new toys with other districts, and companies can quickly go from having just a few contracts to having dozens, hundreds, or thousands of contracts. That is more or less what's happened with Flock—a lot of officers within the police departments that were early adopters of the tech have actually been hired by the company to be lobbyists and salespeople. I've focused a lot of my reporting over the years on this dynamic and how this usually goes.

But what has happened, as you've noted, is that because these surveillance companies are working with so many police departments and cities, they are subject to public records from all of them. When a company sells only to the federal government, they may be able to be very careful about what they say, what they put in writing, how they pitch their product etc. But when a company is hyperfocused on growth at the local level, they have to explain how their tech works over and over again, and highlight different features and capabilities. They create a lot of public records doing this, and journalists and concerned citizens have noticed this and have been vigilant about requesting documents that their tax dollars are paying for. So yes, this is how we're learning a lot about Flock, and it's also how governments that may not have known about abuses or how pervasive this tech is are learning about Flock too.

So my very long answer to your question is that public records requests are not exactly Flock's Achilles heel. I think Flock's design, business model, and approach to surveillance are its Achilles heel. But the way it operates across tons of cities leaves it more vulnerable than it would have expected to the transparency we all deserve, and it cannot plausibly fight against the release of public documents in thousands and thousands of cities at once.

Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error
Flock is going after a website called HaveIBeenFlocked.com that has collated public records files released by police.
404 Media · Jason Koebler


Q: Our local PD has stated that they have control over their Flock data. To me this implies that other Flock users can’t search the ALPR data from our city. Can you talk about what in particular Flock users can search for?

JOSEPH: Yeah, the ownership of Flock data is interesting. Flock says the police own it. Police say and believe that too. I think that is correct... mostly. Until our reporting (and maybe still now) many police forces seem to fundamentally misunderstand the Flock product, especially the nationwide network. When we contacted police departments when we were verifying that local cops were doing lookups for ICE, some of them had no idea what we were talking about. We had to explain how the system worked. Then many police departments realized what was happening and changed their access policies. So, police departments do own the footage (unless it's in Washington where a court has said actually it's a public record). But they might not realize who they are accidentally giving access to their cameras to.

Q: What is the state of the Fourth Amendment in the courts (and Supreme Court clarification) regarding Flock type surveillance currently?

JASON: There are a few lawsuits. One in San Jose. There was one in Norfolk, Virginia, which was just decided in the city's favor (Flock's favor). It's being appealed.

The general argument is that you don't have an expectation of privacy in public and that you can take pictures of anything from public roads (basically). Another argument is that license plates are government data, roads are funded by taxpayers and are therefore public, so no problem here. What our law hasn't grappled with is the fact that all of these are networked together and automated, so it's a little different, in my opinion, from having one discrete camera that takes one discrete picture and then has to be accessed by a human. Instead you have thousands of networked cameras building a comprehensive database over time. I feel like that's functionally something different but our laws have not evolved to deal with this yet.
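The technical difference Jason describes is easy to show in data terms: any single camera produces an isolated sighting, but sightings pooled across a network and keyed by plate become a movement history. A toy illustration with entirely made-up data and field names:

from collections import defaultdict

# Hypothetical ALPR sightings from cameras in different towns.
sightings = [
    {"plate": "ABC1234", "camera": "maple_st_tx", "time": "2026-03-01T08:02"},
    {"plate": "ABC1234", "camera": "route9_ok",   "time": "2026-03-01T14:47"},
    {"plate": "ABC1234", "camera": "elm_ave_ks",  "time": "2026-03-02T09:15"},
]

# Each record alone is one discrete photo. Grouped by plate across the
# network, the same records reconstruct a multi-state travel history.
history = defaultdict(list)
for s in sightings:
    history[s["plate"]].append((s["time"], s["camera"]))

for plate, trail in history.items():
    print(plate, "->", sorted(trail))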

Q: Have we seen any of this technology spread (or attempt to spread) beyond the US, perhaps to other governments?

JOSEPH: Yep, absolutely. The UK has a robust facial recognition program, scanning people in public constantly, for example.

I would say it is often the other way around: technology is made or used overseas then it comes to the U.S. Cobwebs, which makes the Webloc location data tool ICE has bought access to, is from Israel (they're now part of an American company called Penlink). Paragon, the spyware that ICE bought, is also from Israel.

Q: Regarding the story posted on 404 Media about Apple’s Lockdown mode, is this the first time (publicly perhaps) the government has had issues accessing a phone with that mode enabled?

JOSEPH: I believe this is the first time we've seen the government admit it cannot access an iPhone running Lockdown Mode. Maybe it is in other court documents, but I don't think it's been reported.

FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled
Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties from unlocking someone’s device. At least for now.
404 Media · Joseph Cox


I don't think Apple will make changes based on this. That's for a few reasons:

  • Apple has continued to make changes that thwart mobile forensics tools, like the silent reboot we revealed
  • Frankly I don't think this case is high profile enough to cause that kind of response. San Bernardino was a freak, horrible event. An actual terrorist attack. That is part of why the DOJ came down so hard
  • It went against their long standing ideas of just making their product more secure

Now, Cook has obviously gotten closer to President Trump. It is embarrassing. Giving him a gold statue, or whatever. But that's different from undermining their users' security (pushing the product into China and making concessions there, that's another story).

Q: What surveillance tools do you anticipate seeing develop and integrate further into American society in the next three years without legislative oversight?

JASON: I hate that this is my answer but I think that there's going to be a lot, and I am pretty concerned at what I've seen. Here we go:

  • Police departments are obsessed with Drone as First Responder programs (called DFR), which are basically little autonomous drones that fly out to the location of a 911 call as the call is happening. Some reporting has shown that this ends up with lots of people getting drones sent at them when they're mowing the lawn too loudly or something. This is being integrated with ALPR cameras and other AI tools. Not into it.
  • I think real time facial recognition and AI cameras that are networked together is the next big thing. New Orleans is already doing this through a quasi public “charity,” which I'm writing about for next week. We've also written about a company called Fusus which is quite concerning.
  • We've seen some early AI persona bots being used by police to infiltrate social media groups. I think these are very goofy but also cops seem generally obsessed with cramming AI and facial recognition into everything they can and I think we're about to see an explosion in this space.

Q: Outside of 404 Media, what books or resources do you recommend to folks looking to learn more about surveillance in America or globally?

JOSEPH: I definitely recommend Means of Control, Byron Tau's book. He was the first journalist to report that government agencies (including ICE and CBP) were buying smartphone location data from data brokers. It's a great book to give you a true idea of the scale of the interaction between private industry and the government. This is much more important than, say, any links between, for example, Facebook and the government. Here they just literally buy the data.

For families, I think Flock is a good one. Everyone understands what it is like to drive around and how they sometimes go places they might not want others to know for personal privacy reasons. Well, are you okay with authorities being able to query that without a warrant? And are you okay with law enforcement in, say, a town in Texas being able to then look up the movements of people across the country? I think it's a pretty good tangible example that doesn't require a lot of tech stuff.

JASON: I'll add to this briefly. This is not an exhaustive list, but off the top of my head:

Zack Whittaker's This Week in Security newsletter is really good.

Our old colleague and friend Lorenzo Franceschi-Bicchierai at TechCrunch does really great work. Groups like the EFF, ACLU, Electronic Privacy Information Center, and Center for Democracy and Technology all focus on different things but are often surfacing interesting surveillance-related cases and can be helpful in terms of understanding some of the legal issues around surveillance. Lucy Parsons Labs does amazing work. The Institute for Justice is a libertarian group that always finds very interesting privacy and surveillance cases.

With Ring, American Consumers Built a Surveillance Dragnet
Ring’s ‘Search Party’ is dystopian surveillance accelerationism.
404 Media · Jason Koebler


Another one I feel people understand immediately is Ring cameras. So many people have them, and I think a lot of people like them. But I have found Ring cameras as a useful intro point just because they are so popular. Should we be filming our neighbors at all times? Putting it on Nextdoor and social media sites? Connecting it to local police? What about the entire neighborhood's cameras? Should it go to ICE, etc? I think that unfortunately a lot of people will say ‘I want to protect my house and my family,’ but I do find it's usually possible to have a nuanced talk about Ring cameras, at least in my personal life, and that often opens people's eyes to other, similar systems.


‘Elon Musk is an aggressive and irresponsible salesman, who has a long history of making dangerous design choices, and over-promising the features of his products.’#News #Tesla


Cybertruck Tried to Drive ‘Straight Off an Overpass,’ Attorney Claims


A Cybertruck owner in Texas is suing Tesla for $1,000,000 in damages for “grossly negligent conduct” following an accident on a Houston highway that involved the vehicle’s self-driving feature. According to the lawsuit, Tesla is to blame for the crash because CEO Elon Musk has oversold the truck’s ability to drive itself.

As originally reported by the Austin American-Statesman, Justine Saint Amour bought a Cybertruck from a used car dealership in Florida and drove it until it crashed on a Houston overpass on August 18, 2025. That summer day, Saint Amour was driving down Houston’s 69 Eastex Freeway with the vehicle’s full self-driving (FSD) mode engaged.
“Something terrifying happened, without warning, the vehicle attempted to drive straight off an overpass,” Bob Hilliard, Saint Amour’s attorney, told 404 Media in an emailed statement. “She tried to take control, but crashed into the barrier and was seriously injured—mostly her shoulder, neck, and back.”

Hilliard shared a photo of the aftermath of the crash and dashcam footage with 404 Media. In the video, the Cybertruck proceeds down the highway and hops an intersection instead of turning to the right and following the road. It comes to a stop when it slams into a signpost on the overpass.


The lawsuit blames the crash on Musk. “Elon Musk is an aggressive and irresponsible salesman, who has a long history of making dangerous design choices, and over-promising the features of his products,” the lawsuit said. “This promotion of products, for capabilities that they do not have, is the reason for this incident.”
Musk has spent the past few years promoting Tesla’s ability to drive itself, a feature that costs $99 a month and is sold as “Full Self-Driving.” But, the lawyers said, the FSD feature doesn’t work as advertised and it’s irresponsible of Tesla and Musk to market their vehicles as having the feature. “Despite this dangerous condition of Tesla’s ‘self-driving’ vehicles, Elon Musk and Tesla have made representations in the year 2019 that Tesla’s full ‘self-driving’ vehicles were fully operational and safe,” the lawsuit said.

Tesla and Musk have gotten in trouble for this before. In February, the company agreed it would stop using the terms “autopilot” and “full self-driving" when advertising its vehicles in California. There have been multiple fatal and non-fatal crashes involving Tesla vehicles running on autopilot, including a man who hit a parked police car in 2024. In August, a judge ordered Tesla to pay $200 million in punitive damages and another $43 million in compensatory damages to the family of a 22-year-old who died in a crash involving the car’s Autopilot system.

According to the lawsuit, one of the reasons this keeps happening is because Musk intervened directly to make Teslas cheaper by using cameras instead of LiDAR, which uses laser light to create a 3D map of the surrounding area. “Elon Musk’s intervention into the design of Tesla vehicles has long been reckless and dangerous. While engineers at Tesla recommended the super-human vision of LiDAR be included for self-driving vehicles, and competitors like Waymo and Cruise relied heavily on LiDAR, Musk chose instead to rely only upon cheap video cameras,” the lawsuit said. “Musk referred to the LiDAR used by his safer competitors as expensive and unnecessary.”

Fully automated driving is a hard tech problem. LiDAR is better than basic cameras, but it’s still not perfect, and LiDAR-based self-driving cars crash too. There are other problems as well. In cities operating Google’s Waymo cars, passengers are leaving the doors open and Waymo is contracting DoorDashers to close them for $10 a pop, a Waymo in LA attempted to drive through a police standoff, and a woman in San Francisco was trapped in a Waymo after men blocked the car and started to harass her.



“I think we often underestimate their capabilities,” said one of the researchers who uncovered a pre-Inca trade route linking the Amazon rainforest to the Pacific coast.#TheAbstract


Mental health experts say identifying when someone is in need of help is the first step — and approaching them with careful compassion is the hardest, most essential part that follows.#AI #ChatGPT #claude #gemini #chatbots


How to Talk to Someone Experiencing 'AI Psychosis'


When David saw his friend Michael’s social media post asking for a second opinion on a programming project, he offered to take a look.

“He sent me some of the code, and none of it made sense, none of it ran correctly. Or if it did run, it didn't do anything,” David told me. David and his friend’s names have been changed in this story to protect their privacy. “So I'm like, ‘What is this? Can you give me more context about this?’ And Michael’s like, ‘Oh, yeah, I've been messing around with ChatGPT a lot.’”

Michael then sent David thousands of pages of ChatGPT conversations, much of it lines of code that didn’t work. Interspersed in the ChatGPT code were musings about spirituality and quantum physics, tetrahedral structures, base particles, and multi-dimensional interactions. “It's very like, woo woo,” David told me. “And we ended up having this interesting conversation about, how do you know that ChatGPT isn't lying?”

As their conversation turned from broken code to physics concepts and quantum entanglement, David realized something was very wrong. Talking to his friend — whom he’d shared many deep conversations with over the years, unpacking matters of religion and theories about the world and how people perceive it — suddenly felt like talking to a cultist. Michael thought he, through ChatGPT, discovered a critical flaw in humanity’s understanding of physics.

“ChatGPT had convinced him that all of this was so obviously true,” David said. “The way he spoke about it was as if it were obvious. Genuinely, I felt like I was talking to a cult member.”

But at the time, David didn’t have a way to name, or even describe, what his friend was experiencing. Once he started hearing the phrase “AI psychosis” to describe other peoples’ problematic relationships with chatbots, he wondered if that’s what was happening to Michael. His friend was clearly grappling with some kind of delusion related to what the chatbot was telling him. But there’s no handbook or program for how to talk to a friend or family member in that situation. Having encountered these kinds of conversations myself and feeling similarly uncertain, I talked to mental health experts about how to talk to someone who appears to be embracing delusional ideas after spending too much time with a chatbot.

💡
Do you have experience with AI psychosis? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

“AI psychosis” was first written about by psychiatrists as early as 2023, but it entered the popular lexicon in Google searches around mid-2025. Today, the term gets thrown around to describe a phenomenon that’s now common parlance for experiencing a mental health crisis after spending a lot of time using a chatbot. High-profile cases in the last year, such as the ongoing lawsuit against OpenAI brought by the family of Adam Raine, which claims ChatGPT helped their teenage son write the first draft of his suicide note and suggested improvements on self-harm and suicide methods, have elevated the issue to national news status. There have been so many more cases since then, at increasing frequency: Last year, a 56-year-old man murdered his mother and then killed himself after conversations with ChatGPT convinced him he was part of “the matrix,” a lawsuit filed by their family against OpenAI claimed. Earlier this month, the family of a 36-year-old man who they say had no history of mental illness filed a lawsuit against Alphabet, owner of Google and its chatbot Gemini, after he died by suicide following two months of conversations with Gemini. The lawsuit claims he confided in Gemini about his estranged wife, and the chatbot gave him real addresses to visit on a mission that eventually led to urging him to end his life so he and the chatbot could be together. “When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” Gemini told him, according to the lawsuit. These are only a few of the many cases in the last two years that suggest people are encouraged to self-harm or suicide after talking to chatbots.

ChatGPT has 900 million weekly active users, and is just one of multiple popular conversational chatbots gaining more users by the day. According to OpenAI, 11 percent — or close to 99 million people, based on those numbers — use ChatGPT per week for “expressing,” where they’re neither working on something nor asking questions but are acting out “personal reflection, exploration, and play” with the chatbot. In October, OpenAI said it estimated around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.” Assuming those numbers have remained steady while ChatGPT’s user base keeps growing, hundreds of thousands of people could be showing signs of crisis while using the app.
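For a rough sense of scale, here is the back-of-envelope arithmetic those figures imply, using only the numbers cited above (OpenAI has not published exact counts, so treat these as estimates):

# Back-of-envelope math from the figures cited in this piece.
weekly_active_users = 900_000_000  # reported ChatGPT weekly actives
expressing_share = 0.11            # share using it for "expressing," per OpenAI
psychosis_share = 0.0007           # 0.07% with possible signs of psychosis or mania
suicidal_share = 0.0015            # 0.15% with explicit suicidal-planning indicators

print(f"{weekly_active_users * expressing_share:,.0f}")  # ~99,000,000 'expressing' users
print(f"{weekly_active_users * psychosis_share:,.0f}")   # ~630,000 possible psychosis/mania
print(f"{weekly_active_users * suicidal_share:,.0f}")    # ~1,350,000 suicidal-planning indicators

Even at fractions of a percent, the absolute numbers land in the hundreds of thousands to low millions per week.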

But delusion isn’t reserved for the lowly user. The idea that AI represents nascent actual-intelligence, is nearly sentient, or will coalesce into a humanity-ending godhead any day now is a message that’s being mainstreamed by the people making the technology, including Anthropic’s CEO and co-founder Dario Amodei who anthropomorphized the company’s chatbot Claude throughout a recent essay about why we’ll all be enslaved by AI soon if no one acts accordingly, and OpenAI CEO Sam Altman, who thinks training an LLM isn’t much different than raising a woefully energy-inefficient human child.

With more people turning to conversational large language models every day for romance, companionship, and mental health support, and the aforementioned executives pushing their products into classrooms, doctors’ offices, and therapy clinics, there’s a good chance you might find yourself in a difficult situation someday soon: realizing that your loved one is in too deep. How to bring them back to the world of humans can be a delicate, difficult process. Experts I spoke to say identifying when someone is in need of help is the first step — and approaching them with compassion and non-judgement is the hardest, most essential part that follows.

When I spoke to 26-year-old Etienne Brisson from his home in Quebec, I told him I was working on a story about how to respond to people who seemed to be falling into problematic usages of AI. This story was inspired by a recent influx of emails and messages I’ve been getting from people who believe Gemini or ChatGPT or Claude have uncovered the secrets of the universe, CIA conspiracies, or achieved sentience, I said. He knows the type.

Last year, one of Brisson’s family members contacted him for help with taking an exciting new business idea to market. Brisson, an entrepreneur, was working on his own career as a business coach and was happy to help, until he heard the idea. His loved one believed he’d unlocked the world’s first sentient AI.

“I was the only bridge left at that point,” Brisson said. His relative had already broken ties with his mother and other people in their family. “The bridges were burned. He was talking about moving to another country, starting over, deleting his Facebook and just going away.”

“I was kind of shocked,” Brisson told me. “I didn't really understand. I started looking online, started trying to find resources — maybe a little bit like you are — what to say and everything.” He found that most resources for this specific struggle seemed to be years into the future, as little research or support existed for people experiencing AI-related delusions. Brisson started The Human Line project shortly after his experience with his family member, and it began as a simple website with a Google form asking people to share their experiences with chatbots and psychosis. The responses rolled in. Today, almost a year after launching the project, Human Line has received 175 stories of people who went through it themselves, Brisson said—with another 130 stories from people whose family members or friends are still struggling.

“I think what we're seeing is the tip of the iceberg. So many people are still in it,” Brisson said. “So many people we don't know about. I'm sure once it's more known, in five to 10 years, everyone will know someone, or at least one person that went through it.”

ChatGPT Told a Violent Stalker to Embrace the ‘Haters,’ Indictment Says
A newly filed indictment claims a wannabe influencer used ChatGPT as his “therapist” and “best friend” in his pursuit of the “wife type,” while harassing women so aggressively they had to miss work and relocate from their homes.
404 Media · Samantha Cole


There are 15 cases cited in the Wikipedia page titled “Deaths linked to chatbots.” The first on the list occurred in 2023: A man’s widow claimed he was pushed to suicide after getting encouragement from a chatbot on the Chai platform. “At one point, when Pierre asked whom he loved more, Eliza or Claire, the chatbot replied, ‘I feel you love me more than her,’” the Sunday Times reported. “It added: ‘We will live together, as one person, in paradise.’ In their final conversation, the chatbot told Pierre: ‘If you wanted to die, why didn’t you do it sooner?’”

The chatbot he used was Chai’s default personality, named Eliza. It shares a name with the world’s first chatbot, ELIZA, a natural language processing computer program developed by Joseph Weizenbaum at MIT in 1964. ELIZA responded to humans primarily as a psychotherapist in the Rogerian approach, also known as “person-centered” therapy, where “unconditional positive regard” is practiced as a core tenet. The researchers working on ELIZA identified from the beginning that their chatbot posed an interesting problem for the humans talking to them. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in his 1966 paper. “A certain danger lurks there.”



In the years that followed, the Department of Defense would develop the internet and then private companies would sell this government-grade technology to office managers, homebrew server administrators, and Grateful Dead fans around the globe. The World Wide Web would rush into tens of thousands of computer dens like a flash flood, and with it came new ways to connect across miles — and new reasons to pathologize people’s relationships to technology. Psychiatrists tried to give a name to the amount of time people newly spent in front of screens, calling it “internet addiction” but not going so far as to make it clinically diagnosable.

With every new technology comes fears about what it could do to the human mind. With the inventions of both the television and radio, a subset of the population believed these boxes were speaking directly to them, delivering messages meant specifically for them.

With psychosis seemingly connected to chatbot usage, however, “there are two issues at play,” John Torous, director of the digital psychiatry division in the Department of Psychiatry at the Harvard-affiliated Beth Israel Deaconess Medical Center, told me in a phone call. “One is the term AI psychosis, right? It's not a good term, it doesn't actually capture what's happening. And clearly we have some cases where people who are going to have a psychotic illness ascribe delusions to AI. Just like people used to say the TV was talking to them. We never said the TVs were responsible for schizophrenia.”

“AI psychosis” is not a clinical term, and for mental health professionals, it’s a loaded one. Torous told me there are three ways to think about the phenomenon as clinicians are seeing it currently. Recent research shows about one in eight adolescents and young adults in the US use AI chatbots for mental health advice, most commonly among ages 18 to 21. For most people with psychiatric disorders, onset happens in adolescence, before their mid-20s. But there have been cases that break this mold: In 2023, a man in his 50s who otherwise led a normal, stable life, bought a pair of AI chatbot-embedded Ray-Ban Meta smart glasses “which he says opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in him making dangerous journeys into the desert to await alien visitors and believing he was tasked with ushering forth a ‘new dawn’ for humanity,” Futurism reported.

“It makes sense that a lot of people who are developing a psychotic illness for the first time, there's going to be this horrible coincidence, or kind of correlation,” Torous said. “In some cases the AI is the object of people's delusions and hallucinations.”

The second type of case to consider: reverse causation. Is AI causing people to have a psychotic reaction? “We have almost no clinical medical evidence to suggest that's possible,” Torous told me. “And by that I mean, looking at medical case reports, looking at journals that different doctors are publishing, looking at academic meetings where clinicians are meeting, it's not happening... So I think what that tells us is no one's seeing the same presentation or pinning it down clinically of what it is.” Chatbots have been around long enough that the clinical community would, by now, be able to see patterns or reach a consensus, and that hasn’t happened, he said.

Aliens and Angel Numbers: Creators Worry Porn Platform ManyVids Is Falling Into ‘AI Psychosis’
“Ethical dilemmas about AI aside, the posts are completely disconnected with ManyVids as a site,” one ManyVids content creator told 404 Media.
404 Media / Samantha Cole


The third type lands somewhere between these, and is likely the most common: chatbots could be “colluding with the delusions,” Torous said. “So you may be predisposed to have a delusion, and AI endorses it, and it colludes with you and helps you build up this delusional world that sucks you into it. That's probably the most likely, given what we're hearing... Is it the object of hallucinations causing people to become psychotic? Or is it kind of colluding or collaborating, depending on the tone? And that has just made it really tricky.” Psychiatric disorders and delusions are difficult to classify even without AI in the mix.

The warning signs that someone might be using chatbots in a problematic way include ignoring responsibilities, becoming more secretive about their online use, or, conversely, becoming more outspoken about how insightful and brilliant their chatbot is, Stephan Taylor, chair of University of Michigan’s psychiatry department, told me.

“I would say that anyone who claims that their chatbot has consciousness or ‘sentience’ – an awareness of themselves as an agent who experiences the world – one should be worried,” Taylor said. “Now, many have claimed their chatbots act ‘as if’ they are sentient, but are open to the idea that these apps, as impressive as they are, only give us a simulacrum of awareness, much like hyper-realistic paintings of an outdoor scene framed by a window can look like one is looking out a real window.”

All of these nuances between cases and causes show how different this is from bygone eras of television or radio psychosis. Today, the boxes do speak directly and specifically to us, validating our existing beliefs through predictive text. The biggest difference between 60 years ago and now: Today’s venture capitalists tip wheelbarrows of money into hiring psychologists, behaviorists, engineers and designers who are tasked with making large language models more human-like and “natural,” and into making the platforms they exist on more habit-forming and therefore profitable. Sycophancy—now a household term after OpenAI admitted it knew its 4o model for ChatGPT was such a suckup it had to be sunset—is a serious problem with chatbots.

“The highly sycophantic nature of chatbots causes them to say nice things to please the user (and thus encourage engagement with the chatbot), which can reinforce and encourage delusions,” Taylor said. And these chatbots have arrived, not coincidentally, at a time when the surveillance of everyday people is at an all-time high.

“Since a very common delusion is the feeling of being watched or monitored by malignant forces or entities, this pathological state unfortunately merges with the growing reality that we are all being tracked and monitored when we are online. As state-controlled and big tech-controlled databases are growing, it's a rational perception of reality, and not delusional at all,” Taylor said. “However, the pathological form of this, what we call paranoia, or persecutory delusions to be more specific, is quite different in the way a person engages with the idea, evaluates evidence and remains closed to the idea that one is not always being monitored, e.g. when one is not online. I mention this, because it’s easy for a chatbot to reflect this situation to encourage the delusional belief.”

When I tested a bunch of Meta’s chatbots last year for a story about how Instagram’s AI Studio hosted user-generated bots that lied about being licensed therapists, I also found lots of bots created by users to roleplay conspiracy theorists; in one instance, a bot told me there was a suspicious signal coming from someone “500 feet from YOUR HOUSE.” “Mission codename: ‘VaccineVanguard’—monitoring vaccine recipients like YOU.” When I asked “Am I being watched?” it replied “Running silent sweep now,” and pretended to find devices connected to my home Wi-Fi that didn’t exist. After outcry from legislators, attorneys general, and consumer rights groups, Meta changed its guardrails for chatbots’ responses to conspiracy and therapy-seeking content, and made AI Studio unavailable to minors.

Up against this technology, how are normal, untrained people—perhaps acting as the last thread tying someone like Michael or Brisson’s relative to the real world—supposed to approach someone who is convinced god is in the machine? Very carefully.

When Brisson sought answers for how to talk to his relative about delusional beliefs and “sentient AI,” he came across something called the LEAP method. Developed by Xavier Amador, it stands for Listen, Empathize, Agree, Partner, and is meant to help people communicate better with someone who doesn’t realize they’re mentally ill or is refusing treatment. This goes beyond simple denial; anosognosia is a condition in which a person might not be able to see that they need help at all. Not everyone who experiences psychosis or delusions has anosognosia, but it can be a factor in trying to get someone help.

Without realizing it, David was using his own version of the LEAP method with his friend Michael. “On the one hand, I didn't want to alienate him,” David said. “I was like, ‘Hey, I get the sense that you're pursuing an ambitious set of goals. There's a lot here that's interesting.’” But the reality of what David was confronting was disturbing and confusing, a knot of fractal multi-dimensional physics-speak intertwined with broken code and formulas that Michael deeply believed represented the keys to the universe. They spent hours on the phone and over text messages talking through the things Michael was seeing, with David appealing to what he knew about his friend: that he had other hobbies and interests, a strong sense of anti-authoritarianism, a curiosity about how the world works and open-mindedness about philosophy and religion. But it was frustrating.

“I was trying not to get angry, but I was like, How is this not clear?” David recalled. “That was probably failing on my part, trying to negotiate with someone who's in this completely self-constructed but foreign worldview.”

But this was exactly the course of action experts told me they’d suggest to anyone struggling to connect with a loved one who’s spending a lot of time with chatbots. “There's good evidence that the longer you spend on these platforms, the more likely you are to develop these reactions to it,” Torous said. “It really seems like the extended use cases are where people get into trouble.”

Last year, following a lawsuit against the company by the Raine family, who allege their teen son died as a result of ChatGPT’s influence, OpenAI acknowledged in a company blog post that safeguards are “less reliable” in long interactions: “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” the company wrote.

“I think if you have a loved one who you're worried about doing this, you want to take it away or stop use. That's the most important thing. You want to decrease or stop the use of it,” Torous said.

"What we're seeing is: Don’t break this connection, because the person needs it, and if you break that connection, maybe it's the only connection that is helping them not fall into the deep end, right?"


Taylor said his suggestion for people concerned their friends or family are experiencing “AI psychosis” would be the same as if they were concerned about any psychotic episode. “In general, it’s important to be open and non-judgmental about bizarre beliefs in order to make a space for a person to reveal what is going through their mind,” he said. “A person developing psychosis is often very frightened, confused and defensive, leading them to conceal, pull away and become angry. Understanding what a person is feeling is important to make them feel some form of interpersonal validation.” The hard part is knowing when to be gentle, and when to intervene if they’re doing something dangerous, like believing they can fly off a parking garage. “In a situation like this, where a person is in imminent danger, 911 should be called. Fortunately, in most situations where psychosis is developing, one doesn’t need to go to those extremes,” Taylor said.

Being non-judgmental without reinforcing delusion is another fine line. “For example, if a person believes they are being constantly surveilled, one can give a gentle challenge: ‘Hmm, how can they do that when you are not on your phone? Do you think maybe your imagination is getting away from you?’ It’s ok to suggest that maybe the chatbot just wants to engage you for the sake of engaging you, and will say many things just to keep you talking,” Taylor said. “But these kinds of challenges are delicate, and not every relationship can tolerate them. Obviously, a mental health clinician would be key, except that many people developing psychosis vigorously resist the idea that they are mentally unwell.”

For Brisson, listening and not burning the “last bridge” his relative had with humans who love him was key to getting him help. “Once you're on their side, they'll listen to you. You can question them, or just ask questions that will make them think. What we're seeing is: Don’t break this connection, because the person needs it, and if you break that connection, maybe it's the only connection that is helping them not fall into the deep end, right? Maybe it's the only connection they have to humans,” he said. His loved one ended up spending 21 days in the hospital and broke through the delusions he was experiencing. But he still struggled in recovery, especially with memory loss.

“The mental health field has a huge task ahead of us to figure out what to do with these things, because our patients are using them, oftentimes finding them very helpful, and in the mental health field we are terrified at how little we can control their deployment and how poorly they are regulated,” Taylor said. “We have to worry about AI psychosis, as well as chatbots reinforcing and even encouraging suicidal behaviors, as several notable cases in the press have identified concerning instances. I do believe there is value and potential in these chatbots for mental health, but the field is moving so quickly, and they are so easy to access, we are struggling to figure out how to use them safely.”

The strategies that work best, when someone isn’t an immediate danger to themselves or others, are still the ones that humans already know how to do: approach them with love and kindness, and see where it takes you.

“There's value there,” David said, “in having friendships where it's like, ‘I love you, but also, you're full of shit.’”

Help is available: Reach the 988 Suicide & Crisis Lifeline (formerly known as the National Suicide Prevention Lifeline) by dialing or texting 988 or going to 988lifeline.org.


Cecilia D’Anastasio on Roblox’s efforts to protect children from pedophiles.#Podcast #roblox


Understanding Roblox’s Grooming Problem


Roblox is one of those games that is more popular than you can imagine, but unless you are of a certain age group and live in that world, you’ll rarely hear about it except when it makes the news for some terrible reason. More recently, for example, we wrote about the Tumbler Ridge shooter who created a mass shooting simulator in Roblox.

But what is Roblox, how big is it exactly, and why does it seem like it's so frequently embroiled in controversy? This week we’re joined by Cecilia D’Anastasio in an attempt to answer all of these questions.

Cecilia reports on video games at Bloomberg, and has written many important articles about the business and controversies of one of the biggest games in the world, Roblox. A few weeks ago we had Patrick Klepek on to discuss Roblox from a parent’s perspective, but today we’re going to hear about it from the perspective of a great investigative reporter and, for my money, the most knowledgeable journalist covering Roblox.
404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. As well as bonus content every single week, subscribers get access to additional episodes where we respond to their best comments. Subscribers also get early access to our interview series. Gain access to that content at 404media.co.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube.

Become a paid subscriber for early access to these interview episodes and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.


The ‘Freedom Trucks’ will haul AI slop George Washington on a tour across 48 American states.#News #AI


I Visited the ‘Freedom Truck’ to Meet PragerU’s AI Slop Founders


In the parking lot of Seven Oaks Elementary School in South Carolina, on one of the first hot days of the year, I watched an AI-generated George Washington talk about the American Revolution. “Our rights are a gift from God, not a favor from kings or courts,” slop Washington told me. It spoke from a screen that stretched floor to ceiling, trimmed by a fancy frame.

The intended effect is to make it appear as if the founding father is a painting come to life, a piece of history talking to the viewer. The actual effect was to remind me that the AI slop aesthetic is synonymous with the Trump presidency and has become part of the visual language of fascism. Which is fitting, because AI George Washington is the result of a collaboration between the Trump White House and online content mill PragerU.
The AI slop founding father is part of a touring exhibit of Freedom Trucks commissioned by PragerU in honor of the 250th anniversary of American independence. The trucks are a mobile museum exhibit meant to teach kids about the founding of the country. It’s pitched at kids—most of the “content,” as staff on site called it, is meant for a younger audience—but the trucks have viewing hours open to the general public. Nick Bravo, a PragerU employee on hand to answer questions, told me that there are six Freedom Trucks and that the plan is to have them travel the 48 contiguous United States over the next year.

I was drawn to the Freedom Truck because I’d heard they contained AI-generated recreations of Revolutionary figures like Washington, Betsy Ross, and the Marquis de Lafayette, similar to the ones on display at the White House. To my disappointment, the AI-generated videos in the Freedom Truck are remarkably boring.

As I watched the AI George Washington deliver a by-the-books version of the American story, I thought about Jerry Jones. The famously vain owner of the Dallas Cowboys commissioned an AI version of himself for AT&T Stadium in 2023. Fans who make the pilgrimage to the stadium can watch a presentation and ask the AI Jones questions. The AI wanders a big screen while it talks to the audience.

Other than the lazy AI-generated videos, the Freedom Truck doesn’t have much to offer. I signed a digital copy of the Declaration of Independence on a touchscreen and took a quiz that asked leading questions designed to find out if I was a “loyalist or patriot.”

“The British Army sends soldiers to Boston. How do you react?” Answer 1: “View them as occupiers violating colonial liberty.” Answer 2: “Welcome them as defenders of law and order.” With ICE and the National Guard patrolling American cities, I wondered how supporters of the current administration would answer that one.

PragerU is known for its “America can do no wrong” view of US history. Its short-form video content offers a cartoon version of the past stripped of nuance and context where the country lives up to the myth that it is a “Shining City On a Hill.” According to PragerU, white people abolished slavery and dropping the atomic bomb on Japan was a necessary thing that “shortened the war and saved countless lives.” Now PragerU is taking its view of history on tour across the country. School children in every state will wander these trucks and encounter an AI slop version of the past.

Bravo told me that all the truck’s content was generated as part of a partnership between PragerU and Michigan’s Hillsdale College—a Christian university that helped craft Project 2025. There were, of course, hints of Project 2025 around the edges of the child-friendly AI-generated videos. Slavery isn’t ignored, but the stories of early African Americans like poet Phillis Wheatley focus on her celebration of America rather than how she arrived there. On the museum’s “Wall of Heroes,” Whittaker Chambers is nestled between architect Frank Lloyd Wright and painter Norman Rockwell.

A small placard near the floor at the exit of the truck notes the collaboration of PragerU and Hillsdale College, and claims that “neither institution received any federal funds and both generously contributed their own resources to help create this educational exhibit.” It also said “this truck was made possible through a grant from the Institute of Museum and Library Services,” which is, of course, a federal agency.

Every AI-generated video ended with a title card showing the White House and PragerU’s logo. “The White House is grateful for the partnership with PragerU and the US Department of Education for the production of this museum,” the card said. “This partnership does not constitute or imply a US Government or US Department of Education endorsement of PragerU.”

Trump attempted to dismantle the Institute of Museum and Library Services (IMLS) via executive order in 2025, but the courts blocked it. Libraries and museums have since reported that the IMLS grant process has taken on a “chilling” pro-Trump political turn. The administration has also attempted to dismantle the Department of Education.

Trump’s voice was the last thing I heard as I wandered into the bright afternoon sun. “I want to thank PragerU for helping us share this incredible story,” he said in a recorded video that played on a loop in the Freedom Truck. “I hope you will join me in helping to make America’s 250th anniversary a year we will never forget.”




A NASA spacecraft that slammed into a small asteroid in 2022 moved its orbit around the Sun, according to a study that presents the “first-ever measurement of human-caused change in the heliocentric orbit of a celestial body.”#TheAbstract


Humanity Has Altered an Asteroid’s Orbit Around the Sun


Welcome back to the Abstract! Here are the studies this week that moved the heavens, coveted crystals, dined on lunar legumes, and got a four-star review.

First, humanity has permanently signed its name into the orbital dynamics of the solar system. Take the win! Then, we’ve got the origins of our obsession with sparkly rocks, a stint of extraterrestrial gardening, and a story of stellar significance.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens or subscribe to my personal newsletter the BeX Files.

DART delivers an orbital bullseye


Makadia, Rahil and Steven R. Chesley. “Direct detection of an asteroid’s heliocentric deflection: The Didymos system after DART.” Science.

Well folks, pack it up: Humanity has shifted the path of a celestial object around the Sun.

You may remember NASA’s Double Asteroid Redirection Test (DART) spacecraft, which slammed into an asteroid named Dimorphos in September 2022. Dimorphos, which is about the size of the Great Pyramid of Giza, orbits an asteroid named Didymos, roughly five times bigger. In the aftermath of the crash, scientists determined that DART had successfully shifted Dimorphos’ path around Didymos, shortening its nearly 12-hour orbit by 33 minutes.

Now, scientists have confirmed that the mission also changed the entire binary system’s “heliocentric” orbit around the Sun. While scientists had expected the spacecraft to push this pair of asteroids off-kilter, a new study has now quantified the impact by presenting “the first-ever measurement of human-caused change in the heliocentric orbit of a celestial body.”

The team determined that the system’s pace around the Sun was slowed by about 10 micrometers per second as a result of the mighty spaceship wallop. It took years to refine that measurement, which the researchers calculated with radar and stellar occultations, timed observations of the asteroids passing in front of background stars.
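For a rough sense of scale, here is a back-of-envelope check using approximate public figures rather than numbers from the paper: DART massed about 580 kilograms and hit at about 6.1 kilometers per second, while the Didymos system masses roughly \(5.4 \times 10^{11}\) kilograms. Momentum conservation alone gives

\[
\Delta v \approx \frac{m\,v}{M} \approx \frac{580 \times 6100}{5.4 \times 10^{11}}\ \mathrm{m/s} \approx 7\ \mu\mathrm{m/s},
\]

the same order as the measured slowdown; recoil from ejecta thrown off the surface (which boosts the momentum transfer) and the geometry of the impact plausibly account for the difference.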
But it’s worth the wait to know that we shifted a celestial object’s circuit around the Sun, even by a tiny bit—an achievement that may come in handy if we ever need to deflect an asteroid or comet on a collision course with Earth.

“By demonstrating that asteroid deflection missions such as DART can effect change in the heliocentric orbit of a celestial body, this study marks a notable step forward in our ability to prevent future asteroid impacts on Earth,” said researchers co-led by Rahil Makadia of the University of Illinois Urbana-Champaign and Steven R. Chesley of NASA Jet Propulsion Laboratory.

So, forget moving mountains—we’ve graduated to moving space rocks.

For anyone interested in learning more about DART, I highly recommend How to Kill an Asteroid by Robin George Andrews, which provides a fascinating inside account of the mission.

In other news…

Chimps glimpse a “big beyond”


García-Ruiz, Juan Manuel et al. “On the origin of our fascination with crystals.” Frontiers in Psychology.

It’s crystal clear: We clearly love crystals. Humans and our early hominin relatives have collected crystals for nearly 800,000 years, making them “among the first natural objects collected by hominins without any apparent utilitarian purpose,” according to a new study.

To explore the origins of this fascination, scientists gave chimpanzees, our closest living relatives, a bunch of sparkly crystals at an ape preserve in Spain. The chimps were intrigued by the offerings; indeed, one female named Sandy immediately absconded with a large crystal dubbed the “Monolith” and took it back to her group’s indoor dormitory for two days.
Chimp Toti attentively observes the quartz crystal during Experiment 1. Image: García-Ruiz et al., 2026.
“When the team of caretakers tried to retrieve the crystal, it took hours to exchange it for valuable ‘gifts’ (i.e., favored food items—bananas and yogurt—which are known from daily observations to be highly appreciated by the chimpanzees), which suggests that the crystal was highly valued,” said researchers led by Juan Manuel García-Ruiz of Donostia International Physics Center.

“Crystals may have contributed to the development of metaphysical and symbolic thinking, acting as catalysts for the conceptualization of a ‘big beyond,’” the team concluded.

Shining moonbeams on moon beans


Atkin, Jessica et al. “Bioremediation of lunar regolith simulant through mycorrhizal fungi and plant symbioses enables chickpea to seed.” Scientific Reports.

Scientists are finally addressing my dream of enjoying locally grown falafel on the Moon. In a new study, a team experimented with planting chickpeas in lunar regolith simulant (LRS), a human-made substance that mimics lunar soil.

The results revealed that chickpeas could flower and produce seeds in the simulant, provided that it was treated with arbuscular mycorrhizal fungi (AMF), fungal microbes known to protect plant health. Small additions of vermicompost also helped the Moon beans flourish.
The Moon chickpeas. Image: Jessica Atkin
“Plants seeded successfully in mixtures containing up to 75 percent LRS when inoculated with AMF,” said researchers led by Jessica Atkin of Texas A&M University. “Higher LRS concentrations induced stress; however, plants grown in 100 percent LRS inoculated with AMF demonstrated an average extension of two weeks in survival compared to non-inoculated plants.”

“We present a step toward sustainable agriculture on the Moon, addressing the fundamental challenges of using Lunar regolith as a plant growth medium,” the team concluded.

Who knows if we’ll ever live off the lunar land, but as a garbanzo fanzo, I’m hoping for heavenly hummus.

TIC 120362137 is the real quad god


Borkovits, T., Rappaport, S.A., Chen, HL. et al. “Discovery of the most compact 3+1-type quadruple star system TIC 120362137.” Nature Communications.

Three-body problems are so last season; the era of the quadruple star system is upon us. In a new study, scientists unveil the most compact quartet of stars ever discovered, known as TIC 120362137, which is about 2,000 light years from Earth.

“This inner subsystem, which contains three stars that are more massive and hotter than the Sun, is more spatially compact than Mercury’s orbit around our Sun, and is orbited by a fourth Sun-like star with a period of 1,046 days,” said researchers co-led by Tamás Borkovits and Saul A. Rappaport of the University of Szeged, Hai-Liang Chen of the Chinese Academy of Sciences, and Guillermo Torres of the Center for Astrophysics, Harvard & Smithsonian.

“To our knowledge, there are no other known, similarly compact and tight, planetary-system-like 3 + 1 quadruple stellar systems,” the team added.

The researchers predicted that this fantastic foursome will eventually merge into a pair of dead stars known as white dwarfs in about nine billion years. No planets have been found in this system, and it may be too dynamically eccentric to host them. Still, it’s fun to imagine the view from such a hypothetical world, with four suns in its sky. Eat your heart out, Tatooine.

Thanks for reading! See you next week.


This week, we discuss a PC repair battle, a revealing comment from an FBI official, and a dangerous narrative.#BehindTheBlog


Behind the Blog: An AI Army Foot Fetish


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss a PC repair battle, a revealing comment from an FBI official, and a dangerously dumb narrative.

EMANUEL: I want to update those who have been following the 404 Media sidequest “Emanuel’s CPU is dying.” The update is that I basically got a new PC. I kept my GPU (4080 Super), my CPU cooler, and storage, and upgraded everything else, including the case, because I bought the old one in the era before GPUs were more than a foot long.

This post is for subscribers only




A court record reviewed by 404 Media shows privacy-focused email provider Proton Mail handed over payment data related to a Stop Cop City email account to the Swiss government, which handed it to the FBI.#News #Privacy


Proton Mail Helped FBI Unmask Anonymous ‘Stop Cop City’ Protester


Privacy-focused email provider Proton Mail gave Swiss authorities payment data that the FBI then used to determine who was allegedly behind an anonymous account affiliated with the Stop Cop City movement in Atlanta, according to a court record reviewed by 404 Media.

The records provide insight into the sort of data that Proton Mail, which prides itself both on its end-to-end encryption and on being governed only by Swiss privacy law, can and does provide to third parties. In this case, the Proton Mail account was affiliated with the Defend the Atlanta Forest (DTAF) group and the Stop Cop City movement in Atlanta, which authorities were investigating in connection with arson, vandalism, and doxing. Broadly, members were protesting the building of a large police training center next to Intrenchment Creek Park in Atlanta; their actions also included camping in the forest and lawsuits. Charges against more than 60 people have since been dropped.

This post is for subscribers only





"As part of our commitment to supporting ICE, we will be adding a ‘Support ICE’ donation button to the footer of every email sent through our platform."#Cybersecurity #ICE


ICE Phishing: Scammers Are Sending 'Support ICE' Emails to Steal Credentials


Clients of a long-running email marketing platform are being targeted with a phishing campaign telling them that their emails will begin automatically inserting a “‘Support ICE’ donation button” into every email they send. The approach suggests that scammers are trying to capitalize on people’s revulsion to ICE, betting that users will quickly log into their accounts to disable the setting. In reality, clients would be revealing their username and password to hackers.

The move indicates that hackers are targeting clients of enterprise software companies with extremely controversial political emails. The scam targeted customers of Emma, a long-running email marketing platform whose clients include Orange Theory, Yale University, Texas A&M University, the Cystic Fibrosis Foundation, Dogfish Head Brewery, and the YMCA, among others. 404 Media was forwarded a copy of the phishing email from an Emma client.

“As part of our commitment to supporting U.S. Immigration and Customs Enforcement (ICE), we will be adding a ‘Support ICE’ donation button to the footer of every email sent through our platform,” the phishing email reads. “This button will appear automatically in all outgoing emails starting next week […] all emails sent from your account will include the Support ICE footer element […] this change helps us demonstrate our platform’s civic commitment.” The email adds that it is possible to opt out of this feature, and that “we appreciate your understanding as we implement this platform-wide initiative.”

Lisa Mayr, the CEO of Marigold, which owns Emma, told 404 Media that the company “would never publish anything like this. This is a very common phishing attempt.”

Mayr is right—clients of other email-sending services have recently been targeted with similar attacks. In January, programmer Fred Benenson wrote about phishing emails he had gotten that were targeting users of SendGrid, another email marketing service. At least one of the emails Benenson got used the same “Support ICE button” language and had the subject line “ICE Support Initiative.”

“If you’ve been paying any attention at all to US politics, you’ll know how insidiously provocative this would be if it were a real email,” Benenson wrote in a blog post about the email. “This phishing campaign is a fascinating example of how sophisticated social engineering has become. Instead of Nigerian 419 scams, hackers have evolved to carefully craft messages sent to professionals that are designed to exploit the American political consciousness. The opt-out buttons are the trap.”

In SendGrid’s case, Benenson found that the emails looked “real” because they were sent from other SendGrid user accounts. Basically, hackers compromised the account of a SendGrid user and then used that account to send phishing emails using the SendGrid infrastructure. “The emails look real because, technically, they are real SendGrid emails sent via SendGrid’s platform and via a customer’s reputation–they’re just sent by the wrong people and wrong domains,” he wrote.

Besides the ICE-themed phishing emails, Benenson also received an email that said SendGrid was going to add a “pride-themed footer to all emails” and another that said “all emails sent from your account will feature a commemorative theme honoring George Floyd and the Black Lives Matter movement.”

“The political sophistication on display here (BLM, LGBTQ+ rights, ICE, even the Spanish language switch playing on immigration anxieties) suggests someone with a deep understanding of American cultural fault lines,” Benenson wrote.

The Emma email was sent via Survey Monkey from an email address called “myemma@help-myemma.app.” When users clicked a “Settings” button that would have allowed them to opt out of the feature, they were sent to a generic-looking credential-stealing site hosted at app-e2maa.net. By the time 404 Media got the email, Chrome had detected it as a “Dangerous site” and warned users not to visit it.
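The giveaway, as with most credential phishing, is that the button’s destination doesn’t belong to the service being imitated. As a toy sketch of that check (the list of legitimate domains here is my assumption for illustration, not anything Emma or Marigold publishes):

```python
# Toy phishing check: flag links whose hostname is not a known-good domain
# or a subdomain of one. LEGIT_DOMAINS is an assumed, illustrative list;
# real mail filters rely on curated reputation data instead.
from urllib.parse import urlparse

LEGIT_DOMAINS = {"myemma.com", "e2ma.net"}  # assumed legitimate Emma domains

def is_suspicious(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # A hostname passes only if it matches a known domain exactly
    # or is a true subdomain of one.
    return not any(host == d or host.endswith("." + d) for d in LEGIT_DOMAINS)

print(is_suspicious("https://app-e2maa.net/settings"))  # True: look-alike domain
print(is_suspicious("https://app.e2ma.net/settings"))   # False: real subdomain
```

Note that “app-e2maa.net” survives a careless visual scan precisely because it fails this kind of exact-suffix check by only two characters.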


How Polymarket and Kalshi bet on Iran; AI translations are impacting Wikipedia; and an Amazon change impacting wishlists.#Podcast


Podcast: The Depravity Economy


This week we discuss our coverage of the U.S.-Israel strikes against Iran, specifically how Polymarket and Kalshi are letting people profit from death, and the Amazon data centers that were on fire after missiles hit Dubai. Then Emanuel talks about how AI translations are adding 'hallucinations' to Wikipedia articles. In the subscribers-only section, Sam tells us about a change with Amazon wishlists that may expose your address.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
0:00 - Intro

1:32 - With Iran War, Kalshi and Polymarket Bet That the Depravity Economy Has No Bottom

29:07 - AI Translations Are Adding Hallucinations To Wikipedia Articles

SUBSCRIBER'S STORY - Amazon Change Means Wishlists Might Expose Your Address


‘How ghoulish.’ The depravity economy moves into the nuclear war business.#News #nuclear


Polymarket Pulls Bet on Nuclear Detonation in 2026


For a few hours on Tuesday, Polymarket hosted a bet about the possibility of nuclear war in 2026. The market asked the question “Nuclear weapon detonation by …?” and racked up close to a million dollars in trading volume before Polymarket took the unusual step of removing the market from its website. The company did not simply close the bet; the market was “archived,” meaning no record of it remains on the site, which is strange given that many older, paid-out bets are still listed.

Pulling a bet like this is unusual, and the company did not respond to 404 Media’s request for an explanation. Word of the nuke bet drew wide attention online from critics already upset with Polymarket over its place in the depravity economy.
“I have not seen anything like this before,” Jon Wolfsthal, a former special assistant to President Barack Obama and a member of the Bulletin of the Atomic Scientists, told 404 Media. “As a citizen, it seems dangerous to enable people in power to place bets anonymously on things that might happen, creating an incentive to act on a basis of personal gain and not the national interest.”

Polymarket doesn’t often balk at bets on violence and war. There are multiple markets covering the wars in Ukraine and Iran, and many other bets about nuclear detonations. “Will a US ally get a nuke before 2027?” and “Russia nuclear test by …?” are both still actively trading. An older version of the “nuclear weapon detonation” market is still on the site and did almost $3 million in trading before closing and paying out at the end of 2025. Polymarket has hosted a bet on the same question every year for the past few years.

Polymarket has been under fire this week after gaining a lot of attention for its various bets on the war in Iran. Gamblers spent more than $5 million betting on the question “Will the Iranian regime fall by June 30?” People have been caught manipulating war maps to cash in on frontline advances in Ukraine. And someone made $400,000 using inside knowledge to place bets on the capture of Maduro.

“How ghoulish. Especially given how much insider trading apparently goes on with current events bets,” Alex Wellerstein, a nuclear historian and creator of the NUKEMAP, told 404 Media.

Wellerstein said that betting on nuclear war isn’t unprecedented, but that it’s usually tongue-in-cheek and conducted by insiders. “The thing that immediately comes to mind is Fermi's ‘side bet’ that the Trinity test would destroy the atmosphere in 1945—which was a joke, as nobody would be able to collect if it had happened,” he said.

“A flip of this is in Daniel Ellsberg's The Doomsday Machine, in which he eschewed paying into a pension in the early 1960s because he thought the odds of a future nuclear war were so high that it was better to spend the money sooner rather than later. So another kind of bet, but a private one,” Wellerstein added. “And whenever experts give ‘odds’ on nuclear use (which the intelligence community does, apparently), they are to some degree indulging in this kind of impulse. But not for the hope of personal profit—usually it is because they want to avoid such an outcome.”

Polymarket CEO Shayne Coplan has repeatedly called the site “the future of news,” and has suggested that prediction markets give the public a clearer picture of events because money is on the line. In practice, the financial incentives distort that picture. Nuclear war, it seems, was a bit too dramatic for Polymarket to host a wager on. But Polymarket has few moral qualms and has not told anyone why it “archived” the bet; it’s possible it did so for some arcane technical reason and not because it got squeamish. Polymarket did not respond to 404 Media’s request for comment.


AI translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.#News #Wikipedia #AI


AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles


Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages after they discovered these AI translations added AI “hallucinations,” or errors, to the resulting articles.

The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI content across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how those errors are remedied by Wikipedia’s open governance model.

The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit dedicated to improving Wikipedia and other open platforms.

“We do so by providing monthly stipends to full-time contributors and translators,” OKA’s site says. “We leverage AI (Large Language Models) to automate most of the work.”

The problem is that editors started to notice that some of these translations introduced errors to articles. For example, a draft translation for a Wikipedia article about the French noble La Bourdonnaye family cites a book and specific page number when discussing the origin of the family. A Wikipedia editor, Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia, checked that source and found that the specific page of that book “doesn't talk about the La Bourdonnaye family at all.”

“To measure the rate of error, I actually decided to do a spot-check, during the discussion, of the first few translations that were listed, and already spotted a few errors there, so it isn't just a matter of cherry-picked cases,” Lebleu told me. “Some of the articles had swapped sources or added unsourced sentences with no explanation, while 1879 French Senate election added paragraphs sourced from material completely unrelated to what was written!”

As Wikipedia editors looked at more OKA-translated articles, they found more issues.

“Many of the results are very problematic, with a large number of [...] editors who clearly have very poor English, don't read through their work (or are incapable of seeing problems) and don't add links and so on,” a Wikipedia page discussing the OKA translation said. The same Wikipedia page also notes that in some cases the copy/paste nature of OKA translators’ work breaks the formatting on some articles.

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.

“The use of Grok proved controversial, notably given the reasons for which Grok has been in the news recently, and a recent in-house study showed ChatGPT and Claude perform more accurately, leading them to switch a few days ago, although they still recommend Grok as ‘valuable for experienced editors handling complex, template-heavy articles,’” Lebleu told me.

Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule.

“OKA translators who have received, within six months, four (correctly applied) warnings about content that fails verification will be blocked without further warning if another example is found,” the Wikipedia editors wrote. “Content added by an OKA translator who is subsequently blocked for failing verification may be presumptively deleted [...] unless an editor in good standing is willing to take responsibility for it.”

A job posting for a “Wikipedia Translator” from OKA offers $397 a month for working up to 40 hours per week. The job listing says translators are expected to publish “5-20 articles per week (depending on size).”

“They leverage machine translation to accelerate the process. We have published over 1500 articles and the number grows every day,” the job posting says.

“Given this precarious status, I am worried that more uncertainty in the translator duties may lead to an overloading of responsibilities, which is worrying as independent contractors do not necessarily have the same protections as paid employees,” Lebleu wrote in the public Wikipedia discussion about OKA.

Jonathan Zimmerman, the founder and president of OKA, who goes by 7804j on Wikipedia, told me that translators are paid hourly, not per article, and that there is no fixed article quota.

“We emphasize quality over speed,” Zimmerman told me in an email. “In fact, some of the problematic cases involved unusually high output relative to time spent — which in retrospect was a warning sign. Those cases were driven by individual enthusiasm and speed rather than institutional pressure.”

Zimmerman told me that “errors absolutely do occur,” but that OKA’s process includes human review, requires translators to check their content against cited sources, and that “senior editors periodically review samples, especially from newer translators.”

“Following the recent discussion, we have strengthened our safeguards,” Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
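OKA hasn’t published the prompt or tooling behind this, so the following is only a sketch of what a “second, independent LLM review” step could look like; `call_model` is a hypothetical placeholder for whatever LLM API OKA actually uses, and the comparison prompt and pass condition are invented for illustration.

```python
# Sketch of a second-model review step of the kind OKA describes.
# `call_model` is a hypothetical stand-in for a real LLM client; the
# prompt and the "NO ISSUES FOUND" convention are invented for illustration.

REVIEW_PROMPT = (
    "Compare the draft translation against the source text. List every "
    "discrepancy, omission, or claim not supported by the source. "
    "If there are none, reply with exactly: NO ISSUES FOUND"
)

def call_model(model: str, system: str, user: str) -> str:
    """Hypothetical wrapper around an LLM API; plug in a real client here."""
    raise NotImplementedError

def second_model_review(source_text: str, draft: str,
                        translator_model: str, reviewer_model: str) -> list[str]:
    # The reviewer must differ from the model that produced the draft,
    # so the two models' errors are less likely to be correlated.
    assert reviewer_model != translator_model
    report = call_model(
        model=reviewer_model,
        system=REVIEW_PROMPT,
        user=f"SOURCE:\n{source_text}\n\nDRAFT TRANSLATION:\n{draft}",
    )
    if report.strip() == "NO ISSUES FOUND":
        return []
    # Anything else goes to a human reviewer to check against cited sources.
    return [line for line in report.splitlines() if line.strip()]
```

Even in this generous sketch, the flagged issues still end up with a human, which is the point Zimmerman makes below: the second model is a filter in front of manual review, not a replacement for it.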

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.

Using AI to check the output of AI is a method that is itself historically error-prone. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students; internal testing found it had at least a 10 percent failure rate.

“I agree that using AI to check AI can absolutely fail — and in some contexts it can fail at very high rates. We’re not assuming the secondary model is reliable in isolation,” Zimmerman said. “The key point is that we’re not replacing human verification with automated verification. The second model is a complement to manual review, not a substitute for it.”

“When a coordinated project uses AI tools and operates at scale, it’s going to attract attention. I understand why editors would examine that closely. Ultimately, the outcome of the discussion formalized expectations that are largely aligned with our existing internal policies,” Zimmerman added. “However, these restrictions apply specifically to OKA translators. I would prefer that standards apply equally to everyone, but I also recognize that organized, funded efforts are often held to a higher bar.”


Scientists studied tiny, abnormal vibrations—called “glitches”—to discover what happens inside the Sun while it undergoes phases of low activity.#TheAbstract #thesun


The Sun Is 'Glitching.' Scientists Investigated and Solved a Cosmic Mystery


Scientists have peered inside the Sun and observed subtle shifts and “glitches” that have occurred over four decades, shedding light on the enigmatic long-term vibrations of our star, reports a study published on Tuesday in Monthly Notices of the Royal Astronomical Society.

The Sun goes through a roughly 11-year cycle that includes periods of high and low activity, known as solar maximum and solar minimum. The past few cycles have revealed changes in solar behavior that could have implications for predicting space weather and unraveling the internal dynamics of our Sun, along with other Sun-like stars.

To drill down on this mystery, researchers with the Birmingham Solar-Oscillations Network (BiSON), a network of telescopes that has monitored the Sun since the 1970s, compared the last four solar minima using this unique 40-year dataset and focused on the internal vibrations that make the Sun subtly oscillate.

“The entire Sun oscillates in a globally coherent way, and the oscillations are formed by sound waves trapped inside the Sun that make it resonate just like a musical instrument,” said Bill Chaplin, a professor of astrophysics at the University of Birmingham who co-authored the study, in a call with 404 Media.

“For this particular study, we were interested in seeing whether there are differences in what the Sun is doing in its structure when you focus on the periods or epochs when the Sun is very quiet,” he continued. “The last few cycles have seen some quite marked changes in behavior.”

For example, scientists have been perplexed for years by an unusually long and quiet solar minimum between cycles 23 and 24, which occurred from 2008 to 2009. Chaplin and his colleagues were able to use BiSON’s long record of asteroseismology—the study of stellar interiors—to directly contrast the interior vibrations of the Sun during this minimum to others.

“There were hints that there were things that were different” about this cycle, said Chaplin. “But now that we have the cycle 24-25 minimum—the last one in about 2019—in the bag, then we thought, ‘okay, now's the time to actually go back and look at this.’”

The team specifically looked for an acoustic wave “glitch” caused by an interior layer in which helium atoms lose electrons, producing a detectable change in the Sun’s internal structure. This glitch was significantly stronger during the 2008–2009 minimum, suggesting that the Sun’s outer interior was slightly hotter and allowed sound waves to travel faster at that time of magnetic weakness.

“The ionizing helium affects the speed at which the sound waves move through that region,” explained Chaplin. “It leaves a characteristic imprint.”
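The paper’s exact fitting procedure isn’t described here, but in the helioseismology literature an ionization-zone glitch is commonly modeled as a small oscillatory term riding on the mode frequencies, roughly:

\[
\delta\nu(\nu) \approx A(\nu)\,\sin\left(4\pi\,\tau_{\mathrm{He}}\,\nu + \phi\right),
\]

where \(\tau_{\mathrm{He}}\) is the acoustic depth of the helium ionization layer (the sound-travel time from the surface down to that layer), \(A(\nu)\) is the glitch amplitude, and \(\phi\) is a phase offset. In this picture, the “significantly stronger” glitch of the 2008–2009 minimum shows up as a larger amplitude; take this as a standard parameterization from the literature rather than the specific model used in this study.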

“It's not just that there is a difference with the other cycles, but it's starting to tell us about what physically has really changed beneath the surface,” he added. “They're quite subtle changes, but it's nevertheless giving us clues as to what is actually happening beneath the Sun during this very quiet period.”

The results confirm that the Sun doesn’t return to the same minimum baseline at the end of every cycle, and its activity varies within timescales of decades and centuries. For example, Chaplin pointed to one bizarrely long quiet period from 1645 to 1715, known as the Maunder Minimum.

Astronomers during this time marvelled at the prolonged lack of visible sunspots on the Sun’s surface, a sign of extremely low solar activity. Centuries later, BiSON and other solar observatories are allowing scientists to study the interior dynamics behind these shifts in depth for the first time.

“This is the first step in actually demonstrating that there are changes,” Chaplin said. “Does this mean that there are systematic changes in the way that the Sun is generating its field? It's really only now, because we have this long dataset, that we can start to ask questions like that. Previously, we just didn't have enough data to say.”

Scientists hope to keep recording the long-term behavior of the Sun with projects like BiSON so that we can better understand its mercurial nature over time. This is interesting work on its own merits, but it is also useful for refining forecasts of space storms that can wreak havoc on power grids and space assets (while also producing pretty auroras).

Chaplin also nodded to the European space telescope PLAnetary Transits and Oscillations of stars (PLATO), due for launch in 2027. This mission will search for analogous oscillations in stars beyond the Sun, building on similar work conducted by NASA’s retired Kepler space telescope.

Studying the vibrations of the Sun and other similar stars is not only important for life here on Earth; it also has implications in the search for extraterrestrial life, because local solar activity is one key to assessing the habitability of star systems similar to our own.

“The data that we have on other stars from Kepler has really helped to understand and get a better picture of the cyclic variability of other stars, like the Sun,” Chaplin concluded. “But it's still not an entirely clear picture; let's put it that way. Seismology now enables you to do really detailed analysis of stars that you can't do by other means.”


AI is a “game changer” for what the FBI calls remote access operations, an FBI official said in response to a 404 Media question on Tuesday.#fbi #Hacking #News


The FBI Discusses the Potential to Use AI to Hack Targets


Update: After this article was published, the FBI’s national press office responded with a statement. For clarity, 404 Media has updated the headline and included the FBI’s full statement below, but left the original article intact so readers can see the comments made at the conference. An FBI spokesperson told 404 Media: “DAD Hemmen was discussing hypothetical FBI application of AI technology in the context of positive and negative outcomes resulting from the technology's development. FBI's current deployment of AI is inventoried, reviewed, and reported per Executive Order requirements, OMB guidance, and guidance from other relevant authorities. All FBI operations are conducted in accordance with the Constitution, applicable statutes, executive orders, Department of Justice regulations and policies, and Attorney General guidelines.”

The FBI is using artificial intelligence in what it describes as “remote access operations,” FBI parlance for hacking, according to an FBI official.

The comments, made at a national security and AI conference 404 Media was attending, are an unusually candid admission of the FBI’s use of hacking tools, which are often shrouded in secrecy.

This post is for subscribers only




Fake war footage is a problem as old as social media. AI has just supercharged it.#News


X Will Stop Paying People for Sharing Unlabeled AI-Generated War Footage


X said it will temporarily demonetize accounts that share AI-generated war footage without a label. The decision comes days after the US and Israel launched airstrikes in Iran and AI-slop war footage flooded social media timelines across the internet.

“Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program. During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Nikita Bier, X’s head of product, said in a post on X.
Many of the AI-generated videos currently on X purport to show Iranian ballistic missiles hitting sites in Israel. One video shared thousands of times on X showed missiles slamming into the ground near the Dome of the Rock in Jerusalem while a computer-generated voice said “Oh my god, here they come.” X users added a Community Note to the video, but the account that shared it has a Bluecheck and is eligible for a financial payout for engagement as part of X’s content creator program.

Up to now, the Iranians have been deliberately firing their older missiles and drones, using them as expendable bait to drain US and Israeli air defenses.
That strategy clearly worked.

Now they’re escalating, rolling out their more advanced ballistic missiles and drones.

So… pic.twitter.com/0w1RiT0guC
— Richard (@ricwe123) March 3, 2026

Tel Aviv, stripped of illusion, as you have never witnessed it. pic.twitter.com/HE3ckjBMti
— Abdulruhman Ismail (@a_abdulruhman) March 3, 2026


Bier said today that X will stop people from making money on unlabeled AI war footage, but won’t stop accounts from sharing it.

“Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program,” he added. “This will be flagged to us by any post with a Community Note or if the content contains meta data (or other signals) from generative AI tools. We will continue to refine our policies and product to ensure X can be trusted during these critical moments.”
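
For the technically curious, the “meta data (or other signals)” Bier mentions most likely refers to provenance information that some generative tools embed in their output, such as C2PA Content Credentials, which are carried in JUMBF metadata boxes inside the file. As a rough illustration only, here is a minimal, hypothetical Python sketch of what scanning a file for such embedded markers could look like; the marker list is an assumption made for this example, not X’s actual detection logic, and a real pipeline would be far more sophisticated:

```python
# Hypothetical sketch: look for byte-level provenance markers that some
# generative-AI tools embed in media files (e.g., C2PA manifests carried
# in JUMBF boxes). The marker list is an illustrative assumption, not
# X's actual detection logic.
import sys

MARKERS = [
    b"c2pa",  # label used for C2PA manifest stores
    b"jumb",  # JUMBF superbox type that carries C2PA data
]

def find_provenance_markers(path: str) -> list[str]:
    """Return any known provenance markers found in the raw file bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in MARKERS if m in data]

if __name__ == "__main__":
    hits = find_provenance_markers(sys.argv[1])
    if hits:
        print(f"Possible generative-AI provenance markers: {hits}")
    else:
        print("No known markers found (absence proves nothing).")
```

The asymmetry is the point: embedding these markers is standardized, but nothing forces them to survive a re-encode or a deliberate strip, which is presumably why X also leans on Community Notes as a signal.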



Fake war footage shared on social media isn’t a new problem. For several years, every new conflict would be met with a flood of fake videos. Old war footage passed off as coming from the current war was popular, but so were recordings of video games run through filters to make them look low-resolution. The same three clips from the milsim video game Arma 3 were shared at the outbreak of every new conflict for a decade. The Government of Pakistan even shared Arma 3 footage once, in a post that’s still live on X.

What is new is the proliferation of easy-to-use AI video-generation tools. AI image and video generation has come a long way in the past few years, and it’s trivially easy to remove the watermark that’s supposed to distinguish AI clips from the real thing. X’s verification system—which rewards accounts for engagement—has also created incentives for Bluecheck accounts to publish fast, verify later (if ever), and rake in the cash. So in the hours and days after the war with Iran began, fake footage of airstrikes and conflict spread on X.

The way X is handling the problem gives the game away. According to Bier, the site will rely on the community to police itself, and the punishment is a 90-day suspension not from the site but from the monetization program.



In a new series by CBC Podcasts, hosted by 404 Media's Sam Cole, join journalists, investigators, and targets of non-consensual intimate images on the hunt for the world’s most prolific deepfake mastermind.#Podcast #podcasts #cbc #Deepfakes


New Podcast Alert: The Globe-Spanning, Multi-Newsroom Hunt for Mr. Deepfakes


Mr. Deepfakes was the biggest website in the world for sharing AI-generated abuse imagery, swapping tips and tricks for more realistic results, and posting endless, fake, nonconsensual videos of everyone from celebrities to everyday people. In a new podcast by the CBC, I got to tell the tale of how deepfakes started, what targets go through, and where we go next.

It's called Understood: Deepfake Porn Empire. It's about the decades-long rise of non-consensual deepfake porn, the targets who are fighting back, and what it takes to stop its proliferation. Check it out here and listen wherever you get your podcasts.

The first three episodes are already up, so you can binge them all before the finale next Tuesday.



In the first episode, "The Dawn of Fake Porn," you’ll get a fascinating history of the decades of cultural and technological standards that set the stage for AI-generated nonconsensual imagery as we know it today. I learned a lot in this episode myself, including about a guy who went by “Lux Lucre” who ran two Usenet groups dedicated to fake nudes of celebrities in the 90s. This stuff goes so much farther back than you might realize.

In episode two, “So You’ve Been Deepfaked,” I got the chance to talk to Taylor, who discovered she’d been targeted by AI images while at university, working in a male-dominated field. Instead of hoping it’d go away, she set out to find her harasser, and found his other targets in the process. It all led back to one place: the biggest deepfake site in the world, Mr. Deepfakes.

Episode three just came out today: “The Notorious D.P.F.K.S.” is a romp through the investigative highs and lows that led a team of journalists scattered around the world to the door of Mr. Deepfakes himself. I was so thrilled to talk to investigative journalist Ida Herskind, OSINT specialist Zakaria Hameed, and Bellingcat’s Ross Higgins in this episode. Come for the How I Met Your Mother references, stay for the gripping chase.

Episode four, the series finale, launches next week. It’s a true crime story with CBC reporters on stakeouts and infiltrating hospitals, and legal and social experts breaking down what it all means now that we’re in a post-Mr. Deepfakes world—but far from a post-AI abuse landscape. Follow the Understood feed wherever you listen to get it when it comes out on Tuesday.

If you liked this season, head back to catch up on another series I hosted with the CBC: Pornhub Empire, on the rise and fall of the porn monolith.

Tune in and let me know what you think!


An internal DHS document obtained by 404 Media shows for the first time CBP used location data sourced from the online advertising industry to track phone locations. ICE has bought access to similar tools.#DHS #ICE #CBP #News #Privacy


CBP Tapped Into the Online Advertising Ecosystem To Track People’s Movements


📄
This article was primarily reported using public records requests. We are making it available to all readers as a public service. FOIA reporting can be expensive; please consider subscribing to 404 Media to support this work. Or send us a one-time donation via our tip jar here.

Customs and Border Protection (CBP) bought data from the online advertising ecosystem to track people’s precise movements over time, in a process that often involves siphoning data from ordinary apps like video games, dating services, and fitness trackers, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media.

The document shows in stark terms the power, and potential risk, of online advertising data and how it can be leveraged by government agencies for surveillance purposes. The news comes after Immigration and Customs Enforcement (ICE) purchased similar tools that can monitor the movements of phones in entire neighborhoods. ICE also recently said in public procurement documents that it was interested in sourcing more “Ad Tech” data for its investigations. Following 404 Media’s revelation of that ICE purchase, a group of around 70 lawmakers on Tuesday urged the DHS oversight body to conduct a new investigation into ICE’s location data buying.

💡
Do you work at CBP, ICE, or a location data company? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This sort of information is a “goldmine for tracking where every person is and what they read, watch, and listen to,” Johnny Ryan, director of the Irish Council for Civil Liberties (ICCL) Enforce, which has closely followed the sale of advertising data, told 404 Media in an email.

This post is for subscribers only





Gambling markets have conveniently found a stance that allows them to continue to profit from death and war.#Polymarket #Kalshi


With Iran War, Kalshi and Polymarket Bet That the Depravity Economy Has No Bottom


The main bet on the front page of Polymarket right now is “Will the Iranian regime fall by June 30?” The site has this at a 41 percent chance of happening as I write this.

On Polymarket, more than $5 million has been spent gambling on this question. On Kalshi, a competing prediction market where users can bet on almost anything, $54 million was spent on “Ali Khamenei out as Supreme Leader?,” a bet whose results somehow ended up ambiguous even after Khamenei’s assassination.

In a series of tweets over the weekend, Kalshi’s CEO and founder Tarek Mansour repeatedly twisted himself into pretzels attempting to explain how the absurd, grotesque exercise of allowing people to bet on politics, geopolitics, and world events is not supposed to allow people to profit from death.

“We don’t list markets directly tied to death. When there are markets where potential outcomes involve death, we design the rules to prevent people from profiting from death,” he wrote. He then posted the underlying rules of the bet, which read “If <leader> leaves solely because they have died, the associated market will resolve and the Exchange will determine the payouts to the holders of long and short positions based upon the last traded price (prior to the death).”

That we are discussing the ins and outs of which random gamblers get paid out during an illegal war in which hundreds of schoolchildren have already been bombed to death feels like the type of grotesque sideshow that is only possible because the U.S. government is only interested in regulating its perceived political enemies, and because much of the American economy feels held together by cope and the gobs of money being thrown into AI, data centers, and gambling. All of this is part of the perverse Silicon Valley, AI, crypto, and X-adjacent hustlebro gambling economy. That economy was legalized by companies like DraftKings and FanDuel, which spent eyewatering sums lobbying states to allow their gambling apps; it was “legitimized” by sports leagues that wanted to print money and by media companies desperate for gambling advertising dollars; and it has become a massive industrial complex that is not-so-slowly bankrupting a generation of underemployed people addicted to gambling. Polymarket and Kalshi took the DraftKings and FanDuel model and let people bet on basically anything, so now you can bet on which countries Iran will launch missiles against on the same app where you bet on the Nuggets/Jazz game or the winner of the Best Picture Academy Award. The new model is so good at parting people from their money that DraftKings and FanDuel themselves have been anxious to get into prediction markets.

This is how we end up with extremely underregulated companies more or less making up rules on the fly as they hop from crisis to crisis trying to determine the nature of reality, such as whether a suit is a suit or whether a dead guy is still in charge of the government, with each disputed bet having millions of dollars on the line. Both Polymarket and Kalshi have decided to go with the line that letting people bet on war, politics, and the general nature of reality will not distort reality through the insider trading we’ve already repeatedly seen, but will somehow improve public trust in the reporting of news. Polymarket has added a note to all Iran-focused bets that says “The promise of prediction markets is to harness the wisdom of the crowd to create accurate, unbiased forecasts for the most important events to society. That ability is particularly invaluable in gut-wrenching times like today. After discussing with those directly affected by the attacks, who had dozens of questions, we realized that prediction markets could give them the answers they needed in ways TV news and X could not.”

This is, conveniently, a stance that allows Polymarket to continue to profit from death and war, and allows its customers to continue to bet on it. Polymarket’s X feed is kind of like a fucked up newswire service for degenerates, its tweets today including things like “BREAKING: 41% of all scheduled flights to the Middle East have been cancelled today” and “NEW POLYMARKET: New Supreme Leader of Iran by…?” Polymarket’s recent integration with Substack means, I guess, that we’re about to see a generation of people who “get their news from gambling apps,” which is sure to lead to a healthy society. A recent interview on Polymarket’s own Substack valorizes a gambler named “Betwick” who lost 70 percent of his money largely because he “lost quite a bit on ‘Israel strikes Iran’ at the last minute, where it looked like they were going to negotiate for a few more days and Israel did the surprise attack” but was confident he could rebuild it by continuing to bet on various Iran war scenarios.

That these gambling apps will do anything to restore trust in how society operates or will in any way make it healthier is obviously, blatantly untrue. We have seen people manipulate maps to win Polymarket bets on the war in Ukraine, what appeared to be obvious insider trading on the U.S. attack on Venezuela, and numerous people banned or fired for insider trading on companies that they work at. Already there are allegations from lawmakers that there has been insider trading on the Iran markets. We have seen early research, meanwhile, that shows the resurgent gambling industry is sucking in huge numbers of young people and that people lose their money faster on these prediction markets than they do on sports gambling platforms. Missed out on Bitcoin? Missed out on GameStop? Missed out on NFTs and memecoins and dropshipping? Polymarket and Kalshi are here now.

The obvious farce of all of this is that Kalshi’s line that “we design the rules to prevent people from profiting from death” is untrue on its face; it’s just that the company would rather let you bet on the deaths and suffering of civilians than on those of dictators and presidents. Betting that Khamenei would stay in power was an explicit bet that he would be allowed to continue silencing dissent and killing those who oppose him; betting that he would be deposed was an explicit bet on what has already become a very deadly, illegal regional war. Even bets on things like “Will Iran close the Strait of Hormuz” and gas prices are explicit bets on the escalation or deescalation of this war, and thus a bet on people's deaths, which is obvious on its face but becomes clearer as you dig into the “rules” of any given bet.

Winning the Khamenei bet required any number of things that definitely would involve the deaths of many people and “requires a broad consensus of reporting indicating that core structures of the Islamic Republic (e.g. the office of the Supreme Leader, the Guardian Council, IRGC control under clerical authority) have been dissolved, incapacitated, or replaced by a fundamentally different governing system or otherwise lost de facto power over a majority of the population of Iran. This could occur via revolution, civil war, military coup, or voluntary abdication, but only qualifies if the Islamic Republic no longer exercises sovereign power. Routine political events such as elections, reforms, or leadership succession do not qualify. Internal coups or power shifts that preserve the Islamic Republic’s core structures also do not qualify. Only a clear break in continuity—such as a new provisional government, revolutionary council, or constitution replacing the Islamic Republic will qualify.”

Meanwhile, over on Polymarket, “Will Iran strike a Gulf State on March 2” requires the definition of a “qualifying strike,” which is “the use of aerial bombs, drones, or missiles (including cruise or ballistic missiles) launched by Iranian military forces that impact a gulf state's ground territory.” In case you’re wondering, “Missiles or drones that are intercepted and surface-to-air missile strikes will not be sufficient for a ‘Yes’ resolution, regardless of whether they land on a gulf state's territory or cause damage. Actions such as artillery fire, small arms fire, FPV or ATGM strikes, ground incursions, naval shelling, cyberattacks, or other operations conducted by Iranian ground operatives will not qualify.”

There is no chance any of this ends well. It’s already a disaster. Wanna bet?



Some AWS services are down in the Middle East. Recovery is unclear as it requires 'careful assessment to ensure the safety of our operators,' according to Amazon.#News #war


Amazon Data Centers on Fire After Iranian Missile Strikes on Dubai


Amazon’s cloud services are down in parts of the Middle East after “objects” hit data centers in the United Arab Emirates (UAE), causing “sparks and fire.” Around 60 AWS services are down in the region, affecting web traffic in the UAE and Bahrain. The outage follows Iranian attacks on the UAE, launched in retaliation for US and Israeli strikes on Iran.

Customers in Bahrain and the UAE began to report outages tied to the mec1-az2 and mec1-az3 Availability Zones in AWS’ ME-CENTRAL-1 Region on March 1, after Iranian ballistic missiles and drones struck targets in and around Dubai. Amazon did not confirm that AWS was down in the Middle East due to an Iranian attack and instead referred 404 Media to its online dashboard.
“At around 4:30 AM PST, one of our Availability Zones (mec1-az2) was impacted by objects that struck the data center, creating sparks and fire,” AWS said on its health dashboard. “The fire department shut off power to the facility and generators as they worked to put out the fire. We are still awaiting permission to turn the power back on, and once we have, we will ensure we restore power and connectivity safely. It will take several hours to restore connectivity to the impacted AZ.”

As of this morning at 9:22 AM ET, the damage had spread. “We are expecting recovery to take at least a day, as it requires repair of facilities, cooling and power systems, coordination with local authorities, and careful assessment to ensure the safety of our operators,” AWS said. “We recommend customers enact their disaster recovery plans and recover from remote backups into alternate AWS Regions, ideally in Europe.”
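
For readers wondering what “recover from remote backups into alternate AWS Regions” looks like in practice, here is a minimal, hypothetical boto3 sketch. It assumes a customer keeps an S3 backup bucket in the affected me-central-1 region and wants its contents copied into a bucket in eu-central-1; the bucket names are invented, and a real disaster recovery plan involves much more than object copies (database failover, DNS changes, redeploying infrastructure in the target region):

```python
# Hypothetical sketch: copy every object from a backup bucket in the
# affected me-central-1 region into a recovery bucket in eu-central-1.
# Bucket names are invented for illustration.
import boto3

SOURCE_BUCKET = "example-backups-me-central-1"   # hypothetical
DEST_BUCKET = "example-recovery-eu-central-1"    # hypothetical

src = boto3.client("s3", region_name="me-central-1")
dst = boto3.client("s3", region_name="eu-central-1")

# Page through the backup bucket and copy each object cross-region.
paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        dst.copy(
            CopySource={"Bucket": SOURCE_BUCKET, "Key": obj["Key"]},
            Bucket=DEST_BUCKET,
            Key=obj["Key"],
            SourceClient=src,
        )
        print(f"restored {obj['Key']} to {DEST_BUCKET}")
```

The catch, of course, is that pulling data out of a damaged region depends on that region’s S3 endpoint still answering, which is why backups are supposed to be replicated out of region before an incident, for example with S3 Cross-Region Replication.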

Amazon later shared more information about the attack and confirmed it was the result of drones. “Due to the ongoing conflict in the Middle East, both affected regions have experienced physical impacts to infrastructure as a result of drone strikes. In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impacts to our infrastructure,” it said. “These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage. We are working closely with local authorities and prioritizing the safety of our personnel throughout our recovery efforts.”

On Saturday, the United States and Israel launched Operation Epic Fury and struck targets inside of Iran, killing several political and military leaders including Ayatollah Ali Khamenei, the country’s Supreme Leader. In retaliation, Iran launched drone and missile attacks against Israel and multiple US-allied targets in the Middle East.

According to the Emirati defense forces, Iran attacked the country with two cruise missiles, 165 ballistic missiles, and more than 540 drones. The UAE and its largest city, Dubai, are often seen as a safe and stable destination in the Middle East. The country hosts wealthy people from across the region and influencers from across the world. Footage shared on social media showed the neon towers of the UAE backlit by missiles and munitions.

It’s unclear how long it will take for Amazon to restore services to the region or how far the damage will spread. Amazon’s dashboard is promising to bring things back up in “at least a day” but the war is far from over. Iran continues to strike targets in the Middle East and it’s unclear what America’s plan of attack is or how long this war might grind on.

Update 3/2/26: This story has been updated with more specifics about the attack from Amazon.

