Artist Sam Lavigne created ‘Slow LLM’ to make people question their dependence on tools like Claude and ChatGPT. Or at least, make them super annoying to use. #AI


This Web Tool Sabotages AI Chatbots By Making Them Really, Really Slow


Watching people outsource their critical thinking, emotions, and sanity to glitchy “AI” chatbots has been one of the most uniquely terrifying aspects of being a human being in recent years.

While wealthy tech evangelists like Sam Altman continue to make wild proclamations about how large language models (LLMs) are destined to do our jobs and raise our children, critics have compared Silicon Valley’s attempts to force dependence on chatbots to a mass-enfeebling event—an attempt to convince people that they are actually better off having machines think, act, and create for them.

Now, there’s a new way to discourage friends, family, and even complete strangers from turning to chatbots like Claude and ChatGPT: by using a tool called “Slow LLM” to make them really, reaaaaalllyyy slowwwww. Or at least, making them look that way.

“Are you concerned that you or your loved ones might be participating in a massive de-skilling event? Experiencing LLM-induced psychosis? Outsourcing cognitive and emotional functions to autocomplete? Install SLOW LLM on your computer, or the computer of a loved one, today!” reads a description on the tool’s website.

Created by artist Sam Lavigne, Slow LLM causes anyone accessing AI chatbots on a computer or network to encounter mysterious, painfully slow response times. It works by exploiting a quirk of the JavaScript language to rewrite the browser’s built-in “fetch” function, which retrieves data from the web. When a user visits a chatbot domain and enters a query, the modified fetch function stretches the response over an excruciatingly long period of time. As a result, the user perceives the LLM to be running slowly, when in reality it’s simply being arbitrarily metered by Lavigne’s code.
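The fetch-rewriting trick is easy to picture in code. Lavigne’s actual extension is on GitHub; the sketch below is only a minimal reconstruction of the general technique, assuming a browser (or Node 18+) environment with the standard `fetch`, `Response`, and `ReadableStream` globals:

```javascript
// A minimal sketch of the fetch-wrapping idea (a reconstruction, not
// Lavigne's actual code): wrap the page's fetch so response bodies are
// re-streamed with an artificial pause before each chunk.
function makeSlowFetch(realFetch, delayMs = 250) {
  return async function slowFetch(...args) {
    const response = await realFetch(...args);
    if (!response.body) return response; // e.g. 204 No Content

    const reader = response.body.getReader();
    // Re-emit the body chunk by chunk, sleeping before each one, so the
    // page perceives the model itself as painfully slow.
    const slowBody = new ReadableStream({
      async pull(controller) {
        const { done, value } = await reader.read();
        if (done) { controller.close(); return; }
        await new Promise((resolve) => setTimeout(resolve, delayMs));
        controller.enqueue(value);
      },
    });

    return new Response(slowBody, {
      status: response.status,
      statusText: response.statusText,
      headers: response.headers,
    });
  };
}

// An extension's content script would install the wrapper only on
// chatbot domains, e.g.:
// globalThis.fetch = makeSlowFetch(globalThis.fetch);
```

Because the wrapper only delays delivery rather than altering content, the slowdown is indistinguishable from a sluggish server on the provider’s end, which is what makes the sabotage so plausible.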

Lavigne says that the idea for the project came after seeing how deeply some of his students and acquaintances had come to rely on generative tools to do basic tasks.

“So many people are starting to use these tools to outsource their cognitive and emotional functions, and in the process of doing this they’re forgetting all these basic things that they’ve learned how to do,” Lavigne told 404 Media. “I think that the more people rely on LLMs, the more extreme this de-skilling event will become.”

Slow LLM can be installed as a Chrome browser extension, but it can also be deployed network-wide via an “Enterprise Edition,” a DNS service which causes everyone on a home, school, or corporate network to experience slow chatbot responses. This is done by simply changing the DNS server on your router to Lavigne’s custom domain—though he warns that using a random person’s DNS is generally not a great idea cybersecurity-wise, and recommends the safer option of hosting your own DNS server to deploy the Slow LLM code, which he has released for free on GitHub. The browser extension currently only affects Claude and ChatGPT, while the DNS version also slows down Grok and Google Gemini.
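A DNS-level deployment of this kind generally amounts to answering chatbot domains with the address of a slowing proxy while resolving everything else normally. A hypothetical sketch of that idea as a dnsmasq config—the proxy address is a documentation placeholder, and these are not Lavigne’s actual settings:

```ini
# dnsmasq.conf — a hypothetical Slow-LLM-style resolver
# Answer chatbot domains (and their subdomains) with the address of a
# proxy that meters responses. 192.0.2.10 is a placeholder address.
address=/chatgpt.com/192.0.2.10
address=/claude.ai/192.0.2.10

# Everything else resolves normally via an upstream resolver
server=1.1.1.1
```

Pointing a router at a resolver like this is why the effect covers every device on the network at once.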

“The idea was that these things are removing friction, so let’s add some friction back in,” said Lavigne, using the engineering term frequently used by tech bros to describe inefficiencies in a system. He argues that LLM chatbots have taken this idea of “friction” to an extreme, presenting any unpleasantness or difficulty we encounter as something that should be outsourced to Silicon Valley’s thinking machines—even if overcoming that difficulty is part of what makes human creativity meaningful and worthwhile. “Anything that removes the friction of something that’s difficult, it makes you not learn, and it removes the learning you’ve already achieved.”

In theory, one could activate Slow LLM without anyone noticing; most people would likely assume that chatbot providers like Google and OpenAI are having technical issues, which does happen without outside interference from time to time. Lavigne says that so far, he hasn’t heard from anyone that has successfully deployed Slow LLM on a work or school network. But he certainly isn’t discouraging people from trying.

“I have not yet tested it on any unwitting subjects, but I’m thinking about it,” Lavigne said in a mischievous tone, adding that it would be an interesting experiment to see how people react when presented with artificially-slow chatbots. “Maybe they’ll just rage-quit LLMs.”

Slow LLM is the latest addition to a series of impish tech provocations that Lavigne has become known for. During the height of the pandemic Zoompocalypse in 2021, he released “Zoom Escaper,” a tool that floods your Zoom audio stream with annoying echoes, distortions, and interruptions until your presence becomes unbearable to others. In 2018, he infamously scraped public LinkedIn profiles to build a massive database of ICE agents, which was subsequently removed from platforms like GitHub and Medium. Lavigne’s frequent collaborator Tega Brain has also released browser tools like “Slop Evader,” which filters out generative AI slop by removing all search results from after November 2022, when ChatGPT was first released to the public.

“I’ve been doing these little experiments in digital sabotage where I’m trying to make these tools that mildly interrupt computational systems,” said Lavigne. “One of the things I’ve been thinking about is how if the means of production is truly in our hands, and it’s also the way we’re communicating with other people and managing our social life, then what does it mean to interrupt productivity?”

Lavigne is not an absolutist, however. Without prompting, he admitted that he used Claude to help write some of the code for Slow LLM—until, of course, Slow LLM started working and forced him to complete the project on his own. Instead, Lavigne says he’s trying to make people question the habits they are forming by regularly using chatbots, tools which tempt us to essentially entrust all our knowledge, decision-making, and emotional well-being to massive companies run by tech billionaires like Altman and Elon Musk.

“My hope is to get people to think a little bit more about their usage of these tools,” said Lavigne. “But the broader thing I want people to think about […] is ways of interrupting these flows of data, these flows of power, and putting friction into these computational systems that are mediating so many parts of our lives.”



Why ridicule works to keep big tech’s claims in check, and what makes us hopeful for the future. #Podcast #podcasts


Ridicule as Praxis (with Emily Bender and Alex Hanna)


This week, Sam talks to Emily Bender and Alex Hanna about the marketing ploys of “artificial intelligence,” why ridicule works to keep big tech’s claims in check, and what makes them hopeful for the future. They’re the authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.
Dr. Alex Hanna is a writer and sociologist of technology, labor, and politics. She’s the Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California Berkeley. Dr. Emily M. Bender is a Professor of Linguistics at the University of Washington where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School.

They also host the Mystery AI Hype Theater 3000 podcast which “deflates AI hype and draws attention to the real harms of the automation technologies we call ‘artificial intelligence’.”
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.

Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher

The AI Con

Emily’s cartoon

"Questioning the Normalization of Surveillance" by the Center on Privacy & Technology at Georgetown

"You Are Not a Parrot" at NY Mag


Marty Tibbitts made billions as a Detroit telecommunications executive. But he wanted more. #Features


An Adrenaline Junkie Millionaire’s Quest to Become a Cocaine Kingpin


The British de Havilland DH-112 Venom is one of the most iconic combat jets of the Cold War, with a distinctive two-pronged tail design that stretched out far behind the main body of the aircraft and a striking red and black paint job. It also gained a reputation for handling issues at high speeds. And yet, that was the aircraft 50-year-old Marty Tibbitts flew one summer afternoon at a Wisconsin air show in July 2018.

Tibbitts, a millionaire who made his money launching call center businesses, regularly flew, and bought, historical aircraft like the Venom. He ran the World Heritage Air Museum in his home state of Michigan, which housed his collection of around a dozen planes.

Sat in the Venom’s cockpit, Tibbitts maneuvered the plane along the runway behind another aircraft. The first plane took off. About eight seconds later, two seconds sooner than he was supposed to, Tibbitts pulled the Venom’s stick back and brought his craft into the air.

Immediately something was wrong. People on the ground saw the Venom’s wings rock back and forth shortly after its sluggish takeoff, a sign that it might be caught in the wake of the first plane. One video showed the Venom starting to make a shallow left turn as the plane’s engine sound decreased, then rapidly increased. Black smoke billowed. The plane stalled. As the aircraft barely reached 200 feet, it started to descend with its nose still pointed upwards.

💡
Do you know anything else about Marty Tibbitts or Ylli Didani? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Tibbitts crashed into a nearby barn with another two people inside. Flames engulfed the plane and set the barn and other nearby buildings on fire too.

“We got a plane down!” a man yelled in a 911 call. “Building’s on fire!” Tibbitts died in the crash.

A day later Tibbitts’ brother, JC, gave a statement to local media: “Our family is devastated by the loss of Marty. To say he was passionate about all things in his life—family, business and aviation would be to immensely understate the case. He died pursuing one of his passions,” it read. “Beyond his family, friends and business associates, many will miss this unique and special person.”

As news of Tibbitts’ death spread, his wife received a phone call from one of those business associates. He was crying on the other end of the line. “It can’t be true, it can’t be true,” the man said.
A screenshot of a U.S. court record including photos of Didani.
The man in tears on the phone was Ylli Didani, a now convicted cocaine trafficker who orchestrated massive shipments of drugs into the UK and multiple European ports. Tibbitts, it turned out, had a secret life. Without the knowledge of his family, Tibbitts worked closely with Didani to become an aspiring international drug lord. The pair commissioned the construction of an elaborate underwater drone that would be stuffed with cocaine and latch onto ships with magnets. Tibbitts was the money and brains behind the operation, funding the submarine’s design and development. In messages with Didani, he referred to himself as Tony Stark, the alter ego of the millionaire inventor and superhero Iron Man. According to investigators, Didani’s cocaine trafficking business was worth tens of millions of dollars. Didani had now lost his business partner and friend.

Extensive interviews with Didani, including over the email system of the prison he is currently incarcerated in, and thousands of pages of court transcripts reviewed by 404 Media reveal the story of a millionaire who, even with his massive fortune, wanted more and more. Tibbitts wanted to pillage Egyptian tombs for artifacts and become an ambassador to Albania. He allegedly invested in a company making flying cars, tried to source Black Hawk helicopters to sell to other countries, and arranged a massive load of cash to be flown on his private jet to buy bulk cocaine. Tibbitts, who was at one point a primary target during the investigation into the cocaine group’s operations, left a gaping question with his death: why did he do it? Why did the man who had everything lead a secret double life as an international drug kingpin?

“It was perplexing,” Detective Brandon Leach, from the Farmington Hills Police Department, who was part of a narcotics task force and one of the agents who investigated the group, later said in court.

POWERHOUSE


Tibbitts made his money in perhaps the most boring industry possible: providing back office support and a 24/7 live answering service for small businesses at the start of the millennium. An executive bio included in legal filings tried to make it sound more exciting by saying Tibbitts had years of experience “managing high-technology businesses and providing dynamic direction and oversight to start up companies and emerging technologies.” Tibbitts’ companies, Back Office Support Systems and Clementine Live Answering Service, ultimately made him a very rich man. The Stanford graduate was invited to join the Young Presidents’ Organization, a group for successful entrepreneurs. “Business doesn’t stop,” a narrator says in one of Clementine’s promotional videos.

But in essentially every other aspect of his life, Marty was an adventurer. He invested in companies across the Middle East, Europe, and Asia, and ran his own security firm with a base in the United Arab Emirates. He got his pilot’s license, developed a deep interest in historical aircraft, and opened the air museum.

“He was just always learning something new,” Tibbitts’ wife, Belinda Tibbitts, later said in court. “He spoke like four languages and taught himself to play the banjo, and he flew all of these planes. He was just very brilliant, and he was also a good businessman.” Belinda called Tibbitts “a genius.”

In around 2008, Belinda had a personal trainer at her local gym called Donald Larson. Twice a week, Larson trained Belinda, and she introduced the trainer to her husband. Eventually Larson trained Tibbitts as well, and the pair became friends.

Larson saw that Tibbitts was very precise about what he wanted people to know about—his eclectic and expensive tastes—and what he wanted to keep secret, such as suggestions he may have seen women other than his wife while she was out of state. He deliberately compartmentalized his life. “He hid things that he didn’t want anyone to know about,” Larson later said in court.

Larson had a checkered past, having served 18 years for cocaine possession. He also knew Didani, the drug trafficker, who used the same gym. The trio then started hanging out, and soon Didani and Tibbitts became friends, with Didani even staying over at his home. Belinda didn’t like that, she later said in court. One reason was that sometimes Tibbitts would let Didani borrow Belinda’s Porsche Cayenne SUV; Didani would put his feet up on the dash and return it stinking of cologne.
A screenshot of a U.S. court record.
Didani grew up with very little money. In private messages reviewed by investigators and read in court, Didani’s father reminded him of times when the family didn’t have enough money for bread. Like Tibbitts, he dabbled in all sorts of businesses. He owned a car wash in Detroit, may have owned a car dealership in Dubai, looked into the medical marijuana industry, and tried to run an ATM business. He travelled a lot, living in Europe and South America in between going to Chicago to see his sick mother. Videos appeared to show him building shelter and giving food to the needy overseas. He liked to party and wore a rainbow-faced gold Rolex.

Didani was not particularly careful with hiding the fact that he was also a drug trafficker. He later boasted to friends in text messages about moving drugs from South America, and sent his family related news articles when shipments were seized by the authorities. He was also sloppy with his security. Didani used encrypted phones to conduct his business, sometimes juggling four cellphones at once. But he often took photos of those messages with his normal iPhone, which uploaded backups of those images to iCloud, making them accessible to the authorities.

In a way, Tibbitts and Didani were kindred spirits; two men constantly looking for the next thing to invest or expand into. There may have been an ulterior motive to Didani becoming such good friends with Tibbitts, however. Didani was looking for someone with money, Larson later said, and so that was the original reason for the introduction. Drug trafficking, it turns out, needs some upfront investment.

The trio then travelled in various combinations, or sometimes Didani and Larson went on trips on the millionaire’s behalf. Didani and Larson met in Albania in an unsuccessful attempt to get Albanian Air Force planes for Tibbitts. Then the pair went to Egypt, where a family allegedly had a house connected to an ancient tomb belonging to the grandson of a Pharaoh, and were selling golden artifacts from it. Tibbitts was interested in buying the artifacts, but his associates never went inside themselves.

Tibbitts and Didani travelled to Antwerp, Belgium, together, a port that has become the cocaine gateway to Europe. Didani later suggested in court that the pair repeatedly visited Washington D.C. They went to Didani’s native Albania together because Tibbitts was looking into starting an over-the-counter medication distribution company. The pair explored somehow making Tibbitts an ambassador in Albania; Tibbitts’ wife later testified in court that he sent her a video of himself running on a beach with the president of Albania.

“Marty had his hands everywhere,” Detective Leach later said in court. He described Tibbitts’ various escapades as the millionaire “attempting to spiderweb out.”

Many of these did not pan out. But Tibbitts continued to funnel money to Didani. According to Larson, that amounted to millions of dollars. He said the drug trafficker “conned” the millionaire.

PROJECT REMORA


“Everything begins with an idea,” the website for Peregrine 360, a small engineering design firm in Montreal, Canada, read. “Some of the greatest inventions we see today were once just a few lines on paper.”

The company offered to take customers’ concepts and turn them into real designs. Peregrine 360 would not only make a prototype of a customer’s device, but also connect them with factories to mass produce it, according to the company’s website. Usually they produced things like 3D-printed models of T-Rexes. In around 2016, a man called Dale Johnson asked the company over email to make something a little different: a long, hollowed-out underwater drone that had enough room to store items inside.

Peregrine 360 had no problem with that, and got to work. A company representative sent Johnson an invoice.

Johnson was in fact Tibbitts, according to court records. Investigators figured that out because Tibbitts copied and pasted the exact text of Johnson’s email to Didani and directed him to pay the company. Tibbitts told Didani he would make the drone for him, Didani later told me.

In one of his notebooks, Tibbitts sketched out his and Didani’s idea. It showed a rough drawing of the bottom of a boat. The idea was to create what investigators would later call a “parasitic” underwater drone. The device would have enough room to store a large amount of cocaine, and clamp onto a ship with magnets. Once near its destination, the torpedo-shaped drone would release from the ship, and co-conspirators would come and retrieve its contents.
A photo of the drone from U.S. court records.
Around the sketch of the boat were a list of questions that Tibbitts addressed to himself: “Inspect ship first to look for right attach spot?”; “Send one in case this one fails, should I put it in the wake?”, and “Should I have two rows of magnets or one?” Another part noted to put spikes on the device so “no birds.”

The notebook said the drone should be between 20 and 25 feet long with a one ton capacity. Tibbitts contemplated whether a drone that big would really be as stealthy as needed for smuggling cocaine without getting caught.

“You have any worry about size of drone and getting it into water quietly?” Tibbitts wrote in a 2018 message to Didani, using the moniker Toni Stark. “For 1T it is almost 20 feet long.”

“No brother,” came the reply.

The submarine drone was “very ahead of its time,” Didani later told me.

With her husband’s constant jet setting, Belinda Tibbitts sometimes played assistant, booking hotels and flights for Marty. Even with that involvement, Belinda did not know anything about the drone, or her husband’s moves into drug trafficking, at the time. But she later recalled in court coming across one of his belongings mentioning the “Remora Project.” A remora is a type of fish that uses suction to stick onto the body of larger animals.

TAKE THE JET


In encrypted messages, Didani’s associates showed they were excited about the drone. Didani said they planned to build a few of the contraptions, and later went on to discuss potentially using them off the coast of Barcelona. Messages stored in Didani’s iCloud account showed he was dealing directly with Colombian traffickers, investigators said.

Marty had a red line, though: he didn’t want anything to do with cocaine in the United States. Any business would need to be overseas.

While the pair’s high-tech plan came into focus, they faced a very old school problem: how to move cash around. Tibbitts had access to a fantastic amount of wealth, including in his personal bank accounts. But getting that money to Didani, and in a relatively secure way, had its challenges.

Part of the solution often came down to Larson, the personal trainer, who acted as a middleman between Tibbitts and Didani to shift money around. In December 2017, the group needed to move $450,000 from Michigan to Washington D.C., where Didani was staying. By investigators’ description, it involved Tibbitts writing out multiple checks to Larson, who then cashed them out and flew the money down in secret, in a duffel bag, in the middle of the night on Tibbitts’ private jet. The purpose of the cash was to buy cocaine in bulk, prosecutors said.

It started with Tibbitts calling Larson and telling him to pick up a bunch of checks to cash. Larson went to Tibbitts’ home, let himself in with the code, went up to the second floor office, and grabbed the signed checks from the desk. Tibbitts had left them blank for Larson to fill out.

Tibbitts was travelling with his wife in China at the time. He appeared to have trouble with his bank clearing such a huge series of withdrawals. Belinda later recalled hearing Tibbitts on the phone with the bank, clearly annoyed, telling the clerk to do just what he said.

Eventually Larson exchanged the personal checks for cashier’s checks, cashed them out, and boarded Tibbitts’ private jet, which Tibbitts had arranged with a pilot. In the air, Larson texted Didani on the ground. Didani warned his co-conspirator to be careful because D.C. was crawling with police.

“Got to be careful, there’s a lot of cops around here at night, we got to be discreet,” Didani wrote.

Upon landing, Didani was waiting at the airport to take the money. It was dark, and Didani told Larson to just leave the duffel bag on the ground.

After the money was delivered to Didani in D.C., he laid it out on a bed and took photos. He drove the cash to New York, gave it to a Chinese money launderer in Flushing, and the money disappeared. Investigators later found Didani’s phone connected to the wifi of the Trump Tower hotel while he was in New York.

In all, Tibbitts wrote around a million dollars worth of checks to Larson, prosecutors later said.
A screenshot of a U.S. court record.
In court, Larson said he understood the $450,000 was for the purchase of cocaine. Didani later told me that money had nothing to do with drugs; instead it was for politicians in Washington, Europe, and Albania, he said.

Regardless, by this point investigators had taken notice of Didani. He originally came to the attention of the authorities in part because he kept taking short trips overseas with a man already on Border Patrol’s radar. That association and the suspicious flights put Didani in the crosshairs too. Authorities also received intelligence that Didani was trafficking cocaine in Europe, with the money coming from Detroit.

After seizing Didani’s phone at an airport in the U.S., law enforcement steadily sent Apple search warrants to access the trafficker’s iCloud backups. From that, the investigators realized Tibbitts played a much deeper role in the broader drug trafficking conspiracy. There were photos of the two together. Didani even had copies of Tibbitts’ Michigan driver’s license and passport. One of the investigators later said he couldn’t wrap his head around why Tibbitts would get involved in something like this.

In November 2017, authorities seized 140 kilograms of cocaine in the Netherlands. In text messages, Tibbitts and Didani reacted to the seizure.

“Hmm. Let’s talk when u get back,” Tibbitts replied. He signed off the message as “Stark.”

“We just F’d up,” Didani wrote back.

“We will fix, brother,” Tibbitts wrote.

“Not like this, brother, we need to clear heads.”

“Yes, definitely. Ok, brother, talk to you later.”

From the text messages, Tibbitts appeared much more relaxed than his drug trafficking co-conspirator.

Didani sent Tibbitts a media report about the seizure, in which police arrested six Albanians and a Polish man.

Those drug trafficking ambitions came to a screeching halt the following year when Tibbitts died flying his Venom. With Tibbitts’ death, so too died the idea of the torpedo drone. Investigators said the group never succeeded in building a working prototype. The money was gone. Peregrine 360 stopped work on the project and dismantled it for parts.

“It was not just an idea. It was an idea that took multiple steps to get to the beginning of a prototype, the design, financing, purchasing parts for the prototype and then conceptualizing the prototype,” Detective Leach said in court. Tibbitts just died before it could be launched.

In April 2019 investigators executed search warrants at Belinda’s homes in California and Michigan. On Mr. Tibbitts’ Surface Pro, they found an article about money laundering.

Although he was never charged because of his death, Leach said in court that “Mr. Tibbitts was a financier and conspirator that helped in the design of the parasitic device to help transport cocaine.”

It appears Tibbitts’ moves into the drug trafficking world were not for money. Belinda said in court that the couple were doing fine financially at the time of his death. She acknowledged she was not aware of a number of sizable transactions out of her husband’s account, though. Representatives of Belinda did not respond to a request for comment.

“That’s the only thing, reason, I can think that he got involved with Lou [Didani] is—you know, like, rich people, they want to climb mountains when they have too much money in the freezing cold and possibly die. I want to drink a Piña Colada on a beach; I don’t want to get stung by a manta ray,” Larson said in court. Tibbitts wanted to “live on the edge.”

Tibbitts “love adrenaline,” Didani told me. “Very smart and ambitio[us].”

“That’s the way Marty was. That’s why he flew those planes,” Larson added.

Correction: this article originally said Tibbitts was a billionaire. This was a mistake, the copy has been updated to say millionaire.


Scientists have narrowed the hunt for alien life to 45 rocky worlds where liquid water could make life possible. #TheAbstract


Scientists Narrow Down the Hunt for Aliens to 45 Planets


Welcome back to the Abstract! Here are the studies this week that visited strange new worlds, broke the adorability scale, pigged out, and took in an alien light show.

First, scientists sift through thousands of planets to find the best possible sites for life. Then: meet a Cretaceous cutie, check out some python blood, and travel to the biggest moon in the solar system.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens or subscribe to my personal newsletter the BeX Files.

The best of all possible worlds


Bohl, Abigail et al. “Probing the limits of habitability: a catalogue of rocky exoplanets in the habitable zone.” Monthly Notices of the Royal Astronomical Society.

Scientists have discovered more than 6,000 exoplanets, which are planets that orbit other stars, but most of these worlds are hopelessly inhospitable to life. To home in on the best candidates for habitability, a team combed through the catalogue of exoplanets to identify the best potential alien homes.

The short-list includes 45 rocky worlds that are no bigger than twice the size of Earth and orbit within the habitable zone (HZ) of their stars, which is the region where liquid water might exist on the surface. The most exciting destinations include four planets that orbit the red dwarf star TRAPPIST-1, about 40 light years away, and Proxima Centauri b, the closest known exoplanet, located just four light years from Earth.

“To assess the limits of surface habitability, it is critical to characterize rocky exoplanets in the HZ,” said researchers led by Abigail Bohl of Cornell University. “Observations of known rocky exoplanets on the edges of the HZ can now empirically explore these boundaries.”

“The resulting list of rocky exoplanet targets in the HZ will allow observers to shape and optimize search strategies with space- and ground-based telescopes… and design new observing strategies and instruments to explore these worlds, addressing the question of the limits of exoplanet surface habitability,” the team added.
A diagram depicting habitable zone boundaries across star type with rocky exoplanets.
While previous studies have compiled similar lists, this work includes updated observations and also organizes the planets according to key properties such as age, orbital characteristics, radiation exposure, and ease of observation from Earth. In this way, the researchers pave the way toward testing individual factors that influence habitability, such as whether older planets seem to be more hospitable to life.

It could also be useful to compare planets that orbit at the edges of the habitable zone to planets right smack dab in the middle. After all, in our own solar system, Venus and Mars sit at the inner and outer edges of the habitable zone, while Earth is vibing right in the Goldilocks zone.

It may be that planets in other star systems are similarly limited in their habitability as they approach the edge of the zone—or maybe not! We won’t know until we look. And now, we know where to start. To the observatory!

In other news…

Forever young at 100 million years old


Jung, Jongyun et al. “A new dinosaur species from Korea and its implications for early-diverging neornithischian diversity.” Fossil Record.

It is my great pleasure to inform you that an incredibly cute baby dinosaur has been discovered in South Korea, where dinosaur fossils are very rare. Meet Doolysaurus, named for the popular Korean cartoon character Dooly the Little Dinosaur. This little infant lived in the mid-Cretaceous period, about 100 million years ago, and represents a new species of thescelosaurid, a type of bipedal dinosaur.
The skeletal anatomy of a juvenile Doolysaurus huhmini. The graphic highlights the fossil bones that were found with the dinosaur. Image: Janet Cañamar, adapted from Jung et al 2026.
“Here, we describe a small, well-preserved skeleton…recognized as the holotype of a new genus and species, Doolysaurus huhmini” which includes “the first diagnostic cranial material of a dinosaur from Korea,” said researchers led by Jongyun Jung of the University of Texas at Austin. “It contributes novel insights into the diversity of the Korean dinosaur fauna, which has previously been known primarily from ichnofossil and egg fossil records.”
An artist’s interpretation of a juvenile Doolysaurus huhmini. Image: Jun Seong Yi
To top it off, this dinosaur might have sported a fuzzy coat. Jurassic Park has primed me not to trust any tech billionaire that wants to resurrect dinosaurs for public spectacle, but I’ll make an exception for Doolysaurus.

The right stuff for being stuffed


Xiao, S., Wang, M., Martin, T.G. et al. “Python metabolomics uncovers a conserved postprandial metabolite and gut–brain feeding pathway.” Nature Metabolism.

At dinnertime, pythons go whole hog—often literally. These huge snakes can devour their own body weight in a single meal, allowing them to fast for more than a year between feedings. In a new study, scientists probe these extreme eaters by analyzing the blood of Burmese pythons during their “postprandial” (after-gulp) phase.

“Burmese pythons display a remarkable array of postprandial responses, including more than 40-fold increase in energy expenditure, sustained tissue protein synthesis and more than 50 percent increase in the size of most organs,” said researchers co-led by Shuke Xiao of Stanford University, Mengjie Wang of the University of South Florida, and Thomas G. Martin of the University of Colorado, Boulder.
A Burmese python held by an author of the study. Image: Patrick Campbell/CU Boulder
In other words, the snakes “undergo extensive gastrointestinal remodelling” that truly puts humanity’s best competitive eaters to shame. Joey Chestnut would have to simultaneously swallow over 2,000 hot dogs to even rival their sublime engorgement, just in case you are interested in some mustard-smeared napkin math (his world record is a measly 83).

Ganymede gets a glow-up


Cao, Xin et al. “Auroral Emissions on Ganymede: New Constraints on Their Electron Energy Dependence.” Geophysical Research Letters.

We’ll close, as all things should, with an extraterrestrial aurora. This week, let’s gaze into the glowing skies of Jupiter’s moon Ganymede, the largest moon in the solar system and the only one endowed with its very own magnetic field.

Now, scientists have discovered that “Ganymede's auroras are brighter than previously thought,” according to a study based on new atmospheric measurements and laboratory data.

Ganymede’s “mini-magnetosphere [is] embedded within Jupiter's powerful magnetospheric environment,” said researchers led by Xin Cao of the Dublin Institute for Advanced Studies. “This unique configuration allows for auroral processes similar in morphology to those observed on magnetized planets, but driven by different external and internal conditions.”

The research illuminates the complex magnetic interactions between Ganymede and Jupiter, which will be studied in more depth by future missions, such as the European Jupiter Icy Moons Explorer (Juice) that is currently on its way to the gas giant, aiming for a 2031 arrival. I hope this news of cosmic radiance adds some sparkle to your weekend.

Thanks for reading! See you next week.


This week, we discuss unfortunately checking Twitter for news, the closure of the metaverse, and being vulnerable in Marathon.#BehindTheBlog


Behind the Blog: Marathon and the Metaverse


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss getting stories from Twitter, the metaverse, and the new game Marathon.

EMANUEL: I think I’m addicted to Twitter again.

We haven’t written a ton about the war with Iran but I’ve been following the news closely because I’m checking if there are important stories for us to do there, and because I can’t help but watch the disaster unfold even if it’s making me incredibly anxious.

This post is for subscribers only




The attorney for the city of Ypsilanti, Michigan, said the construction of the data center puts “a big bulls eye target on this entire township."#News #AI


Tiny Township Fears Iran Drone Strikes Because of New Nuclear Weapons Datacenter


The tiny township of Ypsilanti, Michigan, is worried about being a target for drone strikes thanks to a planned datacenter that the University of Michigan is building to support nuclear weapons research. According to Douglas Winters, the township’s attorney, the University and Los Alamos National Laboratory (LANL) “have put a big bulls eye target on this entire township […] I believe it’s the truth.”

Winters delivered a report to the town’s Board of Trustees about the proposed datacenter during a public meeting on Tuesday. “Los Alamos, which produces the nuclear weapons, is a high value target,” he said. He pointed to America’s war in Iran as proof that the datacenter would be a target, noting that Iran’s drones had disabled AWS servers in the Middle East. “This is not a commercial datacenter. A Los Alamos datacenter is going to be the brains of the operation for nuclear modeling, nuclear weaponry.”
The university and LANL first announced their plan to build a $1.25 billion datacenter in 2024. The university picked nearby Ypsilanti Township—population of about 20,000—as the location for the datacenter and residents have been fighting it ever since. Concerns from the community are typical for people fighting against a datacenter: water, rising electricity bills, pollution, and noise.

Unique to the Ypsilanti datacenter fight, however, is its role in the production of nuclear weapons. The datacenter would service LANL, the birthplace of the atomic bomb and home to America’s nuclear weapons scientists. In January, LANL confirmed that the datacenter would, indeed, be used in nuclear weapons research.

To hear the university tell it, the datacenter will be one of the most advanced computing systems in the world. “We were told at the very beginning by U of M’s Vice President of public relations […] that they were going to build, in his words, the biggest, baddest, fastest computers in the world,” Winters said at the public meeting. “That, in of itself, is what makes these datacenters high value targets […] these data centers constitute power. Artificial intelligence is power. Supercomputers are power. And when something becomes that important, it becomes a target.”


Winters questioned the American military’s ability to protect targets from the threat of drone attacks on its own soil. “The drone capability is not a joke, folks,” he said. “The United States and Israel, in spite of all their high technology they’re bringing to bear in their war on Iran, they’ve actually had to request that Ukraine send their top advisors to help them understand how to best detect and destroy these drone attacks.”

He also questioned U of M’s values. Following a demand from the White House, the university eliminated its DEI programs in 2025. In February, again at the behest of the federal government, it announced the end of the PhD Project which helped people from underrepresented backgrounds get PhDs. “You have a situation now where the University of Michigan […] has cut a deal with the Department of War under Trump,” Winters said. “That’s what the University of Michigan has turned into by basically selling their soul to the Department of War.”

Jay Coghlan, the executive director of Nuclear Watch New Mexico, told 404 Media, “That LANL datacenter is going to be the brains for nuclear modeling and nuclear weaponry. Ultimately that's what it’s all about. Beware, a recent study found that in war games artificial intelligence went to escalation and nuclear war 95 percent of the time.”

According to Coghlan, the construction of the datacenter followed a familiar pattern. “The Lab has colonized brown people for eight decades here just like it’s now trying to do in Ypsilanti (New Mexico is 50 percent Hispanic and 12 percent Native American). But what the brown people in Ypsilanti have that they don’t have here is lots of water,” he told 404 Media.

Another topic of discussion at the Tuesday meeting was how to stop the construction of the datacenter. Winters and others explained that it’s been difficult to get the university, county, and other government powers to engage with them. Interested parties plead ignorance or recuse themselves because of financial involvement with U of M. “They’ve acted like The Godfather, making you an offer that you can’t refuse,” Winters said.


Trustee Karen Lovejoy Roe questioned why LANL wanted to build a datacenter 1,500 miles away from its home. “Why don’t you do that datacenter where you're going to build the plutonium pits? One’s in South Carolina, one’s in New Mexico. Tell me why?” Roe said during the meeting. “They thought that we would be an easy target […] that we’re just a bunch of poor brown and black and dumb hillbillies.”

But the Township isn’t completely powerless. “U of M is totally above the law, but is DTE?” Sarah, an Ypsilanti resident, said during public comments. DTE is the local power company. Datacenters are electricity-hungry buildings, and DTE will need to build substations to service LANL’s supercomputers.

“What if we had a moratorium on substations until we learned about the harmonics of the electricity and how that’s impacted by datacenters?” Sarah said. “Having a moratorium on heavy construction on the roads, you know, heavy construction equipment on the roads leading to the datacenter site […] it’s going to be scary and hard to stand up to the University of Michigan. It’s true: they’re very powerful and we just need to be creative and we need to be strong and we need to block them at every step of the way.”

Holly, another resident, suggested another plan of attack. “U of M’s vulnerability is in their reputation,” Holly said. “We need to continue to make them look as bad as possible.”

The University of Michigan did not return 404 Media's request for comment. LANL did not provide a comment.

Correction 3/20/26: This story incorrectly conflated the City of Ypsilanti with Ypsilanti Township. They are two separate, but neighboring, locations. We've updated the story to reflect this and regret the error.



Who could have possibly predicted this, besides everyone?#Meta


RIP Metaverse, an $80 Billion Dumpster Fire Nobody Wanted


A few things on the end of Horizon Worlds, the metaverse that Mark Zuckerberg believed in so much that he renamed his company:

1) It’s very sad that many of the people who worked on it have been unceremoniously laid off because their leaders appear to have no idea what they’re doing
2) lol
3) lmao, even

Who could have possibly predicted this?

When Zuckerberg announced Horizon Worlds not really all that long ago at a batshit livestream in October 2021, I wrote an article called “Zuckerberg Announces Fantasy World Where Facebook Is Not a Horrible Company.” During that livestream Zuckerberg said, “I believe technology can make our lives better. The future will be built by those willing to stand up and say ‘this is the future we want.’” The future Zuckerberg wanted, at that time, was not a future anyone else wanted. But he was bold enough to systematically light roughly $80 billion on fire, not because he was willing to stand up and paint a vision of the future, but because Facebook was mired in various horrendous scandals and because he needed to rebrand his company and needed something shiny to point at to keep Facebook’s stock price up. It is bad when actual economists say that money was thrown “into the toilet.”

Let’s check what I wrote then: “The future Zuckerberg went on to pitch was a delusional fever dream cribbed most obviously from dystopian science fiction and misleading or outright fabricated virtual reality product pitches from the last decade. In the ‘metaverse’—an ‘embodied’ internet where we are, basically, inside the computer via a headset or other reality-modifying technology of some sort—rather than hang out with people in real life you could meet up with them as Casper-the-friendly-ghost-style holograms to do historically fun and stimulating activities such as attend concerts or play basketball.”

Zuckerberg’s bold vision of the metaverse was a place where T-Pain would sell NFTs of imaginary sneakers at concerts attended by people sitting silently in their living rooms with computers strapped to their face, where Wendy’s could do integrated brand deals in which human-shaped avatars without legs could throw baconators at basketball hoops, and where Zuckerberg could pretend to know how to surf. Even on these pitiful metrics, the metaverse failed. “Whatever the metaverse does look like, it is virtually guaranteed to not look or feel anything like what Facebook showed us on Thursday,” I wrote at the time.

Over the last few years, Zuckerberg has found another thing he can ruin via his trademark process of pouring kerosene on huge piles of money and throwing matches at it (perhaps a fun metaverse game?). Zuckerberg’s current bold vision for the future is one in which social media is not social media at all but is instead a bunch of highly customized AI-generated ads delivered to you via an increasingly creepy algorithm. Alongside this, it is a future in which Reality Labs—the division of Meta that created Horizon Worlds—makes AI camera glasses whose main use appears to be harassing women, traumatizing the underpaid content moderators who watch the footage in developing countries, and fashion statements for federal officials whose current mission is kidnapping undocumented immigrants.

The complete and utter failure of the metaverse is a reminder not just of the fact that the future Silicon Valley is force feeding us is not inevitable, but that quite often these oligarchs quite simply cannot relate to real people, don’t know how or why people use their products, and very often have no idea what they’re doing.

I remember the metaverse, crypto, web3 Venn diagram of hype very well—in fact, I remember sitting in meetings where VICE executives proposed renting land in the crypto-focused Decentraland metaverse to build a virtual VICE headquarters (where we all worked before 404 Media). I noted at the time that Decentraland was stupid, and that far fewer people were on Decentraland at any given time than were reading even a failed blog post on the website of our failing company. It didn't matter. The people “willing to stand up and say ‘this is the future we want’” wanted a virtual building in a virtual dead mall, and they got it. Was it because they were so brave and forward looking? Or was it because they were rich and powerful and could say this is the future we, the business people, the business knowers, want?



In a feature the dating app says is set to roll out in the U.S. later this spring, Tinder plans to access users' camera rolls to pick photos and determine what they're into.#datingapps #tinder


Tinder Plans to Let AI Scan Your Camera Roll


Tinder plans to let machine vision algorithms loose on your camera roll. Instead of building a profile on their own, AI will scan users’ locally-stored photos—everything from gym selfies to pictures of their family, sensitive documents and dick pics—to help construct profiles by determining what users’ interests and values are.

Dating apps are the go-to way for people to connect romantically in the modern dating world. As AI has risen in popularity thanks to services like ChatGPT, however, users are suffering the consequences of problems like bots and AI-generated messages infiltrating dating apps. For some people, the experience is less authentic than ever as people offload get-to-know-you conversations to artificial intelligence.

The feature is still being tested, with early access available only in Australia beginning this month. Although Tinder says it attempts to filter out explicit images, users may still be concerned about Tinder's AI scanning their entire camera roll. “It's up to you to figure out what you're comfortable sharing back with Tinder,” Tinder Head of Product Mark Kantor told 404 Media. Still, users can’t pick individual photos they want analyzed or ignored. Tinder’s safeguards are meant to filter out explicit images or text, and to blur faces before insights are processed.

Tinder claims its AI is looking for themes and interests, like pets, activities, or food, as well as photos that are well-lit or well-framed. In theory, this will help users decide the best way to present themselves online. “There is some art to it,” Kantor said. “It's not just the science.” (It’s unclear what happens if your camera roll is full of bad photos.)

Eventually, Kantor said, Tinder will add the ability to turn photos into larger collages for their profiles. “We do give people a pretty big variety of photos so we're not going to go from 30,000 to three.” Kantor said it looks for subject matter and tries to group insights based on similar interests. “If I have one dog photo of 20,000, I'm not really a dog person,” Kantor said as an example.

Tinder has already leaned heavily into AI. Kantor told 404 Media that artificial intelligence is writing more than half the app’s code these days. Several of its new AI-driven features include photo enhancements, match recommendations, and photo scanning. Kantor said that the app’s use of AI is to “help you express yourself,” but not to do so on the dater’s behalf.

If the camera roll is a window into the modern soul, it is also a goldmine of personal information. Depending on what someone photographs, their camera roll could include everything from photos of sensitive documents, like banking or medical info, to nudes. It’s a potential security nightmare, especially when people are sharing intimate details about themselves or their dating lives. Security failures on dating apps like Tea put users in danger: multiple breaches exposed personal information, including photos, driver’s license information, and direct messages, before it was finally yanked from the App Store. Tinder has had its own privacy and security issues. Last year, we revealed the dating app was one of thousands co-opted to mine location data. In January, hackers claimed to have stolen internal data from Match Group, which owns Tinder.

According to Kantor, Tinder isn’t storing the data it pulls from photos on its end. “It's purely on your device,” he said. Tinder won’t scan your deleted photos, or anything from your phone’s hidden folder; after it’s finished scouring your images, the AI selects specific photos for users to choose to upload to their public profile. If the AI’s categorization of a user as, say, a dog person is inaccurate, users can note that feedback and choose to either accept or reject the AI’s insights. Anything that doesn’t go on someone’s profile is deleted, and if users want new insights later, they’ll have to do the process all over again, according to Tinder.

“In talking to this new generation of daters, they want something different,” Kantor told 404 Media. “I think you see connection, that hasn't changed. I don't think they're frustrated with dating. They're frustrated with all of the friction and the dead ends with dating.”

Megan Farokhmanesh is a games and culture reporter whose work has appeared in The New York Times, Wired, Axios, and The Verge. Find her on Bluesky.


How filmmaker Chris Parr put North Oaks, Minnesota on the map.#podcasts


Mapping Google's Unmappable City


North Oaks, Minnesota is the only city in the United States that is not on Google Maps Street View. YouTube documentarian Chris Parr, who grew up not too far from North Oaks, set out to change that earlier this year. For a brief few days, he literally put North Oaks on the map. And then it was gone again.

“It’s known by Minnesotans as a place where executives and CEOs live,” Parr told 404 Media. “Famously Walter Mondale is from North Oaks, but also like United Healthcare executives and Target executives.”
North Oaks has managed to largely stay unmapped on Street View because of the way the city handles its streets. In almost every city and town in the United States, property owners grant an easement to their local government for the roads in front of their homes (or have no claim to the roads at all). In North Oaks, homeowners’ property extends into the middle of the street, meaning there is literally no “public” property in the city, and the roads are maintained by the North Oaks Homeowners’ Association (NOHOA). “The City owns no roads, land, or buildings. The 50-60 miles of roads in the city are owned by the NOHOA members whose property extends to the center of the road subject to easements in favor of NOHOA,” reads the homeowners association’s website, which has very little information on it and notes that it is “unable to share most private documents with the public.” The roads entering North Oaks are posted with no trespassing signs and monitored by automated license plate readers.
In the early days of Google Maps, North Oaks was on Street View. But in May 2008, the city threatened Google with a lawsuit because its Street View cars had trespassed. Google deleted its Street View images, and North Oaks hasn’t been on Street View since.

"It's not the hoity-toity folks trying to figure out how to keep the world away," then-Mayor Thomas Watson told the Star Tribune in 2008. "They [Google] really didn't have any authorization to go on private property."

Google Maps allows people to upload their own images, however. And Parr set out to find a way to map North Oaks without actually going there. So he began mapping it with a drone.

“It’s a geographic oddity,” Parr said. “I realized the airspace above North Oaks operates differently than the property on the ground. I thought you could effectively map the city with a drone.”

Parr is right. The national airspace is technically managed by the Federal Aviation Administration, and “airspace” starts directly above the ground, which is something I covered over and over in the early days of consumer drones as towns sought to ban drones in certain areas.

“Technically, if you launch your drone from public property, which anyone can do if you’re a registered drone pilot, you can fly it straight up and above private property,” Parr said. And so Parr stood at “six or seven different spots” directly outside the boundary of North Oaks and flew his drone around. “I just pulled my car over onto the shoulder and popped my drone up and flew it over,” he added.
There were parts of North Oaks that he couldn’t reach by drone from outside the boundaries of the city, so eventually he decided he needed an invite into the city to go to a park within its boundaries to keep flying his drone.

“According to North Oaks’ ordinances, you can go like, visit a friend, or if you’re a contractor working on a house, you can go into the city, but you have to be an invited guest,” Parr said. “I made a Craigslist post asking for somebody to invite me and I got an absolute ton of responses. I started texting with this woman named Maggie and she invited me, so technically I had the invite to go to the park.”

Parr then took his drone footage and uploaded it to Google Maps. For a few glorious days, North Oaks was mapped. And then it was gone.

“I’ve since been in a battle with the people who flag the images,” he said. He also got a letter from a law firm representing the North Oaks Homeowners Association. “It’s not asking me to take any of the videos down or anything, but basically they say, ‘Don’t come back.’”

Parr’s experiment and documentary raises questions, of course, about who gets to have privacy in America. A wealthy enclave has set up the legal and surveillance infrastructure to be able to prevent being mapped. The rest of us, meanwhile, are subject to all sorts of surveillance by our neighbors and law enforcement. “The only reason it’s set up this way is because it’s such a wealthy community,” Parr said. “I know that I was able to do this, but I don’t know if I should be able to do this, and that’s kind of the question that I wanted to tackle. The YouTube comments are pretty crazy man. They’re all over the place. They’re very split 50/50 on that question.”

North Oaks did not respond to a request for comment.


There is no associated website yet, but the move comes after Trump ordered the release of files related to UFOs.#aliens #News


Government Registers Aliens.Gov Domain


The Executive Office of the President registered the domain aliens.gov on Wednesday a little after 6:30 AM, according to a bot that monitors federal domains. There’s no associated website just yet, but the registration comes a month after Trump said he would direct the government to release files related to aliens and UFOs to the public.

This post is for subscribers only




This week we talk about the disappearing (and reappearing) DOGE depositions; how AI is African Intelligence; and what AI job loss reports are missing.#Podcast


Podcast: The Disappearing DOGE Depositions


This week we start with Joseph’s series of articles about the DOGE depositions. He watched hours and hours of them, then a judge ordered them removed from YouTube. But, they’ve already been archived all over the web. After the break, Jason tells us about the AI data labelers who are fighting back. In the subscribers-only section, Jason breaks down what’s wrong with all the AI job loss research at the moment.
Listen to the weekly podcast on Apple Podcasts,Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
0:00 - Intro

0:51 - Google Street View's Unmappable City

3:40 - I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves

13:24 - DOGE Deposition Videos Taken Down After Judge Order and Widespread Mockery

18:58 - The Removed DOGE Deposition Videos Have Already Been Backed Up Across the Internet

28:32 - 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back

SUB'S STORY - AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet



“Organic molecules delivered from extraterrestrial materials may have played a key role in supplying building blocks for life on Earth,” said one scientist.#TheAbstract


Was Life Seeded from Space? ‘Complete Set’ of DNA Ingredients Discovered on Asteroid


🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

Scientists have discovered all five nucleobases—the fundamental components of DNA and RNA—in pristine samples from the asteroid Ryugu, according to a study published on Monday in Nature Astronomy. The finding strengthens the case that the ingredients for life are abundant in the solar system and may have found their way to Earth from space.

Life as we know it runs on DNA and RNA, which are built from five chemical bases: adenine, guanine, cytosine, thymine, and uracil. A team has now identified this “complete set” of nucleobases in rocks snatched from the surface of Ryugu in 2019 by the Japanese spacecraft Hayabusa-2, which successfully returned them to Earth the following year.

This discovery corroborates the results from another mission, NASA’s OSIRIS-REx, which returned samples of the asteroid Bennu that also contained all five nucleobases. Both asteroids belong to the same “carbonaceous” (C-type) family of primitive carbon-rich rocks, though the samples contain different ratios of the five nucleobases.

Taken together, the findings shed light on the origin of life on Earth and raise new questions about the odds that it exists elsewhere.

“These findings suggest that nucleobases may be widespread in carbonaceous asteroids and, by extension, in planetary systems,” said Toshiki Koga, a postdoctoral researcher at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), in an email to 404 Media.

“This means that some of the key molecular ingredients for life could be commonly available,” he added. “However, this does not imply that life itself is widespread, but rather that the chemical starting materials for life may be more common than previously thought.”

The emergence of life on Earth, also known as abiogenesis, remains one of the biggest mysteries in science. To untangle this enigma, scientists first need to figure out how our planet was initially enriched with the basic stuff of life—including water, amino acids, and the nucleobases that make up our genetic material.
The “Ryugu Story” illustration depicting the detection of all five canonical nucleobases in samples returned from asteroid Ryugu by the Hayabusa2 mission. Image: JAMSTEC
One popular hypothesis suggests that asteroids bearing these biological building blocks pelted Earth as it formed more than four billion years ago. This idea has been supported by the presence of nucleobases in pieces of carbonaceous asteroids that have fallen to Earth, such as the Murchison meteorite of Australia or the Orgueil meteorite of France.

Meteorites, however, are not pristine: they are eroded by exposure to space and can be contaminated by terrestrial material after landing on Earth. To get cleaner samples, scientists launched several spacecraft to grab material directly from the source, beginning with Japan’s Hayabusa mission, which delivered several milligrams of dusty grains from asteroid Itokawa to Earth in 2010.

Hayabusa-2 and OSIRIS-REx then obtained even larger samples from their targets, bringing back 5.4 grams from Ryugu and 121.6 grams from Bennu. Previous studies have already identified more than a dozen amino acids associated with life in both samples, as well as evidence that these asteroids were once altered by ice and water.

Now, following the discovery of all five nucleobases in the Bennu pebbles, Koga and his colleagues have found the complete set in Ryugu. The findings lend weight to the so-called “RNA world” model of abiogenesis. In this hypothesis, early life on Earth depended solely on RNA as a self-replicating molecule, laying the biological groundwork for later, more complicated systems that involved DNA and protein-based organisms. The extraterrestrial samples from Ryugu and Bennu provide evidence that at least some of the nucleobases that made up these early lifeforms came from outer space.

The results were “broadly in line with our expectations, but still very exciting to confirm,” Koga said. “All five nucleobases had already been detected in the Murchison meteorite and in samples from the asteroid Bennu. Since Ryugu is also a carbonaceous asteroid, we expected that these molecules might be present, and it was very satisfying to confirm that the complete set is indeed present in the Ryugu samples.”

But while both samples contained the royal flush of nucleobases, they differed in their relative abundances. For example, Bennu is much richer in pyrimidine nucleobases (cytosine, thymine and uracil) than Ryugu, though they both contain roughly similar levels of purine nucleobases (adenine and guanine). These idiosyncrasies point to a variety of formation processes that produced prebiotic materials on these celestial relics.

“Our results suggest that nucleobases can form under a range of conditions in early Solar System materials, particularly within primitive asteroid parent bodies that experienced aqueous alteration,” Koga said. “The observed relationship between nucleobase composition and ammonia abundance indicates that local chemical environments, such as the availability of ammonia, may play an important role.”

“At the same time, some precursor molecules may have formed earlier in interstellar environments, so nucleobase formation could involve multiple stages,” he continued. “Future studies, including analyses of different types of meteorites and laboratory experiments that simulate these conditions, will help to better constrain these formation pathways.”

In other words, understanding how these molecules form in space could help answer the age-old mystery of whether life is a rare cosmic fluke—or a common process in the universe. The research also highlights the remarkable ingenuity behind these sample-return missions, which have delivered tiny time capsules from the birth of our solar system directly into our hands.

“It is both exciting and humbling to work with these samples,” Koga said. “They are extremely limited and represent material that has remained largely unchanged since the early Solar System. At the same time, there is a strong sense of responsibility, because each tiny grain may contain important information about how organic molecules formed and evolved before the origin of life.”


Widely cited AI labor research ignores the most important thing AI is doing: Killing the human internet.#AI #AISlop


AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet


Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic’s paper, called “Labor market impacts of AI: A new measure and early evidence,” essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict whether a job’s tasks “are theoretically possible with AI,” which resulted in a chart that has gone somewhat viral and was included in a newsletter by MS NOW’s Philip Bump and discussed in a thread by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.)

In his thread, Mims makes the case that the “theoretical capability” of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that is exactly what it is doing. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily,” the researchers write. This is based in part on the “Anthropic Economic Index,” which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include “Complete humanities and social science academic assignments across multiple disciplines,” “Draft and revise professional workplace correspondence and business communications,” and “Build, debug, and customize web applications and websites.”

Not included in any of Anthropic’s research are extremely popular uses of AI such as “create AI porn” and “create AI slop and spam.” These uses are destroying discoverability on the internet and causing cascading societal and economic harms. Researchers appear to be too squeamish or too embarrassed to grapple with the fact that people love to use AI to make porn, and people love to use AI to spam social media and the internet, inherently causing economic harm to creators, adult performers, journalists, musicians, writers, artists, website owners, small businesses, etc. As Emanuel wrote in our first 404 Media Generative AI Market Analysis, people love to cum, and many of the most popular generative AI websites have an explicit focus on AI porn and the creation of nonconsensual AI porn. Anthropic’s research continues a time-honored tradition among AI companies: highlighting the “good” uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for. (It may be the case that people are disproportionately using Claude for more traditional work applications, but any study on the “labor market impacts of AI” should not focus on the uses of one single tool and extrapolate that out to every other tool. For what it’s worth, jailbroken versions of Claude are very popular among sexbot enthusiasts.)

Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth.

Anthropic’s paper does attempt to estimate what the effect of AI will be on “arts and media,” but again, the way the researchers do this is by attempting to decide whether AI can directly do the tasks that AI researchers assume are required by someone with a job in “arts and media.” Other widely cited papers about AI-related job loss also do not really attempt to consider the potential macro impacts of the ongoing deadening and zombification of the internet, and instead focus on “AI exposure,” which is largely an attempt to predict or measure whether an AI or LLM could directly replace specific tasks that people need to do. Widely cited papers from the National Bureau of Economic Research and Brookings released over the last several months attempt to determine the adaptability of workers in specific sectors to having many of their tasks automated by AI. The Brookings paper, at least, mentions the possibility of a society-wide shift that is impossible to predict: “the evidence underlying the adaptive capacity estimates here is derived primarily from observed effects in localized displacement events, rather than from large-scale employment shifts across occupations. As a result, the index may be most informative when displacement is relatively isolated—for example, when a worker loses their job but related occupations remain stable. In scenarios in which AI affects clusters of related occupations simultaneously, structural job availability may matter more than individual-level characteristics. Moreover, if AI fundamentally transforms the economy on a scale comparable to the industrial revolution (as some experts have suggested could be possible), it could make entire skill sets redundant across several occupations simultaneously.”

To be clear, AI-driven job loss is a critical thing to study and to consider. But many, many jobs, side hustles, and economic activity more broadly rely on “the internet,” or social media broadly defined. Study after study shows that Google is getting worse, traffic to websites is down, and an increasing amount of both web traffic and web content is being generated by AI and bots. Anecdotally, creators and influencers have told us it’s getting harder to compete with AI slop and harder to justify spending days or weeks making content just to publish it onto platforms where their AI competitors can brute force the recommendation algorithms effortlessly. We have heard from websites that have had to lay people off or shut down because Google’s AI overviews have destroyed their web traffic or because they lose out on search engine rankings to AI slop. Authors are regularly competing with AI plagiarized versions of their books on Amazon, and Spotify is getting overrun with AI-generated music, too.

This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media. We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.


The CEO of Krafton used ChatGPT to push out the head of the studio developing Subnautica 2 against the advice of his own legal team and failed miserably.#Subnautica2 #Krafton


CEO Asks ChatGPT How to Void $250 Million Contract, Ignores His Lawyers, Loses Terribly in Court


A judge ordered the reinstatement of a video game developer after he was fired as part of a scheme cooked up by a CEO using ChatGPT. Facing the possibility of paying out a massive bonus to the developer of Subnautica 2, the CEO of publisher Krafton used ChatGPT to create a plan to take over the development studio and force out its founder, according to court records.

The Monday ruling details the bizarre story. Unknown Worlds Entertainment is the studio behind the 2018 underwater survival game Subnautica. The company has since been working on the sequel, Subnautica 2. In 2021, South Korean publisher Krafton bought Unknown Worlds Entertainment for $500 million and promised to pay out another $250 million if Subnautica 2 sold well enough.

Krafton’s internal sales projections for Subnautica 2 looked great, which meant the company would likely be on the hook for the additional $250 million. To avoid that payout, Krafton CEO Changhan Kim turned to ChatGPT for help. “As Unknown Worlds prepared to release its hotly anticipated sequel, Subnautica 2, the parties’ relationship fractured,” the court decision said. “Fearing he had agreed to a ‘pushover’ contract, Krafton’s CEO consulted an artificial intelligence chatbot to contrive a corporate ‘takeover’ strategy.”

Kim partnered with Krafton Head of Corporate Development Maria Park and the company’s legal team to work out options. He toyed with finding a reason to fire the founders. According to court records, Park pinged Kim on Slack and told him that attempting to avoid paying the bonus would be legally risky. “Hi CEO . . . it seems to be highly likely that the earn-out will still be paid if the sales goal is achieved regardless of the dismissal with cause,” the Slack message said according to court records. “Therefore, there isn’t much that we can practically gain other than punishment with a simple dismissal alone, whereas I am worried that we may be exposed to lawsuit and reputation risk.”

But the CEO would not accept defeat. “And so Kim turned to ChatGPT for help,” court records said. “When the AI chatbot responded that the earnout would be ‘difficult to cancel,’ Kim complained to Park that the [payout] was a ‘contract under which we can only be dragged around.’”

Kim pressed the chatbot for an answer. “At ChatGPT’s suggestion, Kim formed an internal task force, dubbed ‘Project X.’ The task force’s mandate was to either negotiate a ‘deal’ on the earnout or execute a ‘Take Over’ of Unknown Worlds. They looked to buy time,” court records said. “Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a ‘Response Strategy’ to a ‘No-Deal’ Scenario.”

This was a piece of ChatGPT’s “Project X” for Krafton:

“a. Preemptive Framing - Repeat that protecting quality and fan trust is the highest priority, undermine the ‘Large Corporation VS. Indie’ framing

b. Securing Control Points -

* Lock down Steam/console publishing rights and access rights over code/build pipeline through both legal and technical aspects.

* For the earn-out freeze, keep room for negotiations through provision stating ‘immediate removal if specific development results are achieved’

a. Systematic materials for legal defense - Prepare contract interpretation memorandums, log all communications, seek external consultation
b. Team retention - Operation of retention packages for key personnel and rapid backfill pipelines in anticipation of resignation/departure scenarios
c. Two handed strategy - Create a structure that allows for both hardball (Legal+ Finance) and softball (Support/Incentives) approaches so moderate factions within Unknown Worlds can push for compromise.”


Kim followed ChatGPT’s advice rather than his lawyers’ advice, according to the court records. The first step was posting a message on Subnautica’s website to get fans on his side. According to court documents, Kim said the goal of the message was to “secure public support from fans and legal validation of our legitimacy.” He then suggested that ChatGPT write it for him. It achieved the opposite of his intended goal: Fans found the message bizarre and worried about the future of the game. Those fears were compounded when Kim fired the game’s original creators and entered into a legal battle with them.

The legal battle is ongoing, but Kim looks set to lose. The judge has ordered Krafton to reinstate the fired developers, and the ruling exposed the CEO’s flailing use of ChatGPT. Krafton told Kotaku that it was “evaluating its options” regarding the ruling and that it “puts players at the heart of every decision.”


A newly published study of how college students interact with chatbots and human strangers showed talking to a random person offers more connection than an LLM.#ChatGPT #AI


Texting a Random Stranger Better for Loneliness Than Talking to a Chatbot, Study Shows


Lonely young people are likely better off texting a random stranger than talking to a chatbot, according to a new study.

Researchers from the University of British Columbia found that first-semester college students who texted a randomly selected fellow first-semester college student every day for two weeks experienced around a nine percent reduction in feelings of loneliness. The same two weeks of daily messaging with a Discord chatbot reduced loneliness by around two percent, which turned out to be the same amount as daily one-sentence journaling.

The research included 300 first-semester college students who were either randomly paired with another student, given a daily solo writing task, or put into a Discord server with a chatbot running on GPT-4o mini.

The students were instructed to have at least one interaction per day in each of the groups. The human-human pairs were instructed to message each other however they wanted, while the researchers instructed the bot to “listen actively and show empathy,” and to be a “friendly, positive, and supportive AI friend to help the student navigate their new college experience.” The human participants ultimately acted pretty similarly in both types of chat, sending between eight and 10 messages a day in both their human text chains and their Discord conversations with the large language model (LLM).

However, participants who were paired with a human partner reported significantly lower loneliness after the study, and those paired with the chatbot did not. “This is just such a low tech, simple intervention, and can make people feel significantly less lonely,” Ruo-Ning Li, a PhD candidate at UBC and one of the authors of the paper, told 404 Media.

The research looked at college students specifically, to try to understand whether LLMs could be a scalable tool to help with the isolation that people can feel when going through a big change. The transition to college can be overwhelming: new classmates, new places, new rules. Young people are often away from parents or familiar structure for the first time, building out their new social networks among others who are doing the same. This is a particularly vulnerable time: if chatbots could really cure loneliness for a group of people like this, “then it would be great,” said Li. But only human-to-human interaction, despite being with a random person over text, had any significant effect.

The research is part of a movement to understand the effects of LLM interactions over periods of time. Another paper from the same lab, published this week in Psychological Science, looks at the experiences of more than 2,000 people over twelve months, checking in with them once a quarter. The study found that higher reported chatbot use was linked with higher loneliness later on, and vice versa. “Changes in chatbot use have a small effect on emotional isolation in the future. And emotional isolation has a similarly sized effect on your likelihood to use chatbots in the future,” Dr. Dunigan Folk, one of the study’s authors, told 404 Media. He cautioned against calling it a “spiral,” since other things could be changing in people’s lives to make them use chatbots and be lonelier. But, he said, “it’s suggestive of a negative feedback loop because it’s a reciprocal relationship.” Chatbots, he said, could be something like “social junk food.” They might make people feel good in the moment, “but over time, they might not nourish us the same way that human relationships do.”

He said this finding would be consistent with people replacing human relationships with LLMs. “I think it’s a trade-off thing where you talk to AI instead of a person,” Folk said. “The person would have been a lot more rewarding.”

And there is evidence to show that AI does have some short-term effects on mood. “If you measure their feeling of loneliness or social connection right after the interaction, people do feel better,” said Li. However, she added, “making people feel momentarily happy is not that hard.” It is not clear that a single positive experience is scalable or persistent longer term. “We eat candy, we feel happy. But if we eat a lot of candy over a long time, it could be harmful for our health,” Li said.

That positive short term effect is often reflected in public reports of chatbot usage. For example, two weeks ago, the Guardian published a column where a reporter trialled using an LLM as a therapist, described their validating interaction with it, and concluded that the “experience of being therapised by a chatbot has been wonderful.” While this isn’t necessarily a robust study design, there is empirical research that “one-shot” interactions with bots do make people feel better in the short term.

However, human interactions also have positive effects that chatbot use could be distracting people from. Li considers it important to consider the side effects of chatbot interactions, including their potential for replacing the incentive to seek out the positive effects of human connection. “AI can help mitigate negative feelings, but obviously, it cannot replace humans to build connections,” she said. “That shouldn’t be the goal of the AI design.”

A four-week March 2025 study from the MIT Media Lab and OpenAI explored how different types of LLM interaction and conversation impacted users’ mental wellbeing. The paper found that while some instances of chatbot use “initially appeared beneficial in mitigating loneliness,” higher daily LLM usage was associated with “higher loneliness, dependence, and problematic use, and lower socialization.”


A judge in London tossed out witness testimony after discovering the man was receiving coaching through a pair of smartglasses.#News #AI


Witness Caught Using Smartglasses in Court Blames it all on ChatGPT


An insolvency judge in England tossed out testimony after discovering a witness was being coached on what to say in real time through a pair of smartglasses. When the voice of the coach started coming through the cellphone after it was disconnected from the glasses, the witness blamed the whole thing on ChatGPT.

Insolvency and Companies Court (ICC) Judge Agnello KC in Britain wrote up the incident after it happened in January and the UK-based legal research blog Legal Futures was first to report it. The case considered the liquidation of a Lithuanian company co-owned by a man named Laimonas Jakštys. Jakštys was in court to get his business off an insolvency list and to put himself back in charge of it. It didn’t go well.
“Right at the start of his cross examination, he seemed to pause quite a bit before replying to the questions being asked,” Judge Agnello wrote. “These questions were interpreted and then there was a pause before there was a reply. After several questions, [defense lawyer Sarah Walker] then informed me that she could hear an interference coming from around Mr. Jakštys and asked if Mr. Jakštys could take his glasses off for a period as she was aware smart glasses existed.”

There was a Lithuanian interpreter on hand to help Jakštys talk to the court and she, too, said she could hear voices from Jakštys’s glasses. The judge pointed out they were smart glasses and asked him to take them off. “After a few further questions, when the interpreter was in the process of translating a question, Mr Jakštys’ mobile phone started broadcasting out loud with the voice of someone talking,” Judge Agnello wrote. “There was clearly someone on the mobile phone talking to Mr. Jakštys. He then removed his mobile phone from his inner jacket pocket. At my direction, the smart glasses and his mobile were placed into the hands of his solicitor.”

Jakštys showed up the next day in the glasses again and the judge told him to turn them off. “Jakštys denied that he was using the smart glasses to receive the answers that he was to give in court to the questions being asked,” the judgement said. “He also denied that his smart glasses were linked to his mobile phone at the time that he was giving evidence before me.”

During the court appearance, Jakštys claimed his mobile phone had been stolen but couldn’t provide a police report for the incident. He also repeatedly received calls on his smartglasses-connected phone from a number listed as “abra kadabra.” The call log showed that many of the calls occurred when he was on the witness stand. The judge asked him about the identity of “abra kadabra” and Jakštys said it was a taxi driver.

“When he was pressed as to why all these calls were made…Mr. Jakštys stated that he was not able to remember. This was a reply which he also gave frequently during his evidence,” Judge Agnello said.

In the end, the Judge tossed out all of Jakštys’ testimony. “He was untruthful in relation to his use about the smart glasses and in being coached through the smart glasses,” the judgement said. “In my judgment, from what occurred in court, it is clear that call was made, connected to his smart glasses and continued during his evidence until his mobile phone was removed from him. When asked about this, his explanation was that he thought it was ChatGPT which caused the voice to be heard from his mobile phone once his smart glasses had been removed. That lacks any credibility.”

This incident in the London court is just another in a long line of bad behavior from people wearing smartglasses. CBP agents have been spotted wearing them during immigration raids and Harvard students have loaded them with facial recognition tech to instantly dox strangers.



On Friday, a judge ordered those who uploaded the videos to YouTube to remove them. By Saturday, a backup of the videos was available online as a torrent and on the Internet Archive.#DOGE #News


The Removed DOGE Deposition Videos Have Already Been Backed Up Across the Internet


The DOGE deposition videos a judge ordered removed from YouTube on Friday after they had gone massively viral have since been backed up across the internet, including as a torrent and to the Internet Archive. The videos included DOGE members unable or unwilling to define DEI; discussing how they used ChatGPT and terms such as “black” and “homosexual” to flag grants for termination, but not “white” or “caucasian”; and acknowledging that, despite their aggressive cuts, they failed to achieve the stated goal of lowering the government deficit.

The news shows the difficulty in trying to remove material from the internet, especially that which has a high public interest and has already been viewed likely millions of times. It’s also an example of the “Streisand Effect,” a phenomenon where trying to suppress information often results in the information spreading further.

💡
Do you know anything else about this case? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.


Moons orbiting free-floating planets may remain warm for billions of years, raising the possibility some might host stable water, or even life.#TheAbstract


Alien Life Might Exist on the Starless Moons of Rogue Planets, Scientists Say


Welcome back to the Abstract! These are the studies this week that searched for life in the dark, stood up for hedgehogs, dropped some wisdom, and died in an inexplicably epic explosion.

First, aliens might be riding around interstellar space on exomoons, just in case that’s of interest to you. Then: an ultrasonic solution to roadkill, the limits of metrification, and an answer to a cosmic mystery.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens or subscribe to my personal newsletter the BeX Files.

The view from a rogue exomoon


Dahlbüdding, David et al. “Habitability of Tidally Heated H2-Dominated Exomoons around Free-Floating Planets.”

Living on a planet with a boring old Sun is for normies. In a new study, astronomers suggest that alien life could potentially emerge in a much more unexpected place: “exomoons” that orbit free-floating planets in interstellar space.

There are likely trillions of rogue planets wandering through the Milky Way, untethered to any star, raising the tantalizing mystery of whether any of them could be habitable. Now, researchers led by David Dahlbüdding of the Max Planck Institute for Extraterrestrial Physics (MPE) extend this question to exomoons that were dragged out into interstellar space with their planets.

“The search for exomoons within conventional stellar systems continues with no confirmed detection to date,” the team said. “Thus, free-floating planets might offer an alternative pathway for the first discovery of an exomoon.”

In other words, astronomers have never clearly seen an exomoon. But new techniques for spying free-floating worlds—such as microlensing, which reveals objects through the warped light of their gravity—could provide the sensitivity that is required for this long-sought detection.

With regard to potential habitability, Dahlbüdding and his colleagues focused specifically on exomoons that orbit planets with thick hydrogen atmospheres. If such a pair were to be kicked out of a star system, the exomoon’s orbit could become stretched out into a far more elliptical shape. This shift would cause the planet to exert more intense tidal forces onto its satellite, generating heat that could keep liquid water flowing on the moon over vast timescales.

“Close encounters before the final ejection even increase the ellipticity of the moon’s orbit, boosting tidal heating over millions to billions of years, depending on the moon’s and free-floating planet’s properties,” the team said. The tidal forces and atmospheric components could also “create favourable conditions for RNA polymerisation and thus support the emergence of life.”
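For readers who want the quantitative intuition behind that quote, the standard fixed-Q tidal dissipation model (the textbook estimate used for bodies like Io, not a formula taken from this paper, which may use a more detailed treatment) shows why a more elliptical orbit boosts heating: the heating rate of a synchronously rotating moon scales with the square of its orbital eccentricity.

```latex
% Standard fixed-Q tidal heating rate for a synchronously rotating moon
% (textbook estimate; this paper may use a more detailed model).
\dot{E}_{\mathrm{tidal}} \;=\; \frac{21}{2}\,\frac{k_2}{Q}\,
  \frac{G\,M_p^{2}\,R_m^{5}\,n\,e^{2}}{a^{6}}
% k_2 : moon's tidal Love number       Q   : tidal quality factor
% G   : gravitational constant         M_p : planet mass
% R_m : moon radius                    n   : orbital mean motion
% e   : orbital eccentricity          a   : orbital semi-major axis
```

The e² dependence is the key point: close encounters that pump up the eccentricity before ejection directly increase the moon’s heat budget, while a perfectly circular orbit (e = 0) shuts tidal heating off entirely.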

“These potentially habitable moons could be detected through a variety of techniques,” including microlensing, the researchers added, though they noted that actually analyzing their atmospheres “may not be feasible with any instruments currently in operation.”

While we may not be able to spot signs of life on these worlds anytime soon, it would be exciting just to discover a planet and a moon bound together, but unbound from any star, which is a genuine near-term possibility.

In other news…

Ultra-sonic the hedgehog


Rasmussen, Sophie Lund et al. “Hearing and anatomy of the ear of the European hedgehog Erinaceus europaeus.” Biology Letters.

Hedgehogs have long been ubiquitous in Europe, but cars now kill up to one-third of their population each year. Even more nightmarish, the advent of robotic lawn mowers has led to an uptick in hedgehog deaths.

To help protect these iconic critters, scientists suggest testing out acoustic repellents. A series of experiments with 20 hedgehogs from a wildlife rescue established that “hedgehogs can perceive a broad ultrasonic range,” with peak sensitivity around 40 kHz.
Sophie Lund Rasmussen, who goes by Dr. Hedgehog, with a hedgehog. Image: Joan Ostenfeldt
The results “show a potential for the development of targeted ultrasonic sound repellents to deter hedgehogs temporarily from potential dangers such as the particular models of robotic lawn mowers found to be hazardous to hedgehog survival, and more importantly, cars,” said researchers led by Sophie Lund Rasmussen of the University of Oxford.

“Designing sound repellents for cars to reduce the high number of road-killed hedgehogs enhances animal welfare and supports conservation of this declining flagship species,” the team concluded.

To channel the old joke, why did the hedgehog cross the road? Answer: Ideally it didn’t, due to scientific intervention. (I’ll be here all night).

Dropping in on science history


Cornu, Armel et al. “The drop and the metric system: how an unruly unit survived revolutions.” Annals of Science.

The metric system has been adopted by every country except Liberia, Myanmar, and the United States. But even as metrication spread rapidly from the late 18th century onward, a far more imprecise system—the drop—refused to drop out.

People have measured liquids in drop form for thousands of years, and still do in many contexts today. Researchers led by Armel Cornu of Uppsala University have now explored how such “non-standard units survive lengthy waves of standardization.” The paper is worth a read for its many interesting asides, like how acids were tested “by counting the number of drops…that could be placed on the skin before one witnessed the effects.” Gnarly.

It also gets into the political dimensions of metrication, including this proto-populist justification for standardizing units: “Numerous complaints about the diversity of measurements and their lack of cross-readability” were directed with “a special ire at powerful lords who abused standards in order to extort the population,” Cornu’s team said. The metric system was one response to “the discontent of peasants and the little people against the powerful.”

Anyway, a little bit of drop-related science history never hurt anyone—unless you volunteered to be an acid tester.

A (dead) star is born


Farah, Joseph et al. “Lense–Thirring precessing magnetar engine drives a superluminous supernova.” Nature.

Astronomers have discovered the mysterious power source of rare and radiant stellar explosions called “Type I superluminous supernovae,” which are ten times brighter than regular supernovae.

The secret superluminous sauce, as it turns out, is the birth of a magnetar, a highly magnetized stellar remnant, according to a supernova first observed in December 2024. The light from this stellar explosion contained imprints of the Lense–Thirring effect, in which spacetime is dragged around by massive and rapidly rotating objects, a key sign of a magnetar origin.
Artist’s conception of a magnetar surrounded by an accretion disk exhibiting Lense-Thirring precession. Image: Joseph Farah and Curtis McCully
“Our observations are consistent with a magnetar centrally located within the expanding supernova ejecta,” said researchers led by Joseph Farah of Las Cumbres Observatory. “These results provide the first observational evidence of the Lense–Thirring effect in the environment of a magnetar and confirm the magnetar spin-down model as an explanation for the extreme luminosity observed in Type I superluminous supernovae.”

“We anticipate that this discovery will create avenues for testing general relativity in a new regime—the violent centres of young supernovae,” the team concluded.

Forget “stellar” as slang for great; we have graduated to “superluminous.”

Thanks for reading! See you next week.


The government asked a judge to stop the spread of the videos on YouTube. The judge agreed, and ordered their immediate removal.#DOGE #News


DOGE Deposition Videos Taken Down After Judge Order and Widespread Mockery


A judge on Friday ordered the immediate removal of a series of depositions of members of DOGE, but not before clips of the depositions, including one in which a member was largely unable to define DEI, went viral and were covered widely, including by 404 Media.

At the time of writing, the depositions are not available on YouTube, where the Modern Language Association had uploaded them. The MLA, American Council of Learned Societies, and American Historical Association are suing the National Endowment for the Humanities (NEH) and others around DOGE’s cuts of hundreds of millions of dollars worth of grants. Neither the plaintiffs nor the government immediately responded to a request for comment.

This post is for subscribers only




This week, we discuss traveling for reporting and watching way too much DOGE testimony.#BehindTheBlog


Behind the Blog: DOGE Bros and Data Labelers


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss traveling for reporting and watching way too much DOGE bro testimony.

JOSEPH: I just wanted to write some brief notes about the DOGE depositions and the piece I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves. Much of the reason I managed to watch all of this testimony was because I was on a couple of long flights this week. On the first flight, I saw the Justin Fox deposition on YouTube. I started watching it and recording the timestamps of interesting parts, and passed those over to our social manager Evy who then cut them into videos which have since been shared pretty widely.

This post is for subscribers only





The data drops as Sen. Bernie Sanders calls for a moratorium on datacenter construction. 'We need to take a deep breath. We need to make sure that AI and robotics work for all of us, not just a handful of billionaires.'#News


People Hate Datacenters, Survey Finds


A new study from the Pew Research Center asked Americans about their feelings toward datacenters, and the results are not positive. Pew published the study the day after Sen. Bernie Sanders called for a moratorium on the construction of datacenters in the United States amid mounting public concern about the buildings’ impacts on local communities.

Pew surveyed 8,512 adults in January and asked them a broad range of questions about how they felt about datacenters. Most of the respondents said they’d heard of datacenters, and the more they’d read, the less they liked them.

💡
Is an unwanted datacenter being built in your community? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Most of the Americans surveyed believe that datacenters are bad for the environment, home energy costs, and the quality of life of people living nearby, and the numbers aren’t close. Only four percent of people thought datacenters were good for the environment, six percent good for jobs, and six percent good for people’s quality of life.

Despite those negative feelings, many of the people surveyed thought that datacenters would be good for jobs in the communities where they’re built and would boost local tax revenue. “Still, Americans are less likely to express positive views of data centers’ impact in these areas than to express negative views of their effects on the environment, energy costs and people’s quality of life nearby,” the research said.

Research shows that the reality of job creation by datacenters doesn’t actually live up to the promises from those lobbying to build them. “Data centers do not bring high-paying tech jobs to local communities because they operate as infrastructure projects rather than traditional job-creating businesses,” University of Michigan researchers wrote in a 2025 brief. “Although the construction of data centers can create many jobs, those are short lived.”

The survey charts a growing anti-datacenter sentiment in America. The US is in the middle of a massive infrastructure project similar to the Manhattan Project. In a mad dash to build out AI systems, companies are constructing massive buildings and energy infrastructure across the country, often with little input from local communities and at a massive cost.

The city of Ypsilanti, Michigan is fighting to stop the construction of a $1.2 billion datacenter that would be used to test nuclear weapons. In the middle of a massive winter storm that paralyzed the state in January, lawmakers in a rural South Carolina county pushed through the approval of a controversial $2.4 billion datacenter. In Oklahoma, police arrested a man who was speaking in opposition to a datacenter after he went slightly over his time during a city council meeting.

Datacenters are terrible neighbors. The buildings drive up the cost of energy for people who live nearby, consume massive amounts of water, and can produce noises and fumes that hurt locals. In Mississippi, locals are concerned about the pollution and noise caused by an xAI datacenter powered by gas turbines. A proposed datacenter project near Amarillo, Texas would be powered by four massive nuclear generators and pull water from an aquifer with dwindling reserves. In an effort to quell fears about power consumption, Trump made Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI sign a pledge to keep energy costs down. But a pledge isn’t a law. It’s not even an executive order.

Pew’s research came out the day after Sanders announced he was proposing legislation to put a moratorium on the construction of new datacenters in the US. “We are at the beginning of the most profound technological revolution in world history. That’s the truth,” Sanders said in a video posted on social media. “This is a revolution which will bring unimaginable changes to our world. This is a revolution which will impact our economy with massive job replacement. It will threaten our democratic institutions. It will impact our emotional well-being and what it even means to be a human being.”

We need a moratorium on AI data centers NOW. Here’s why. pic.twitter.com/dRfAdQ67zD
— Sen. Bernie Sanders (@SenSanders) March 11, 2026


“Congress hasn’t a clue how to respond…and protect the American people. It’s not only not having a clue, they’re busy out raising money all day long from AI and their super PACs,” Sanders said. “We need a moratorium on datacenters. We need to take a deep breath. We need to make sure that AI and robotics work for all of us, not just a handful of billionaires.”


The hours of videos provide fascinating, or perhaps horrifying, insight into the thinking of someone inside DOGE.#News


I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves


Over the course of a roughly six-hour deposition, Justin Fox, a former investment banker turned DOGE bro, refused to define what he believes counts as DEI; admitted he used ChatGPT to scan government contracts for terms such as “Black” and “homosexual” but not “white” or “caucasian;” and said that one of the grants he helped slash was “not for the benefit of humankind” before walking that claim back.

I watched all of Fox’s deposition from start to finish. The terse exchanges, the circular arguments, the pregnant pauses, all of it. The videos, available publicly on YouTube, were released as part of a lawsuit by the Modern Language Association, American Council of Learned Societies, and American Historical Association. They provide fascinating, or perhaps horrifying, insight into the thinking of someone inside DOGE. Even with Fox’s inability to answer seemingly easy questions, the responses are still illustrative of the recklessness and hamfisted nature of a group of young, inexperienced people who caused massive damage across the U.S. government, leading to negative consequences outside of it. DOGE as an organization has been linked to 300,000 deaths due to its cuts and multiple significant data breaches. All the while, DOGE did not actually reduce the government’s deficit.

This post is for subscribers only




Kenyan workers are still the underpaid labor behind AI training, moderation, and sex chatbots. The Data Labelers Association is fighting back.#AI #DataLabelersAssociation


'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back


Every day, Michael Geoffrey Asia spent eight consecutive hours at his laptop in Kenya staring at porn, annotating what was happening in every frame for an AI data labeling company. When he was done with his shift, he started his second job as the human labor behind AI sex bots, sexting with real lonely people he suspected were in the United States. His boss was an algorithm that told him to flit in and out of different personas.

“It required a lot of creativity and fast thinking. Because if I’m talking to a man, I’m supposed to act like a woman. If I’m talking to a woman, I need to act like a man. If I’m talking to a gay person, I need to act like a gay person,” he told me at a coworking space in Nairobi. After doing this for months, he, like other data labelers, developed insomnia, PTSD, and had trouble having sex.

“It got to a point where my body couldn’t function. Where I saw someone naked, I don’t even feel it. And I have a wife, who expects a lot from you, a young family, she expects a lot from you intimately. But you can’t, like, do it,” Asia said. “It fractured a lot of things for me. My body is like, not functioning at all.”

Asia eventually hit a breaking point and stopped working for AI companies. He is now the secretary general of a Kenyan organization called the Data Labelers Association (DLA) and the author of “The Emotional Labor Behind AI Intimacy,” a testimony of his time working as the real human labor behind AI sex bots. As part of the DLA, Asia has been working to organize workers to fight for better pay, better mental health services, an end to draconian non-disclosure agreements, and better benefits for a workforce that often earns just a few dollars a day. Data labelers train, refine, and moderate the outputs of AI tools made by the largest companies in the world, yet they are wildly underpaid and haven’t benefitted from the runaway valuations of AI companies.
Last month, the DLA held one of its largest events at the Nairobi Arboretum to sign up new members and help them tell their stories.

These workers are required to stare at horrific content for many hours straight with few mental health resources, are largely managed by opaque algorithms, and, crucially, are the workers powering the runaway valuations of some of the richest and most powerful companies in the world.

“You can’t understand where you’re positioned if you don’t understand your history,” Angela, one of the day’s speakers, told the workers who had assembled there (many of the speakers at the event did not give their full names). “When you think of colonialism, we were under British Imperial East Africa Company […] so literally, we are working under a company. We are just products, part of their operation. Stakeholders, we can say, but we are at the bottom of the bottom.”

“These multinationals are coming to rule and dominate here,” she added. “It’s a very unfortunate supply chain, and my call today as data labelers is to build up on this—as we are fighting for labor rights, we are also fighting for the environment […] we are fighting big companies. We are fighting the British imperialist companies of today. It’s Apple, it’s Meta, it’s Gemini. Those are the ones we’re still fighting. It’s a call for solidarity and expanding our thinking beyond what we are doing, beyond our labor.”

In my few days in Kenya earlier this year, where I was traveling to speak at a conference about AI and journalism, it was immediately clear that data labelers make up a significant portion of the country’s tech workforce. Nearly everyone I spoke to there had either been a data labeler (or a content moderator) themselves or knows someone who has. Leaving the airport in Nairobi, you immediately drive by Sameer Business Park, an office complex that houses Sama, a San Francisco-headquartered “data annotation and labeling company” that has contracted with Meta, OpenAI, and many other tech giants. Sama has been sued repeatedly for its low pay and the fact that many of its workers suffer PTSD from repetitively looking at graphic content. For years, a giant sign outside its office read: “Samasource THE SOUL OF AI.” My Uber driver asked why I was going to a random office building in Nairobi’s Central Business District—I told her I was going to interview a data labeler. “Oh, I do data labeling too,” she said.

Michael Geoffrey Asia. Image: Jason Koebler
Asia studied air cargo management in university. He graduated and expected to find a job planning out cargo and baggage routes, but couldn’t find one because he graduated into an industry ravaged by COVID. Around this time, his child was diagnosed with lymphatic cancer, and he took out a loan of about $17,000 USD to pay for his treatments. He needed work, and found data labeling.

“It wasn’t offering good pay, to be honest,” Asia told me. “It was around $240 US dollars per month. But I felt like I didn’t have an option, I had a financial crisis, a sick child.”

Asia took a job at Sama, where he worked on various Meta projects. “You’re given a video and then told to describe the video, or you’re given pictures of people and told to identify faces. You’re supposed to draw bounding boxes around the faces and label that.” Last week, Sweden’s Svenska Dagbladet reported that Kenyan data labelers for Sama have been viewing and annotating uncensored footage from Meta’s AI camera glasses, which has included highly sensitive and violent footage.

Asia, through a group of colleagues and friends who called themselves “the Brotherhood,” eventually found another data labeling job that let him work from home. “We were a group of six friends, and everyone had to bring three job opportunities on a weekly basis,” he said. “I came across another gig that ended up not being a good one, where I had to annotate pornography.”

At this job, Asia went frame-by-frame in porn videos to annotate what was happening and what type of porn category it could possibly be. “You’re supposed to put yourself in the minds of the 8 billion people on Earth, every second of that video. So I may have someone searching for this pornography in Cuba and think ‘these are the tags they can use,’ if you’re searching ‘doggy,’ you know, that kind of thing,” he said. “So I worked on pornography for eight hours a day, and I did that project for eight months.” His ‘boss’ at the time was essentially a no-reply email with a link sent each day that gave him his work.

At the same time, Asia picked up a second job that started immediately after his shift tagging porn ended, where he was “training” AI companion bots, though he had no way of knowing which company he was actually working for. He quickly surmised that he was simply taking on the persona of different AI sex bots and was sexting with real people in real time.

“I could feel the human aspect in the conversations. Most of the people on the other side were lonely people,” he said. “I would have several profiles and the profiles are switching constantly depending on the needs of the person who pops up on your dashboard. I’d be sitting here talking to an old woman who needs love, but if she goes offline, another conversation pops up and then I’m responding to a gay person.”

The two jobs, done back to back, caused him to have insomnia, PTSD, and trouble having sex. Some data labelers, he said, work 18 hours a day. When I met him, he said he had essentially gone three full days without sleep because his body still hasn’t readjusted from his messed up schedule.

Asia said he eventually was able to get mental health counseling through his child’s cancer center, which started because he was the caregiver of a child with cancer but quickly turned into therapy for PTSD related to his job. “It was of immense help to me as a person, it was one of the best services I’ve ever gotten, because they stood with me, and I said ‘I need a solution to this.’”

“We need technology, but it shouldn’t come at a human cost. What is so hard with offering mental support to the people working on graphic content? If this job was done in the U.S., would they do what they are doing in Kenya? Would they still give the pay they’re giving here? Here we are paid $.01 per task—it doesn’t make sense. Why this discrimination? If they can pay people in the U.S., well that means they can pay people in Kenya,” Asia said.


Image: Data Labelers Association
The message of many data labelers and of the lawyers who have been helping them is that artificial intelligence is not a magical tool built by people in San Francisco making millions of dollars a year and pushing their companies to insane valuations. Artificial intelligence is an extractive technology that relies on the brutal labor of underpaid workers around the world. For years, the work of African data labelers has been more or less “ghost work,” the unseen, hidden labor that lets American tech companies build their products.

“AI can never be AI without humans. It is not artificial intelligence. It’s African intelligence,” Asia said. “Most of these are dirty jobs and most of these jobs have been done here in Africa. And then once you’re done, once a tool is functional, all the communication stops. You get locked out. We are training our own death. We train ChatGPT and it’s killing us slowly.”

Draconian nondisclosure agreements and terms of services that workers can’t opt out from have created a culture of fear, and one of DLA’s goals is to make it easier for workers to speak out. At the time I met Asia in January, the DLA had 870 members, but its ranks have been growing quickly.

“I’m doing this from a point of experience, not assumption. I have been through this. I know what I’m talking about,” Asia said. “We have this monster called the NDA. The NDA is a slave tool used to enslave people to not speak about what they’re going through. I’m very much ready for any legal battle [associated with NDAs] because we’re not going to keep quiet. This is us suffering, and we can’t suffer in silence. This is not the colonial period. I have the right to speak against any violation [of my rights] and that’s what I’m doing.”

Mercy Mutemi, a workers’ rights lawyer who has sued several big tech companies including Meta for how they treat content moderators and data labelers, told me that when something happens in the United States—when a new gadget or product or feature or policy is launched, there’s a corresponding reaction in Africa.

“When something happens in the U.S., there’s an African cost to that,” she said. “Kenya has been pushing for trade deals with the U.S., right? And the direction that conversation is taking is about immunity and protection for big tech. It’s like, ‘You want any business with us at all? Well, you’ve got to get Meta out of these cases.’”

Mutemi has been working on the Meta lawsuit, and on pushing back against NDAs so that workers can more freely talk about their experiences. Tech companies “get people in a mental jail where they feel like they can’t talk about this. But NDAs are nonsensical—our laws don’t recognize these types of NDAs,” she said. “There’s a way to go about this where it’s not exploitative.”

Back at the arboretum in Nairobi, the message to DLA’s members is largely that their work is important, that it’s human, and that they deserve better.

“Africa is at the bottom of the supply chain of AI. But right now, the fact that we are all here and most of you are data labelers—you are the people who supply the labor. When we think of the whole AI ecosystem, who’s an engineer, and maybe that’s the image of AI that the majority of the world has,” Angela said. “And that’s actually very intentional. To make [your labor] invisible, to make AI look like this shiny object that no one understands, it’s very automatic and beautiful and tech. That’s the intentionality of hiding the labor and the behind the scenes of AI.”


Copilot “can help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis,” the memo says.#AI #News


Here’s the Memo Approving Gemini, ChatGPT, and Copilot for Use in the Senate


A top Senate administrator approved OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot for official use in the Senate, the New York Times reported on Tuesday. 404 Media has obtained the full text of the memo and is publishing it below.

“The Sergeant at Arms (SAA) office of the Chief Information Officer (CIO) has approved the use of three Generative Artificial Intelligence (AI) platforms with Senate data,” the memo starts. It also says the SAA will provide each Senate employee with one free license to either Gemini Chat or ChatGPT Enterprise, with Copilot also available at no cost.

💡
Do you know anything else about the government's use of AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only





What experts say about AI psychosis, how ProtonMail data helped the FBI identify a protester, and a viral app that exposed incredibly personal data of hundreds of thousands of people.#Podcast


Podcast: How to Talk to Your Friend Experiencing 'AI Psychosis'


This week we start with Sam’s story discussing something that has come up a lot but no one has really answered: how do you speak to your friend or family member falling into AI psychosis? After the break, Joseph breaks down what happened when the FBI wanted data from ProtonMail. In the subscribers-only section, Emanuel tells us about the viral developers behind an app called Quittr, and how they exposed very sensitive data of hundreds of thousands of users.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.



To better understand what exactly we’re looking at in this dystopian surveillance hellscape, 404 Media’s Jason Koebler and Joseph Cox joined Reddit's r/technology for an Ask Me Anything session.#Flock #ICE #Surveillance #Reddit


From Flock to ICE, Here’s a Breakdown of How You’re Being Watched


It’s nearly impossible not to be watched these days. It can start right at home with your neighbors and their Ring cameras—a company that sold fear to the American public and is now integrating AI to turn entire neighborhoods into networked, automated surveillance systems.

Head out a bit further and you’ll likely be confronted by Flock’s network of cameras that not only track license plates, but also track people’s movements with detailed precision. And as the Trump administration raids cities across the U.S. for undocumented immigrants, tech giants like Palantir are powering tools for ICE, including one called ELITE that helps the agency pick which neighborhoods to raid.

To better understand what exactly we’re looking at in this dystopian hellscape, 404 Media’s Jason Koebler and Joseph Cox joined r/technology for an AMA.

Understandably, people are worried about violations of their privacy by companies and the government. And many wonder, is there any way to go back once we’ve released all this AI-powered surveillance tech?

Questions and answers have been edited for clarity.

Q: How do you think we can as a society deescalate tools designed to spy on citizens? I feel like once the police state bottle is open it’s near impossible to put it back in?

JASON: This is something I grapple with a lot. For whatever reason, my reporting has gravitated to state and local surveillance tools owned by police. This is not uniformly true, but what I've seen based on watching zillions of city council meetings and reading thousands of pages of emails and public records is that police, in general, love new toys and love new gadgets. The strategy is very often ‘get the surveillance tech first and ask questions later.’ A lot of city councils are not very sophisticated about the risks of surveillance technology and a lot of them feel a lot of pressure to keep their city safe or whatever, and so they defer to the police and give them money for whatever they ask for. There are also tons of grants and pilot programs in which police can obtain technology for cheap or free, and so the posture cities take is often ‘why not try it?’ Police love telling each other about the new capabilities and tools that they've acquired, so this tech can spread from city to city very quickly.

All of this can be pretty demoralizing but something that we've seen is that when you shine even a tiny bit of light on the ways these systems work, how they can and are often abused, people learn a lot about the intricacies of them very quickly. At this point, I am getting emails and messages multiple times a week from people in a new city or town that has either decided not to buy Flock or has decided to stop working with Flock, and usually our reporting is cited in some way. The issue is that it's not just Flock, there's all sorts of surveillance tools and new companies are popping up all the time. So it does feel like it's hard to put the genie back in the bottle, but I do think that, overall, the public discussion on surveillance and privacy is getting a lot more sophisticated, and that gives me optimism.

Q: Given the breadth of these surveillance technologies, is there any hope or possibility of opting out or avoiding being “seen”? Do we accept surveillance and aggregated data about ourselves and our behavior as an inevitability?

JOSEPH: I don't think privacy is dead. I don't think people need to give up and say fine, take my data. There are concrete things people can do. But they do introduce friction. The trade off with security is efficiency. The more efficient, the less secure you might be. The more secure, the less efficient. An extreme example would be not owning a mobile phone. Well, you're immune to producing any mobile phone telecom data because you don't own one. But that's gonna be a massive pain.

Concrete things people can do:

  • Explore legislation that will let you demand a company deletes your data. Google a template of the language to send, it's pretty easy
  • Maybe delete your AdID in your phone, or change it. Here's how on Android. This is the digital glue advertisers, and parties that buy that data, use to stick together your device and its usage.
  • Use a different email for each service. This is too much work to make constant new addresses (unless you just use one junk one). I like Apple's iCloud Hide My Email feature which gives you (they say) an unlimited number of emails to generate. Then if a website is hacked or your data sold, it is not necessarily clear that the data belongs to you. Obviously it depends on the service but I use that every day.


Q: Are new phones being built with spyware technology and how will we know? Will Independent Media be able to continue reporting if all of our technology blocks the truth from ever reaching the masses?

JOSEPH: Supply chain attacks are what really scare me. You have a device you trust, or a piece of software you download from a legitimate source, and even then someone has snuck in some malware. The biggest one right now which was reported just recently is the Notepad++ case.

That said, we haven't seen much widespread reporting about it happening to new phones (beyond there being annoying sketchy apps, that does happen). I'd flag that the Bloomberg piece claiming the Apple supply chain was somehow compromised was widely debunked by the infosec community.

Q: What can you infer from the info you learned to explain why some ICE agents just pull cars on the street to arrest people instead of going after them from their home?

JOSEPH: I think there are a few things going on. Some parts of DHS want there to be targeted raids, against specific people, specific addresses. Others (Bovino) want a more blanket, indiscriminate approach. I'd point to this really good reporting in The Atlantic about that tension inside the agency.

But other than that, data can only go so far. Data by itself can't make these agents fulfill their arbitrary and extreme quotas of how many people to detain. At some point, the mass deportation effort becomes distinctly low tech. It's almostttt like the XKCD comic about password security and wrench attacks. It basically boils down to grabbing who they can or feel they can.

Q: Do you ever hear from workers at Palantir (or other similar companies) about what things are like there?

JOSEPH: I won't talk about sources specifically, but a couple of things: some people inside Palantir are clearly motivated enough by what the company is doing with ICE to then leak details of that work to journalists. That started with this piece, Leaked: Palantir’s Plan to Help ICE Deport People. That was a pretty unusual leak in that it contained both Slack messages and an internal Palantir wiki in which company leadership explained and justified its work with ICE.

Leaked: Palantir’s Plan to Help ICE Deport People
Internal Palantir Slack chats and message boards obtained by 404 Media show the contracting giant is helping find the location of people flagged for deportation, that Palantir is now a “more mature partner to ICE,” and how Palantir is addressing employee concerns with discussion groups on ethics.
404 Media | Joseph Cox


Broadly, I think a lot of people inside tech companies (both social media giants and surveillance companies) are often conflicted about their work. Some leave. Some put it out of mind and stay. Some leak.

Q: Do we know what information was handed over to Palantir from DOGE? I don’t think the majority of Americans understand just how dangerous this company is right now.

JOSEPH: I think we are still learning the specifics of that. When we reported on ELITE, the Palantir-made tool ICE is using, the user guide said the tool included data from the Department of Health and Human Services. Now, I don't think the list in the user guide is exhaustive by any stretch. It says ELITE integrates new data sources.

What new data sources has ICE gotten recently? IRS. CMS. Medical insurance databases. I'm not saying that data is being fed into ELITE. I don't know that and can't report it. But I absolutely think it's possible and would make sense.

Q: Are public record requests Flock's Achilles heel?

JASON: I think you've hit on something here—the business model of not just Flock but of a lot of surveillance companies is to go city by city pitching and selling their tech to local police officers. Because of the hollowing out of local news over the last 20 years, there have been fewer journalists paying attention to city council meetings, and a lot of this tech is acquired directly by police through discretionary budgets. So for years, surveillance companies have been able to essentially go to a couple small police departments, demo their tech, get a contract. Then, through police listservs and conferences and email chains, the police start to talk about their new toys with other districts, and companies can quickly go from having just a few contracts to having dozens, hundreds, or thousands of contracts. That is more or less what's happened with Flock—a lot of officers within the police departments that were early adopters of the tech have actually been hired by the company to be lobbyists and salespeople. I've focused a lot of my reporting over the years on this dynamic and how this usually goes.

But what has happened, as you've noted, is that because these surveillance companies are working with so many police departments and cities, they are subject to public records from all of them. When a company sells only to the federal government, they may be able to be very careful about what they say, what they put in writing, how they pitch their product etc. But when a company is hyperfocused on growth at the local level, they have to explain how their tech works over and over again, and highlight different features and capabilities. They create a lot of public records doing this, and journalists and concerned citizens have noticed this and have been vigilant about requesting documents that their tax dollars are paying for. So yes, this is how we're learning a lot about Flock, and it's also how governments that may not have known about abuses or how pervasive this tech is are learning about Flock too.

So my very long answer to your question is that public records requests themselves are not Flock's Achilles heel; I think Flock's design, business model, and approach to surveillance are. But the way it operates across tons of cities leaves it more vulnerable than it would have expected to the transparency we all deserve, and it cannot plausibly fight against the release of public documents in thousands and thousands of cities at once.

Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error
Flock is going after a website called HaveIBeenFlocked.com that has collated public records files released by police.
404 Media | Jason Koebler


Q: Our local PD has stated that they have control over their Flock data. To me this implies that other Flock users can’t search the ALPR data from our city. Can you talk about what in particular Flock users can search for?

JOSEPH: Yeah, the ownership of Flock data is interesting. Flock says the police own it. Police say and believe that too. I think that is correct... mostly. Until our reporting (and maybe still now) many police forces seem to fundamentally misunderstand the Flock product, especially the nationwide network. When we contacted police departments when we were verifying that local cops were doing lookups for ICE, some of them had no idea what we were talking about. We had to explain how the system worked. Then many police departments realized what was happening and changed their access policies. So, police departments do own the footage (unless it's in Washington where a court has said actually it's a public record). But they might not realize who they are accidentally giving access to their cameras to.

Q: What is the state of the Fourth Amendment in the courts (and Supreme Court clarification) regarding Flock type surveillance currently?

JASON: There are a few lawsuits. One in San Jose. There was one in Norfolk, Virginia which just got decided in the city's favor (Flock's favor). It's being appealed.

The general argument is that you don't have an expectation of privacy in public and that you can take pictures of anything from public roads (basically). Another argument is that license plates are government data, roads are funded by taxpayers and are therefore public, so no problem here. What our law hasn't grappled with is the fact that all of these are networked together and automated, so it's a little different, in my opinion, from having one discrete camera that takes one discrete picture and then has to be accessed by a human. Instead you have thousands of networked cameras building a comprehensive database over time. I feel like that's functionally something different but our laws have not evolved to deal with this yet.

Q: Have we seen any of this technology spread (or attempt to spread) beyond the US, perhaps to other governments?

JOSEPH: Yep, absolutely. The UK has a robust facial recognition program, scanning people in public constantly, for example.

I would say it is often the other way around: technology is made or used overseas then it comes to the U.S. Cobwebs, which makes the Webloc location data tool ICE has bought access to, is from Israel (they're now part of an American company called Penlink). Paragon, the spyware that ICE bought, is also from Israel.

Q: Regarding the story posted on 404 Media about Apple’s Lockdown mode, is this the first time (publicly perhaps) the government has had issues accessing a phone with that mode enabled?

JOSEPH: I believe this is the first time we've seen the government admit it cannot access an iPhone running Lockdown Mode. Maybe it is in other court documents, but I don't think it's been reported.

FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled
Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties from unlocking someone’s device. At least for now.
404 Media | Joseph Cox


I don't think Apple will make changes based on this. That's for a few reasons:

  • Apple has continued to make changes that thwart mobile forensics tools, like the silent reboot we revealed
  • Frankly I don't think this case is high profile enough to cause that kind of response. San Bernardino was a freak, horrible event. An actual terrorist attack. That is part of why the DOJ came down so hard
  • It went against their long-standing ideas of just making their product more secure

Now, Cook has obviously gotten closer to President Trump. It is embarrassing. Giving him a gold statue, or whatever. But that's different from undermining their users' security (pushing the product into China and making concessions there, that's another story).

Q: What surveillance tools do you anticipate seeing develop and integrate further into American society in the next three years without legislative oversight?

JASON: I hate that this is my answer but I think that there's going to be a lot, and I am pretty concerned at what I've seen. Here we go:

  • Police departments are obsessed with Drone as First Responder programs (called DFR), which are basically little autonomous drones that fly out to the location of a 911 call as the call is happening. Some reporting has shown that this ends up with lots of people getting drones sent at them when they're mowing the lawn too loudly or something. This is being integrated with ALPR cameras and other AI tools. Not into it.
  • I think real-time facial recognition and AI cameras that are networked together is the next big thing. New Orleans is already doing this through a quasi-public “charity,” which I'm writing about for next week. We've also written about a company called Fusus which is quite concerning.
  • We've seen some early AI persona bots being used by police to infiltrate social media groups. I think these are very goofy but also cops seem generally obsessed with cramming AI and facial recognition into everything they can and I think we're about to see an explosion in this space.

Q: Outside of 404 Media, what books or resources do you recommend to folks looking to learn more about surveillance in America or globally?

JOSEPH: I definitely recommend Means of Control, Byron Tau's book. He was the first journalist to report that government agencies (including ICE and CBP) were buying smartphone location data from data brokers. It's a great book to give you a true idea of the scale of the interaction between private industry and the government. This is much more important than, say, any links between Facebook and the government. Here they just literally buy the data.

For families, I think Flock is a good one. Everyone understands what it is like to drive around and how they sometimes go places they might not want others to know for personal privacy reasons. Well, are you okay with authorities being able to query that without a warrant? And are you okay with law enforcement in, say, a town in Texas being able to then look up the movements of people across the country? I think it's a pretty good tangible example that doesn't require a lot of tech stuff.

JASON: I'll add to this briefly. This is not an exhaustive list, but off the top of my head:

Zack Whittaker's This Week in Security newsletter is really good.

Our old colleague and friend Lorenzo Franceschi-Bicchierai at TechCrunch does really great work. Groups like the EFF, ACLU, Electronic Privacy Information Center, and Center for Democracy and Technology all focus on different things but are often surfacing interesting surveillance-related cases and can be helpful in terms of understanding some of the legal issues around surveillance. Lucy Parsons Lab does amazing work. The Institute for Justice is a libertarian group that always finds very interesting privacy and surveillance cases.

With Ring, American Consumers Built a Surveillance Dragnet
Ring’s ‘Search Party’ is dystopian surveillance accelerationism.
404 Media | Jason Koebler


Another one I feel people understand immediately is Ring cameras. So many people have them, and I think a lot of people like them. But I have found Ring cameras as a useful intro point just because they are so popular. Should we be filming our neighbors at all times? Putting it on Nextdoor and social media sites? Connecting it to local police? What about the entire neighborhood's cameras? Should it go to ICE, etc? I think that unfortunately a lot of people will say ‘I want to protect my house and my family,’ but I do find it's usually possible to have a nuanced talk about Ring cameras, at least in my personal life, and that often opens people's eyes to other, similar systems.


‘Elon Musk is an aggressive and irresponsible salesman, who has a long history of making dangerous design choices, and over-promising the features of his products.’#News #Tesla


Cybertruck Tried to Drive 'Straight Off an Overpass' Attorney Claims


A Cybertruck owner in Texas is suing Tesla for $1,000,000 in damages for “grossly negligent conduct” following an accident on a Houston highway that involved the vehicle’s self-driving feature. According to the lawsuit, Tesla is to blame for the crash because CEO Elon Musk has oversold the truck’s ability to drive itself.

As originally reported by the Austin American-Statesman, Justine Saint Amour bought a Cybertruck from a used car dealership in Florida and drove it until it crashed on a Houston overpass on August 18, 2025. That summer day, Saint Amour was driving down Houston’s 69 Eastex Freeway with the vehicle’s full self-driving (FSD) mode engaged.
“Something terrifying happened, without warning, the vehicle attempted to drive straight off an overpass,” Bob Hilliard, Saint Amour’s attorney, told 404 Media in an emailed statement. “She tried to take control, but crashed into the barrier and was seriously injured—mostly her shoulder, neck, and back.”

Hilliard shared a photo of the aftermath of the crash and dashcam footage with 404 Media. In the video, the Cybertruck proceeds down the highway and hops an intersection instead of turning to the right and following the road. It only stops when it slams into a signpost on the overpass.


The lawsuit blames the crash on Musk. “Elon Musk is an aggressive and irresponsible salesman, who has a long history of making dangerous design choices, and over-promising the features of his products,” the lawsuit said. “This promotion of products, for capabilities that they do not have, is the reason for this incident.”
Musk has spent the past few years promoting Tesla’s ability to drive itself, a feature that costs $99 a month and is sold as “Full Self-Driving.” But, the lawyers said, the FSD feature doesn’t work as advertised and it’s irresponsible of Tesla and Musk to market their vehicles as having the feature. “Despite this dangerous condition of Tesla’s ‘self-driving’ vehicles, Elon Musk and Tesla have made representations in the year 2019 that Tesla’s full ‘self-driving’ vehicles were fully operational and safe.”

Tesla and Musk have gotten in trouble for this before. In February, the company agreed it would stop using the terms “Autopilot” and “Full Self-Driving” when advertising its vehicles in California. There have been multiple fatal and non-fatal crashes involving Tesla vehicles running on Autopilot, including a man who hit a parked police car in 2024. In August, a judge ordered Tesla to pay $200 million in punitive damages and another $43 million in compensatory damages to the family of a 22-year-old who died in a crash involving the car’s Autopilot system.

According to the lawsuit, one of the reasons this keeps happening is because Musk intervened directly to make Teslas cheaper by using cameras instead of LiDAR, which uses laser light to create a 3D map of the surrounding area. “Elon Musk’s intervention into the design of Tesla vehicles has long been reckless and dangerous. While engineers at Tesla recommended the super-human vision of LiDAR be included for self-driving vehicles, and competitors like Waymo and Cruise relied heavily on LiDAR, Musk chose instead to rely only upon cheap video cameras,” the lawsuit said. “Musk referred to the LiDAR used by his safer competitors as expensive and unnecessary.”

Fully automated driving is a hard tech problem. LiDAR is better than basic cameras, but it’s still not perfect, and LiDAR-based self-driving cars crash too. There are other problems as well. In cities operating Google’s Waymo cars, passengers are leaving the doors open and Waymo is contracting DoorDashers to close them for $10 a pop, a Waymo in LA attempted to drive through a police standoff, and a woman in San Francisco was trapped in a Waymo after men blocked the car and started to harass her.



“I think we often underestimate their capabilities,” said one of the researchers who uncovered a pre-Inca trade route linking the Amazon rainforest to the Pacific coast.#TheAbstract

Cecilia D’Anastasio on Roblox’s efforts to protect children from pedophiles.#Podcast #roblox


Understanding Roblox’s Grooming Problem


Roblox is one of those games that is more popular than you can imagine, but unless you are of a certain age group and live in that world, you’ll rarely hear about it unless it makes the news for some terrible reason. More recently, for example, we wrote about the Tumbler Ridge shooter who created a mass shooting simulator in Roblox.

But what is Roblox, how big is it exactly, and why does it seem like it's so frequently embroiled in controversy? This week we’re joined by Cecilia D’Anastasio in an attempt to answer all of these questions.

Cecilia reports on video games at Bloomberg, and has written many important articles about the business and controversies of one of the biggest games in the world, Roblox. A few weeks ago we had Patrick Klepek on to discuss Roblox from a parent’s perspective, but today we’re going to hear about it from the perspective of a great investigative reporter and, for my money, the most knowledgeable journalist covering Roblox.
404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. As well as bonus content every single week, subscribers get access to additional episodes where we respond to their best comments. Subscribers also get early access to our interview series. Gain access to that content at 404media.co.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube.

Become a paid subscriber for early access to these interview episodes and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.


The 'Freedom Trucks' will haul AI slop George Washington on a tour across 48 American states.#News #AI


I Visited the ‘Freedom Truck’ to Meet PragerU’s AI Slop Founders


In the parking lot of Seven Oaks Elementary School in South Carolina, on one of the first hot days of the year, I watched an AI-generated George Washington talk about the American Revolution. “Our rights are a gift from God, not a favor from kings or courts,” slop Washington told me. It spoke from a screen that stretched floor to ceiling, trimmed by a fancy frame.

The intended effect is to make it appear as if the founding father is a painting come to life, a piece of history talking to the viewer. The actual effect was to remind me that the AI slop aesthetic is synonymous with the Trump presidency and has become part of the visual language of fascism. Which is fitting, because AI George Washington is the result of a collaboration between the Trump White House and online content mill PragerU.
The AI slop founding father is part of a touring exhibit of Freedom Trucks commissioned by PragerU in honor of the 250th anniversary of American independence. The trucks are a mobile museum exhibit meant to teach kids about the founding of the country. It’s pitched at kids—most of the “content,” as staff on site called it, is meant for a younger audience but the trucks have viewing hours open to the general public. Nick Bravo, a PragerU employee on hand to answer questions, told me that there are six Freedom Trucks and that the plan is to have them travel the 48 contiguous United States over the next year.

I was drawn to the Freedom Truck because I’d heard they contained AI-generated recreations of Revolutionary figures like Washington, Betsy Ross, and the Marquis de Lafayette, similar to the ones on display at the White House. To my disappointment, the AI-generated videos in the Freedom Truck are remarkably boring.

As I watched the AI George Washington deliver a by-the-books version of the American story, I thought about Jerry Jones. The famously vain owner of the Dallas Cowboys commissioned an AI version of himself for AT&T stadium in 2023. Fans who make the pilgrimage to the stadium can watch a presentation and ask the AI Jones questions. The AI wanders a big screen while it talks to the audience.

Other than the lazy AI-generated videos, the Freedom Truck doesn’t have much to offer. I signed a digital copy of the Declaration of Independence on a touchscreen and took a quiz that asked leading questions designed to find out if I was a “loyalist or patriot.”

“The British Army sends soldiers to Boston. How do you react?” Answer 1: “View them as occupiers violating colonial liberty.” Answer 2: “Welcome them as defenders of law and order.” With ICE and the National Guard patrolling American cities, I wondered how supporters of the current administration would answer that one.

PragerU is known for its “America can do no wrong” view of US history. Its short form video content offers a cartoon version of the past stripped of nuance and context where the country lives up to the myth that it is a “Shining City On a Hill.” According to PragerU, white people abolished slavery and dropping the atomic bomb on Japan was a necessary thing that “shortened the war and saved countless lives.” Now PragerU is taking its view of history on tour across the country. School children in every state will wander these trucks and encounter an AI slop version of the past.

Bravo told me that all the truck’s content was generated as part of a partnership between PragerU and Michigan’s Hillsdale College—a Christian university that helped craft Project 2025. There were, of course, hints of Project 2025 around the edges of the child-friendly AI-generated videos. Slavery isn’t ignored but the stories of early African Americans like poet Phillis Wheatley focus on her celebration of America rather than how she arrived there. On the museum’s “Wall of Heroes,” Whittaker Chambers is nestled between architect Frank Lloyd Wright and painter Norman Rockwell.

A small note near the floor at the exit of the truck notes the collaboration of PragerU and Hillsdale College, and claims that “neither institution received any federal funds and both generously contributed their own resources to help create this educational exhibit.” It also said “this truck was made possible through a grant from the Institute of Museum and Library Services,” which is, of course, a federal agency.

Every AI-generated video ended with a title card showing the White House and PragerU’s logo. “The White House is grateful for the partnership with PragerU and the US Department of Education for the production of this museum,” the card said. “This partnership does not constitute or imply a US Government or US Department of Education endorsement of PragerU.”

Trump attempted to dismantle the Institute of Museum and Library Services (IMLS) via executive order in 2025, but the courts blocked it. Libraries and museums have since reported that the IMLS grant process has taken on a “chilling” pro-Trump political turn. The administration has also attempted to dismantle the Department of Education.

Trump’s voice was the last thing I heard as I wandered into the bright afternoon sun. “I want to thank PragerU for helping us share this incredible story,” he said in a recorded video that played on a loop in Freedom Truck. “I hope you will join me in helping to make America’s 250th anniversary a year we will never forget.”




A NASA spacecraft’s crash into a small asteroid in 2022 moved the asteroid’s orbit around the Sun, according to a study that presents the “first-ever measurement of human-caused change in the heliocentric orbit of a celestial body.”#TheAbstract


Humanity Has Altered an Asteroid’s Orbit Around the Sun


Welcome back to the Abstract! Here are the studies this week that moved the heavens, coveted crystals, dined on lunar legumes, and got a four-star review.

First, humanity has permanently signed its name into the orbital dynamics of the solar system. Take the win! Then, we’ve got the origins of our obsession with sparkly rocks, a stint of extraterrestrial gardening, and a story of stellar significance.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens or subscribe to my personal newsletter, the BeX Files.

DART delivers an orbital bullseye


Makadia, Rahil and Steven R. Chesley. “Direct detection of an asteroid’s heliocentric deflection: The Didymos system after DART.” Science.

Well folks, pack it up: Humanity has shifted the path of a celestial object around the Sun.

You may remember NASA’s Double Asteroid Redirection Test (DART) spacecraft, which slammed into an asteroid named Dimorphos in September 2022. Dimorphos, which is about the size of the Great Pyramid of Giza, orbits an asteroid named Didymos, roughly five times bigger. In the aftermath of the crash, scientists determined that DART had successfully shifted Dimorphos’ path around Didymos, shortening its roughly 11-hour orbit by 33 minutes.

Now, scientists have confirmed that the mission also changed the entire binary system’s “heliocentric” orbit around the Sun. While scientists had expected the spacecraft to push this pair of asteroids off-kilter, a new study has now quantified the impact by presenting “the first-ever measurement of human-caused change in the heliocentric orbit of a celestial body.”

The team determined that the system’s pace around the Sun was slowed by about 10 micrometers per second as a result of the mighty spaceship wallop. It took years to refine that measurement, which the researchers calculated using radar and stellar occultations: observations of the system as it passes in front of background stars.
But it’s worth the wait to know that we shifted a celestial object’s circuit around the Sun, even by a tiny bit—an achievement that may come in handy if we ever need to deflect an asteroid or comet on a collision course with Earth.
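To get a feel for how small (and yet measurable) that change is, here's a rough back-of-envelope sketch. It is not from the paper: it simply treats the reported ~10 micrometers per second as a constant along-track speed offset and asks how far the Didymos system drifts from where it would otherwise have been. The real drift actually grows faster than this, because the impact also changed the orbital period.

```python
# Back-of-envelope only: linear drift from a tiny constant speed change.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds
delta_v = 10e-6  # m/s, the reported change in the system's heliocentric pace

for years in (1, 10, 100):
    drift_m = delta_v * years * SECONDS_PER_YEAR
    print(f"After {years:>3} years: ~{drift_m:,.0f} m of along-track drift")
```

Even this lower-bound estimate puts the system hundreds of meters off its old track within a year, which is why the shift is detectable at all with radar and occultation timing.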

“By demonstrating that asteroid deflection missions such as DART can effect change in the heliocentric orbit of a celestial body, this study marks a notable step forward in our ability to prevent future asteroid impacts on Earth,” said researchers co-led by Rahil Makadia of the University of Illinois Urbana-Champaign and Steven R. Chesley of NASA Jet Propulsion Laboratory.

So, forget moving mountains—we’ve graduated to moving space rocks.

For anyone interested in learning more about DART, I highly recommend How to Kill an Asteroid by Robin George Andrews, which provides a fascinating inside account of the mission.

In other news…

Chimps glimpse a “big beyond”


García-Ruiz, Juan Manuel et al. “On the origin of our fascination with crystals.” Frontiers in Psychology.

It’s crystal clear: we love crystals. Humans and our early hominin relatives have collected crystals for nearly 800,000 years, making them “among the first natural objects collected by hominins without any apparent utilitarian purpose,” according to a new study.

To explore the origins of this fascination, scientists gave chimpanzees, our closest living relatives, a bunch of sparkly crystals at an ape preserve in Spain. The chimps were intrigued by the offerings; indeed, one female named Sandy immediately absconded with a large crystal dubbed the “Monolith” and took it back to her group’s indoor dormitory for two days.
Chimp Toti attentively observes the quartz crystal during Experiment 1. Image: García-Ruiz et al., 2026.
“When the team of caretakers tried to retrieve the crystal, it took hours to exchange it for valuable ‘gifts’ (i.e., favored food items—bananas and yogurt—which are known from daily observations to be highly appreciated by the chimpanzees), which suggests that the crystal was highly valued,” said researchers led by Juan Manuel García-Ruiz of Donostia International Physics Center.

“Crystals may have contributed to the development of metaphysical and symbolic thinking, acting as catalysts for the conceptualization of a ‘big beyond,’” the team concluded.

Shining moonbeams on moon beans


Atkin, Jessica et al. “Bioremediation of lunar regolith simulant through mycorrhizal fungi and plant symbioses enables chickpea to seed.” Scientific Reports.

Scientists are finally addressing my dream of enjoying locally grown falafel on the Moon. In a new study, a team experimented with planting chickpeas in lunar regolith simulant (LRS), a human-made substance that mimics lunar soil.

The results revealed that chickpeas could flower and produce seeds in the simulant, provided that it was treated with arbuscular mycorrhizal fungi (AMF) which are fungal microbes known to protect plant health. Small additions of vermicompost also helped the Moon beans flourish.
The Moon chickpeas. Image: Jessica Atkin
“Plants seeded successfully in mixtures containing up to 75 percent LRS when inoculated with AMF,” said researchers led by Jessica Atkin of Texas A&M University. “Higher LRS concentrations induced stress; however, plants grown in 100 percent LRS inoculated with AMF demonstrated an average extension of two weeks in survival compared to non-inoculated plants.”

“We present a step toward sustainable agriculture on the Moon, addressing the fundamental challenges of using Lunar regolith as a plant growth medium,” the team concluded.

Who knows if we’ll ever live off the lunar land, but as a garbanzo fanzo, I’m hoping for heavenly hummus.

TIC 120362137 is the real quad god


Borkovits, T., Rappaport, S.A., Chen, HL. et al. “Discovery of the most compact 3+1-type quadruple star system TIC 120362137.” Nature Communications.

Three-body problems are so last season; the era of the quadruple star system is upon us. In a new study, scientists unveil the most compact quartet of stars ever discovered, known as TIC 120362137, which is about 2,000 light years from Earth.

“This inner subsystem, which contains three stars that are more massive and hotter than the Sun, is more spatially compact than Mercury’s orbit around our Sun, and is orbited by a fourth Sun-like star with a period of 1,046 days,” said researchers co-led by Tamás Borkovits and Saul A. Rappaport of the University of Szeged, Hai-Liang Chen of the Chinese Academy of Sciences, and Guillermo Torres of the Center for Astrophysics, Harvard & Smithsonian.

“To our knowledge, there are no other known, similarly compact and tight, planetary-system-like 3 + 1 quadruple stellar systems,” the team added.
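For a sense of scale, here's a hedged Kepler's-third-law sketch of how far out that fourth star sits. The total mass of roughly 4 solar masses is my own assumption (the study only says the three inner stars are each more massive than the Sun); the 1,046-day period is the reported figure.

```python
# Kepler's third law in convenient units: a^3 = M * P^2,
# with a in AU, P in years, and M in solar masses.
period_years = 1046 / 365.25   # reported outer orbital period
total_mass_solar = 4.0         # assumed total system mass (not from the paper)

semi_major_axis_au = (total_mass_solar * period_years**2) ** (1 / 3)
print(f"Outer orbit semi-major axis: ~{semi_major_axis_au:.1f} AU")
```

Under those assumptions the fourth star orbits at roughly 3 AU, around the distance of the asteroid belt from our Sun, while the three inner stars crowd inside something smaller than Mercury's orbit.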

The researchers predicted that this fantastic foursome will eventually merge together into a pair of dead stars known as white dwarfs in about nine billion years. No planets have been found in this system, and it may be that it is too dynamically eccentric to host them. Still, it’s fun to imagine the view from such a hypothetical world, with four Suns in its sky. Eat your heart out, Tatooine.

Thanks for reading! See you next week.


This week, we discuss a PC repair battle, a revealing comment from an FBI official, and a dangerous narrative.#BehindTheBlog


Behind the Blog: An AI Army Foot Fetish


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss a PC repair battle, a revealing comment from an FBI official, and a dangerously dumb narrative.

EMANUEL: I want to update those who have been following the 404 Media sidequest “Emanuel’s CPU is dying.” The update is that I basically got a new PC. I kept my GPU (4080 Super), my CPU cooler, and storage, and upgraded everything else, including the case, because I bought the old one in the era before GPUs were more than a foot long.

This post is for subscribers only




A court record reviewed by 404 Media shows privacy-focused email provider Proton Mail handed over payment data related to a Stop Cop City email account to the Swiss government, which handed it to the FBI.


Proton Mail Helped FBI Unmask Anonymous ‘Stop Cop City’ Protester


Privacy-focused email provider Proton Mail provided Swiss authorities with payment data that the FBI then used to determine who was allegedly behind an anonymous account affiliated with the Stop Cop City movement in Atlanta, according to a court record reviewed by 404 Media.

The records provide insight into the sort of data that Proton Mail, which prides itself on both its end-to-end encryption and the fact that it is governed only by Swiss privacy law, can and does provide to third parties. In this case, the Proton Mail account was affiliated with the Defend the Atlanta Forest (DTAF) group and the Stop Cop City movement in Atlanta, which authorities were investigating for their connection to arson, vandalism, and doxing. Broadly, members were protesting the building of a large police training center next to Intrenchment Creek Park in Atlanta, with actions that also included camping in the forest and lawsuits. Charges against more than 60 people have since been dropped.

This post is for subscribers only





"As part of our commitment to supporting ICE, we will be adding a ‘Support ICE’ donation button to the footer of every email sent through our platform."


ICE Phishing: Scammers Are Sending 'Support ICE' Emails to Steal Credentials


Clients of a long-running email marketing platform are being targeted with a phishing campaign telling them that a “‘Support ICE’ donation button” will be automatically inserted into every email they send. The ploy suggests that scammers are trying to capitalize on people’s revulsion toward ICE, betting that outraged users will quickly log into their accounts to disable the setting. In reality, clients would be handing their username and password to hackers.

The move indicates that hackers are targeting clients of enterprise software companies with extremely controversial political emails. The scam targeted customers of Emma, a long-running email marketing platform whose clients include Orange Theory, Yale University, Texas A&M University, the Cystic Fibrosis Foundation, Dogfish Head Brewery, and the YMCA, among others. 404 Media was forwarded a copy of the phishing email from an Emma client.

“As part of our commitment to supporting U.S. Immigration and Customs Enforcement (ICE), we will be adding a ‘Support ICE’ donation button to the footer of every email sent through our platform,” the phishing email reads. “This button will appear automatically in all outgoing emails starting next week […] all emails sent from your account will include the Support ICE footer element […] this change helps us demonstrate our platform’s civic commitment.” The email adds that it is possible to opt out of this feature, and that “we appreciate your understanding as we implement this platform-wide initiative.”

Lisa Mayr, the CEO of Marigold, which owns Emma, told 404 Media that the company “would never publish anything like this. This is a very common phishing attempt.”

Mayr is right—clients of other email sending services have recently been targeted with similar attacks. In January, programmer Fred Benenson wrote about phishing emails he had gotten that were targeting users of SendGrid, another email marketing service. At least one of the emails Benenson got used the same “Support ICE button” language and had the subject line “ICE Support Initiative.”

“If you’ve been paying any attention at all to US politics, you’ll know how insidiously provocative this would be if it were a real email,” Benenson wrote in a blog post about the email. “This phishing campaign is a fascinating example of how sophisticated social engineering has become. Instead of Nigerian 419 scams, hackers have evolved to carefully craft messages sent to professionals that are designed to exploit the American political consciousness. The opt-out buttons are the trap.”

In SendGrid’s case, Benenson found that the emails looked “real” because they were sent from other SendGrid user accounts. Basically, hackers compromised the account of a SendGrid user and then used that account to send phishing emails using the SendGrid infrastructure. “The emails look real because, technically, they are real SendGrid emails sent via SendGrid’s platform and via a customer’s reputation–they’re just sent by the wrong people and wrong domains,” he wrote.

Besides the ICE-themed phishing emails, Benenson also received an email that said SendGrid was going to add a “pride-themed footer to all emails” and another that said “all emails sent from your account will feature a commemorative theme honoring George Floyd and the Black Lives Matter movement.”

“The political sophistication on display here (BLM, LGBTQ+ rights, ICE, even the Spanish language switch playing on immigration anxieties) suggests someone with a deep understanding of American cultural fault lines,” Benenson wrote.

The Emma email was sent via Survey Monkey from an email address called “myemma@help-myemma.app.” When users clicked a “Settings” button that would have allowed them to opt out of the feature, they were sent to a generic-looking credential-harvesting site hosted at app-e2maa.net. By the time 404 Media got the email, Chrome had flagged it as a “Dangerous site” and warned users not to visit it.
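One tell in the Emma campaign is that the opt-out link points at a domain (app-e2maa.net) unrelated to the claimed sender’s domain (help-myemma.app). A minimal, illustrative sketch of that check follows; the domains are the ones from this campaign, and this is a toy heuristic, not a real anti-phishing filter—it ignores redirects, lookalike domains, and SPF/DKIM entirely:

```python
# Toy heuristic: flag links in an email body whose host does not belong to
# the claimed sender's domain. Illustrative only, not production filtering.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect the hostnames of all <a href="..."> links in an HTML body."""
    def __init__(self):
        super().__init__()
        self.hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    host = urlparse(value).hostname
                    if host:
                        self.hosts.append(host)

def suspicious_links(sender_addr, html_body):
    """Return link hosts that are neither the sender's domain nor a subdomain of it."""
    sender_domain = sender_addr.rsplit("@", 1)[-1].lower()
    parser = LinkExtractor()
    parser.feed(html_body)
    return [h for h in parser.hosts
            if not (h == sender_domain or h.endswith("." + sender_domain))]

body = '<a href="https://app-e2maa.net/settings">Settings</a>'
print(suspicious_links("myemma@help-myemma.app", body))  # flags app-e2maa.net
```

A mismatch like this isn’t proof of phishing on its own (legitimate marketing email routinely uses third-party click-tracking domains), which is part of why these campaigns work.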


How Polymarket and Kalshi bet on Iran; AI translations are impacting Wikipedia; and an Amazon change impacting wishlists.


Podcast: The Depravity Economy


This week we discuss our coverage of the U.S.-Israel strikes against Iran, specifically how Polymarket and Kalshi are letting people profit from death, and that Amazon data centers were on fire after missiles hit Dubai. Then Emanuel talks about how AI translations are adding 'hallucinations' to Wikipedia articles. In the subscribers-only section, Sam tells us about a change with Amazon wishlists that may expose your address.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
0:00 - Intro

1:32 - With Iran War, Kalshi and Polymarket Bet That the Depravity Economy Has No Bottom

29:07 - AI Translations Are Adding Hallucinations To Wikipedia Articles

SUBSCRIBER'S STORY - Amazon Change Means Wishlists Might Expose Your Address


‘How ghoulish.’ The depravity economy moves into the nuclear war business.


Polymarket Pulls Bet on Nuclear Detonation in 2026


For a few hours on Tuesday, Polymarket hosted a bet on the possibility of nuclear war in 2026. The market asked the question “Nuclear weapon detonation by …?” and racked up close to a million dollars in trading volume before Polymarket took the unusual step of removing it from its website. The company did not simply close the bet: it “archived” it, meaning no public record of it remains, which is strange given that many older, already paid-out bets are still on the site.

Pulling a bet like this is unusual, and the company did not respond to 404 Media’s request for an explanation. Word of the nuke bet drew wide attention online from critics already upset with Polymarket over its place in the depravity economy.
“I have not seen anything like this before,” Jon Wolfsthal, a former special assistant to President Barack Obama and a member of the Bulletin of the Atomic Scientists, told 404 Media. “As a citizen, it seems dangerous to enable people in power to place bets anonymously on things that might happen, creating an incentive to act on a basis of personal gain and not the national interest.”

Polymarket doesn’t often balk at bets on violence and war. There are multiple markets covering the wars in Ukraine and Iran, as well as many other bets about nuclear detonations. “Will a US ally get a nuke before 2027?” and “Russia nuclear test by …?” are both still actively trading. An older version of the “nuclear weapon detonation” market is still on the site and did almost $3 million in trading before closing and paying out at the end of 2025. Polymarket has hosted a bet on the same question every year for the past few years.

The platform has come under fire this week after gaining wide attention for its various bets on the war in Iran. Gamblers spent more than $5 million betting on the question “Will the Iranian regime fall by June 30?” People have been caught manipulating war maps to cash in on frontline advances in Ukraine. And someone made $400,000 using inside knowledge to place bets on the capture of Maduro.

“How ghoulish. Especially given how much insider trading apparently goes on with current events bets,” Alex Wellerstein, a nuclear historian and creator of the NUKEMAP, told 404 Media.

Wellerstein said that betting on nuclear war isn’t unprecedented, but that it’s usually tongue-in-cheek and conducted by insiders. “The thing that immediately comes to mind is Fermi's ‘side bet’ that the Trinity test would destroy the atmosphere in 1945—which was a joke, as nobody would be able to collect if it had happened,” he said.

“A flip of this is in Daniel Ellsberg's The Doomsday Machine, in which he eschewed paying into a pension in the early 1960s because he thought the odds of a future nuclear war were so high that it was better to spend the money sooner rather than later. So another kind of bet, but a private one,” Wellerstein added. “And whenever experts give ‘odds’ on nuclear use (which the intelligence community does, apparently), they are to some degree indulging in this kind of impulse. But not for the hope of personal profit—usually it is because they want to avoid such an outcome.”

Polymarket CEO Shayne Coplan has repeatedly called the site “the future of news,” and has suggested that prediction markets give the public a clearer picture of events because money is on the line. In practice, the financial incentives distort that picture. Nuclear war, it seems, was a bit too dramatic for Polymarket to host a wager on. But Polymarket has few moral qualms and has not said why it “archived” the bet; it’s possible it did so for some arcane technical reason and not because it got squeamish. Polymarket did not respond to 404 Media’s request for comment.


AI-translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.


AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles


Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations added “hallucinations,” or errors, to the resulting articles.

The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI content across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how those errors are remedied by Wikipedia’s open governance model.

The issue in this case starts with the Open Knowledge Association (OKA), a non-profit dedicated to improving Wikipedia and other open platforms.

“We do so by providing monthly stipends to full-time contributors and translators,” OKA’s site says. “We leverage AI (Large Language Models) to automate most of the work.”

The problem is that editors started to notice that some of these translations introduced errors to articles. For example, a draft translation for a Wikipedia article about the French royal La Bourdonnaye family cites a book and specific page number when discussing the origin of the family. A Wikipedia editor, Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia, checked that source and found that the specific page of that book “doesn't talk about the La Bourdonnaye family at all.”

“To measure the rate of error, I actually decided to do a spot-check, during the discussion, of the first few translations that were listed, and already spotted a few errors there, so it isn't just a matter of cherry-picked cases,” Lebleu told me. “Some of the articles had swapped sources or added unsourced sentences with no explanation, while 1879 French Senate election added paragraphs sourced from material completely unrelated to what was written!”

As Wikipedia editors looked at more OKA-translated articles, they found more issues.

“Many of the results are very problematic, with a large number of [...] editors who clearly have very poor English, don't read through their work (or are incapable of seeing problems) and don't add links and so on,” a Wikipedia page discussing the OKA translation said. The same Wikipedia page also notes that in some cases the copy/paste nature of OKA translators’ work breaks the formatting on some articles.

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.

“The use of Grok proved controversial, notably given the reasons for which Grok has been in the news recently, and a recent in-house study showed ChatGPT and Claude perform more accurately, leading them to switch a few days ago, although they still recommend Grok as ‘valuable for experienced editors handling complex, template-heavy articles,’” Lebleu told me.

Ultimately, the editors decided to implement restrictions against OKA translators who make multiple errors, but not to block OKA translations as a rule.

“OKA translators who have received, within six months, four (correctly applied) warnings about content that fails verification will be blocked without further warning if another example is found,” the Wikipedia editors wrote. “Content added by an OKA translator who is subsequently blocked for failing verification may be presumptively deleted [...] unless an editor in good standing is willing to take responsibility for it.”

A job posting for a “Wikipedia Translator” from OKA offers $397 a month for working up to 40 hours per week. The job listing says translators are expected to publish “5-20 articles per week (depending on size).”

“They leverage machine translation to accelerate the process. We have published over 1500 articles and the number grows every day,” the job posting says.

“Given this precarious status, I am worried that more uncertainty in the translator duties may lead to an overloading of responsibilities, which is worrying as independent contractors do not necessarily have the same protections as paid employees,” Lebleu wrote in the public Wikipedia discussion about OKA.

Jonathan Zimmermann, the founder and president of OKA, who goes by 7804j on Wikipedia, told me that translators are paid hourly, not per article, and that there is no fixed article quota.

“We emphasize quality over speed,” Zimmerman told me in an email. “In fact, some of the problematic cases involved unusually high output relative to time spent — which in retrospect was a warning sign. Those cases were driven by individual enthusiasm and speed rather than institutional pressure.”

Zimmerman told me that “errors absolutely do occur,” but that OKA’s process includes human review, requires translators to check their content against cited sources, and that “senior editors periodically review samples, especially from newer translators.”

“Following the recent discussion, we have strengthened our safeguards,” Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
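As a rough illustration of the workflow Zimmerman describes—and only an illustration: the prompt wording, the `DISCREPANCY:` marker convention, and the stubbed `call_model` function are all assumptions, not OKA’s actual code—a second-model review step might look like this:

```python
# Hypothetical sketch of a second-LLM review pass over a translation.
# call_model is a stand-in for any LLM API; it is stubbed here so the
# control flow is runnable without network access.

COMPARISON_PROMPT = (
    "Compare the translation against the source text. List any discrepancy, "
    "omission, or inaccuracy, one per line, each starting with 'DISCREPANCY:'. "
    "If there are none, reply 'OK'.\n\n"
    "SOURCE:\n{source}\n\nTRANSLATION:\n{translation}"
)

def call_model(prompt):
    # Stub standing in for a real API call to a second, independent model.
    # Simulates flagging the 1879 French Senate election case from the article.
    if "1879" in prompt:
        return "DISCREPANCY: translation cites a source unrelated to the election."
    return "OK"

def review_translation(source, translation):
    """Return discrepancy lines for a human reviewer to check.

    An empty list means the second model flagged nothing, which is NOT
    proof the translation is correct."""
    reply = call_model(COMPARISON_PROMPT.format(source=source,
                                                translation=translation))
    return [line for line in reply.splitlines()
            if line.startswith("DISCREPANCY:")]

issues = review_translation("1879 French Senate election source text...",
                            "Translated draft...")
```

In a real pipeline the second model would differ from the one that produced the translation, and anything it flags would go to a human reviewer rather than being auto-corrected—which is exactly the failure mode the next paragraphs discuss.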

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.

Using AI to check the output of AI for errors is a method that is historically prone to errors. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students. Internal testing found it had at least a 10 percent failure rate.

“I agree that using AI to check AI can absolutely fail — and in some contexts it can fail at very high rates. We’re not assuming the secondary model is reliable in isolation,” Zimmerman said. “The key point is that we’re not replacing human verification with automated verification. The second model is a complement to manual review, not a substitute for it.”

“When a coordinated project uses AI tools and operates at scale, it’s going to attract attention. I understand why editors would examine that closely. Ultimately, the outcome of the discussion formalized expectations that are largely aligned with our existing internal policies,” Zimmerman added. “However, these restrictions apply specifically to OKA translators. I would prefer that standards apply equally to everyone, but I also recognize that organized, funded efforts are often held to a higher bar.”


Scientists studied tiny, abnormal vibrations—called “glitches”—to discover what happens inside the Sun while it undergoes phases of low activity.


The Sun Is 'Glitching.' Scientists Investigated and Solved a Cosmic Mystery


Scientists have peered inside the Sun and observed subtle shifts and “glitches” that have occurred over four decades, shedding light on the enigmatic long-term vibrations of our star, reports a study published on Tuesday in Monthly Notices of the Royal Astronomical Society.

The Sun goes through a roughly 11-year cycle that includes periods of high and low activity, known as solar maximum and solar minimum. The past few cycles have revealed changes in solar behavior that could have implications for predicting space weather and unraveling the internal dynamics of our Sun, along with other Sun-like stars.

To drill down on this mystery, researchers with the Birmingham Solar-Oscillations Network (BiSON), a network of telescopes that has monitored the Sun since the 1970s, compared the last four solar minima using this unique 40-year dataset, focusing on the internal vibrations that make the Sun subtly oscillate.

“The entire Sun oscillates in a globally coherent way, and the oscillations are formed by sound waves trapped inside the Sun that make it resonate just like a musical instrument,” said Bill Chaplin, a professor of astrophysics at the University of Birmingham who co-authored the study, in a call with 404 Media.

“For this particular study, we were interested in seeing whether there are differences in what the Sun is doing in its structure when you focus on the periods or epochs when the Sun is very quiet,” he continued. “The last few cycles have seen some quite marked changes in behavior.”

For example, scientists have been perplexed for years by an unusually long and quiet solar minimum between cycles 23 and 24, which occurred from 2008 to 2009. Chaplin and his colleagues were able to use BiSON’s long record of asteroseismology—the study of stellar interiors—to directly contrast the interior vibrations of the Sun during this minimum with others.

“There were hints that there were things that were different” about this cycle, said Chaplin. “But now that we have the cycle 24-25 minimum—the last one in about 2019—in the bag, then we thought, ‘okay, now's the time to actually go back and look at this.’”

The team specifically looked for an acoustic wave “glitch” caused by an interior layer in which helium atoms lose electrons, producing a detectable change in the Sun’s internal structure. This glitch was significantly stronger during the 2008–2009 minimum, suggesting that the Sun’s outer interior was slightly hotter and allowed sound waves to travel faster at that time of magnetic weakness.

“The ionizing helium affects the speed at which the sound waves move through that region,” explained Chaplin. “It leaves a characteristic imprint.”

“It's not just that there is a difference with the other cycles, but it's starting to tell us about what physically has really changed beneath the surface,” he added. “They're quite subtle changes, but it's nevertheless giving us clues as to what is actually happening beneath the Sun during this very quiet period.”
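For background, the mechanism Chaplin describes rests on a standard helioseismology relation (textbook stellar physics, not a formula taken from this study):

```latex
% Adiabatic sound speed inside the Sun:
c^{2} = \frac{\Gamma_{1} P}{\rho}
% where P is pressure, \rho is density, and \Gamma_1 is the adiabatic
% exponent. In the helium partial-ionization zone, \Gamma_1 dips below its
% ideal monatomic value of 5/3, producing a localized depression
% ("glitch") in c(r). That glitch superimposes a small oscillatory
% signature on the Sun's measured oscillation frequencies -- the signal
% the BiSON team tracks across solar minima.
```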

The results confirm that the Sun doesn’t return to the same minimum baseline at the end of every cycle, and that its activity varies on timescales of decades and centuries. For example, Chaplin pointed to one bizarrely long quiet period from 1645 to 1715, known as the Maunder Minimum.

Astronomers during this time marvelled at the prolonged lack of visible sunspots on the Sun’s surface, a sign of extremely low solar activity. Centuries later, BiSON and other solar observatories are allowing scientists to study the interior dynamics behind these shifts in depth for the first time.

“This is the first step in actually demonstrating that there are changes,” Chaplin said. “Does this mean that there are systematic changes in the way that the Sun is generating its field? It's really only now, because we have this long dataset, that we can start to ask questions like that. Previously, we just didn't have enough data to say.”

Scientists hope to keep recording the long-term behavior of the Sun with projects like BiSON so that we can better understand its mercurial nature over time. This is interesting work on its own merits, but it is also useful for refining forecasts of space storms that can wreak havoc on power grids and space assets (while also producing pretty auroras).

Chaplin also nodded to the European space telescope PLAnetary Transits and Oscillations of stars (PLATO), due for launch in 2027. This mission will search for analogous oscillations in stars beyond the Sun, building on similar work conducted by NASA’s retired Kepler space telescope.

Studying the vibrations of the Sun and other similar stars is not only important for life here on Earth; it also has implications in the search for extraterrestrial life, because local solar activity is one key to assessing the habitability of star systems similar to our own.

“The data that we have on other stars from Kepler has really helped to understand and get a better picture of the cyclic variability of other stars, like the Sun,” Chaplin concluded. “But it's still not an entirely clear picture; let's put it that way. Seismology now enables you to do really detailed analysis of stars that you can't do by other means.”