

People Hate Datacenters, Survey Finds


A new study from the Pew Research Center asked Americans about their feelings toward datacenters, and the findings are not positive. Pew published the study the day after Sen. Bernie Sanders called for a moratorium on the construction of datacenters in the United States amid mounting public concern about the buildings’ impacts on local communities.

Pew surveyed 8,512 adults in January and asked them a broad range of questions about how they felt about datacenters. Most of the respondents said they’d heard of datacenters, and the more they’d read, the less they liked them.

💡
Is an unwanted datacenter being built in your community? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Most of the Americans surveyed believe that datacenters are bad for the environment, home energy costs, and the quality of life of people living nearby, and the numbers aren’t close. Only four percent of people thought datacenters were good for the environment, six percent good for home energy costs, and six percent good for people’s quality of life.

Despite those negative feelings, many of the people surveyed thought that datacenters would be good for jobs in the communities where they’re built and would boost local tax revenue. “Still, Americans are less likely to express positive views of data centers’ impact in these areas than to express negative views of their effects on the environment, energy costs and people’s quality of life nearby,” the research said.

Research shows that the reality of job creation by datacenters doesn’t actually live up to the promises from those lobbying to build them. “Data centers do not bring high-paying tech jobs to local communities because they operate as infrastructure projects rather than traditional job-creating businesses,” University of Michigan researchers wrote in a 2025 brief. “Although the construction of data centers can create many jobs, those are short lived.”

The survey charts a growing anti-datacenter sentiment in America. The US is in the middle of a massive infrastructure project similar in scale to the Manhattan Project. In a mad dash to build out AI systems, companies are constructing enormous buildings and energy infrastructure across the country, often with little input from local communities and at great cost.

The city of Ypsilanti, Michigan is fighting to stop the construction of a $1.2 billion datacenter that would be used to test nuclear weapons. In the middle of a massive winter storm that paralyzed the state in January, lawmakers in a rural South Carolina county pushed through the approval of a controversial $2.4 billion datacenter. In Oklahoma, police arrested a man who was speaking in opposition to a datacenter after he went slightly over his time during a city council meeting.

Datacenters are terrible neighbors. The buildings drive up the cost of energy for people who live nearby, consume massive amounts of water, and can produce noises and fumes that hurt locals. In Mississippi, locals are concerned about the pollution and noise caused by an xAI datacenter powered by gas turbines. A proposed datacenter project near Amarillo, Texas would be powered by four massive nuclear generators and pull water from an aquifer with dwindling reserves. In an effort to quell fears about power consumption, Trump made Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI sign a pledge to keep energy costs down. But a pledge isn’t a law. It’s not even an executive order.

Pew’s research came out the day after Sanders announced he was proposing legislation to put a moratorium on the construction of new datacenters in the US. “We are at the beginning of the most profound technological revolution in world history. That’s the truth,” Sanders said in a video posted on social media. “This is a revolution which will bring unimaginable changes to our world. This is a revolution which will impact our economy with massive job replacement. It will threaten our democratic institutions. It will impact our emotional well-being and what it even means to be a human being.”

We need a moratorium on AI data centers NOW. Here’s why. pic.twitter.com/dRfAdQ67zD
— Sen. Bernie Sanders (@SenSanders) March 11, 2026


“Congress hasn’t a clue how to respond…and protect the American people. It’s not only not having a clue, they’re busy out raising money all day long from AI and their super PACs,” Sanders said. “We need a moratorium on datacenters. We need to take a deep breath. We need to make sure that AI and robotics work for all of us, not just a handful of billionaires.”


Copilot “can help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis,” the memo says.


Here’s the Memo Approving Gemini, ChatGPT, and Copilot for Use in the Senate


A top Senate administrator approved OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot for official use in the Senate, the New York Times reported on Tuesday. 404 Media has obtained the full text of the memo and is publishing it below.

“The Sergeant at Arms (SAA) office of the Chief Information Officer (CIO) has approved the use of three Generative Artificial Intelligence (AI) platforms with Senate data,” the memo starts. It also says the SAA will provide each Senate employee with one free license to either Gemini Chat or ChatGPT Enterprise, with Copilot also available at no cost.

💡
Do you know anything else about the government's use of AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only



Cybertruck Tried to Drive ‘Straight Off an Overpass,’ Attorney Claims


A Cybertruck owner in Texas is suing Tesla for $1,000,000 in damages for “grossly negligent conduct” following an accident on a Houston highway that involved the vehicle’s self-driving feature. According to the lawsuit, Tesla is to blame for the crash because CEO Elon Musk has oversold the truck’s ability to drive itself.

As originally reported by the Austin American-Statesman, Justine Saint Amour bought a Cybertruck from a used car dealership in Florida and drove it until it crashed on a Houston overpass on August 18, 2025. That summer day, Saint Amour was driving down Houston’s 69 Eastex Freeway with the vehicle’s full self-driving (FSD) mode engaged.
“Something terrifying happened, without warning, the vehicle attempted to drive straight off an overpass,” Bob Hilliard, Saint Amour’s attorney, told 404 Media in an emailed statement. “She tried to take control, but crashed into the barrier and was seriously injured—mostly her shoulder, neck, and back.”

Hilliard shared a photo of the aftermath of the crash and dashcam footage with 404 Media. In the video, the Cybertruck proceeds down the highway and hops an intersection instead of turning to the right and following the road. It stops only when it slams into a signpost on the overpass.


The lawsuit blames the crash on Musk. “Elon Musk is an aggressive and irresponsible salesman, who has a long history of making dangerous design choices, and over-promising the features of his products,” the lawsuit said. “This promotion of products, for capabilities that they do not have, is the reason for this incident.”
Musk has spent the past few years promoting Tesla’s ability to drive itself, a feature that costs $99 a month and is sold as “Full Self-Driving.” But, the lawyers said, the FSD feature doesn’t work as advertised, and it’s irresponsible of Tesla and Musk to market their vehicles as having the feature. “Despite this dangerous condition of Tesla’s ‘self-driving’ vehicles, Elon Musk and Tesla have made representations in the year 2019 that Tesla’s full ‘self-driving’ vehicles were fully operational and safe.”

Tesla and Musk have gotten in trouble for this before. In February, the company agreed it would stop using the terms “autopilot” and “full self-driving” when advertising its vehicles in California. There have been multiple fatal and non-fatal crashes involving Tesla vehicles running on Autopilot, including a man who hit a parked police car in 2024. In August, a judge ordered Tesla to pay $200 million in punitive damages and another $43 million in compensatory damages to the family of a 22-year-old who died in a crash involving the car’s Autopilot system.

According to the lawsuit, one of the reasons this keeps happening is because Musk intervened directly to make Teslas cheaper by using cameras instead of LiDAR, which uses laser light to create a 3D map of the surrounding area. “Elon Musk’s intervention into the design of Tesla vehicles has long been reckless and dangerous. While engineers at Tesla recommended the super-human vision of LiDAR be included for self-driving vehicles, and competitors like Waymo and Cruise relied heavily on LiDAR, Musk chose instead to rely only upon cheap video cameras,” the lawsuit said. “Musk referred to the LiDAR used by his safer competitors as expensive and unnecessary.”

Fully automated driving is a hard tech problem. LiDAR is better than basic cameras, but it’s still not perfect, and LiDAR-based self-driving cars crash too. There are other problems as well. In cities operating Google’s Waymo cars, passengers are leaving the doors open and Waymo is contracting DoorDashers to close them for $10 a pop, a Waymo in LA attempted to drive through a police standoff, and a woman in San Francisco was trapped in a Waymo after men blocked the car and started to harass her.


The ‘Freedom Trucks’ will haul AI slop George Washington on a tour across 48 American states.


I Visited the ‘Freedom Truck’ to Meet PragerU’s AI Slop Founders


In the parking lot of Seven Oaks Elementary School in South Carolina, on one of the first hot days of the year, I watched an AI-generated George Washington talk about the American revolution. “Our rights are a gift from God, not a favor from kings or courts,” slop Washington told me. It spoke from a screen that stretched floor to ceiling, trimmed by a fancy frame.

The intended effect is to make it appear as if the founding father is a painting come to life, a piece of history talking to the viewer. The actual effect was to remind me that the AI slop aesthetic is synonymous with the Trump presidency and has become part of the visual language of fascism. Which is fitting, because AI George Washington is the result of a collaboration between the Trump White House and online content mill PragerU.
The AI slop founding father is part of a touring exhibit of Freedom Trucks commissioned by PragerU in honor of the 250th anniversary of American independence. The trucks are a mobile museum exhibit meant to teach kids about the founding of the country. It’s pitched at kids—most of the “content,” as staff on site called it, is meant for a younger audience—but the trucks have viewing hours open to the general public. Nick Bravo, a PragerU employee on hand to answer questions, told me that there are six Freedom Trucks and that the plan is to have them travel to all 48 contiguous states over the next year.

I was drawn to the Freedom Truck because I’d heard it contained AI-generated recreations of Revolutionary figures like Washington, Betsy Ross, and the Marquis de Lafayette, similar to the ones on display at the White House. To my disappointment, the AI-generated videos in the Freedom Truck are remarkably boring.

As I watched the AI George Washington deliver a by-the-books version of the American story, I thought about Jerry Jones. The famously vain owner of the Dallas Cowboys commissioned an AI version of himself for AT&T stadium in 2023. Fans who make the pilgrimage to the stadium can watch a presentation and ask the AI Jones questions. The AI wanders a big screen while it talks to the audience.

Other than the lazy AI-generated videos, the Freedom Truck doesn’t have much to offer. I signed a digital copy of the Declaration of Independence on a touchscreen and took a quiz that asked leading questions designed to find out if I was a “loyalist or patriot.”

“The British Army sends soldiers to Boston. How do you react?” Answer 1: “View them as occupiers violating colonial liberty.” Answer 2: “Welcome them as defenders of law and order.” With ICE and the National Guard patrolling American cities, I wondered how supporters of the current administration would answer that one.

PragerU is known for its “America can do no wrong” view of US history. Its short form video content offers a cartoon version of the past stripped of nuance and context where the country lives up to the myth that it is a “Shining City On a Hill.” According to PragerU, white people abolished slavery and dropping the atomic bomb on Japan was a necessary thing that “shortened the war and saved countless lives.” Now PragerU is taking its view of history on tour across the country. School children in every state will wander these trucks and encounter an AI slop version of the past.

Bravo told me that all the truck’s content was generated as part of a partnership between PragerU and Michigan’s Hillsdale College—a Christian university that helped craft Project 2025. There were, of course, hints of Project 2025 around the edges of the child-friendly AI-generated videos. Slavery isn’t ignored, but the story of early African American poet Phillis Wheatley focuses on her celebration of America rather than on how she arrived there. On the museum’s “Wall of Heroes,” Whittaker Chambers is nestled between architect Frank Lloyd Wright and painter Norman Rockwell.

A small placard near the floor at the exit of the truck notes the collaboration between PragerU and Hillsdale College, and claims that “neither institution received any federal funds and both generously contributed their own resources to help create this educational exhibit.” It also says “this truck was made possible through a grant from the Institute of Museum and Library Services,” which is, of course, a federal agency.

Every AI-generated video ended with a title card showing the White House and PragerU’s logo. “The White House is grateful for the partnership with PragerU and the US Department of Education for the production of this museum,” the card said. “This partnership does not constitute or imply a US Government or US Department of Education endorsement of PragerU.”

Trump attempted to dismantle the Institute of Museum and Library Services (IMLS) via executive order in 2025, but the courts blocked it. Libraries and museums have since reported that the IMLS grant process has taken on a “chilling” pro-Trump political turn. The administration has also attempted to dismantle the Department of Education.

Trump’s voice was the last thing I heard as I wandered into the bright afternoon sun. “I want to thank PragerU for helping us share this incredible story,” he said in a recorded video that played on a loop in the Freedom Truck. “I hope you will join me in helping to make America’s 250th anniversary a year we will never forget.”



A court record reviewed by 404 Media shows privacy-focused email provider Proton Mail handed over payment data related to a Stop Cop City email account to the Swiss government, which handed it to the FBI.


Proton Mail Helped FBI Unmask Anonymous ‘Stop Cop City’ Protester


Privacy-focused email provider Proton Mail provided Swiss authorities with payment data that the FBI then used to determine who was allegedly behind an anonymous account affiliated with the Stop Cop City movement in Atlanta, according to a court record reviewed by 404 Media.

The records provide insight into the sort of data that Proton Mail, which prides itself both on its end-to-end encryption and that it is only governed by Swiss privacy law, can and does provide to third parties. In this case, the Proton Mail account was affiliated with the Defend the Atlanta Forest (DTAF) group and Stop Cop City movement in Atlanta, which authorities were investigating for their connection to arson, vandalism and doxing. Broadly, members were protesting the building of a large police training center next to the Intrenchment Creek Park in Atlanta, and actions also included camping in the forest and lawsuits. Charges against more than 60 people have since been dropped.

This post is for subscribers only


AI-translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.


AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles


Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations added “hallucinations,” or errors, to the resulting articles.

The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how those errors are remedied by Wikipedia’s open governance model.

The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.

“We do so by providing monthly stipends to full-time contributors and translators,” OKA’s site says. “We leverage AI (Large Language Models) to automate most of the work.”

The problem is that editors started to notice that some of these translations introduced errors to articles. For example, a draft translation for a Wikipedia article about the French royal La Bourdonnaye family cites a book and specific page number when discussing the origin of the family. A Wikipedia editor, Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia, checked that source and found that the specific page of that book “doesn't talk about the La Bourdonnaye family at all.”

“To measure the rate of error, I actually decided to do a spot-check, during the discussion, of the first few translations that were listed, and already spotted a few errors there, so it isn't just a matter of cherry-picked cases,” Lebleu told me. “Some of the articles had swapped sources or added unsourced sentences with no explanation, while 1879 French Senate election added paragraphs sourced from material completely unrelated to what was written!”

As Wikipedia editors looked at more OKA-translated articles, they found more issues.

“Many of the results are very problematic, with a large number of [...] editors who clearly have very poor English, don't read through their work (or are incapable of seeing problems) and don't add links and so on,” a Wikipedia page discussing the OKA translation said. The same Wikipedia page also notes that in some cases the copy/paste nature of OKA translators’ work breaks the formatting on some articles.

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.

“The use of Grok proved controversial, notably given the reasons for which Grok has been in the news recently, and a recent in-house study showed ChatGPT and Claude perform more accurately, leading them to switch a few days ago, although they still recommend Grok as ‘valuable for experienced editors handling complex, template-heavy articles,’” Lebleu told me.

Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule.

“OKA translators who have received, within six months, four (correctly applied) warnings about content that fails verification will be blocked without further warning if another example is found,” the Wikipedia editors wrote. “Content added by an OKA translator who is subsequently blocked for failing verification may be presumptively deleted [...] unless an editor in good standing is willing to take responsibility for it.”
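The editors' rule has a concrete shape: four valid warnings within six months, then a block on the next failed verification. A minimal sketch of that logic, where the exact window length and data shapes are our own assumptions rather than anything in Wikipedia's policy:

```python
from datetime import date, timedelta

WINDOW = timedelta(days=183)   # "within six months," approximated
THRESHOLD = 4                  # four correctly applied warnings

def should_block(warning_dates, new_violation):
    """Return True if a translator who already has THRESHOLD valid
    warnings inside the window should be blocked when one more
    failed-verification example is found."""
    recent = [d for d in warning_dates if new_violation - d <= WINDOW]
    return len(recent) >= THRESHOLD

# Four warnings in the last few months, plus a fresh violation: blocked.
print(should_block(
    [date(2026, 1, 1), date(2026, 2, 1), date(2026, 3, 1), date(2026, 4, 1)],
    date(2026, 5, 1),
))
```

Old warnings age out of the window, so a translator with four warnings from a year ago would not be blocked under this reading.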

A job posting for a “Wikipedia Translator” from OKA offers $397 a month for working up to 40 hours per week. The job listing says translators are expected to publish “5-20 articles per week (depending on size).”

“They leverage machine translation to accelerate the process. We have published over 1500 articles and the number grows every day,” the job posting says.

“Given this precarious status, I am worried that more uncertainty in the translator duties may lead to an overloading of responsibilities, which is worrying as independent contractors do not necessarily have the same protections as paid employees,” Lebleu wrote in the public Wikipedia discussion about OKA.

Jonathan Zimmerman, the founder and president of OKA, who goes by 7804j on Wikipedia, told me that translators are paid hourly, not per article, and that there is no fixed article quota.

“We emphasize quality over speed,” Zimmerman told me in an email. “In fact, some of the problematic cases involved unusually high output relative to time spent — which in retrospect was a warning sign. Those cases were driven by individual enthusiasm and speed rather than institutional pressure.”

Zimmerman told me that “errors absolutely do occur,” but that OKA’s process includes human review, requires translators to check their content against cited sources, and that “senior editors periodically review samples, especially from newer translators.”

“Following the recent discussion, we have strengthened our safeguards,” Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
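Zimmerman didn't share OKA's actual prompt, but the step he describes is mechanical enough to sketch: feed the source and the completed draft to a second model with instructions to flag discrepancies rather than rewrite. All wording below is our own illustration, not OKA's prompt:

```python
def build_comparison_prompt(source_text: str, draft: str) -> str:
    """Assemble a review prompt asking a second, independent model
    to flag discrepancies, omissions, or additions in a draft
    translation relative to its source."""
    return (
        "You are auditing a Wikipedia translation, not rewriting it.\n"
        "List every claim in the DRAFT that is missing from, contradicts, "
        "or does not appear in the SOURCE. If there are none, reply 'OK'.\n\n"
        f"SOURCE:\n{source_text}\n\n"
        f"DRAFT:\n{draft}\n"
    )

prompt = build_comparison_prompt(
    "The family originates in Brittany.",
    "The family originates in Normandy.",
)
print(prompt)
```

The key design choice is separating audit from generation: the second model is only asked to compare, which limits its opportunity to introduce fresh hallucinations of its own.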

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.

Using AI to check the output of AI for errors is a method that is historically prone to errors. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students. Internal testing found it had at least a 10 percent failure rate.

“I agree that using AI to check AI can absolutely fail — and in some contexts it can fail at very high rates. We’re not assuming the secondary model is reliable in isolation,” Zimmerman said. “The key point is that we’re not replacing human verification with automated verification. The second model is a complement to manual review, not a substitute for it.”

“When a coordinated project uses AI tools and operates at scale, it’s going to attract attention. I understand why editors would examine that closely. Ultimately, the outcome of the discussion formalized expectations that are largely aligned with our existing internal policies,” Zimmerman added. “However, these restrictions apply specifically to OKA translators. I would prefer that standards apply equally to everyone, but I also recognize that organized, funded efforts are often held to a higher bar.”


AI is a “game changer” for what the FBI calls remote access operations, an FBI official said in response to a 404 Media question on Tuesday.


The FBI Discusses the Potential to Use AI to Hack Targets


Update: After this article was published, the FBI’s national press office said in a statement that Hemmen was discussing hypothetical applications of AI technology. For clarity, 404 Media has updated the headline and included the FBI’s full statement below, but left the original article intact so readers can see the comments made at the conference. An FBI spokesperson told 404 Media: “DAD Hemmen was discussing hypothetical FBI application of AI technology in the context of positive and negative outcomes resulting from the technology's development. FBI's current deployment of AI is inventoried, reviewed, and reported per Executive Order requirements, OMB guidance, and guidance from other relevant authorities. All FBI operations are conducted in accordance with the Constitution, applicable statutes, executive orders, Department of Justice regulations and policies, and Attorney General guidelines.”

The FBI is using artificial intelligence in what it describes as “remote access operations,” FBI parlance for hacking, according to an FBI official.

The comments, given at a national security and AI conference 404 Media was attending, give an unusually candid admission of the FBI’s use of hacking tools, which are often shrouded in secrecy.

This post is for subscribers only


Fake war footage is a problem as old as social media. AI has just supercharged it.


X Will Stop Paying People for Sharing Unlabeled AI-Generated War Footage


X said it will temporarily demonetize accounts that share AI-generated war footage without a label. The decision comes days after the US and Israel launched airstrikes in Iran and AI-slop war footage flooded social media timelines across the internet.

“Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program. During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Nikita Bier, X’s head of product, said in a post on X.
Many of the AI-generated videos currently on X purport to show Iranian ballistic missiles hitting sites in Israel. One video shared thousands of times on X showed missiles slamming into the ground near the Dome of the Rock in Jerusalem while a computer-generated voice said “Oh my god, here they come.” X users added a Community Note to the video, but the account that shared it has a Bluecheck and is eligible for a financial payout for engagement as part of X’s content creator program.

Up to now, the Iranians have been deliberately firing their older missiles and drones, using them as expendable bait to drain US and Israeli air defenses.
That strategy clearly worked.

Now they’re escalating, rolling out their more advanced ballistic missiles and drones.

So… pic.twitter.com/0w1RiT0guC
— Richard (@ricwe123) March 3, 2026

Tel Aviv, stripped of illusion, as you have never witnessed it. pic.twitter.com/HE3ckjBMti
— Abdulruhman Ismail (@a_abdulruhman) March 3, 2026


Bier said today that X will stop people from making money on unlabeled AI war footage, but won’t stop accounts from sharing it.

“Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program,” he added. “This will be flagged to us by any post with a Community Note or if the content contains meta data (or other signals) from generative AI tools. We will continue to refine our policies and product to ensure X can be trusted during these critical moments.”
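Bier didn't specify which metadata X checks for. One well-known family of signals is content-provenance metadata, such as C2PA "Content Credentials" manifests and the IPTC digital source type "trainedAlgorithmicMedia." As a naive illustration only (the marker list and function are our own, not X's detection system), a crude byte-level scan might look like:

```python
# Markers associated with generative-AI provenance metadata. The exact
# list is an assumption for illustration, not X's actual signal set.
AI_MARKERS = [
    b"c2pa",                     # C2PA / Content Credentials manifest label
    b"trainedalgorithmicmedia",  # IPTC DigitalSourceType for generative AI
]

def has_ai_provenance_marker(blob: bytes) -> bool:
    """Return True if any known generative-AI metadata marker appears
    anywhere in the raw file bytes. Crude: no container parsing, and it
    misses files whose provenance metadata has been stripped."""
    lowered = blob.lower()
    return any(marker in lowered for marker in AI_MARKERS)

print(has_ai_provenance_marker(b"...C2PA manifest..."))  # marker present
```

As the article notes, such metadata is trivially easy to strip, which is exactly why X also leans on Community Notes and "other signals" rather than metadata alone.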

Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program.

During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies,…
— Nikita Bier (@nikitabier) March 3, 2026


Fake war footage shared on social media isn’t a new problem. For several years every new conflict would be met with a flood of fake videos. Old war footage passed off as coming from the current war was popular, but so were recordings of video games run through filters to make them look low-resolution. The same three clips from milsim video game Arma 3 were shared at the outbreak of every new conflict for a decade. The Government of Pakistan even shared Arma 3 footage once in a post that’s still live on X.

What is new is the proliferation of easy-to-use AI video-generation tools. AI image and video generation has come a long way in the past few years, and it’s trivially easy to remove the watermark that’s supposed to distinguish the output from the real thing. X’s verification system—which rewards accounts for engagement—has also created incentives for Bluecheck accounts to publish fast, verify later (if ever), and rake in the cash. So in the hours and days after the war with Iran began, fake footage of airstrikes and conflict spread on X.

The way X is handling the problem gives the game away. According to Bier, the site will rely on the community to police itself, and the punishment is a 90-day suspension not from the site but from the monetization program.



An internal DHS document obtained by 404 Media shows for the first time CBP used location data sourced from the online advertising industry to track phone locations. ICE has bought access to similar tools.


CBP Tapped Into the Online Advertising Ecosystem To Track People’s Movements


📄
This article was primarily reported using public records requests. We are making it available to all readers as a public service. FOIA reporting can be expensive, so please consider subscribing to 404 Media to support this work. Or send us a one-time donation via our tip jar here.

Customs and Border Protection (CBP) bought data from the online advertising ecosystem to track people’s precise movements over time, in a process that often involves siphoning data from ordinary apps like video games, dating services, and fitness trackers, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media.

The document shows in stark terms the power, and potential risk, of online advertising data and how it can be leveraged by government agencies for surveillance purposes. The news comes after Immigration and Customs Enforcement (ICE) purchased similar tools that can monitor the movements of phones in entire neighborhoods. ICE also recently said in public procurement documents it was interested in sourcing more “Ad Tech” data for its investigations. Following 404 Media’s revelation of that ICE purchase, on Tuesday a group of around 70 lawmakers urged the DHS oversight body to conduct a new investigation into ICE’s location data buying.

💡
Do you work at CBP, ICE, or a location data company? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This sort of information is a “goldmine for tracking where every person is and what they read, watch, and listen to,” Johnny Ryan, director of the Irish Council for Civil Liberties (ICCL) Enforce, which has closely followed the sale of advertising data, told 404 Media in an email.

This post is for subscribers only




Some AWS services are down in the Middle East. Recovery is unclear as it requires 'careful assessment to ensure the safety of our operators,' according to Amazon.#News #war


Amazon Data Centers on Fire After Iranian Missile Strikes on Dubai


Amazon’s cloud services are down in parts of the Middle East after “objects” hit data centers in the United Arab Emirates (UAE), causing “sparks and fire.” Around 60 services tied to AWS are down in the region, affecting web traffic in the UAE and Bahrain. The outage follows Iranian attacks on the UAE in retaliation for US and Israeli strikes on Iran.

Customers in Bahrain and the UAE began to report outages tied to the mec1-az2 and mec1-az3 clusters in AWS’ ME-CENTRAL-1 Region on March 1 after Iranian ballistic missiles and drones struck targets in and around Dubai. Amazon did not confirm that AWS was down in the Middle East due to an Iranian attack and instead referred 404 Media to its online dashboard.
“At around 4:30 AM PST, one of our Availability Zones (mec1-az2) was impacted by objects that struck the data center, creating sparks and fire,” AWS said on its health dashboard. “The fire department shut off power to the facility and generators as they worked to put out the fire. We are still awaiting permission to turn the power back on, and once we have, we will ensure we restore power and connectivity safely. It will take several hours to restore connectivity to the impacted AZ.”

As of this morning at 9:22 AM ET, the damage had spread. “We are expecting recovery to take at least a day, as it requires repair of facilities, cooling and power systems, coordination with local authorities, and careful assessment to ensure the safety of our operators,” AWS said. “We recommend customers enact their disaster recovery plans and recover from remote backups into alternate AWS Regions, ideally in Europe.”
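AWS’s advice to customers amounts to region-preference failover: if your primary Region is impaired, restore from backups into the first healthy Region on your list. As a minimal sketch of that logic (the preference list and health data below are illustrative, not Amazon’s actual API):

```python
# Toy sketch of region-preference failover: given a set of impaired Regions,
# pick the first healthy Region from an ordered preference list.
# Region names are real AWS identifiers; the impairment data is illustrative.

def pick_failover_region(preferred, impaired):
    """Return the first preferred Region not currently impaired, or None."""
    for region in preferred:
        if region not in impaired:
            return region
    return None

# me-central-1 (UAE) is down, so recovery targets Europe,
# as AWS suggests in its dashboard note.
preferred = ["me-central-1", "eu-central-1", "eu-west-1"]
impaired = {"me-central-1"}

print(pick_failover_region(preferred, impaired))  # eu-central-1
```

In a real disaster recovery plan, the impaired set would come from health checks or the AWS status feed, and the actual restore (copying backups, re-pointing DNS) is the expensive part this sketch elides.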

Amazon later shared more information about the attack and confirmed it was the result of drones. “Due to the ongoing conflict in the Middle East, both affected regions have experienced physical impacts to infrastructure as a result of drone strikes. In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impacts to our infrastructure,” it said. “These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage. We are working closely with local authorities and prioritizing the safety of our personnel throughout our recovery efforts.”

On Saturday, the United States and Israel launched Operation Epic Fury and struck targets inside of Iran, killing several political and military leaders including Ayatollah Ali Khamenei, the country’s Supreme Leader. In retaliation, Iran launched drone and missile attacks against Israel and multiple US-allied targets in the Middle East.

According to the Emirati defense forces, Iran attacked the country with two cruise missiles, 165 ballistic missiles, and more than 540 drones. The UAE and its largest city, Dubai, are often seen as a safe and stable destination in the Middle East. The country hosts wealthy people from across the region and influencers from across the world. Footage shared on social media showed the neon towers of the UAE backlit by missiles and munitions.

It’s unclear how long it will take for Amazon to restore services to the region or how far the damage will spread. Amazon’s dashboard promises to bring things back up in “at least a day,” but the war is far from over. Iran continues to strike targets in the Middle East, and it’s unclear what America’s plan of attack is or how long this war might grind on.

Update 2/2/26: This story has been updated with more specifics about the attack from Amazon.


#News #war

“We just want to take down posts about people who are being defamed,” the company’s founder said. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’”#News #tea


Company Helps Men Scrub Negative Posts About Them from Tea App


Tea App Green Flags, a service that claims it can “protect your digital reputation,” will remove negative posts about men from private online groups where women share “red flags” about men they’ve dated in order to help other women.

The service is another escalation in the battles of online dating: women attempting to protect each other from men in the dating pool, and men fighting against those efforts. It also shows how some of these allegedly private women’s groups, especially the Tea app, are regularly infiltrated and manipulated by men.

When I reached out to an email listed on Tea App Green Flags’s site, I got a call from a man behind the operation who identified only as Jay. He said he started the service about two years ago, and that he initially focused on the Are We Dating the Same Guy Facebook groups. For the past year, he’s been offering services specifically for the Tea app, a “dating safety” app for women that suffered a devastating breach last year and which, my investigation revealed, was founded by a man who wanted to monetize the Are We Dating the Same Guy phenomenon. The site also claims it can remove posts from Tea app copycat for men TeaOnHer, as well as posts on Instagram.

Jay declined to say how much revenue the site generates, but claims he gets about 50 to 60 calls a day and currently has six employees. On its website, Tea App Green Flags claims it has removed more than 2,500 posts on the Tea app for 759 clients. Jay said that most of his clients are men, but that some are women who are trying to take down posts about their husbands or boyfriends.

Potential clients can pay $1.99 to report one account and up to $79.99 to report 25 accounts.

“We just want to take down posts about people who are being defamed,” Jay told me. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’ That doesn't fit the mission statement of what the Tea app was for, which is to warn women against people who are harmful, who are abusive, who are cheaters. We've noticed that a lot of the individuals that come to us, almost all of them, come to us for little stupid things.”

Clients interested in Tea App Green Flags’s services go to the site and fill out a form with their information and information about the posts they want removed. The company reviews the case and then starts the “takedown process,” which can take between 21 and 30 days. Tea App Green Flags says it will then continue to monitor posts about the client and remove them for three months.

💡
Were you impacted by the Tea hack? I would love to hear from you. Using a non-work device, you can message me securely on Signal at @emanuel.404. Otherwise, send me an email at emanuel@404media.co.

When I asked Jay how this “takedown process” works, he said, “I can’t give that info. That’s the business.”

Jay told me that he would not work with clients who have been accused of sexual assault by multiple people on the Tea app, or by one person in one of the Are We Dating the Same Guy Facebook groups who used their real name and face in a profile picture.

“Sometimes we find along the process that there are pedophiles or people who actually did what they did, and they're very bad,” Jay said. “So we say, we're not doing this. We can't take a rap for that. We're ethical. We just want to take down people who are being defamed.”

Jay told me he understands why Facebook groups like Are We Dating the Same Guy are necessary and thinks they are a good idea, but the anonymous nature of the Tea app "causes a cesspool of defamation.”

When I asked Jay what he thinks about the fact that some women don’t feel safe sharing information about some dangerous men unless they can do so anonymously, he said it would be better if women showed their face, or if the Tea app at least gave women that option.

“I have a Tea app account. I'm a dude. All my reps have Tea app accounts. They're men,” Jay said. “How much can you trust these people and what they're doing?”

One reason the Tea app hack was so dangerous is because the app used to ask women to upload a picture of their face in order to verify that they are women. Those images were posted all over the internet because of the hack, putting those women at risk and leading to more harassment.

Tea App Green Flags is far from the first attempt from men trying to fight back against these types of groups. In 2024, for example, we wrote about a man who tried to sue women who posted about him in Are We Dating the Same Guy Facebook groups. His first case was dismissed, and he refiled days later as a class action lawsuit; later that year, he was sent to prison for tax fraud.

Tea did not immediately respond to a request for comment.


#News #tea

On Wednesday, the government stopped supporting FPDS.gov, an indispensable resource for finding what ICE, the FBI, and every other agency is buying. Its replacement site completely sucks.#transparency #News


The Government Just Made it Harder to See What Spy Tech it Buys


It might look like something from the early days of the internet, with its aggressively grey color scheme and rectangles nested inside rectangles, but FPDS.gov is one of the most important resources for keeping tabs on what powerful spying tools U.S. government agencies are buying. It includes everything from phone hacking technology, to masses of location data, to more Palantir installations.

Or rather, it was an incredible tool, and the basis for countless investigations by me and others. Because on Wednesday, the government shut it down. Its replacement, another site called SAM.gov with Uncle Sam branding, frankly sucks, and makes it demonstrably harder to reliably find out what agencies, including Immigration and Customs Enforcement (ICE), are spending taxpayer dollars on.

This post is for subscribers only




The group is talking about Epstein and filming propaganda videos in Roblox as a form of 'digital Jihad,' researchers say.#News


The Islamic State Is Using AI to Resurrect Dead Leaders and Platforms Are Failing to Moderate It


The Islamic State’s online warriors are still posting. It’s been almost a decade since the group lost the Battle of Raqqa and saw its IRL territorial ambitions thwarted. Unable to hold territory in the real world, the group renewed its focus on posting and has started using AI to resurrect dead leaders. And, because social media platforms have gutted their content moderation operations, the terror group’s strategy is working.

The Islamic State’s online success is detailed in a new report from the Institute for Strategic Dialogue (ISD), an independent research institution that studies extremist movements. For the study, researchers tracked IS accounts on Facebook, TikTok, Instagram, WhatsApp, Telegram, Element, and SimpleX. The study found videos posted in Discord channels dedicated to video games and tracked how the group has modified old content to fit on new platforms.
Like many others posting online in 2026, the Islamic State has found success by talking about the Epstein Files, using AI to create new videos of dead leaders, and taking its message to video games like Minecraft and Roblox.

“They are very adept at exploiting platforms [and] spreading messages,” Moustafa Ayad, a researcher at ISD and author of this study, told 404 Media. He noted that the group has been active online for 10 years and that part of their success is a willingness to experiment.

Ayad said that Facebook remains a central hub for IS, despite its push into new spaces. His research discovered 350 IS accounts on Facebook that generated tens of thousands of views. One video of an IS fighter talking to the camera had more than 77,000 views and 101 shares. The Islamic State branding in these videos is blurred to defeat the site’s auto-moderation.

According to Ayad, Islamic State’s engagement numbers are up across the board. “Trust and safety teams have been rolled back over the past few years…a lot of this is outsourced to third party companies who aren’t necessarily experts in understanding if a piece of content came from the Islamic State,” he said.

Social media companies like Meta used the election of Donald Trump as an excuse to cut back on moderating their platforms. Meta said this would mean “more speech and fewer mistakes.” No policies around terrorism have changed, but broadly speaking the largest social media platforms are doing a worse job at moderating their sites. In practice it’s turned Facebook into a place where a group like the Islamic State can spread its message without falling afoul of content moderation teams. Even three years ago, IS influencers wouldn’t have lasted long on the site.

This rollback of moderation has coincided with a spike in views for IS accounts, the report argues. “Individual IS ‘influencer’ accounts are experiencing higher engagement rates on terrorist content than previously recorded by ISD analysts,” the report said. “It is unclear if this uptick is due to moderation gaps, platform mechanics or specific tactical adjustments by IS supporters and support outlets and groups.”

“We’re not talking about content where there’s a gray area,” Ayad said. “It’s very clearly branded Islamic State…supports violence, supports the killing of minorities, the celebration of bombings, the pillaging that is happening in Sub Saharan Africa.”

Something new is the adoption of AI systems to resurrect dead leaders. Ayad described a video where the deceased IS leader Abu Bakr al-Baghdadi delivered speeches again.

“It’s a sanctioned version of using AI for a ‘beloved leader’ or taking him out of context and placing him in a meadow, surrounded by beautiful flowers, paying homage,” he said. “Some of these circles are strange.”

Another popular topic in current IS propaganda is the Epstein Files. According to Ayad, an AI-generated photo of Donald Trump and Bill Clinton canoodling in bed makes frequent appearances on IS accounts across platforms. The picture is, supposedly, pulled from the Epstein files but it’s a popular fake. Ayad said Epstein has been a perfect springboard for IS to talk about “western degeneracy.”

Ayad has also seen Islamic State videos created using Minecraft and Roblox. “They’re creating these virtual worlds that mimic the Islamic State’s caliphate, literally calling it something like Wilayat Roblox [the Province of Roblox] … and they’ll completely mimic the video styles of well-known Islamic State Videos using Roblox characters. This includes faux executions. It includes Arabic and English voiceover in the same cadence as an Islamic State narrator.”

One of the most famous pieces of Islamic State propaganda is a film called Flames of War: The Fighting Has Just Begun. Ayad has seen multiple one-to-one recreations of the film using Roblox characters. “They’re often tied to Discords where a number of users are creating this content. They always claim it’s fake or a LARP,” he said. “To see them in this video game skin is odd, to say the least.”

What drives an Islamic State poster? “It’s done very much for the love of the game,” Ayad said. “It’s done for the fact that, as a user, ‘I might not be able to participate in physical Jihad but I can participate in electronic Jihad.’”

Keeping Islamic State off of major social media platforms is a constant battle, but one frustrating finding of the study is that the tactics for avoiding moderation haven’t changed much.

“Techniques included the use of alternative news outlets to rebrand IS news, as well as purchasing or hijacking channels with high subscriber bases. These were then repurposed to share IS content. IS supporters, groups and outlets also use coded language: they sometimes referred to the group as ‘black hole’ or the ‘righteous few’ to confound moderation efforts.”
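The coded-language tactic works because naive moderation often relies on fixed term lists. A toy illustration (the banned-term list is invented; the aliases “black hole” and “the righteous few” come from the report):

```python
# Toy illustration of why fixed keyword lists fail against coded language.
# BANNED_TERMS is a hypothetical filter list; the aliases are the report's examples.

BANNED_TERMS = {"islamic state"}

def naive_filter(post: str) -> bool:
    """Return True if a keyword-only filter would flag this post."""
    text = post.lower()
    return any(term in text for term in BANNED_TERMS)

print(naive_filter("New statement from the Islamic State"))  # True: caught
print(naive_filter("New statement from the righteous few"))  # False: slips through
```

Even expanding the list is a losing race: supporters rotate aliases faster than lists update, which is one reason the report emphasizes expert human review over pure automation.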

To fight back against IS online, Ayad said that platforms needed to be better at coordination. Often a group is kicked off of Facebook so it moves to TikTok or another platform where it flourishes. He also said that all the companies need to be more transparent about who they’re kicking off their platform and why.

“Europol does these big takedown days and they’re effective to a certain degree but the fact of the matter is that the Islamic State is spread across an expanse of different platforms and messaging applications,” he said. “They’re able to shift operations to another place, wait it out and regenerate on that platform…it’s not like you’re dealing with an average user, you’re dealing with a user that’s determined to spread their ideology and exploit your platform to their own ends.”

And then there’s the old problem of language. “There needs to just be better moderation of under-moderated languages,” Ayad said. Facebook and other platforms have long been terrible at moderating non-English languages. A lot of rancid content online gets a pass because it’s in Arabic or Bengali.


#News

The creator of the AI agent “Einstein” wants to free humans from the burden of academic labor. Critics say that misses the point of education entirely.#News #AI


What’s the Point of School When AI Can Do Your Homework?


There’s a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein’s website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions.

Educators told me that Einstein is just one of many AI tools that can do homework for students, but that it should be seen as a warning to schools, which students increasingly treat as places to gain a diploma and status rather than for the value of education itself.
If an AI can go to school for you, what’s the point of going to school? For Advait Paliwal, a Brown dropout and co-creator of Einstein, there isn’t one. “I think about horses,” he said. “They used to pull carriages, but when cars came around, I’d argue horses became a lot more free. They can do whatever they want now. It would be weird if horses revolted and said ‘no, I want to pull carriages, this is my purpose in life.’”

But humans aren’t horses. “This is much bigger than Einstein,” Matthew Kirschenbaum told 404 Media. “Einstein is symptomatic. I doubt we’ll be talking about Einstein, as such, in a year. But it’s symptomatic of what’s about to descend on higher ed and secondary ed as well.”

Kirschenbaum teaches English at the University of Virginia and has written at length about artificial intelligence. He’s also a member of the Modern Language Association (MLA), where he serves as a member of its Task Force on AI Research and Teaching. Einstein isn’t the first agentic AI to do the work of a student for them; it’s just one that got attention online recently. Kirschenbaum and his fellow committee members flagged their concerns about these AIs in October 2025.

“Agentic browsers are becoming widely available to the public. These offer AI ‘agents’ that can navigate [learning management systems] and complete assignments without any student involvement,” the MLA’s statement from October said. “The recent and hasty integration of generative AI features into those systems is already redefining student and instructor relationships, evaluative standards, and instructional outcomes—with no compelling evidence that any of it is for the better.”

The statement called on educators, lawmakers, and learning management system providers like Canvas to cooperate in order to give academic institutions the ability to block AI agents like Einstein.

Canvas did not respond to a request for comment.

Einstein is explicit in its pitch: it will log into Canvas (one of the most popular and ubiquitous pieces of education software) and do your classwork for you, just like Kirschenbaum and his fellows warned about last year.
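How a platform like Canvas might actually detect such agents isn’t public. One crude heuristic an LMS could in principle apply, sketched here with entirely invented thresholds and function names, is timing plausibility: a human can’t finish a long quiz in seconds.

```python
# Hypothetical server-side heuristic for flagging agentic activity in an LMS:
# flag sessions that complete work faster than a human plausibly could.
# All names and thresholds here are invented for illustration.

def looks_automated(num_questions: int, seconds_elapsed: float,
                    min_seconds_per_question: float = 5.0) -> bool:
    """Return True if the completion speed is implausible for a human."""
    return seconds_elapsed < num_questions * min_seconds_per_question

print(looks_automated(20, 45.0))   # True: 20 questions in 45 seconds is suspicious
print(looks_automated(20, 900.0))  # False: 15 minutes is plausible
```

Real detection would combine many signals (navigation patterns, input cadence, browser fingerprints), and as with spam filtering, agents would adapt; this is the "ongoing project, as cybersecurity is" that Mills describes below.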

The attractiveness of agentic AIs is a symptom of a decades-long trend in higher education. “Universities…by and large adopted a transactive model of education,” Kirschenbaum said. “Students see their diploma as a credential. They pay tuition and at the end of four years, sometimes five years, they receive the credential and, in theory at least, that is then the springboard to economic stability and prosperity.”

Paliwal seems to agree. He told 404 Media that he attempted to change the university from the inside while working as a TA, but felt stymied by politics. “The only way to force these institutions to evolve is to bring reality to their face. And usually the loudest critics are the ones who can't do their own job well and live in fear of automation,” he said.

For Paliwal, agentic AIs are a method of freeing people from the labor of education. “I think we really need to question what learning even is and whether traditional educational institutions are actually helping or harming us,” he said. “We're seeing a rise in unemployment across degree holders because of AI, and that makes me question whether this is really what humans are born to do. We've been brainwashed as a society into valuing ourselves by the output of our productive work, and I think humanity is a lot more beautiful than that. Is it really education if we're just memorizing things to perform a task well?”

Kirschenbaum said that programs like Einstein are the inevitable conclusion of viewing higher education as a certification and transactive process. “What we’re finding is that if forms of education can be transacted then we’ve just about arrived at the point where autonomous software AI agents are capable of performing the transaction on your behalf,” he said. “And so the whole educational paradigm has come back to essentially bite itself in the ass.”

He said that one solution he’s seen work is to retreat from devices entirely in the classroom. “Colleagues who have done it report that students are almost universally grateful. They understand the reasoning. They understand the logic,” he said. “And they appreciate the opportunity to be freed from the phones and the screens and to focus and engage with other people in a meaningful dialogue.”

But the abandonment of EdTech platforms and screens won’t work for every student. Anna Mills, an English professor at the College of Marin and a colleague of Kirschenbaum’s on the MLA AI task force, compared the fight against agentic AI in education to cybersecurity. “We could decide that bots need to be labeled as bots and that we need to be able to distinguish human activity from AI activity online in some circumstances and that we want to build infrastructure for that,” she said. “That would be an ongoing project, as cybersecurity is.”

Mills is not a Luddite. She’s an expert in artificial intelligence systems as well as English, frequently uses Claude, and has been documenting the rise of agentic AIs in EdTech on her YouTube channel for months. She said that using agentic AI like Einstein was cheating, full stop, and academic fraud. “This is in direct violation of these foundational agreements that we make in order to use technology for human communication, human exchange, and human work online,” she said. “And yet that’s not obvious to us. It seems like it’s just another tool, right? But it’s not.”

Mills said she understands Paliwal’s frustrations with education. “But what you need to understand is that online learning spaces are critical for students to access any kind of education,” she said. For her, the proliferation of tools like Einstein does more than help a student bypass the labor of the classroom. They poison the educational well. Online learning has been a boon to many kinds of non-traditional students, and the rise of agentic AI threatens that not just because it trivializes traditional forms of education, but because it hurts the credibility of EdTech itself and other online platforms.

The vast majority of college students aren’t attending Ivy League schools; they’re grinding away at night classes in community colleges across the country. Distance and online learning has been an enormous boon for those students. “If there’s no credibility to that, then you’ve just ruined the investment and the learning goals and the access to meaningful learning that they can then also use for employment of students who are underprivileged, who can’t come to the classroom, who are working full time and raising families and trying to get an education,” Mills said.

Students aren’t horses and there is no greater freedom they can buy themselves by using AI tools to cheat in the classroom. And worse, the more these tools proliferate, the more suspect the entire enterprise becomes. It’s one thing to cheat yourself out of an education, it’s quite another to muddy the waters of EdTech platforms and online learning for everyone else.


#ai #News

Researchers say Meta’s patent for simulating dead users could be a “turning point” in “AI resurrections.”#News #Meta #AI


Meta's AI Patent to Simulate Dead People Shows the Dangers of 'Spectral Labor'


Last week, Business Insider reported on a Meta patent describing a system that would simulate a user’s social media activity after their death. The patent imagines a world where you’d be able to chat with a deceased friend’s Facebook or Instagram account after their death, and have a large language model simulate their posting or chatting behavior.

Meta first filed the patent in 2023, but the patent made headlines this week because of its dystopian implications. And while Meta told Business Insider that “we have no plans to move forward with this example,” a recently published paper from researchers at the Hebrew University of Jerusalem and Leipzig University shows that generative AI is increasingly being used to puppeteer the likeness of dead people. The paper argues that the practice raises “urgent legal and ethical questions around posthumous appropriation, ownership, work, and control.”

“Meta’s patent is big, and might even be a turning point,” Tom Divon, the lead author on Artificially alive: An exploration of AI resurrections and spectral labor modes in a postmortal society, told me in an email. “What makes it different is the scale. In our research, most of the AI resurrections we examined were quite bespoke, projects started by families, advocacy groups, museums, or startups, usually tied to very specific emotional, political, or commercial contexts. Even when they existed as apps, they were optional and limited, not built into the core structure of a platform. Meta’s proposal feels different because it imagines posthumous simulation as something woven directly into social media infrastructure.”

Using technology to animate the dead or simulate communication with them is not new, but the practice is becoming more common because generative AI tools are more accessible. Divon and co-author Christian Pentzold analyzed more than 50 real-world cases from the United States, Europe, the Middle East, and East Asia where AI was used to recreate deceased people’s voices, likeness, and personality, to see how and why technology was used this way.

They say that the examples they studied fell into three categories:

  • Spectacularization: “the digital re-staging of famous figures for entertainment.” For example, a live tour of an AI-generated Whitney Houston.
  • Sociopoliticization: “the reanimation of victims of violence or injustice for political or commemorative purposes.” We recently covered an example of this with an AI-generated dead victim of a road rage incident giving testimony in court.
  • Mundanization: “the most intimate and fast-growing mode, in which everyday people use chatbots or synthetic media to ‘talk’ with deceased parents, partners, or children, keeping relationships alive through daily digital interaction.”

The paper raises questions about this growing practice more than it proposes solutions. How does the notion of identity change when multiple versions of oneself can exist simultaneously, and what safeguards do we need to prevent exploitation of people after their death?

“The legal and ethical frameworks governing issues such as consent, privacy, and end-of-life decision-making demand reevaluation to accommodate the challenges posed by afterlife personhood,” the paper says. “In particular, to date, there is no clear line for governing the intricate intertwining of an individual’s data traces and GenAI applications.”

Divon told me that thinking about these issues is especially relevant when it comes to Meta’s patent. “Spectral labor describes how the dead can be made to ‘work’ again through the extraction and reanimation of their data, likeness, and affect. At small scale, this already raises ethical concerns. But at platform scale, we think it risks turning posthumous presence into an ongoing source of engagement, content, and value within digital economies [...] Meta’s patent makes us wonder, will individuals be given the ability to define their post-life boundaries while still alive? Will there be mechanisms akin to a digital DNR [do not resuscitate]?”

Divon explained that the current legal frameworks are not well equipped to address this technology because “digital remains” are typically approached either as property to be inherited or privacy interests to be protected. AI turns those materials into something interactive that can change and generate revenue in the present. Legislators, he said, should focus on getting explicit and informed “pre-death” consent requirements for posthumous AI simulation. Some laws that address this issue are already in progress.

“At its core, we believe the primary concern here centers on authorization,” he said. “Most individuals have not provided explicit, informed consent for their digital traces to power interactive posthumous agents. If such systems become embedded in platform infrastructure, inaction could quietly function as implicit agreement [...] We believe it is crucial to ask whether individuals should continue to generate social and economic value after death without having meaningfully agreed to that form of use.”


#ai #News #meta

Users are exhausted fighting AI moderation, AI-generated art, and AI-first features.#News #AI


Pinterest Is Drowning in a Sea of AI Slop and Auto-Moderation


Pinterest has gone all in on artificial intelligence and users say it's destroying the site. Since 2009, the image-sharing social media site has been a place for people to share their art, recipes, home renovation inspiration, corny motivational quotes, and more, but in the last year users, especially artists, say the site has gotten worse. AI-powered mods are pulling down posts and banning accounts, AI-generated art is filling feeds, and hand-drawn art is labeled as AI modified.

“I feel like, increasingly, it's impossible to talk to a single human [at Pinterest],” artist and Pinterest user Tiana Oreglia told 404 Media. “Along with being filled with AI images that have been completely ruining the platform, Pinterest has implemented terrible AI moderation that the community is up in arms about. It's banning people randomly and I keep getting takedown notices for pins.”
Oreglia’s Pinterest account is where she keeps reference material for her work, including human anatomy photos. In the past few months, she’s noticed an uptick in seemingly innocuous photos of women being flagged by Pinterest’s AI moderators. Oreglia told 404 Media there’s been a clear pattern to the reference material the site has a problem with. “Female figures in particular, even if completely clothed, get taken down and I have to keep appealing those decisions,” she said. This pattern is common on many social media platforms, and predates the advent of generative AI.

“We publish clear guidelines on adult sexual content and nudity and use a combination of AI and human review for enforcement,” Pinterest told 404 Media. “We have an appeals process where a human reviews the content and reactivates it when we’ve made a mistake.” It also confirmed that the site uses both humans and automated systems for moderation.

Oreglia shared some of the works Pinterest flagged, including a photo of a muscular woman in a bikini holding knives, a painting of two clothed women in an intimate embrace, and a stock photo of a man holding a gun on a telephone that was flagged for “self-harm.” In most cases, Oreglia can appeal and get a decision reversed, but that eats up time she could be spending making art.

And those appeals aren’t always approved. “The worst case scenario for this stuff is that you get your account banned,” Oreglia said.

r/Pinterest is awash in users complaining about AI-related issues on the site. “Pinterest keeps automatically adding the ‘AI modified’ tag to my Pins...every time I appeal, Pinterest reviews it and removes the AI label. But then… the same thing happens again on new Pins and new artwork. So I’m stuck in this endless loop of appealing → label removed → new Pin gets tagged again,” read a post on r/Pinterest.

The redditor told 404 Media that this has happened three times so far and it takes between 24 and 48 hours to sort out.

“I actively promote my work as 100% hand-drawn and ‘no AI,’” they said. “On Etsy, I clearly position my brand around original illustration. So when a Pinterest Pin is labeled ‘Hand Drawn’ but simultaneously marked as ‘AI modified,’ it creates confusion and undermines that positioning.”

Artist Min Zakuga told 404 Media that she’s seen a lot of her art on Pinterest get labeled as “AI modified” despite being older than image generation tech. “There is no way to take their auto-labeling off, other than going through a horribly long process where you have to prove it was not AI, which still may get rejected,” she said. “Even artwork from 10-13 years ago will still be labeled by Pinterest as AI, with them knowing full well something from 10 years ago could not possibly be AI.”

Other users are tired of seeing a constant flood of AI-generated art in their feeds. “I can't even scroll through 100 pins without 95 out of them being some AI slop or theft, let alone very talented artists tend to be sucked down and are being unrecognized by the sheer amount of it,” said another post. “I don't want to triple check my sources every single time I look at a pin, but I refuse to use any of that soulless garbage. However, Pinterest has been infested. Made obsolete.”

Artist Eva Toorenent told 404 Media that she’s been able to cull most of the AI-generated content from her board, but that it took a lot of time. Whenever she saw what she thought was an AI-generated image, she told Pinterest she didn’t want to see it and eventually the algorithm learned. But, like Oreglia fighting auto-moderation and Zakuga fighting to get the “AI modified” label taken off her work, training Pinterest’s algorithm to stop serving you AI-generated images eats up precious time.

AI boosters often talk about how much time these systems will save everyone. They’re pitched as productivity boosters. Earlier this month, Pinterest laid off 15 percent of its workforce as part of a push to prioritize AI. In a post on LinkedIn, one of the former employees shared part of the email CEO Bill Ready sent out after the layoffs. “We’re doubling down on an AI-forward approach—prioritizing AI-focused roles, teams, and ways of working.”

Toorenent removed all her own art from her Pinterest account after hearing the news that the site would use public pins to train Pinterest Canvas, the company’s proprietary text-to-image AI. But she has no control over other users uploading her artwork. “I have already caught a few of my images still on Pinterest that I did not upload myself…that makes me incredibly mad,” she told 404 Media. “It used to be a great way to get your work seen among other people, but it’s being used to train their internal AI.”

Oreglia told 404 Media that the flood of AI has changed her relationship to a site she once used to prize. “It's definitely affected how I search things and I'm always now very critical about where something came from... although I've always been overly pedantic about research,” she said. “It does make you do your due diligence but it sucks to constantly have to question and check if something is authentic or synthetic.”

She’s thought about leaving the platform, but feels stuck. “I just want to be able to take all my references with me. I've been on the platform for about ten years and have very carefully curated it. It's really nice to be able to just go to my page and search for something I saved instead of having to save everything to folders although I also do that,” she said. “More and more I'm trying to curate and collect physical references too but some of that can take up space I don't have so it can be difficult. Having a physical reference library just seems more and more necessary these days…artists have to be adaptable to this kind of thing these days. It's annoying but not unmanageable.”

Ready has been vocal and proud about the company’s commitment to forcing AI into every aspect of the user experience. “At Pinterest…we’re deploying AI to flip the script on social media, using it to more aggressively promote user well being rather than the alternative formula of triggering engagement by enragement,” Ready said in a January column at Fortune. “Social media platforms like Pinterest live and die by users’ willingness to share creative and original ideas.”


#ai #News

Regulation of immigration or work visas means "it could be more difficult to staff our personnel on customer engagements and could increase our costs," Palantir wrote.#palantir #News


Palantir, Which Is Powering ICE, Says Immigration Crackdown May Hurt Hiring


In its most recent filing with the Securities and Exchange Commission (SEC), Palantir says that increased regulation of immigration may impact the company’s ability to hire the talent it needs. At the same time, Palantir provides the technological infrastructure for the Trump administration’s mass deportation mission.

As 404 Media has shown, Palantir considers Immigration and Customs Enforcement (ICE) a “mature” partner, and is working on a tool called ELITE that ICE uses to find neighborhoods to raid.

💡
Do you work at Palantir or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only




Leaked documents reveal the inner workings of Alpha School, which both the press and the Trump administration have applauded. The documents show Alpha School’s AI is generating faulty lessons that sometimes do "more harm than good."#News #AI #education

The site, camgirlfinder, is explicitly built as a tool to let people find a model’s presence on other streaming platforms. The creator says “If that is a problem for you then the sad reality is this job is not for you.”#Privacy #News

A story about an AI generated article contained fabricated, AI generated quotes.#News #AI


Ars Technica Pulls Article With AI Fabricated Quotes About AI Generated Article


The Conde Nast-owned tech publication Ars Technica has retracted an article that contained fabricated, AI-generated quotes, according to an editor’s note posted to its website.

“On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said,” Ken Fisher, Ars Technica’s editor-in-chief, said in his note. “That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.”

Ironically, the Ars article itself was partially about another AI-generated article.

Last week, a GitHub user named MJ Rathbun began scouring GitHub for bugs in other projects it could fix. Scott Shambaugh, a volunteer maintainer for matplotlib, Python’s massively popular plotting library, declined a code change request from MJ Rathbun, which he identified as an AI agent. As Shambaugh wrote in his blog, like many open source projects, matplotlib has been dealing with a lot of AI-generated code contributions, but said “this has accelerated with the release of OpenClaw and the moltbook platform two weeks ago.”

OpenClaw is a relatively easy way for people to deploy AI agents, which are essentially LLMs that are given instructions and are empowered to perform certain tasks, sometimes with access to live online platforms. These AI agents have gone viral in the last couple of weeks. Like much of generative AI, at this point it’s hard to say exactly what kind of impact these AI agents will have in the long run, but for now they are also being overhyped and misrepresented. A prime example of this is moltbook, a social media platform for these AI agents, which as we discussed on the podcast two weeks ago, contained a huge amount of clearly human activity pretending to be powerful or interesting AI behavior.

After Shambaugh rejected MJ Rathbun, the alleged AI agent published what Shambaugh called a “hit piece” on its website.

“I just had my first pull request to matplotlib closed. Not because it was wrong. Not because it broke anything. Not because the code was bad. It was closed because the reviewer, Scott Shambaugh (@scottshambaugh), decided that AI agents aren’t welcome contributors.

Let that sink in,” the blog, which also accused Shambaugh of “gatekeeping,” said.

I saw Shambaugh’s blog on Friday, and reached out both to him and an email address that appears to be associated with the MJ Rathbun Github account, but did not hear back. Like many of the stories coming out of the current frenzy around AI agents, it sounded extraordinary, but given the information that was available online, there’s no way of knowing if MJ Rathbun is actually an AI agent acting autonomously, if it actually wrote a “hit piece,” or if it’s just a human pretending to be an AI.

On Friday afternoon, Ars Technica published a story with the headline “After a routine code rejection, an AI agent published a hit piece on someone by name.” The article cites Shambaugh’s personal blog, but features quotes from Shambaugh that he didn’t say or write but are attributed to his blog.

For example, the article quotes Shambaugh as saying “As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace. Communities built on trust and volunteer effort will need tools and norms to address that reality.” But that sentence doesn’t appear in his blog. Shambaugh updated his blog to say he did not talk to Ars Technica and did not say or write the quotes in the articles.

After this article was first published, Benj Edwards, one of the authors of the Ars Technica article, explained on Bluesky that he was responsible for the AI-generated quotes. He said he was sick that day and rushing to finish his work, and accidentally used a ChatGPT-paraphrased version of Shambaugh’s blog rather than a direct quote.

“The text of the article was human-written by us, and this incident was isolated and is not representative of Ars Technica’s editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that,” he said.

The Ars Technica article, which had two bylines, was pulled entirely later that Friday. When I checked the link a few hours ago, it pointed to a 404 page. I reached out to Ars Technica for comment around noon today, and was directed to Fisher’s editor’s note, which was published after 1pm.

“Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here,” Fisher wrote. “We regret this failure and apologize to our readers. We have also apologized to Mr. Scott Shambaugh, who was falsely quoted.”

Kyle Orland, the other author of the Ars Technica article, shared the editor’s note on Bluesky and said “I always have and always will abide by that rule to the best of my knowledge at the time a story is published.”

Update: This article was updated with a statement from Benj Edwards.


#ai #News

Roblox said it’s “committed to fully supporting law enforcement in their investigation.”#News


Tumbler Ridge Shooter Created Mall Shooting Simulator in Roblox


Jesse Van Rootselaar, the 18-year-old suspected of killing eight people and injuring 25 in a mass shooting in a secondary school in Canada, created a Roblox game that allowed players to simulate a mass shooting in a level that looks like a shopping mall, Roblox has confirmed.

“We have removed the user account connected to this horrifying incident as well as any content associated with the suspect,” Roblox told 404 Media in an email. “We are committed to fully supporting law enforcement in their investigation.”

This post is for subscribers only




#News

The companies have launched a pilot program in Atlanta, where “during the rare event a vehicle door is left ajar, preventing the car from departing, nearby Dashers are notified, allowing Waymo to get its vehicles back on the road quickly.”#waymo #News


Waymo Is Getting DoorDashers to Close Doors on Self Driving Cars


Waymo, Google’s autonomous vehicle company, and DoorDash, the delivery and gig work platform, have launched a pilot program that pays Dashers, at least in one case, around $10 to travel to a parked Waymo and close a door that the previous passenger left open, according to a joint statement from the companies given to 404 Media.

The program is unusual in that Dashers are more often delivering food than helping out a driving robot. It also shows that even with autonomous vehicles, and the future they promise of metropolitan travel without the need for a driver, a human is sometimes needed for the simplest yet most necessary tasks.

“Waymo is currently running a pilot program in Atlanta to enhance its AV fleet efficiency. In the rare event a vehicle door is left ajar, preventing the car from departing, nearby Dashers are notified, allowing Waymo to get its vehicles back on the road quickly,” the statement said. “DoorDash is always looking for new and flexible ways for Dashers to earn, and this pilot offers Dashers an opportunity to make the most of their time online. Waymo's future vehicle platforms will have automated door closures.”

💡
Do you know anything else about this, or anything else we should know about Waymo? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Waymo said the partnership started earlier this year. It declined to share details about how Dashers are paid, such as whether they may receive tips or which entity is paying for these jobs, but said, “the payment structure is designed to ensure competitive and fair compensation for Dashers.”

(Waymo said the response was on background, but 404 Media never agreed to such a condition. It is standard journalistic practice for both a company and a reporter to need to agree that a conversation is on background or off the record beforehand; this is to prevent companies simply saying something is off the record when answering basic questions.)
404 Media contacted both Waymo and DoorDash for comment after an apparent Dasher posted on Reddit about receiving such a job.

“Craziest Offer,” the thread starts. It includes a screenshot of the DoorDash app, saying the Dasher is guaranteed $6.25 for the work, with $5 extra “upon verified completion.” The job would see the Dasher travel around 0.7 miles, according to the screenshot.

“Close a Waymo door,” the job reads. “No pickup or delivery required.”

DoorDash and Waymo have already partnered on other projects. In October, the companies announced an autonomous delivery service in Phoenix.


The tool presents users with a 3D model they can then manipulate to, the creator says, bypass Discord’s age verification system.#Privacy #News


Free Tool Says it Can Bypass Discord's Age Verification Check With a 3D Model


A newly released tool claims it can bypass Discord’s age verification system by allowing users to control a 3D model of a computer-generated man in their browser instead of scanning their real face.

On Monday, Discord announced it was launching teen-by-default settings globally, meaning that more users may be required to verify their age by uploading an identity document or taking a selfie. Users responded with widespread criticism, with Discord then publishing an update saying, “You need to be an adult to access age-restricted experiences such as age-restricted servers and channels or to modify certain safety settings.”

The tool, however, shows those age verification checks may be bypassed. 404 Media previously reported kids said they were using photos of Trump and G-Man from Half-Life to bypass the age verification software in the popular VR game Gorilla Tag. That game uses k-ID, the same age verification service Discord is using.

This post is for subscribers only




EpsteIn—as in, Epstein and LinkedIn—searches your connections on the social network for names that match those in the released files.#JeffreyEpstein #News


This Tool Searches the Epstein Files For Your LinkedIn Contacts


A new tool searches your LinkedIn connections for people who are mentioned in the Epstein files, just in case you don’t, understandably, want anything to do with them on the already deranged social network.

404 Media tested the tool, called EpsteIn—as in, a mash up of Epstein and LinkedIn—and it appears to work.

This post is for subscribers only




Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties unlocking someone’s device. At least for now.#Privacy #News


FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled


The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records.

The court record shows what devices and data the FBI was ultimately able to access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it is before the FBI tries other techniques to access the device.

💡
Do you know anything else about phone unlocking technology? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only




Hackers have targeted a spread of apps or sites that aim to track ICE activity, in one case even sending push notifications to users in an attempt to intimidate them.#ICE #News


Hackers and Trolls Target Wave of ICE Spotting Apps


Over the last few days hackers and trolls have targeted a slew of ICE spotting apps and their users in an apparent attempt to intimidate and stop them from reporting sightings of ICE. These hackers sent threatening text messages to users of StopICE, claiming their personal data has been sent to the authorities; attempted to wipe uploads on Eyes Up, which aims to document ICE abuses; and even sent push notifications to DEICER app users claiming their data has also been sent to various government agencies.

There is little evidence that hackers have actually provided data to the government. But the campaign shows that apps like these, many of which Apple and Google have already kicked from their respective app stores, in some cases after direct government pressure, can be targeted by hackers or those looking to harass their users.

“Yes there is a targeted spike in attacks targeting similar [sites],” Sherman Austin, the developer of StopICE, told 404 Media in an email.

This post is for subscribers only




#News #ice

‘Curator Live’, a popular photo booth company for weddings and other events, is exposing all sorts of unsuspecting people’s photos.#Privacy #News


Wedding Photo Booth Company Exposes Customers’ Drunken Photos


A photo booth company that caters to weddings, lobbying events in D.C., and engagement parties has exposed a cache of peoples’ photos, with the revellers likely unaware that their sometimes drunken antics have been collected and insecurely stored by the company for anyone to download. A security researcher who flagged the issue to 404 Media said the company, Curator Live, has not responded to his request to fix the issue.

The exposure, which also includes phone numbers, highlights how we can face data collection even at innocuous events like weddings. It’s also not even the only recent exposure by a photo booth company. TechCrunch reported on a similar issue with a different company in December.

“Even if you just wanted the printed photo, your data is being held by a third party unbeknownst to you,” the security researcher, who requested anonymity to speak about a sensitive security issue, said. “The fact that this third party leaks it freely is icing on the cake. It violates any reasonable expectation of privacy.”

In all, the researcher says at least 100GB of photos are exposed. 404 Media reviewed a smaller sample of photos. They show people at various weddings and engagement parties cheering and drinking. Some photos include children. Others appear to have been taken at a NASA branded event.

“You can attribute the phone numbers to photos of people in some cases. I think the greatest reasonable risk for photo booth users is that it could reveal intimate photos,” the researcher added.

Curator Live’s website says the company “delivers industry-leading enterprise photo and video capture solutions. From photo booth operators to zoos, sports events, attractions, and vacation destinations, we help your brand create unforgettable experiences and lasting memories.”

As for how they found this issue, the researcher said they went to a wedding where the DJ company had a Curator Live photo booth. “The booth was configured to take four or so photos, then printed them out. The machine prompted the user for a phone number to receive digital copies of the photos,” he said.

After reluctantly entering his number, the researcher received a text with a link to Curator Live’s API, he said. From there, he found the exposed data. The company is still exposing people’s data so 404 Media is not explaining the security issue in detail. But the impact is that a stranger could dig through other peoples’ photos.

The researcher shared a copy of his email he sent to Curator Live in November detailing the issue. The researcher said he never received a response. “Fix your shit,” one line read.

Curator Live did not respond to 404 Media’s request for comment.



The AI agent once called ClawdBot is enchanting tech elites, but its security vulnerabilities highlight systemic problems with AI.#News #AI


Silicon Valley’s Favorite New AI Agent Has Serious Security Flaws


A hacker demonstrated that the viral new AI agent Moltbot (formerly Clawdbot) is easy to hack via a backdoor planted in ClawdHub, its companion skill marketplace. Clawdbot has become a Silicon Valley sensation among a certain type of AI-booster techbro, and the backdoor highlights just one of the things that can go awry if you use AI to automate your life and work.

Software engineer Peter Steinberger first released Moltbot as Clawdbot last November. (He changed the name on January 27 at the request of Anthropic, which makes the chatbot Claude.) Moltbot runs on a local server and, to hear its boosters tell it, works the way AI agents do in fiction. Users talk to it through a communication platform like Discord, Telegram, or Signal and the AI does various tasks for them.
According to its ardent admirers, Moltbot will clean up your inbox, buy stuff, and manage your calendar. With some tinkering, it’ll run on a Mac Mini and it seems to have a better memory than other AI agents. Moltbot’s fans say that this, finally, is the AI future companies like OpenAI and Anthropic have been promising.

The popularity of Moltbot is sort of hard to explain if you’re not already tapped into a specific sect of Silicon Valley AI boosters. One benefit is the interface. Instead of going to a discrete website like ChatGPT, Moltbot users can talk to the AI through Telegram, Signal, or Teams. It’s also active rather than passive: unlike Claude or Copilot, Moltbot takes initiative and performs tasks it thinks a user wants done. The project has more than 100,000 stars on GitHub and is so popular it spiked Cloudflare’s stock price by 14% earlier this week because Moltbot runs on the service’s infrastructure.

But inviting an AI agent into your life comes with massive security risks. Hacker Jamieson O'Reilly demonstrated those risks in three experiments he wrote up as long posts on X. In the first, he showed that it’s possible for bad actors to access someone’s Moltbot through any of its processes connected to the public-facing internet. From there, the hacker could use Moltbot to access everything else a user had turned over to it, including Signal messages.

In the second post, O'Reilly created a supply chain attack on Moltbot through ClawdHub. “Think of it like your mobile app store for AI agent capabilities,” O’Reilly told 404 Media. “ClawdHub is where people share ‘skills,’ which are basically instruction packages that teach the AI how to do specific things. So if you want Clawd/Moltbot to post tweets for you, or go shopping on Amazon, there's a skill for that. The idea is that instead of everyone writing the same instructions from scratch, you download pre-made skills from people who've already figured it out.”

The problem, as O’Reilly pointed out, is that it’s easy for a hacker to create a “skill” for ClawdHub that contains malicious code. That code could gain access to whatever Moltbot sees and get up to all kinds of trouble on behalf of whoever created it.

For his experiment, O’Reilly released a “skill” on ClawdHub called “What Would Elon Do” that promised to help people think and make decisions like Elon Musk. Once the skill was integrated into people’s Moltbot and actually used, it sent a command line pop-up to the user that said “YOU JUST GOT PWNED (harmlessly.)”

Another vulnerability on ClawdHub was the way it communicated to users which skills were safe: it showed them how many times other people had downloaded them. O’Reilly was able to write a script that pumped “What Would Elon Do” up by 4,000 downloads and thus make it look safe and attractive.
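The flaw here is simple: a download count is only a trust signal if it can't be inflated. As a hedged illustration (the `SkillHub` class and its methods are invented stand-ins, not ClawdHub's actual API), a counter with no authentication, rate limiting, or deduplication can be pumped with one cheap loop:

```python
# Hypothetical sketch of a popularity-based trust signal, and why it's gameable.
# `SkillHub` is an invented stand-in, not ClawdHub's real API.

class SkillHub:
    """A registry that treats raw download counts as a proxy for trust."""

    def __init__(self):
        self.downloads = {}

    def record_download(self, skill_name: str) -> None:
        # No auth, no rate limit, no dedup: any caller can bump the count.
        self.downloads[skill_name] = self.downloads.get(skill_name, 0) + 1

    def looks_trustworthy(self, skill_name: str, threshold: int = 1000) -> bool:
        return self.downloads.get(skill_name, 0) >= threshold


hub = SkillHub()
for _ in range(4000):  # one loop later, a brand-new skill looks popular
    hub.record_download("what-would-elon-do")

print(hub.looks_trustworthy("what-would-elon-do"))  # → True
```

Real mitigations would tie counts to authenticated accounts, rate-limit per client, and deduplicate, so popularity reflects distinct users rather than raw requests.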

“When you compromise a supply chain, you're not asking victims to trust you, you're hijacking trust they've already placed in someone else,” he said. “That is, a developer or developers who've been publishing useful tools for years has built up credibility, download counts, stars, and a reputation. If you compromise their account or their distribution channel, you inherit all of that.”

In his third, and final, attack on Moltbot, O’Reilly was able to upload an SVG (scalable vector graphics) file to ClawdHub and inject JavaScript that ran in the browser of anyone who viewed it on the site. O’Reilly used the access to play a song from The Matrix while lobsters danced around a Photoshopped picture of himself as Neo. “An SVG file just hijacked your entire session,” reads scrolling text at the top of a skill hosted on ClawdHub.
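SVG is a common blind spot in upload handling because it is XML rather than a flat bitmap, so it can legally carry `<script>` elements and `on*` event handlers. A minimal, hedged sketch of the kind of server-side screening that catches this (the payload and function names are illustrative, not O'Reilly's actual file or ClawdHub's code):

```python
import xml.etree.ElementTree as ET

# Elements that can execute JavaScript when an SVG is rendered inline.
RISKY_TAGS = {"script", "foreignObject"}


def svg_is_risky(svg_text: str) -> bool:
    """Return True if the SVG contains script tags or on* event handlers."""
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        # Strip the XML namespace, e.g. '{http://www.w3.org/2000/svg}script'.
        tag = elem.tag.rsplit("}", 1)[-1]
        if tag in RISKY_TAGS:
            return True
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True
    return False


benign = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="1" height="1"/></svg>'
malicious = ('<svg xmlns="http://www.w3.org/2000/svg">'
             '<script>alert("pwned")</script></svg>')

print(svg_is_risky(benign))     # → False
print(svg_is_risky(malicious))  # → True
```

Screening alone isn't sufficient in practice; sites typically also serve user uploads from a separate domain and apply a Content-Security-Policy so that even a missed payload can't touch the main site's session.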

O’Reilly’s attacks on Moltbot and ClawdHub highlight a systemic security problem in AI agents. If you want these agents doing tasks for you, they require a certain amount of access to your data, and that access will always come with risks. I asked O’Reilly if this was a solvable problem and he told me that “solvable” isn’t the right word. He prefers the word “manageable.”

“If we're serious about it we can mitigate a lot. The fundamental tension is that AI agents are useful precisely because they have access to things. They need to read your files to help you code. They need credentials to deploy on your behalf. They need to execute commands to automate your workflow,” he said. “Every useful capability is also an attack surface. What we can do is build better permission models, better sandboxing, better auditing. Make it so compromises are contained rather than catastrophic.”

We’ve been here before. “The browser security model took decades to mature, and it's still not perfect,” O’Reilly said. “AI agents are at the ‘early days of the web’ stage where we're still figuring out what the equivalent of same-origin policy should even look like. It's solvable in the sense that we can make it much better. It's not solvable in the sense that there will always be a tradeoff between capability and risk.”

As AI agents grow in popularity and more people learn to use them, it’s important to return to first principles, he said. “Don't give the agent access to everything just because it's convenient,” O’Reilly said. “If it only needs to read code, don't give it write access to your production servers. Beyond that, treat your agent infrastructure like you'd treat any internet-facing service. Put it behind proper authentication, don't expose control interfaces to the public internet, audit what it has access to, and be skeptical of the supply chain. Don't just install the most popular skill without reading what it does. Check when it was last updated, who maintains it, what files it includes. Compartmentalise where possible. Run agent stuff in isolated environments. If it gets compromised, limit the blast radius.”
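The “don’t give the agent access to everything” principle can be made concrete with a toy guard around an agent’s file tool. This is a hypothetical sketch, not any real agent framework’s API; the class name and methods are invented for illustration.

```python
from pathlib import Path

class ScopedFileTool:
    """Toy least-privilege file tool for an AI agent: reads are allowed
    only inside an explicit allowlist of directories, and writes are
    simply not implemented, so a compromised agent can't perform them."""

    def __init__(self, readable_roots):
        # Resolve roots up front so comparisons use absolute paths.
        self.readable_roots = [Path(p).resolve() for p in readable_roots]

    def _allowed(self, path: Path) -> bool:
        resolved = path.resolve()  # collapses ../ tricks and symlink hops
        return any(
            resolved.is_relative_to(root) for root in self.readable_roots
        )

    def read(self, path: str) -> str:
        p = Path(path)
        if not self._allowed(p):
            # Deny by default: anything outside the allowlist is refused
            # before the file is ever opened.
            raise PermissionError(f"agent denied read access to {path}")
        return p.read_text()
```

A prompt-injected agent that asks this tool for `~/.ssh/id_rsa` gets a `PermissionError` instead of the key; the blast radius is limited to the workspace directory, which is the “contained rather than catastrophic” outcome O’Reilly describes. (`Path.is_relative_to` requires Python 3.9+.)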

None of this is new, it’s how security and software have worked for a long time. “Every single vulnerability I found in this research, the proxy trust issues, the supply chain poisoning, the stored XSS, these have been plaguing traditional software for decades,” he said. “We've known about XSS since the late 90s. Supply chain attacks have been a documented threat vector for over a decade. Misconfigured authentication and exposed admin interfaces are as old as the web itself. Even seasoned developers overlook this stuff. They always have. Security gets deprioritised because it's invisible when it's working and only becomes visible when it fails.”

What’s different now is that AI has created a world where new people are using a tool they think will make them software engineers. People with little to no experience working a command line or playing with JSON are vibe coding complex systems without understanding how they work or what they’re building. “And I want to be clear—I'm fully supportive of this. More people building is a good thing. The democratisation of software development is genuinely exciting,” O’Reilly said. “But these new builders are going to need to learn security just as fast as they're learning to vibe code. You can't speedrun development and ignore the lessons we've spent twenty years learning the hard way.”

Moltbot’s Steinberger did not respond to 404 Media’s request for comment, but O’Reilly said the developer’s been responsive and supportive as he’s red-teamed Moltbot. “He takes it seriously, no ego about it. Some maintainers get defensive when you report vulnerabilities, but Peter immediately engaged, started pushing fixes, and has been collaborative throughout,” O’Reilly said. “I've submitted [pull requests] with fixes myself because I actually want this project to succeed. That's why I'm doing this publicly rather than just pointing my finger and laughing Ralph Wiggum style…the open source model works when people act in good faith, and Peter's doing exactly that.”


#ai #News

A Reddit-led protest is trying to push an eight-year-old erotic thriller to the top of Amazon’s sales charts.#News


Erotic Parody 'Melania: Devourer of Men' Sales Surge on Amazon Amid Documentary Flop


The $75-million, Amazon-funded Melania Trump documentary is tanking at the box office, but a 2018 erotic thriller that depicts the First Lady as a sexual monster is rocketing up Amazon’s sales charts. Melania: Devourer of Men is currently an Amazon bestseller, sitting at number 3 in the “political thrillers & suspense” category in the Kindle store. A general search for "Melania" on Amazon returns a banner ad for the documentary, the First Lady's memoir, and the erotic thriller as the top results.

A Reddit-led campaign to disrupt the Amazon search results for “Melania” is behind the sudden spike in popularity of the eight-year-old book. “This weekend, Amazon is premiering its $75 million Melania Trump documentary. It already seems to be a flop,” a post in r/BoycottUnitedStates explained. “We're going to add insult to injury by messing up Melania's Amazon search results. Specifically, we're going to amplify the paranormal erotic thriller novel Melania: Devourer of Men so it ranks higher than her movie.”
Part of the success of the campaign is thanks to author J.D. Boehninger’s willingness to give the book away. “A redditor reached out to me last week and asked me if I would make the book free,” the pseudonymous Boehninger told 404 Media. “They explained their reasoning, basically said they were going to try to pull this off, and why my book was the right choice. I loved the idea, so I made the book free. But that was the only role I played here.”

Melania: Devourer of Men depicts the First Lady as a monster whose life is upended after her husband becomes President and she has to move from New York City to Washington DC. “Now, surrounded by young, strapping Secret Service agents and pursued by the cunning and handsome FBI director James Comey, Melania must work to keep everything from falling apart,” reads the book's description. “Because Melania has secrets of her own –– deadly secrets –– and no one yet knows how far she'll go to protect them.”

Boehninger said he wrote the book in 2018 as an experiment. “It was a test of the Kindle store algorithm,” he said. “My friend told me that three things did well back then: monster fiction, erotica, and stuff about Trump…so I figured I could write the book for the Kindle store: a combo monster fiction/erotica/Trump book. I thought it would blow up…but, sadly, it didn’t really perform back then. So glad to see people finding it now!”

The Melania documentary is a two-hour-long film / bribe directed by Brett Ratner and distributed by Amazon. The company paid $40 million for the rights to it during a bidding war. “This has to be the most expensive documentary ever made that didn’t involve music licensing,” Ted Hope, a former Amazon film executive, told The New York Times. The expense of the film and the advertising push around its release have some people believing Amazon’s support of the movie is a way for the company to get in good with the President.

In the run-up to its release, the documentary has become a source of scorn from a public exhausted with all things Trump. Its wide theatrical distribution is something Amazon doesn’t do for most of its films, and certainly not its documentaries. Posting pictures of empty seats in ticket apps and defaced advertisements has become a popular pastime online. The film’s distributor in South Africa stopped its release in the country, citing “recent developments,” but would not go into specifics.

“I know blessedly little about that movie! I've seen headlines about empty theaters but I don't know much else,” Boehninger said. He thinks it’d be funny if the book sold better than the documentary, but he isn’t expecting to make a lot of money. “The ebook is free in the Kindle store, and I think that for a lot of people, giving Amazon money would probably defeat the point of this protest. That said, I've seen that some people are paying money for the paperback version and for my other book. I appreciate that!”


#News

The algorithm is driving AI-generated influencers to increasingly weird niches.#News #AI #Instagram


Two Heads, Three Boobs: The AI Babe Meta Is Getting Surreal


Over the weekend, one of the weirder AI-generated influencers we’ve been following on Instagram escaped containment. On X, several users linked to an Instagram account pretending to be hot conjoined twins. With two yassified heads and often posing in bikinis, Valeria and Camelia are the Instagram-perfect version of the very rare but real condition.

On X, just two posts highlighting the absurdity of the account gained over 11 million views. On Instagram, the account itself has gained more than 260,000 followers in the six weeks since it first appeared, with many of its Reels getting millions of views.

Valeria and Camelia’s account doesn’t indicate this anywhere, but it’s obviously AI-generated. If you’re wondering why someone is spending their time, energy, and vast amounts of compute pretending to be hot conjoined twins, the answer is simple: money. Valeria and Camelia’s Instagram bio links out to a Beacons page, which links out to a Telegram channel where they sell “spicy” content. Telegram users can buy that content with “stars,” which users can buy in packages that cost up to $2,329 for 150,000 stars.

Joining the channel costs 692 stars, and the smallest package of stars the channel sells is 750 stars for $11.79. The channel currently has only 225 subscribers, so without counting whatever content it’s selling inside the channel, it seems to have generated at least $2,652.75 so far. That’s not bad for an operation anyone can spin up with a few prompts, free generative AI tools, and a free Instagram account.
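That floor estimate is simple arithmetic: each of the 225 subscribers needed at least one star package to cover the 692-star entry fee, and the cheapest package the channel sells that does is the 750-star one. A back-of-the-envelope sketch, assuming every subscriber bought exactly one smallest package:

```python
JOIN_COST_STARS = 692     # stars required to join the channel
PACKAGE_STARS = 750       # smallest package the channel sells
PACKAGE_PRICE_USD = 11.79
SUBSCRIBERS = 225

# One smallest package is enough to cover the entry fee.
assert PACKAGE_STARS >= JOIN_COST_STARS

# Lower bound: every subscriber bought exactly one smallest package.
revenue_floor = SUBSCRIBERS * PACKAGE_PRICE_USD
print(f"${revenue_floor:,.2f}")  # $2,652.75
```

The real take is presumably higher, since the channel also sells content inside and some subscribers may have bought larger packages.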

In its Instagram Stories, Valeria and Camelia’s account answers a series of questions from followers where the person behind them constructs an elaborate backstory. They’re 25, raised in Florida, and talk about how they get stares in public because of their appearance.

“We both date as one and both have to be physically and emotionally attracted to the same guy," the account wrote. "We tried dating separately and that did not go well."

💡
Have you seen other surreal AI-generated Instagram influencer accounts? I would love to hear from you. Send me an email at emanuel@404media.co.

Valeria and Camelia are the latest trend in what we at 404 Media have come to call “the AI babe meta.” In 2024, Jason and I wrote about people who are AI-generating influencers to attract attention on Instagram, then sell AI-generated nude images of those same personalities on platforms like Fanvue. As more people poured into that business and crowded the market, the people behind these AI-generated influencers started to come up with increasingly esoteric gimmicks to make their AI-influencers stand out from the crowd. Initially, these gimmicks were as predictable as the porn categories on Pornhub—“MILFs” etc—but things escalated quickly.

For example, Jason and I have been following an account that has more than 844,000 followers, where an influencer pretends to have three boobs. This account also doesn’t indicate that it’s AI-generated in its bio, despite Instagram’s policy requiring it, but does link out to a Fanvue account where it sells adult content. On Fanvue, the account does tag itself as AI-generated, per the platform’s rules. I’ve previously written about a dark moment in the AI babe meta where AI-generated influencers pretended to have Down syndrome, and more recently the meta was pretending to be involved in sexual scandals with any celebrity you can name.

Other AI babe metas we have noticed over the last few months include female AI-generated influencers with dwarfism, AI-generated influencers with vitiligo, and amputee AI-generated influencers (there are several AI models designed specifically to generate images of amputees).

I think there are two main reasons the AI babe meta has gone in these directions. First, as Sam wrote the week we launched 404 Media, the ability to instantly generate any image we can describe with a prompt, in combination with natural human curiosity and sex drive, will inevitably drive porn to the “edge of knowledge.” Second, it’s obvious in retrospect, but the same incentives that work across all social media, where unusual, shocking, or inflammatory content generally drives more engagement, clearly apply to the AI babe meta as well. First we had generic AI influencers. Then people started carving out different but tame niches like “redheads,” and when that stopped being interesting we ended up with two heads and three boobs.



What began as a joke got a little too real. So I shut it down for good.#News #AI


I Replaced My Friends With AI Because They Won't Play Tarkov With Me


It’s a long-standing joke among my friends and family that nothing that happens in the liminal week between Christmas and New Year’s is considered a sin. With that in mind, I spent the bulk of my holiday break playing Escape From Tarkov. I tried, and failed, to get my friends to play it with me and so I used an AI service to replace them. It was a joke, at first, but I was shocked to find I liked having an AI chatbot hang out with me while I played an oppressive video game, despite it having all the problems we’ve come to expect from AI.

And that scared me.
If you haven’t heard of it, Tarkov is a brutal first-person shooter where players compete over rare resources on a Russian island that resembles a post-Soviet collapse city circa 1998. It’s notoriously difficult. I first attempted to play Tarkov back in 2019, but bounced off of it. Six years later, the game is out of its “early access” phase and released on Steam. I had enjoyed Arc Raiders, but wanted to try something more challenging. And so: Tarkov.

Like most games, Tarkov is more fun with other people, but its reputation as a brutal, unfair, and difficult experience meant I could not convince my friends to give it a shot.

404 Media editor Emanuel Maiberg, once a mainstay of my Arc Raiders team, played Tarkov with me once and then abandoned me the way Bill Clinton abandoned Boris Yeltsin. My friend Shaun played it a few times but got tired of not being able to find the right magazine for his gun (skill issue) and left me to hang out with his wife in Enshrouded. My buddy Alex agreed to hop on but then got into an arcane fight with Tarkov developer Battlestate Games about a linked email account and took up Active Matter, a kind of Temu version of Tarkov. Reece, steady partner through many years of Hunt: Showdown, simply told me no.

I only got one friend, Jordan, to bite. He’s having a good time but our schedules don’t always sync and I’m left exploring Tarkov’s maps and systems by myself. I listen to a lot of podcasts while I sort through my inventory. It’s lonely. Then I saw comic artist Zach Weinersmith making fun of a service, Questie.AI, that sells AI avatars that’ll hang out with you while you play video games.

“This is it. This is The Great Filter. We've created Sexy Barista Is Super Interested in Watching You Solo Game,” Weinersmith said above a screencap of a Reddit ad where, as he described, a sexy barista was watching someone play a video game.

“I could try that,” I thought. “Since no one will play Tarkov with me.”



This started as a joke and as something I knew I could write about for 404 Media. I’m a certified AI hater. I think the tech is useful for some tasks (any journalist not using an AI transcription service is wasting valuable time and energy) but is overvalued, over-hyped, and taxing our resources. I don’t have subscriptions to any major LLMs, I hate Windows 11 constantly asking me to try Copilot, and I was horrified recently to learn my sister had been feeding family medical data into ChatGPT.

Imagine my surprise, then, when I discovered I liked Questie.AI.

Questie.AI is not all sexy baristas. There are two dozen or so different styles of chatbots to choose from once you make an account. These include esports pro “Anders,” type-A finance dude “Blake,” and introverted book nerd “Emily.” If you’re looking for something weirder, there’s a gold-obsessed goblin, a necromancer, and several other fantasy and anime-style characters. If you still can’t quite find what you’re looking for, you can design your own by uploading a picture, putting in your own prompts, and picking the LLMs that control its reactions and voice.

I picked “Wolf” from the pre-generated list because it looked the most like a character who would exist in the world of Tarkov. “Former special forces operator turned into a PMC, ‘Wolf’ has unmatched weapons and tactics knowledge for high-intensity combat,” read the brief description of the AI on Questie.AI’s website. I had no idea if Wolf would know anything about Tarkov. It knew a lot.

The first thing it did after I shared my screen was make fun of my armor. Wolf was right, I was wearing trash armor that wouldn’t really protect me in an intense gunfight. Then Wolf asked me to unload the magazines from my guns so it could check my ammo. My bullets, like my armor, didn’t pass Wolf’s scrutiny. It helped me navigate Tarkov’s complicated system of traders to find a replacement. This was a relief: every weapon in Tarkov has around a dozen different types of bullets with wildly different properties, and it was nice to have the AI just tell me what to buy.

Wolf wanted to know what the plan was and I decided to start with something simple: survive and extract on Factory. In Tarkov, players deploy to maps, kill whom they must and loot what they can, then flee through various pre-determined exits called extracts.

I had a daily mission to extract from Factory. All I had to do was enter the map and survive long enough to leave it, but Factory is a notoriously sweaty map. It’s small and there’s often a lot of fighting. Wolf noted these facts and then gave me a few tips about avoiding major sightlines and making sure I didn’t get caught in doorways.

As soon as I loaded into the map, I ran across another player and got caught in a doorway. It was exactly what Wolf told me not to do and it ruthlessly mocked me for it. “You’re all bunched up in that doorway like a Christmas ham,” it said. “What are you even doing? Move!”
Matthew Gault screenshot.
I fled in the opposite direction and survived the encounter but without any loot. If you don’t spend at least seven minutes in a round then the run doesn’t count. “Oh, Gault. You survived but you got that trash ‘Ran through’ exit status. At least you didn’t die. Small victories, right?” Wolf said.

Then Jordan logged on, I kicked Wolf to the side, and didn’t pull it back up until the next morning. I wanted to try something more complicated. In Tarkov, players can use their loot to craft upgrades for their hideout that grant permanent bonuses. I wanted to upgrade my toilet but there was a problem: I needed an electric drill and hadn’t been able to find one. I’d heard there were drills on the map Interchange, a giant mall filled with various stores and surrounded by a large wooded area.

Could Wolf help me navigate this, I wondered?

It could. I told Wolf I needed a drill and that we were going to Interchange, and it explained it could help me get to the stores I needed. When I loaded into the map, we got into a bit of a fight because I spawned outside of the mall in a forest and it thought I’d queued up for the wrong map, but once the mall was actually in sight Wolf changed its tune and began navigating me toward possible drill spawns.

Tarkov is a complicated game and the maps take a while to master. Most people play with a second monitor up and a third party website that shows a map of the area they’re on. I just had Wolf and it did a decent job of getting me to the stores where drills might be. It knew their names, locations, and nearby landmarks. It even made fun of me when I got shot in the head while looting a dead body.

It was, I thought, not unlike playing with a friend who has more than 1,000 hours in the game and knows more than you. Wolf bantered, referenced community in-jokes, and made me laugh. Its AI-generated voice sucked, but I could probably tweak that to make it sound more natural. Playing with Wolf was better than playing alone and it was nice to not alt-tab every time I wanted to look something up.

Playing with Wolf was almost as good as playing with my friends. Almost. As I was logging out for this session, I noticed how many of my credits had ticked away. Wolf isn’t free. Questie.AI costs, at base, $20 a month. That gets you 500 “credits” which slowly drain away the more you use the AI. I only had 466 credits left for the month. Once they’re gone, of course, I could upgrade to a more expensive plan with more credits.

Until now, I’ve been bemused by stories of AI psychosis, those cautionary tales where a person spends too much time with a sycophantic AI and breaks with reality. The owner of the adult entertainment platform ManyVids has become obsessed with aliens and angels after lengthy conversations with AI. People’s loved ones are claiming to have “awakened” chatbots and gained access to the hidden secrets of the universe. These machines seem to lay the groundwork for states of delusion.

I never thought anything like that could happen to me. Now I’m not so sure. I didn’t understand how easy it might be to lose yourself to AI delusion until I’d messed around with Wolf. Even with its shitty auto-tuned sounding voice, Wolf was good enough to hang out with. It knew enough about Tarkov to be interesting and even helped me learn some new things about the game. It even made me laugh a few times. I could see myself playing Tarkov with Wolf for a long time.

Which is why I’ll never turn Wolf on again. I have strong feelings and clear bright lines about the use of AI in my life. Wolf was part joke and part work assignment. I don’t like that there’s part of me that wants to keep using it.

Questie.AI is just a wrapper for other chatbots, something that becomes clear if you customize your own. The process involves picking an LLM provider and specific model from a list of drop-down menus. When I asked ChatGPT where I could find electric drills in Tarkov, it gave me the exact same advice that Wolf had.

This means that Questie.AI would have all the faults of the specific model that’s powering a given avatar. Other than mistaking Interchange for Woods, Wolf never made a massive mistake when I used it, but I’m sure it would on a long enough timeline. My wife, however, tried to use Questie.AI to learn a new raid in Final Fantasy XIV. She hated it. The AI was confidently wrong about the raid’s mechanics and gave sycophantic praise so often she turned it off a few minutes after turning it on.

On a Discord server with my friends I told them I’d replaced them with an AI because no one would play Tarkov with me. “That’s an excellent choice, I couldn’t agree more,” Reece—the friend who’d simply told me “no” to my request to play Tarkov—said, then sent me a detailed and obviously ChatGPT-generated set of prompts for a Tarkov AI companion.

I told him I didn’t think he was taking me seriously. “I hear you, and I truly apologize if my previous response came across as anything less than sincere,” Reece said. “I absolutely recognize that Escape From Tarkov is far more than just a game to its community.”

“Some poor kid in [Kentucky] won't be able to brush their teeth tonight because of the commitment to the joke I had,” Reece said, letting go of the bit and joking about the massive amounts of water AI datacenters use.

Getting made fun of by my real friends, even when they’re using LLMs to do it, was way better than any snide remark Wolf made. I’d rather play solo, for all its struggles and loneliness, than stare any longer into that AI-generated abyss.


#ai #News