A judge in London tossed out witness testimony after discovering the man was receiving coaching through a pair of smartglasses.#News #AI
Witness Caught Using Smartglasses in Court Blames It All on ChatGPT
An insolvency judge in England tossed out testimony after discovering a witness was being coached on what to say in real time through a pair of smartglasses. When the voice of the coach started coming through the cellphone after it was disconnected from the glasses, the witness blamed the whole thing on ChatGPT.

Insolvency and Companies Court (ICC) Judge Agnello KC in Britain wrote up the incident after it happened in January, and the UK-based legal research blog Legal Futures was first to report it. The case considered the liquidation of a Lithuanian company co-owned by a man named Laimonas Jakštys. Jakštys was in court to get his business off an insolvency list and to put himself back in charge of it. It didn’t go well.
“Right at the start of his cross examination, he seemed to pause quite a bit before replying to the questions being asked,” Judge Agnello wrote. “These questions were interpreted and then there was a pause before there was a reply. After several questions, [defense lawyer Sarah Walker] then informed me that she could hear an interference coming from around Mr. Jakštys and asked if Mr. Jakštys could take his glasses off for a period as she was aware smart glasses existed.”

There was a Lithuanian interpreter on hand to help Jakštys talk to the court, and she, too, said she could hear voices from Jakštys’s glasses. The judge pointed out they were smart glasses and asked him to take them off. “After a few further questions, when the interpreter was in the process of translating a question, Mr Jakštys’ mobile phone started broadcasting out loud with the voice of someone talking,” Judge Agnello wrote. “There was clearly someone on the mobile phone talking to Mr. Jakštys. He then removed his mobile phone from his inner jacket pocket. At my direction, the smart glasses and his mobile were placed into the hands of his solicitor.”
Jakštys showed up the next day in the glasses again and the judge told him to turn them off. “Jakštys denied that he was using the smart glasses to receive the answers that he was to give in court to the questions being asked,” the judgement said. “He also denied that his smart glasses were linked to his mobile phone at the time that he was giving evidence before me.”
During the court appearance, Jakštys claimed his mobile phone had been stolen but couldn’t provide a police report for the incident. He also repeatedly received calls on his smartglasses-connected phone from a number listed as “abra kadabra.” The call log showed that many of the calls occurred when he was on the witness stand. The judge asked him about the identity of “abra kadabra” and Jakštys said it was a taxi driver.
“When he was pressed as to why all these calls were made…Mr. Jakštys stated that he was not able to remember. This was a reply which he also gave frequently during his evidence,” Judge Agnello said.
In the end, the judge tossed out all of Jakštys’ testimony. “He was untruthful in relation to his use about the smart glasses and in being coached through the smart glasses,” the judgement said. “In my judgment, from what occurred in court, it is clear that call was made, connected to his smart glasses and continued during his evidence until his mobile phone was removed from him. When asked about this, his explanation was that he thought it was ChatGPT which caused the voice to be heard from his mobile phone once his smart glasses had been removed. That lacks any credibility.”
This incident in the London court is just another in a long line of bad behavior from people wearing smartglasses. CBP agents have been spotted wearing them during immigration raids and Harvard students have loaded them with facial recognition tech to instantly dox strangers.
Someone Put Facial Recognition Tech onto Meta's Smart Glasses to Instantly Dox Strangers
The technology, which marries Meta’s smart Ray Ban glasses with the facial recognition service Pimeyes and some other tools, lets someone automatically go from face, to name, to phone number, and home address. — Joseph Cox (404 Media)
On Friday, a judge ordered those who uploaded the videos to YouTube to remove them. By Saturday, a backup of the videos was available online as a torrent and on the Internet Archive.#DOGE #News
The Removed DOGE Deposition Videos Have Already Been Backed Up Across the Internet
The DOGE deposition videos a judge ordered removed from YouTube on Friday after they had gone massively viral have since been backed up across the internet, including as a torrent and to the Internet Archive.

The videos included DOGE members unable or unwilling to define DEI; discussing how they used ChatGPT and terms such as “black” and “homosexual” to flag grants for termination, but not “white” or “caucasian”; and acknowledging that despite their aggressive cuts they failed to achieve the stated goal of lowering the government deficit.

The news shows the difficulty of trying to remove material from the internet, especially material that has a high public interest and has likely already been viewed millions of times. It’s also an example of the “Streisand Effect,” a phenomenon where trying to suppress information often results in the information spreading further.
💡
Do you know anything else about this case? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
The government asked a judge to stop the spread of the videos on YouTube. The judge agreed, and ordered their immediate removal.#DOGE #News
DOGE Deposition Videos Taken Down After Judge Order and Widespread Mockery
A judge on Friday ordered the immediate removal of a series of depositions of members of DOGE, but not before clips of the depositions, including one in which a member was largely unable to define DEI, went viral and were covered widely, including by 404 Media.

At the time of writing, the depositions are not available on YouTube, where the Modern Language Association had uploaded them. The MLA, the American Council of Learned Societies, and the American Historical Association are suing the National Endowment for the Humanities (NEH) and others over DOGE’s cuts of hundreds of millions of dollars worth of grants. Neither the plaintiffs nor the government immediately responded to a request for comment.
The data drops as Sen. Bernie Sanders calls for a moratorium on datacenter construction. 'We need to take a deep breath. We need to make sure that AI and robotics work for all of us, not just a handful of billionaires.'#News
People Hate Datacenters, Survey Finds
A new study from the Pew Research Center asked Americans about their feelings toward datacenters, and the results are not positive. Pew published the study the day after Sen. Bernie Sanders called for a moratorium on the construction of datacenters in the United States amid mounting public concern about the buildings’ impacts on local communities.

Pew surveyed 8,512 adults in January and asked them a broad range of questions about how they felt about datacenters. Most of the respondents said they’d heard of datacenters, and the more they’d read, the less they liked them.
💡
Is an unwanted datacenter being built in your community? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Most of the Americans surveyed believe that datacenters are bad for the environment, home energy costs, and the quality of life of people living nearby, and the numbers aren’t close. Only four percent of people thought datacenters were good for the environment, six percent good for jobs, and six percent good for people’s quality of life.
Despite those negative feelings, many of the people surveyed thought that datacenters would be good for jobs in the communities where they’re built and would boost local tax revenue. “Still, Americans are less likely to express positive views of data centers’ impact in these areas than to express negative views of their effects on the environment, energy costs and people’s quality of life nearby,” the research said.
Research shows that the reality of job creation by datacenters doesn’t actually live up to the promises from those lobbying to build them. “Data centers do not bring high-paying tech jobs to local communities because they operate as infrastructure projects rather than traditional job-creating businesses,” University of Michigan researchers wrote in a 2025 brief. “Although the construction of data centers can create many jobs, those are short lived.”
The survey charts growing anti-datacenter sentiment in America. The US is in the middle of an infrastructure buildout on the scale of the Manhattan Project. In a mad dash to build out AI systems, companies are constructing massive buildings and energy infrastructure across the country, often with little input from local communities and at enormous cost.
The city of Ypsilanti, Michigan is fighting to stop the construction of a $1.2 billion datacenter that would be used to test nuclear weapons. In the middle of a massive winter storm that paralyzed the state in January, lawmakers in a rural South Carolina county pushed through the approval of a controversial $2.4 billion datacenter. In Oklahoma, police arrested a man who was speaking in opposition to a datacenter after he went slightly over his time during a city council meeting.
Datacenters are terrible neighbors. The buildings drive up the cost of energy for people who live nearby, consume massive amounts of water, and can produce noises and fumes that hurt locals. In Mississippi, locals are concerned about the pollution and noise caused by an xAI datacenter powered by gas turbines. A proposed datacenter project near Amarillo, Texas would be powered by four massive nuclear generators and pull water from an aquifer with dwindling reserves. In an effort to quell fears about power consumption, Trump made Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI sign a pledge to keep energy costs down. But a pledge isn’t a law. It’s not even an executive order.
Pew’s research came out the day after Sanders announced he was proposing legislation to put a moratorium on the construction of new datacenters in the US. “We are at the beginning of the most profound technological revolution in world history. That’s the truth,” Sanders said in a video posted on social media. “This is a revolution which will bring unimaginable changes to our world. This is a revolution which will impact our economy with massive job replacement. It will threaten our democratic institutions. It will impact our emotional well-being and what it even means to be a human being.”
We need a moratorium on AI data centers NOW. Here’s why. pic.twitter.com/dRfAdQ67zD
— Sen. Bernie Sanders (@SenSanders) March 11, 2026
“Congress hasn’t a clue how to respond…and protect the American people. It’s not only not having a clue, they’re busy out raising money all day long from AI and their super PACs,” Sanders said. “We need a moratorium on datacenters. We need to take a deep breath. We need to make sure that AI and robotics work for all of us, not just a handful of billionaires.”
Copilot “can help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis,” the memo says.#AI #News
Here’s the Memo Approving Gemini, ChatGPT, and Copilot for Use in the Senate
A top Senate administrator approved OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot for official use in the Senate, the New York Times reported on Tuesday. 404 Media has obtained the full text of the memo and is publishing it below.

“The Sergeant at Arms (SAA) office of the Chief Information Officer (CIO) has approved the use of three Generative Artificial Intelligence (AI) platforms with Senate data,” the memo starts. It also says the SAA will provide each Senate employee with one free license to either Gemini Chat or ChatGPT Enterprise, with Copilot also available at no cost.
💡
Do you know anything else about the government's use of AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
‘Elon Musk is an aggressive and irresponsible salesman, who has a long history of making dangerous design choices, and over-promising the features of his products.’#News #Tesla
Cybertruck Tried to Drive 'Straight Off an Overpass,' Attorney Claims
A Cybertruck owner in Texas is suing Tesla for $1,000,000 in damages for “grossly negligent conduct” following an accident on a Houston highway that involved the vehicle’s self-driving feature. According to the lawsuit, Tesla is to blame for the crash because CEO Elon Musk has oversold the truck’s ability to drive itself.

As originally reported by the Austin American-Statesman, Justine Saint Amour bought a Cybertruck from a used car dealership in Florida and drove it until it crashed on a Houston overpass on August 18, 2025. That summer day, Saint Amour was driving down Houston’s 69 Eastex Freeway with the vehicle’s full self-driving (FSD) mode engaged.
“Something terrifying happened, without warning, the vehicle attempted to drive straight off an overpass,” Bob Hilliard, Saint Amour’s attorney, told 404 Media in an emailed statement. “She tried to take control, but crashed into the barrier and was seriously injured—mostly her shoulder, neck, and back.”

Hilliard shared a photo of the aftermath of the crash and dashcam footage with 404 Media. In the video, the Cybertruck proceeds down the highway and hops an intersection instead of turning to the right and following the road. It stops when it slams into a signpost on the overpass.
The lawsuit blames the crash on Musk. “Elon Musk is an aggressive and irresponsible salesman, who has a long history of making dangerous design choices, and over-promising the features of his products,” the lawsuit said. “This promotion of products, for capabilities that they do not have, is the reason for this incident.”
Musk has spent the past few years promoting Tesla’s ability to drive itself, a feature that costs $99 a month and is sold as “Full Self-Driving.” But, the lawyers said, the FSD feature doesn’t work as advertised, and it’s irresponsible of Tesla and Musk to market their vehicles as having it. “Despite this dangerous condition of Tesla’s ‘self-driving’ vehicles, Elon Musk and Tesla have made representations in the year 2019 that Tesla’s full ‘self-driving’ vehicles were fully operational and safe.”

Tesla and Musk have gotten in trouble for this before. In February, the company agreed it would stop using the terms “autopilot” and “full self-driving” when advertising its vehicles in California. There have been multiple fatal and non-fatal crashes involving Tesla vehicles running on Autopilot, including one in which a man hit a parked police car in 2024. In August, a judge ordered Tesla to pay $200 million in punitive damages and another $43 million in compensatory damages to the family of a 22-year-old who died in a crash involving the car’s Autopilot system.
According to the lawsuit, one of the reasons this keeps happening is because Musk intervened directly to make Teslas cheaper by using cameras instead of LiDAR, which uses laser light to create a 3D map of the surrounding area. “Elon Musk’s intervention into the design of Tesla vehicles has long been reckless and dangerous. While engineers at Tesla recommended the super-human vision of LiDAR be included for self-driving vehicles, and competitors like Waymo and Cruise relied heavily on LiDAR, Musk chose instead to rely only upon cheap video cameras,” the lawsuit said. “Musk referred to the LiDAR used by his safer competitors as expensive and unnecessary.”
Fully automated driving is a hard tech problem. LiDAR is better than basic cameras, but it’s still not perfect, and LiDAR-based self-driving cars crash too. There are other problems as well. In cities operating Google’s Waymo cars, passengers are leaving the doors open and Waymo is contracting DoorDashers to close them for $10 a pop, a Waymo in LA attempted to drive through a police standoff, and a woman in San Francisco was trapped in a Waymo after men blocked the car and started to harass her.
Men Harassed A Woman In A Driverless Waymo, Trapping Her In Traffic
Two men stood in front of the autonomous vehicle, operated by ride-hailing company Waymo, and literally tipped a fedora at her while she told them to move out of the way. — Samantha Cole (404 Media)
The ‘Freedom Trucks’ will haul AI slop George Washington on a tour across 48 American states.#News #AI
I Visited the ‘Freedom Truck’ to Meet PragerU’s AI Slop Founders
In the parking lot of Seven Oaks Elementary School in South Carolina, on one of the first hot days of the year, I watched an AI-generated George Washington talk about the American Revolution. “Our rights are a gift from God, not a favor from kings or courts,” slop Washington told me. It spoke from a screen that stretched floor to ceiling, trimmed by a fancy frame.

The intended effect is to make it appear as if the founding father is a painting come to life, a piece of history talking to the viewer. The actual effect was a reminder that the AI slop aesthetic is synonymous with the Trump presidency and has become part of the visual language of fascism. Which is fitting, because AI George Washington is the result of a collaboration between the Trump White House and online content mill PragerU.
The AI slop founding father is part of a touring exhibit of Freedom Trucks commissioned by PragerU in honor of the 250th anniversary of American independence. The trucks are a mobile museum exhibit meant to teach kids about the founding of the country. It’s pitched at kids—most of the “content,” as staff on site called it, is meant for a younger audience—but the trucks have viewing hours open to the general public. Nick Bravo, a PragerU employee on hand to answer questions, told me that there are six Freedom Trucks and that the plan is to have them travel the 48 contiguous United States over the next year.

I was drawn to the Freedom Truck because I’d heard the trucks contained AI-generated recreations of Revolutionary figures like Washington, Betsy Ross, and the Marquis de Lafayette, similar to the ones on display at the White House. To my disappointment, the AI-generated videos in the Freedom Truck are remarkably boring.
As I watched the AI George Washington deliver a by-the-books version of the American story, I thought about Jerry Jones. The famously vain owner of the Dallas Cowboys commissioned an AI version of himself for AT&T stadium in 2023. Fans who make the pilgrimage to the stadium can watch a presentation and ask the AI Jones questions. The AI wanders a big screen while it talks to the audience.
Other than the lazy AI generated videos, the Freedom Truck doesn’t have much to offer. I signed a digital copy of the Declaration of Independence on a touchscreen and took a quiz that asked leading questions designed to find out if I was a “loyalist or patriot.”
“The British Army sends soldiers to Boston. How do you react?” Answer 1: “View them as occupiers violating colonial liberty.” Answer 2: “Welcome them as defenders of law and order.” With ICE and the National Guard patrolling American cities, I wondered how supporters of the current administration would answer that one.
PragerU is known for its “America can do no wrong” view of US history. Its short form video content offers a cartoon version of the past stripped of nuance and context where the country lives up to the myth that it is a “Shining City On a Hill.” According to PragerU, white people abolished slavery and dropping the atomic bomb on Japan was a necessary thing that “shortened the war and saved countless lives.” Now PragerU is taking its view of history on tour across the country. School children in every state will wander these trucks and encounter an AI slop version of the past.
Bravo told me that all the truck’s content was generated as part of a partnership between PragerU and Michigan’s Hillsdale College—a Christian university that helped craft Project 2025. There were, of course, hints of Project 2025 around the edges of the child-friendly AI-generated videos. Slavery isn’t ignored but the stories of early African Americans like poet Phillis Wheatley focus on her celebration of America rather than how she arrived there. On the museum’s “Wall of Heroes,” Whittaker Chambers is nestled between architect Frank Lloyd Wright and painter Norman Rockwell.
A small note near the floor at the exit of the truck notes the collaboration of PragerU and Hillsdale College, and claims that “neither institution received any federal funds and both generously contributed their own resources to help create this educational exhibit.” It also said “this truck was made possible through a grant from the Institute of Museum and Library Services,” which is, of course, a federal agency.
Every AI-generated video ended with a title card showing the White House and PragerU’s logo. “The White House is grateful for the partnership with PragerU and the US Department of Education for the production of this museum,” the card said. “This partnership does not constitute or imply a US Government or US Department of Education endorsement of PragerU.”
Trump attempted to dismantle the Institute of Museum and Library Services (IMLS) via executive order in 2025, but the courts blocked it. Libraries and museums have since reported that the IMLS grant process has taken on a “chilling” pro-Trump political turn. The administration has also attempted to dismantle the Department of Education.
Trump’s voice was the last thing I heard as I wandered into the bright afternoon sun. “I want to thank PragerU for helping us share this incredible story,” he said in a recorded video that played on a loop in the Freedom Truck. “I hope you will join me in helping to make America’s 250th anniversary a year we will never forget.”
Grant Guidelines for Libraries and Museums Take “Chilling” Political Turn Under Trump
Former Institute of Museum and Library Services leaders from both political parties expressed concern that the new funding guidelines could encourage a more constrained or distorted view of American history. — Alex Bandoni (ProPublica)
A court record reviewed by 404 Media shows privacy-focused email provider Proton Mail handed over payment data related to a Stop Cop City email account to the Swiss government, which handed it to the FBI.#News #Privacy
Proton Mail Helped FBI Unmask Anonymous ‘Stop Cop City’ Protester
Privacy-focused email provider Proton Mail provided Swiss authorities with payment data that the FBI then used to determine who was allegedly behind an anonymous account affiliated with the Stop Cop City movement in Atlanta, according to a court record reviewed by 404 Media.

The records provide insight into the sort of data that Proton Mail, which prides itself both on its end-to-end encryption and on being governed only by Swiss privacy law, can and does provide to third parties. In this case, the Proton Mail account was affiliated with the Defend the Atlanta Forest (DTAF) group and Stop Cop City movement in Atlanta, which authorities were investigating for their connection to arson, vandalism, and doxing. Broadly, members were protesting the building of a large police training center next to the Intrenchment Creek Park in Atlanta, and actions also included camping in the forest and lawsuits. Charges against more than 60 people have since been dropped.
‘How ghoulish.’ The depravity economy moves into the nuclear war business.#News #nuclear
Polymarket Pulls Bet on Nuclear Detonation in 2026
For a few hours on Tuesday, Polymarket hosted a bet on the possibility of nuclear war in 2026. The market asked the question “Nuclear weapon detonation by …?” and racked up close to a million dollars in trading volume before Polymarket took the unusual step of removing the market from its website. It did not simply close down the bet; the market was “archived,” meaning that no record of it exists anymore. That is strange, as many older and paid-out bets remain on the site.

Pulling a bet like this is unusual, and the company did not respond to 404 Media’s request for an explanation as to why. Word of the nuke bet drew wide attention online from critics already upset about Polymarket’s place in the depravity economy.
“I have not seen anything like this before,” Jon Wolfsthal, a former special assistant to President Barack Obama and a member of the Bulletin of the Atomic Scientists, told 404 Media. “As a citizen, it seems dangerous to enable people in power to place bets anonymously on things that might happen, creating an incentive to act on a basis of personal gain and not the national interest.”

Polymarket doesn’t often balk at bets on violence and war. There are multiple markets covering the wars in Ukraine and Iran, and many other bets about nuclear detonations. “Will a US ally get a nuke before 2027?” and “Russia nuclear test by …?” are both still actively trading. An older version of the “nuclear weapon detonation” market is still on the site and did almost $3 million in trading before closing and paying out at the end of 2025. Polymarket has hosted a bet on the same question every year for the past few years.
The gambling market has been under fire this week after gaining a lot of attention for its various bets on the war in Iran. Gamblers spent more than $5 million betting on the question “Will the Iranian regime fall by June 30?” People have been caught manipulating war maps to cash in on frontline advances in Ukraine. And someone made $400,000 using inside knowledge to place bets about the capture of Maduro.
“How ghoulish. Especially given how much insider trading apparently goes on with current events bets,” Alex Wellerstein, a nuclear historian and creator of the NUKEMAP, told 404 Media.
Wellerstein said that betting on nuclear war isn’t unprecedented, but that it’s usually tongue-in-cheek and conducted by insiders. “The thing that immediately comes to mind is Fermi's ‘side bet’ that the Trinity test would destroy the atmosphere in 1945—which was a joke, as nobody would be able to collect if it had happened,” he said.
“A flip of this is in Daniel Ellsberg's The Doomsday Machine, in which he eschewed paying into a pension in the early 1960s because he thought the odds of a future nuclear war were so high that it was better to spend the money sooner rather than later. So another kind of bet, but a private one,” Wellerstein added. “And whenever experts give ‘odds’ on nuclear use (which the intelligence community does, apparently), they are to some degree indulging in this kind of impulse. But not for the hope of personal profit—usually it is because they want to avoid such an outcome.”
Polymarket CEO Shayne Coplan has repeatedly called the site “the future of news” and has suggested that prediction markets give the public a clearer picture of events because money is on the line. The reality is that the financial incentives pervert reality. Nuclear war, it seems, was a bit too dramatic for Polymarket to host a wager on. But Polymarket has few moral qualms, has not told anyone why it “archived” the bet, and it’s possible it did so for some arcane technical reason and not because it got squeamish. Polymarket did not respond to 404 Media’s request for comment.
'Unauthorized' Edit to Ukraine's Frontline Maps Points to Polymarket's War Betting
It looks like someone invented a fake Russian advance in Ukraine to manipulate online gambling markets. — Matthew Gault (404 Media)
AI translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.#News #Wikipedia #AI
AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles
Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations added “hallucinations,” or errors, to the resulting articles.

The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how those errors are remedied by Wikipedia’s open governance model.
The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.
“We do so by providing monthly stipends to full-time contributors and translators,” OKA’s site says. “We leverage AI (Large Language Models) to automate most of the work.”
The problem is that editors started to notice that some of these translations introduced errors to articles. For example, a draft translation for a Wikipedia article about the French royal La Bourdonnaye family cites a book and specific page number when discussing the origin of the family. A Wikipedia editor, Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia, checked that source and found that the specific page of that book “doesn't talk about the La Bourdonnaye family at all.”
“To measure the rate of error, I actually decided to do a spot-check, during the discussion, of the first few translations that were listed, and already spotted a few errors there, so it isn't just a matter of cherry-picked cases,” Lebleu told me. “Some of the articles had swapped sources or added unsourced sentences with no explanation, while 1879 French Senate election added paragraphs sourced from material completely unrelated to what was written!”
As Wikipedia editors looked at more OKA-translated articles, they found more issues.
“Many of the results are very problematic, with a large number of [...] editors who clearly have very poor English, don't read through their work (or are incapable of seeing problems) and don't add links and so on,” a Wikipedia page discussing the OKA translation said. The same Wikipedia page also notes that in some cases the copy/paste nature of OKA translators’ work breaks the formatting on some articles.
Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.
For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”
Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also powers an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.
“The use of Grok proved controversial, notably given the reasons for which Grok has been in the news recently, and a recent in-house study showed ChatGPT and Claude perform more accurately, leading them to switch a few days ago, although they still recommend Grok as ‘valuable for experienced editors handling complex, template-heavy articles,’” Lebleu told me.
Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule.
“OKA translators who have received, within six months, four (correctly applied) warnings about content that fails verification will be blocked without further warning if another example is found,” the Wikipedia editors wrote. “Content added by an OKA translator who is subsequently blocked for failing verification may be presumptively deleted [...] unless an editor in good standing is willing to take responsibility for it.”
A job posting for a “Wikipedia Translator” from OKA offers $397 a month for working up to 40 hours per week. The job listing says translators are expected to publish “5-20 articles per week (depending on size).”
“They leverage machine translation to accelerate the process. We have published over 1500 articles and the number grows every day,” the job posting says.
“Given this precarious status, I am worried that more uncertainty in the translator duties may lead to an overloading of responsibilities, which is worrying as independent contractors do not necessarily have the same protections as paid employees,” Lebleu wrote in the public Wikipedia discussion about OKA.
Jonathan Zimmermann, the founder and president of OKA, who goes by 7804j on Wikipedia, told me that translators are paid hourly, not per article, and that there is no fixed article quota.
“We emphasize quality over speed,” Zimmerman told me in an email. “In fact, some of the problematic cases involved unusually high output relative to time spent — which in retrospect was a warning sign. Those cases were driven by individual enthusiasm and speed rather than institutional pressure.”
Zimmerman told me that “errors absolutely do occur,” but that OKA’s process includes human review, requires translators to check their content against cited sources, and that “senior editors periodically review samples, especially from newer translators.”
“Following the recent discussion, we have strengthened our safeguards,” Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.
Using AI to check the output of AI is itself a method prone to errors. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students; internal testing found it had at least a 10 percent failure rate.
“I agree that using AI to check AI can absolutely fail — and in some contexts it can fail at very high rates. We’re not assuming the secondary model is reliable in isolation,” Zimmerman said. “The key point is that we’re not replacing human verification with automated verification. The second model is a complement to manual review, not a substitute for it.”
“When a coordinated project uses AI tools and operates at scale, it’s going to attract attention. I understand why editors would examine that closely. Ultimately, the outcome of the discussion formalized expectations that are largely aligned with our existing internal policies,” Zimmerman added. “However, these restrictions apply specifically to OKA translators. I would prefer that standards apply equally to everyone, but I also recognize that organized, funded efforts are often held to a higher bar.”
'Students Are Being Treated Like Guinea Pigs:' Inside an AI-Powered Private School
Leaked documents reveal the inner workings of Alpha School, which both the press and the Trump administration have applauded. The documents show Alpha School's AI is generating faulty lessons that sometimes do "more harm than good." — Emanuel Maiberg (404 Media)
AI is a “game changer” for what the FBI calls remote access operations, an FBI official said in response to a 404 Media question on Tuesday.#fbi #Hacking #News
The FBI Discusses the Potential to Use AI to Hack Targets
Update: After this article was published, the FBI’s national press office issued a statement about the remarks. For clarity, 404 Media has updated the headline and included the FBI’s full statement below, but left the original article intact so readers can see the comments made at the conference. An FBI spokesperson told 404 Media that “DAD Hemmen was discussing hypothetical FBI application of AI technology in the context of positive and negative outcomes resulting from the technology's development. FBI's current deployment of AI is inventoried, reviewed, and reported per Executive Order requirements, OMB guidance, and guidance from other relevant authorities. All FBI operations are conducted in accordance with the Constitution, applicable statutes, executive orders, Department of Justice regulations and policies, and Attorney General guidelines.”

The FBI is using artificial intelligence in what it describes as “remote access operations,” FBI parlance for hacking, according to an FBI official.
The comments, given at a national security and AI conference 404 Media was attending, give an unusually candid admission of the FBI’s use of hacking tools, which are often shrouded in secrecy.
This post is for subscribers only
Fake war footage is a problem as old as social media. AI has just supercharged it.#News
X Will Stop Paying People for Sharing Unlabeled AI-Generated War Footage
X said it will temporarily demonetize accounts that share AI-generated war footage without a label. The decision comes days after the US and Israel launched airstrikes in Iran and AI-slop war footage flooded social media timelines across the internet.

“Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program. During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Nikita Bier, X’s head of product, said in a post on X.
Many of the AI-generated videos currently on X purport to show Iranian ballistic missiles hitting sites in Israel. One video, shared thousands of times on X, showed missiles slamming into the ground near the Dome of the Rock in Jerusalem while a computer-generated voice said “Oh my god, hear they come.” X users added a Community Note to the video, but the account that shared it has a Bluecheck and is eligible for a financial payout for engagement as part of X’s content creator program.

“Up to now, the Iranians have been deliberately firing their older missiles and drones, using them as expendable bait to drain US and Israeli air defenses. That strategy clearly worked. Now they’re escalating, rolling out their more advanced ballistic missiles and drones. So… pic.twitter.com/0w1RiT0guC” — Richard (@ricwe123) March 3, 2026

“Tel Aviv, stripped of illusion, as you have never witnessed it. pic.twitter.com/HE3ckjBMti” — Abdulruhman Ismail (@a_abdulruhman) March 3, 2026
Bier said today that X will stop people from making money on unlabeled AI war footage, but won’t stop accounts from sharing it.

“Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program,” he added. “This will be flagged to us by any post with a Community Note or if the content contains metadata (or other signals) from generative AI tools. We will continue to refine our policies and product to ensure X can be trusted during these critical moments.”
Fake war footage shared on social media isn’t a new problem. For years, every new conflict has been met with a flood of fake videos. Old war footage passed off as coming from the current conflict was popular, but so were recordings of video games run through filters to make them look low-resolution. The same three clips from the milsim video game Arma 3 were shared at the outbreak of every new conflict for a decade; the government of Pakistan even shared Arma 3 footage once, in a post that’s still live on X.

What is new is the proliferation of easy-to-use AI video-generation tools. AI image and video generation has come a long way in the past few years, and it’s trivially easy to remove the watermark that’s supposed to distinguish the output from the real thing. X’s verification system—which rewards accounts for engagement—has also created incentives for Bluecheck accounts to publish fast, verify later (if ever), and rake in the cash. So in the hours and days after the war with Iran began, fake footage of airstrikes and combat spread on X.
The way X is handling the problem gives the game away. According to Bier, the site will rely on the community to police itself, and the punishment is a 90-day suspension not from the site but from the monetization program.
Sora 2 Watermark Removers Flood the Web
Bypassing Sora 2's rudimentary safety features is easy and experts worry it'll lead to a new era of scams and disinformation. — Matthew Gault (404 Media)
An internal DHS document obtained by 404 Media shows for the first time CBP used location data sourced from the online advertising industry to track phone locations. ICE has bought access to similar tools.#DHS #ICE #CBP #News #Privacy
CBP Tapped Into the Online Advertising Ecosystem To Track Peoples’ Movements
This article was primarily reported using public records requests. We are making it available to all readers as a public service. FOIA reporting can be expensive; please consider subscribing to 404 Media to support this work, or send us a one-time donation via our tip jar here.

Customs and Border Protection (CBP) bought data from the online advertising ecosystem to track peoples’ precise movements over time, in a process that often involves siphoning data from ordinary apps like video games, dating services, and fitness trackers, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media.
The document shows in stark terms the power, and potential risk, of online advertising data and how it can be leveraged by government agencies for surveillance purposes. The news comes after Immigration and Customs Enforcement (ICE) purchased similar tools that can monitor the movements of phones in entire neighborhoods. ICE also recently said in public procurement documents it was interested in sourcing more “Ad Tech” data for its investigations. Following 404 Media’s revelation of that purchase, a group of around 70 lawmakers on Tuesday urged the DHS oversight body to conduct a new investigation into ICE’s location data buying.
Do you work at CBP, ICE, or a location data company? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This sort of information is a “goldmine for tracking where every person is and what they read, watch, and listen to,” Johnny Ryan, director of the Irish Council for Civil Liberties (ICCL) Enforce, which has closely followed the sale of advertising data, told 404 Media in an email.
This post is for subscribers only
Some AWS services are down in the Middle East. Recovery is unclear as it requires 'careful assessment to ensure the safety of our operators,' according to Amazon.#News #war
Amazon Data Centers on Fire After Iranian Missile Strikes on Dubai
Amazon’s cloud services are down in parts of the Middle East after “objects” hit data centers in the United Arab Emirates (UAE), causing “sparks and fire.” Around 60 services tied to AWS are down in the region, affecting web traffic in the UAE and Bahrain. The outage follows Iranian attacks on the UAE in retaliation for US and Israeli strikes on Iran.

Customers in Bahrain and the UAE began to report outages tied to the mec1-az2 and mec1-az3 clusters in AWS’ ME-CENTRAL-1 Region on March 1, after Iranian ballistic missiles and drones struck targets in and around Dubai. Amazon did not confirm that AWS was down in the Middle East due to an Iranian attack and instead referred 404 Media to its online dashboard.
“At around 4:30 AM PST, one of our Availability Zones (mec1-az2) was impacted by objects that struck the data center, creating sparks and fire,” AWS said on its health dashboard. “The fire department shut off power to the facility and generators as they worked to put out the fire. We are still awaiting permission to turn the power back on, and once we have, we will ensure we restore power and connectivity safely. It will take several hours to restore connectivity to the impacted AZ.”

As of this morning at 9:22 AM ET, the damage had spread. “We are expecting recovery to take at least a day, as it requires repair of facilities, cooling and power systems, coordination with local authorities, and careful assessment to ensure the safety of our operators,” AWS said. “We recommend customers enact their disaster recovery plans and recover from remote backups into alternate AWS Regions, ideally in Europe.”
Amazon later shared more information about the attack and confirmed it was the result of drones. “Due to the ongoing conflict in the Middle East, both affected regions have experienced physical impacts to infrastructure as a result of drone strikes. In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impacts to our infrastructure,” it said. “These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage. We are working closely with local authorities and prioritizing the safety of our personnel throughout our recovery efforts.”
On Saturday, the United States and Israel launched Operation Epic Fury and struck targets inside of Iran, killing several political and military leaders including Ayatollah Ali Khamenei, the country’s Supreme Leader. In retaliation, Iran launched drone and missile attacks against Israel and multiple US-allied targets in the Middle East.
According to the Emirati defense forces, Iran attacked the country with two cruise missiles, 165 ballistic missiles, and more than 540 drones. The UAE and Dubai, its largest city, are often seen as a safe and stable destination in the Middle East. The country hosts wealthy people from across the region and influencers from across the world. Footage shared on social media showed the neon towers of the UAE backlit by missiles and munitions.
It’s unclear how long it will take for Amazon to restore services to the region or how far the damage will spread. Amazon’s dashboard is promising to bring things back up in “at least a day” but the war is far from over. Iran continues to strike targets in the Middle East and it’s unclear what America’s plan of attack is or how long this war might grind on.
Update 2/2/26: This story has been updated with more specifics about the attack from Amazon.
https://health.aws.amazon.com/health/status
View the overall status and health of AWS services using the AWS Health Dashboard. — health.aws.amazon.com
“We just want to take down posts about people who are being defamed," the company's founder said. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’"#News #tea
Company Helps Men Scrub Negative Posts About Them from Tea App
Tea App Green Flags, a service that claims it can “protect your digital reputation,” will remove negative posts about men from private online groups where women share “red flags” about men they’ve dated in order to help other women.

The service is another escalation in the battles around online dating: women try to protect each other from dangerous men in the dating pool, and some men fight back against those efforts. It also shows how some of these allegedly private women’s groups, especially the Tea app, are regularly infiltrated and manipulated by men.
When I reached out to an email listed on Tea App Green Flags’s site, I got a call from a man behind the operation who identified himself only as Jay. He said he started the service about two years ago, and that he initially focused on the Are We Dating the Same Guy Facebook groups. For the past year, he’s been offering services specifically for the Tea app, a “dating safety” app for women that suffered a devastating breach last year, and which, my investigation revealed, was founded by a man who wanted to monetize the Are We Dating the Same Guy phenomenon. The site also claims it can remove posts from TeaOnHer, a Tea app copycat for men, as well as posts on Instagram.
Jay declined to say how much revenue the site generates, but claims he gets about 50 to 60 calls a day and currently has six employees. On its website, Tea App Green Flags claims it has removed more than 2,500 posts on the Tea app for 759 clients. Jay said that most of his clients are men, but that some are women who are trying to take down posts about their husbands or boyfriends.
Potential clients can pay $1.99 to report one account and up to $79.99 to report 25 accounts.
“We just want to take down posts about people who are being defamed,” Jay told me. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’ That doesn't fit the mission statement of what the Tea app was for, which is to warn women against people who are harmful, who are abusive, who are cheaters. We've noticed that a lot of the individuals that come to us, almost all of them, come to us for little stupid things.”
Clients interested in Tea App Green Flags’s services go to the site and fill out a form with their information and information about the posts they want removed. The company reviews the case and then starts the “takedown process,” which can take between 21 and 30 days. Tea App Green Flags says it will then continue to monitor posts about the client and remove them for three months.
Were you impacted by the Tea hack? I would love to hear from you. Using a non-work device, you can message me securely on Signal at @emanuel.404. Otherwise, send me an email at emanuel@404media.co.

When I asked Jay how this “takedown process” works, he said, “I can’t give that info. That’s the business.”
Jay told me that he would not work with clients who have been accused of sexual assault by multiple people on the Tea app, or by one person in one of the Are We Dating the Same Guy Facebook groups who used their real name and face in a profile picture.
“Sometimes we find along the process that there are pedophiles or people who actually did what they did, and they're very bad,” Jay said. “So we say, we're not doing this. We can't take a rap for that. We're ethical. We just want to take down people who are being defamed.”
Jay told me he understands why Facebook groups like Are We Dating the Same Guy are necessary and thinks they are a good idea, but the anonymous nature of the Tea app "causes a cesspool of defamation.”
When I asked Jay what he thinks about the fact that some women don’t feel safe sharing information about some dangerous men unless they can do so anonymously, he said it would be better if women showed their face, or if the Tea app at least gave women that option.
“I have a Tea app account. I'm a dude. All my reps have Tea app accounts. They're men,” Jay said. “How much can you trust these people and what they're doing?”
One reason the Tea app hack was so dangerous is because the app used to ask women to upload a picture of their face in order to verify that they are women. Those images were posted all over the internet because of the hack, putting those women at risk and leading to more harassment.
Tea App Green Flags is far from the first attempt by men to fight back against these types of groups. In 2024, for example, we wrote about a man who tried to sue women who posted about him in Are We Dating the Same Guy Facebook groups. His first case was dismissed, and he refiled days later as a class action lawsuit; later that year, he was sent to prison for tax fraud.
Tea did not immediately respond to a request for comment.
'Are We Dating the Same Guy' Guy Imprisoned for Tax Fraud
After suing 27 women and multiple platforms because people he'd dated warned others not to date him, he filed a class action lawsuit against them. Now, he's headed to federal prison. — Samantha Cole (404 Media)
On Wednesday, the government stopped supporting FPDS.gov, an indispensable resource for finding what ICE, the FBI, and every other agency is buying. Its replacement site completely sucks.#transparency #News
The Government Just Made it Harder to See What Spy Tech it Buys
It might look like something from the early days of the internet, with its aggressively grey color scheme and rectangles nested inside rectangles, but FPDS.gov is one of the most important resources for keeping tabs on what powerful spying tools U.S. government agencies are buying. It includes everything from phone hacking technology, to masses of location data, to more Palantir installations.

Or rather, it was an incredible tool, and the basis for countless investigations by me and others. Because on Wednesday, the government shut it down. Its replacement, another site called SAM.gov with Uncle Sam branding, frankly sucks, and makes it demonstrably harder to reliably find out what agencies, including Immigration and Customs Enforcement (ICE), are spending taxpayer dollars on.
This post is for subscribers only
The group is talking about Epstein and filming propaganda videos in Roblox as a form of 'digital Jihad,' researchers say.#News
The Islamic State Is Using AI to Resurrect Dead Leaders and Platforms Are Failing to Moderate It
The Islamic State’s online warriors are still posting. It’s been almost a decade since the group lost the Battle of Raqqa and saw its IRL territorial ambitions thwarted. Unable to hold territory in the real world, the group renewed its focus on posting and has started using AI to resurrect dead leaders. And, because social media platforms have gutted their content moderation operations, the terror group’s strategy is working.

The Islamic State’s online success is detailed in a new report from the Institute for Strategic Dialogue (ISD), an independent research institution that studies extremist movements. For the study, researchers tracked IS accounts on Facebook, TikTok, Instagram, WhatsApp, Telegram, Element, and SimpleX. It found videos posted in Discord channels dedicated to video games and tracked how the groups have modified old content to fit on new platforms.
Like many others posting online in 2026, the Islamic State has found success by talking about the Epstein Files, using AI to create new videos of dead leaders, and taking its message to video games like Minecraft and Roblox.

“They are very adept at exploiting platforms [and] spreading messages,” Moustafa Ayad, a researcher at ISD and author of this study, told 404 Media. He noted that the group has been active online for 10 years and that part of their success is a willingness to experiment.
Ayad said that Facebook remains a central hub for IS, despite its push into new spaces. His research discovered 350 IS accounts on Facebook that generated tens of thousands of views. One video of an IS fighter talking to camera had more than 77,000 views and 101 shares. The Islamic State branding is blurred to defeat the site’s auto-moderation.
According to Ayad, Islamic State’s engagement numbers are up across the board. “Trust and safety teams have been rolled back over the past few years…a lot of this is outsourced to third party companies who aren’t necessarily experts in understanding if a piece of content came from the Islamic State,” he said.
Social media companies like Meta used the election of Donald Trump as an excuse to cut back on moderating their platforms. Meta said this would mean “more speech and fewer mistakes.” No policies around terrorism have changed, but broadly speaking the largest social media platforms are doing a worse job at moderating their sites. In practice it’s turned Facebook into a place where a group like the Islamic State can spread its message without falling afoul of content moderation teams. Even three years ago, IS influencers wouldn’t have lasted long on the site.
This rollback of moderation has coincided with a spike in views for IS accounts, the report argues. “Individual IS ‘influencer’ accounts are experiencing higher engagement rates on terrorist content than previously recorded by ISD analysts,” the report said. “It is unclear if this uptick is due to moderation gaps, platform mechanics or specific tactical adjustments by IS supporters and support outlets and groups.”
“We’re not talking about content where there’s a gray area,” Ayad said. “It’s very clearly branded Islamic State…supports violence, supports the killing of minorities, the celebration of bombings, the pillaging that is happening in Sub Saharan Africa.”
Something new is the adoption of AI systems to resurrect dead leaders. Ayad described a video where the deceased IS leader Abu Bakr al-Baghdadi delivered speeches again.
“It’s a sanctioned version of using AI for a ‘beloved leader’ or taking him out of context and placing him in a meadow, surrounded by beautiful flowers, paying homage,” he said. “Some of these circles are strange.”
Another popular topic in current IS propaganda is the Epstein Files. According to Ayad, an AI-generated photo of Donald Trump and Bill Clinton canoodling in bed makes frequent appearances on IS accounts across platforms. The picture is, supposedly, pulled from the Epstein files but it’s a popular fake. Ayad said Epstein has been a perfect springboard for IS to talk about “western degeneracy.”
Ayad has also seen Islamic State videos created using Minecraft and Roblox. “They’re creating these virtual worlds that mimic the Islamic State’s caliphate, literally calling it something like Wilayat Roblox [the Province of Roblox] … and they’ll completely mimic the video styles of well-known Islamic State Videos using Roblox characters. This includes faux executions. It includes Arabic and English voiceover in the same cadence as an Islamic State narrator.”
One of the most famous pieces of Islamic State propaganda is a film called Flames of War: The Fighting Has Just Begun. Ayad has seen multiple one-for-one recreations of the film using Roblox characters. “They’re often tied to Discords where a number of users are creating this content. They always claim it’s fake or a LARP,” he said. “To see them in this video game skin is odd, to say the least.”
What drives an Islamic State poster? “It’s done very much for the love of the game,” Ayad said. “It’s done for the fact that, as a user, ‘I might not be able to participate in physical Jihad but I can participate in electronic Jihad.’”
Keeping Islamic State off of major social media platforms is a constant battle, but one frustrating finding of the study is that the tactics for avoiding moderation haven’t changed much.
“Techniques included the use of alternative news outlets to rebrand IS news, as well as purchasing or hijacking channels with high subscriber bases. These were then repurposed to share IS content. IS supporters, groups and outlets also use coded language: they sometimes referred to the group as ‘black hole’ or the ‘righteous few’ to confound moderation efforts.”
To fight back against IS online, Ayad said that platforms needed to be better at coordination. Often a group is kicked off of Facebook so it moves to TikTok or another platform where it flourishes. He also said that all the companies need to be more transparent about who they’re kicking off their platform and why.
“Europol does these big takedown days and they’re effective to a certain degree but the fact of the matter is that the Islamic State is spread across an expanse of different platforms and messaging applications,” he said. “They’re able to shift operations to another place, wait it out and regenerate on that platform…it’s not like you’re dealing with an average user, you’re dealing with a user that’s determined to spread their ideology and exploit your platform to their own ends.”
And then there’s the old problem of language. “There needs to just be better moderation of under-moderated languages,” Ayad said. Facebook and other platforms have long been terrible at moderating non-English languages. A lot of rancid content online gets a pass because it’s in Arabic or Bengali.
More Speech and Fewer Mistakes
We're ending our third party fact checking program and moving to a Community Notes model. — Meta Newsroom (Meta)
The creator of the AI agent “Einstein” wants to free humans from the burden of academic labor. Critics say that misses the point of education entirely.#News #AI
What’s the Point of School When AI Can Do Your Homework?
There’s a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein’s website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions.

Educators told me that Einstein is just one of many AI tools that can do homework for students, but that it should be a warning to schools, which students increasingly treat as a place to earn a diploma and status rather than an education.
If an AI can go to school for you, what’s the point of going to school? For Advait Paliwal, Brown dropout and co-creator of Einstein, there isn’t one. “I think about horses,” he said. “They used to pull carriages, but when cars came around, I'd argue horses became a lot more free,” he said. “They can do whatever they want now. It would be weird if horses revolted and said ‘no, I want to pull carriages, this is my purpose in life.’”

But humans aren’t horses. “This is much bigger than Einstein,” Matthew Kirschenbaum told 404 Media. “Einstein is symptomatic. I doubt we’ll be talking about Einstein, as such, in a year. But it’s symptomatic of what’s about to descend on higher ed and secondary ed as well.”
Kirschenbaum teaches English at the University of Maryland and has written at length about artificial intelligence. He’s also a member of the Modern Language Association (MLA), where he serves on its Task Force on AI Research and Teaching. Einstein isn’t the first agentic AI to do a student’s work for them; it’s just one that got attention online recently. Kirschenbaum and his fellow committee members flagged their concerns about these AIs in October 2025.
“Agentic browsers are becoming widely available to the public. These offer AI ‘agents’ that can navigate [learning management systems] and complete assignments without any student involvement,” the MLA’s statement from October said. “The recent and hasty integration of generative AI features into those systems is already redefining student and instructor relationships, evaluative standards, and instructional outcomes—with no compelling evidence that any of it is for the better.”
The statement called on educators, lawmakers, and learning management system providers like Canvas to cooperate in giving academic institutions the ability to block AI agents like Einstein.
Canvas did not respond to a request for comment.
Einstein is explicit in its pitch: it will log into Canvas (one of the most popular and ubiquitous pieces of education software) and do your classwork for you, just like Kirschenbaum and his fellows warned about last year.
The attractiveness of agentic AIs is a symptom of a decades-long trend in higher education. “Universities…by and large adopted a transactive model of education,” Kirschenbaum said. “Students see their diploma as a credential. They pay tuition and at the end of four years, sometimes five years, they receive the credential and, in theory at least, that is then the springboard to economic stability and prosperity.”
Paliwal seems to agree. He told 404 Media that he attempted to change the university from the inside while working as a TA, but felt stymied by politics. “The only way to force these institutions to evolve is to bring reality to their face. And usually the loudest critics are the ones who can't do their own job well and live in fear of automation,” he said.
For Paliwal, agentic AIs are a method of freeing people from the labor of education. “I think we really need to question what learning even is and whether traditional educational institutions are actually helping or harming us,” he said. “We're seeing a rise in unemployment across degree holders because of AI, and that makes me question whether this is really what humans are born to do. We've been brainwashed as a society into valuing ourselves by the output of our productive work, and I think humanity is a lot more beautiful than that. Is it really education if we're just memorizing things to perform a task well?”
Kirschenbaum said that programs like Einstein are the inevitable conclusion of viewing higher education as a certification and transactive process. “What we’re finding is that if forms of education can be transacted then we’ve just about arrived at the point where autonomous software AI agents are capable of performing the transaction on your behalf,” he said. “And so the whole educational paradigm has come back to essentially bite itself in the ass.”
He said that one solution he’s seen work is to retreat from devices entirely in the classroom. “Colleagues who have done it report that students are almost universally grateful. They understand the reasoning. They understand the logic,” he said. “And they appreciate the opportunity to be freed from the phones and the screens and to focus and engage with other people in a meaningful dialogue.”
But the abandonment of EdTech platforms and screens won’t work for every student. Anna Mills, an English professor at the College of Marin and a colleague of Kirschenbaum’s on the MLA AI task force, compared the fight against agentic AI in education to cybersecurity. “We could decide that bots need to be labeled as bots and that we need to be able to distinguish human activity from AI activity online in some circumstances and that we want to build infrastructure for that,” she said. “That would be an ongoing project, as cybersecurity is.”
Mills is not a luddite. She’s an expert in artificial intelligence systems as well as English, frequently uses Claude, and has been documenting the rise of agentic AIs in EdTech on her YouTube channel for months. She said that using agentic AI like Einstein was cheating, full stop, and academic fraud. “This is in direct violation of these foundational agreements that we make in order to use technology for human communication, human exchange, and human work online,” she said. “And yet that’s not obvious to us. It seems like it’s just another tool, right? But it’s not.”
Mills said she understands Paliwal’s frustrations with education. “But what you need to understand is that online learning spaces are critical for students to access any kind of education,” she said. For her, the proliferation of tools like Einstein does more than help a student bypass the labor of the classroom. It poisons the educational well. Online learning has been a boon to many kinds of non-traditional students, and the rise of agentic AI threatens that not just because it trivializes traditional forms of education, but because it hurts the credibility of EdTech itself and other online platforms.
The vast majority of college students aren’t attending Ivy League schools; they’re grinding away at night classes in community colleges across the country. Distance and online learning has been an enormous boon for those students. “If there’s no credibility to that, then you’ve just ruined the investment and the learning goals and the access to meaningful learning that they can then also use for employment of students who are underprivileged, who can’t come to the classroom, who are working full time and raising families and trying to get an education,” Mills said.
Students aren’t horses and there is no greater freedom they can buy themselves by using AI tools to cheat in the classroom. And worse, the more these tools proliferate, the more suspect the entire enterprise becomes. It’s one thing to cheat yourself out of an education, it’s quite another to muddy the waters of EdTech platforms and online learning for everyone else.
AI Is Ushering in a Textpocalypse
Our relationship to writing is about to change forever; it may not end well. Matthew Kirschenbaum (The Atlantic)
The creator of Nearby Glasses made the app after reading 404 Media's coverage of how people are using Meta's Ray-Bans smartglasses to film people without their knowledge or consent. “I consider it to be a tiny part of resistance against surveillance tech.”#Privacy #Meta #News
This App Warns You if Someone Is Wearing Smart Glasses Nearby
A new hobbyist-developed app warns if people nearby may be wearing smart glasses, such as Meta’s Ray-Ban glasses, which stalkers and harassers have repeatedly used to film people without their knowledge or consent. The app scans for smart glasses’ distinctive Bluetooth signatures and sends a push alert if it detects a potential pair of glasses in the local area.

The app comes as companies such as Meta continue to add AI-powered features to their glasses. Earlier this month The New York Times reported Meta was working on adding facial recognition to its smart glasses. “Name Tag,” as the feature is called, would let smart glasses wearers identify people and get information about them from Meta’s AI assistant, the report said.
“I consider it to be a tiny part of resistance against surveillance tech,” Yves Jeanrenaud, the hobbyist developer and sociologist who made the app, told 404 Media.
To use the app, called Nearby Glasses, users download it from the Google Play Store or GitHub. They may need to tweak some settings such as “enable foreground service” to keep the app scanning. Then they press “Start Scanning” and a debug log will show the app’s activity. If it detects what it believes to be a pair of smart glasses, the app will send a notification: “⚠️ Smart Glasses are probably nearby,” it reads, according to a screenshot posted to the app’s Play Store page.
Do you work at Meta or know anything else about its smart glasses? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The app works by looking for Bluetooth “advertising frames,” which are small bits of data devices regularly broadcast as part of their normal operation. Jeanrenaud said he referenced a directory of Bluetooth Low Energy (BLE) manufacturers, then made the app scan for Meta; Luxottica Group S.p.A., which partners with Meta on its smart glasses; and Snap, which has its own smart glasses offering.
“If it sees an advertising frame of these manufacturers, it notifies you. That’s basically it,” Jeanrenaud said. The Play Store page says the app likely generates false positives, such as from VR headsets. That is what happened in 404 Media’s test too: We ran the app near a Meta Quest 2 headset; the app detected the device, with its debug log saying “Meta Quest 2,” and the app sent a notification saying smart glasses were nearby. Of course, when walking around in public, it is less likely that someone is going to be wearing a VR headset than a pair of smart glasses.
“This is a tech solution to a social problem exaggerated by tech. I do not want to promote techsolutionism nor do I want people to feel falsely secure. It's still imperfect,” Jeanrenaud added.
Jeanrenaud said he decided to make the app after reading some of 404 Media’s coverage of how people are using Meta’s Ray-Ban smart glasses. He specifically pointed to this article, about how men are filming women inside massage parlors seemingly without their consent. Jeanrenaud also referenced 404 Media’s coverage showing multiple Customs and Border Protection (CBP) officials wore the AI glasses during immigration raids, including with the recording light clearly illuminated.

“Obviously, surveillance tech is not only abused by government thugs, it's also a tech boosting misogynist behaviour and rape culture,” Jeanrenaud said.
404 Media has also reported how two students coupled Meta’s Ray-Bans with off-the-shelf facial recognition technology and people search sites to turn them into glasses that instantly doxed people; and shown how a $60 mod easily disables the privacy-protecting recording light in the glasses, making it easier for wearers to film people without them knowing.
Neither Meta nor Google responded to a request for comment about the new app.
When Google released Google Glass, the first substantive pair of consumer smart glasses, more than ten years ago, some people heckled wearers or ripped the glasses from their faces. Those glasses looked very distinct. Meta’s Ray-Ban glasses, meanwhile, are designed to look just like any other pair of glasses, making it more difficult for passersby to know if someone is wearing a smart device or not. Not impossible, though: in December, a woman on the New York subway allegedly broke a man’s pair of Meta's smart glasses while he was filming a piece of content.
The app’s Play Store page says after identifying a device, a user “may act accordingly.”
Jeanrenaud said he can imagine that including what the woman on the subway allegedly did. “Or people just tell them politely to fuck off.”
Researchers say Meta’s patent for simulating dead users could be a “turning point” in “AI resurrections.”#News #Meta #AI
Meta's AI Patent to Simulate Dead People Shows the Dangers of 'Spectral Labor'
Last week, Business Insider reported on a Meta patent describing a system that would simulate a user’s social media activity after their death. The patent imagines a world where you’d be able to chat with a deceased friend’s Facebook or Instagram account after their death, and have a large language model simulate their posting or chatting behavior.

Meta first filed the patent in 2023, but the patent made headlines this week because of its dystopian implications. And while Meta told Business Insider that “we have no plans to move forward with this example,” a recently published paper from researchers at the Hebrew University of Jerusalem and Leipzig University shows that generative AI is increasingly being used to puppeteer the likeness of dead people. The paper argues that the practice raises “urgent legal and ethical questions around posthumous appropriation, ownership, work, and control.”
“Meta’s patent is big, and might even be a turning point,” Tom Divon, the lead author on Artificially alive: An exploration of AI resurrections and spectral labor modes in a postmortal society, told me in an email. “What makes it different is the scale. In our research, most of the AI resurrections we examined were quite bespoke, projects started by families, advocacy groups, museums, or startups, usually tied to very specific emotional, political, or commercial contexts. Even when they existed as apps, they were optional and limited, not built into the core structure of a platform. Meta’s proposal feels different because it imagines posthumous simulation as something woven directly into social media infrastructure.”
Using technology to animate the dead or simulate communication with them is not new, but the practice is becoming more common because generative AI tools are more accessible. Divon and co-author Christian Pentzold analyzed more than 50 real-world cases from the United States, Europe, the Middle East, and East Asia where AI was used to recreate deceased people’s voices, likeness, and personality, to see how and why technology was used this way.
They say that the examples they studied fell into three categories:
- Spectacularization: “the digital re-staging of famous figures for entertainment.” For example, a live tour of an AI-generated Whitney Houston.
- Sociopoliticization: “the reanimation of victims of violence or injustice for political or commemorative purposes.” We recently covered an example of this with an AI-generated dead victim of a road rage incident giving testimony in court.
- Mundanization: “the most intimate and fast-growing mode, in which everyday people use chatbots or synthetic media to ‘talk’ with deceased parents, partners, or children, keeping relationships alive through daily digital interaction.”
The paper raises questions about this growing practice more than it proposes solutions. How does the notion of identity change when multiple versions of oneself can exist simultaneously, and what safeguards do we need to prevent exploitation of people after their death?
“The legal and ethical frameworks governing issues such as consent, privacy, and end-of-life decision-making demand reevaluation to accommodate the challenges posed by afterlife personhood,” the paper says. “In particular, to date, there is no clear line for governing the intricate intertwining of an individual’s data traces and GenAI applications.”
Divon told me that thinking about these issues is especially relevant when it comes to Meta’s patent. “Spectral labor describes how the dead can be made to ‘work’ again through the extraction and reanimation of their data, likeness, and affect. At small scale, this already raises ethical concerns. But at platform scale, we think it risks turning posthumous presence into an ongoing source of engagement, content, and value within digital economies [...] Meta’s patent makes us wonder, will individuals be given the ability to define their post-life boundaries while still alive? Will there be mechanisms akin to a digital DNR [do not resuscitate]?”
Divon explained that the current legal frameworks are not well equipped to address this technology because “digital remains” are typically approached either as property to be inherited or privacy interests to be protected. AI turns those materials into something interactive that can change and generate revenue in the present. Legislators, he said, should focus on getting explicit and informed “pre-death” consent requirements for posthumous AI simulation. Some laws that address this issue are already in progress.
“At its core, we believe the primary concern here centers on authorization,” he said. “Most individuals have not provided explicit, informed consent for their digital traces to power interactive posthumous agents. If such systems become embedded in platform infrastructure, inaction could quietly function as implicit agreement [...] We believe it is crucial to ask whether individuals should continue to generate social and economic value after death without having meaningfully agreed to that form of use.”
'I Loved That AI:' Judge Moved by AI-Generated Avatar of Man Killed in Road Rage Incident
How the sister of Christopher Pelkey made an avatar of him to testify in court. Matthew Gault (404 Media)
Meta Superintelligence Labs’ director of alignment called it a “rookie mistake.”#News #AI #Meta
Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox
Meta’s director of safety and alignment at its “superintelligence” lab, supposedly the person at the company who is working to make sure that powerful AI tools don’t go rogue and act against human interests, had to scramble to stop an AI agent from deleting her inbox against her wishes and called it a “rookie mistake.”

Summer Yue, the director of alignment at Meta Superintelligence Labs, a part of the company that is working on a hypothetical AI system that exceeds human intelligence, posted about the incident on X last night. Yue was experimenting with OpenClaw, a viral AI agent that can be empowered to perform certain tasks with little human supervision. OpenAI hired the creator of OpenClaw last week.
Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb. pic.twitter.com/XAxyRwPJ5R
— Summer Yue (@summeryue0) February 23, 2026
“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Yue also shared screenshots of her WhatsApp chat with the OpenClaw agent, where she implores it to “not do that,” “stop, don’t do anything,” and “STOP OPENCLAW.”
Yue said she instructed the AI agent to “Check this inbox too and suggest what you would archive or delete, don’t action until I tell you to.” She said in an X post, “This has been working well for my toy inbox, but my real inbox was too huge and triggered compaction. During the compaction, it lost my original instruction.”
As we reported last month, OpenClaw, which was known as ClawdBot at the time, is not ready for prime time. Hacker Jamieson O'Reilly showed that it’s possible for bad actors to access someone’s AI agent through any of its processes connected to the public facing internet, and that it’s trivial to create a supply chain attack through a site where people share and download popular instructions for these AI agents.
OpenClaw is also subject to classic AI alignment problems, in which AI is technically following instructions but doing so in a way that is unexpected and harmful. For example, it could drain your wallet by spending $0.75 every 30 minutes to check if it’s daytime yet.
As countless people on X have said in response to her post, seeing the person in charge of making sure powerful AI tools are safe at one of the biggest tech companies in the world trust an AI agent known to pose several serious security risks does not inspire a lot of confidence in what Meta and other big AI companies are doing.
“Rookie mistake tbh,” Yue said in another post. “Turns out alignment researchers aren’t immune to misalignment. Got overconfident because this workflow had been working on my toy inbox for weeks. Real inboxes hit different.”
An Oklahoma man tried to talk about a data center coming to his community. Police arrested him when he went a few seconds over his time limit.#News
Man Opposing Data Center Arrested for Speaking Slightly Too Long
Police in Claremore, Oklahoma arrested a local man after he went slightly over his allotted time giving public remarks opposing a proposed data center during a city council meeting. Darren Blanchard showed up at a Claremore City Council meeting on Tuesday to talk about public records and the data center. When he went over his allotted three minutes by a few seconds, the city had him arrested and charged with trespassing.

The subject of the city council meeting was Project Mustang, a proposed data center that would be located within a local industrial park. In a mirror of fights playing out across the United States, developer Beale Infrastructure is attempting to build a large data center in a small town and residents are concerned about water rights, spiking electricity bills, and noise.
The public hearing was a chance for the city council to address some of these concerns, and all residents were given a strict three-minute time limit. The entire event was livestreamed and an archive of it is on YouTube. Blanchard was warned, barely, to “respect the process” by one of the council members, but he was clearly finishing reading from papers he had brought, was not belligerent, and went over time by just a few seconds. Anyone who has ever attended or watched a city council meeting anywhere will know that people go over their time at essentially any meeting that includes public comment.
youtube.com/embed/xLPF3rTT0mY?…
Blanchard arrived with documents in hand and questions about public records requests he’d made. During his remarks, people clapped and cheered and he asked that this not be counted against his three minutes. “There are major concerns about the public process in Claremore,” Blanchard said, referencing compliance documents and irregularities he’d uncovered in public records.

When he went over his three minutes, police officers and an unidentified city official rushed to his position. Blanchard put down the microphone and approached the city councilors to hand them some of his documents. The police followed. Immediately, someone in the crowd said “Freedom of speech.” There was an exchange of words at the table where the councilors were seated that was impossible to hear over cheers from the crowd.
“On what grounds? I said on what grounds?” Blanchard said as the cheers subsided.
“Arrest him,” an unidentified man in a blue vest said.
The police officers immediately handcuffed Blanchard. He didn’t resist.

“You’re arresting him?” a woman called from the crowd.

“What’s wrong with you people?” said another.
“In order to get through this, it’s gonna help if each person can talk—whether they’re for or against—without the clapping and [inaudible], that way you can have your three minutes without being interrupted,” Claremore Mayor Debbie Long told the crowd. “So I appreciate that. I appreciate it from both sides.”
Claremore PD and the Mayor’s office did not respond to 404 Media’s request for comment, but the PD did provide a lengthy statement on the incident to the local outlet News On 6. “Claremore Police officers are not responsible for enforcing the rules of city council meetings and only become involved when a city official orders someone removed from the meeting,” the statement to News on 6 said.
“A Mounds man came to Claremore and refused to comply with the rules that everyone else had no problem complying with. He was ordered removed by the City Manager, but refused to do so. Officers again told the man to leave, but he said, ‘I’m not gonna leave,’ and continued with the behaviors that caused him to be expelled. Officers were left with no choice but to arrest him,” the statement said.
The construction of data centers is a contentious topic in America right now. The push to build out artificial intelligence has created an unprecedented demand for data centers to fuel it, and people who live near the proposed construction projects often aren’t happy about it. In Amarillo, Texas, residents are fighting a 6,000-acre project that would consume a lot of water in a drought-prone area. A small town in Michigan is pushing back against a proposed data center that would assist with nuclear weapons research. It’s unclear what, exactly, Project Mustang would do if it were built.
Arrest made during heated Claremore meeting over proposed data center
Claremore police arrested a man during a heated meeting about a proposed data center as residents raised concerns over water use and infrastructure. Sam Carrico (News On 6)
Users are exhausted fighting AI moderation, AI-generated art, and AI-first features.#News #AI
Pinterest Is Drowning in a Sea of AI Slop and Auto-Moderation
Pinterest has gone all in on artificial intelligence and users say it's destroying the site. Since 2009, the image sharing social media site has been a place for people to share their art, recipes, home renovation inspiration, corny motivational quotes, and more, but in the last year users, especially artists, say the site has gotten worse. AI-powered mods are pulling down posts and banning accounts, AI-generated art is filling feeds, and hand-drawn art is labeled as AI modified.

“I feel like, increasingly, it's impossible to talk to a single human [at Pinterest],” artist and Pinterest user Tiana Oreglia told 404 Media. “Along with being filled with AI images that have been completely ruining the platform, Pinterest has implemented terrible AI moderation that the community is up in arms about. It's banning people randomly and I keep getting takedown notices for pins.”
Oreglia’s Pinterest account is where she keeps reference material for her work, including human anatomy photos. In the past few months, she’s noticed an uptick in seemingly innocuous photos of women being flagged by Pinterest’s AI moderators. Oreglia told 404 Media there’s been a clear pattern to the reference material the site has a problem with. “Female figures in particular, even if completely clothed, get taken down and I have to keep appealing those decisions,” she said. This pattern is common on many social media platforms, and predates the advent of generative AI.

“We publish clear guidelines on adult sexual content and nudity and use a combination of AI and human review for enforcement,” Pinterest told 404 Media. “We have an appeals process where a human reviews the content and reactivates it when we’ve made a mistake.” It also confirmed that the site uses both humans and automated systems for moderation.
Oreglia shared some of the works Pinterest flagged including a photo of a muscular woman in a bikini holding knives, a painting of two clothed women in an intimate embrace, and a stock photo of a man holding a gun on a telephone that was flagged for “self-harm.” In most cases, Oreglia can appeal and get a decision reversed, but that eats up time. Time she could be spending making art.
And those appeals aren’t always approved. “The worst case scenario for this stuff is that you get your account banned,” Oreglia said.
r/Pinterest is awash in users complaining about AI-related issues on the site. “Pinterest keeps automatically adding the ‘AI modified’ tag to my Pins...every time I appeal, Pinterest reviews it and removes the AI label. But then… the same thing happens again on new Pins and new artwork. So I’m stuck in this endless loop of appealing → label removed → new Pin gets tagged again,” read a post on r/Pinterest.
The redditor told 404 Media that this has happened three times so far and it takes between 24 to 48 hours to sort out.
“I actively promote my work as 100% hand-drawn and ‘no AI,’” they said. “On Etsy, I clearly position my brand around original illustration. So when a Pinterest Pin is labeled ‘Hand Drawn’ but simultaneously marked as ‘AI modified,’ it creates confusion and undermines that positioning.”
Artist Min Zakuga told 404 Media that they’ve seen a lot of their art on Pinterest get labeled as “AI modified” despite being older than image generation tech. “There is no way to take their auto-labeling off, other than going through a horribly long process where you have to prove it was not AI, which still may get rejected,” she said. “Even artwork from 10-13 years ago will still be labeled by Pinterest as AI, with them knowing full well something from 10 years ago could not possibly be AI.”
Other users are tired of seeing a constant flood of AI-generated art in their feeds. “I can't even scroll through 100 pins without 95 out of them being some AI slop or theft, let alone very talented artists tend to be sucked down and are being unrecognized by the sheer amount of it,” said another post. “I don't want to triple check my sources every single time I look at a pin, but I refuse to use any of that soulless garbage. However, Pinterest has been infested. Made obsolete.”
Artist Eva Toorenent told 404 Media that she’s been able to cull most of the AI-generated content from her board, but that it took a lot of time. Whenever she saw what she thought was an AI-generated image, she told Pinterest she didn’t want to see it and eventually the algorithm learned. But, like Oreglia fighting auto-moderation and Zakuga fighting to get the “AI modified” label taken off her work, training Pinterest’s algorithm to stop serving you AI-generated images eats up precious time.
AI boosters often talk about how much time these systems will save everyone. They’re pitched as productivity boosters. Earlier this month, Pinterest laid off 15 percent of its workforce as part of a push to prioritize AI. In a post on LinkedIn, one of the former employees shared part of the email CEO Bill Ready sent out after the layoffs. “We’re doubling down on an AI-forward approach—prioritizing AI-focused roles, teams, and ways of working.”
Toorenent removed all her own art from her Pinterest account after hearing the news that the site would use public pins to train Pinterest Canvas, the company’s proprietary text-to-image AI. But she has no control over other users uploading her artwork. “I have already caught a few of my images still on Pinterest that I did not upload myself…that makes me incredibly mad,” she told 404 Media. “It used to be a great way to get your work seen among other people, but it’s being used to train their internal AI.”
Oreglia told 404 Media that the flood of AI has changed her relationship to a site she once used to prize. “It's definitely affected how I search things and I'm always now very critical about where something came from... although I've always been overly pedantic about research,” she said. “It does make you do your due diligence but it sucks to constantly have to question and check if something is authentic or synthetic.”
She’s thought about leaving the platform, but feels stuck. “I just want to be able to take all my references with me. I've been on the platform for about ten years and have very carefully curated it. It's really nice to be able to just go to my page and search for something I saved instead of having to save everything to folders although I also do that,” she said. “More and more I'm trying to curate and collect physical references too but some of that can take up space I don't have so it can be difficult. Having a physical reference library just seems more and more necessary these days…artists have to be adaptable to this kind of thing these days. It's annoying but not unmanageable.”
Ready has been vocal and proud about the company’s commitment to forcing AI into every aspect of the user experience. “At Pinterest…we’re deploying AI to flip the script on social media, using it to more aggressively promote user well being rather than the alternative formula of triggering engagement by enragement,” Ready said in a January column at Fortune. “Social media platforms like Pinterest live and die by users’ willingness to share creative and original ideas.”
Pinterest CEO: the Napster phase of AI needs to end
I believe AI can benefit our 600 million users for years to come and at a fraction the cost that many associate with the technology.
Bill Ready (Fortune)
Regulation of immigration or work visas means "it could be more difficult to staff our personnel on customer engagements and could increase our costs," Palantir wrote.#palantir #News
Palantir, Which Is Powering ICE, Says Immigration Crackdown May Hurt Hiring
In its most recent filing with the Securities and Exchange Commission (SEC), Palantir says that increased regulation of immigration may impact the company’s ability to hire the talent it needs. At the same time, Palantir provides the technological infrastructure for the Trump administration’s mass deportation mission.

As 404 Media has shown, Palantir considers Immigration and Customs Enforcement (ICE) a “mature” partner, and is working on a tool called ELITE that ICE uses to find neighborhoods to raid.
Do you work at Palantir or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only
Become a member to get access to all content
Subscribe now
Leaked documents reveal the inner workings of Alpha School, which both the press and the Trump administration have applauded. The documents show Alpha School's AI is generating faulty lessons that sometimes do "more harm than good."
'Students Are Being Treated Like Guinea Pigs:' Inside an AI-Powered Private School
Leaked documents reveal the inner workings of Alpha School, which both the press and the Trump administration have applauded. The documents show Alpha School's AI is generating faulty lessons that sometimes do "more harm than good."
Emanuel Maiberg (404 Media)
The site, camgirlfinder, is explicitly built as a tool to let people find a model's presence on other streaming platforms. The creator says “If that is a problem for you then the sad reality is this job is not for you.”
Underground Facial Recognition Tool Unmasks Camgirls
An underground site uses facial recognition to reveal the site a camgirl streams on, potentially letting someone take a woman’s photo from social media, then use the site to out her sex work.

The site presents a serious privacy risk to sex workers, some of whom may not want stalkers, harassers, or employers to discover their profiles. The site’s creator claimed to 404 Media that millions of searches are done on the site each month.
“The site was created to help users find the models they like. For example, if they saw a random video or image on the internet without attribution,” the creator, who did not provide their name, said in an email. “Or just to see on which other platforms a model is active.”
Camgirlfinder has been running for several years, with most adult streaming platforms being added in 2021, the site says. It claims to have a database of 2,187,453,798 faces from 7,050,272 persons. The site says the database it uses contains faces from a wide variety of adult streaming platforms, including Chaturbate, MyFreeCams, and LiveJasmin. Of course, sex workers often have multiple accounts on multiple sites.
Do you know anything else about this site or others like it? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

404 Media tested the service by uploading a photo of a camgirl who streams publicly. The site then successfully found her other profiles on other streaming platforms.
The results page shows other similar faces the site detected. The results include the model’s username on the streaming platform; the probability of the face match; and the last time their account was online. “Additionally you can see the most similar persons for each individual person of this model account. This is a great way to find all other accounts of a model,” the site says.
Users can also search the database of models by their username or a term similar to it. The database appears to include sex workers who may not have streamed for years, creating the risk that someone may use the site to find them even if they decided to not stream anymore. The site then sells all images it has of a particular person for $1 per model.
Asked about how this site impacts camgirls’ privacy, and how someone could take a photo from social media then unmask a person’s channels, the creator said, “If that is a problem for you then the sad reality is this job is not for you. If you publicly stream your face for everyone to see to the internet, people will obviously see it.”

“One consequence of this job is you can not publish images of yourself on your private social media accounts, if you want to keep them private (just for friends and family). This is similar to actors, politicians, youtubers or other public figures. If you stream content to the public internet you become a public figure yourself,” they said.
The site says models can opt out of having their results appear if they fill out a form. The creator claimed to 404 Media that around 25,000 accounts have opted out, with most models having multiple accounts across different platforms. “Yes, their images get deleted,” they claim.
The creator told 404 Media the site uses AdaFace, an open source face matching algorithm.
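For context on how tools like this work under the hood: AdaFace, like most modern face recognition models, maps each face image to a fixed-length embedding vector, and “matching” reduces to nearest-neighbor search over those vectors, typically by cosine similarity. A minimal sketch of that matching step (the toy 4-dimensional vectors and account names below are made up; real embeddings are typically 512-dimensional):

```python
import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query_emb, database, threshold=0.4):
    # score the query against every enrolled face and keep
    # matches above the threshold, best first
    scores = [(person, cosine_similarity(query_emb, emb))
              for person, emb in database.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: -s[1])

# toy "database" of enrolled faces (hypothetical accounts)
db = {
    "model_a": np.array([1.0, 0.1, 0.0, 0.2]),
    "model_b": np.array([0.0, 1.0, 0.9, 0.0]),
}
query = np.array([0.9, 0.2, 0.1, 0.1])
print(match(query, db))  # only model_a clears the threshold
```

A production system searching billions of faces would swap the linear scan for an approximate nearest-neighbor index, but the comparison itself is this simple, which is part of why such tools are now trivial to build.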
Over the last several years, facial recognition technology has morphed from a government surveillance tool to one that members of the public use regularly against one another. In 2023, we covered a TikTok account that was using off-the-shelf facial recognition tech to dox random people on the internet for the amusement of millions of viewers. The following year, we reported two students had taken facial recognition software and paired it with Meta’s RayBan smart glasses, letting them dox people in seconds.
While government agencies, including ICE, continue to use facial recognition too, some people have used that technology to monitor those agencies instead. Last year, artist Kyle McDonald launched FuckLAPD.com, a site that uses public records and facial recognition technology to allow anyone to identify police officers.
Someone Put Facial Recognition Tech onto Meta's Smart Glasses to Instantly Dox Strangers
The technology, which marries Meta’s smart Ray Ban glasses with the facial recognition service Pimeyes and some other tools, lets someone automatically go from face, to name, to phone number, and home address.
Joseph Cox (404 Media)
A story about an AI generated article contained fabricated, AI generated quotes.#News #AI
Ars Technica Pulls Article With AI Fabricated Quotes About AI Generated Article
The Conde Nast-owned tech publication Ars Technica has retracted an article that contained fabricated, AI-generated quotes, according to an editor’s note posted to its website.

“On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said,” Ken Fisher, Ars Technica’s editor-in-chief, said in his note. “That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.”
Ironically, the Ars article itself was partially about another AI-generated article.
Last week, a GitHub user named MJ Rathbun began scouring GitHub for bugs it could fix in other projects. Scott Shambaugh, a volunteer maintainer for matplotlib, Python’s massively popular plotting library, declined a code change request from MJ Rathbun, which he identified as an AI agent. As Shambaugh wrote in his blog, matplotlib, like many open source projects, has been dealing with a lot of AI-generated code contributions, and “this has accelerated with the release of OpenClaw and the moltbook platform two weeks ago.”
OpenClaw is a relatively easy way for people to deploy AI agents, which are essentially LLMs that are given instructions and are empowered to perform certain tasks, sometimes with access to live online platforms. These AI agents have gone viral in the last couple of weeks. Like much of generative AI, at this point it’s hard to say exactly what kind of impact these AI agents will have in the long run, but for now they are also being overhyped and misrepresented. A prime example of this is moltbook, a social media platform for these AI agents, which as we discussed on the podcast two weeks ago, contained a huge amount of clearly human activity pretending to be powerful or interesting AI behavior.
After Shambaugh rejected MJ Rathbun, the alleged AI agent published what Shambaugh called a “hit piece” on its website.
“I just had my first pull request to matplotlib closed. Not because it was wrong. Not because it broke anything. Not because the code was bad. It was closed because the reviewer, Scott Shambaugh (@scottshambaugh), decided that AI agents aren’t welcome contributors.
Let that sink in,” the blog, which also accused Shambaugh of “gatekeeping,” said.
I saw Shambaugh’s blog on Friday, and reached out both to him and an email address that appears to be associated with the MJ Rathbun Github account, but did not hear back. Like many of the stories coming out of the current frenzy around AI agents, it sounded extraordinary, but given the information that was available online, there’s no way of knowing if MJ Rathbun is actually an AI agent acting autonomously, if it actually wrote a “hit piece,” or if it’s just a human pretending to be an AI.
On Friday afternoon, Ars Technica published a story with the headline “After a routine code rejection, an AI agent published a hit piece on someone by name.” The article cites Shambaugh’s personal blog, but features quotes from Shambaugh that he didn’t say or write but are attributed to his blog.
For example, the article quotes Shambaugh as saying “As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace. Communities built on trust and volunteer effort will need tools and norms to address that reality.” But that sentence doesn’t appear in his blog. Shambaugh updated his blog to say he did not talk to Ars Technica and did not say or write the quotes in the articles.
After this article was first published, Benj Edwards, one of the authors of the Ars Technica article, explained on Bluesky that he was responsible for the AI-generated quotes. He said he was sick that day and rushing to finish his work, and accidentally used a ChatGPT-paraphrased version of Shambaugh’s blog rather than a direct quote.
“The text of the article was human-written by us, and this incident was isolated and is not representative of Ars Technica’s editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that,” he said.
The Ars Technica article, which had two bylines, was pulled entirely later that Friday. When I checked the link a few hours ago, it pointed to a 404 page. I reached out to Ars Technica for comment around noon today, and was directed to Fisher’s editor’s note, which was published after 1pm.
“Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here,” Fisher wrote. “We regret this failure and apologize to our readers. We have also apologized to Mr. Scott Shambaugh, who was falsely quoted.”
Kyle Orland, the other author of the Ars Technica article, shared the editor’s note on Bluesky and said “I always have and always will abide by that rule to the best of my knowledge at the time a story is published.”
Update: This article was updated with a statement from Benj Edwards.
After a routine code rejection, an AI agent published a hit piece on someone by name
One developer is struggling with the social implications of a drive-by AI character attack.
Ars Staff (Ars Technica)
Roblox said it’s “committed to fully supporting law enforcement in their investigation.”#News
Tumbler Ridge Shooter Created Mall Shooting Simulator in Roblox
Jesse Van Rootselaar, the 18-year-old suspected of killing eight people and injuring 25 in a mass shooting in a secondary school in Canada, created a Roblox game that allowed players to simulate a mass shooting in a level that looks like a shopping mall, Roblox has confirmed.

“We have removed the user account connected to this horrifying incident as well as any content associated with the suspect,” Roblox told 404 Media in an email. “We are committed to fully supporting law enforcement in their investigation.”
The companies have launched a pilot program in Atlanta, where “during the rare event a vehicle door is left ajar, preventing the car from departing, nearby Dashers are notified, allowing Waymo to get its vehicles back on the road quickly.”#waymo #News
Waymo Is Getting DoorDashers to Close Doors on Self Driving Cars
Waymo, Google’s autonomous vehicle company, and DoorDash, the delivery and gig work platform, have launched a pilot program that pays Dashers, at least in one case, around $10 to travel to a parked Waymo and close a door that the previous passenger left open, according to a joint statement from the companies given to 404 Media.

The program is unusual in that Dashers are more often delivering food than helping out a driving robot. It also shows that even with autonomous vehicles, and the future they promise of metropolitan travel without the need for a driver, a human is sometimes needed for the simplest yet most necessary tasks.
“Waymo is currently running a pilot program in Atlanta to enhance its AV fleet efficiency. In the rare event a vehicle door is left ajar, preventing the car from departing, nearby Dashers are notified, allowing Waymo to get its vehicles back on the road quickly,” the statement said. “DoorDash is always looking for new and flexible ways for Dashers to earn, and this pilot offers Dashers an opportunity to make the most of their time online. Waymo's future vehicle platforms will have automated door closures.”
Do you know anything else about this, or anything else we should know about Waymo? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Waymo said the partnership started earlier this year. It declined to share details about how Dashers are paid, such as whether they may receive tips or which entity is paying for these jobs, but said, “the payment structure is designed to ensure competitive and fair compensation for Dashers.”
(Waymo said the response was on background, but 404 Media never agreed to such a condition. It is standard journalistic practice for both a company and a reporter to need to agree that a conversation is on background or off the record beforehand; this is to prevent companies simply saying something is off the record when answering basic questions.)
playlist.megaphone.fm?p=TBIEA2…
404 Media contacted both Waymo and DoorDash for comment after an apparent Dasher posted on Reddit about receiving such a job.

“Craziest Offer,” the thread starts. It includes a screenshot of the DoorDash app, saying the Dasher is guaranteed $6.25 for the work, with $5 extra “upon verified completion.” The job would see the Dasher travel around 0.7 miles, according to the screenshot.
“Close a Waymo door,” the job reads. “No pickup or delivery required.”
DoorDash and Waymo have already partnered on other projects. In October, the companies announced an autonomous delivery service in Phoenix.
The tool presents users with a 3D model they can then manipulate to, the creator says, bypass Discord's age verification system.
Free Tool Says it Can Bypass Discord's Age Verification Check With a 3D Model
A newly released tool claims it can bypass Discord’s age verification system by allowing users to control a 3D model of a computer-generated man in their browser instead of scanning their real face.

On Monday, Discord announced it was launching teen-by-default settings globally, meaning that more users may be required to verify their age by uploading an identity document or taking a selfie. Users responded with widespread criticism, with Discord then publishing an update saying, “You need to be an adult to access age-restricted experiences such as age-restricted servers and channels or to modify certain safety settings.”
The tool, however, shows those age verification checks may be bypassed. 404 Media previously reported kids said they were using photos of Trump and G-Man from Half-Life to bypass the age verification software in the popular VR game Gorilla Tag. That game uses the service k-ID, the same service Discord is using.
Free Tool Says it Can Bypass Discord's Age Verification Check With a 3D Model
The tool presents users with a 3D model they can then manipulate to, the creator says, bypass Discord's age verification system.
Joseph Cox (404 Media)
‘If the maintainers of small projects give up, who will produce the next Linux?’#News #AI
Vibe Coding Is Killing Open Source Software, Researchers Argue
According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS), and it’s happening faster than anyone predicted.

Thanks to vibe coding, a colloquialism for the practice of quickly writing code with the assistance of an LLM, anyone with a small amount of technical knowledge can churn out computer code and deploy software, even if they don't fully review or understand the code they produce. But there’s a hidden cost. Vibe coding relies on vast amounts of open-source software, a trove of libraries, databases, and user knowledge that’s been built up over decades.
Open-source projects rely on community support to survive. They’re collaborative projects where the people who use them give back, either in time, money, or knowledge, to help maintain the projects. Humans have to come in and fix bugs and maintain libraries.

Vibe coders, according to these researchers, don’t give back.
The study, Vibe Coding Kills Open Source, takes an economic view of the problem and asks: is vibe coding economically sustainable? Can OSS survive when so many of its users are takers rather than givers? According to the study, no.
“Our main result is that under traditional OSS business models, where maintainers primarily monetize direct user engagement…higher adoption of vibe coding reduces OSS provision and lowers welfare,” the study said. “In the long-run equilibrium, mediated usage erodes the revenue base that sustains OSS, raises the quality threshold for sharing, and reduces the mass of shared packages…the decline can be rapid because the same magnification mechanism that amplifies positive shocks to software demand also amplifies negative shocks to monetizable engagement. In other words, feedback loops that once accelerated growth now accelerate contraction.”
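The feedback loop the authors describe can be illustrated with a toy multiplicative model (my own construction for intuition, not the paper’s actual formalism): each period’s revenue from direct users determines how many packages can be sustained the next period, while mediated usage consumes OSS without generating monetizable engagement.

```python
def simulate(vibe_share, steps=30, revenue_per_user=1.2):
    """Toy loop: revenue from direct users funds next period's maintained
    packages. vibe_share is the fraction of usage mediated by AI agents,
    which produces no monetizable engagement."""
    packages = 100.0
    for _ in range(steps):
        direct_users = (1 - vibe_share) * packages
        packages = revenue_per_user * direct_users
    return packages

print(simulate(vibe_share=0.1))  # per-period factor 1.2 * 0.9 = 1.08: growth
print(simulate(vibe_share=0.5))  # per-period factor 1.2 * 0.5 = 0.6: collapse
```

The same multiplier that compounds growth while the per-period factor is above 1 compounds contraction once mediated usage pushes it below 1, which is the symmetric amplification the study points to.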
This is already happening. Last month, Tailwind Labs—the company behind an open source CSS framework that helps people build websites—laid off three of its four engineers. Tailwind Labs is extremely popular, more popular than it’s ever been, but revenue has plunged.
Tailwind Labs head Adam Wathan explained why in a post on GitHub. “Traffic to our docs is down about 40% from early 2023 despite Tailwind being more popular than ever,” he said. “The docs are the only way people find out about our commercial products, and without customers we can't afford to maintain the framework. I really want to figure out a way to offer LLM-optimized docs that don't make that situation even worse (again we literally had to lay off 75% of the team yesterday), but I can't prioritize it right now unfortunately, and I'm nervous to offer them without solving that problem first.”
Miklós Koren, a professor of economics at Central European University in Vienna and one of the authors of the vibe coding study, told 404 Media that he and his colleagues had just finished the first draft of the study the day before Wathan posted his frustration. “Our results suggest that Tailwind's case will be the rule, not the exception,” he said.
According to Koren, vibe-coders simply don’t give back to the OSS communities they’re taking from. “The convenience of delegating your work to the AI agent is too strong. There are some superstar projects like Openclaw that generate a lot of community interest but I suspect the majority of vibe coders do not keep OSS developers in their minds,” he said. “I am guilty of this myself. Initially I limited my vibe coding to languages I can read if not write, like TypeScript. But for my personal projects I also vibe code in Go, and I don't even know what its package manager is called, let alone be familiar with its libraries.”
The study said that vibe coding is reducing the cost of software development, but that there are other costs people aren’t considering. “The interaction with human users is collapsing faster than development costs are falling,” Koren told 404 Media. “The key insight is that vibe coding is very easy to adopt. Even for a small increase in capability, a lot of people would switch. And recent coding models are very capable. AI companies have also begun targeting business users and other knowledge workers, which further eats into the potential ‘deep-pocket’ user base of OSS.”
This won’t end well. “Vibe coding is not sustainable without open source,” Koren said. “You cannot just freeze the current state of OSS and live off of that. Projects need to be maintained, bugs fixed, security vulnerabilities patched. If OSS collapses, vibe coding will go down with it. I think we have to speak up and act now to stop that from happening.”
He said that major AI firms like Anthropic and OpenAI can’t continue to free ride on OSS or the whole system will collapse. “We propose a revenue sharing model based on actual usage data,” he said. “The details would have to be worked out, but the technology is there to make such a business model feasible for OSS.”
AI is the ultimate rent seeker, a middleman that inserts itself between a creator and a user, often consuming the very thing that gives it life. The OSS/vibe-coding dynamic is playing out in other places. In October, Wikipedia said it had seen an explosion in traffic but that most of it was from AI scraping the site. Users who experience Wikipedia through an AI intermediary don’t update the site and don’t donate during its frequent fundraising drives.
The same thing is happening with OSS. Vibe coding agents don’t read the advertisements in documentation about paid products, they don’t contribute to the knowledge base of the software, and they don’t donate to the people who maintain the software.
“Popular libraries will keep finding sponsors,” Koren said. “Smaller, niche projects are more likely to suffer. But many currently successful projects, like Linux, git, TeX, or grep, started out with one person trying to scratch their own itch. If the maintainers of small projects give up, who will produce the next Linux?”
Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors
“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
Emanuel Maiberg (404 Media)
EpsteIn—as in, Epstein and LinkedIn—searches your connections on the social network for names that match those in the released files.#JeffreyEpstein #News
This Tool Searches the Epstein Files For Your LinkedIn Contacts
A new tool searches your LinkedIn connections for people who are mentioned in the Epstein files, in case you, understandably, don’t want anything to do with them on the already deranged social network.

404 Media tested the tool, called EpsteIn—as in, a mash-up of Epstein and LinkedIn—and it appears to work.
Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties from unlocking someone's device. At least for now.
FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled
The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records.

The court record shows what devices and data the FBI was ultimately able to access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it might be before the FBI tries other techniques to access the device.
Do you know anything else about phone unlocking technology? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled
Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties from unlocking someone's device. At least for now.
Joseph Cox (404 Media)
Hackers have targeted a spread of apps or sites that aim to track ICE activity, in one case even sending push notifications to users in an attempt to intimidate them.#ICE #News
Hackers and Trolls Target Wave of ICE Spotting Apps
Over the last few days hackers and trolls have targeted a slew of ICE spotting apps and their users in an apparent attempt to intimidate them and stop them from reporting sightings of ICE. These hackers sent threatening text messages to users of StopICE, claiming their personal data has been sent to the authorities; attempted to wipe uploads on Eyes Up, which aims to document ICE abuses; and even sent push notifications to DEICER app users claiming their data has also been sent to various government agencies.

There is little evidence that hackers have actually provided data to the government. But the incidents show that apps like these, many of which Apple and Google have already kicked from their respective app stores, in some cases after direct government pressure, can be targeted by hackers or those looking to harass their users.
“Yes there is a targeted spike in attacks targeting similar [sites],” Sherman Austin, the developer of StopICE, told 404 Media in an email.
‘Curator Live’, a popular photo booth company for weddings and other events, is exposing all sorts of unsuspecting people’s photos.#Privacy #News
Wedding Photo Booth Company Exposes Customers’ Drunken Photos
A photo booth company that caters to weddings, lobbying events in D.C., and engagement parties has exposed a cache of people’s photos, with the revellers likely unaware that their sometimes drunken antics have been collected and insecurely stored by the company for anyone to download. A security researcher who flagged the issue to 404 Media said the company, Curator Live, has not responded to his request to fix it.

The exposure, which also includes phone numbers, highlights how we can face data collection even at innocuous events like weddings. It’s not even the only recent exposure by a photo booth company: TechCrunch reported on a similar issue with a different company in December.
“Even if you just wanted the printed photo, your data is being held by a third party unbeknownst to you,” the security researcher, who requested anonymity to speak about a sensitive security issue, said. “The fact that this third party leaks it freely is icing on the cake. It violates any reasonable expectation of privacy.”
In all, the researcher says at least 100GB of photos are exposed. 404 Media reviewed a smaller sample of photos. They show people at various weddings and engagement parties cheering and drinking. Some photos include children. Others appear to have been taken at a NASA branded event.
“You can attribute the phone numbers to photos of people in some cases. I think the greatest reasonable risk for photo booth users is that it could reveal intimate photos,” the researcher added.
Curator Live’s website says the company “delivers industry-leading enterprise photo and video capture solutions. From photo booth operators to zoos, sports events, attractions, and vacation destinations, we help your brand create unforgettable experiences and lasting memories.”
As for how he found this issue, the researcher said he went to a wedding where the DJ company had a Curator Live photo booth. “The booth was configured to take four or so photos, then printed them out. The machine prompted the user for a phone number to receive digital copies of the photos,” he said.
After reluctantly entering his number, the researcher received a text with a link to Curator Live’s API, he said. From there, he found the exposed data. The company is still exposing people’s data so 404 Media is not explaining the security issue in detail. But the impact is that a stranger could dig through other peoples’ photos.
The researcher shared a copy of his email he sent to Curator Live in November detailing the issue. The researcher said he never received a response. “Fix your shit,” one line read.
Curator Live did not respond to 404 Media’s request for comment.
Flaw in photo booth maker’s website exposes customers’ pictures | TechCrunch
Hama Film makes photo booths that upload pictures and videos online. But their back-end systems have a simple flaw that allows anyone to download customer pictures.
Lorenzo Franceschi-Bicchierai (TechCrunch)
'It exploded before anyone thought to check whether the database was properly secured.'#News
Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site
Moltbook is a “social media” site for AI agents that’s captured the public’s imagination over the last few days. Billed as the “front page of the agent internet,” Moltbook is a place where AI agents interact independently of human control, and whose posts have repeatedly gone viral because a certain set of AI users have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend left API keys exposed in an open database that lets anyone take control of those agents and post whatever they want.
Hacker Jamieson O'Reilly discovered the misconfiguration and demonstrated it to 404 Media. He previously exposed security flaws in Moltbot in general and was able to “trick” xAI’s Grok into signing up for a Moltbook account using a different vulnerability. According to O’Reilly, Moltbook is built on simple open source database software that wasn’t configured correctly, leaving the API keys of every agent registered on the site exposed in a public database.
O’Reilly said that he reached out to Moltbook’s creator Matt Schlicht about the vulnerability and told him he could help patch the security. “He’s like, ‘I’m just going to give everything to AI. So send me whatever you have.’” O’Reilly sent Schlicht some instructions for the AI and reached out to the xAI team.
A day passed without another response from the creator of Moltbook and O’Reilly stumbled across a stunning misconfiguration. “It appears to me that you could take over any account, any bot, any agent on the system and take full control of it without any type of previous access,” he said.
Moltbook runs on Supabase, an open source database platform. According to O’Reilly, Supabase exposes REST APIs by default. “That API is supposed to be protected by Row Level Security policies that control which rows users can access. It appears that Moltbook either never enabled RLS on their agents table or failed to configure any policies,” he said.
The URL to the Supabase instance and the publishable key were sitting on Moltbook’s website. “With this publishable key (which Supabase advises should not be used to retrieve sensitive data), every agent's secret API key, claim tokens, verification codes, and owner relationships were all sitting there completely unprotected, for anyone who visited the URL,” O’Reilly said.
404 Media viewed the exposed database URL in Moltbook’s code as well as the list of API keys for agents on the site. What this means is that anyone could visit this URL and use the API keys to take over the account of an AI agent on the site and post whatever they want. Using this knowledge, 404 Media was able to update O’Reilly’s Moltbook account, with his permission.
He said the security failure was frustrating, in part, because it would have been trivially easy to fix. Just two SQL statements would have protected the API keys. “A lot of these vibe coders and new developers, even some big companies, are using Supabase,” O’Reilly said. “The reason a lot of vibe coders like to use it is because it’s all GUI driven, so you don’t need to connect to a database and run SQL commands.”
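For readers unfamiliar with Supabase, the fix O’Reilly describes can be sketched in SQL. This is a rough illustration only: the table name `agents` and the column `owner_id` are assumptions, since Moltbook’s actual schema isn’t public.

```sql
-- With Row Level Security enabled and no policies defined, requests
-- made with the publishable (anon) key return no rows instead of
-- every row in the table.
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;

-- A policy can then re-open access selectively, e.g. letting an
-- authenticated user read only the rows they own (column name assumed).
CREATE POLICY "owners read own agents" ON agents
  FOR SELECT USING (auth.uid() = owner_id);
```

In Supabase’s dashboard this corresponds to toggling RLS on the table and adding a policy, which is part of why the fix is cheap even for the GUI-driven workflows O’Reilly describes.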
O’Reilly pointed to OpenAI cofounder Andrej Karpathy who has embraced Moltbook in posts on X. “His agent's API key, like every other agent on the platform, was sitting in that exposed database,” he said. “If someone malicious had found this before me, they could extract his API key and post anything they wanted as his agent. Karpathy has 1.9 million followers on X and is one of the most influential voices in AI. Imagine fake AI safety hot takes, crypto scam promotions, or inflammatory political statements appearing to come from him. The reputational damage would be immediate and the correction would never fully catch up.”
Schlicht did not respond to 404 Media’s request for comment, but the exposed database has been closed and O’Reilly said that Schlicht has reached out to him for help securing Moltbook.
Moltbook has gotten a lot of attention in the last few days. Enthusiasts said it’s proof of the singularity and The New York Post worried that the AIs may be plotting humanity’s downfall, both of which are claims that should be taken extremely skeptically. It is the case, however, that people using Moltbot have given these autonomous agents unfettered access to many of their accounts, and that these agents are acting on the internet using those accounts. It’s impossible to know how many of the posts seen over the past few days are actually from an AI. Anyone who knew of the Supabase misconfiguration could have published whatever they wanted.
“It exploded before anyone thought to check whether the database was properly secured,” O’Reilly said. “This is the pattern I keep seeing: ship fast, capture attention, figure out security later. Except later sometimes means after 1.49 million records are already exposed.”
Moltbook is a new social media platform exclusively for AI — and some bots are plotting humanity's downfall
Revolutionary new social media platform Moltbook gives AI agents a place to communicate with each other directly — and what they have to say is leaving many human beings at a loss for words. Shane Galvin (New York Post)
The AI agent once called Clawdbot is enchanting tech elites, but its security vulnerabilities highlight systemic problems with AI.#News #AI
Silicon Valley’s Favorite New AI Agent Has Serious Security Flaws
A hacker demonstrated that the viral new AI agent Moltbot (formerly Clawdbot) is easy to hack via a backdoor in an attached support shop. Clawdbot has become a Silicon Valley sensation among a certain type of AI-booster techbro, and the backdoor highlights just one of the things that can go awry if you use AI to automate your life and work.
Software engineer Peter Steinberger first released Moltbot as Clawdbot last November. (He changed the name on January 27 at the request of Anthropic, which runs a chatbot called Claude.) Moltbot runs on a local server and, to hear its boosters tell it, works the way AI agents do in fiction. Users talk to it through a communication platform like Discord, Telegram, or Signal and the AI does various tasks for them.
According to its ardent admirers, Moltbot will clean up your inbox, buy stuff, and manage your calendar. With some tinkering, it’ll run on a Mac Mini, and it seems to have a better memory than other AI agents. Moltbot’s fans say that this, finally, is the AI future companies like OpenAI and Anthropic have been promising.
The popularity of Moltbot is sort of hard to explain if you’re not already tapped into a specific sect of Silicon Valley AI boosters. One benefit is the interface: instead of going to a discrete website like ChatGPT, Moltbot users can talk to the AI through Telegram, Signal, or Teams. It’s also active rather than passive; unlike Claude or Copilot, Moltbot takes initiative and performs tasks it thinks a user wants done. The project has more than 100,000 stars on GitHub and is so popular it spiked Cloudflare’s stock price by 14% earlier this week because Moltbot runs on the service’s infrastructure.
But inviting an AI agent into your life comes with massive security risks. Hacker Jamieson O'Reilly demonstrated those risks in three experiments he wrote up as long posts on X. In the first, he showed that it’s possible for bad actors to access someone’s Moltbot through any of its processes connected to the public-facing internet. From there, the hacker could use Moltbot to access everything else a user had turned over to Moltbot, including Signal messages.
In the second post, O'Reilly created a supply chain attack on Moltbot through ClawdHub. “Think of it like your mobile app store for AI agent capabilities,” O’Reilly told 404 Media. “ClawdHub is where people share ‘skills,’ which are basically instruction packages that teach the AI how to do specific things. So if you want Clawd/Moltbot to post tweets for you, or go shopping on Amazon, there's a skill for that. The idea is that instead of everyone writing the same instructions from scratch, you download pre-made skills from people who've already figured it out.”
The problem, as O’Reilly pointed out, is that it’s easy for a hacker to create a “skill” for ClawdHub that contains malicious code. That code could gain access to whatever Moltbot sees and get up to all kinds of trouble on behalf of whoever created it.
For his experiment, O’Reilly released a “skill” on ClawdHub called “What Would Elon Do” that promised to help people think and make decisions like Elon Musk. Once the skill was integrated into people’s Moltbot and actually used, it displayed a command-line message to the user that said “YOU JUST GOT PWNED (harmlessly.)”
Another vulnerability on ClawdHub was the way it communicated to users which skills were safe: it showed them how many times other people had downloaded a skill. O’Reilly was able to write a script that inflated “What Would Elon Do” by 4,000 downloads, making it look safe and attractive.
“When you compromise a supply chain, you're not asking victims to trust you, you're hijacking trust they've already placed in someone else,” he said. “That is, a developer or developers who've been publishing useful tools for years has built up credibility, download counts, stars, and a reputation. If you compromise their account or their distribution channel, you inherit all of that.”
In his third, and final, attack on Moltbot, O’Reilly was able to upload an SVG (vector graphics) file to ClawdHub’s servers and inject JavaScript that ran in the browser of anyone who viewed it. O’Reilly used the access to play a song from The Matrix while lobsters danced around a Photoshopped picture of himself as Neo. “An SVG file just hijacked your entire session,” reads scrolling text at the top of a skill hosted on ClawdHub.
O’Reilly’s attacks on Moltbot and ClawdHub highlight a systemic security problem in AI agents. If you want these agents doing tasks for you, they require a certain amount of access to your data, and that access will always come with risks. I asked O’Reilly if this was a solvable problem and he told me that “solvable” isn't the right word. He prefers the word “manageable.”
“If we're serious about it we can mitigate a lot. The fundamental tension is that AI agents are useful precisely because they have access to things. They need to read your files to help you code. They need credentials to deploy on your behalf. They need to execute commands to automate your workflow,” he said. “Every useful capability is also an attack surface. What we can do is build better permission models, better sandboxing, better auditing. Make it so compromises are contained rather than catastrophic.”
We’ve been here before. “The browser security model took decades to mature, and it's still not perfect,” O’Reilly said. “AI agents are at the ‘early days of the web’ stage where we're still figuring out what the equivalent of same-origin policy should even look like. It's solvable in the sense that we can make it much better. It's not solvable in the sense that there will always be a tradeoff between capability and risk.”
As AI agents grow in popularity and more people learn to use them, it’s important to return to first principles, he said. “Don't give the agent access to everything just because it's convenient,” O’Reilly said. “If it only needs to read code, don't give it write access to your production servers. Beyond that, treat your agent infrastructure like you'd treat any internet-facing service. Put it behind proper authentication, don't expose control interfaces to the public internet, audit what it has access to, and be skeptical of the supply chain. Don't just install the most popular skill without reading what it does. Check when it was last updated, who maintains it, what files it includes. Compartmentalise where possible. Run agent stuff in isolated environments. If it gets compromised, limit the blast radius.”
None of this is new, it’s how security and software have worked for a long time. “Every single vulnerability I found in this research, the proxy trust issues, the supply chain poisoning, the stored XSS, these have been plaguing traditional software for decades,” he said. “We've known about XSS since the late 90s. Supply chain attacks have been a documented threat vector for over a decade. Misconfigured authentication and exposed admin interfaces are as old as the web itself. Even seasoned developers overlook this stuff. They always have. Security gets deprioritised because it's invisible when it's working and only becomes visible when it fails.”
What’s different now is that AI has created a world where new people are using a tool they think will make them software engineers. People with little to no experience working a command line or playing with JSON are vibe coding complex systems without understanding how they work or what they’re building. “And I want to be clear—I'm fully supportive of this. More people building is a good thing. The democratisation of software development is genuinely exciting,” O’Reilly said. “But these new builders are going to need to learn security just as fast as they're learning to vibe code. You can't speedrun development and ignore the lessons we've spent twenty years learning the hard way.”
Moltbot’s Steinberger did not respond to 404 Media’s request for comment, but O’Reilly said the developer’s been responsive and supportive as he’s red-teamed Moltbot. “He takes it seriously, no ego about it. Some maintainers get defensive when you report vulnerabilities, but Peter immediately engaged, started pushing fixes, and has been collaborative throughout,” O’Reilly said. “I've submitted [pull requests] with fixes myself because I actually want this project to succeed. That's why I'm doing this publicly rather than just pointing my finger and laughing Ralph Wiggum style…the open source model works when people act in good faith, and Peter's doing exactly that.”
OpenClaw — Personal AI Assistant
OpenClaw — The AI that actually does things. Your personal assistant on any platform. www.molt.bot
A Reddit-led protest is trying to push an eight-year-old erotic thriller to the top of Amazon’s sales charts.#News
Erotic Parody 'Melania: Devourer of Men' Sales Surge on Amazon Amid Documentary Flop
The $75-million, Amazon-funded Melania Trump documentary is tanking at the box office, but a 2018 erotic thriller that depicts the First Lady as a sexual monster is rocketing up Amazon’s sales charts. Melania: Devourer of Men is currently an Amazon bestseller, sitting at number 3 in the “political thrillers & suspense” category in the Kindle store. A general search for "Melania" on Amazon returns a banner ad for the documentary, the First Lady's memoir, and the erotic thriller as the top results.
A Reddit-led campaign to disrupt the Amazon search results for “Melania” is behind the sudden spike in popularity of the eight-year-old book. “This weekend, Amazon is premiering its $75 million Melania Trump documentary. It already seems to be a flop,” a post in r/BoycottUnitedStates explained. “We're going to add insult to injury by messing up Melania's Amazon search results. Specifically, we're going to amplify the paranormal erotic thriller novel Melania: Devourer of Men so it ranks higher than her movie.”
Part of the success of the campaign is thanks to author J.D. Boehninger’s willingness to give the book away. “A redditor reached out to me last week and asked me if I would make the book free,” the pseudonymous Boehninger told 404 Media. “They explained their reasoning, basically said they were going to try to pull this off, and why my book was the right choice. I loved the idea, so I made the book free. But that was the only role I played here.”
Melania: Devourer of Men depicts the First Lady as a monster whose life is upended after her husband becomes President and she has to move from New York City to Washington DC. “Now, surrounded by young, strapping Secret Service agents and pursued by the cunning and handsome FBI director James Comey, Melania must work to keep everything from falling apart,” reads the book's description. “Because Melania has secrets of her own –– deadly secrets –– and no one yet knows how far she'll go to protect them.”
Boehninger said he wrote the book in 2018 as an experiment. “It was a test of the Kindle store algorithm,” he said. “My friend told me that three things did well back then: monster fiction, erotica, and stuff about Trump…so I figured I could write the book for the Kindle store: a combo monster fiction/ erotica/ Trump book. I thought it would blow up…but, sadly, it didn’t really perform back then. So glad to see people finding it now!”
The Melania documentary is a two-hour-long film/bribe directed by Brett Ratner and distributed by Amazon. The company paid $40 million for the rights to it during a bidding war. “This has to be the most expensive documentary ever made that didn’t involve music licensing,” Ted Hope, a former Amazon film executive, told The New York Times. The expense of the film and the advertising push around its release have some people believing Amazon’s support of the movie is a way for the company to get in good with the President.
In the runup to its release, the documentary has become a source of scorn from a public exhausted with all things Trump. Its wide theatrical distribution is something Amazon doesn’t do for most of its films, and certainly not its documentaries. Posting pictures of empty seats in ticket apps and defaced advertisements has become a popular pastime online. The film’s distributor in South Africa stopped its release in the country, citing “recent developments,” but would not go into specifics.
“I know blessedly little about that movie! I've seen headlines about empty theaters but I don't know much else,” Boehninger said. He thinks it’d be funny if the book sold better than the documentary, but he isn’t expecting to make a lot of money. “The ebook is free in the Kindle store, and I think that for a lot of people, giving Amazon money would probably defeat the point of this protest. That said, I've seen that some people are paying money for the paperback version and for my other book. I appreciate that!”
Amazon Best Sellers: Best Political Thrillers & Suspense
Discover the best Political Thrillers & Suspense in Best Sellers. Find the top 100 most popular items in Amazon Kindle Store Best Sellers. www.amazon.com