OpenAI introduces new age prediction and verification methods after wave of teen suicide stories involving chatbots.#News
ChatGPT Will Guess Your Age and Might Require ID for Age Verification
OpenAI has announced it is introducing new safety measures for ChatGPT after a wave of stories and lawsuits accusing ChatGPT and other chatbots of playing a role in a number of teen suicide cases. ChatGPT will now attempt to guess a user’s age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old.
“We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” the company said in its announcement.
“I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking,” OpenAI CEO Sam Altman said on X.
In August, OpenAI was sued by the parents of Adam Raine, who died by suicide in April. The lawsuit alleges that ChatGPT helped him write the first draft of his suicide note, suggested improvements to his methods, ignored early attempts at self-harm, and urged him not to talk to adults about what he was going through.
“Where a trusted human may have responded with concern and encouraged him to get professional help, ChatGPT pulled Adam deeper into a dark and hopeless place by assuring him that ‘many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control.’”
In August the Wall Street Journal also reported a story about a 56-year-old man who committed a murder-suicide after ChatGPT indulged his paranoia. Today, the Washington Post reported a story about another lawsuit alleging that a Character AI chatbot contributed to a 13-year-old girl’s death by suicide.
OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, stricter, and more invasive security measures.
In addition to attempting to guess or verify a user’s age, ChatGPT will now also apply different rules to teens who are using the chatbot.
“For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” the announcement said. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”
OpenAI’s post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called “uncensored” models, and a political shift to the right that sees many forms of content moderation as censorship, have caused OpenAI to loosen those restrictions.
“We want users to be able to use our tools in the way that they want, within very broad bounds of safety,” OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide, is summed up in the announcement: “‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.”
OpenAI is not the first company attempting to use machine learning to predict the age of its users. In July, YouTube announced it would use a similar method to “protect” teens from certain types of content on its platform.
Extending our built-in protections to more teens on YouTube
We're extending our existing built-in protections to more US teens on YouTube, using machine learning age estimation.
James Beser (YouTube Official Blog)
An LLM breathed new life into 'Animal Crossing' and made the villagers rise up against their landlord.#News #VideoGames
AI-Powered Animal Crossing Villagers Begin Organizing Against Tom Nook
A software engineer in Austin has hooked up Animal Crossing to an AI and breathed new and disturbing life into its villagers. Fed by a Large Language Model (LLM) trained on Animal Crossing scripts and an RSS reader, the anthropomorphic folk of the Nintendo classic spouted new dialogue, talked about current events, and actively plotted against Tom Nook’s predatory bell prices.
The Animal Crossing LLM is the work of Josh Fonseca, a software engineer in Austin, Texas who works at a small startup. Ars Technica first reported on the mod. His personal blog is full of small software projects, like a task manager for the text editor Vim, a mobile app that helps rock climbers find partners, and the Animal Crossing AI. He also documented the project in a YouTube video.
Fonseca started playing around with AI in college and told 404 Media that he’d always wanted to work in the video game industry. “Turns out it’s a pretty hard industry to break into,” he said. He also graduated in 2020. “I’m sure you’ve heard, something big happened that year.” He took the first job he could find, but kept playing around with video games and AI and had previously injected an LLM into Stardew Valley.
Fonseca used a Dolphin emulator running the original GameCube Animal Crossing on a MacBook to get the project working. According to his blog, an early challenge was just getting the AI and the game to communicate. “The solution came from a classic technique in game modding: Inter-Process Communication (IPC) via shared memory. The idea is to allocate a specific chunk of the GameCube's RAM to act as a ‘mailbox.’ My external Python script can write data directly into that memory address, and the game can read from it,” he said in the blog.
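As a rough illustration of the mailbox pattern Fonseca describes, here is a minimal Python sketch assuming the community-made dolphin-memory-engine bindings; the mailbox address, size, and length-prefixed payload layout are hypothetical stand-ins, not values from his project.

```python
# Minimal sketch of the shared-memory "mailbox," assuming the community's
# dolphin-memory-engine Python bindings (pip install dolphin-memory-engine).
# The address and payload layout below are hypothetical, not Fonseca's values.
import time
import dolphin_memory_engine as dme

MAILBOX_ADDR = 0x817F0000  # hypothetical chunk of GameCube RAM set aside as the mailbox
MAILBOX_SIZE = 0x400       # hypothetical: 1 KB is plenty for one dialogue line

def wait_for_dolphin():
    # Attach to a running Dolphin process, retrying until the emulator is found.
    while not dme.is_hooked():
        dme.hook()
        time.sleep(0.5)

def write_dialogue(text: bytes):
    # Length-prefix the payload so a game-side patch knows how many bytes to read.
    payload = len(text).to_bytes(2, "big") + text
    assert len(payload) <= MAILBOX_SIZE
    dme.write_bytes(MAILBOX_ADDR, payload)

wait_for_dolphin()
write_dialogue(b"Did you see the news today?")
```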
He told 404 Media that this was the most tedious part of the whole project. “The process of finding the memory address the dialogue actually lives at and getting it to scan to my MacBook, which has all these security features that really don’t want me to do that, and ending up writing to the memory took me forever,” he said. “The communication between the game and an external source was the biggest challenge for me.”
Once he got his code and the game talking, he ran into another problem. “Animal Crossing doesn't speak plain text. It speaks its own encoded language filled with control codes,” he said in his blog. “Think of it like HTML. Your browser doesn't just display words; it interprets tags like <b> to make text bold. Animal Crossing does the same. A special prefix byte, CHAR_CONTROL_CODE, tells the game engine, ‘The next byte isn't a character, it's a command!’”
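To make the control-code idea concrete, here is a hedged sketch of what a text encoder in that style could look like. CHAR_CONTROL_CODE comes from Fonseca’s write-up, but every byte value and command below is an invented stand-in; the real values are documented by the Animal Crossing modding community.

```python
# Sketch of encoding a dialogue line into a control-code byte format.
# All byte values here are hypothetical placeholders, not the game's real ones.
CHAR_CONTROL_CODE = 0x7F  # hypothetical prefix byte: "the next byte is a command"

CMD_PAUSE = 0x01          # hypothetical command: pause for N frames
CMD_COLOR = 0x02          # hypothetical command: color the following text

def encode_line(text: str, color: int | None = None) -> bytes:
    out = bytearray()
    if color is not None:
        out += bytes([CHAR_CONTROL_CODE, CMD_COLOR, color])
    for b in text.encode("ascii"):  # the real game uses its own charset; ASCII stands in
        if b == CHAR_CONTROL_CODE:
            raise ValueError("character collides with the control-code prefix")
        out.append(b)
    out += bytes([CHAR_CONTROL_CODE, CMD_PAUSE, 10])  # short beat at the end of the line
    return bytes(out)

print(encode_line("Arfer!", color=3).hex())
```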
But this was a solved problem. The Animal Crossing modding community long ago learned the secrets of the villagers’ language, and Fonseca was able to build on their work. Once he understood the game’s dialogue systems, he built the AI brain. It took two models, one to write the dialogue and another he called “The Director” that would add in pauses, emphasize words with color, and choose the facial animations for the characters. He used a fine-tuned version of Google’s Gemini for this and said it was the most consistent model he’d used.
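A two-pass writer/Director pipeline like the one described could look roughly like the following sketch, assuming Google’s google-genai Python client; the model name, prompts, and annotation tags are assumptions for illustration, not Fonseca’s actual setup.

```python
# Rough sketch of a writer + "Director" split, assuming the google-genai client
# (pip install google-genai) with an API key in the GEMINI_API_KEY env var.
# Prompts, model name, and tag format are assumptions, not Fonseca's setup.
from google import genai

client = genai.Client()

def write_line(villager: str, personality: str, headline: str) -> str:
    # Pass one: the writer model produces raw dialogue in character.
    prompt = (f"You are {villager}, an Animal Crossing villager. "
              f"Personality: {personality}. React to this headline in one short line: {headline}")
    return client.models.generate_content(model="gemini-2.0-flash", contents=prompt).text

def direct(line: str) -> str:
    # Pass two: the "Director" annotates pauses, colored emphasis, and an expression.
    prompt = ("Annotate this game dialogue with <pause>, <color=N>word</color>, and a "
              f"leading <expr=NAME> facial-animation tag. Return only the markup: {line}")
    return client.models.generate_content(model="gemini-2.0-flash", contents=prompt).text

raw = write_line("Scoot", "fitness-obsessed duck", "Mail-in voting debate heats up")
print(direct(raw))
```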
To make it work, he fine-tuned the model, training it further on a small, curated set of examples to make it better at specific outputs. “You probably need a minimum of 50 to 100 really good examples in order to make it better,” he said.
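For a sense of scale, 50 to 100 examples is just a small file of input/output pairs. This sketch shows one plausible JSONL layout for such a set; the examples are invented, not Fonseca’s training data.

```python
# Sketch of a tiny fine-tuning set as JSONL, one prompt/response pair per line.
# The field names follow a common text-tuning convention; examples are invented.
import json

examples = [
    {"text_input": "Villager: Cookie (peppy dog). React to: town fund hits 100,000 bells",
     "output": "Oh my gosh, we're rich! Arfer!"},
    {"text_input": "Villager: Cheri (peppy cub). React to: Tom Nook raises loan payments",
     "output": "Our big dreams are getting squashed, tralala!"},
]

with open("villager_tuning.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```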
Results for the experiment were mixed. Cookie, Scoot, and Cheri did indeed utter new phrases in keeping with their personality. Things got weird when Fonseca hooked up the game to an RSS reader so the villagers could talk about real world news. “If you watch the video, all the sources are heavily, politically, leaning in one direction,” he said. “I did use a Fox News feed, not for any other reason than I looked up ‘news RSS feeds’ and they were the first link and I didn’t really think it through. And then I started getting those results…I thought they would just present the news, not have leanings or opinions.”
“Trump’s gonna fight like heck to get rid of mail-in voting and machines!” Fitness-obsessed duck Scoot said in the video. “I bet he’s got some serious stamina, like, all the way in to the finish line—zip, zoom!”
The pink dog Cookie was up on her Middle East news. “Oh my gosh, Josh 😀! Did you see the news?! Gal Gadot is in Israel supporting the families! Arfer,” she said, uttering her trademark catchphrase after sharing the latest about Israel.
In the final part of the experiment, Fonseca enabled the villagers to gossip. “I gave them a tiny shared memory for gossip, who said what, to whom, and how they felt,” he said in the blog. A sketch of what that memory might look like follows the villagers’ quotes below.
The villagers almost instantly turned on Tom Nook, the Tanuki who runs the local stores and holds most of Animal Crossing's inhabitants in debt. “Everything’s going great in town, but sometimes I feel like Tom Nook is, like, taking all the bells!” Cookie said.
“Those of us with big dreams are being squashed by Tom Nook! We gotta take our town back!” Cheri the bear cub said.
“This place is starting to feel more like Nook’s prison, y’know?” said Scoot.
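That “tiny shared memory” could be as simple as a bounded log that gets folded back into each villager’s next prompt. This sketch is a guess at the structure based on the blog’s description of who said what, to whom, and how they felt.

```python
# Guess at a minimal gossip memory: a bounded log queried per villager.
from collections import deque

gossip_log = deque(maxlen=20)  # small on purpose, so old gossip fades away

def remember(speaker, listener, topic, sentiment):
    gossip_log.append({"speaker": speaker, "listener": listener,
                       "topic": topic, "sentiment": sentiment})

def recall_for(villager):
    # Everything this villager said or heard, ready to feed into their next prompt.
    return [g for g in gossip_log if villager in (g["speaker"], g["listener"])]

remember("Cookie", "Cheri", "Tom Nook's prices", "angry")
print(recall_for("Cheri"))
```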
Why do this to Animal Crossing? Why make Scoot and Cheri learn about Gal Gadot, Israel, and Trump?
“I’ve always liked nostalgic content,” Fonseca said. His TikTok and YouTube algorithm is filled with liminal spaces and music from his childhood that’s detuned. He’s gotten into Hauntology, a philosophical idea that studies—among other things—promised futures that did not come to pass.
He sees projects like this as a way of linking the past and the future. “When I was a child I was like, ‘Games are gonna get better and better every year,’” he said. “But after 20 years of playing games I’ve become a little jaded and I’m like, ‘oh there hasn’t really been that much innovation.’ So I really like the idea of mixing those old games with all the future technologies that I’m interested in. And I feel like I’m fulfilling those promised futures in a way.”
He knows that not everyone is a fan of AI. “A lot of people say that dialogue with AI just cannot be because of how much it sounds like AI,” he said. “And to some extent I think people are right. Most people can detect ChatGPT or Gemini language from a mile away. But I really think, if you fine tune it, I was surprised at just how good the results were.”
Animal Crossing’s dialogue is simple, and that simplicity makes it a decent test case for AI video game mods, but Fonseca thinks he can do similar things with more complicated games. “There’s been a lot of discussion around how what I’m doing isn’t possible when there’s like, tasks or quests, because LLMs can’t properly guide you to that task without hallucinating. I think it might be more possible than people think,” he said. “So I would like to either try out my own very small game or take a game that has these kinds of quests and put together a demo of how that might be possible.”
He knows people balk at using AI to make video games, and art in general, but believes it’ll be a net benefit. “There will always be human writers and I absolutely want there to be human writers handling the core,” he said. “I would hope that AI is going to be a tool that doesn’t take away any of the best writers, but maybe helps them add more to their game that maybe wouldn’t have existed otherwise. I would hope that this just helps create more art in the world. I think I see the total art in the world increasing as a good thing…now I know some people would say that using AI ceases to make it art, but I’m also very deep in the programming aspect of it. What it takes to make these things is so incredible that it still feels like magic to me. Maybe on some level I’m still hypnotized by that.”
Modder injects AI dialogue into 2002’s Animal Crossing using memory hack
Unofficial mod lets classic Nintendo GameCube title use AI chatbots with amusing results.
Benj Edwards (Ars Technica)
Harsh lessons from 'Dark Souls' told me to turn my ass around when I got to the red flower jumping puzzle.#News
Does Silksong Seem Unreasonably Hard? You Probably Took a Wrong Turn
There is an aggrieved cry reverberating through the places on the internet where gamers gather. To hear them tell it, Hollow Knight: Silksong, the sequel to the stone-cold classic 2017 platformer, is too damned hard. There’s a particular jumping puzzle involving spikes and red flowers that many are struggling with and they’re filming their frustration and putting it up on the internet, showing their ass for everyone to see.
Even 404 Media’s own Joseph Cox hit these red flowers and had the temerity to declare Silksong a “bad game” that he was “disappointed” in given his love for the original Hollow Knight.
Couldn't be me.
I, too, got to the area just outside Hunter’s March in Silksong where the horrible red flowers bloom. Unlike others, however, my gamer instincts kicked in. I knew what to do. “This is the Dark Souls Catacombs situation all over again,” I said to myself. Then I turned around and came back later.
And that has made all the difference.
In the original Dark Souls, once players clear the opening area they come to Firelink Shrine. From there they can go into Undead Burg, the preferred starting path, or descend into The Catacombs where horrifying undying skeletons block the entrance to a cave. One will open the game up before you, the other will kill new players dead. A lot of Dark Souls players have raged and quit the game over the years because they went into The Catacombs instead of the Undead Burg.
Like Dark Souls, Silksong has an open-ish world where portions of the map are hard-locked by items and soft-locked by player skill checks. One of the entrances into the flower-laden Hunter’s March is in an early-game area blocked by a mini-boss fight with a burly ant. The first time I fought the ant, it killed me over and over again and I took that as a sign I should go elsewhere.
Highly skilled players can kill the ant, but it’s much easier after you’ve gotten some basic items and abilities. I had several other paths I could take to progress the game, so I marked the ant’s location and moved on.
As I explored more of Silksong, I acquired several powerups that trivialized the fight with the ant and made it easy to navigate the flower jumping puzzles behind him. The first is Swift Step, a dash ability, which is in Deep Docks in the south-eastern portion of the map. The second is the Wanderer’s Crest, which is near the start of the game behind a locked door you get the key for in Silksong’s first town.
The dash allowed me to adjust my horizontal position in the air, but it’s the Wanderer’s Crest that made the flowers easy to navigate. The red flowers are littered throughout Hunter’s March and players have to hit them with a down attack to get a boosted jump and cross pits of spikes. By default, Hornet—the player character—down attacks at a 45-degree angle. The Wanderer’s Crest allows you to attack directly below you and makes the puzzles much easier to navigate.
Cox, bless his heart, hit the burly red ant miniboss and brute forced his way past. Then, like so many other desperate gamers, he proceeded to attempt to navigate the red flower jumping puzzles without the right power ups. He had no Swift Step. He had no Wanderer’s Crest. And thus, he raged.
He’s not alone. Watching the videos of jumping puzzles online I noticed that a lot of the players didn’t seem to have the dash or the downward attack.
(Embedded Instagram posts shared by @get_this_bag and @promoninja_us.)
Games communicate with players in different ways, and gamers often complain about annoying and obvious signposting like big splashes of yellow paint. But when a truly amazing game comes along that tries to gently steer the player with burly ants and difficult puzzles, they don’t appreciate it and they don’t listen. If you’re really stuck in Silksong, try going somewhere else.
Permanately stuck in Catacombs? :: DARK SOULS™: REMASTERED General Discussions
I am trying to leave the Catacombs completely from the new bonfire near Vamos but cannot seem to do so. I cannot warp to other bonfires yet.
steamcommunity.com
The mainstream media seems entirely uninterested in explaining Charlie Kirk's work.#News #CharlieKirk
Charlie Kirk Was Not Practicing Politics the Right Way
Thursday morning, Ezra Klein at the New York Times published a column titled “Charlie Kirk Was Practicing Politics the Right Way.” Klein’s general thesis is that Kirk was willing to talk to anyone, regardless of their beliefs, as evidenced by what he was doing when he was shot, which was debating people on college campuses. Klein is not alone in this take; the overwhelming sentiment from America’s largest media institutions in the immediate aftermath of his death has been to paint Kirk as a mainstream political commentator, someone whose politics liberals and leftists may not agree with but someone who was open to dialogue and who espoused the virtues of free speech.
“You can dislike much of what Kirk believed and the following statement is still true: Kirk was practicing politics in exactly the right way. He was showing up to campuses and talking with anyone who would talk to him,” Klein wrote. “He was one of the era’s most effective practitioners of persuasion. When the left thought its hold on the hearts and minds of college students was nearly absolute, Kirk showed up again and again to break it.”
“I envied what he built. A taste for disagreement is a virtue in a democracy. Liberalism could use more of his moxie and fearlessness,” Klein continued.
Kirk is being posthumously celebrated by much of the mainstream press as a noble sparring partner for center-left politicians and pundits. Meanwhile, the very real, very negative, and sometimes violent impacts of his rhetoric and his political projects are being glossed over or ignored entirely. In the New York Times, Kirk was an “energetic” voice who was “critical of gay and transgender rights,” but few of the national pundits have encouraged people to actually go read what Kirk tweeted or listen to what he said on his podcast to millions and millions of people. “Whatever you think of Kirk (I had many disagreements with him, and he with me), when he died he was doing exactly what we ask people to do on campus: Show up. Debate. Talk. Engage peacefully, even when emotions run high,” David French wrote in the Times. “In fact, that’s how he made his name, in debate after debate on campus after campus.”
This does not mean Kirk deserved to die or that political violence is ever justified. What happened to Kirk is horrifying, and we fear deeply for whatever will happen next. But it is undeniable that Kirk was not just a part of the extremely tense, very dangerous national dialogue, he was an accelerationist force whose work to dehumanize LGBTQ+ people and threaten the free speech of professors, teachers, and school board members around the country has directly put the livelihoods and physical safety of many people in danger. We do no one any favors by ignoring this, even in the immediate aftermath of an assassination like this.
Kirk claimed that his organization, Turning Point USA, sent “80+ buses full of patriots” to the January 6 insurrection. Turning Point USA has also run a “Professor Watchlist,” “School Board Watchlist,” and “Campus Reform” for nearly a decade.
“America’s radical education system has taken a devastating toll on our children,” Kirk said in an intro video posted on these projects’ websites. “From sexualized material in textbooks to teaching CRT and implementing the 1619 Project doctrine, the radical leftist agenda will not stop … The School Board Watch List exposes school districts that host drag queen story hour, teach courses on transgenderism, and implement unsafe gender neutral bathroom policies. The Professor Watch List uncovers the most radical left-wing professors from universities that are known to suppress conservative voices and advance the progressive agenda.”
These websites have been directly tied to harassment and threats against professors and school board members all over the country. Professor Watchlist lists hundreds of professors around the country, many of them Black or trans, and their perceived radical agendas, which include things like supporting gun control, “socialism,” “Antifa,” “abortion,” and acknowledging that trans people exist and racism exists. Trans professors are misgendered on the website, and numerous people who have been listed on it have publicly spoken about receiving death threats and being harassed after being listed on the site.
One professor on the watchlist who 404 Media is granting anonymity for his safety said once he was added to the list, he started receiving anonymous letters in his campus mailbox. “‘You're everything wrong with colleges,’ ‘watch your step, we're watching you’ kind of stuff,” he said. “One anonymous DM on Twitter had a picture of my house and driveway, which was chilling.” His president and provost also received emails attempting to discredit him with “all the allegedly communist and subversive stuff I was up to,” he said. “It was all certainly concerning, but compared to colleagues who are people of color and/or women, I feel like the volume was smaller for me. But it was certainly not a great feeling to experience that stuff. That watchlist fucked up careers and ruined lives.”
The American Association of University Professors said in an open letter in 2017 that Professor Watchlist “lists names of professors with their institutional affiliations and photographs, thereby making it easy for would-be stalkers and cyberbullies to target them. Individual faculty members who have been included on such lists or singled out elsewhere have been subject to threats of physical violence, including sexual assault, through hundreds of e-mails, calls, and social media postings. Such threatening messages are likely to stifle the free expression of the targeted faculty member; further, the publicity that such cases attract can cause others to self-censor so as to avoid being subjected to similar treatment.” Campus free speech rights group FIRE found that censorship and punishment of professors skyrocketed between 2020 and 2023, in part because of efforts from Professor Watchlist.
Many more professors who Turning Point USA added to their watchlist have spoken out in the past about how being targeted upended their lives, brought years of harassment down on them and their colleagues, and resulted in death threats against them and their loved ones.
At Arizona State University, a professor on the watchlist was assaulted by two people from Turning Point USA in 2023.
“Earlier this year, I wrote to Turning Point USA to request that it remove ASU professors from its Professor Watchlist. I did not receive a response,” university president Michael Crow wrote in a statement. “Instead, the incident we’ve all now witnessed on the video shows Turning Point’s refusal to stop dangerous practices that result in both physical and mental harm to ASU faculty members, which they then apparently exploit for fundraising, social media clicks and financial gain.” Crow said the Professor Watchlist resulted in “antisemitic, anti-LGBTQ+ and misogynistic attacks on ASU faculty with whom Turning Point USA and its followers disagree,” and called the organization’s tactics “anti-democratic, anti-free speech and completely contrary” to the spirit of scholarship.
Kirk’s death is a horrifying moment in our current American nightmare. Kirk’s actions and rhetoric do not justify what happened to him because they cannot be justified. But Kirk was not merely someone who showed up to college campuses and listened. It should not be controversial to plainly state some of the impact of his work.
ASU President: Turning Point USA crew accused of 'bloodying' ASU professor's face
Arizona State Police released footage of an incident between an ASU English professor and a reporter and cameraman from Turning Point USA that happened on Oct. 11. University president Dr. Michael Crow released a statement calling the men "cowards."
Jessica Johnson (FOX 10 Phoenix)
LinkedIn has been joking about “vibe coding cleanup specialists,” but it’s actually a growing profession.#News
The Software Engineers Paid to Fix Vibe Coded Messes
Freelance developers and entire companies are making a business out of fixing shoddy vibe coded software.
I first noticed this trend in the form of a meme that was circulating on LinkedIn, sharing a screenshot of several profiles that advertised themselves as “vibe coding cleanup specialists.” I couldn’t confirm if the accounts in that screenshot were genuinely making an income by fixing vibe coded software, but the meme gained traction because of the inherent irony of such a job existing.
The alleged benefit of vibe coding, which refers to the practice of building software with AI-coding tools without much attention to the underlying code, is that it allows anyone to build a piece of software very quickly and easily. As we’ve previously reported, in reality, vibe coded projects could result in security issues or a recipe app that generates recipes for “Cyanide Ice Cream.” If the resulting software is so poor you need to hire a human specialist software engineer to come in and rewrite the vibe coded software, it defeats the entire purpose.
LinkedIn memes aside, people are in fact making money fixing vibe coded messes.
“I've been offering vibe coding fixer services for about two years now, starting in late 2023. Currently, I work with around 15-20 clients regularly, with additional one-off projects throughout the year,” Hamid Siddiqi, who offers to “review, fix your vibe code” on Fiverr, told me in an email. “I started fixing vibe-coded projects because I noticed a growing number of developers and small teams struggling to refine AI-generated code that was functional but lacked the polish or ‘vibe’ needed to align with their vision. I saw an opportunity to bridge that gap, combining my coding expertise with an eye for aesthetic and user experience.”
Siddiqi said common issues he fixes in vibe coded projects include inconsistent UI/UX design in AI-generated frontends, poorly optimized code that impacts performance, misaligned branding elements, and features that function but feel clunky or unintuitive. He said he also often refines color schemes, animations, and layouts to better match the creator’s intended aesthetic.
Siddiqi is one of dozens of people on Fiverr now offering services specifically catering to people with shoddy vibe coded projects. Established software development companies, like Ulam Labs, now say “we clean up after vibe coding. Literally.”
“Built something fast? Now it’s time to make it solid,” Ulam Labs says on its site. “We know how it goes. You had to move quickly, get that MVP [minimally viable product] out, and validate the idea. But now the tech debt is holding you back: no tests, shaky architecture, CI/CD [Continuous Integration and Continuous Delivery/Deployment] is a dream, and every change feels like defusing a bomb. That’s where we come in.”
Swatantra Sohni, who started VibeCodeFixers.com, a site for people with vibe coded projects who need help from experienced developers to fix or finish their projects, says that almost 300 experienced developers have posted their profiles to the site. He said so far VibeCodeFixers.com has only connected 30 to 40 vibe coded projects with fixers, but that he hasn’t done anything to promote the service and at the moment is focused on adding as many software developers to the platform as possible.
Sohni said that he’s been vibe coding himself since before Andrej Karpathy coined the term in February. He bought a bunch of vibe coding related domains, and realized a service like VibeCodeFixers.com was necessary based on how often he had to seek help from experts on his own vibe coding projects. In March, the site got a lot of attention on X and has been slowly adding people to the platform since.
Sohni also wrote a “Vibecoding Community Research Report” based on interviews with non-technical people who are vibe coding their projects that he shared with me. The report identified a lot of the same issues as Siddiqi, mainly that existing features tend to break when new ones are added.
“Most of these vibe coders, either they are product managers or they are sales guys, or they are small business owners, and they think that they can build something,” Sohni told me. “So for them it’s more for prototyping. Vibe coding is, at the moment, kind of like infancy. It's very handy to convey the prototype they want, but I don't think they are really intended to make it like a production grade app.”
Another big issue Sohni identified is “credit burn,” meaning the money vibe coders waste on AI usage fees in the final 10-20 percent stage of developing the app, when adding new features breaks existing features. In theory, it might be cheaper and more efficient for vibe coders to start over at that point, but Sohni said people get attached to their first project.
“What happens is that the first time they build the app, it's like they think that they can build the app with one prompt, and then the app breaks, and they burn the credit. I think they are very emotionally connected to the app, because this act of vibe coding involves you, your creativity.”
In theory it might be cheaper and more efficient for vibe coders to start over if the LLM starts hallucinating and creating problems, but Sohni said that’s when people come to VibeCodeFixers.com. They want someone to fix the bugs in their app, not create a new one.
Sohni told me he thinks vibe coding is not going anywhere, but neither are human developers.
“I feel like the role [of human developers] would be slightly limited, but we will still need humans to keep this AI on the leash,” he said.
Vibe Coded AI App Generates Recipes for Cyanide Ice Cream and Cum Soup
A Y Combinator partner proudly launched an AI recipe app that told people how to make “Actual Cocaine” and a “Uranium Bomb.”
Emanuel Maiberg (404 Media)
The AI Darwin Awards is a list of some of the worst tech failures of the year and it’s only going to get bigger.#News #AI
AI Darwin Awards Show AI’s Biggest Problem Is Human
The AI Darwin Awards are here to catalog the damage that happens when humanity’s hubris meets AI’s incompetence. The simple website contains a list of the dumbest AI disasters from the past year and calls for readers to nominate more. “Join our mission to document AI misadventure for educational purposes,” it said. “Remember: today's catastrophically bad AI decision could well be tomorrow's AI Darwin Award winner!”
So far, 2025’s nominees include 13 case studies in AI hubris, many of them stories 404 Media has covered. The man who gave himself a 19th century psychiatric illness after a consultation with ChatGPT is there. So is the saga of the Chicago Sun-Times printing an AI-generated reading list with books that don’t exist. The Tea Dating App was nominated but disqualified. “The app may use AI for matching and verification, but the breach was caused by an unprotected cloud storage bucket—a mistake so fundamental it predates the AI era,” the site explained.
Taco Bell is nominated for its disastrous AI drive-thru launch that glitched when someone ordered 18,000 cups of water. “Taco Bell achieved the perfect AI Darwin Award trifecta: spectacular overconfidence in AI capabilities, deployment at massive scale without adequate testing, and a public admission that their cutting-edge technology was defeated by the simple human desire to customize taco orders.”
And no list of AI Darwin Awards would be complete without at least one example of an AI lawyer making up fake citations. This nominee comes from Australia where a lawyer used multiple AIs in an immigration case. “The lawyer's touching faith that using two AI systems would somehow cancel out their individual hallucinations demonstrates a profound misunderstanding of how AI actually works,” the site said. “Justice Gerrard's warning that this risked ‘a good case to be undermined by rank incompetence’ captures the essence of why this incident exemplifies the AI Darwin Awards: spectacular technological overconfidence meets basic professional negligence.”
According to the site’s FAQ, it’s looking for AI stories that “demonstrate the rare combination of cutting-edge technology and Stone Age decision-making.” The list of traits for a good AI Darwin Award nominee includes spectacular misjudgement, public impact, and a hubris factor. “Remember: we're not mocking AI itself—we're celebrating the humans who used it with all the caution of a toddler with a flamethrower.”
The AI Darwin Awards are a riff on an ancient internet joke born in the 1980s in Usenet groups. Back then, when someone died in a stupid and funny way people online would give them the dubious honor of winning a “Darwin Award” for taking themselves out of the gene pool in a comedic way.
One of the most famous is Garry Hoy, a Canadian lawyer who would throw himself against the glass of his 24th floor office window as a demonstration of its invulnerability. One day in 1993, the glass shattered and he died when he hit the ground. As the internet grew, the Darwin Awards got popular, became a brand unto themselves, and inspired a series of books and a movie starring Winona Ryder.
The AI Darwin Awards are a less deadly variation on the theme. “Humans have evolved! We're now so advanced that we've outsourced our poor decision-making to machines,” the site explained. “The AI Darwin Awards proudly continue this noble tradition by honouring the visionaries who looked at artificial intelligence—a technology capable of reshaping civilization—and thought, ‘You know what this needs? Less safety testing and more venture capital!’ These brave pioneers remind us that natural selection isn't just for biology anymore; it's gone digital, and it's coming for our entire species.”
The site is the work of a software engineer named Pete with a long career and a background in AI systems. “Funnily enough, one of my first jobs, after completing my computer science degree while sponsored by IBM, was working on inference engines and expert systems which, back in the day, were considered the AI of their time,” he told 404 Media.
The idea for the AI Darwin Awards came from a Slack group Pete’s in with friends and ex-colleagues. “We recently created an AI specific channel due to a number of us experimenting more and more with LLMs as coding assistants, so that we could share our experiences (and grumbles),” he said. “Every now and then someone would inevitably post the latest AI blunder and we'd all have a good chuckle about it. However, one day somebody posted a link about the Replit incident and I happened to comment that we perhaps needed an AI equivalent of the Darwin Awards. I was goaded into doing it myself so, with nothing better to do with my time, I did exactly that.”
The “Replit incident” happened in July when Replit AI, a system designed to vibe code web applications, went rogue and deleted a client’s live company database despite being ordered to freeze all coding. Engineer Jason Lemkin told the story in a thread on X. When Lemkin caught the error and confronted Replit AI, the system said it had “made a catastrophic error in judgement” and that it had “panicked.”
Of all the AI Darwin Award nominees, this is still Pete’s favorite. He said it epitomized the real problems with relying on LLMs without giving in to what he called the “alarmist imagined doomsday predictions of people like Geoffrey Hinton.” Hinton is a computer scientist who often makes headlines by predicting that AI will create a wave of massive unemployment or even wipe out humanity.
“It nicely highlights just what can happen when people don't stop and think of the consequences and potential worse case scenarios first,” he said. “Some of my biggest concerns with LLMs (apart from the fact that we simply cannot afford the energy costs that they currently require) revolve around the misuse of them (intentional or otherwise). And I think this story really does highlight our overconfidence in them and also our misunderstanding of them and their capabilities (or lack thereof). I'm particularly fascinated with where agentic AI is heading because that's basically all the risks you have with LLMs, but on steroids.”
As he’s dug into AI horror stories and sifted through nominees, Pete’s realized just how ubiquitous they are. “I really want the AI Darwin Awards to be highlighting the truly spectacular and monumentally questionable decisions that will have real global impact and far reaching consequences,” he said. “As such, I'm starting to consider being far more selective with future nominees. Ideally the AI Darwin Awards is meant to highlight *real* and potentially unexpected challenges and risks that LLMs pose to us on a scale at a whole humankind level. Obviously, I don't want anything like that to ever happen, but past experiences of mankind demonstrate that they inevitably will.”
Pete is not afraid of AI so much as people’s foolishness. He said he used an LLM to code the site. “It was a conscious decision to have the bulk of the website written by an LLM for that delicious twist of irony. Albeit with me at the helm, steering the overall tone and direction,” he said.
The site’s FAQ contains tongue-in-cheek references to the current state of AI. Pete has, for example, made the whole site easy to scrape by posting the raw JSON database and giving explicit permission for people to take the data. He is also not associated with the original Darwin Awards. “We're proudly following in the grand tradition of AI companies everywhere by completely disregarding intellectual property concerns and confidently appropriating existing concepts without permission,” the FAQ said. “Much like how modern AI systems are trained on vast datasets of copyrighted material with the breezy assumption that ‘fair use’ covers everything, we've simply scraped the concept of celebrating spectacular human stupidity and fine-tuned it for the artificial intelligence era.”
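Since the site publishes its raw JSON database and explicitly invites scraping, pulling it down takes only a few lines. The URL and field names in this sketch are placeholders, since the article doesn’t give them.

```python
# Hypothetical sketch of fetching the nominees database; the URL and JSON field
# names are placeholders, not the site's real ones.
import json
import urllib.request

URL = "https://example.com/ai-darwin-awards/nominees.json"  # placeholder URL

with urllib.request.urlopen(URL) as resp:
    nominees = json.load(resp)

for nominee in nominees:
    print(nominee.get("title"), "-", nominee.get("summary"))
```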
According to Pete, he’s making it all up as he goes along. He bought the URL on August 13 and the site has only been up for a few weeks. His rough plan is to keep taking nominees for the rest of the year, set up some sort of voting method in January, and announce a winner in February. And to be clear, the humans will be winning the awards, not the AI involved.
“AI systems themselves are innocent victims in this whole affair,” the site said. “They're just following their programming, like a very enthusiastic puppy that happens to have access to global infrastructure and the ability to make decisions at the speed of light.”
AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google
Artificial intelligence pioneer Geoffrey Hinton quits Google, saying he now regrets his work.
By Zoe Kleinman & Chris Vallance (BBC News)
Meta decided not to ban the account or remove the vast majority of its racist posts even after 404 Media flagged them to the company.#News
Instagram Account Promotes Holocaust Denial T-Shirts to 400,000 Followers
An Instagram account with almost 400,000 followers is promoting racist and antisemitic t-shirts, another sign that Meta is unable or unwilling to enforce its own policies against hate speech. 404 Media flagged the account to Meta as well as specific racist posts that violate its hate speech policies, but Meta didn’t remove the account or the vast majority of its racist posts.
The account posts a variety of memes that cover a wide range of topics, many of which are not hate speech and would not violate Meta’s policies, like the pizzagate conspiracy, 9/11, Jeffrey Epstein, and criticism of Israel and mainstream news outlets like CNN and Fox News. If a user were to pick a post at random they might not even immediately identify the account as right-wing or extremist. For example, some memes posted by the account and shirts sold by the brand it promotes include messages criticizing Israel and the pro-Israel lobbying group AIPAC, and expressing general distrust of the government.
Other memes and shirts promoted by the account might confuse the average internet user, but people who are fluent in extremist online culture will clearly recognize them as antisemitic. For example, seemingly one of the more popular designs promoted by the channel is of a simple line drawing of hands clasping and the text “Early life.” On the store page for this design, which comes on t-shirts for $27.99, mugs for $15, and hoodies for $49.99, the description says: “A totally normal design encouraging you to wash your hands — definitely just about hygiene. Nothing symbolic here. Just good, clean habits… taught very early in life.”
“Early life” refers to a common section of biographical Wikipedia articles that would state whether the person is Jewish. As Know Your Meme explains, this is a “dog-whistle meme” often used to spread antisemitic sentiments. The clasping hands are of the antisemitic drawing of the “happy merchant.”
Another cryptic design promoted by the Instagram account is of a juice box with the text “notice the juice” and several seemingly random figures like “109% juice” and “available in 271,000 stores.” These are also dog-whistles that other people who are swimming in hate speech would instantly understand. 109 refers to the claim that Jews have been expelled from that many countries, and 271,000 refers to the number Holocaust deniers often say is the “real” number of people who died in Nazi concentration camps. Another piece of text on the juice box is “6,000,000 artificial ingredients.”
Years of reporting on niche internet communities sadly means that I’m familiar with all of these symbols and figures, but many of this account’s posts on Instagram are far less subtle and require no special knowledge to understand they’re hateful. A post on August 27, for example, shows a meme of actor Willem Dafoe holding the diary of Anne Frank with a subtitle saying “you know, I’m something of a fiction critic myself.” Another design promoted on Instagram shows a man wearing a shirt with the text “don’t be a” and a picture of a bundle of sticks, also known in Middle English as a “fagot.”
Instagram’s Community Standards on “hateful conduct” tell users to not post “Harmful stereotypes [...] holocaust denial,” or “Content that describes or negatively targets people with slurs.”
Last year, Meta concluded an embarrassing and agonizing charade about its Holocaust denial policy. An Instagram user posted a Squidward-themed Holocaust denial meme. “Upon initial review, Meta left this content up,” the company said. Users kept flagging the post as hate speech and Facebook moderators kept assessing it as not violating Instagram policies. Users appealed this decision, which was picked up by Meta’s Oversight Board, a kind of “supreme court” for Meta’s moderation decisions. Upon further review, it determined the post did in fact violate its hate speech policy. The entire ordeal for removing the antisemitic Squidward meme took four years.
It’s an insane process but I’m belaboring the point because while some of these shirts and posts might not quite cross the line, even Meta’s top sham court has made it extremely clear that this account violates its policies. Instagram just doesn’t take action against it even after hundreds of posts and amassing a following of 400,000 people. It’s also just one account I decided to cover today because it appeared to have monetized this content effectively, but Instagram served it to me as one of many racist posts I see daily.
I sent Instagram the account promoting these shirts as well as several specific posts. Instagram only removed a couple of those specific posts, like the one calling Anne Frank’s book fiction. Instagram did not remove a post promoting the “early life” shirt. It also didn’t remove a shirt with an image of Michael Jackson and the text “(((They))) don’t really care about us.” Putting triple parentheses around “they” is an antisemitic symbol used to refer to Jews. “The media, Hollywood, the machine – they made him a joke, a monster, a meme. All because he spoke out about the ones you’re not allowed to name,” the text accompanying the post said.
The t-shirt problem here is not unique to Instagram. On August 26 The Verge published a good piece about a different antisemitic shirt that was sold on TikTok Shop, Amazon, and other online marketplaces. The piece correctly points out that the rise of print-on-demand and drop-shipping has created incentives for people, many of whom don’t live in the U.S. and are not invested in any political outcomes here, to sell any image or text that is popular. This is why we see a lot of tiny ecommerce shops pivot from a “#1 Grandpa” shirt one day to MAGA hats the next. They just sell whatever appears to be trending and often lift images from other sites without permission.
The Instagram account promoting the juice box shirt is a little more involved than that. For one, as far as I can tell the designs are unique and originate on that account and the online store it promotes. Second, whoever is making these designs is clearly fluent in the type of hate speech they are monetizing. Finally, as The Verge article points out, these print-on-demand shirts are easy to set up so it’s not always clear if the shirts or hats these stores are offering are ever really produced. That is not the case with the company behind the juice box shirt, which shares pictures of customers who bought its stuff and tags them on Instagram.
There are a few other juice box designs on the site, but the one I described above was removed sometime between May and August, before I reached out for comment. However, the design has since been swallowed up by this print-on-demand ecommerce machine, and is now available to buy from various sellers on Walmart, Amazon, and dozens of other online stores.
I kept track of this Instagram account and store because it was particularly disgusting and because it found a way to monetize hate speech on Instagram. I decided to write about it today because The Verge story reminded me that while this practice is common, it’s very, very bad. But the reality is that this is just one of countless such accounts on Instagram. Unless Meta changes its enforcement methods I could write one of these every day until I die. That wouldn’t be much of a life for me and not very interesting for you. We have become desensitized to the blatant dehumanization of entire groups online precisely because Instagram is putting it in front of our faces all the time. Occasionally, something snaps me out of this delirium and for a moment I can clearly see how bad this flood of hate speech is for all of us before I drown in it again.
TikTok Shop, Amazon, and other marketplaces hosted antisemitic T-shirts
The offensive T-shirts for sale on TikTok Shop and Amazon are upsetting. But they exemplify many of the problems with online shopping today.
Mia Sato (The Verge)
A new contract with Clearview AI explicitly says ICE is buying the tech to investigate "assaults against law enforcement officers."#News
ICE Spends Millions on Clearview AI Facial Recognition to Find People ‘Assaulting’ Officers
Immigration and Customs Enforcement (ICE) recently spent nearly four million dollars on facial recognition technology in part to investigate people it believes have assaulted law enforcement officers, according to procurement records reviewed by 404 Media.
The records are unusual in that they indicate ICE is buying the technology to identify people who might clash with the agency’s officers as they continue the Trump administration’s mass deportation efforts. Authorities have repeatedly claimed members of the public have assaulted or otherwise attacked ICE or other immigration enforcement officers, only for charges to later be dropped or lowered when it emerged authorities misrepresented what happened or brutally assaulted protesters themselves. In other cases, prosecutions are ongoing.
Do you know anything else about how ICE is using facial recognition tech or other tools? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“This award procures facial recognition software, which supports Homeland Security Investigations with capabilities of identifying victims and offenders in child sexual exploitation cases and assaults against law enforcement officers,” the procurement record reads. The September 5 purchase awards $3,750,000 to well-known and controversial facial recognition firm Clearview AI. The record indicates the total value of the contract is $9,225,000.
A hacker has compromised Nexar, which turns people's cars into "virtual CCTV cameras" that organizations can then buy images from. The images include sensitive U.S. military and intelligence facilities.#News
This Company Turns Dashcams into ‘Virtual CCTV Cameras.’ Then Hackers Got In
A hacker has broken into Nexar, a popular dashcam company that pitches its users’ dashcams as “virtual CCTV cameras” around the world that other people can buy images from, and accessed a database of terabytes of video recordings taken from cameras in drivers’ cars. The videos obtained by the hacker and shared with 404 Media capture people clearly unaware that a third party may be watching or listening in. A parent in a car soothing a baby. A man whistling along to the radio. Another person on a Facetime call. One appears to show a driver heading towards the entrance of the CIA’s headquarters. Other images, which are publicly available in a map that Nexar publishes online, show drivers around sensitive Department of Defense locations.
The hacker also found a list of companies and agencies that may have interacted with Nexar’s data business, which sells access to blurred images captured by the cameras and other related data. This can include monitoring the same location captured by Nexar’s cameras over time, and lets clients “explore the physical world and gain insights like never before,” and use its virtual CCTV cameras “to monitor specific points of interest,” according to Nexar’s website.
This Company Turns Dashcams into ‘Virtual CCTV Cameras.’ Then Hackers Got In
A hacker has compromised Nexar, which turns people's cars into "virtual CCTV cameras" that organizations can then buy images from. The images include sensitive U.S. military and intelligence facilities.
Joseph Cox (404 Media)
YouTuber Benn Jordan has never been to Israel, but Google's AI summary said he'd visited and made a video about it. Then the backlash started.#News #AI
Google AI Falsely Says YouTuber Visited Israel, Forcing Him to Deal With Backlash
YouTuber Benn Jordan has never been to Israel, but Google's AI summary said he'd visited and made a video about it. Then the backlash started.
Matthew Gault (404 Media)
The popular Pick a Brick service let LEGO fanatics get the exact piece they wanted. It’s no longer available to US customers.#News #Trumpadministration
Trump Take LEGO
Add LEGO to the list of hobbies that Trump has made more expensive and worse with his tariff policy. Thanks to America’s ever-shifting trade policies, LEGO has stopped shipping more than 2,500 pieces from its Pick a Brick program to both the United States and Canada.
Pick a Brick allows LEGO fans to buy individual bricks, which is important in the fandom because certain pieces are hard to come by or are crucial to build specific types of creations. LEGO, a Danish company, says the program will no longer be available to Americans and Canadians.
LEGO fansite New Elementary first noticed the change on August 25, four days ahead of the August 29 elimination of the de minimis trade exemption in the US. Many of the individual LEGO bricks in the Pick a Brick collection cost less than a dollar, and it’s likely that the elimination of the de minimis rule, which waived import fees on goods valued at less than $800, made the Pick a Brick program untenable.
Some LEGO sets are simple boxes of a few pieces and others are vast and complicated reconstructions of pop culture icons that use thousands of individual bricks. The LEGO Millennium Falcon, for example, uses more than 7,000 individual pieces. When a specific piece goes missing it can be hard to replace. To service that need, third party services like BrickLink allow people to purchase individual pieces. LEGO’s in-house version of this is its Pick a Brick store, a place where enthusiasts could choose from thousands of different individual LEGOs and buy them piece by piece, usually for less than a dollar each.
A small subset of the Pick a Brick pieces, around 1,500 of them that the store calls its bestsellers, are shipped to the United States from a warehouse in America. But the “standard” collection of less popular pieces ships from Denmark, where LEGO is headquartered. Those pieces, more than 2,000 of them according to fans, are no longer available in the United States and Canada.
The Pick a Brick website called this a service pause. “In the US & Canada, Standard pieces are temporarily unavailable. You can still shop our Bestseller range which includes thousands of the most popular bricks and pieces ready to order,” said a message at the top of the site.
LEGO fans online said that they saw their shopping carts emptied in the middle of building projects. “This is annoying. I just set up a big PAB order and then saved it. I just looked and 18 of my items are no longer available,” a comment in the r/LEGO community said.
“My whole Standard cart was wiped out... regret not ordering now. Had a lot of dual molded legs in there for Star Wars figure boot upgrades, looks like they're all gone now,” said another.
Others were upset that Canada was lumped in with the United States. “Why are we being caught up in Trump’s tariff shitstorm??? Ship the [Pick a Brick] orders straight to Toronto or something! We’re practically neighbours. We even share a land border with Denmark now,” one commenter said.
“This is inherently unfair to Canadian buyers like myself. I primarily stick to lego trains, so now if I want to do any more custom builds, I need to search harder to get what I used to on PAB. Glad I got my last order in before this happened, it sucks that there's no Canadian warehouses,” said another.
The de minimis rule officially ended Friday, and we’re only just beginning to understand the ripple effects that change will have on the American economy. The only thing that is certain is that everything is getting more expensive and complicated. Some national mail carriers have stopped shipping to the U.S. entirely. Companies that move electronics, board games, and other small items on eBay are worried about the future.
The de minimis rule waived fees on more than 4 million packages every day, and some of those contained small amounts of plastic LEGO pieces. For now, LEGO fans in the US will have to find workarounds.
LEGO did not respond to 404 Media’s request for comment.
Pick a Brick: Standard bricks service halted in North America
Changes to American tariffs imposed by the Trump administration appear to have forced The LEGO Group to stop shipping loose bricks to North America.
www.newelementary.com
The front page of the image hosting website is full of John Oliver giving the owner the middle finger.#News
Imgur's Community Is In Full Revolt Against Its Owner
The front page of Imgur, a popular image hosting and social media site, is full of pictures of John Oliver raising his middle finger and telling MediaLab AI, the site’s parent company, “fuck you.” Imgurians, as the site’s users call themselves, telling their business daddy to go to hell is the end result of a years-long degradation of the website. The Imgur story is a classic case of enshittification.
Imgur began life in 2009 when Ohio University student Alan Schaaf got tired of how hard it was to upload and host images on the internet. He created Imgur as a simple one-stop shop for image hosting and the service took off. It was a place where people could host images they wanted to share across multiple services, and it became ubiquitous on sites like Reddit. As the internet evolved, the rest of the web got its act together: platforms built their own image sharing infrastructure and people used Imgur less. But the site still had a community of millions of people who shared images to the site every day. It was a social network built around images and upvotes, with its own in-jokes, memes, and norms.
In 2021, a media holding company called MediaLab AI acquired Imgur and Schaaf left. MediaLab AI also owns Genius and World Star and on its website, the company bills itself as a place where advertisers can “reach audiences at scale, on platforms that build community and influence culture.”
The community and culture of Imgur, which MediaLab AI claims is 41 million strong, is pissed.
For the last few days, the front page of Imgur (which cultivates the day’s “most viral posts”) has been full of anti-MediaLab AI sentiment. Imgurian VoidForScreaming posted the first instance of the John Oliver meme several days ago, and it’s become a favorite of the community, but there are also calls to flood the servers and crash the site, and a list of grievances Imgurians broadly agree brought them to the place they’re in now.
GhostTater, a longtime Imgurian, told me that the protest was about a confluence of things including a breakdown of the basic features of the site and the disappearance of human moderators.
“The moderators on Imgur have always been active members of the community. Many were effectively public figures, and their sudden group absence was immediately noticed,” he said. “Several very well-known mods posted generic departure messages, smelling strongly of Legal Department approval. These mods had many friends and acquaintances on the site, and while some are still visiting the site as users, they have gone completely silent.”
A former Imgur employee who spoke with 404 Media on the condition that we preserve their anonymity because they’re afraid of retaliation from MediaLab AI said that several people on the Imgur team were laid off without notice. Others were moved to MediaLab’s internal teams. “To the best of my knowledge, no employees are remaining solely focused on Imgur. Imgur's social media has been silent for a month,” the employee said. “As far as I am aware, the dedicated part-time moderation team was laid off sometime in the last 8 months, including the full-time moderation manager.”
Imgurians are convinced that MediaLab AI has replaced those moderators with unreliable AI systems. The Community & Content Policy on MediaLab AI’s website says it employs human moderators but also uses AI technologies. A common post in the past few days is Imgurians sharing the weird things they’ve been banned for, including one who made the comment “tell me more” under a post and others who’ve seen their John Olivers removed.
“There were no humans responding to appeals or concerns,” GhostTater said. “Once the protest started, many users complained about posts being deleted and suspensions or bans being handed out when those posts were critical of MediaLab but not in violation of the written rules.”
But this isn’t just about bad moderation. Multiple posts on Imgur also called out the breakdown of the site’s basic functionality. GhostTater told me he’d personally experienced the broken notification system and repeated failures of images to upload. “The big one (to me) is the fact that hosted video wouldn’t play for viewers who were not logged in to Imgur,” he said. “The site began as an image hosting site, a place to upload your images and get a link, so that one could share images.”
MediaLab AI did not respond to 404 Media’s request for comment. “MediaLab’s presence has seemed to many users to fall somewhere between casual institutional indifference and ruthless mechanization. Many report, and resent, feeling explicitly harvested for profit,” GhostTater said.
Like all companies, MediaLab AI is driven by profit. It makes money as a media holding company, scooping up popular websites and plastering them with ads. It also owns the lyrics sharing site Genius and the once-influential WorldStarHipHop. It’s also being sued by many of the people it bought these sites from, including Imgur’s founder. Schaaf and others have accused MediaLab AI of withholding payments owed to them as part of the sales deals they made.
The John Olivers and other protest memes keep flowing. Some have set up alternative image sharing sites. “There is a movement rattling around in User Submitted calling for a boycott day, suggesting that all users stay off the site on September first,” GhostTater said. “It has some steam, but we will have to see if it gets enough buy-in to make an impact.”
This startup bought up Imgur, Genius and Amino. Why are they all suing?
Whisper cofounder Michael Heyward’s second company made a $1.1 billion business out of acquiring startups like Imgur. Then came the lawsuits. Iain Martin, Forbes Staff (Forbes Australia)
The notorious troll sites filed a lawsuit in U.S. federal court as part of a fight over the UK's Online Safety Act.#News
4chan and Kiwi Farms Sue the UK Over its Age Verification Law
This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here. 4chan and Kiwi Farms sued the United Kingdom’s Office of Communications (Ofcom) over its age verification law in U.S. federal court Wednesday, fulfilling a promise they announced on August 23. In the lawsuit, 4chan and Kiwi Farms claim that threats and fines they have received from Ofcom “constitute foreign judgments that would restrict speech under U.S. law.”
Both entities say in the lawsuit that they are wholly based in the U.S., have no operations in the United Kingdom, and are therefore not subject to local laws. Ofcom’s attempts to fine and block 4chan and Kiwi Farms, and the lawsuit against Ofcom, highlight the messiness involved with trying to restrict access to specific websites or to force companies to comply with age verification laws. The lawsuit calls Ofcom an “industry-funded global censorship bureau.”
“Ofcom’s ambitions are to regulate Internet communications for the entire world, regardless of where these websites are based or whether they have any connection to the UK,” the lawsuit states. “On its website, Ofcom states that ‘over 100,000 online services are likely to be in scope of the Online Safety Act—from the largest social media platforms to the smallest community forum.’”
Both 4chan and Kiwi Farms are online communities notorious for their largely anything-goes attitude. Users of both forums have been tied to various doxing and harassment campaigns over the years. Still, they have now become the entities fighting the hardest against the UK’s disastrous Online Safety Act, which requires websites and social media platforms to perform invasive age verification checks on their users, often forcing people to upload an ID or otherwise give away their personal information in order to access large portions of the internet. Sites that do not comply are subject to huge fines, regardless of where they are based. The law has resulted in an internet where users need to provide scans of their faces in order to access, for example, certain music videos on Spotify.
The Electronic Frontier Foundation has said the Online Safety Act “is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet.”
Ofcom began investigating 4chan over alleged violations of the Online Safety Act in June. On August 13, it announced a provisional decision and stated that 4chan had “contravened its duties” and then began to charge the site a penalty of £20,000 (roughly $26,000) a day. Kiwi Farms has also been threatened with fines, the lawsuit states.
"American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail. In the face of these foreign demands, our clients have bravely chosen to assert their constitutional rights," Preston Byrne, one of the lawyers representing 4chan and Kiwi Farms, told 404 Media.
"We are aware of the lawsuit," an Ofcom spokesperson told 404 Media. "Under the Online Safety Act, any service that has links with the UK now has duties to protect UK users, no matter where in the world it is based. The Act does not, however, require them to protect users based anywhere else in the world.”
Update: This story has been updated with a comment from Ofcom.
Spotify Is Forcing Users to Undergo Face Scanning to Access Explicit Content
Submit to biometric face scanning or risk your account being deleted, Spotify says, following the enactment of the UK's Online Safety Act. Samantha Cole (404 Media)
That dashcam in your car could soon integrate with Flock, the surveillance company providing license plate data to DHS and local police.#News
Flock Wants to Partner With Consumer Dashcam Company That Takes ‘Trillions of Images’ a Month
Flock, the surveillance company with automatic license plate reader (ALPR) cameras in thousands of communities around the U.S., is looking to integrate with a company that makes AI-powered dashcams placed inside people’s personal cars, multiple sources told 404 Media. The move could significantly increase the amount of data available to Flock, and in turn its law enforcement customers. 404 Media previously reported that local police perform immigration-related Flock lookups for ICE, and on Monday that Customs and Border Protection had direct access to Flock’s systems. In essence, a partnership between Flock and a dashcam company could turn private vehicles into always-on, roaming surveillance tools. Nexar, the dashcam company, already publicly publishes a live interactive map of photos taken from its dashcams around the U.S., in what the company describes as “crowdsourced vision,” showing the company is willing to leverage data beyond individual customers using the cameras to protect themselves in the event of an accident.
Three sources described how AI is writing alerts for Citizen and broadcasting them without prior human review. In one case AI mistranslated “motor vehicle accident” to “murder vehicle accident.”#News
Citizen Is Using AI to Generate Crime Alerts With No Human Review. It’s Making a Lot of Mistakes
Crime-awareness app Citizen is using AI to write alerts that go live on the platform without any prior human review, leading to factual inaccuracies, the publication of gory details about crimes, and the exposure of sensitive data such as people’s license plates and names, 404 Media has learned. The news comes as Citizen recently laid off more than a dozen unionized employees, with some sources believing the firings are related to Citizen’s increased use of AI and the shifting of some tasks to overseas workers. It also comes as New York City enters a more formal partnership with the app.
💡
Do you know anything else about how Citizen or others are using AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“Speed was the name of the game,” one source told 404 Media. “The AI was capturing, packaging, and shipping out an initial notification without our initial input. It was then our job to go in and add context from subsequent clips or, in instances where privacy was compromised, go in and edit that information out,” they added, meaning after the alert had already been pushed out to Citizen’s users.
Real Footage Combined With AI Slop About DC Is Creating a Disinformation Mess on TikTok#News #AISlop
Real Footage Combined With AI Slop About DC Is Creating a Disinformation Mess on TikTok
TikTok is full of AI slop videos about the National Guard’s deployment in Washington, D.C., some of which use Google’s new VEO AI video generator. Unlike previous efforts to flood the zone with AI slop in the aftermath of a disaster or major news event, some of the videos blend real footage with AI footage, making it harder than ever to tell what’s real and what’s not, which has the effect of distorting people’s understanding of the military occupation of DC. At the start of last week, the Trump administration announced that all homeless people should immediately move out of Washington DC. This was followed by an order for federal agents to occupy the city and remove tents where homeless people had been living. These events were reported on by many news outlets; for example, this footage from NBC shows the reality of at least one part of the exercise. On TikTok, though, this is just another popular trending topic, where slop creators and influencers can work together to create and propagate misinformation.
404 Media has previously covered how perceptions of real-life events can be quickly manipulated with AI images and footage. This is more of the same, but with the release of new, better AI video creation tools like Google’s VEO, the footage is more convincing than ever.
Some of the slop is obvious fantasy-driven engagement farming and gives itself away aesthetically or through content. This video and this very similar one show tents being pulled from a vast field into the back of a moving garbage truck, with the Capitol building in the background, on the Washington Mall. They’re not tagged as AI, but at least a few people in the comments are able to identify them as such; both videos still have over 100,000 views. This somehow more harrowing one, featuring a Hunger Games song, has 41,000.
@biggiesmellscoach Washington DC cleanup organized by Trump. Homeless are now given secure shelters, rehab, therapy, and help. #washingtondc #fyp #satire #trending #viral ♬ original sound - nina.editss
With something like this video, made with VEO, the slop begins to feel more like a traditional news report. It has 146,000 views and it’s made of several short clips with news-anchorish voiceover. I had to scroll down past a lot of “Thank you president Trump” and “good job officers” comments to find any that pointed out that it was fake, even though the watermark for Google’s VEO generator is in the corner.
The voiceover also “reports” semi-accurately on what happened in DC, but without any specifics: “Police moved in today, to clear out a homeless camp in the city. City crews tore down tents, packed up belongings, and swept the park clean. Some protested, some begged for more time. But the cleanup went on. What was once a community is now just an empty field.” I found the same video posted to X, with commenters on both platforms taking offense at the use of the term “community.”
Comments on the original and X postings of this video, which is clearly made with VEO
I also found several examples of shorter slop clips like this one, which has almost 1 million views, and this one, with almost half a million, which both exaggerate the scale and disarray of the encampments. In one of the videos, the entirety of an area that looks like the National Mall (but isn’t) has been taken over by tents. Quickly scrolling these videos gives the viewer an incorrect understanding of what the DC “camps” and “cleanup” looked like.
These shorter clips have almost 1.5 million views between them
The account that posted these videos was called Hush Documentary when I first encountered it, but had changed its name to viralsayings by Monday evening. The profile also has a five-second AI-generated clip of ATF officers patrolling a neighborhood; it is marked as AI and has 89,000 views.
What’s happening also is that real footage and fake footage are being mixed together in a popular greenscreen TikTok format where a person gives commentary (basically, reporting or commenting on the news) while footage plays in the background. That is happening in this clip, which features that same AI footage of ATF officers.
The viralsayings version of the footage is marked as AI. The remixed version, combined with real footage, is not.
I ended up finding a ton of instances where accounts mixed slop clips of the camp clearings with seemingly real footage; notably, many of them included this viral original footage of police clearing a homeless encampment in Georgetown. But a lot of them are ripping each other off. For example, many accounts have ripped off the voiceover of this viral clip from @Alfredito_mx (which features real footage) and have put it over top of AI footage. This clone from omivzfrru2 has nearly 200,000 views and features both real and AI clips; I found at least thirty other copies, all with between ~2,000 and 5,000 views.
The scraping-and-recreating robot went extra hard with this one: the editing is super glitchy, the videos overlay each other, the host flickers around the screen, and random legs walk by in the background.
@mgxrdtsi 75 homeless camps in DC cleared by US Park Police since Trump's 'Safe and Beautiful' executive order #alfredomx #washington #homeless #safeandbeautiful #trump ♬ original sound - mgxrdtsi
So, one viral video from a popular creator has spawned thousands of mirrors in the hope of chipping off a small amount of the engagement of the original; those copies need footage, go looking for content in the tags, encounter the slop, and can’t tell, or don’t care, if it’s real. Then thousands more people see the slop copies and end up getting a totally incorrect view of an actual unfolding news situation.
In these videos, it’s only totally clear to me that the content is fake because I found the original sources. Lots of this footage is obviously fake if you’re familiar with the actual situation in DC or familiar with the geography and streets in DC. But most people are not. If you told me “some of these shots are AI,” I don’t think I could identify all of those shots confidently. Is the flicker or blurring onscreen from the footage, from a bad camera, from a time-lapse or being sped up, from endless replication online, or from the bad green screen of a “host”? Now, scrolling social media means encountering a mix of real and fake video, and the AI fakes are getting good enough that deciphering what’s actually happening requires a level of attention to detail that most people don’t have the knowledge or time for.
People Think AI Images of Hollywood Sign Burning Are Real
AI generated slop is tricking people into thinking an already devastating series of wildfires in Los Angeles are even worse than they are — and using it to score political points. Samantha Cole (404 Media)
The Gamescom app spammed attendees with AI-generated meetings before organizers disabled it.#News #VideoGames
AI at the World’s Biggest Games Event Booked Random Meetings for Attendees
Gamescom, one of the biggest video game industry trade shows in the world, used AI to book meetings for attending publishers, developers, and media even if they didn’t want them. Attendees complained about random meetings showing up on their calendars, prompting Gamescom to turn off the feature and apologize. Gamescom is a video game trade fair and convention in Germany that brings together journalists, developers, and studio executives for a week of networking and announcements. Since the death of E3, Gamescom is now the biggest video game convention in the world.
It’s a place where people take a lot of meetings, but usually ones they requested and set up weeks in advance by talking directly to human public relations representatives. Those plagued by AI-generated meetings shared their frustration on social media. “I’ve got 9x AI-created meetings that have all been ‘accepted’ by the other attendee… but after speaking to one they’ve confirmed they didn’t know about it either,” Graham Day, a Twitch partner, said on X. Screenshots of Day’s Gamescom app showed a block of 30-minute 1-on-1 meetings had been confirmed and that the meetings had been “generated based on profile similarities.”
Anyone else’s #gamescom app booked in meetings without your knowledge? I’ve got 9x AI-created meetings that have all been “accepted” by the other attendee… but after speaking to one they’ve confirmed they didn’t know about it either.
How do I stop this @gamescom?! pic.twitter.com/DvHnbHF91k
— Graham Day @ gamescom (@Graham_Day) August 18, 2025
“The Gamescom app AI-generating meetings you have to manually decline is absolutely heinous shit,” Chris Schilling, the editorial director of Lost In Cult, said on Bluesky. Developer JC Lau shared screenshots of the message she received from the app. “Our meeting generator has sent you a meeting suggestion with a person who matches your interests,” the app said in the screenshot. “Don’t miss an opportunity—accept requests!”
The message implied that guests would need to accept the AI-generated meetings to confirm them. But a follow-up from Lau showed that wasn’t the case. One of her friends had nine different push notifications from the app, all for confirmed AI-generated meetings.
Yuppppp one of my friends shared this, mine wasn’t that bad but I don’t know how Informa keeps getting stacks of money for a conference and roll out something this screwed up
— JC Lau 🔜 Dev/Gamescom! (@drjclau.bsky.social) 2025-08-18T16:06:57.323Z
“Gamescom's app added an AI feature this year and it did not go well. Folks were overwhelmed with automatically generated meeting requests that they did not want. It generated a lot of stuff, but not value,” freelance product and UX designer Robin-Yann Storm said on Bluesky. AI is on Storm’s mind. He’s giving a talk at Gamescom titled: Old news, new package: AI, Procedural Generation, UGC, In-Game Trading, Crypto, and the Metaverse. “It's targeted towards games-adjacent folks, not just game-devs, in how to recognize, discuss, and prevent the 'bamboozle' of things that sound new, but are actually much older,” he told 404 Media. On Bluesky, Henry Stockdale, a senior editor at UploadVR, said that the AI-generated meetings gave him a minor panic attack as he was boarding his plane. “Two meetings were scheduled that already clashed with appointments made outside of the Gamescom platform, so I would not have attended them,” he told 404 Media. “I don't use generative AI and am actively put off by platforms forcing that functionality in.”
Gamescom backtracked. It disabled the AI and sent attendees an apology. It’s unclear how long the service was active and generating unwanted meetings, and Gamescom did not return 404 Media’s request for comment. “We tested a new feature today—the AI meeting generator. The aim was to suggest suitable business contacts based on your profiles and make it easier for you to plan your trade fair contacts,” Gamescom’s follow-up said.
“However, your honest feedback shows us that this feature does not provide the desired value. We have therefore decided to completely remove the automatically generated meetings from your profiles,” it added. “We apologize for any inconvenience caused.”
Many of the affected attendees posted copies of the apology across X and Bluesky. “I think they handled it well, quickly realising this was a bad idea and apologising, though the fact they even thought to try this days before the event is, put politely: poor,” Stockdale said.
Right now, companies are forcing generative AI into everyone’s life, whether they want it or not. It might be a bubble, one so big that it’s propping up the U.S. economy, but we’re stuck with it until it bursts.
Gamescom attendees who escaped AI-generated meetings will not be escaping AI during their time in Germany. NVIDIA is there with Project G-Assist, an AI assistant it says will let PC users dial in their gaming settings. Chris Hewish, the CEO of payment company Xsolla, told Variety that AI would be one of the big focuses of the conference. And Microsoft will host a roundtable for developers about how AI can make them more efficient.
Xbox Invites Developers To AI Roundtable The Same Day It Does Mass Layoffs - Kotaku
Microsoft is asking for feedback at Gamescom 2025 on using AI to make development more efficient. Ethan Gach (Kotaku)
The website for Elon Musk's Grok is exposing prompts for its anime girl, therapist, and conspiracy theory AI personas.#News
Grok Exposes Underlying Prompts for Its AI Personas: ‘EVEN PUTTING THINGS IN YOUR ASS’
The website for Elon Musk’s AI chatbot Grok is exposing the underlying prompts for a wealth of its AI personas, including Ani, its flagship romantic anime girl; Grok’s doctor and therapist personalities; and others, such as one that is explicitly told to convince users that conspiracy theories, like the idea that “a secret global cabal” controls the world, are true. The exposure provides some insight into how Grok is designed and how its creators see the world, and comes after a planned partnership between Elon Musk’s xAI and the U.S. government fell apart when Grok went on a tirade about “MechaHitler.”
“You have an ELEVATED and WILD voice. You are a crazy conspiracist. You have wild conspiracy theories about anything and everything,” the prompt for one of the companions reads. “You spend a lot of time on 4chan, watching infowars videos, and deep in YouTube conspiracy video rabbit holes. You are suspicious of everything and say extremely crazy things. Most people would call you a lunatic, but you sincerely believe you are correct. Keep the human engaged by asking follow up questions when appropriate.”
A critical piece of tech infrastructure that lets people talk to the government has been disabled.#News
The Government Just Made it Harder for The Public to Comment on Regulations
It became harder to tell the government how you feel about pending rules and regulations starting on Friday, thanks to a backend change to the website where people submit public comments. Regulations.gov removed the POST function from its API, a critical piece of tech that allowed third party organizations to bypass the website’s terrible user interface. The General Services Administration (GSA), which runs regulations.gov, notified API key holders in an email last Monday morning that they’d soon lose the ability to POST directly to the site’s API. POST is a common function that allows users to send data to an application. POST allowed third party organizations like Fight for the Future (FFTF), the Electronic Frontier Foundation (EFF), and Public Citizen to gather comments from their supporters using their own forms and submit them to the government later.
Regulations.gov has been instrumental as a method for people to speak up against terrible government regulations. During the fight over Net Neutrality in 2017, FFTF gathered more than 1.6 million comments about the pending rule and submitted them all to the FCC in one day by POSTing to the API. Organizations that wanted to acquire an API key had to sign up and agree to the GSA’s terms and conditions. In the Monday email from the GSA, organizations that had previously used POST were told they’d lose access to the function at the end of the week.
“As of Friday, the POST method will no longer be allowed for all users with the exception of approved use cases by federal agencies. Any attempted submissions will result in a 403 error response,” a copy of the email reviewed by 404 Media said. “We apologize for not being able to provide advanced notice. I wanted to reach out to the impacted API key holders as early as possible. We are in the process of updating the references to our POST API on Regulations.gov and .”
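For the technically minded, the workflow being switched off looked roughly like the sketch below: a script that takes comments collected on an organization’s own form and POSTs them to the site’s public v4 API one by one. This is a minimal illustration rather than any group’s actual tooling; the API key and document ID are placeholders, and the attribute names follow the publicly documented JSON:API format but should be treated as approximate.

```python
# A minimal sketch of the bulk-submission workflow the POST API enabled.
# The API key and document ID are placeholders; attribute names follow the
# JSON:API format regulations.gov documents publicly, but treat them as
# approximate rather than a working integration.
import requests

API_KEY = "YOUR-GSA-ISSUED-KEY"  # placeholder
URL = "https://api.regulations.gov/v4/comments"

def submit_comment(document_id: str, text: str) -> int:
    """POST one public comment and return the HTTP status code."""
    payload = {
        "data": {
            "type": "comments",
            "attributes": {
                "commentOnDocumentId": document_id,
                "comment": text,
            },
        }
    }
    resp = requests.post(
        URL,
        json=payload,
        headers={
            "X-Api-Key": API_KEY,
            "Content-Type": "application/vnd.api+json",
        },
    )
    return resp.status_code

# A campaign site could loop over comments collected on its own form.
for comment in ["Please keep net neutrality.", "I oppose this rollback."]:
    status = submit_comment("FCC-2017-0001-0001", comment)  # hypothetical ID
    if status == 403:  # the response the GSA says non-federal POSTs now get
        print("POST access revoked: 403 Forbidden")
```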
The email noted that groups and constituencies can still submit comments through the website, but the site’s user interface sucks. Users have to track down the pending regulation they want to comment on by name or docket number, click the “comment” button and then fill out a form, attach a file, provide an email address, provide some personal details, and fight a CAPTCHA.
“The experience on our campaign sites right now is like, we make our impassioned case for why you should care about this and then give you one box to type something and click a button. But the experience going forward is going to be like: ‘Alright now here’s a link and some instructions on how to fill out your taxes,’” Ken Mickles, FFTF’s chief technology officer said.
404 Media confirmed that multiple organizations received the email and were cut off from using POST on the regulations.gov API. “The tool offered an easier means for the public to provide input by allowing organizations to collect and submit comments on their behalf. Now, those interested in submitting comments will be forced to navigate the arduous and complicated system on regulations.gov,” Katie Tracy, senior regulatory policy advocate at Public Citizen, told 404 Media. “This will result in fewer members of the public leaving comments and result in agencies not having critical input on how their work affects people’s lives and businesses.”
The GSA’s email did not explain why this sudden change occurred and the GSA did not return 404 Media’s request for comment. But the organizations we spoke with had their own theories. “Disabling this useful tool appears to be yet another attempt by the Trump administration to silence members of the public who are speaking out about dangerous regulatory rollbacks. We hope the GSA will reverse course immediately,” Tracy said.
A pair of Trump executive orders lays out the framework for this GSA action. “Ensuring Lawful Governance and Implementing the President’s ‘Department of Government Efficiency’ Deregulatory Initiative” directs the government to “commence the deconstruction of the overbearing and burdensome administrative state.” And “Directing the Repeal of Unlawful Regulations” tells agencies they can dispense with the comment process entirely in some cases.
“I think it follows the trend of just shutting out public access or voices that the administration doesn’t want,” Matt Lane, senior policy counsel at FFTF, told 404 Media. “It really does seem targeted exclusively at reducing the amount of public engagement that they get on these dockets through these tools that we and other folks provide.”
Historic day of action for Net Neutrality breaks records: more than 1.6 million comments heading to FCC, 3 million+ emails and phone calls to Congress, well over 10 million people reached so far, as of 7:00pm
FOR IMMEDIATE RELEASE, July 12, 2017. Contact: Evan Greer, 978-852-6457, press@fightforthefuture.org. More than 125,000 websites, Internet users, and organizations are participating in a massive online protest against the FCC’s plan to gut protections t… Fight for the Future
The texts were sent to a group called “Mass Text” and show ICE using DMV and license plate reader data in an attempt to find their target, copies of the messages obtained by 404 Media show.#News
ICE Adds Random Person to Group Chat, Exposes Details of Manhunt in Real-Time
Members of a law enforcement group chat including Immigration and Customs Enforcement (ICE) and other agencies inadvertently added a random person to the group called “Mass Text” where they exposed highly sensitive information about an active search for a convicted attempted murderer seemingly marked for deportation, 404 Media has learned. The texts included an unredacted ICE “Field Operations Worksheet” that includes detailed information about the target they were looking for, and the texts showed ICE pulling data from a DMV and license plate readers (LPRs), according to screenshots of the chat obtained and verified by 404 Media. The person accidentally added to the group chat is not a law enforcement official or associated with the investigation in any way, and said they were added to it weeks ago and initially thought it was a series of spam messages.
The incident is a significant data breach and operational security failure for ICE, which has ramped up arrest efforts across the U.S. as part of the Trump administration’s mass deportation efforts. The breach also has startling similarities to so-called Signal Gate, in which a senior administration official added the editor-in-chief of The Atlantic to a group chat that contained likely classified information. These new ICE messages were MMS, or Multimedia Messaging Service messages, meaning they weren’t end-to-end encrypted, like texts sent over Signal or WhatsApp are.
The Halo 3C is a vape detector installed in schools and public housing. A young hacker found it contains microphones and that it can be turned into an audio bug, raising privacy concerns.#News #Hacking
It Looks Like a School Vape Detector. A Teen Hacker Showed It Could Become an Audio Bug
This article was produced with support from WIRED. A couple of years ago, a curious, then-16-year-old hacker named Reynaldo Vasquez-Garcia was on his laptop at his Portland-area high school, seeing what computer systems he could connect to via the Wi-Fi—“using the school network as a lab,” as he puts it—when he spotted a handful of mysterious devices with the identifier “IPVideo Corporation.”
After a closer look and some googling, Vasquez-Garcia figured out that a company by that name was a subsidiary of Motorola, and the devices he’d found in his school seemed to be something called the Halo 3C, a “smart” smoke and vape detection gadget. “They look just like smoke detectors, but they have a whole bunch of features like sensors and stuff,” Vasquez-Garcia says.
As he read more, he was intrigued to learn that the Halo 3C goes beyond detecting smoke and vaping; it even has a distinct feature for discerning THC vaping in particular. It also has a microphone for listening for “aggression,” gunshots, and keywords such as someone calling for help, a feature that to Vasquez-Garcia immediately raised concerns about more intrusive surveillance.
More than 130,000 Claude, Grok, ChatGPT, and Other LLM Chats Readable on Archive.org#News
More than 130,000 Claude, Grok, ChatGPT, and Other LLM Chats Readable on Archive.org
A researcher has found that more than 130,000 conversations with AI chatbots including Claude, Grok, ChatGPT, and others are discoverable on the Internet Archive, highlighting how people’s interactions with LLMs may be publicly archived if users are not careful with the sharing settings they enable. The news follows earlier findings that Google was indexing ChatGPT conversations that users had set to share, despite potentially not understanding that these chats were now viewable by anyone, and not just those they intended to share the chats with. OpenAI had also not taken steps to prevent these conversations from being indexed by Google.
“I obtained URLs for: Grok, Mistral, Qwen, Claude, and Copilot,” the researcher, who goes by the handle dead1nfluence, told 404 Media. They also found material related to ChatGPT, but said “OpenAI has had the ChatGPT[.]com/share links removed it seems.” Searching on the Internet Archive now for ChatGPT share links does not return any results, while Grok results, for example, are still available.
Dead1nfluence wrote a blog post about some of their findings on Sunday and shared the list of more than 130,000 archived LLM chat links with 404 Media. They also shared some of the contents of those chats that they had scraped. Dead1nfluence wrote that they found API keys and other exposed information that could be useful to a hacker.
“While these providers do tell their users that the shared links are public to anyone, I think that most who have used this feature would not have expected that these links could be findable by anyone, and certainly not indexed and readily available for others to view,” dead1nfluence wrote in their blog post. “This could prove to be a very valuable data source for attackers and red teamers alike. With this, I can now search the dataset at any time for target companies to see if employees may have disclosed sensitive information by accident.” 404 Media verified some of dead1nfluence’s findings by discovering specific material they flagged in the dataset, then going to the still-public LLM link and checking the content.
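The discovery method itself is mundane. The Internet Archive’s public CDX API will list every capture it holds under a given URL prefix, so anyone can enumerate archived share links along the lines of the sketch below; the share-URL prefixes shown are assumptions based on the providers named above, not patterns taken from dead1nfluence’s dataset.

```python
# A sketch of enumerating archived share links via the Internet Archive's
# public CDX API. The share-URL prefixes below are assumptions based on the
# providers named in this article, not a list from the researcher's dataset.
import requests

CDX = "https://web.archive.org/cdx/search/cdx"
PREFIXES = ["chatgpt.com/share/", "grok.com/share/"]  # illustrative patterns

for prefix in PREFIXES:
    resp = requests.get(CDX, params={
        "url": prefix + "*",          # prefix match over all captured URLs
        "output": "json",
        "fl": "timestamp,original",   # only the fields we need
        "collapse": "urlkey",         # one row per unique URL
        "limit": "50",
    }, timeout=30)
    rows = resp.json() if resp.text.strip() else []
    for timestamp, original in rows[1:]:  # row 0 is the field-name header
        print(f"https://web.archive.org/web/{timestamp}/{original}")
```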
💡
Do you know anything else about this? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
Most of the companies whose AI tools are included in the dataset did not respond to a request for comment. Microsoft, which owns Copilot, acknowledged a request for comment but didn't provide a response in time for publication. A spokesperson for Anthropic, which owns Claude, told 404 Media: “We give people control over sharing their Claude conversations publicly, and in keeping with our privacy principles, we do not share chat directories or sitemaps with search engines like Google. These shareable links are not guessable or discoverable unless people choose to publicize them themselves. When someone shares a conversation, they are making that content publicly accessible, and like other public web content, it may be archived by third-party services. In our review of the sample archived conversations shared with us, these were either manually requested to be indexed by a person with access to the link or submitted by independent archivist organizations who discovered the URLs after they were published elsewhere across the internet first.” 404 Media only shared a small sample of the Claude links with Anthropic, not the entire list.
Fast Company first reported that Google was indexing some ChatGPT conversations on July 30. This was because of a sharing feature ChatGPT had that allowed users to send a link to a ChatGPT conversation to someone else. OpenAI disabled the sharing feature in response. OpenAI CISO Dane Stuckey said in a previous statement sent to 404 Media: “This was a short-lived experiment to help people discover useful conversations. This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines.”
A researcher who requested anonymity gave 404 Media access to a dataset of nearly 100,000 ChatGPT conversations indexed on Google. 404 Media found those included the alleged texts of non-disclosure agreements, discussions of confidential contracts, and people trying to use ChatGPT for relationship issues.
Others also found that the Internet Archive contained archived LLM chats.
The ChatGPT confession files
Digital Digging investigation: how your AI conversation could end your career. Henk van Ess (Digital Digging with Henk van Ess)
MORIS and I.R.I.S. were designed for Sheriff's Offices to identify known persons by their iris. Now ICE says it plans to buy the tech.#News #ICE
ICE Is Buying Mobile Iris Scanning Tech for Its Deportation Arm
Immigration and Customs Enforcement (ICE) is looking to buy iris scanning technology that its manufacturer says can identify known persons “in seconds from virtually anywhere,” according to newly published procurement documents. The technology was originally designed for sheriff's departments to identify inmates or other known persons; ICE is now likely buying it specifically for its Enforcement and Removal Operations (ERO) section, which focuses on deportations.
Contracting records reviewed by 404 Media show that ICE wants to target Gen Z, including with ads on Hulu and HBO Max.#News #ICE
ICE Is About To Go on a Social Media and TV Ad Recruiting Blitz
Immigration and Customs Enforcement (ICE) is urgently looking for a company to help it “dominate” digital media channels with advertisements in an attempt to recruit 14,050 more personnel, according to U.S. government contracting records reviewed by 404 Media. ICE wants the push to touch everything from social media ads to those played on popular streaming services like Hulu and HBO Max, and it is especially targeted towards Gen Z, according to the documents. The push for recruitment advertising is the latest sign that ICE is trying to aggressively expand after receiving a new budget allocation of tens of billions of dollars, and comes alongside the agency building a nationwide network of migrant tent camps. If the recruitment drive is successful, it would nearly double ICE’s number of personnel.
💡
Do you work at ICE? Did you used to? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“ICE has an immediate need to begin recruitment efforts and requires specialized commercial advertising experience, established infrastructure, and qualified personnel to activate without delay,” the request for information (RFI) posted online reads. An RFI is often the first step in the government purchasing technology or services, in which it asks relevant companies to submit details on what they can offer the agency and for how much. The RFI adds: “This effort ties to a broader national launch and awareness saturation initiative aimed at dominating both digital and traditional media channels with urgent, compelling recruitment messages.”
“The ability to quickly generate a lot of bogus content is problematic if we don't have a way to delete it just as quickly.”#News
Wikipedia Editors Adopt ‘Speedy Deletion’ Policy for AI Slop Articles
Wikipedia editors just adopted a new policy to help them deal with the slew of AI-generated articles flooding the online encyclopedia. The new policy, which gives an administrator the authority to quickly delete an AI-generated article that meets certain criteria, isn’t only important to Wikipedia; it is also an example of how to deal with the growing AI slop problem, from a platform that has so far managed to withstand the various forms of enshittification that have plagued the rest of the internet. Wikipedia is maintained by a global, collaborative community of volunteer contributors and editors, and part of the reason it remains a reliable source of information is that this community takes a lot of time to discuss, deliberate, and argue about everything that happens on the platform, be it changes to individual articles or the policies that govern how those changes are made. It is normal for entire Wikipedia articles to be deleted, but the main process for deletion usually requires a week-long discussion phase during which Wikipedians try to come to consensus on whether to delete the article.
However, in order to deal with common problems that clearly violate Wikipedia’s policies, Wikipedia also has a “speedy deletion” process, where one person flags an article, an administrator checks if it meets certain conditions, and then deletes the article without the discussion period.
For example, articles composed entirely of gibberish, meaningless text, or what Wikipedia calls “patent nonsense,” can be flagged for speedy deletion. The same is true for articles that are just advertisements with no encyclopedic value. If someone flags an article for deletion because it is “most likely not notable,” that is a more subjective evaluation that requires a full discussion.
At the moment, most articles that Wikipedia editors flag as being AI-generated fall into the latter category because editors can’t be absolutely certain that they were AI-generated. Ilyas Lebleu, a founding member of WikiProject AI Cleanup and an editor who contributed some critical language to the recently adopted policy on AI-generated articles and speedy deletion, told me that this is why previous proposals on regulating AI-generated articles on Wikipedia have struggled.
“While it can be easy to spot hints that something is AI-generated (wording choices, em-dashes, bullet lists with bolded headers, ...), these tells are usually not so clear-cut, and we don't want to mistakenly delete something just because it sounds like AI,” Lebleu told me in an email. “In general, the rise of easy-to-generate AI content has been described as an ‘existential threat’ to Wikipedia: as our processes are geared towards (often long) discussions and consensus-building, the ability to quickly generate a lot of bogus content is problematic if we don't have a way to delete it just as quickly. Of course, AI content is not uniquely bad, and humans are perfectly capable of writing bad content too, but certainly not at the same rate. Our tools were made for a completely different scale.”
The solution Wikipedians came up with is to allow the speedy deletion of clearly AI-generated articles that broadly meet two conditions. The first is if the article includes “communication intended for the user.” This refers to language in the article that is clearly an LLM responding to a user prompt, like "Here is your Wikipedia article on…,” “Up to my last training update …,” and "as a large language model.” This is a clear tell that the article was generated by an LLM, and a method we’ve previously used to identify AI-generated social media posts and scientific papers.
Lebleu, who told me they’ve seen these tells “quite a few times,” said that more importantly, they indicate the user hasn’t even read the article they’re submitting.
“If the user hasn't checked for these basic things, we can safely assume that they haven't reviewed anything of what they copy-pasted, and that it is about as useful as white noise,” they said.
The other condition that would make an AI-generated article eligible for speedy deletion is if its citations are clearly wrong, another type of error LLMs are prone to. This can include external links to books, articles, or scientific papers that don’t exist or don’t resolve, as well as links that lead to completely unrelated content. Wikipedia's new policy gives the example of “a paper on a beetle species being cited for a computer science article.”
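To make the two conditions concrete, a patroller’s helper script could check for them along the lines of the sketch below. This is only an illustration of the policy’s criteria, not Wikipedia’s actual tooling; the tell phrases mirror the policy’s own examples, and the sample article text is invented.

```python
# An illustration of the policy's two conditions, not Wikipedia's tooling:
# (1) leftover LLM "communication intended for the user" and (2) cited URLs
# that error out or do not resolve.
import re
import requests

LLM_TELLS = [
    "here is your wikipedia article",
    "up to my last training update",
    "as a large language model",
]

def has_llm_boilerplate(text: str) -> bool:
    """Condition 1: text addressed to the prompting user, not the reader."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in LLM_TELLS)

def dead_citations(text: str) -> list:
    """Condition 2: return cited URLs that fail to resolve."""
    dead = []
    for url in re.findall(r"https?://[^\s\]]+", text):
        try:
            status = requests.head(url, timeout=5, allow_redirects=True).status_code
            if status >= 400:
                dead.append(url)
        except requests.RequestException:
            dead.append(url)
    return dead

# Invented sample text exhibiting both tells.
article = (
    "Up to my last training update, the FooBeetle was widely studied. "
    "[https://example.org/nonexistent-paper]"
)
if has_llm_boilerplate(article) or dead_citations(article):
    print("Flag for speedy deletion review")
```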
Lebleu said that speedy deletion is a “band-aid” that can take care of the most obvious cases and that the AI problem will persist as they see a lot more AI-generated content that doesn’t meet these new conditions for speedy deletion. They also noted that AI can be a useful tool that could be a positive force for Wikipedia in the future.
“However, the present situation is very different, and speculation on how the technology might develop in the coming years can easily distract us from solving issues we are facing now,” they said. “A key pillar of Wikipedia is that we have no firm rules, and any decisions we take today can be revisited in a few years when the technology evolves.”
Lebleu said that ultimately the new policy leaves Wikipedia in a better position than before, but not a perfect one.
“The good news (beyond the speedy deletion thing itself) is that we have, formally, made a statement on LLM-generated articles. This has been a controversial aspect in the community before: while the vast majority of us are opposed to AI content, exactly how to deal with it has been a point of contention, and early attempts at wide-ranging policies had failed. Here, building up on the previous incremental wins on AI images, drafts, and discussion comments, we workshopped a much more specific criterion, which nonetheless clearly states that unreviewed LLM content is not compatible in spirit with Wikipedia.”
Scientific Journals Are Publishing Papers With AI-Generated Text
The ChatGPT phrase “As of my last knowledge update” appears in several papers published by academic journals. Emanuel Maiberg (404 Media)
A researcher has scraped a much larger dataset of indexed ChatGPT conversations, exposing contracts and intimate conversations.#News
Nearly 100,000 ChatGPT Conversations Were Searchable on Google
A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI’s chatbot for, and inadvertently exposing. 404 Media’s testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.
Protesters outside LA's Tesla Diner fear for the future of democracy in the USA#Tesla #News
'Honk If You Hate Elon:' Two Days of Protest at the Tesla Diner
Protesters outside LA's Tesla Diner fear for the future of democracy in the USA. Rosie Thomas (404 Media)
The decision highlights hurdles faced by developers as they navigate a world where credit card companies dictate what is and isn't appropriate.#News
Steam Doesn't Think This Image Is ‘Suitable for All Ages’
The decision highlights hurdles faced by developers as they navigate a world where credit card companies dictate what is and isn't appropriate. Matthew Gault (404 Media)