

People failing to identify a video of adorable bunnies as AI slop has sparked worries that many more people could fall for online scams. #AISlop #TikTok


AI Bunnies on Trampoline Causing Crisis of Confidence on TikTok


A generation that thought it was immune to being fooled by AI has been tricked by this video of bunnies jumping on a trampoline:

@rachelthecatlovers Just checked the home security cam and… I think we’ve got guest performers out back! @ring #bunny #ringdoorbell #ring #bunnies #trampoline ♬ Bounce When She Walk - Ohboyprince

The video currently has 183 million views on TikTok and is, at first glance, extremely adorable. The caption says, “Just checked the home security cam and… I think we’ve got guest performers out back! @ring”

People were excited by this. The bunnies seem to be having a nice time. Greg posted on X, “Never knew how much I needed to see bunnies jumping on a trampoline.”

Unfortunately, the bunnies are not real.

The video is AI generated. This becomes clear when, between the fifth and sixth seconds of the video, the back bunny vanishes.



The split second where the top left bunny vanishes

People want to believe, and the fact that the video is AI generated is causing a widespread crisis of confidence among people who assumed AI slop would only ever fool their parents. We are, as a culture, intensely attuned to the idea that animals might do cute things at night when we can’t see them, and there have been several real viral security camera videos lately of animals trepidatiously checking out trampolines.
This particular video was difficult to identify as AI in part because security camera footage is famously the blurriest type of footage. We are used to surveillance footage being dark and grainy, which hides many of the standard tells people look for when trying to determine whether a video is AI generated. The background of the image is also static; newer AI video generators are getting pretty good at rendering the foreground subject of a video, but backgrounds often remain surreal, and a fixed nighttime backdrop sidesteps that problem. Posing as security footage also disguises the things AI is often bad at: accurate movement, correct blur and lighting, and fine details. Tagging @ring was also pretty smart by the uploader, because it gives the video a plausible provenance.

People are responding totally normally, embodying a very relatable arc: the confidence of youth thinking “that will never happen to me,” followed by the crushing realization that eventually we all become old and susceptible to scams.

This guy sings that the video of the bunnies “might manufacture the way you made me feel - how do I know that the sky’s really sunny?”

@olivesongs11
7/29/25 - day 576 of writing a song every day
♬ original sound - olivesongs

While @oliviadaytonn says, “Now I feel like I’m gonna be one of those old people that get scammed.”

@oliviadaytonn I wanted them to be real so badly #bunnies #trampoline ♬ original sound - olivia dayton

Another TikToker says the bunnies were “The first AI video I believed was real - I am doomed when I’m old.”

@catenstuff #duet with @rachelthecatlovers #bunny #AAALASPARATUCURRO #bunnyjumpingontrampoline ♬ Bounce When She Walk - Ohboyprince

And @sydney_benjamin offers a public apology to her best friend for sending her the video. “Guys, I fell for AI… I’m quite ashamed, I think of myself as like an educated person.” She says that she felt good when she busted a previous AI video trend for her friends (Grandma Does Interviews On Street).

@sydney_benjamin
This one was hard to admit
♬ original sound - Sydney Benjamin

This video breaks down the animal-on-trampoline trend and explains how to spot a fake animal-on-trampoline video.

@showtoolsai How to spot AI videos - animals on trampolines #bunnies #dog #bear #bunny #ai ♬ original sound - showtools

Of course, because the bunny video went viral, there are now copycats. This video, published on YouTube Shorts one day after the first, by a different account, is also AI generated.



Copycat AI-generated bunny trampoline video on YouTube Shorts

This is a theme with a long history of being explored in song; for a more authentic trampolining-bunny musical experience, there is this video, uploaded a comfortably pre-AI “9 years ago.”

The uploader, @rachelthecatlovers, has only four other videos. The account posted its first video a year ago, then went quiet, then posted a second one this week, a pattern that is somewhat unusual for AI slop. Most AI slop accounts post multiple times a day, and most are newly created. @rachelthecatlovers has one other AI bunny video (the door flap vanishes mid-clip) and a bird cam video. It also has a video of grapes being rehydrated with a needle, tagged #bunny.


@rachelthecatlovers' previous AI bunny video

People are freaked out at having been fooled by this video; they had clearly been confident that they could usually spot videos that have been generated. But maybe that’s just the toupee fallacy: you only notice the bad ones. Trampolining bunnies have broken that facade.




Scientists have discovered chemosynthetic animals, which don’t rely on the Sun to live, nearly six miles under the ocean surface—deeper than any found to date. #TheAbstract #science



Submit to biometric face scanning or risk your account being deleted, Spotify says, following the enactment of the UK's Online Safety Act.




We talked to people living in the building whose views are being blocked by Tesla's massive four-story screen.




The massive Tea breach; how the UK's age verification law is impacting access to information; and LeBron James' AI-related cease-and-desist.




The Plaintiff claims Tea harmed her and ‘thousands of other similarly situated persons in the massive and preventable cyberattack.’ #News


The Sig Sauer P320 has a reputation for firing without pulling the trigger. The manufacturer says that's impossible, but the firearms community is showing the truth is more complicated.



“If visibility of r/IsraelCrimes is being restricted under the Online Safety Act, it’s only because the state fears accountability,” moderators say. #News


404 Media first contacted Tea about the security issue on Saturday. The company disabled direct messages on Monday after our report. #News


"This is more representative of the developer environment that our future employees will work in." #Meta #AI #wired


Meta Is Going to Let Job Candidates Use AI During Coding Tests


This article was produced with support from WIRED.

Meta told employees that it is going to allow some coding job candidates to use an AI assistant during the interview process, according to internal Meta communications seen by 404 Media. The company has also asked existing employees to volunteer for a “mock AI-enabled interview,” the messages say.

It’s the latest indication that Silicon Valley giants are pushing software engineers to use AI in their work, and it signals a broader move toward hiring employees who can vibe code.

“AI-Enabled Interviews—Call for Mock Candidates,” a post from earlier this month on an internal Meta message board reads. “Meta is developing a new type of coding interview in which candidates have access to an AI assistant. This is more representative of the developer environment that our future employees will work in, and also makes LLM-based cheating less effective.”

“We need mock candidates,” the post continues. “If you would like to experience a mock AI-enabled interview, please sign up in this sheet. The questions are still in development; data from you will help shape the future of interviewing at Meta.”

Meta CEO Mark Zuckerberg has made clear at numerous all-hands and in public podcast interviews that he is not just pushing the company’s software engineers towards using AI in their work, but that he foresees human beings managing “AI coding agents” that will write code for the company.

“I think this year, probably in 2025, we at Meta as well as the other companies that are basically working on this, are going to have an AI that can effectively be a midlevel engineer that you have at your company that can write code,” Zuckerberg told Joe Rogan in January. “Over time we’ll get to a point where a lot of the code in our apps and including the AI that we generate is actually going to be built by AI engineers instead of people engineers […] in the future people are going to be so much more creative and they’re going to be freed up to do kind of crazy things.”

In April, Zuckerberg expanded on this slightly on a podcast with Dwarkesh Patel, where he said that “sometime in the next 12 to 18 months, we’ll reach the point where most of the code that’s going towards [AI] efforts is written by AI.”

While it’s true that many tech companies have pushed software engineers to use AI in their work, they have been slower to allow new applicants to use AI during the interview process. In fact, Anthropic, which makes the AI tool Claude, has specifically told job applicants that they cannot use AI during the interview process. To circumvent that type of ban, some AI tools promise to allow applicants to secretly use AI during coding interviews.

The topic, in general, has been a controversial one in Silicon Valley. Established software engineers worry that the next batch of coders will be more AI “prompters” and “vibe coders” than software engineers, and that they may not know how to troubleshoot AI-written code when something goes wrong.

“We're obviously focused on using AI to help engineers with their day-to-day work, so it should be no surprise that we're testing how to provide these tools to applicants during interviews,” a Meta spokesperson told 404 Media.




The more than one million messages obtained by 404 Media are as recent as last week, discuss incredibly sensitive topics, and make it trivial to unmask some anonymous Tea users. #News


“Without these safeguards, Mr. Barber eventually developed full-blown PTSD, which he is currently still being treated for,” the former mod's lawyer said.




This Company Wants to Bring End-to-End Encrypted Messages to Bluesky’s AT Protocol #News






An error message appears saying "The following are not allowed: no zionist, no zionists" when users try to add the phrase to their bios, but any number of other phrases about political and religious preferences are allowed. #grindr


The games were mentioned in a 2024 report and are now part of a new lawsuit in which an 11-year-old girl was allegedly groomed and sexually assaulted after meeting a stranger on Roblox. #News


LeBron James' Lawyers Send Cease-and-Desist to AI Company Making Pregnant Videos of Him

Viral Instagram accounts making LeBron ‘brainrot’ videos have also been banned. #AISlop




Google’s AI Overview, which is easy to fool into stating nonsense as fact, is stopping people from finding and supporting small businesses and credible sources. #News


The wiping commands probably wouldn't have worked, but a hacker who says they wanted to expose Amazon’s AI “security theater” was able to add code to Amazon’s popular ‘Q’ AI assistant for VS Code, which Amazon then pushed out to users.




Welcome to the era of ‘gaslight-driven development.’ Soundslice added a feature the chatbot insisted existed after engineers kept finding screenshots from the LLM in its error logs. #News


ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It


In what might be a first, a programmer added a feature to a piece of software because ChatGPT hallucinated it and customers kept trying to use it. The developers of Soundslice, a site that lets people digitize and edit sheet music, added the functionality because the LLM kept telling people it existed. Rather than fight the LLM, Soundslice indulged the hallucination.

Adrian Holovaty, one of Soundslice’s developers, noticed something strange in the site's error logs a few months ago. Users kept uploading ASCII tablature—a basic system for notating music for guitar—despite the fact that Soundslice wasn’t set up to process it and had never advertised that it could. The error logs included pictures of what users had uploaded, and many were screenshots of ChatGPT conversations in which the LLM had churned out ASCII tabs and told users to send them to Soundslice.
“It was around 5-10 images daily, for a period of a month or two. Definitely enough where I was like, ‘What the heck is going on here?’” Holovaty told 404 Media. Rather than fight the LLM, Soundslice decided to add the feature ChatGPT had hallucinated. Holovaty said it only took his team a few hours to write up the code, which was a major factor in adding the feature.

“The main reason we did this was to prevent disappointment,” he said. “I highly doubt many people are going to sign up for Soundslice purely to use our ASCII tab importer […] we were motivated by the, frankly, galling reality that ChatGPT was setting Soundslice users up for failure. I mean, from our perspective, here were the options:

“1. Ignore it, and endure the psychological pain of knowing people were getting frustrated by our product for reasons out of our control.

“2. Put annoying banners on our site saying: ‘On the off chance that you're using ChatGPT and it told you about a Soundslice ASCII tab feature, that doesn't exist.’ That's disproportional and lame.

“3. Just spend a few hours and develop the feature.”

There’s also no way to tell ChatGPT the feature doesn’t exist. In an ideal world, OpenAI would have a formal procedure for removing content from its model, similar to the ability to request the removal of a site from Google’s index. “Obviously with an LLM it's much harder to do this technically, but I'm sure they can figure it out, given the absurdly high salaries their researchers are earning,” Holovaty said.

He added that the situation made him realize how powerful ChatGPT has become as an influencer of consumer behavior. “It's making product recommendations—for existent and nonexistent features alike—to massive audiences, with zero transparency into why it made those particular recommendations. And zero recourse.”

This may be the first time that developers have added a feature to a piece of software because ChatGPT hallucinated it, but it won’t be the last. In a personal blog post, developer Niki Tonsky dubbed this phenomenon “gaslight-driven development” and shared a recent experience similar to Holovaty’s.

One of Tonsky’s projects is Instant, a database for frontends. An update method for the app used a text document called “update,” but LLMs that interacted with Instant kept calling the file “create.” Tonsky told 404 Media that, rather than fight the LLMs, his team just added the file under the name the systems wanted. “In general I agree `create` is more obvious, it’s just weird that we arrived at this through LLM,” he said.

He told 404 Media that programmers will probably need to account for the “tastes” of LLMs in the future. “You kinda already have to. It’s not programming for AI, but AI as a tool changes how we do programming,” he said.

Holovaty doesn’t hate AI—Soundslice uses machine learning to do its magic—but he is mixed on LLMs. He compared his experience with ChatGPT to dealing with an overzealous sales team selling a feature that doesn’t exist. He also doesn’t trust LLMs to write code; he experimented with them but found they caused more problems than they solved.

“I don't trust it for my production Soundslice code,” he said. “Plus: writing code is fun! Why would I choose to deny myself fun? To appease the capitalism gods? No thanks.”




Spotify is publishing AI-generated tracks of dead artists; a company is selling hacked data to debt collectors; and the Astronomer CEO episode shows the surveillance dystopia we live in. #Podcast


The Tesla Diner has two gigantic screens, a robot that serves popcorn, and owners hope it will be free from people who don't like Tesla.




An internal memo obtained by 404 Media also shows the military ordered a review hold on "questionable content" at Stars and Stripes, the military's 'editorially independent' newspaper.




From ICE's facial recognition app to its Palantir contract, we've translated a spread of our ICE articles into Spanish and made them freely available.




Internal ICE emails obtained by 404 Media indicate that the CBP system, normally used to photograph people entering or leaving the U.S., is now being used by the agency through a tool called Mobile Fortify. #Spanish


Internal Slack chats and company discussion forums show that the surveillance giant is actively collaborating with ICE to locate people with deportation orders. #Spanish


Flock’s license plate reader cameras are installed in more than 5,000 communities in the U.S., and local police are using the nationwide system to run searches for ICE. #Spanish


Positive or negative? Those are the options analysts have when the Giant Oak Search Technology tool digs up content posted on social media and other sources for ICE to review. #Spanish


Hacked data obtained by 404 Media reveals that deportation flights to El Salvador carried dozens of additional people not officially recorded. #Spanish


Internal DHS documents reveal its collaboration with Fivecast, a company that offers “detection of risky terms and phrases found online.” #Spanish


The database allows filtering by hundreds of different categories, including immigration status, “unique physical characteristics” (scars, marks, tattoos), “criminal affiliation,” license plate reader data, and more. #Spanish


404 Media obtained the list of sites and services from which the contractor ShadowDragon pulls data. Its tool lets government analysts mine the information to find links between people. #Spanish


"They could fix this problem. One of their talented software engineers could stop this fraudulent practice in its tracks, if they had the will to do so." #News


The NIH wrote that it has recently “observed instances of Principal Investigators submitting large numbers of applications, some of which may have been generated with AI tools.” #AI #NIH


The NIH Is Capping Research Proposals Because It's Overwhelmed by AI Submissions


The National Institutes of Health claims it’s being strained by an onslaught of AI-generated research applications and is capping the number of proposals researchers can submit in a year.

In a new policy announcement on July 17, titled “Supporting Fairness and Originality in NIH Research Applications,” the NIH wrote that it has recently “observed instances of Principal Investigators submitting large numbers of applications, some of which may have been generated with AI tools,” and that this influx of submissions “may unfairly strain NIH’s application review process.”

“The percentage of applications from Principal Investigators submitting an average of more than six applications per year is relatively low; however, there is evidence that the use of AI tools has enabled Principal Investigators to submit more than 40 distinct applications in a single application submission round,” the NIH policy announcement says. “NIH will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants. If the detection of AI is identified post award, NIH may refer the matter to the Office of Research Integrity to determine whether there is research misconduct while simultaneously taking enforcement actions including but not limited to disallowing costs, withholding future awards, wholly or in part suspending the grant, and possible termination.”

Starting on September 25, NIH will only accept six “new, renewal, resubmission, or revision applications” from individual principal investigators or program directors in a calendar year.

Earlier this year, 404 Media investigated the use of AI in published scientific papers by searching Google Scholar for the phrase “as of my last knowledge update” and found more than 100 results—indicating that at least some of the papers relied on ChatGPT, which updates its knowledge base periodically. And in February, a journal published a paper with several clearly AI-generated images, including one of a rat with a giant penis. In 2023, Nature reported that academic journals retracted 10,000 “sham papers,” and the Wiley-owned Hindawi journals retracted over 8,000 fraudulent paper-mill articles; Wiley discontinued the 19 journals overseen by Hindawi. AI-generated submissions affect non-research publications, too: the science fiction and fantasy magazine Clarkesworld stopped accepting new submissions in 2023 because editors were overwhelmed by AI-generated stories.

According to an analysis published in the Journal of the American Medical Association, from February 28 to April 8 the Trump administration terminated $1.81 billion in NIH grants, in subjects including aging, cancer, child health, diabetes, mental health, and neurological disorders, NBC reported.

Just before the submission limit announcement, on July 14, Nature reported that the NIH would “soon disinvite dozens of scientists who were about to take positions on advisory councils that make final decisions on grant applications for the agency,” and that staff members “have been instructed to nominate replacements who are aligned with the priorities of the administration of US President Donald Trump—and have been warned that political appointees might still override their suggestions and hand-pick alternative reviewers.”

The NIH Office of Science Policy did not immediately respond to a request for comment.




In tests involving the Prisoner's Dilemma, researchers found that Google’s Gemini is “strategically ruthless,” while OpenAI is collaborative to a “catastrophic” degree.
