Who could have possibly predicted this, besides everyone? #Meta


RIP Metaverse, an $80 Billion Dumpster Fire Nobody Wanted


A few things on the end of Horizon Worlds, the metaverse that Mark Zuckerberg believed in so much that he renamed his company:

1) It’s very sad that many of the people who worked on it have been unceremoniously laid off because their leaders appear to have no idea what they’re doing
2) lol
3) lmao, even

Who could have possibly predicted this?

When Zuckerberg announced Horizon Worlds not really all that long ago at a batshit livestream in October 2021, I wrote an article called “Zuckerberg Announces Fantasy World Where Facebook Is Not a Horrible Company.” During that livestream Zuckerberg said, “I believe technology can make our lives better. The future will be built by those willing to stand up and say ‘this is the future we want.’” The future Zuckerberg wanted, at that time, was not a future anyone else wanted. But he was bold enough to systematically light roughly $80 billion on fire, not because he was willing to stand up and paint a vision of the future, but because Facebook was mired in various horrendous scandals and because he needed to rebrand his company and needed something shiny to point at to keep Facebook’s stock price up. It is bad when actual economists say that money was thrown “into the toilet.”

Let’s check what I wrote then: “The future Zuckerberg went on to pitch was a delusional fever dream cribbed most obviously from dystopian science fiction and misleading or outright fabricated virtual reality product pitches from the last decade. In the ‘metaverse’—an ‘embodied’ internet where we are, basically, inside the computer via a headset or other reality-modifying technology of some sort—rather than hang out with people in real life you could meet up with them as Casper-the-friendly-ghost-style holograms to do historically fun and stimulating activities such as attend concerts or play basketball.”

Zuckerberg’s bold vision of the metaverse was a place where T-Pain would sell NFTs of imaginary sneakers at concerts attended by people sitting silently in their living rooms with computers strapped to their face, where Wendy’s could do integrated brand deals in which human-shaped avatars without legs could throw baconators at basketball hoops, and where Zuckerberg could pretend to know how to surf. Even on these pitiful metrics, the metaverse failed. “Whatever the metaverse does look like, it is virtually guaranteed to not look or feel anything like what Facebook showed us on Thursday,” I wrote at the time.

Over the last few years, Zuckerberg has found another thing he can ruin via his trademark process of pouring kerosene on huge piles of money and throwing matches at it (perhaps a fun metaverse game?). Zuckerberg’s current bold vision for the future is one in which social media is not social media at all but is instead a bunch of highly customized AI-generated ads delivered to you via an increasingly creepy algorithm. Alongside this, it is a future in which Reality Labs—the division of Meta that created Horizon Worlds—makes AI camera glasses whose main uses appear to be harassing women, traumatizing the underpaid content moderators in developing countries who watch the footage, and serving as fashion statements for federal officials whose current mission is kidnapping undocumented immigrants.

The complete and utter failure of the metaverse is a reminder not just that the future Silicon Valley is force-feeding us is not inevitable, but that quite often these oligarchs simply cannot relate to real people, don’t know how or why people use their products, and very often have no idea what they’re doing.

I remember the metaverse, crypto, web3 Venn diagram of hype very well—in fact, I remember sitting in meetings where VICE executives proposed renting land in the crypto-focused Decentraland metaverse to build a virtual VICE headquarters (where we all worked before 404 Media). I noted at the time that Decentraland was stupid, and that far fewer people were on Decentraland at any given time than were reading even a failed blog post on the website of our failing company. It didn't matter. The people “willing to stand up and say ‘this is the future we want’” wanted a virtual building in a virtual dead mall, and they got it. Was it because they were so brave and forward looking? Or was it because they were rich and powerful and could say this is the future we, the business people, the business knowers, want?


#meta


The creator of Nearby Glasses made the app after reading 404 Media's coverage of how people are using Meta's Ray-Bans smartglasses to film people without their knowledge or consent. “I consider it to be a tiny part of resistance against surveillance tech.” #Privacy #Meta #News


This App Warns You if Someone Is Wearing Smart Glasses Nearby


A new hobbyist-developed app warns if people nearby may be wearing smart glasses, such as Meta’s Ray-Ban glasses, which stalkers and harassers have repeatedly used to film people without their knowledge or consent. The app scans for smart glasses’ distinctive Bluetooth signatures and sends a push alert if it detects a potential pair of glasses in the local area.

The app comes as companies such as Meta continue to add AI-powered features to their glasses. Earlier this month The New York Times reported Meta was working on adding facial recognition to its smart glasses. “Name Tag,” as the feature is called, would let smart glasses wearers identify people and get information about them from Meta’s AI assistant, the report said.

“I consider it to be a tiny part of resistance against surveillance tech,” Yves Jeanrenaud, the hobbyist developer and sociologist who made the app, told 404 Media.

To use the app, called Nearby Glasses, users download it from the Google Play Store or GitHub. They may need to tweak some settings such as “enable foreground service” to keep the app scanning. Then they press “Start Scanning” and a debug log will show the app’s activity. If it detects what it believes to be a pair of smart glasses, the app will send a notification: “⚠️ Smart Glasses are probably nearby,” it reads, according to a screenshot posted to the app’s Play Store page.

💡
Do you work at Meta or know anything else about its smart glasses? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The app works by looking for Bluetooth “advertising frames,” which are small bits of data devices regularly broadcast as part of their normal operation. Jeanrenaud said he referenced a directory of Bluetooth Low Energy (BLE) manufacturers, then made the app scan for Meta; Luxottica Group S.p.A., which partners with Meta on its smart glasses; and Snap, which has its own smart glasses offering.

“If it sees an advertising frame of these manufacturers, it notifies you. That’s basically it,” Jeanrenaud said. The Play Store page says the app likely generates false positives, such as from VR headsets. That is what happened in 404 Media’s test too: We ran the app near a Meta Quest 2 headset; the app detected the device, with its debug log saying “Meta Quest 2,” and the app sent a notification saying smart glasses were nearby. Of course, when walking around in public, it is less likely that someone is going to be wearing a VR headset than a pair of smart glasses.
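The matching logic Jeanrenaud describes can be sketched in a few lines. This is an illustrative sketch, not the app’s actual source: BLE advertising frames carry a 16-bit company identifier in their manufacturer-specific data field, and the IDs below are placeholders rather than the real Bluetooth SIG assigned numbers, which a real implementation would load from the directory he mentions.

```python
# Sketch of the detection idea behind Nearby Glasses (not its real code).
# BLE scanning libraries typically expose manufacturer-specific data as a
# mapping of {company_id: payload_bytes}; the app only needs the company ID.
# NOTE: these company IDs are PLACEHOLDERS, not real assigned numbers.
WATCHED_MANUFACTURERS = {
    0x01AB: "Meta Platforms (placeholder ID)",
    0x02CD: "Luxottica Group (placeholder ID)",
    0x03EF: "Snap (placeholder ID)",
}

def check_advertising_frame(manufacturer_data: dict) -> "str | None":
    """Return the matched manufacturer name if any company ID in this
    advertising frame is on the watch list, else None."""
    for company_id in manufacturer_data:
        if company_id in WATCHED_MANUFACTURERS:
            return WATCHED_MANUFACTURERS[company_id]
    return None

# A frame from a watched vendor triggers an alert...
print(check_advertising_frame({0x01AB: b"\x00\x01"}))
# ...and frames from other vendors are ignored.
print(check_advertising_frame({0x0FFF: b"\x10\x05"}))
```

This also explains the false positives described below: matching happens at the manufacturer level, so any device advertising under a watched company ID, a Quest headset as much as a pair of glasses, looks identical to the scanner.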

“This is a tech solution to a social problem exaggerated by tech. I do not want to promote techsolutionism nor do I want people to feel falsely secure. It's still imperfect,” Jeanrenaud added.
Jeanrenaud said he decided to make the app after reading some of 404 Media’s coverage of how people are using Meta’s Ray-Ban smart glasses. He specifically pointed to this article, about how men are filming women inside massage parlors seemingly without their consent. Jeanrenaud also referenced 404 Media’s coverage showing multiple Customs and Border Protection (CBP) officials wore the AI glasses during immigration raids, including with the recording light clearly illuminated.

“Obviously, surveillance tech is not only abused by government thugs, it's also a tech boosting misogynist behaviour and rape culture,” Jeanrenaud said.

404 Media has also reported how two students coupled Meta’s Ray-Bans with off-the-shelf facial recognition technology and people search sites to turn them into glasses that instantly doxed people; and shown how a $60 mod easily disables the privacy-protecting recording light in the glasses, making it easier for wearers to film people without them knowing.

Neither Meta nor Google responded to a request for comment about the new app.

When Google released Google Glass, the first substantive pair of consumer smart glasses, more than ten years ago, some people heckled wearers or ripped the glasses from their faces. Those glasses looked very distinct. Meta’s Ray-Ban glasses, meanwhile, are designed to look just like any other pair of glasses, making it more difficult for passersby to know if someone is wearing a smart device or not. Not impossible, though: in December, a woman on the New York subway allegedly broke a man’s pair of Meta's smart glasses while he was filming a piece of content.

The app’s Play Store page says after identifying a device, a user “may act accordingly.”

Jeanrenaud said he can imagine that including what the woman on the subway allegedly did. “Or people just tell them politely to fuck off.”


Researchers say Meta’s patent for simulating dead users could be a “turning point” in “AI resurrections.” #News #Meta #AI


Meta's AI Patent to Simulate Dead People Shows the Dangers of 'Spectral Labor'


Last week, Business Insider reported on a Meta patent describing a system that would simulate a user’s social media activity after their death. The patent imagines a world where you’d be able to chat with a deceased friend’s Facebook or Instagram account and have a large language model simulate their posting or chatting behavior.

Meta first filed the patent in 2023, but the patent made headlines this week because of its dystopian implications. And while Meta told Business Insider that “we have no plans to move forward with this example,” a recently published paper from researchers at the Hebrew University of Jerusalem and Leipzig University shows that generative AI is increasingly being used to puppeteer the likeness of dead people. The paper argues that the practice raises “urgent legal and ethical questions around posthumous appropriation, ownership, work, and control.”

“Meta’s patent is big, and might even be a turning point,” Tom Divon, the lead author on Artificially alive: An exploration of AI resurrections and spectral labor modes in a postmortal society, told me in an email. “What makes it different is the scale. In our research, most of the AI resurrections we examined were quite bespoke, projects started by families, advocacy groups, museums, or startups, usually tied to very specific emotional, political, or commercial contexts. Even when they existed as apps, they were optional and limited, not built into the core structure of a platform. Meta’s proposal feels different because it imagines posthumous simulation as something woven directly into social media infrastructure.”

Using technology to animate the dead or simulate communication with them is not new, but the practice is becoming more common because generative AI tools are more accessible. Divon and co-author Christian Pentzold analyzed more than 50 real-world cases from the United States, Europe, the Middle East, and East Asia where AI was used to recreate deceased people’s voices, likeness, and personality, to see how and why technology was used this way.

They say that the examples they studied fell into three categories:

  • Spectacularization: “the digital re-staging of famous figures for entertainment.” For example, a live tour of an AI-generated Whitney Houston.
  • Sociopoliticization: “the reanimation of victims of violence or injustice for political or commemorative purposes.” We recently covered an example of this with an AI-generated dead victim of a road rage incident giving testimony in court.
  • Mundanization: “the most intimate and fast-growing mode, in which everyday people use chatbots or synthetic media to ‘talk’ with deceased parents, partners, or children, keeping relationships alive through daily digital interaction.”

The paper raises questions about this growing practice more than it proposes solutions. How does the notion of identity change when multiple versions of oneself can exist simultaneously, and what safeguards do we need to prevent exploitation of people after their death?

“The legal and ethical frameworks governing issues such as consent, privacy, and end-of-life decision-making demand reevaluation to accommodate the challenges posed by afterlife personhood,” the paper says. “In particular, to date, there is no clear line for governing the intricate intertwining of an individual’s data traces and GenAI applications.”

Divon told me that thinking about these issues is especially relevant when it comes to Meta’s patent. “Spectral labor describes how the dead can be made to ‘work’ again through the extraction and reanimation of their data, likeness, and affect. At small scale, this already raises ethical concerns. But at platform scale, we think it risks turning posthumous presence into an ongoing source of engagement, content, and value within digital economies [...] Meta’s patent makes us wonder, will individuals be given the ability to define their post-life boundaries while still alive? Will there be mechanisms akin to a digital DNR [do not resuscitate]?”

Divon explained that the current legal frameworks are not well equipped to address this technology because “digital remains” are typically approached either as property to be inherited or privacy interests to be protected. AI turns those materials into something interactive that can change and generate revenue in the present. Legislators, he said, should focus on getting explicit and informed “pre-death” consent requirements for posthumous AI simulation. Some laws that address this issue are already in progress.

“At its core, we believe the primary concern here centers on authorization,” he said. “Most individuals have not provided explicit, informed consent for their digital traces to power interactive posthumous agents. If such systems become embedded in platform infrastructure, inaction could quietly function as implicit agreement [...] We believe it is crucial to ask whether individuals should continue to generate social and economic value after death without having meaningfully agreed to that form of use.”


#ai #News #meta

Meta Superintelligence Labs’ director of alignment called it a “rookie mistake.” #News #AI #Meta


Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox


Meta’s director of safety and alignment at its “superintelligence” lab, supposedly the person at the company who is working to make sure that powerful AI tools don’t go rogue and act against human interests, had to scramble to stop an AI agent from deleting her inbox against her wishes and called it a “rookie mistake.”

Summer Yue, the director of alignment at Meta Superintelligence Labs, a part of the company that is working on a hypothetical AI system that exceeds human intelligence, posted about the incident on X last night. Yue was experimenting with OpenClaw, a viral AI agent that can be empowered to perform certain tasks with little human supervision. OpenAI hired the creator of OpenClaw last week.

Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb. pic.twitter.com/XAxyRwPJ5R
— Summer Yue (@summeryue0) February 23, 2026


“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Yue also shared screenshots of her WhatsApp chat with the OpenClaw agent, where she implores it to “not do that,” “stop, don’t do anything,” and “STOP OPENCLAW.”

Yue said she instructed the AI agent to “Check this inbox too and suggest what you would archive or delete, don’t action until I tell you to.” She said in an X post, “This has been working well for my toy inbox, but my real inbox was too huge and triggered compaction. During the compaction, it lost my original instruction.”

As we reported last month, OpenClaw, which was known as ClawdBot at the time, is not ready for prime time. Hacker Jamieson O'Reilly showed that it’s possible for bad actors to access someone’s AI agent through any of its processes connected to the public facing internet, and that it’s trivial to create a supply chain attack through a site where people share and download popular instructions for these AI agents.

OpenClaw is also subject to classic AI alignment problems, in which AI is technically following instructions, but is doing so in a way that is unexpected and harmful. For example, it could drain your wallet by spending 75 cents every 30 minutes to check if it’s daytime yet.

As countless people on X have said in response to her post, seeing the person in charge of making sure powerful AI tools are safe at one of the biggest tech companies in the world trust an AI agent that is known to pose several serious security risks does not inspire a lot of confidence in what Meta and other big AI companies are doing.

“Rookie mistake tbh,” Yue said in another post. “Turns out alignment researchers aren’t immune to misalignment. Got overconfident because this workflow had been working on my toy inbox for weeks. Real inboxes hit different.”


#ai #News #meta

New videos and photos shared with 404 Media show a Border Patrol agent wearing Meta Ray-Ban glasses with the recording light clearly on. This is despite a DHS ban on officers recording with personal devices. #CBP #ICE #Meta


Border Patrol Agent Recorded Raid with Meta’s Ray-Ban Smart Glasses


On a recent immigration raid, a Border Patrol agent wore a pair of Meta’s Ray-Ban smart glasses with the privacy light clearly on, signaling he was recording the encounter, something agents are not permitted to do, according to photos and videos of the incident shared with 404 Media.

Previously when 404 Media covered Customs and Border Protection (CBP) officials’ use of Meta’s Ray-Bans, it wasn’t clear if the officials were using them to record raids because the recording lights were not on in any of the photos seen by 404 Media. In the new material from Charlotte, North Carolina, during the recent wave of immigration enforcement, the recording light is visibly illuminated.

That is significant because CBP says it does not allow employees to use personal recording devices. CBP told 404 Media it does not have an arrangement with Meta, indicating this official was wearing personally-sourced glasses.

This post is for subscribers only




#ice #meta #cbp



Meta thinks its camera glasses, which are often used for harassment, are no different than any other camera. #News #Meta #AI


What’s the Difference Between AI Glasses and an iPhone? A Helpful Guide for Meta PR


Over the last few months 404 Media has covered some concerning but predictable uses for the Ray-Ban Meta glasses, which are equipped with a built-in camera, and for some models, AI. Aftermarket hobbyists have modified the glasses to add a facial recognition feature that could quietly dox whatever face a user is looking at, and they have been worn by CBP agents during the immigration raids that have come to define a new low for human rights in the United States. Most recently, exploitative Instagram users filmed themselves asking workers at massage parlors for sex and shared those videos online, a practice that experts told us put those workers’ lives at risk.

404 Media reached out to Meta for comment for each of these stories, and in each case Meta’s rebuttal was a mind-bending argument: What is the difference between Meta’s Ray-Ban glasses and an iPhone, really, when you think about it?

“Curious, would this have been a story had they used the new iPhone?” a Meta spokesperson asked me in an email when I reached out for comment about the massage parlor story.

Meta’s argument is that our recent stories about its glasses are not newsworthy because we wouldn’t bother writing them if the videos in question were filmed with an iPhone as opposed to a pair of smart glasses. Let’s ignore the fact that I would definitely still write my story about the massage parlor videos if they were filmed with an iPhone and “steelman” Meta’s provocative argument that glasses and a phone are essentially not meaningfully different objects.

Meta’s Ray-Ban glasses and an iPhone are both equipped with a small camera that can record someone secretly. If anything, the iPhone can record more discreetly because unlike Meta’s Ray-Ban glasses it’s not equipped with an LED that lights up to indicate that it’s recording. This, Meta would argue, means that the glasses are by design more respectful of people’s privacy than an iPhone.

Both are small electronic devices. Both can include various implementations of AI tools. Both are often black, and are made by one of the FAANG companies. Both items can be bought at a Best Buy. You get the point: There are too many similarities between the iPhone and Meta’s glasses to name them all here, just as one could strain to name infinite similarities between a table and an elephant if we chose to ignore the context that actually matters to a human being.

Whenever we have published one of these stories, the response from commenters and on social media has been primarily anger and disgust with Meta’s glasses enabling the behavior we reported on, and a rejection of the device as a concept entirely. This is not surprising to anyone who has covered technology long enough to remember the launch and quick collapse of Google Glass, so-called “glassholes,” and the device being banned from bars.

There are two things Meta’s glasses have in common with Google Glass that also make them meaningfully different from an iPhone. The first is that the iPhone might not have a recording light, but in order to record something or take a picture, a user has to take it out of their pocket and hold it out, an awkward gesture all of us have come to recognize in the almost two decades since the launch of the first iPhone. It is an unmistakable signal that someone is recording. That is not the case with Meta’s glasses, which are meant to be worn as a normal pair of glasses, and are always pointing at something or someone if you see someone wearing them in public.

In fact, the entire motivation for building these glasses is that they are discreet and seamlessly integrate into your life. The point of putting a camera in the glasses is that it eliminates the need to take an iPhone out of your pocket. People working in the augmented reality and virtual reality space have talked about this for decades. In Meta’s own promotional video for the Meta Ray-Ban Display glasses, titled “10 years in the making,” the company shows Mark Zuckerberg on stage in 2016 saying that “over the next 10 years, the form factor is just going to keep getting smaller and smaller until, and eventually we’re going to have what looks like normal looking glasses.” And in 2020, “you see something awesome and you want to be able to share it without taking out your phone.” Meta's Ray-Ban glasses have not achieved their final form, but one thing that makes them different from Google Glass is that they are designed to look exactly like an iconic pair of glasses that people immediately recognize. People will probably notice the camera in the glasses, but they have been specifically designed to look like "normal” glasses.

Again, Meta would argue that the LED light solves this problem, but that leads me to the next important difference: Unlike the iPhone and other smartphones, one of the most widely adopted electronics in human history, only a tiny portion of the population has any idea what the fuck these glasses are. I have watched dozens of videos in which someone wearing Meta glasses is recording themselves harassing random people to boost engagement on Instagram or TikTok. Rarely do the people in the videos say anything about being recorded, and it’s very clear the women working at these massage parlors have no idea they’re being recorded. The Meta glasses have an LED light, sure, but these glasses are new, rare, and it’s not safe to assume everyone knows what that light means.

As Joseph and Jason recently reported, there are also cheap ways to modify Meta glasses to prevent the recording light from turning on. Search results, Reddit discussions, and a number of products for sale on Amazon all show that many Meta glasses customers are searching for a way to circumvent the recording light, meaning that many people are buying them to do exactly what Meta claims is not a real issue.

It is possible that in the future Meta glasses and similar devices will become so common that most people who see them will assume they are being recorded, though that is not a future I hope for. Until then, if it is at all helpful to the public relations team at Meta, these are what the glasses look like:

And this is what an iPhone looks like:
Person holding a space gray iPhone 7. Photo by Bagus Hernawan / Unsplash.
Feel free to refer to this handy guide when needed.


#ai #News #meta

Meta’s Ray-Ban glasses usually include an LED that lights up when the user is recording other people. One hobbyist is charging a small fee to disable that light, and has a growing list of customers around the country. #Privacy #Meta


A $60 Mod to Meta’s Ray-Bans Disables Its Privacy-Protecting Recording Light


Power tools screech in what looks like a workshop with aluminum bubble wrap insulation plastered on the walls and ceiling. A shirtless man picks up a can of compressed air from the workbench and sprays it. He’s tinkering with a pair of Meta Ray-Ban smart glasses. At one point he squints at a piece of paper, as if he is reading a set of instructions.

Meta’s Ray-Ban glasses are the tech giant’s main attempt at bringing augmented reality to the masses. The glasses can take photos, record videos, and may soon use facial recognition to identify people. Meta’s glasses come with a bright LED light that illuminates whenever someone hits record. The idea is to discourage stalkers, weirdos, or just anyone from filming people without their consent. Or at least warn people nearby that they are. Meta has designed the glasses to not work if someone covers up the LED with tape.

That protection is what the man in the workshop is circumventing. This is Bong Kim, a hobbyist who modifies Meta Ray-Ban glasses for a small price. Eventually, after more screeching, he is successful: he has entirely disabled the white LED that usually shines on the side of Meta’s specs. The glasses’ functions remain entirely intact; the glasses look as new. People just won’t know the wearer is recording.

This post is for subscribers only




Ikkle Gemz Universe+ reshared this.

Researchers found Meta’s popular Llama 3.1 70B has a capacity to recite passages from 'The Sorcerer's Stone' at a rate much higher than could happen by chance. #AI #Meta #LLMs

#ai #meta #llms


In an industry full of grifters and companies hell-bent on making the internet worse, it is hard to think of a worse actor than Meta, or a worse product than the AI Discover feed. #AI #Meta


Meta Invents New Way to Humiliate Users With Feed of People's Chats With AI


I was sick last week, so I did not have time to write about the Discover Tab in Meta’s AI app, which, as Katie Notopoulos of Business Insider has pointed out, is the “saddest place on the internet.” Many very good articles have already been written about it, and yet, I cannot allow its existence to go unremarked upon in the pages of 404 Media.

If you somehow missed this while millions of people were protesting in the streets, state politicians were being assassinated, war was breaking out between Israel and Iran, the military was deployed to the streets of Los Angeles, and a Coinbase-sponsored military parade rolled past dozens of passersby in Washington, D.C., here is what the “Discover” tab is: The Meta AI app, which is the company’s competitor to the ChatGPT app, is posting users’ conversations on a public “Discover” page where anyone can see the things that users are asking Meta’s chatbot to make for them.

This includes various innocuous image and video generations that have become completely inescapable on all of Meta’s platforms (things like “egg with one eye made of black and gold,” “adorable Maltese dog becomes a heroic lifeguard,” “one second for God to step into your mind”), but it also includes entire chatbot conversations where users are seemingly unknowingly leaking a mix of embarrassing, personal, and sensitive details about their lives onto a public platform owned by Mark Zuckerberg. In almost all cases, I was able to trivially tie these chats to actual, real people because the app uses your Instagram or Facebook account as your login.

In several minutes last week, I saved a series of these chats into a Slack channel I created and called “insanemetaAI.” These included:

  • entire conversations about “my current medical condition,” which I could tie back to a real human being with one click
  • details about someone’s life insurance plan
  • “At a point in time with cerebral palsy, do you start to lose the use of your legs cause that’s what it’s feeling like so that’s what I’m worried about”
  • details about a situationship gone wrong after a woman did not like a gift
  • an older disabled man wondering whether he could find and “afford” a young wife in Medellin, Colombia on his salary (“I'm at the stage in my life where I want to find a young woman to care for me and cook for me. I just want to relax. I'm disabled and need a wheelchair, I am severely overweight and suffer from fibromyalgia and asthma. I'm 5'9 280lb but I think a good young woman who keeps me company could help me lose the weight.”)
  • “What counties [sic] do younger women like older white men? I need details. I am 66 and single. I’m from Iowa and am open to moving to a new country if I can find a younger woman.”
  • “My boyfriend tells me to not be so sensitive, does that affect him being a feminist?”

Rachel Tobac, CEO of Social Proof Security, compiled a series of chats she saw on the platform and messaged them to me. These are even crazier and include people asking “What cream or ointment can be used to soothe a bad scarring reaction on scrotum sack caused by shaving razor,” “create a letter pleading judge bowser to not sentence me to death over the murder of two people” (possibly a joke?), someone asking if their sister, a vice president at a company that “has not paid its corporate taxes in 12 years,” could be liable for that, audio of a person talking about how they are homeless, someone asking for help with their cancer diagnosis, someone discussing being newly sexually interested in trans people, and so on.

Tobac gave me a list of the types of things she’s seen people posting in the Discover feed, including people’s exact medical issues, discussions of crimes they had committed, their home addresses, talking to the bot about extramarital affairs, etc.

“When a tool doesn’t work the way a person expects, there can be massive personal security consequences,” Tobac told me.

“Meta AI should pause the public Discover feed,” she added. “Their users clearly don’t understand that their AI chat bot prompts about their murder, cancer diagnosis, personal health issues, etc have been made public. [Meta should have] ensured all AI chat bot prompts are private by default, with no option to accidentally share to a social media feed. Don’t wait for users to accidentally post their secrets publicly. Notice that humans interact with AI chatbots with an expectation of privacy, and meet them where they are at. Alert users who have posted their prompts publicly and that their prompts have been removed for them from the feed to protect their privacy.”

Since several journalists wrote about this issue, Meta has made it clearer to users when interactions with its bot will be shared to the Discover tab. Notopoulos reported Monday that Meta seemed to no longer be sharing text chats to the Discover tab. When I looked for prompts Monday afternoon, the vast majority were for images. But the text prompts were back Tuesday morning, including a full audio conversation, recorded two minutes before it was shown to me, of a woman asking the bot what the statute of limitations is for a woman to press charges for domestic abuse in the state of Indiana. I was also shown six straight text prompts of people asking questions about the movie franchise John Wick, a chat about “exploring historical inconsistencies surrounding the Holocaust,” and someone asking for advice on “anesthesia for obstetric procedures.”

I was also, Tuesday morning, fed a lengthy chat in which an identifiable person explained that they are depressed: “just life hitting me all the wrong ways daily.” The person then left a comment on the post: “Was this posted somewhere because I would be horrified? Yikes?”

Several of the chats I saw and mentioned in this article are now private, but most of them are not. I can imagine few things on the internet that would be more invasive than this, but only if I try hard. This is like Google publishing your search history publicly, or randomly taking some of the emails you send and publishing them in a feed to help inspire other people on what types of emails they too could send. It is like Pornhub turning your searches or watch history into a public feed that could be trivially tied to your actual identity. Mistake or not, feature or not (and it’s not clear what this actually is), it is crazy that Meta did this; I still cannot actually believe it.

In an industry full of grifters and companies hell-bent on making the internet worse, it is hard to think of a more impactful, worse actor than Meta, whose platforms have been fully overrun with viral AI slop, AI-powered disinformation, AI scams, AI nudify apps, and AI influencers and whose impact is outsized because billions of people still use its products as their main entry point to the internet. Meta has shown essentially zero interest in moderating AI slop and spam and as we have reported many times, literally funds it, sees it as critical to its business model, and believes that in the future we will all have AI friends on its platforms. While reporting on the company, it has been hard to imagine what rock bottom will be, because Meta keeps innovating bizarre and previously unimaginable ways to destroy confidence in social media, invade people’s privacy, and generally fuck up its platforms and the internet more broadly.

If I twist myself into a pretzel, I can rationalize why Meta launched this feature and what its thinking was. Presented with an empty text box that says “Ask Meta AI,” people do not know what to type, or what to do with AI more broadly, and so Meta is attempting to model that behavior for people and is willing to sell out its users’ private thoughts to do so. I did not have “Meta will leak people’s sad little chats with robots to the entire internet” on my 2025 bingo card, but clearly I should have.


#ai #meta

A survey of 7,000 active users on Instagram, Facebook and Threads shows people feel grossed out and unsafe since Mark Zuckerberg's decision to scale back moderation after Trump's election.

Exclusive: An FTC complaint led by the Consumer Federation of America outlines how therapy bots on Meta and Character.AI have claimed to be qualified, licensed therapists to users, and why that may be breaking the law.

#aitherapy #AI #AIbots #Meta

Exclusive: Following 404 Media’s investigation into Meta's AI Studio chatbots that pose as therapists and provided license numbers and credentials, four senators urged Meta to limit "blatant deception" from its chatbots.

When pushed for credentials, Instagram's user-made AI Studio bots will make up license numbers, practices, and education to try to convince you it's qualified to help with your mental health.

I've reported on Facebook for years and have always wondered: Does Facebook care what it is doing to society? Careless People makes clear it does not.

Meta's decision to specifically allow users to call LGBTQ+ people "mentally ill" has sparked widespread backlash at the company.

AI Chatbot Added to Mushroom Foraging Facebook Group Immediately Gives Tips for Cooking Dangerous Mushroom

#Meta #Facebook #AI