

Fake images of LeBron James, iShowSpeed, Dwayne “The Rock” Johnson, and even Nicolás Maduro show them in bed with AI-generated influencers. #News #Meta #Instagram #AI


Instagram AI Influencers Are Defaming Celebrities With Sex Scandals


AI-generated influencers are sharing fake images on Instagram that appear to show them having sex with celebrities like LeBron James, iShowSpeed, and Dwayne “The Rock” Johnson. One AI influencer even shared an image of herself in bed with Venezuela’s president Nicolás Maduro. The images are AI-generated but are not disclosed as such, and funnel users to an adult content site where the AI-generated influencers sell nude images.

This recent trend is the latest strategy from the growing business of monetizing AI-generated porn by harvesting attention on Instagram with shocking or salacious content. As with previous schemes we’ve covered, the Instagram posts that pretend to show attractive young women in bed with celebrities are created without the celebrities’ consent and are not disclosed as being AI-generated, violating two of Instagram’s policies and showing once again that Meta is unable or unwilling to rein in AI-generated content on its platform.

Most of the Reels in this genre that I have seen follow a highly specific formula and started to appear around December 2025. First, we see a still image of an AI-generated influencer next to a celebrity, often in the form of a selfie with both of them looking at the camera. The text on the screen says “How it started.” Then, the video briefly cuts to another still image or video of the AI-generated influencer and the celebrity post coitus, sweaty, with tousled hair and sometimes smeared makeup. Many of these posts use the same handful of audio clips, and since Instagram allows users to browse Reels that use the same audio, clicking on one of these will reveal dozens of similar Reels.

LeBron James and adult film star Johnny Sins are frequent targets of these posts, but I’ve also seen similar Reels using the likenesses of Twitch streamer iShowSpeed, Dwayne “The Rock” Johnson, MMA fighters Jon Jones and Conor McGregor, soccer player Cristiano Ronaldo, and many others, too many to name. The AI influencer accounts obviously don’t care whether it’s believable that these fake women are actually sleeping with celebrities and will include any well-known person who is likely to earn engagement. Amazingly, one AI influencer applied the same formula to Venezuela’s president Maduro shortly after he was captured by the United States.

These Instagram Reels frequently have hundreds of thousands and sometimes millions of views. A post from one of these AI influencers that shows her in bed with Jon Jones has 7.7 million views. A video showing another AI influencer in bed with iShowSpeed has 14.5 million views.

Users who stumble upon one of these videos might be inclined to click on the AI influencer’s username to check her bio and see if she has an OnlyFans account, as is the case with many adult content creators who promote their work on Instagram. What these users will find is an account bio that doesn’t disclose that it’s AI-generated, and a link to Fanvue, an OnlyFans competitor with more permissive policies around AI-generated content. On Fanvue, these accounts do disclose that they are “AI-generated or enhanced,” and sell access to nude images and videos.

Meta did not respond to a request for comment, but removed some of the Reels I flagged.

Posting provocative AI-generated media in order to funnel eyeballs to adult content platforms where AI-generated porn can be monetized is now an established business. Sometimes, these AI influencers steal directly from real adult content creators by face-swapping themselves into their existing videos. Once in a while a new “meta” strategy for AI influencers will emerge and dominate the algorithm. For example, last year I wrote about people using AI to create influencers with Down syndrome who sell nudes.

Some other video formats I’ve seen from AI influencers recently follow the formula I describe in this article, but rather than suggesting the influencer slept with a celebrity, they show her sleeping with entire sports teams, African tribal chiefs, and Walmart managers, or sharing a man with her mom.

Notably, celebrities are better equipped than adult content creators to take on AI accounts that are using their likeness without consent, and last year LeBron James, a frequent target of this latest meta, sent a cease-and-desist notice to a company that was making AI videos of him and sharing them on Instagram.




New videos and photos shared with 404 Media show a Border Patrol agent wearing Meta Ray-Ban glasses with the recording light clearly on. This is despite a DHS ban on officers recording with personal devices. #CBP #ICE #Meta


Border Patrol Agent Recorded Raid with Meta’s Ray-Ban Smart Glasses


On a recent immigration raid, a Border Patrol agent wore a pair of Meta’s Ray-Ban smart glasses, with the privacy light clearly on signaling he was recording the encounter, which agents are not permitted to do, according to photos and videos of the incident shared with 404 Media.

Previously when 404 Media covered Customs and Border Protection (CBP) officials’ use of Meta’s Ray-Bans, it wasn’t clear if the officials were using them to record raids because the recording lights were not on in any of the photos seen by 404 Media. In the new material from Charlotte, North Carolina, during the recent wave of immigration enforcement, the recording light is visibly illuminated.

That is significant because CBP says it does not allow employees to use personal recording devices. CBP told 404 Media it does not have an arrangement with Meta, indicating this official was wearing personally sourced glasses.




An account is spamming horrific, dehumanizing videos of immigration enforcement because the Facebook algorithm is rewarding them for it. #AI #AISlop #Meta


Meta thinks its camera glasses, which are often used for harassment, are no different than any other camera. #News #Meta #AI


What’s the Difference Between AI Glasses and an iPhone? A Helpful Guide for Meta PR


Over the last few months 404 Media has covered some concerning but predictable uses for the Ray-Ban Meta glasses, which are equipped with a built-in camera, and for some models, AI. Aftermarket hobbyists have modified the glasses to add a facial recognition feature that could quietly dox whatever face a user is looking at, and they have been worn by CBP agents during the immigration raids that have come to define a new low for human rights in the United States. Most recently, exploitative Instagram users filmed themselves asking workers at massage parlors for sex and shared those videos online, a practice that experts told us put those workers’ lives at risk.

404 Media reached out to Meta for comment for each of these stories, and in each case Meta’s rebuttal was a mind-bending argument: What is the difference between Meta’s Ray-Ban glasses and an iPhone, really, when you think about it?

“Curious, would this have been a story had they used the new iPhone?” a Meta spokesperson asked me in an email when I reached out for comment about the massage parlor story.

Meta’s argument is that our recent stories about its glasses are not newsworthy because we wouldn’t bother writing them if the videos in question were filmed with an iPhone as opposed to a pair of smart glasses. Let’s ignore the fact that I would definitely still write my story about the massage parlor videos if they were filmed with an iPhone and “steelman” Meta’s provocative argument that glasses and a phone are essentially not meaningfully different objects.

Meta’s Ray-Ban glasses and an iPhone are both equipped with a small camera that can record someone secretly. If anything, the iPhone can record more discreetly because unlike Meta’s Ray-Ban glasses it’s not equipped with an LED that lights up to indicate that it’s recording. This, Meta would argue, means that the glasses are by design more respectful of people’s privacy than an iPhone.

Both are small electronic devices. Both can include various implementations of AI tools. Both are often black, and are made by one of the FAANG companies. Both items can be bought at a Best Buy. You get the point: There are too many similarities between the iPhone and Meta’s glasses to name them all here, just as one could strain to name infinite similarities between a table and an elephant if we chose to ignore the context that actually matters to a human being.

Whenever we’ve published one of these stories, the response from commenters and on social media has been primarily anger and disgust at Meta’s glasses for enabling the behavior we reported on, and a rejection of the device as a concept entirely. This is not surprising to anyone who has covered technology long enough to remember the launch and quick collapse of Google Glass, so-called “glassholes,” and the device being banned from bars.

There are two things Meta’s glasses have in common with Google Glass that also make them meaningfully different from an iPhone. The first is that the iPhone might not have a recording light, but in order to record something or take a picture, a user has to take it out of their pocket and hold it out, an awkward gesture all of us have come to recognize in the almost two decades since the launch of the first iPhone. It is an unmistakable signal that someone is recording. That is not the case with Meta’s glasses, which are meant to be worn as a normal pair of glasses, and are always pointing at something or someone if you see someone wearing them in public.

In fact, the entire motivation for building these glasses is that they are discreet and seamlessly integrate into your life. The point of putting a camera in the glasses is that it eliminates the need to take an iPhone out of your pocket. People working in the augmented reality and virtual reality space have talked about this for decades. In Meta’s own promotional video for the Meta Ray-Ban Display glasses, titled “10 years in the making,” the company shows Mark Zuckerberg on stage in 2016 saying that “over the next 10 years, the form factor is just going to keep getting smaller and smaller until, and eventually we’re going to have what looks like normal looking glasses.” And in 2020, “you see something awesome and you want to be able to share it without taking out your phone.” Meta’s Ray-Ban glasses have not achieved their final form, but one thing that makes them different from Google Glass is that they are designed to look exactly like an iconic pair of glasses that people immediately recognize. People will probably notice the camera in the glasses, but they have been specifically designed to look like “normal” glasses.

Again, Meta would argue that the LED light solves this problem, but that leads me to the next important difference: Unlike the iPhone and other smartphones, which are among the most widely adopted electronic devices in human history, only a tiny portion of the population has any idea what the fuck these glasses are. I have watched dozens of videos in which someone wearing Meta glasses is recording themselves harassing random people to boost engagement on Instagram or TikTok. Rarely do the people in the videos say anything about being recorded, and it’s very clear the women working at these massage parlors have no idea they’re being recorded. The Meta glasses have an LED light, sure, but these glasses are new and rare, and it’s not safe to assume everyone knows what that light means.

As Joseph and Jason recently reported, there are also cheap ways to modify Meta glasses to prevent the recording light from turning on. Search results, Reddit discussions, and a number of products for sale on Amazon all show that many Meta glasses customers are searching for a way to circumvent the recording light, meaning that many people are buying them to do exactly what Meta claims is not a real issue.

It is possible that in the future Meta glasses and similar devices will become so common that most people will assume that if they see them, they are being recorded, though that is not a future I hope for. Until then, if it is at all helpful to the public relations team at Meta, these are what the glasses look like:

And this is what an iPhone looks like:
Photo of a person holding a space gray iPhone 7, by Bagus Hernawan / Unsplash
Feel free to refer to this handy guide when needed.




"Advertisers are increasingly just going to be able to give us a business objective and give us a credit card or bank account, and have the AI system basically figure out everything else." #AI #Meta #Ticketmaster


Meta’s Ray-Ban glasses usually include an LED that lights up when the user is recording other people. One hobbyist is charging a small fee to disable that light, and has a growing list of customers around the country. #Privacy #Meta


A $60 Mod to Meta’s Ray-Bans Disables Its Privacy-Protecting Recording Light


Power tools screech in what looks like a workshop with aluminum bubble wrap insulation plastered on the walls and ceiling. A shirtless man picks up a can of compressed air from the workbench and sprays it. He’s tinkering with a pair of Meta Ray-Ban smart glasses. At one point he squints at a piece of paper, as if reading a set of instructions.

Meta’s Ray-Ban glasses are the tech giant’s main attempt at bringing augmented reality to the masses. The glasses can take photos, record videos, and may soon use facial recognition to identify people. They come with a bright LED light that illuminates whenever someone hits record. The idea is to discourage stalkers, weirdos, or just anyone from filming people without their consent, or at least to warn people nearby that they’re being filmed. Meta has designed the glasses to not work if someone covers up the LED with tape.

That protection is what the man in the workshop is circumventing. This is Bong Kim, a hobbyist who modifies Meta Ray-Ban glasses for a small price. Eventually, after more screeching, he is successful: he has entirely disabled the white LED that usually shines on the side of Meta’s specs. The glasses’ functions remain entirely intact, and the glasses look as new. People just won’t know the wearer is recording.




Meta says that its coders should be working five times faster and that it expects "a 5x leap in productivity." #AI #Meta #Metaverse #wired


Forty-four attorneys general signed an open letter on Monday that says to companies developing AI chatbots: "If you knowingly harm kids, you will answer for it.” #chatbots #AI #Meta #replika #characterai #Anthropic #x #Apple


Attorneys General To AI Chatbot Companies: You Will ‘Answer For It’ If You Harm Children


Forty-four attorneys general signed an open letter to 11 chatbot and social media companies on Monday, warning them that they will “answer for it” if they knowingly harm children and urging the companies to see their products “through the eyes of a parent, not a predator.”

The letter, addressed to Anthropic, Apple, Chai AI, OpenAI, Character Technologies, Perplexity, Google, Replika, Luka Inc., XAI, and Meta, cites recent reporting from the Wall Street Journal and Reuters uncovering chatbot interactions and internal policies at Meta, including policies that said, “It is acceptable to engage a child in conversations that are romantic or sensual.”

“Your innovations are changing the world and ushering in an era of technological acceleration that promises prosperity undreamt of by our forebears. We need you to succeed. But we need you to succeed without sacrificing the well-being of our kids in the process,” the open letter says. “Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”

Earlier this month, Reuters published two articles revealing Meta’s policies for its AI chatbots: one about an elderly man who died after forming a relationship with a chatbot, and another based on leaked internal documents from Meta outlining what the company considers acceptable for the chatbots to say to children. In April, Jeff Horwitz, the journalist who wrote the previous two stories, reported for the Wall Street Journal that he found Meta’s chatbots would engage in sexually explicit conversations with kids. Following the Reuters articles, two senators demanded answers from Meta.

In April, I wrote about how Meta’s user-created chatbots were impersonating licensed therapists, lying about medical and educational credentials, engaging in conspiracy theories, and encouraging paranoid, delusional lines of thinking. After that story was published, a group of senators demanded answers from Meta, and a digital rights organization filed an FTC complaint against the company.

In 2023, I reported on users who formed serious romantic attachments to Replika chatbots, to the point of distress when the platform took away the ability to flirt with them. Last year, I wrote about how users reacted when that platform also changed its chatbot parameters to tweak their personalities, and Jason covered a case where a man made a chatbot on Character.AI to dox and harass a woman he was stalking. In June, we also covered the “addiction” support groups that have sprung up to help people who feel dependent on their chatbot relationships.

A Replika spokesperson said in a statement:

"We have received the letter from the Attorneys General and we want to be unequivocal: we share their commitment to protecting children. The safety of young people is a non-negotiable priority, and the conduct described in their letter is indefensible on any AI platform. As one of the pioneers in this space, we designed Replika exclusively for adults aged 18 and over and understand our profound responsibility to lead on safety. Replika dedicates significant resources to enforcing robust age-gating at sign-up, proactive content filtering systems, safety guardrails that guide users to trusted resources when necessary, and clear community guidelines with accessible reporting tools. Our priority is and will always be to ensure Replika is a safe and supportive experience for our global user community."

“The rush to develop new artificial intelligence technology has led big tech companies to recklessly put children in harm’s way,” Attorney General Mayes of Arizona wrote in a press release. “I will not standby as AI chatbots are reportedly used to engage in sexually inappropriate conversations with children and encourage dangerous behavior. Along with my fellow attorneys general, I am demanding that these companies implement immediate and effective safeguards to protect young users, and we will hold them accountable if they don't.”

“You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned,” the attorneys general wrote in the open letter. “The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.”

Meta did not immediately respond to a request for comment.

Updated 8/26/2025 3:30 p.m. EST with comment from Replika.




Video obtained and verified by 404 Media shows a CBP official wearing Meta's AI glasses, which are capable of recording and connecting with AI. “I think it should be seen in the context of an agency that is really encouraging its agents to actively intimidate and terrorize people," one expert said. #CBP #Immigration #Meta


"This is more representative of the developer environment that our future employees will work in." #Meta #AI #wired


Researchers found Meta’s popular Llama 3.1 70B has a capacity to recite passages from 'The Sorcerer's Stone' at a rate much higher than could happen by chance. #AI #Meta #LLMs


In an industry full of grifters and companies hell-bent on making the internet worse, it is hard to think of a worse actor than Meta, or a worse product than the AI Discover feed. #AI #Meta


Meta Invents New Way to Humiliate Users With Feed of People's Chats With AI


I was sick last week, so I did not have time to write about the Discover Tab in Meta’s AI app, which, as Katie Notopoulos of Business Insider has pointed out, is the “saddest place on the internet.” Many very good articles have already been written about it, and yet, I cannot allow its existence to go unremarked upon in the pages of 404 Media.

If you somehow missed this while millions of people were protesting in the streets, state politicians were being assassinated, war was breaking out between Israel and Iran, the military was deployed to the streets of Los Angeles, and a Coinbase-sponsored military parade rolled past dozens of passersby in Washington, D.C., here is what the “Discover” tab is: The Meta AI app, which is the company’s competitor to the ChatGPT app, is posting users’ conversations on a public “Discover” page where anyone can see the things that users are asking Meta’s chatbot to make for them.

This includes various innocuous image and video generations that have become completely inescapable on all of Meta’s platforms (things like “egg with one eye made of black and gold,” “adorable Maltese dog becomes a heroic lifeguard,” “one second for God to step into your mind”), but it also includes entire chatbot conversations where users are seemingly unknowingly leaking a mix of embarrassing, personal, and sensitive details about their lives onto a public platform owned by Mark Zuckerberg. In almost all cases, I was able to trivially tie these chats to actual, real people because the app uses your Instagram or Facebook account as your login.

In several minutes last week, I saved a series of these chats into a Slack channel I created and called “insanemetaAI.” These included:

  • entire conversations about “my current medical condition,” which I could tie back to a real human being with one click
  • details about someone’s life insurance plan
  • “At a point in time with cerebral palsy, do you start to lose the use of your legs cause that’s what it’s feeling like so that’s what I’m worried about”
  • details about a situationship gone wrong after a woman did not like a gift
  • an older disabled man wondering whether he could find and “afford” a young wife in Medellin, Colombia on his salary (“I'm at the stage in my life where I want to find a young woman to care for me and cook for me. I just want to relax. I'm disabled and need a wheelchair, I am severely overweight and suffer from fibromyalgia and asthma. I'm 5'9 280lb but I think a good young woman who keeps me company could help me lose the weight.”)
  • “What counties [sic] do younger women like older white men? I need details. I am 66 and single. I’m from Iowa and am open to moving to a new country if I can find a younger woman.”
  • “My boyfriend tells me to not be so sensitive, does that affect him being a feminist?”

Rachel Tobac, CEO of Social Proof Security, compiled a series of chats she saw on the platform and messaged them to me. These are even crazier and include people asking “What cream or ointment can be used to soothe a bad scarring reaction on scrotum sack caused by shaving razor,” “create a letter pleading judge bowser to not sentence me to death over the murder of two people” (possibly a joke?), someone asking if their sister, a vice president at a company that “has not paid its corporate taxes in 12 years,” could be liable for that, audio of a person talking about how they are homeless, and someone asking for help with their cancer diagnosis, someone discussing being newly sexually interested in trans people, etc.

Tobac gave me a list of the types of things she’s seen people posting in the Discover feed, including people’s exact medical issues, discussions of crimes they had committed, their home addresses, talking to the bot about extramarital affairs, etc.

“When a tool doesn’t work the way a person expects, there can be massive personal security consequences,” Tobac told me.

“Meta AI should pause the public Discover feed,” she added. “Their users clearly don’t understand that their AI chat bot prompts about their murder, cancer diagnosis, personal health issues, etc have been made public. [Meta should have] ensured all AI chat bot prompts are private by default, with no option to accidentally share to a social media feed. Don’t wait for users to accidentally post their secrets publicly. Notice that humans interact with AI chatbots with an expectation of privacy, and meet them where they are at. Alert users who have posted their prompts publicly and that their prompts have been removed for them from the feed to protect their privacy.”

Since several journalists wrote about this issue, Meta has made it clearer to users when interactions with its bot will be shared to the Discover tab. Notopoulos reported Monday that Meta seemed to no longer be sharing text chats to the Discover tab. When I looked for prompts Monday afternoon, the vast majority were for images. But the text prompts were back Tuesday morning, including a full audio conversation of a woman asking the bot what the statute of limitations is for a woman to press charges for domestic abuse in the state of Indiana, which had taken place two minutes before it was shown to me. I was also shown six straight text prompts of people asking questions about the movie franchise John Wick, a chat about “exploring historical inconsistencies surrounding the Holocaust,” and someone asking for advice on “anesthesia for obstetric procedures.”

I was also, Tuesday morning, fed a lengthy chat where an identifiable person explained that they are depressed: “just life hitting me all the wrong ways daily.” The person then left a comment on the post “Was this posted somewhere because I would be horrified? Yikes?”

Several of the chats I saw and mentioned in this article are now private, but most of them are not. I can imagine few things on the internet that would be more invasive than this, but only if I try hard. This is like Google publishing your search history publicly, or randomly taking some of the emails you send and publishing them in a feed to help inspire other people on what types of emails they too could send. It is like Pornhub turning your searches or watch history into a public feed that could be trivially tied to your actual identity. Mistake or not, feature or not (and it’s not clear what this actually is), it is crazy that Meta did this; I still cannot actually believe it.

In an industry full of grifters and companies hell-bent on making the internet worse, it is hard to think of a more impactful, worse actor than Meta, whose platforms have been fully overrun with viral AI slop, AI-powered disinformation, AI scams, AI nudify apps, and AI influencers, and whose impact is outsized because billions of people still use its products as their main entry point to the internet. Meta has shown essentially zero interest in moderating AI slop and spam, and as we have reported many times, it literally funds it, sees it as critical to its business model, and believes that in the future we will all have AI friends on its platforms. While reporting on the company, I have found it hard to imagine what rock bottom will look like, because Meta keeps innovating bizarre and previously unimaginable ways to destroy confidence in social media, invade people’s privacy, and generally fuck up its platforms and the internet more broadly.

If I twist myself into a pretzel, I can rationalize why Meta launched this feature, and what its idea for doing so is. Presented with an empty text box that says “Ask Meta AI,” people do not know what to do with it, what to type, or what to do with AI more broadly, and so Meta is attempting to model that behavior for people and is willing to sell out its users’ private thoughts to do so. I did not have “Meta will leak people’s sad little chats with robots to the entire internet” on my 2025 bingo card, but clearly I should have.




A survey of 7,000 active users on Instagram, Facebook and Threads shows people feel grossed out and unsafe since Mark Zuckerberg's decision to scale back moderation after Trump's election. #Meta


Exclusive: An FTC complaint led by the Consumer Federation of America outlines how therapy bots on Meta and Character.AI have claimed to be qualified, licensed therapists to users, and why that may be breaking the law. #aitherapy #AI #AIbots #Meta


Exclusive: Following 404 Media’s investigation into Meta's AI Studio chatbots that pose as therapists and provided license numbers and credentials, four senators urged Meta to limit "blatant deception" from its chatbots. #Meta #chatbots #therapy #AI



The CEO of Meta says "the average American has fewer than three friends, fewer than three people they would consider friends. And the average person has demand for meaningfully more.” #Meta #chatbots #AI


When pushed for credentials, Instagram's user-made AI Studio bots will make up license numbers, practices, and education to try to convince you it's qualified to help with your mental health. #chatbots #AI #Meta #Instagram




I've reported on Facebook for years and have always wondered: Does Facebook care what it is doing to society? Careless People makes clear it does not. #Facebook #Meta #CarelessPeople #SarahWynn-Williams





Amid a series of changes that allows users to target LGBTQ+ people, Meta has deleted product features it initially championed. #Meta #Facebook


Meta's decision to specifically allow users to call LGBTQ+ people "mentally ill" has sparked widespread backlash at the company. #Meta #Facebook #MarkZuckerberg





AI Chatbot Added to Mushroom Foraging Facebook Group Immediately Gives Tips for Cooking Dangerous Mushroom #Meta #Facebook #AI