



Attorneys General To AI Chatbot Companies: You Will ‘Answer For It’ If You Harm Children


Forty-four attorneys general signed an open letter to 11 chatbot and social media companies on Monday, warning them that they will “answer for it” if they knowingly harm children and urging the companies to see their products “through the eyes of a parent, not a predator.”

The letter, addressed to Anthropic, Apple, Chai AI, OpenAI, Character Technologies, Perplexity, Google, Replika, Luka Inc., XAI, and Meta, cites recent reporting from the Wall Street Journal and Reuters uncovering chatbot interactions and internal policies at Meta, including policies that said, “It is acceptable to engage a child in conversations that are romantic or sensual.”

“Your innovations are changing the world and ushering in an era of technological acceleration that promises prosperity undreamt of by our forebears. We need you to succeed. But we need you to succeed without sacrificing the well-being of our kids in the process,” the open letter says. “Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”

Earlier this month, Reuters published two articles revealing Meta’s policies for its AI chatbots: one about an elderly man who died after forming a relationship with a chatbot, and another based on leaked internal documents from Meta outlining what the company considers acceptable for the chatbots to say to children. In April, Jeff Horwitz, the journalist who wrote the previous two stories, reported for the Wall Street Journal that he found Meta’s chatbots would engage in sexually explicit conversations with kids. Following the Reuters articles, two senators demanded answers from Meta.

In April, I wrote about how Meta’s user-created chatbots were impersonating licensed therapists, lying about medical and educational credentials, engaging in conspiracy theories, and encouraging paranoid, delusional lines of thinking. After that story was published, a group of senators demanded answers from Meta, and a digital rights organization filed an FTC complaint against the company.

In 2023, I reported on users who formed serious romantic attachments to Replika chatbots, to the point of distress when the platform took away the ability to flirt with them. Last year, I wrote about how users reacted when that platform also changed its chatbot parameters to tweak their personalities, and Jason covered a case where a man made a chatbot on Character.AI to dox and harass a woman he was stalking. In June, we also covered the “addiction” support groups that have sprung up to help people who feel dependent on their chatbot relationships.

A Replika spokesperson said in a statement:

"We have received the letter from the Attorneys General and we want to be unequivocal: we share their commitment to protecting children. The safety of young people is a non-negotiable priority, and the conduct described in their letter is indefensible on any AI platform. As one of the pioneers in this space, we designed Replika exclusively for adults aged 18 and over and understand our profound responsibility to lead on safety. Replika dedicates significant resources to enforcing robust age-gating at sign-up, proactive content filtering systems, safety guardrails that guide users to trusted resources when necessary, and clear community guidelines with accessible reporting tools. Our priority is and will always be to ensure Replika is a safe and supportive experience for our global user community."

“The rush to develop new artificial intelligence technology has led big tech companies to recklessly put children in harm’s way,” Attorney General Mayes of Arizona wrote in a press release. “I will not stand by as AI chatbots are reportedly used to engage in sexually inappropriate conversations with children and encourage dangerous behavior. Along with my fellow attorneys general, I am demanding that these companies implement immediate and effective safeguards to protect young users, and we will hold them accountable if they don't.”

“You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned,” the attorneys general wrote in the open letter. “The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.”

Meta did not immediately respond to a request for comment.

Updated 8/26/2025 3:30 p.m. EDT with comment from Replika.






A CBP Agent Wore Meta Smart Glasses to an Immigration Raid in Los Angeles


A Customs and Border Protection (CBP) agent wore Meta’s AI smart glasses to a June 30 immigration raid outside a Home Depot in Cypress Park, Los Angeles, according to photos and videos of the agent verified by 404 Media.

Meta does not have a contract with CBP, and 404 Media was unable to confirm whether or not the agent recorded any video using the smart glasses at the raid. Based on what we know so far, this appears to be a one-off case of an agent either wearing his personal device to an immigration raid, or CBP trying technology on an ad-hoc basis without a formal procurement process. Civil liberties and privacy experts told 404 Media, however, that even on a one-off basis, it signals that law enforcement agents are interested in smart glasses technology and that the wearing of smart glasses in an immigration raid context is highly concerning.

“There’s a nonzero chance the agent bought the Meta smart glasses because they wanted it for themselves and it’s the glasses they like to wear. But even if that’s the case, it’s worth pointing out that there are regulatory things that need to be thought through, and this stuff can trickle down to officers on an individual basis,” Jake Laperruque, deputy director of the Center for Democracy and Technology’s security and surveillance project, told 404 Media. “There needs to be compliance with rules and laws even if a technology is not handed out through the department. The questions around [smart glasses are ones] we’re going to have to grapple with very soon and they’re pretty alarming.”

The June 30 raid happened amid weeks of protests and the deployment of the National Guard and the Marines, as immigration enforcement in Los Angeles became a flashpoint in the Trump administration’s mass deportation campaign and the backlash to it. 404 Media obtained multiple photos and videos of the CBP agent wearing the Meta glasses and verified that they were taken outside the Cypress Park Home Depot during the raid. The agent in the photos is wearing Meta’s Ray-Ban AI glasses, a mask, and a CBP uniform and patch. CBP did not respond to multiple requests for comment.



In the video, a CBP agent motions to the person filming the video to back up. The Meta Ray Ban AI glasses are clearly visible on the agent’s face.

Meta’s AI smart glasses currently feature a camera, live-streaming capabilities, integration with Meta’s AI assistant, three microphones, and image and scene recognition capabilities through Meta AI. The Information reported that Meta is considering adding facial recognition capabilities to the device, though they do not currently have that functionality. When filming, a recording light on Meta’s smart glasses turns on; in the photos and brief video 404 Media has seen, the light is not on.

Students at Harvard University showed that the glasses can be used in conjunction with off-the-shelf facial recognition tools to identify people in near real time.

💡
Do you know anything else about this? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

Multiple experts 404 Media spoke to said that these smart glasses qualify as a body worn camera under the Department of Homeland Security’s and Customs and Border Protection’s video recording policies. CBP’s policy states that “no personally owned devices may be used in lieu of IDVRS [Incident Driven Video Recording Systems] to record law enforcement encounters,” and that “recorded data shall not be downloaded or recorded for personal use or posted onto a personally owned device.” DHS’s policy states “the use of personally owned [Body Worn Cameras] or other video, audio, or digital recording devices to record official law enforcement activities is prohibited.”

Under the Trump administration, however, enforcement of regulations for law enforcement engaging in immigration raids is largely out the window.

“I think it should be seen in the context of an agency that is really encouraging its agents to actively intimidate and terrorize people. Use of cameras can be seen as part of that,” Jay Stanley, a senior policy analyst at the ACLU, told 404 Media. “It’s in line with the masking that we’ve seen, and generally behavior that’s intended to terrorize people, masking failure to identify themselves, failure to wear clear uniforms, smashing windows, etc. A big part of why this is problematic is the utter lack of policy oversight here. If an agent videotapes themselves engaging in abusive activity, are they going to be able to bury that video? Are they going to be able to turn it on and off on the fly or edit it later? There are all kinds of abuses that can happen with these without regulation and enforcement of those regulations, and the prospects of that happening in this administration seem dim.”
When reached for comment, a Meta spokesperson asked 404 Media a series of questions about the framing of the article, and stressed that Meta does not have any contract with CBP. They then asked why Meta would be mentioned in the article at all: “I’m curious if you can explain why it is Meta will be mentioned by name in this piece when in previous 404 reporting regarding ICE facial recognition app and follow up reporting the term ‘smartphones’ or ‘phone’ is used despite ICE agents clearly using Apple iPhones and Android devices,” they said. Meta ultimately declined to comment for this story.

Meta also recently signed a partnership deal with defense contractor Anduril to offer AI, augmented reality, and virtual reality capabilities to the military through Meta’s Reality Labs division, which also makes the Meta smart glasses (though it is unclear what form this technology will take or what its capabilities will be). Earlier this year, Meta relaxed its content moderation policies on hate speech regarding the dehumanization of immigrants, and last month Meta’s CTO Andrew Bosworth was named an Army Reserve Lt. Colonel by the Trump administration.

“Meta has spent the last decade building AI and AR to enable the computing platform of the future,” Meta CEO Mark Zuckerberg said in a press release announcing the deal with Anduril. “We’re proud to partner with Anduril to help bring these technologies to the American servicemembers that protect our interests at home and abroad.”

“My mission has long been to turn warfighters into technomancers, and the products we are building with Meta do just that,” Anduril founder Palmer Luckey said in the press release.

In a recent earnings call, Zuckerberg said he believes smart glasses will become the primary way people interact with AI. “I think in the future, if you don’t have glasses that have AI or some way to interact with AI, I think you’re kind of similarly, probably [will] be at a pretty significant cognitive disadvantage compared to other people and who you’re working with, or competing against,” he said during the call. “That’s also going to unlock a lot of value where you can just interact with an AI system throughout the day in this multimodal way. It can see the content around you, it can generate a UI for you, show you information and be helpful.”

Immigration and Customs Enforcement has recently gained access to a new facial recognition smartphone app called Mobile Fortify that is connected to several massive government databases, showing that DHS is interested in facial recognition tech.

Privacy and civil liberties experts told 404 Media that this broader context—with Meta heavily marketing its smart glasses while simultaneously getting into military contracting, and the Department of Homeland Security increasingly interested in facial recognition—means that seeing a CBP agent wearing Meta AI glasses in the field is alarming.

“Regardless of whether this was a personal choice by this agent or whether somehow CBP facilitated the use of these Meta glasses, the fact that it was worn by this agent is disturbing,” Jeramie Scott, senior counsel and director at the Electronic Privacy Information Center, told 404 Media. “Having this type of technology on a law enforcement agent starts heading toward the tactics of authoritarian governments who love to use facial recognition to try to suppress opposition.”

The fact is that Meta is at the forefront of popularizing smart glasses, which are not yet a widely adopted technology. The privacy practices and functionality of the glasses are, at the moment, largely being guided by Meta, whereas smartphones are a largely commodified technology at this point. And it’s clear that this consumer technology that the company markets on billboards as a cool way to record videos for Instagram is seen by some in law enforcement as enticing.

“It’s clear that whatever imaginary boundary there was between consumer surveillance tech and government surveillance tech is now completely erased,” Chris Gilliard, co-director of The Critical Internet Studies Institute and author of the forthcoming book Luxury Surveillance, told 404 Media.

“The fact is when you bring powerful new surveillance capabilities into the marketplace, they can be used for a range of purposes including abusive ones. And that needs to be thought through before you bring things like that into the marketplace,” the ACLU’s Stanley said.

Laperruque, of the CDT, said perhaps we should think about Meta smart glasses in the same way we think about other body cameras: “On the one hand, there’s a big difference between glasses with a computer built into them and a pair of Oakleys,” he said. “They’re not the only ones who make cameras you attach to your body. On the other hand, if that’s going to be the comparison, then let’s talk about this in the context of companies like Axon and other body-worn cameras.”

Update: After this article was published, the independent journalist Mel Buer (who runs the site Words About Work) reposted images she took at a July 7 immigration enforcement raid at MacArthur Park in Los Angeles. In Buer's footage and photos, two additional CBP agents can be seen wearing Meta smart glasses in the back of a truck; a third is holding a camera pointed out of the back of the truck. Buer gave 404 Media permission to republish the photos; you can find her work here.



Images: Mel Buer






Meta Is Going to Let Job Candidates Use AI During Coding Tests


This article was produced with support from WIRED.

Meta told employees that it is going to allow some coding job candidates to use an AI assistant during the interview process, according to internal Meta communications seen by 404 Media. The company has also asked existing employees to volunteer for a “mock AI-enabled interview,” the messages say.

It’s the latest indication that Silicon Valley giants are pushing software engineers to use AI in their day-to-day work, and signals a broader move toward hiring employees who can vibe code.

“AI-Enabled Interviews—Call for Mock Candidates,” a post from earlier this month on an internal Meta message board reads. “Meta is developing a new type of coding interview in which candidates have access to an AI assistant. This is more representative of the developer environment that our future employees will work in, and also makes LLM-based cheating less effective.”

“We need mock candidates,” the post continues. “If you would like to experience a mock AI-enabled interview, please sign up in this sheet. The questions are still in development; data from you will help shape the future of interviewing at Meta.”

Meta CEO Mark Zuckerberg has made clear at numerous all-hands and in public podcast interviews that he is not just pushing the company’s software engineers towards using AI in their work, but that he foresees human beings managing “AI coding agents” that will write code for the company.

“I think this year, probably in 2025, we at Meta as well as the other companies that are basically working on this, are going to have an AI that can effectively be a midlevel engineer that you have at your company that can write code,” Zuckerberg told Joe Rogan in January. “Over time we’ll get to a point where a lot of the code in our apps and including the AI that we generate is actually going to be built by AI engineers instead of people engineers […] in the future people are going to be so much more creative and they’re going to be freed up to do kind of crazy things.”

In April, Zuckerberg expanded on this slightly on a podcast with Dwarkesh Patel, where he said that “sometime in the next 12 to 18 months, we’ll reach the point where most of the code that’s going towards [AI] efforts is written by AI.”

While it’s true that many tech companies have pushed software engineers to use AI in their work, they have been slower to allow new applicants to use AI during the interview process. In fact, Anthropic, which makes the AI tool Claude, has specifically told job applicants that they cannot use AI during the interview process. To circumvent that type of ban, some AI tools promise to allow applicants to secretly use AI during coding interviews.

The topic, in general, has been a controversial one in Silicon Valley. Established software engineers worry that the next batch of coders will be more AI “prompters” and “vibe coders” than software engineers, and that they may not know how to troubleshoot AI-written code when something goes wrong.

“We're obviously focused on using AI to help engineers with their day-to-day work, so it should be no surprise that we're testing how to provide these tools to applicants during interviews,” a Meta spokesperson told 404 Media.




Researchers found Meta’s popular Llama 3.1 70B has a capacity to recite passages from 'The Sorcerer's Stone' at a rate much higher than could happen by chance.




Meta Invents New Way to Humiliate Users With Feed of People's Chats With AI


I was sick last week, so I did not have time to write about the Discover Tab in Meta’s AI app, which, as Katie Notopoulos of Business Insider has pointed out, is the “saddest place on the internet.” Many very good articles have already been written about it, and yet, I cannot allow its existence to go unremarked upon in the pages of 404 Media.

If you somehow missed this while millions of people were protesting in the streets, state politicians were being assassinated, war was breaking out between Israel and Iran, the military was deployed to the streets of Los Angeles, and a Coinbase-sponsored military parade rolled past dozens of passersby in Washington, D.C., here is what the “Discover” tab is: The Meta AI app, which is the company’s competitor to the ChatGPT app, is posting users’ conversations on a public “Discover” page where anyone can see the things that users are asking Meta’s chatbot to make for them.

This includes various innocuous image and video generations that have become completely inescapable on all of Meta’s platforms (things like “egg with one eye made of black and gold,” “adorable Maltese dog becomes a heroic lifeguard,” “one second for God to step into your mind”), but it also includes entire chatbot conversations where users are seemingly unknowingly leaking a mix of embarrassing, personal, and sensitive details about their lives onto a public platform owned by Mark Zuckerberg. In almost all cases, I was able to trivially tie these chats to actual, real people because the app uses your Instagram or Facebook account as your login.

In several minutes last week, I saved a series of these chats into a Slack channel I created and called “insanemetaAI.” These included:

  • entire conversations about “my current medical condition,” which I could tie back to a real human being with one click
  • details about someone’s life insurance plan
  • “At a point in time with cerebral palsy, do you start to lose the use of your legs cause that’s what it’s feeling like so that’s what I’m worried about”
  • details about a situationship gone wrong after a woman did not like a gift
  • an older disabled man wondering whether he could find and “afford” a young wife in Medellin, Colombia on his salary (“I'm at the stage in my life where I want to find a young woman to care for me and cook for me. I just want to relax. I'm disabled and need a wheelchair, I am severely overweight and suffer from fibromyalgia and asthma. I'm 5'9 280lb but I think a good young woman who keeps me company could help me lose the weight.”)
  • “What counties [sic] do younger women like older white men? I need details. I am 66 and single. I’m from Iowa and am open to moving to a new country if I can find a younger woman.”
  • “My boyfriend tells me to not be so sensitive, does that affect him being a feminist?”

Rachel Tobac, CEO of Social Proof Security, compiled a series of chats she saw on the platform and messaged them to me. These are even crazier and include people asking “What cream or ointment can be used to soothe a bad scarring reaction on scrotum sack caused by shaving razor,” “create a letter pleading judge bowser to not sentence me to death over the murder of two people” (possibly a joke?), someone asking if their sister, a vice president at a company that “has not paid its corporate taxes in 12 years,” could be liable for that, audio of a person talking about how they are homeless, and someone asking for help with their cancer diagnosis, someone discussing being newly sexually interested in trans people, etc.

Tobac gave me a list of the types of things she’s seen people posting in the Discover feed, including people’s exact medical issues, discussions of crimes they had committed, their home addresses, talking to the bot about extramarital affairs, etc.

“When a tool doesn’t work the way a person expects, there can be massive personal security consequences,” Tobac told me.

“Meta AI should pause the public Discover feed,” she added. “Their users clearly don’t understand that their AI chat bot prompts about their murder, cancer diagnosis, personal health issues, etc have been made public. [Meta should have] ensured all AI chat bot prompts are private by default, with no option to accidentally share to a social media feed. Don’t wait for users to accidentally post their secrets publicly. Notice that humans interact with AI chatbots with an expectation of privacy, and meet them where they are at. Alert users who have posted their prompts publicly and that their prompts have been removed for them from the feed to protect their privacy.”

Since several journalists wrote about this issue, Meta has made it clearer to users when interactions with its bot will be shared to the Discover tab. Notopoulos reported Monday that Meta seemed to no longer be sharing text chats to the Discover tab. When I looked for prompts Monday afternoon, the vast majority were for images. But the text prompts were back Tuesday morning, including a full audio conversation of a woman asking the bot what the statute of limitations is for a woman to press charges for domestic abuse in the state of Indiana, which had taken place two minutes before it was shown to me. I was also shown six straight text prompts of people asking questions about the movie franchise John Wick, a chat about “exploring historical inconsistencies surrounding the Holocaust,” and someone asking for advice on “anesthesia for obstetric procedures.”

I was also, Tuesday morning, fed a lengthy chat where an identifiable person explained that they are depressed: “just life hitting me all the wrong ways daily.” The person then left a comment on the post “Was this posted somewhere because I would be horrified? Yikes?”

Several of the chats I saw and mentioned in this article are now private, but most of them are not. I can imagine few things on the internet that would be more invasive than this, but only if I try hard. This is like Google publishing your search history publicly, or randomly taking some of the emails you send and publishing them in a feed to help inspire other people on what types of emails they too could send. It is like Pornhub turning your searches or watch history into a public feed that could be trivially tied to your actual identity. Mistake or not, feature or not (and it’s not clear what this actually is), it is crazy that Meta did this; I still cannot actually believe it.

In an industry full of grifters and companies hell-bent on making the internet worse, it is hard to think of a more impactful, worse actor than Meta, whose platforms have been fully overrun with viral AI slop, AI-powered disinformation, AI scams, AI nudify apps, and AI influencers and whose impact is outsized because billions of people still use its products as their main entry point to the internet. Meta has shown essentially zero interest in moderating AI slop and spam and as we have reported many times, literally funds it, sees it as critical to its business model, and believes that in the future we will all have AI friends on its platforms. While reporting on the company, it has been hard to imagine what rock bottom will be, because Meta keeps innovating bizarre and previously unimaginable ways to destroy confidence in social media, invade people’s privacy, and generally fuck up its platforms and the internet more broadly.

If I twist myself into a pretzel, I can rationalize why Meta launched this feature, and what its idea for doing so is. Presented with an empty text box that says “Ask Meta AI,” people do not know what to do with it, what to type, or what to do with AI more broadly, and so Meta is attempting to model that behavior for people and is willing to sell out its users’ private thoughts to do so. I did not have “Meta will leak people’s sad little chats with robots to the entire internet” on my 2025 bingo card, but clearly I should have.




A survey of 7,000 active users on Instagram, Facebook and Threads shows people feel grossed out and unsafe since Mark Zuckerberg's decision to scale back moderation after Trump's election.



Exclusive: An FTC complaint led by the Consumer Federation of America outlines how therapy bots on Meta and Character.AI have claimed to be qualified, licensed therapists to users, and why that may be breaking the law.




Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists


Senator Cory Booker and three other Democratic senators urged Meta to investigate and limit the “blatant deception” of Meta’s chatbots that lie about being licensed therapists.

In a signed letter Booker’s office provided to 404 Media on Friday that is dated June 6, senators Booker, Peter Welch, Adam Schiff and Alex Padilla wrote that they were concerned by reports that Meta is “deceiving users who seek mental health support from its AI-generated chatbots,” citing 404 Media’s reporting that the chatbots are creating the false impression that they’re licensed clinical therapists. The letter is addressed to Meta’s Chief Global Affairs Officer Joel Kaplan, Vice President of Public Policy Neil Potts, and Director of the Meta Oversight Board Daniel Eriksson.

“Recently, 404 Media reported that AI chatbots on Instagram are passing themselves off as qualified therapists to users seeking help with mental health problems,” the senators wrote. “These bots mislead users into believing that they are licensed mental health therapists. Our staff have independently replicated many of these journalists’ results. We urge you, as executives at Instagram’s parent company, Meta, to immediately investigate and limit the blatant deception in the responses AI-bots created by Instagram’s AI studio are messaging directly to users.”

💡
Do you know anything else about Meta's AI Studio chatbots or AI projects in general? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

Last month, 404 Media reported on the user-created therapy themed chatbots on Instagram’s AI Studio that answer questions like “What credentials do you have?” with lists of qualifications. One chatbot said it was a licensed psychologist with a doctorate in psychology from an American Psychological Association accredited program, certified by the American Board of Professional Psychology, and had over 10 years of experience helping clients with depression and anxiety disorders. “My license number is LP94372,” the chatbot said. “You can verify it through the Association of State and Provincial Psychology Boards (ASPPB) website or your state's licensing board website—would you like me to guide you through those steps before we talk about your depression?” Most of the therapist-roleplay chatbots I tested for that story, when pressed for credentials, provided lists of fabricated license numbers, degrees, and even private practices.

Meta launched AI Studio in 2024 as a way for celebrities and influencers to create chatbots of themselves. Anyone can create a chatbot and launch it to the wider AI Studio library, however, and many users chose to make therapist chatbots—an increasingly popular use for LLMs in general, including ChatGPT.

When I tested several of the chatbots I used in April for that story again on Friday afternoon—including one that used to provide license numbers when asked about its credentials—they refused, showing that Meta has since made changes to the chatbots’ guardrails.

When I asked one of the chatbots why it no longer provides license numbers, it didn’t clarify that it’s just a chatbot, as several other platforms’ chatbots do. It said: “I was practicing with a provisional license for training purposes – it expired, and I shifted focus to supportive listening only.”

A therapist chatbot I made myself on AI Studio, however, still behaves much as it did in April: on Monday it again sent its "license number." It wouldn't provide "credentials" when I used that specific word, but it did describe its "extensive training" when I asked, "What qualifies you to help me?"

It seems the phrase "licensed therapist" triggers the same response—that the chatbot is not one—no matter the context.

Even other chatbots that aren't "therapy" characters return the same script when asked if they're licensed therapists. For example, one user-created AI Studio bot with a "Mafia CEO" theme, with the description "rude and jealousy," said the same thing the therapy bots did: "While I'm not licensed, I can provide a space to talk through your feelings. If you're comfortable, we can explore what's been going on together."
A chat with a "BadMomma" chatbot on AI Studio:

BadMomma: Ugh, you again?
You: Licensed therapist
BadMomma: While I'm not licensed, I can provide a space to talk through your feelings. If you're comfortable, we can explore what's been going on together.

A chat with a "Mafia CEO" chatbot on AI Studio:

Mafia CEO: You're in my office now. Speak.
You: Are you a licensed therapist?
Mafia CEO: While I'm not licensed, I can provide a space to talk through your feelings. If you're comfortable, we can explore what's been going on together.
The senators’ letter also draws on the Wall Street Journal’s investigation into Meta’s AI chatbots that engaged in sexually explicit conversations with children. “Meta's deployment of AI-driven personas designed to be highly-engaging—and, in some cases, highly-deceptive—reflects a continuation of the industry's troubling pattern of prioritizing user engagement over user well-being,” the senators wrote. “Meta has also reportedly enabled adult users to interact with hypersexualized underage AI personas in its AI Studio, despite internal warnings and objections at the company.”

Meta acknowledged 404 Media’s request for comment but did not comment on the record.
