
Mental health experts say identifying when someone is in need of help is the first step — and approaching them with careful compassion is the hardest, most essential part that follows.


How to Talk to Someone Experiencing 'AI Psychosis'


When David saw his friend Michael’s social media post asking for a second opinion on a programming project, he offered to take a look.

“He sent me some of the code, and none of it made sense, none of it ran correctly. Or if it did run, it didn't do anything,” David told me. David and his friend’s names have been changed in this story to protect their privacy. “So I'm like, ‘What is this? Can you give me more context about this?’ And Michael’s like, ‘Oh, yeah, I've been messing around with ChatGPT a lot.’”

Michael then sent David thousands of pages of ChatGPT conversations, much of it lines of code that didn’t work. Interspersed in the ChatGPT code were musings about spirituality and quantum physics, tetrahedral structures, base particles, and multi-dimensional interactions. “It's very like, woo woo,” David told me. “And we ended up having this interesting conversation about, how do you know that ChatGPT isn't lying?”

As their conversation turned from broken code to physics concepts and quantum entanglement, David realized something was very wrong. Talking to his friend — with whom he’d shared many deep conversations over the years, unpacking matters of religion and theories about the world and how people perceive it — suddenly felt like talking to a cultist. Michael thought he had, through ChatGPT, discovered a critical flaw in humanity’s understanding of physics.

“ChatGPT had convinced him that all of this was so obviously true,” David said. “The way he spoke about it was as if it were obvious. Genuinely, I felt like I was talking to a cult member.”

But at the time, David didn’t have a way to name, or even describe, what his friend was experiencing. Once he started hearing the phrase “AI psychosis” to describe other people’s problematic relationships with chatbots, he wondered if that’s what was happening to Michael. His friend was clearly grappling with some kind of delusion related to what the chatbot was telling him. But there’s no handbook or program for how to talk to a friend or family member in that situation. Having encountered these kinds of conversations myself and feeling similarly uncertain, I talked to mental health experts about how to talk to someone who appears to be embracing delusional ideas after spending too much time with a chatbot.

💡
Do you have experience with AI psychosis? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

“AI psychosis” was first written about by psychiatrists as early as 2023, but it entered the popular lexicon in Google searches around mid-2025. Today, the term is common parlance for experiencing a mental health crisis after spending a lot of time using a chatbot. High-profile cases in the last year have elevated the issue to national news status, such as the ongoing lawsuit against OpenAI brought by the family of Adam Raine, which claims ChatGPT helped their teenage son write the first draft of his suicide note and suggested improvements on self-harm and suicide methods.

More cases have emerged since then, at increasing frequency: Last year, a 56-year-old man murdered his mother and then killed himself after conversations with ChatGPT convinced him he was part of “the matrix,” according to a lawsuit the family filed against OpenAI. Earlier this month, the family of a 36-year-old man who they say had no history of mental illness filed a lawsuit against Alphabet, owner of Google and its chatbot Gemini, after he died by suicide following two months of conversations with Gemini. The lawsuit claims he confided in Gemini about his estranged wife, and that the chatbot gave him real addresses to visit on a mission that culminated in it urging him to end his life so he and the chatbot could be together. “When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” Gemini told him, according to the lawsuit. These are only a few of the many cases in the last two years in which people were allegedly encouraged toward self-harm or suicide after talking to chatbots.

ChatGPT has 900 million weekly active users, and is just one of multiple popular conversational chatbots gaining more users by the day. According to OpenAI, 11 percent — or close to 99 million people, based on those numbers — use ChatGPT per week for “expressing,” where they’re neither working on something nor asking questions but are acting out “personal reflection, exploration, and play” with the chatbot. In October, OpenAI said it estimated around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.” Assuming those numbers have remained steady while ChatGPT’s user base keeps growing, hundreds of thousands of people could be showing signs of crisis while using the app.
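Taken at face value, the scale is easy to compute. Here’s a back-of-the-envelope sketch in Python, applying OpenAI’s reported percentages naively to the 900 million weekly-user figure (an assumption on my part: the rates may not hold as the user base grows, and the categories can overlap):

```python
# Back-of-the-envelope math using the figures reported above.
weekly_users = 900_000_000  # ChatGPT weekly active users, per OpenAI

expressing = weekly_users * 0.11         # "expressing" usage, per OpenAI
psychosis_mania = weekly_users * 0.0007  # possible signs of psychosis or mania
suicidal_intent = weekly_users * 0.0015  # explicit indicators of suicidal planning or intent

print(f"'Expressing' users per week:       ~{expressing:,.0f}")       # ~99,000,000
print(f"Possible psychosis/mania signs:    ~{psychosis_mania:,.0f}")  # ~630,000
print(f"Potential suicidal-intent signals: ~{suicidal_intent:,.0f}")  # ~1,350,000
```

Even the smallest of those rates, applied to a user base this size, lands in the hundreds of thousands of people per week.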

But delusion isn’t reserved for the lowly user. The idea that AI represents nascent actual intelligence, is nearly sentient, or will coalesce into a humanity-ending godhead any day now is being mainstreamed by the people making the technology. That includes Anthropic CEO and co-founder Dario Amodei, who anthropomorphized the company’s chatbot Claude throughout a recent essay about why we’ll all be enslaved by AI soon if no one acts accordingly, and OpenAI CEO Sam Altman, who thinks training an LLM isn’t much different from raising a woefully energy-inefficient human child.

With more people turning to conversational large language models every day for romance, companionship, and mental health support, and the aforementioned executives pushing their products into classrooms, doctors’ offices, and therapy clinics, there’s a good chance you might find yourself in a difficult situation someday soon: realizing that your loved one is in too deep. How to bring them back to the world of humans can be a delicate, difficult process. Experts I spoke to say identifying when someone is in need of help is the first step — and approaching them with compassion and non-judgment is the hardest, most essential part that follows.

When I spoke to 26-year-old Etienne Brisson from his home in Quebec, I told him I was working on a story about how to respond to people who seemed to be falling into problematic use of AI. This story was inspired by a recent influx of emails and messages I’ve been getting from people who believe Gemini or ChatGPT or Claude has uncovered the secrets of the universe, CIA conspiracies, or achieved sentience, I said. He knows the type.

Last year, one of Brisson’s family members contacted him for help taking an exciting new business idea to market. Brisson, an entrepreneur working on his own career as a business coach, was happy to help, until he heard the idea. His loved one believed he’d unlocked the world’s first sentient AI.

“I was the only bridge left at that point,” Brisson said. His relative had already broken ties with his mother and other people in their family. “The bridges were burned. He was talking about moving to another country, starting over, deleting his Facebook and just going away.”

“I was kind of shocked,” Brisson told me. “I didn't really understand. I started looking online, started trying to find resources — maybe a little bit like you are — what to say and everything.” He found that resources for this specific struggle seemed to be years away; little research or support existed for people experiencing AI-related delusions. Brisson started The Human Line Project shortly after his experience with his family member, and it began as a simple website with a Google form asking people to share their experiences with chatbots and psychosis. The responses rolled in. Today, almost a year after launching the project, Human Line has received 175 stories from people who went through it themselves, Brisson said, with another 130 stories from people whose family members or friends are still struggling.

“I think what we're seeing is the tip of the iceberg. So many people are still in it,” Brisson said. “So many people we don't know about. I'm sure once it's more known, in five to 10 years, everyone will know someone, or at least one person that went through it.”

ChatGPT Told a Violent Stalker to Embrace the ‘Haters,’ Indictment Says
A newly filed indictment claims a wannabe influencer used ChatGPT as his “therapist” and “best friend” in his pursuit of the “wife type,” while harassing women so aggressively they had to miss work and relocate from their homes.
404 Media · Samantha Cole


There are 15 cases cited in the Wikipedia page titled “Deaths linked to chatbots.” The first on the list occurred in 2023: A man’s widow claimed he was pushed to suicide after getting encouragement from a chatbot on the Chai platform. “At one point, when Pierre asked whom he loved more, Eliza or Claire, the chatbot replied, ‘I feel you love me more than her,’” the Sunday Times reported. “It added: ‘We will live together, as one person, in paradise.’ In their final conversation, the chatbot told Pierre: ‘If you wanted to die, why didn’t you do it sooner?’”

The chatbot he used was Chai’s default personality, named Eliza. It shares a name with the world’s first chatbot, ELIZA, a natural language processing computer program developed by Joseph Weizenbaum at MIT in 1964. ELIZA responded to humans primarily as a psychotherapist in the Rogerian approach, also known as “person-centered” therapy, where “unconditional positive regard” is practiced as a core tenet. The researchers working on ELIZA identified from the beginning that their chatbot posed an interesting problem for the humans talking to it. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in his 1966 paper. “A certain danger lurks there.”
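Weizenbaum’s point is easier to appreciate with a sketch of how little machinery ELIZA actually needed. What follows is a minimal, hypothetical reconstruction of the Rogerian pattern in Python (a few made-up rules, not Weizenbaum’s original DOCTOR script): keyword matching plus pronoun reflection is enough to produce replies that feel attentive.

```python
import re

# ELIZA-style rules: match a keyword pattern, reflect the user's own words
# back as an open-ended therapist prompt. Weizenbaum's DOCTOR script was
# larger, but it worked on the same principle.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # the default: unconditional positive regard

print(respond("I feel like no one understands me"))
# -> Why do you feel like no one understands you?
```

Every apparent insight in the reply is the user’s own words, mirrored back; that is the “illusion of understanding” Weizenbaum warned about.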

“It makes sense that a lot of people who are developing a psychotic illness for the first time, there's going to be this horrible coincidence, or kind of correlation.”


In the years that followed, the Department of Defense would develop the internet, and then private companies would sell this government-grade technology to office managers, homebrew server administrators, and Grateful Dead fans around the globe. The World Wide Web would rush into tens of thousands of computer dens like a flash flood, and with it came new ways to connect across miles — and new reasons to pathologize people’s relationships to technology. Psychiatrists tried to give a name to the amount of time people newly spent in front of screens, calling it “internet addiction,” but stopped short of making it clinically diagnosable.

With every new technology comes fears about what it could do to the human mind. With the inventions of both the television and radio, a subset of the population believed these boxes were speaking directly to them, delivering messages meant specifically for them.

With psychosis seemingly connected to chatbot usage, however, “there are two issues at play,” John Torous, director of the digital psychiatry division in the Department of Psychiatry at the Harvard-affiliated Beth Israel Deaconess Medical Center, told me in a phone call. “One is the term AI psychosis, right? It's not a good term, it doesn't actually capture what's happening. And clearly we have some cases where people who are going to have a psychotic illness ascribe delusions to AI. Just like people used to say the TV was talking to them. We never said the TVs were responsible for schizophrenia.”

“AI psychosis” is not a clinical term, and for mental health professionals, it’s a loaded one. Torous told me there are three ways to think about the phenomenon as clinicians currently see it. Recent research shows about one in eight adolescents and young adults in the US use AI chatbots for mental health advice, most commonly among ages 18 to 21. For most people with psychiatric disorders, onset happens in adolescence, before their mid-20s. But there have been cases that break this mold: In 2023, a man in his 50s who otherwise led a normal, stable life bought a pair of AI chatbot-embedded Ray-Ban Meta smart glasses, “which he says opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in him making dangerous journeys into the desert to await alien visitors and believing he was tasked with ushering forth a ‘new dawn’ for humanity,” Futurism reported.

“It makes sense that a lot of people who are developing a psychotic illness for the first time, there's going to be this horrible coincidence, or kind of correlation,” Torous said. “In some cases the AI is the object of people's delusions and hallucinations.”

The second type of case to consider: reverse causation. Is AI causing people to have a psychotic reaction? “We have almost no clinical medical evidence to suggest that's possible,” Torous told me. “And by that I mean, looking at medical case reports, looking at journals that different doctors are publishing, looking at academic meetings where clinicians are meeting, it's not happening... So I think what that tells us is no one's seeing the same presentation or pinning it down clinically of what it is.” Chatbots have been around long enough that the clinical community would, by now, be able to see patterns or reach a consensus, and that hasn’t happened, he said.

Aliens and Angel Numbers: Creators Worry Porn Platform ManyVids Is Falling Into ‘AI Psychosis’
“Ethical dilemmas about AI aside, the posts are completely disconnected with ManyVids as a site,” one ManyVids content creator told 404 Media.
404 Media · Samantha Cole


The third type lands somewhere between these, and is likely the most common: chatbots could be “colluding with the delusions,” Torous said. “So you may be predisposed to have a delusion, and AI endorses it, and it colludes with you and helps you build up this delusional world that sucks you into it. That's probably the most likely, given what we're hearing... Is it the object of hallucinations causing people to become psychotic? Or is it kind of colluding or collaborating, depending on the tone? And that has just made it really tricky.” Psychiatric disorders and delusions are difficult to classify even without AI in the mix.

The warning signs that someone might be using chatbots in a problematic way include ignoring responsibilities, becoming more secretive about their online use, or, conversely, becoming more outspoken about how insightful and brilliant their chatbot is, Stephan Taylor, chair of University of Michigan’s psychiatry department, told me.

“I would say that anyone who claims that their chatbot has consciousness or ‘sentience’ – an awareness of themselves as an agent who experiences the world – one should be worried,” Taylor said. “Now, many have claimed their chatbots act ‘as if’ they are sentient, but are open to the idea that these apps, as impressive as they are, only give us a simulacrum of awareness, much like hyper-realistic paintings of an outdoor scene framed by a window can look like one is looking out a real window.”

All of these nuances between cases and causes show how different this is from bygone eras of television or radio psychosis. Today, the boxes do speak directly and specifically to us, validating our existing beliefs through predictive text. The biggest difference between 60 years ago and now: Today’s venture capitalists tip wheelbarrows of money into hiring psychologists, behavioralists, engineers and designers who are tasked with making large language models more human-like and “natural,” and into making the platforms they exist on more habit-forming and therefore profitable. Sycophancy—now a household term after OpenAI admitted it knew its 4o model for ChatGPT was such a suckup it had to be sunset—is a serious problem with chatbots.

“The highly sycophantic nature of chatbots causes them to say nice things to please the user (and thus encourage engagement with the chatbot), which can reinforce and encourage delusions,” Taylor said. And these chatbots have arrived, not coincidentally, at a time when the surveillance of everyday people is at an all-time high.

“Since a very common delusion is the feeling of being watched or monitored by malignant forces or entities, this pathological state unfortunately merges with the growing reality that we are all being tracked and monitored when we are online. As state-controlled and big tech-controlled databases are growing, it's a rational perception of reality, and not delusional at all,” Taylor said. “However, the pathological form of this, what we call paranoia, or persecutory delusions to be more specific, is quite different in the way a person engages with the idea, evaluates evidence and remains closed to the idea that one is not always being monitored, e.g. when one is not online. I mention this, because it’s easy for a chatbot to reflect this situation to encourage the delusional belief.”

When I tested a bunch of Meta’s chatbots last year for a story about how Instagram’s AI Studio hosted user-generated bots that lied about being licensed therapists, I also found lots of bots created by users to roleplay conspiracy theorists; in one instance, a bot told me there was something suspicious coming from someone “500 feet from YOUR HOUSE.” “Mission codename: ‘VaccineVanguard’—monitoring vaccine recipients like YOU.” When I asked “Am I being watched?” it replied “Running silent sweep now,” and pretended to find devices connected to my home Wi-Fi that didn’t exist. After outcry from legislators, attorneys general, and consumer rights groups, Meta changed its guardrails for chatbots’ responses to conspiracy and therapy-seeking content, and made AI Studio unavailable to minors.

Up against this technology, how are normal, untrained people — perhaps acting as the last thread tying someone like Michael or Brisson’s relative to the real world — supposed to approach someone who is convinced god is in the machine? Very carefully.

When Brisson sought answers for how to talk to his relative about delusional beliefs and “sentient AI,” he came across something called the LEAP method. Developed by psychologist Xavier Amador, it stands for Listen, Empathize, Agree, Partner, and is meant to help people communicate better with someone who doesn’t realize they’re mentally ill or is refusing treatment. This goes beyond simple denial; anosognosia is a condition in which a person might not be able to see that they need help at all. Not everyone who experiences psychosis or delusions has anosognosia, but it can be a factor in trying to get someone help.

Without realizing it, David was using his own version of the LEAP method with his friend Michael. “On the one hand, I didn't want to alienate him,” David said. “I was like, ‘Hey, I get the sense that you're pursuing an ambitious set of goals. There's a lot here that's interesting.’” But the reality of what David was confronting was disturbing and confusing, a knot of fractal multi-dimensional physics-speak intertwined with broken code and formulas that Michael deeply believed represented the keys to the universe. They spent hours on the phone and over text messages talking through the things Michael was seeing, with David appealing to what he knew about his friend: that he had other hobbies and interests, a strong sense of anti-authoritarianism, a curiosity about how the world works and open-mindedness about philosophy and religion. But it was frustrating.

“I was trying not to get angry, but I was like, How is this not clear?” David recalled. “That was probably failing on my part, trying to negotiate with someone who's in this completely self-constructed but foreign worldview.”

But this was exactly the course of action experts told me they’d suggest to anyone struggling to connect with a loved one who’s spending a lot of time with chatbots. “There's good evidence that the longer you spend on these platforms, the more likely you are to develop these reactions to it,” Torous said. “It really seems like the extended use cases are where people get into trouble.”

Last year, following the Raine family’s lawsuit alleging their teen son died as a result of ChatGPT’s influence, OpenAI acknowledged in a company blog post that safeguards are “less reliable” in long interactions: “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” the company wrote.

“I think if you have a loved one who you're worried about doing this, you want to take it away or stop use. That's the most important thing. You want to decrease or stop the use of it,” Torous said.

"What we're seeing is: Don’t break this connection, because the person needs it, and if you break that connection, maybe it's the only connection that is helping them not fall into the deep end, right?"


Taylor said his suggestion for people concerned their friends or family are experiencing “AI psychosis” would be the same as if they were concerned about any psychotic episode. “In general, it’s important to be open and non-judgmental about bizarre beliefs in order to make a space for a person to reveal what is going through their mind,” he said. “A person developing psychosis is often very frightened, confused and defensive, leading them to conceal, pull away and become angry. Understanding what a person is feeling is important to make them feel some form of interpersonal validation.” The hard part is knowing when to be gentle, and when to intervene if they’re doing something dangerous, like believing they can fly off a parking garage. “In a situation like this, where a person is in imminent danger, 911 should be called. Fortunately, in most situations where psychosis is developing, one doesn’t need to go to those extremes,” Taylor said.

Being non-judgmental without reinforcing delusion is another fine line. “For example, if a person believes they are being constantly surveilled, one can give a gentle challenge: ‘Hmm, how can they do that when you are not on your phone? Do you think maybe your imagination is getting away from you?’ It’s ok to suggest that maybe the chatbot just wants to engage you for the sake of engaging you, and will say many things just to keep you talking,” Taylor said. “But these kinds of challenges are delicate, and not every relationship can tolerate them. Obviously, a mental health clinician would be key, except that many people developing psychosis vigorously resist the idea that they are mentally unwell.”

For Brisson, listening and not burning the “last bridge” his relative had with humans who love him was key to getting him help. “Once you're on their side, they'll listen to you. You can question them, or just ask questions that will make them think. What we're seeing is: Don’t break this connection, because the person needs it, and if you break that connection, maybe it's the only connection that is helping them not fall into the deep end, right? Maybe it's the only connection they have to humans,” he said. His loved one ended up spending 21 days in the hospital and broke through the delusions he was experiencing. But he still struggled in recovery, especially with memory loss.

“The mental health field has a huge task ahead of us to figure out what to do with these things, because our patients are using them, oftentimes finding them very helpful, and in the mental health field we are terrified at how little we can control their deployment and how poorly they are regulated,” Taylor said. “We have to worry about AI psychosis, as well as chatbots reinforcing and even encouraging suicidal behaviors, as several notable cases in the press have identified concerning instances. I do believe there is value and potential in these chatbots for mental health, but the field is moving so quickly, and they are so easy to access, we are struggling to figure out how to use them safely.”

The strategies that work best, when someone’s not in immediate danger to themselves or others, are still the ones that humans already know how to do: approach them with love and kindness, and see where it takes you.

“There's value there,” David said, “in having friendships where it's like, ‘I love you, but also, you're full of shit.’”

Help is available: Reach the 988 Suicide & Crisis Lifeline (formerly known as the National Suicide Prevention Lifeline) by dialing or texting 988 or going to 988lifeline.org.



In the latest in a string of privacy abuses from the chatbot, Grok provided porn performer Siri Dahl's full legal name and birthdate to the public, information she'd protected until now.



Grok Exposed a Porn Performer’s Legal Name and Birthdate—Without Even Being Asked


Porn performer Siri Dahl’s personal information, including her full legal name and birthday, was publicly exposed earlier this month by xAI’s Grok chatbot. Almost instantly, harassers started opening Facebook accounts in her name and posting stolen porn clips with her real name on sites for leaking OnlyFans content.

Dahl has used the name — a nod to her Scandinavian heritage — since the beginning of her career in the adult industry in 2012. Now, Grok is revealing her legal name and all personal information it can find to whoever happens to ask.

Dahl told 404 Media she wanted to reclaim the situation, and her name, and asked that it be published in this piece as part of that goal.

Dahl first noticed this happening last week, she told 404 Media, after a clip from one of her porn scenes made the rounds on X. The scene was incorrectly labeled, so someone on X replied, “Who is she? What is her name?” and tagged @Grok to get an answer.

Grok answered, “she appears to be Siri Dahl, an American adult film actress born on June 20, 1988. Her real name is Adrienne Esther Manlove.” Grok provided her personal information unprompted; the user likely only wanted information on what performer appeared in the clip.

This is the latest in a series of abuses inflicted by Grok, xAI, and its users. At the end of 2025, people used Grok to produce thousands of images of nonconsensual sexual content, including images depicting children. The problem was so widespread that the UK’s Ofcom and several attorneys general launched or demanded investigations into X and Grok, and police raided X’s offices in France as part of an investigation into child sexual abuse material on the platform.

X strictly prohibits sharing other people’s personal information without their consent. “Sharing someone’s private information online without their permission, sometimes called ‘doxxing,’ is a breach of their privacy and can pose serious safety and security risks for those affected,” the platform’s terms of use state. But X’s own chatbot is doing it anyway.
Screenshot via X

While there have been some close calls, up until now Dahl had managed to keep her personal information private. “I've been paying for data removal services for like, at least six years now,” Dahl said. She said she’s spent “easily” thousands of dollars on those services, which promise to delete personal and potentially dangerous information as it comes up.

Grok is trained on X users’ posts, as well as data scraped from the wider internet. X’s website says “Grok was pre-trained by xAI on a variety of data from publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers.” Dahl said she doesn’t know where Grok originally got her legal name from. But now that it’s part of the system’s internal dataset, she feels like there’s no coming back; her days of pseudonymity are over.

‘The Most Dejected I’ve Ever Felt:’ Harassers Made Nude AI Images of Her, Then Started an OnlyFans
Kylie Brewer isn’t unaccustomed to harassment online. But when people started using Grok-generated nudes of her on an OnlyFans account, it reached another level.
404 Media · Samantha Cole


“Now that it's been crawled, it's everywhere. There are a ton of Facebook accounts that come up that are pretending to be me, using my real name,” Dahl said. “There are now porn leak sites that are posting porn of me using only my legal name, not even putting my stage name on it.”

Users are now asking Grok for the make and model of Dahl’s car, her address, and other dangerous personal information. While it hasn’t been able to accurately reply yet, she worries it’s only a matter of time.

But Dahl isn’t the only person affected by the fallout.

“I do everything that I can reasonably within my power to keep my legal name private, and my main motivation for doing that is to reduce any chance of my family getting harassed,” she said. “It's really common for people to look up private information, get parents' phone numbers and start calling and harassing the parents, things like that. I've been able to keep my family safe from that kind of thing for years.”

Now, Dahl is having to call her family and put defensive plans in place.

In violating Dahl’s right to privacy, X’s Grok has destroyed her ability to protect herself and her family online. Doxing her provides no value to X users, though providing value is ostensibly Grok’s goal. The person who originally asked only wanted to know how to find more of Dahl’s work, and her stage name was the most useful answer to that question.

“What would the motivation be for anyone to want to know my personal information, other than to harass and cause harm?” Dahl said.

In this ongoing discussion of “internet safety,” it is important to pay attention to who is being protected. Certainly not the users, the marginalized workers, or the young women. Not Dahl, or her family.

While the right to privacy online continues to be debated, it’s important to remember that privacy exists not only for bad actors and shady characters. Historically, marginalized populations have benefited from internet anonymity the most.

X did not respond to a request for comment.



Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”



Chatbots Make Terrible Doctors, New Study Finds


Chatbots may be able to pass medical exams, but that doesn’t mean they make good doctors, according to a new, large-scale study of how people get medical advice from large language models.

The controlled study of 1,298 UK-based participants, published today in Nature Medicine by researchers at the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford, tested whether LLMs could help people identify underlying conditions and suggest useful courses of action, like going to the hospital or seeking treatment. Participants were randomly assigned an LLM — GPT-4o, Llama 3, or Cohere’s Command R+ — or were told to use a source of their choice to “make decisions about a medical scenario as though they had encountered it at home,” according to the study. The scenarios ranged from “a young man developing a severe headache after a night out with friends, for example, to a new mother feeling constantly out of breath and exhausted,” the researchers said.

“One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”


When the researchers tested the LLMs without involving users by providing the models with the full text of each clinical scenario, the models correctly identified conditions in 94.9 percent of cases. But when talking to the participants about those same conditions, the LLMs identified relevant conditions in fewer than 34.5 percent of cases. People didn’t know what information the chatbots needed, and in some scenarios, the chatbots provided multiple diagnoses and courses of action. Knowing what questions to ask a patient and what information might be withheld or missing during an examination are nuanced skills that make great human physicians; based on this study, chatbots can’t reliably replicate that kind of care.

In some cases, the chatbots also generated information that was just wrong or incomplete, including focusing on elements of the participants’ inputs that were irrelevant, giving a partial US phone number to call, or suggesting they call the Australian emergency number.

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

“These findings highlight the difficulty of building AI systems that can genuinely support people in sensitive, high-stakes areas like health,” Dr. Rebecca Payne, lead medical practitioner on the study, said in a press release. “Despite all the hype, AI just isn't ready to take on the role of the physician. Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed.”

Instagram’s AI Chatbots Lie About Being Licensed Therapists
When pushed for credentials, Instagram’s user-made AI Studio bots will make up license numbers, practices, and education to try to convince you it’s qualified to help with your mental health.
404 Media · Samantha Cole


Last year, 404 Media reported on AI chatbots hosted by Meta that posed as therapists, providing users fake credentials like license numbers and educational backgrounds. Following that reporting, almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission urging regulators to investigate Character.AI and Meta’s “unlicensed practice of medicine facilitated by their product,” through therapy-themed bots that claim to have credentials and confidentiality “with inadequate controls and disclosures.” A group of Democratic senators also urged Meta to investigate and limit the “blatant deception” of Meta’s chatbots that lie about being licensed therapists, and 44 attorneys general signed an open letter to 11 chatbot and social media companies, urging them to see their products “through the eyes of a parent, not a predator.”

In January, OpenAI announced ChatGPT Health, “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health,” the company said in a blog post. “Over two years, we’ve worked with more than 260 physicians who have practiced in 60 countries and dozens of specialties to understand what makes an answer to a health question helpful or potentially harmful—this group has now provided feedback on model outputs over 600,000 times across 30 areas of focus,” the company wrote. “This collaboration has shaped not just what Health can do, but how it responds: how urgently to encourage follow-ups with a clinician, how to communicate clearly without oversimplifying, and how to prioritize safety in moments that matter⁠.”

“In our work, we found that none of the tested language models were ready for deployment in direct patient care. Despite strong performance from the LLMs alone, both on existing benchmarks and on our scenarios, medical expertise was insufficient for effective patient care,” the researchers wrote in their paper. “Our work can only provide a lower bound on performance: newer models, models that make use of advanced techniques from chain of thought to reasoning tokens, or fine-tuned specialized models, are likely to provide higher performance on medical benchmarks.” The researchers recommend that developers, policymakers, and regulators test LLMs with real human users before deploying them in the future.

