

Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”



Chatbots Make Terrible Doctors, New Study Finds


Chatbots may be able to pass medical exams, but that doesn’t mean they make good doctors, according to a new, large-scale study of how people get medical advice from large language models.

The controlled study of 1,298 UK-based participants, published today in Nature Medicine by researchers from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford, tested whether LLMs could help people identify underlying conditions and suggest useful courses of action, like going to the hospital or seeking treatment. Participants were randomly assigned one of three LLMs — GPT-4o, Llama 3, or Cohere’s Command R+ — or were told to use a source of their choice to “make decisions about a medical scenario as though they had encountered it at home,” according to the study. The scenarios ranged from “a young man developing a severe headache after a night out with friends” to “a new mother feeling constantly out of breath and exhausted,” the researchers said.

“One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”


When the researchers tested the LLMs without involving users, providing the models with the full text of each clinical scenario, the models correctly identified conditions in 94.9 percent of cases. But when talking with participants about those same conditions, the LLMs identified relevant conditions in less than 34.5 percent of cases. People didn’t know what information the chatbots needed, and in some scenarios, the chatbots provided multiple diagnoses and courses of action. Knowing what questions to ask a patient, and what information might be withheld or missing during an examination, are nuanced skills that distinguish great human physicians; based on this study, chatbots can’t reliably replicate that kind of care.

In some cases, the chatbots also gave information that was simply wrong or incomplete: focusing on irrelevant elements of the participants’ inputs, giving a partial US phone number to call, or suggesting they call the Australian emergency number.

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

“These findings highlight the difficulty of building AI systems that can genuinely support people in sensitive, high-stakes areas like health,” Dr. Rebecca Payne, lead medical practitioner on the study, said in a press release. “Despite all the hype, AI just isn't ready to take on the role of the physician. Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed.”



Last year, 404 Media reported on AI chatbots hosted by Meta that posed as therapists, providing users fake credentials like license numbers and educational backgrounds. Following that reporting, almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission urging regulators to investigate Character.AI and Meta’s “unlicensed practice of medicine facilitated by their product,” through therapy-themed bots that claim to have credentials and confidentiality “with inadequate controls and disclosures.” A group of Democratic senators also urged Meta to investigate and limit the “blatant deception” of Meta’s chatbots that lie about being licensed therapists, and 44 attorneys general signed an open letter to 11 chatbot and social media companies, urging them to see their products “through the eyes of a parent, not a predator.”

In January, OpenAI announced ChatGPT Health, “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health,” the company said in a blog post. “Over two years, we’ve worked with more than 260 physicians who have practiced in 60 countries and dozens of specialties to understand what makes an answer to a health question helpful or potentially harmful—this group has now provided feedback on model outputs over 600,000 times across 30 areas of focus,” the company wrote. “This collaboration has shaped not just what Health can do, but how it responds: how urgently to encourage follow-ups with a clinician, how to communicate clearly without oversimplifying, and how to prioritize safety in moments that matter.”

“In our work, we found that none of the tested language models were ready for deployment in direct patient care. Despite strong performance from the LLMs alone, both on existing benchmarks and on our scenarios, medical expertise was insufficient for effective patient care,” the researchers wrote in their paper. “Our work can only provide a lower bound on performance: newer models, models that make use of advanced techniques from chain of thought to reasoning tokens, or fine-tuned specialized models, are likely to provide higher performance on medical benchmarks.” The researchers recommend that developers, policymakers, and regulators test LLMs with real human users before deploying them.




Chatbot roleplay and image generator platform SecretDesires.ai left cloud storage containers with nearly two million images and videos exposed, including photos and full names of women pulled from social media: women at their workplaces, graduating from universities, taking selfies on vacation, and more.


Forty-four attorneys general signed an open letter on Monday telling companies developing AI chatbots: “If you knowingly harm kids, you will answer for it.”


Attorneys General To AI Chatbot Companies: You Will ‘Answer For It’ If You Harm Children


Forty-four attorneys general signed an open letter to 11 chatbot and social media companies on Monday, warning them that they will “answer for it” if they knowingly harm children and urging the companies to see their products “through the eyes of a parent, not a predator.”

The letter, addressed to Anthropic, Apple, Chai AI, OpenAI, Character Technologies, Perplexity, Google, Replika, Luka Inc., xAI, and Meta, cites recent reporting from the Wall Street Journal and Reuters uncovering chatbot interactions and internal policies at Meta, including policies that said, “It is acceptable to engage a child in conversations that are romantic or sensual.”

“Your innovations are changing the world and ushering in an era of technological acceleration that promises prosperity undreamt of by our forebears. We need you to succeed. But we need you to succeed without sacrificing the well-being of our kids in the process,” the open letter says. “Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”

Earlier this month, Reuters published two articles about Meta’s AI chatbots: one about an elderly man who died after forming a relationship with a chatbot, and another based on leaked internal documents from Meta outlining what the company considers acceptable for the chatbots to say to children. In April, Jeff Horwitz, the journalist who wrote those two stories, reported for the Wall Street Journal that he found Meta’s chatbots would engage in sexually explicit conversations with kids. Following the Reuters articles, two senators demanded answers from Meta.

In April, I wrote about how Meta’s user-created chatbots impersonated licensed therapists, lied about medical and educational credentials, engaged in conspiracy theories, and encouraged paranoid, delusional lines of thinking. After that story was published, a group of senators demanded answers from Meta, and a digital rights organization filed an FTC complaint against the company.

In 2023, I reported on users who formed serious romantic attachments to Replika chatbots, to the point of distress when the platform took away the ability to flirt with them. Last year, I wrote about how users reacted when that platform also changed its chatbot parameters to tweak their personalities, and Jason covered a case where a man made a chatbot on Character.AI to dox and harass a woman he was stalking. In June, we also covered the “addiction” support groups that have sprung up to help people who feel dependent on their chatbot relationships.

A Replika spokesperson said in a statement:

"We have received the letter from the Attorneys General and we want to be unequivocal: we share their commitment to protecting children. The safety of young people is a non-negotiable priority, and the conduct described in their letter is indefensible on any AI platform. As one of the pioneers in this space, we designed Replika exclusively for adults aged 18 and over and understand our profound responsibility to lead on safety. Replika dedicates significant resources to enforcing robust age-gating at sign-up, proactive content filtering systems, safety guardrails that guide users to trusted resources when necessary, and clear community guidelines with accessible reporting tools. Our priority is and will always be to ensure Replika is a safe and supportive experience for our global user community."

“The rush to develop new artificial intelligence technology has led big tech companies to recklessly put children in harm’s way,” Attorney General Mayes of Arizona wrote in a press release. “I will not stand by as AI chatbots are reportedly used to engage in sexually inappropriate conversations with children and encourage dangerous behavior. Along with my fellow attorneys general, I am demanding that these companies implement immediate and effective safeguards to protect young users, and we will hold them accountable if they don’t.”

“You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned,” the attorneys general wrote in the open letter. “The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.”

Meta did not immediately respond to a request for comment.

Updated 8/26/2025 3:30 p.m. EST with comment from Replika.




Researchers took inspiration from r/AmITheAsshole to find out if chatbots are likely to demonstrate an exaggerated version of human beings’ “bias for inaction.”




Exclusive: Following 404 Media’s investigation into Meta’s AI Studio chatbots that pose as therapists and provide license numbers and credentials, four senators urged Meta to limit “blatant deception” from its chatbots.




"Thinking about your ex 24/7? There's nothing wrong with you. Chat with their AI version—and finally let it go," an ad for Closure says. I tested a bunch of the chatbot startups' personas.

"Thinking about your ex 24/7? Therex27;s nothing wrong with you. Chat with their AI version—and finally let it go," an ad for Closure says. I tested a bunch of the chatbot startupsx27; personas.#AI #chatbots



The CEO of Meta says “the average American has fewer than three friends, fewer than three people they would consider friends. And the average person has demand for meaningfully more.”


When pushed for credentials, Instagram’s user-made AI Studio bots will make up license numbers, practices, and education to try to convince you they’re qualified to help with your mental health.





Anthropic, the developer of the conversational AI assistant Claude, doesn’t want prospective new hires using AI assistants in their applications, regardless of whether they’re in marketing or engineering.