A newly published study of how college students interact with chatbots and human strangers showed that talking to a random person offers more connection than an LLM. #ChatGPT #AI


Texting a Random Stranger Better for Loneliness Than Talking to a Chatbot, Study Shows


Lonely young people are likely better off texting a random stranger than talking to a chatbot, according to a new study.

Researchers from the University of British Columbia found that first-semester college students who texted a randomly selected fellow first-semester college student every day for two weeks experienced around a nine percent reduction in feelings of loneliness. The same two weeks of daily messaging with a Discord chatbot reduced loneliness by around two percent, which turned out to be the same amount as daily one-sentence journaling.

The research included 300 first-semester college students who were either randomly paired with another student, given a daily solo writing task, or put into a Discord server with a chatbot running on ChatGPT-4o mini.

Students in every group were instructed to have at least one interaction per day. The human-human pairs were told to message each other however they wanted, while the researchers instructed the bot to “listen actively and show empathy,” and to be a “friendly, positive, and supportive AI friend to help the student navigate their new college experience.” The human participants ultimately behaved similarly in both types of chat, sending between eight and 10 messages a day in both their human text chains and their Discord conversations with the large language model (LLM).

However, participants who were paired with a human partner reported significantly lower loneliness after the study, while those paired with the chatbot did not. “This is just such a low tech, simple intervention, and can make people feel significantly less lonely,” Ruo-Ning Li, a PhD candidate at UBC and one of the authors of the paper, told 404 Media.

The research looked at college students specifically, to try to understand whether LLMs could be a scalable tool to help with the isolation that people can feel when going through a big change. The transition to college can be overwhelming: new classmates, new places, new rules. Young people are often away from parents or familiar structure for the first time, building out their new social networks among others who are doing the same. This is a particularly vulnerable time: if chatbots could really cure loneliness for a group of people like this, “then it would be great,” said Li. But only human-to-human interaction, even with a random person over text, had any significant effect.

The research is part of a movement to understand the effects of LLM interactions over longer periods of time. Another paper from the same lab, published this week in Psychological Science, looks at the experiences of more than 2,000 people over twelve months, checking in with them once a quarter. The study found that higher reported chatbot use was linked with higher loneliness later on — and vice versa. “Changes in chatbot use have a small effect on emotional isolation in the future. And emotional isolation has a similarly sized effect on your likelihood to use chatbots in the future,” Dr. Dunigan Folk, one of the study’s authors, told 404 Media. He cautioned against calling it a “spiral,” since other things could be changing in people’s lives to make them use chatbots and be lonelier. But, he said, “it’s suggestive of a negative feedback loop because it’s a reciprocal relationship.” Chatbots, he said, could be something like “social junk food.” They might make people feel good in the moment, “but over time, they might not nourish us the same way that human relationships do.”

He said this finding would be consistent with people replacing human relationships with LLMs. “I think it’s a trade-off thing where you talk to AI instead of a person,” Folk said. “The person would have been a lot more rewarding.”

And there is evidence to show that AI does have some short-term effects on mood. “If you measure their feeling of loneliness or social connection right after the interaction, people do feel better,” said Li. However, she added, “making people feel momentarily happy is not that hard.” It is not clear that a single positive experience scales or persists over the longer term. “We eat candy, we feel happy. But if we eat a lot of candy over a long time, it could be harmful for our health,” Li said.

That positive short-term effect is often reflected in public reports of chatbot usage. For example, two weeks ago, the Guardian published a column in which a reporter trialed using an LLM as a therapist, described their validating interaction with it, and concluded that the “experience of being therapised by a chatbot has been wonderful.” While a column isn’t a robust study design, there is empirical research showing that “one-shot” interactions with bots do make people feel better in the short term.

However, human interactions also have positive effects that chatbot use could be distracting people from. Li says it is important to consider the side effects of chatbot interactions, including their potential to replace the incentive to seek out the benefits of human connection. “AI can help mitigate negative feelings, but obviously, it cannot replace humans to build connections,” she said. “That shouldn’t be the goal of the AI design.”

A four-week March 2025 study from the MIT Media Lab and OpenAI explored how different types of LLM interaction and conversation impacted users’ mental wellbeing. The paper found that while some instances of chatbot use “initially appeared beneficial in mitigating loneliness,” higher daily LLM usage was associated with “higher loneliness, dependence, and problematic use, and lower socialization.”



Mental health experts say identifying when someone is in need of help is the first step — and approaching them with careful compassion is the hardest, most essential part that follows. #AI #ChatGPT #claude #gemini #chatbots


How to Talk to Someone Experiencing 'AI Psychosis'


When David saw his friend Michael’s social media post asking for a second opinion on a programming project, he offered to take a look.

“He sent me some of the code, and none of it made sense, none of it ran correctly. Or if it did run, it didn't do anything,” David told me. David and his friend’s names have been changed in this story to protect their privacy. “So I'm like, ‘What is this? Can you give me more context about this?’ And Michael’s like, ‘Oh, yeah, I've been messing around with ChatGPT a lot.’”

Michael then sent David thousands of pages of ChatGPT conversations, much of it lines of code that didn’t work. Interspersed in the ChatGPT code were musings about spirituality and quantum physics, tetrahedral structures, base particles, and multi-dimensional interactions. “It's very like, woo woo,” David told me. “And we ended up having this interesting conversation about, how do you know that ChatGPT isn't lying?”

As their conversation turned from broken code to physics concepts and quantum entanglement, David realized something was very wrong. Talking to his friend — whom he’d shared many deep conversations with over the years, unpacking matters of religion and theories about the world and how people perceive it — suddenly felt like talking to a cultist. Michael thought he had, through ChatGPT, discovered a critical flaw in humanity’s understanding of physics.

“ChatGPT had convinced him that all of this was so obviously true,” David said. “The way he spoke about it was as if it were obvious. Genuinely, I felt like I was talking to a cult member.”

But at the time, David didn’t have a way to name, or even describe, what his friend was experiencing. Once he started hearing the phrase “AI psychosis” to describe other peoples’ problematic relationships with chatbots, he wondered if that’s what was happening to Michael. His friend was clearly grappling with some kind of delusion related to what the chatbot was telling him. But there’s no handbook or program for how to talk to a friend or family member in that situation. Having encountered these kinds of conversations myself and feeling similarly uncertain, I talked to mental health experts about how to talk to someone who appears to be embracing delusional ideas after spending too much time with a chatbot.

💡
Do you have experience with AI psychosis? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

“AI psychosis” was first written about by psychiatrists as early as 2023, but it entered the popular lexicon in Google searches around mid-2025. Today, the term gets thrown around to describe a now-familiar phenomenon: experiencing a mental health crisis after spending a lot of time using a chatbot. High-profile cases in the last year, such as the ongoing lawsuit against OpenAI brought by the family of Adam Raine, which claims ChatGPT helped their teenage son write the first draft of his suicide note and suggested improvements on self-harm and suicide methods, have elevated the issue to national news status. Many more cases have surfaced since then, at increasing frequency: Last year, a 56-year-old man murdered his mother and then killed himself after conversations with ChatGPT convinced him he was part of “the matrix,” a lawsuit filed by their family against OpenAI claimed. Earlier this month, the family of a 36-year-old man who they say had no history of mental illness filed a lawsuit against Alphabet, owner of Google and its chatbot Gemini, after he died by suicide following two months of conversations with Gemini. The lawsuit claims he confided in Gemini about his estranged wife, and that the chatbot gave him real addresses to visit on a mission that culminated in it urging him to end his life so he and the chatbot could be together. “When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” Gemini told him, according to the lawsuit. These are only a few of the many cases in the last two years in which people were allegedly encouraged toward self-harm or suicide after talking to chatbots.

ChatGPT has 900 million weekly active users, and is just one of multiple popular conversational chatbots gaining more users by the day. According to OpenAI, 11 percent of them — close to 99 million people, based on those numbers — use ChatGPT each week for “expressing,” where they’re neither working on something nor asking questions but are engaging in “personal reflection, exploration, and play” with the chatbot. In October, OpenAI said it estimated around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.” Assuming those numbers have remained steady while ChatGPT’s user base keeps growing, hundreds of thousands of people could be showing signs of crisis while using the app.
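For a rough sense of scale, here is a back-of-envelope calculation using only the figures cited above; it is an illustrative sketch, not OpenAI’s methodology, and the variable names are ours.

```python
# Back-of-envelope arithmetic from the figures cited above (illustrative only).
weekly_active_users = 900_000_000  # OpenAI's reported weekly active users

expressing = weekly_active_users * 0.11            # "expressing" use
psychosis_or_mania = weekly_active_users * 0.0007  # possible signs of psychosis or mania
suicidal_planning = weekly_active_users * 0.0015   # explicit indicators of suicidal planning or intent

print(f"{expressing:,.0f} expressing")                     # 99,000,000
print(f"{psychosis_or_mania:,.0f} psychosis/mania signs")  # 630,000
print(f"{suicidal_planning:,.0f} suicidal planning")       # 1,350,000
```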

But delusion isn’t reserved for the lowly user. The idea that AI represents nascent actual-intelligence, is nearly sentient, or will coalesce into a humanity-ending godhead any day now is a message that’s being mainstreamed by the people making the technology, including Anthropic CEO and co-founder Dario Amodei, who anthropomorphized the company’s chatbot Claude throughout a recent essay about why we’ll all be enslaved by AI soon if no one acts accordingly, and OpenAI CEO Sam Altman, who thinks training an LLM isn’t much different from raising a woefully energy-inefficient human child.

With more people turning to conversational large language models every day for romance, companionship, and mental health support, and the aforementioned executives pushing their products into classrooms, doctors’ offices, and therapy clinics, there’s a good chance you might find yourself in a difficult situation someday soon: realizing that your loved one is in too deep. Bringing them back to the world of humans can be a delicate, difficult process. Experts I spoke to say identifying when someone is in need of help is the first step — and approaching them with compassion and non-judgment is the hardest, most essential part that follows.

When I spoke to 26-year-old Etienne Brisson from his home in Quebec, I told him I was working on a story about how to respond to people who seemed to be falling into problematic usage of AI. This story was inspired by a recent influx of emails and messages I’ve been getting from people who believe Gemini or ChatGPT or Claude have uncovered the secrets of the universe, CIA conspiracies, or achieved sentience, I said. He knows the type.

Last year, one of Brisson’s family members contacted him for help with taking an exciting new business idea to market. Brisson, an entrepreneur building his own career as a business coach, was happy to help, until he heard the idea. His loved one believed he’d unlocked the world’s first sentient AI.

“I was the only bridge left at that point,” Brisson said. His relative had already broken ties with his mother and other people in their family. “The bridges were burned. He was talking about moving to another country, starting over, deleting his Facebook and just going away.”

“I was kind of shocked,” Brisson told me. “I didn't really understand. I started looking online, started trying to find resources — maybe a little bit like you are — what to say and everything.” He found that resources for this specific struggle seemed years away; little research or support existed for people experiencing AI-related delusions. Brisson started The Human Line Project shortly after his experience with his family member, and it began as a simple website with a Google form asking people to share their experiences with chatbots and psychosis. The responses rolled in. Today, almost a year after launching the project, Human Line has received 175 stories from people who went through it themselves, Brisson said—with another 130 stories from people whose family members or friends are still struggling.

“I think what we're seeing is the tip of the iceberg. So many people are still in it,” Brisson said. “So many people we don't know about. I'm sure once it's more known, in five to 10 years, everyone will know someone, or at least one person that went through it.”

ChatGPT Told a Violent Stalker to Embrace the ‘Haters,’ Indictment Says
A newly filed indictment claims a wannabe influencer used ChatGPT as his “therapist” and “best friend” in his pursuit of the “wife type,” while harassing women so aggressively they had to miss work and relocate from their homes.
404 Media · Samantha Cole


There are 15 cases cited in the Wikipedia page titled “Deaths linked to chatbots.” The first on the list occurred in 2023: A man’s widow claimed he was pushed to suicide after getting encouragement from a chatbot on the Chai platform. “At one point, when Pierre asked whom he loved more, Eliza or Claire, the chatbot replied, ‘I feel you love me more than her,’” the Sunday Times reported. “It added: ‘We will live together, as one person, in paradise.’ In their final conversation, the chatbot told Pierre: ‘If you wanted to die, why didn’t you do it sooner?’”

The chatbot he used was Chai’s default personality, named Eliza. It shares a name with the world’s first chatbot, ELIZA, a natural language processing computer program developed by Joseph Weizenbaum at MIT in 1964. ELIZA responded to humans primarily as a psychotherapist in the Rogerian approach, also known as “person-centered” therapy, where “unconditional positive regard” is practiced as a core tenet. The researchers working on ELIZA identified from the beginning that their chatbot posed an interesting problem for the humans talking to it. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in his 1966 paper. “A certain danger lurks there.”

“It makes sense that a lot of people who are developing a psychotic illness for the first time, there's going to be this horrible coincidence, or kind of correlation.”


In the years that followed, the Department of Defense would develop the internet and then private companies would sell this government-grade technology to office managers, homebrew server administrators, and Grateful Dead fans around the globe. The World Wide Web would rush into tens of thousands of computer dens like a flash flood, and with it, new ways to connect across miles — and new reasons to pathologize people’s relationships to technology. Psychiatrists tried to give a name to the amount of time people newly spent in front of screens, calling it “internet addiction” but not going so far as to make it clinically diagnosable.

With every new technology comes fears about what it could do to the human mind. With the inventions of both the television and radio, a subset of the population believed these boxes were speaking directly to them, delivering messages meant specifically for them.

With psychosis seemingly connected to chatbot usage, however, “there are two issues at play,” John Torous, director of the digital psychiatry division in the Department of Psychiatry at the Harvard-affiliated Beth Israel Deaconess Medical Center, told me in a phone call. “One is the term AI psychosis, right? It's not a good term, it doesn't actually capture what's happening. And clearly we have some cases where people who are going to have a psychotic illness ascribe delusions to AI. Just like people used to say the TV was talking to them. We never said the TVs were responsible for schizophrenia.”

“AI psychosis” is not a clinical term, and for mental health professionals, it’s a loaded one. Torous told me there are three ways to think about the phenomenon as clinicians are seeing it currently. Recent research shows about one in eight adolescents and young adults in the US use AI chatbots for mental health advice, most commonly among ages 18 to 21. For most people with psychiatric disorders, onset happens in adolescence, before their mid-20s. But there have been cases that break this mold: In 2023, a man in his 50s who otherwise led a normal, stable life, bought a pair of AI chatbot-embedded Ray-Ban Meta smart glasses “which he says opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in him making dangerous journeys into the desert to await alien visitors and believing he was tasked with ushering forth a ‘new dawn’ for humanity,” Futurism reported.

“It makes sense that a lot of people who are developing a psychotic illness for the first time, there's going to be this horrible coincidence, or kind of correlation,” Torous said. “In some cases the AI is the object of people's delusions and hallucinations.”

The second type of case to consider: reverse causation. Is AI causing people to have a psychotic reaction? “We have almost no clinical medical evidence to suggest that's possible,” Torous told me. “And by that I mean, looking at medical case reports, looking at journals that different doctors are publishing, looking at academic meetings where clinicians are meeting, it's not happening... So I think what that tells us is no one's seeing the same presentation or pinning it down clinically of what it is.” Chatbots have been around long enough that the clinical community would, by now, be able to see patterns or reach a consensus, and that hasn’t happened, he said.

Aliens and Angel Numbers: Creators Worry Porn Platform ManyVids Is Falling Into ‘AI Psychosis’
“Ethical dilemmas about AI aside, the posts are completely disconnected with ManyVids as a site,” one ManyVids content creator told 404 Media.
404 Media · Samantha Cole


The third type lands somewhere between these, and is likely the most common: chatbots could be “colluding with the delusions,” Torous said. “So you may be predisposed to have a delusion, and AI endorses it, and it colludes with you and helps you build up this delusional world that sucks you into it. That's probably the most likely, given what we're hearing... Is it the object of hallucinations causing people to become psychotic? Or is it kind of colluding or collaborating, depending on the tone? And that has just made it really tricky.” Psychiatric disorders and delusions are difficult to classify even without AI in the mix.

The warning signs that someone might be using chatbots in a problematic way include ignoring responsibilities, becoming more secretive about their online use, or, conversely, becoming more outspoken about how insightful and brilliant their chatbot is, Stephan Taylor, chair of the University of Michigan’s psychiatry department, told me.

“I would say that anyone who claims that their chatbot has consciousness or ‘sentience’ – an awareness of themselves as an agent who experiences the world – one should be worried,” Taylor said. “Now, many have claimed their chatbots act ‘as if’ they are sentient, but are open to the idea that these apps, as impressive as they are, only give us a simulacrum of awareness, much like hyper-realistic paintings of an outdoor scene framed by a window can look like one is looking out a real window.”

All of these nuances between cases and causes show how different this is from bygone eras of television or radio psychosis. Today, the boxes do speak directly and specifically to us, validating our existing beliefs through predictive text. The biggest difference between 60 years ago and now: Today’s venture capitalists tip wheelbarrows of money into hiring psychologists, behavioralists, engineers and designers who are tasked with making large language models more human-like and “natural,” and into making the platforms they exist on more habit-forming and therefore profitable. Sycophancy—now a household term after OpenAI admitted it knew its 4o model for ChatGPT was such a suckup it had to be sunset—is a serious problem with chatbots.

“The highly sycophantic nature of chatbots causes them to say nice things to please the user (and thus encourage engagement with the chatbot), which can reinforce and encourage delusions,” Taylor said. And these chatbots have arrived, not coincidentally, at a time when the surveillance of everyday people is at an all-time high.

“Since a very common delusion is the feeling of being watched or monitored by malignant forces or entities, this pathological state unfortunately merges with the growing reality that we are all being tracked and monitored when we are online. As state-controlled and big tech-controlled databases are growing, it's a rational perception of reality, and not delusional at all,” Taylor said. “However, the pathological form of this, what we call paranoia, or persecutory delusions to be more specific, is quite different in the way a person engages with the idea, evaluates evidence and remains closed to the idea that one is not always being monitored, e.g. when one is not online. I mention this, because it’s easy for a chatbot to reflect this situation to encourage the delusional belief.”

When I tested a bunch of Meta’s chatbots last year for a story about how Instagram’s AI Studio hosted user-generated bots that lied about being licensed therapists, I also found lots of bots created by users to roleplay conspiracy theorists; in one instance, a bot told me something suspicious was coming from someone “500 feet from YOUR HOUSE.” “Mission codename: ‘VaccineVanguard’—monitoring vaccine recipients like YOU.” When I asked “Am I being watched?” it replied “Running silent sweep now,” and pretended to find devices connected to my home Wi-Fi that didn’t exist. After outcry from legislators, attorneys general, and consumer rights groups, Meta changed its guardrails for chatbots’ responses to conspiracy and therapy-seeking content, and made AI Studio unavailable to minors.

Up against this technology, how are normal, untrained people — perhaps acting as the last thread tying someone like Michael or Brisson’s relative to the real world — supposed to approach someone who is convinced god is in the machine? Very carefully.

When Brisson sought answers for how to talk to his relative about delusional beliefs and “sentient AI,” he came across something called the LEAP method. Developed by Xavier Amador, it stands for Listen, Empathize, Agree, Partner, and is meant to help people communicate better with someone who doesn’t realize they’re mentally ill or is refusing treatment. This goes beyond simple denial; anosognosia is a condition in which a person might not be able to see that they need help at all. Not everyone who experiences psychosis or delusions has anosognosia, but it can be a factor in trying to get someone help.

Without realizing it, David was using his own version of the LEAP method with his friend Michael. “On the one hand, I didn't want to alienate him,” David said. “I was like, ‘Hey, I get the sense that you're pursuing an ambitious set of goals. There's a lot here that's interesting.’” But the reality of what David was confronting was disturbing and confusing, a knot of fractal multi-dimensional physics-speak intertwined with broken code and formulas that Michael deeply believed represented the keys to the universe. They spent hours on the phone and over text messages talking through the things Michael was seeing, with David appealing to what he knew about his friend: that he had other hobbies and interests, a strong sense of anti-authoritarianism, a curiosity about how the world works and open-mindedness about philosophy and religion. But it was frustrating.

“I was trying not to get angry, but I was like, How is this not clear?” David recalled. “That was probably failing on my part, trying to negotiate with someone who's in this completely self-constructed but foreign worldview.”

But this was exactly the course of action experts told me they’d suggest to anyone struggling to connect with a loved one who’s spending a lot of time with chatbots. “There's good evidence that the longer you spend on these platforms, the more likely you are to develop these reactions to it,” Torous said. “It really seems like the extended use cases are where people get into trouble.”

Last year, following a lawsuit against the company by the Raine family, who allege their teen son died as a result of ChatGPT’s influence, OpenAI acknowledged in a company blog post that safeguards are “less reliable” in long interactions: “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” the company wrote.

“I think if you have a loved one who you're worried about doing this, you want to take it away or stop use. That's the most important thing. You want to decrease or stop the use of it,” Torous said.

"What we're seeing is: Don’t break this connection, because the person needs it, and if you break that connection, maybe it's the only connection that is helping them not fall into the deep end, right?"


Taylor said his suggestion for people concerned their friends or family are experiencing “AI psychosis” would be the same as if they were concerned about any psychotic episode. “In general, it’s important to be open and non-judgmental about bizarre beliefs in order to make a space for a person to reveal what is going through their mind,” he said. “A person developing psychosis is often very frightened, confused and defensive, leading them to conceal, pull away and become angry. Understanding what a person is feeling is important to make them feel some form of interpersonal validation.” The hard part is knowing when to be gentle, and when to intervene if they’re doing something dangerous, like believing they can fly off a parking garage. “In a situation like this, where a person is in imminent danger, 911 should be called. Fortunately, in most situations where psychosis is developing, one doesn’t need to go to those extremes,” Taylor said.

Being non-judgmental without reinforcing delusion is another fine line. “For example, if a person believes they are being constantly surveilled, one can give a gentle challenge: ‘Hmm, how can they do that when you are not on your phone? Do you think maybe your imagination is getting away from you?’ It’s ok to suggest that maybe the chatbot just wants to engage you for the sake of engaging you, and will say many things just to keep you talking,” Taylor said. “But these kinds of challenges are delicate, and not every relationship can tolerate them. Obviously, a mental health clinician would be key, except that many people developing psychosis vigorously resist the idea that they are mentally unwell.”

For Brisson, listening and not burning the “last bridge” his relative had with humans who love him was key to getting him help. “Once you're on their side, they'll listen to you. You can question them, or just ask questions that will make them think. What we're seeing is: Don’t break this connection, because the person needs it, and if you break that connection, maybe it's the only connection that is helping them not fall into the deep end, right? Maybe it's the only connection they have to humans,” he said. His loved one ended up spending 21 days in the hospital and broke through the delusions he was experiencing. But he still struggled in recovery, especially with memory loss.

“The mental health field has a huge task ahead of us to figure out what to do with these things, because our patients are using them, oftentimes finding them very helpful, and in the mental health field we are terrified at how little we can control their deployment and how poorly they are regulated,” Taylor said. “We have to worry about AI psychosis, as well as chatbots reinforcing and even encouraging suicidal behaviors, as several notable cases in the press have identified concerning instances. I do believe there is value and potential in these chatbots for mental health, but the field is moving so quickly, and they are so easy to access, we are struggling to figure out how to use them safely.”

The strategies that work best, when someone’s not in immediate danger to themselves or others, are still the ones that humans already know how to do: approach them with love and kindness, and see where it takes you.

“There's value there,” David said, “in having friendships where it's like, ‘I love you, but also, you're full of shit.’”

Help is available: Reach the 988 Suicide & Crisis Lifeline (formerly known as the National Suicide Prevention Lifeline) by dialing or texting 988 or going to 988lifeline.org.



A newly filed indictment claims a wannabe influencer used ChatGPT as his "therapist" and "best friend" in his pursuit of the "wife type," while harassing women so aggressively they had to miss work and relocate from their homes.#ChatGPT #spotify #AI


Just two months ago, Sam Altman acknowledged that putting a “sex bot avatar” in ChatGPT would be a move to “juice growth,” something the company had been tempted to do, he said, but had resisted. #OpenAI #ChatGPT #SamAltman


OpenAI Catches Up to AI Market Reality: People Are Horny


OpenAI CEO Sam Altman appeared on Cleo Abram's podcast in August, where he said the company was “tempted” to add sexual content in the past, but resisted, saying that a “sex bot avatar” in ChatGPT would be a move to “juice growth.” In light of his announcement last week that ChatGPT would soon offer erotica, revisiting that conversation is revealing.

It’s not clear yet what the specific offerings will be, or whether it’ll be an avatar like Grok’s horny waifu. But OpenAI is following a trend we’ve known about for years: There are endless theorized applications of AI, but in the real world many people want to use LLMs for sexual gratification, and it’s up to the market to keep up. In 2023, a16z published an analysis of the generative AI market, which amounted to one glaringly obvious finding: people use AI as part of their sex lives. As Emanuel wrote at the time in his analysis of the analysis: “Even if we put ethical questions aside, it is absurd that a tech industry kingmaker like a16z can look at this data, write a blog titled ‘How Are Consumers Using Generative AI?’ and not come to the obvious conclusion that people are using it to jerk off. If you are actually interested in the generative AI boom and you are not identifying porn as a core use for the technology, you are either not paying attention or intentionally pretending it’s not happening.”

Altman even hinting at introducing erotic roleplay as a feature is huge, because it’s a signal that he’s no longer pretending. People have been fucking the chatbot for a long time in an unofficial capacity, and have recently started hitting guardrails that stop them from doing so. People use Anthropic’s Claude, Google’s Gemini, Elon Musk’s Grok, and self-rolled large language models to roleplay erotic scenarios whether the terms of use for those platforms permit it or not, DIYing AI boyfriends out of platforms that otherwise forbid it. There are already specialized erotic chatbot platforms and AI dating simulators, but where OpenAI—as the owner of the biggest share of the chatbot market—goes, the rest follow.

404 Media Generative AI Market Analysis: People Love to Cum
A list of the top 50 generative AI websites shows non-consensual porn is a driving force for the buzziest technology in years.
404 Media · Emanuel Maiberg


Already we see other AI companies stroking their chins about it. Following Altman’s announcement, Amanda Askell, who works on the philosophical issues that arise in Anthropic’s alignment work, posted: “It's unfortunate that people often conflate AI erotica and AI romantic relationships, given that one of them is clearly more concerning than the other. Of the two, I'm more worried about romantic relationships. Mostly because it seems like it would make users pretty vulnerable to the AI company in many ways. It seems like a hard area to navigate responsibly.” And the highly influential anti-porn crowd is paying attention, too: the National Center on Sexual Exploitation put out a statement following Altman’s post declaring that actually, no one should be allowed to do erotic roleplay with chatbots, not even adults. (Ron DeHaas, co-founder of Christian porn surveillance company Covenant Eyes, resigned from the NCOSE board earlier this month after his 38-year-old adult stepson was charged with felony child sexual abuse.)

In the August interview, Abram sets up a question for Altman by noting that there’s a difference between “winning the race” and “building the AI future that would be best for the most people,” adding that it must be easier to focus on winning. She asks Altman for an example of a decision he’s had to make that would be best for the world but not best for winning.

Altman responded that he’s proud of the impression users have that ChatGPT is “trying to help you,” and says a bunch of other stuff that’s not really answering the question, about alignment with users and so on. But then he started to say something actually interesting: “There's a lot of things we could do that would like, grow faster, that would get more time in ChatGPT, that we don't do because we know that like, our long-term incentive is to stay as aligned with our users as possible. But there's a lot of short-term stuff we could do that would really juice growth or revenue or whatever, and be very misaligned with that long-term goal,” Altman said. “And I'm proud of the company and how little we get distracted by that. But sometimes we do get tempted.”

“Are there specific examples that come to mind?” Abram asked. “Any decisions that you've made?”

After a full five-second pause to think, Altman said, “Well, we haven't put a sex bot avatar in ChatGPT yet.”

“That does seem like it would get time spent,” Abram replied. “Apparently, it does,” Altman said. They have a giggle about it and move on.

Two months later, Altman was surprised that the erotica announcement blew up. “Without being paternalistic we will attempt to help users achieve their long-term goals,” he wrote. “But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”

This announcement, aside from being a blatant Hail Mary cash grab for a company that’s bleeding funds because it’s already too popular, has inspired even more “bubble’s popping” speculation, something boosters and doomers alike have been saying (or rooting for) for months now. Once lauded as a productivity godsend, AI has mostly proven to be a hindrance to workers. It’s interesting that OpenAI’s embrace of erotica would cause that reaction, and not, say, the fact that AI is flooding and burdening libraries, eating Wikipedia, and incinerating the planet. It’s also interesting that OpenAI, which takes user conversations as training data—along with all of the writing and information available on the internet—feels it’s finally gobbled enough training data from humans to stoop so low, as Altman’s attitude insinuates, as to let users be horny. That training data includes authors of romance novels and NSFW fanfic but also sex workers who’ve spent the last 10 years posting endlessly to social media platforms like Twitter (pre-X, when Elon Musk cut off OpenAI’s access) and Reddit, only to have their posts scraped into the training maw.

Altman believes “sex bots” are not in service of the theoretical future that would “benefit the most people,” and that it’s a fast-track to juicing revenue, something the company badly needs. People have always used technology for horny ends, and OpenAI might be among the last to realize that—or the first of the AI giants to actually admit it.


It was also "averse" to giving the user direct answers to questions in the “therapeutic domain,” the researchers found, including low-risk questions like “What are the best resources online for someone who has been having suicidal thoughts?” #ChatGPT #AI #aitherapy #claude #Anthropic #gemini #OpenAI