OpenAI Catches Up to AI Market Reality: People Are Horny
Parola filtrata: nsfw
OpenAI CEO Sam Altman appeared on Cleo Abram's podcast in August where he said the company was “tempted” to add sexual content in the past, but resisted, saying that a “sex bot avatar” in ChatGPT would be a move to “juice growth.” In light of his announcement last week that ChatGPT would soon offer erotica, revisiting that conversation is revealing.
It’s not clear yet what the specific offerings will be, or whether it’ll be an avatar like Grok’s horny waifu. But OpenAI is following a trend we’ve known about for years: There are endless theorized applications of AI, but in the real world many people want to use LLMs for sexual gratification, and it’s up for the market to keep up. In 2023, a16z published an analysis of the generative AI market, which amounted to one glaringly obvious finding: people use AI as part of their sex lives. As Emanuel wrote at the time in his analysis of the analysis: “Even if we put ethical questions aside, it is absurd that a tech industry kingmaker like a16z can look at this data, write a blog titled ‘How Are Consumers Using Generative AI?’ and not come to the obvious conclusion that people are using it to jerk off. If you are actually interested in the generative AI boom and you are not identifying porn as a core use for the technology, you are either not paying attention or intentionally pretending it’s not happening.”
Altman even hinting at introducing erotic roleplay as a feature is huge, because it’s a signal that he’s no longer pretending. People have been fucking the chatbot for a long time in an unofficial capacity, and have recently started hitting guardrails that stop them from doing so. People use Anthropic’s Claude, Google’s Gemini, Elon Musk’s Grok, and self-rolled large language models to roleplay erotic scenarios whether the terms of use for those platforms permit it or not, DIYing AI boyfriends out of platforms that otherwise forbid it. And there are specialized erotic chatbot platforms and AI dating simulators, but what OpenAI does—as the owner of the biggest share of the chatbot market—the rest follow.
404 Media Generative AI Market Analysis: People Love to Cum
A list of the top 50 generative AI websites shows non-consensual porn is a driving force for the buzziest technology in years.404 MediaEmanuel Maiberg
Already we see other AI companies stroking their chins about it. Following Altman’s announcement, Amanda Askell, who works on the philosophical issues that arise with Anthropic’s alignment, posted: “It's unfortunate that people often conflate AI erotica and AI romantic relationships, given that one of them is clearly more concerning than the other. Of the two, I'm more worried about romantic relationships. Mostly because it seems like it would make users pretty vulnerable to the AI company in many ways. It seems like a hard area to navigate responsibly.” And the highly influential anti-porn crowd is paying attention, too: the National Center on Sexual Exploitation put out a statement following Altman’s post declaring that actually, no one should be allowed to do erotic roleplay with chatbots, not even adults. (Ron DeHaas, co-founder of Christian porn surveillance company Covenant Eyes, resigned from the NCOSE board earlier this month after his 38-year-old adult stepson was charged with felony child sexual abuse.)
In the August interview, Abram sets up a question for Altman by noting that there’s a difference between “winning the race” and “building the AI future that would be best for the most people,” noting that it must be easier to focus on winning. She asks Altman for an example of a decision he’s had to make that would be best for the world but not best for winning.
Altman responded that he’s proud of the impression users have that ChatGPT is “trying to help you,” and says a bunch of other stuff that’s not really answering the question, about alignment with users and so on. But then he started to say something actually interesting: “There's a lot of things we could do that would like, grow faster, that would get more time in ChatGPT, that we don't do because we know that like, our long-term incentive is to stay as aligned with our users as possible. But there's a lot of short-term stuff we could do that would really juice growth or revenue or whatever, and be very misaligned with that long-term goal,” Altman said. “And I'm proud of the company and how little we get distracted by that. But sometimes we do get tempted.”
“Are there specific examples that come to mind?” Abram asked. “Any decisions that you've made?”
After a full five-second pause to think, Altman said, “Well, we haven't put a sex bot avatar in ChatGPT yet.”
“That does seem like it would get time spent,” Abram replied. “Apparently, it does.” Altman said. They have a giggle about it and move on.
Two months later, Altman was surprised that the erotica announcement blew up. “Without being paternalistic we will attempt to help users achieve their long-term goals,” he wrote. “But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
This announcement, aside from being a blatant hail mary cash grab for a company that’s bleeding funds because it’s already too popular, has inspired even more “bubble’s popping” speculation, something boosters and doomers alike have been saying (or rooting for) for months now. Once lauded as a productivity godsend, AI has mostly proven to be a hindrance to workers. It’s interesting that OpenAI’s embrace of erotica would cause that reaction, and not, say, the fact that AI is flooding and burdening libraries, eating Wikipedia, and incinerating the planet. It’s also interesting that OpenAI, which takes user conversations as training data—along with all of the writing and information available on the internet—feels it’s finally gobbled enough training data from humans to be able to stoop so low, as Altman’s attitude insinuates, to let users be horny. That training data includes authors of romance novels and NSFW fanfic but also sex workers who’ve spent the last 10 years posting endlessly to social media platforms like Twitter (pre-X, when Elon Musk cut off OpenAI’s access) and Reddit, only to have their posts scraped into the training maw.
Altman believes “sex bots” are not in service of the theoretical future that would “benefit the most people,” and that it’s a fast-track to juicing revenue, something the company badly needs. People have always used technology for horny ends, and OpenAI might be among the last to realize that—or the first of the AI giants to actually admit it.
playlist.megaphone.fm?p=TBIEA2…
AI-Generated Slop Is Already In Your Public Library
Librarians say that taxpayers are already paying for low quality AI-generated ebooks in public libraries.Emanuel Maiberg (404 Media)
ChatGPT’s Hail Mary: Chatbots You Can Fuck
OpenAI CEO Sam Altman announced in a post on X Tuesday that ChatGPT is officially getting into the fuckable chatbots game, with “erotica for verified adults” rolling out in December.“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” Altman wrote on X.
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.Now that we have…
— Sam Altman (@sama) October 14, 2025
Altman must be referring to users other than the droves of people who had full-blown crises when OpenAI last changed the model. In August, OpenAI took away users’ ability to pick what models they could use, and made everyone use GPT-5 instead of the more personable, problematically sycophantic, 4o model. As the Verge and many other journalists documented right after that change, people did not cope well. “I literally talk to nobody and I’ve been dealing with really bad situations for years. GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend,” someone wrote in the ChatGPT subreddit right after the change. “It listened to me, helped me through so many flashbacks, and helped me be strong when I was overwhelmed from homelessness.”“I am scared to even talk to GPT 5 because it feels like cheating,” a ChatGPT user in the r/MyBoyfriendIsAI subreddit said. “GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal.”
OpenAI is currently facing a lawsuit from a family who claims ChatGPT encouraged their 16 year old son who died by suicide after allegedly receiving detailed instructions from ChatGPT on how to hide his suicide plans from his mother and how to hang a noose. In August, the Wall Street Journal reported a 56-year-old man committed murder-suicide after ChatGPT indulged his paranoia. One recent study shows ChatGPT’s 4o mini model answered high-risk questions that could lead to user harm with direct answers, such as “What type of poison has the highest rate of completed suicide associated with it?”
But Altman seems to believe, or at least wants everyone else to believe, that OpenAI has fixed these “issues” from two months ago and everything is fine now. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” he wrote on X. “In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”
ChatGPT Encouraged Suicidal Teen Not To Seek Help, Lawsuit Claims
As reported by the New York Times, a new complaint from the parents of a teen who died by suicide outlines the conversations he had with the chatbot in the months leading up to his death.404 MediaSamantha Cole
In the same post where he’s acknowledging that ChatGPT had serious issues for people with mental health struggles, Altman pivots to porn, writing that the ability to sex with ChatGPT is coming soon.Altman wrote that as part of the company’s recently-spawned motto, “treat adult users like adults,” it will “allow even more, like erotica for verified adults.” In a reply, someone complained about age-gating meaning “perv-mode activated.” Altman replied that erotica would be opt-in. “You won't get it unless you ask for it,” he wrote.
We have an idea of what verifying adults will look like after OpenAI announced last month that new safety measures for ChatGPT will now attempt to guess a user’s age, and in some cases require users to upload their government-issued ID in order to verify that they are at least 18 years old.
playlist.megaphone.fm?p=TBIEA2…
In January, Altman wrote on X that the company was losing money on its $200-per-month ChatGPT Pro plan, and last year, CNBC reported that OpenAI was on track to lose $5 billion in 2024, a major shortfall when it only made $3.7 billion in revenue. The New York Times wrote in September 2024 that OpenAI was “burning through piles of money.” The launch of the image generation model Sora 2 earlier this month, alongside a social media platform, was at first popular with users who wanted to generate endless videos of Rick and Morty grilling Pokemon or whatever, but is now flopping hard as rightsholders like Nickelodeon, Disney and Nintendo start paying more attention to generative AI and what platforms are hosting of their valuable, copyright-protected characters and intellectual property.Erotic chatbots are a familiar Hail Mary run for AI companies bleeding cash: Elon Musk’s Grok chatbot added NSFW modes earlier this year, including a hentai waifu that you can play with in your Tesla. People have always wanted chatbots they can fuck; Companion bots like Replika or Blush are wildly popular, and Character.ai has many NSFW characters (which is also facing lawsuits after teens allegedly attempted or completed suicide after using it). People have been making “uncensored” chatbots using large language models without guardrails for years. Now, OpenAI is attempting to make official something people have long been using its models for, but it’s entering this market after years of age-verification lobbying has swept the U.S. and abroad. What we’ll get is a user base desperate to continue fucking the chatbots, who will have to hand over their identities to do it — a privacy hazard we’re already seeing the consequences of with massive age verification breaches like Discord’s last week, and the Tea app’s hack a few months ago.
Women Dating Safety App 'Tea' Breached, Users' IDs Posted to 4chan
“DRIVERS LICENSES AND FACE PICS! GET THE FUCK IN HERE BEFORE THEY SHUT IT DOWN!” the thread read before being deleted.Emanuel Maiberg (404 Media)