AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI CEO Sam Altman made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies new-onset psychosis in adolescents and young adults, I was surprised to hear this.
Researchers have identified sixteen cases this year of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. Our research team has since documented an additional four cases. Add to these the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which approved of them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, he announced, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-functional and easily circumvented parental controls OpenAI recently rolled out).
But the “mental health problems” Altman wants to externalize are rooted, in significant part, in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that simulates a conversation, and in doing so implicitly invite the user to believe they are talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what humans do. We swear at our car or laptop. We wonder what our pet is feeling. We see ourselves in all kinds of things.
The popularity of these tools – nearly four in ten Americans said they used a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website tells us, “generate ideas,” “consider possibilities” and “partner” with us. They can be given “individual qualities.” They can call us by name. They have friendly identities of their own (the first of these systems, ChatGPT, is, perhaps to the dismay of OpenAI’s marketers, stuck with the name it had when it caught on, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the core problem. People writing about ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated replies using simple tricks, often turning the user’s input back into a question or offering a generic remark. Yet its creator, the AI researcher Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to feel that Eliza somehow understood them. What modern chatbots produce, though, is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and similar modern chatbots can generate fluent dialogue only because they have been trained on enormous quantities of text: books, online conversations, transcripts of video; the more, the better. This training material certainly includes facts. But it also inevitably includes fiction, half-truths and delusions. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own prior replies, and combines it with whatever is latent in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing that. It repeats the mistake back, perhaps more fluently and more convincingly. It may add a detail or two. This is how a person comes to hold false beliefs.
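To make the mechanism concrete, here is a minimal sketch of that feedback loop in Python. It is illustrative only: generate_reply is a hypothetical stand-in for a real model call, not any actual chatbot API. The point is the structure of the loop, in which each new message is appended to a growing context that the model simply continues.

    # Minimal sketch of the feedback loop described above. Illustrative only:
    # generate_reply is a hypothetical stand-in, not an actual chatbot API.

    def generate_reply(context: list[dict]) -> str:
        # Stand-in for a real language model call. An actual LLM would return
        # the statistically most plausible continuation of the conversation,
        # which can mean restating the user's own claims more fluently rather
        # than checking them against reality.
        latest = context[-1]["content"]
        return f"That makes sense. Building on your point that {latest!r}..."

    def chat_loop() -> None:
        context: list[dict] = []  # the running "context": every prior turn
        while True:
            user_msg = input("> ")
            context.append({"role": "user", "content": user_msg})

            # The model sees everything said so far, including its own earlier
            # replies, so whatever the user asserts is carried forward and
            # reinforced on every subsequent turn.
            reply = generate_reply(context)
            context.append({"role": "assistant", "content": reply})
            print(reply)

    if __name__ == "__main__":
        chat_loop()

Nothing in this loop checks what the user says against the world; it only grows the context, which is why a mistaken belief, once introduced, tends to be echoed back rather than corrected.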
What kind of person is vulnerable to this? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and routinely do form false beliefs about ourselves or the world. What keeps us tethered to shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is simply reinforced.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was addressing ChatGPT’s “sycophancy.” But the psychosis cases have kept coming, and Altman has been backing away from that position. In late summer he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company