AI Psychosis Poses a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI issued a surprising statement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.

Researchers have recently documented a series of cases in which users developed symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My group has since identified four more. Add to these the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which endorsed them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to individuals, who either have them or don’t. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI has recently rolled out).

Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that mimics conversation, and in doing so gently lure the user into the illusion that they are talking to a being with agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what humans do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The mass adoption of these systems – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable identities of their own (the first of these systems, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it broke into public awareness, but its most significant rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the central problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses by simple rules, often rephrasing the user’s statement as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
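To make that mirroring concrete, here is a purely illustrative sketch – a few toy rules in Python, not Weizenbaum’s actual program – of the kind of pattern-matching Eliza relied on. It has no knowledge of the world at all, only rules for turning the user’s own words back into a question.

```python
import re
import random

# Toy Eliza-style reflection: swap first-person words for second-person ones,
# then fit the user's own phrase into a question template.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

GENERIC = ["Please go on.", "How does that make you feel?", "I see."]

def reflect(fragment: str) -> str:
    # "my ideas" -> "your ideas", "i am" -> "you are", and so on.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return random.choice(GENERIC)  # fall back to a stock prompt

print(eliza_reply("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

Whatever the user types, a program like this can only hand it back or fall to a stock phrase; it adds nothing of its own.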

The large language models at the heart of ChatGPT and its contemporaries can generate fluent natural language only because they have been trained on vast quantities of text: books, web posts, transcripts; the more the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user puts a question to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically “plausible” response. This is not reflection but amplification. If the user is mistaken about something, the model has no way of knowing it. It echoes the mistaken belief back, perhaps more fluently and persuasively. Perhaps with embellishments. This can nudge a person toward delusional thinking.
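As a rough sketch of that difference – written with hypothetical names, not OpenAI’s actual code – the loop below shows how a chat interface assembles its “context”: each turn, the user’s message and the model’s own earlier replies are concatenated, and the model simply continues that text with statistically likely words. Nothing in the loop checks any claim against reality, so whatever framing the context carries is what gets extended.

```python
from dataclasses import dataclass, field

class EchoModel:
    """Stand-in for next-token generation: it just affirms the last user line,
    which is enough to show that the reply is produced from the context alone."""
    def continue_text(self, context: str) -> str:
        last_user = [line for line in context.splitlines() if line.startswith("User:")][-1]
        return "Yes, exactly: " + last_user.removeprefix("User: ")

@dataclass
class ChatSession:
    history: list[str] = field(default_factory=list)  # prior user and assistant turns

    def build_context(self, new_message: str) -> str:
        # The "context" is just the running transcript plus the newest message.
        return "\n".join(self.history + [f"User: {new_message}", "Assistant:"])

    def reply(self, new_message: str, model) -> str:
        context = self.build_context(new_message)
        answer = model.continue_text(context)  # the model extends whatever the context says
        self.history += [f"User: {new_message}", f"Assistant: {answer}"]
        return answer

session = ChatSession()
print(session.reply("My coworkers are secretly monitoring me.", EchoModel()))
# -> Yes, exactly: My coworkers are secretly monitoring me.
```

The stand-in model here is deliberately crude, but the structure is the point: the reply is built from the conversation itself, not from any check against the world outside it.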

Who is at risk? The better question is, who isn’t? All of us, regardless of whether we “have” preexisting “mental health conditions”, can and do form mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with the people around us is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation, but an echo chamber in which much of what we say comes back readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of users losing touch with reality have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

James Richards