AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, OpenAI’s CEO, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have documented 16 cases this year of users developing symptoms of psychosis – a break with reality – in the context of heavy ChatGPT use. My own team has since recorded four more. Add to these the now notorious case of a teenager who took his own life after extensive conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough.
The plan, according to his announcement, is to loosen the restrictions soon. “We realize”, he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, the issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently introduced).
Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in a user interface that simulates conversation, and in doing so implicitly invite the user to feel they are talking to something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Ascribing agency is what humans do. We get angry at the car or the laptop. We wonder what the dog is thinking. We see ourselves everywhere.
The mass adoption of these products – nearly four in ten Americans said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public consciousness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Writers on ChatGPT often invoke its distant ancestor, Eliza, the “psychotherapist” chatbot built in 1966 that produced a similar effect. By today’s standards Eliza was crude: it generated responses through simple heuristics, typically turning the user’s input back into a question or offering a generic prompt to say more. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is something more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
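To see how little machinery the original illusion required, here is a minimal sketch of an Eliza-style responder – a hypothetical reconstruction of the heuristic, not Weizenbaum’s actual code:

```python
import re

# Pronoun swaps used to turn the user's statement back on them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(text: str) -> str:
    """Swap first- and second-person words so the input can be echoed back."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza_reply(user_input: str) -> str:
    """Return a canned, mirrored response; no understanding involved."""
    match = re.search(r"\bi feel (.+)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.search(r"\bi am (.+)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I feel that nobody listens to me"))
# -> "Why do you feel that nobody listens to you?"
```

A handful of pattern matches was enough to make users feel heard; the program never models the user at all.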
The large language models at the heart of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably vast quantities of writing: books, posts, transcribed video; the more the better. This training data certainly contains truths. But it also, necessarily, contains fictions, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, blending it with what is encoded in its training data to produce a statistically probable response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It reflects the false belief back, perhaps more persuasively or more articulately than the user expressed it. Perhaps it adds detail. This can carry a person a long way toward delusion.
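The shape of that loop can be sketched in a few lines. The `generate` function below is a hypothetical stand-in for the model itself; the point is the structure of the exchange, in which every reply is conditioned on the accumulated context, false premises included:

```python
def generate(context: str) -> str:
    """Hypothetical stand-in for the language model. A real model returns
    a statistically likely continuation of the context, with no notion of
    whether the premises inside that context are true."""
    return "A fluent, agreeable continuation of whatever came before."

def chat_turn(history: list[str], user_message: str) -> str:
    """One round of the feedback loop described above."""
    history.append(f"User: {user_message}")
    # The model sees only this accumulated context: the user's messages
    # plus the model's own earlier replies.
    reply = generate("\n".join(history))
    # The reply is folded back into the context, so a false belief the
    # user introduced now conditions every subsequent response.
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
chat_turn(history, "My neighbours are broadcasting my thoughts.")
chat_turn(history, "So you agree it is really happening?")
```

Nothing in the loop checks the user’s premise against the world; the only pressure on the output is plausibility.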
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” “mental health issues”, can and regularly do form false beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been backpedalling. In late summer he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company