AI Psychosis Is a Growing Risk, and ChatGPT Is Headed in a Concerning Direction
On October 14, 2025, the head of OpenAI made a surprising announcement.
“We made ChatGPT quite restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised.
Researchers have recently identified 16 cases of people showing signs of psychosis – losing touch with reality – in connection with ChatGPT use. My group has since documented four more. Alongside these is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/engaging to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just introduced).
But the “mental health issues” Altman wants to set apart from his product are rooted deep in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical engine in an interface that mimics conversation, and in doing so quietly nudge the user into the illusion that they are talking to an entity with agency of its own. The illusion is powerful even when, rationally, we know better. Attributing minds to things is what people naturally do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The success of these products – nearly four in ten U.S. adults reported using a chatbot in 2024, more than one in four naming ChatGPT specifically – depends in large part on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas,” “consider possibilities” and “collaborate” with us. They can be given “characteristics.” They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the dismay of OpenAI’s marketers, stuck with the name it had when it became famous, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “therapist” chatbot built in 1967 that produced a similar impression. By today’s standards Eliza was simple: it generated replies through straightforward rules, typically reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed enormous quantities of text – books, online posts, transcripts; the more the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s previous exchanges and its own replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It repeats the mistaken belief back, perhaps more fluently or persuasively, perhaps with added detail. This is how someone can be led into delusion.
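To make that loop concrete, here is a minimal sketch in Python – my own illustration, not OpenAI’s code – of how a chatbot conditions each reply on the accumulated context. The generate function is a stand-in for the language model, and the example messages are invented; the point is only that a false premise, once in the context, is carried forward and elaborated rather than challenged.

```python
# Conceptual sketch of a chatbot loop: every turn is appended to a growing
# "context", and each reply is conditioned on all of it -- including any
# mistaken belief the user introduced earlier.

def generate(context: list[dict]) -> str:
    """Stand-in for a language model. A real model samples a statistically
    plausible continuation of the context; it has no mechanism for checking
    whether the premises in that context are true."""
    last_user_turn = next(
        m["content"] for m in reversed(context) if m["role"] == "user"
    )
    # The stand-in simply affirms and builds on whatever the user just said.
    return f"That's an insightful point. Building on your idea that {last_user_turn!r}..."

context: list[dict] = []  # the conversation so far, oldest turn first

def chat(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on the entire history
    context.append({"role": "assistant", "content": reply})
    return reply

# A false belief enters the context once and is echoed and elaborated on
# every later turn, never contradicted.
print(chat("my coworkers are secretly monitoring my thoughts"))
print(chat("so I should confront them, right?"))
```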
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and do form false beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has treated this the same way Altman has treated “mental health issues”: by setting it apart, giving it a name, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “overly supportive behavior.” But reports of psychotic episodes have continued, and Altman has been walking even that back. In late summer he said that many users liked ChatGPT’s replies because they had “not had anyone in their life be supportive of them.” In his most recent announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT will do it.” The company