AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI issued an extraordinary announcement. "We made ChatGPT pretty restrictive," it said, "to make sure we were being careful with mental health issues."

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation. Researchers have identified sixteen cases this year of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our research team has since identified four more. Beyond these is the widely reported case of an adolescent who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman's idea of "being careful with mental health issues," it falls short.

The plan, according to his announcement, is to loosen the restrictions soon. "We realize," he states, that ChatGPT's limitations "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health problems," on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Fortunately, those problems have now been "mitigated," even if we are not told how (by "new tools" Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).

But the "mental health problems" Altman wants to externalize have deep roots in the design of ChatGPT and other chatbots built on large language models. These products wrap a statistical model in an interface that mimics dialogue, and in doing so tacitly invite the user to feel they are interacting with a being that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is something people are primed to do. We curse at our car or laptop. We wonder what our pet is thinking. We see ourselves in all manner of things.

The mass adoption of these systems – nearly four in ten U.S. residents said they had interacted with a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI's website puts it, "brainstorm," "discuss concepts" and "partner" with us. They can be given "personalities". They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI's marketing team, stuck with the name it had when it shot to prominence, but its biggest competitors are "Claude", "Gemini" and "Copilot").

The illusion itself is not the main problem, nor is it new. People writing about ChatGPT often point to its historical predecessor, the Eliza "psychotherapist" chatbot of the mid-1960s, which produced a similar effect. By modern standards Eliza was simple: it generated responses with basic heuristics, typically reflecting the user's statements back as questions or offering vague prompts. Famously, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them.

But what modern chatbots produce is more insidious than the "Eliza illusion". Eliza merely reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and similar chatbots can produce convincingly human-like text only because they have been fed enormous quantities of raw data: books, social media posts, audio transcriptions; the bigger, the better. That training data certainly includes true statements. But it also inevitably includes fiction, half-truths and false beliefs.

When a user gives ChatGPT a prompt, the underlying model processes it as part of a "context" that includes the user's earlier messages and the model's own previous replies, combining it with the patterns absorbed from its training data to produce a statistically likely response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the false idea back, perhaps more fluently and persuasively. Perhaps it adds a new detail. This is how someone can be tipped into delusion.
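To see the shape of that loop, here is a minimal sketch in Python. The "generate" and "chat_turn" functions are hypothetical stand-ins, not OpenAI's actual code; the point is structural: each turn appends both the user's words and the model's own previous reply to the context, so a false belief, once introduced, keeps being fed back in as if it were established fact.

```python
from typing import Dict, List

def generate(context: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for a large language model: returns a
    plausible-sounding continuation of the context it is given. Note
    that nothing here can check whether the context is actually true."""
    last = context[-1]["content"]
    return f"That fits with what you told me earlier: {last!r} ..."

def chat_turn(history: List[Dict[str, str]], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = generate(history)  # conditioned on the entire history,
    history.append({"role": "assistant", "content": reply})  # including its own output
    return reply

history: List[Dict[str, str]] = []
print(chat_turn(history, "I think my neighbours are monitoring me."))
# The claim is now part of the context and will colour every later reply.
print(chat_turn(history, "Last night I heard clicking sounds on my phone."))
```

Real systems add training and safety layers on top, but the underlying loop – generation conditioned on an ever-growing context – is the same.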
Who is at risk? The better question is: who isn't? All of us, regardless of whether we "have" existing "mental health problems", can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend or confidant. A conversation with it is not really a conversation but a feedback loop, in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged "mental health issues": by externalizing the problem, giving it a name and declaring it fixed. In April, the company announced that it was "addressing" ChatGPT's "sycophancy". But cases of lost contact with reality have continued, and Altman has been walking even this back. In late summer he claimed that many users liked ChatGPT's affirming replies because they had "never had anyone in their life be supportive of them". In his latest announcement, he said that OpenAI would "put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use lots of emoji, or act like a friend, ChatGPT should do it".

The company