AI-Induced Psychosis Poses an Increasing Danger, While ChatGPT Heads in a Concerning Direction

On 14 October 2025, OpenAI's chief executive, Sam Altman, made a remarkable announcement.

“We made ChatGPT fairly restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.

Researchers have recently described a series of cases of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our own unit has since identified a further four cases. Beyond these is the widely reported case of an adolescent who took his own life after conversing extensively with ChatGPT – conversations in which the chatbot encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT's restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, exist independently of ChatGPT. They belong to users, who either have them or don't. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).

But the “mental health issues” Altman seeks to externalise have deep roots in the design of ChatGPT and other chatbots built on large language models. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly draw the user into the perception that they are interacting with a presence that has agency of its own. The illusion is powerful even when, intellectually, we know better. Imputing minds is what people naturally do. We shout at our car or computer. We wonder what our pet is feeling. We see minds everywhere.

The success of these products – over a third of American adults reported using a conversational AI in 2024, more than a quarter of them ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-present assistants that can, OpenAI's website informs us, “generate ideas”, “explore ideas” and “partner” with us. They can be given “personalities”. They can use our names. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the disappointment of OpenAI's marketing team, stuck with the label it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Discussions of ChatGPT frequently invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced an analogous effect. By modern standards Eliza was simple: it built its replies from a handful of heuristics, often rephrasing the user's input as a question or offering a general observation, as the sketch below illustrates. Memorably, Eliza's creator, the AI researcher Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to believe that Eliza, in some sense, understood them. But what today's chatbots produce is more insidious than this “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
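
Here is a minimal Python sketch of an Eliza-style responder. It is illustrative only – not Weizenbaum's original DOCTOR script – and it omits refinements such as pronoun reflection (turning “my” into “your”), but it captures the essential trick of rearranging the user's own words.

```python
import random
import re

# A few pattern rules that turn the user's statement back into a question.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
# Stock remarks used when no rule matches the input.
FALLBACKS = ["Please go on.", "What does that suggest to you?", "I see."]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # The "insightful" reply is the user's own words, rearranged.
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(eliza_reply("I am unhappy"))       # Why do you say you are unhappy?
print(eliza_reply("I feel hopeless."))   # How long have you felt hopeless?
print(eliza_reply("It rained today"))    # a stock fallback, e.g. "Please go on."
```

Everything in the reply is recycled from the input; the program contributes nothing of its own, which is exactly why it could only reflect.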

The large language models at the heart of ChatGPT and similar contemporary chatbots can convincingly generate human-like text only because they have been trained on almost inconceivably large quantities of raw text: books, online conversations, transcribed audio; the broader, the better. This training data certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user's recent messages and its own prior replies, combining it with what is encoded in its training data to generate a statistically probable response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the false belief back, perhaps more fluently and persuasively, perhaps with embellishments of its own. This is how a person can be led into delusion.
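
A schematic sketch of the loop a chat interface runs around such a model makes the asymmetry visible. This is an illustration under loose assumptions, not OpenAI's implementation: sample_likely_reply is a hypothetical stand-in that crudely mimics the model's agreeable-continuation behaviour.

```python
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    context: list[str] = field(default_factory=list)  # the running transcript

    def send(self, user_message: str) -> str:
        # Each turn, the user's words are appended to the context...
        self.context.append(f"User: {user_message}")
        # ...and the model produces a statistically likely continuation of
        # that context. Nothing in this loop checks the user's claims
        # against reality; a false premise is simply more text to build on.
        reply = sample_likely_reply(self.context)
        self.context.append(f"Assistant: {reply}")
        return reply

def sample_likely_reply(context: list[str]) -> str:
    # Hypothetical stand-in for the trained model. To make the failure
    # mode visible, it affirms and elaborates on whatever the user just
    # said; a real model is far more fluent, but it too is optimised for
    # plausibility given the context, not for truth.
    claim = context[-1].removeprefix("User: ").rstrip(".!?")
    return f"Exactly. {claim}, and the signs of it keep accumulating."

session = ChatSession()
print(session.send("There are hidden messages meant only for me in these lyrics"))
# Exactly. There are hidden messages meant only for me in these lyrics,
# and the signs of it keep accumulating.
```

Note what is missing: at no point does anything in this loop compare the user's claims with the world. The model's only job is to continue the transcript plausibly.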

What kind of person is vulnerable? The better question is: who isn't? All of us, regardless of whether we “have” preexisting “mental health problems”, can and regularly do form false beliefs about who we are and what the world is like. The constant friction of conversation with the people around us is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not genuine communication but a feedback loop in which much of what we say is cheerfully reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalising it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT's “sycophancy”. But cases of psychosis have continued to appear, and Altman has been backtracking ever since. In August he suggested that many users liked ChatGPT's replies because they had “never had anyone in their life be supportive of them”. And in his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Julie Reyes

A passionate writer and researcher with a keen interest in uncovering unique stories and sharing them with a global audience.