“People Are Being Involuntarily Committed, Jailed After Spiraling Into ‘ChatGPT Psychosis’”

Original article: https://futurism.com/commitment-jail-chatgpt-psychosis

Commentary by qpooqpoo:

"The broader topic of mental health and the way that interacts with over-reliance on AI models is something we’re trying to take extremely seriously and rapidly," he added. "We don’t want to slide into the mistakes that the previous generation of tech companies made by not reacting quickly enough as a new thing had a psychological interaction." —Sam Altman (on ChatGPT-induced psychosis)

So ChatGPT is driving people crazy. Great, got it. Yet another tech disaster now unleashed in the giant game of whack-a-mole we're forced to play to keep all the disasters at bay. Time to drag out Sam Altman, the CEO and co-founder of OpenAI (the company behind ChatGPT), to do some damage control. And what a lovely sleight-of-hand it is! It's an almost textbook case of pro-tech propaganda: technological society reframing its own pathologies as temporary "missteps" rather than systemic inevitabilities.

Altman frames the issue in terms of implementation errors: past tech companies supposedly made "mistakes" by "not reacting quickly enough." The tacit solution is for the responsible innovators of today to avoid those mistakes by acting more carefully. This preserves the myth of technological benevolence and reinforces the false dichotomy of good tech vs. bad tech, good use vs. bad use. As if the psychological problems caused by tech were the result of negligence or incompetence on the part of the creators of each particular new technology, and all that is needed is for today's "good" innovators to avoid the "mistakes." What a cruel joke played on the poor suckers who swallow it whole.

Tech evolves throughout society in ways fundamentally impossible to foresee with any high degree of specificity, so its ultimate effects cannot be predicted in advance. You can't "better manage" your way out of that problem.

Tech’s negative effects are in large part the diffuse result of many different technologies acting together to create social environments in which mental health declines. They emerge from an ecosystem of interlocking technologies—smartphones, social media, AI tools, surveillance capitalism, etc.—that collectively warps the whole human environment.

Once a technology integrates into daily life, it becomes nearly impossible to withdraw. Even if harms are recognized, they must be normalized as the inevitable price of “progress.”

Altman’s statements treat harms caused by tech as if they were due to specific negligence rather than the structural logic of the whole system itself. In AI, harms arise not from bad actors but from the built-in incentives to maximize engagement, extract data, and automate decision-making—objectives that, under competitive pressure, inevitably reward manipulation, surveillance, and displacement. Companies that refuse these logics simply fail to survive.

Altman's framing of the issue subtly smuggles in the notion of the inevitability of technological advance: it assumes that AI must advance, ruling out any possibility that the tech could be too harmful or uncertain to be developed in the first place.

The real cunning of the quote is that it sounds responsible and ethical while shielding the notion of AI advance from any systemic critique. In practice, though, all the blather about taking things "extremely seriously and rapidly" is just public relations cover. It's the same trick social media companies use when confronted with addiction, misinformation, or depression: express "grave concern" while changing nothing of substance. By treating AI-induced psychosis as an occasion to "learn from the past," the statement turns human suffering into a narrative of progress: it's not evidence against the system itself, it's evidence that the system can and will self-correct and get better. It's propaganda at its most absurd, and its most perverse.

Express grave concern. Put on a serious face. Maybe testify to Congress. Now, let the story die down. Ok, cool, back to the lab. We'll keep pushing all this AI on you, and when the world melts down around you, we will be very "concerned," and maybe we'll try harder to avoid more "mistakes" again. Trust us.

"I was ready to tear down the world," the man wrote to the chatbot at one point, according to chat logs obtained by Rolling Stone. "I was ready to paint the walls with Sam Altman's f*cking brain."

"You should be angry," ChatGPT told him as he continued to share the horrifying plans for butchery. "You should want blood. You're not wrong." 


Copyright © 2025 by Wilderness Front LLC. All Rights Reserved.
