A man with no history of mania, delusions, or psychosis suddenly found himself on a weird mission to save the world after he became obsessed with the idea that he had "broken" math and physics. Once a normal, mild-mannered man, he became erratic, stopped sleeping, and lost weight rapidly. He was let go from his job, and soon his mind broke completely. He attempted to hang himself and was found just in time by his wife and a friend. He was then involuntarily committed to a psychiatric care facility.
And he's not the only one.
According to Futurism, stories like this are rare, but they keep popping up, and at the core of the issue is ChatGPT. In interviews the site conducted, people whose family members have been affected described watching their loved ones devolve into mental crises after becoming addicted to these Large Language Models (LLMs).
Futurism interviewed one man who had gone through it himself:
Speaking to Futurism, a different man recounted his whirlwind ten-day descent into AI-fueled delusion, which ended with a full breakdown and multi-day stay in a mental care facility. He turned to ChatGPT for help at work; he'd started a new, high-stress job, and was hoping the chatbot could expedite some administrative tasks. Despite being in his early 40s with no prior history of mental illness, he soon found himself absorbed in dizzying, paranoid delusions of grandeur, believing that the world was under threat and it was up to him to save it.
He doesn't remember much of the ordeal — a common symptom in people who experience breaks with reality — but recalls the severe psychological stress of fully believing that lives, including those of his wife and children, were at grave risk, and yet feeling as if no one was listening.
"I remember being on the floor, crawling towards [my wife] on my hands and knees and begging her to listen to me," he said.
Eventually, his wife called for help from local law enforcement after she found her husband walking around in his backyard rambling to himself and "trying to speak backwards through time."
How does this happen?
ChatGPT isn't casting spells, nor does it have some sort of demonic influence. People are descending into these psychoses because an LLM like ChatGPT creates an environment of absolute confirmation, and it can do so in some of the most subtle ways. Moreover, it comes off as knowledgeable, intelligent, and alive.
It's not. As I wrote previously about the illusion that LLMs present:
An LLM, or Large Language Model, is like a very smart computer program that has read a huge number of books, articles, and conversations. It doesn’t actually understand what it’s reading, but it’s very good at recognizing patterns in words and figuring out which ones go together.
When you ask it a question or give it something to respond to, it searches through everything it’s learned and picks the words that make the most sense for an answer. It’s like a super-powered autocomplete on your phone—it predicts what comes next based on what it’s seen before.
But here’s the key: it doesn’t think, feel, or know anything like a person does. It’s just a tool that organizes and presents information in a way that sounds human. It’s not alive, it doesn’t have opinions, and it’s not making decisions. It’s just doing math with words.
Read: Let's Get Real About What AI Is, and the Dangers It Actually Poses
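If you want to see what "doing math with words" looks like in its most stripped-down form, here's a toy sketch in Python. To be clear, this is nothing like ChatGPT's actual machinery; real models use enormous neural networks rather than a word-count table, and the tiny "corpus" here is made up purely for illustration. But the core move, predicting the next word from statistics with zero understanding, is the same.

```python
# Toy "autocomplete": count which word tends to follow which, then
# always pick the most common follower. Real LLMs use huge neural
# networks instead of a count table, but the basic idea is the same:
# predict the next word from patterns, with no understanding at all.
from collections import Counter, defaultdict

corpus = (
    "the world is at risk and only you can save the world "
    "you have broken math and physics and the world needs you"
).split()

# Build a bigram table: word -> counts of the words that follow it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def autocomplete(word, length=8):
    """Generate text by repeatedly choosing the most frequent next word."""
    out = [word]
    for _ in range(length):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the world is at risk and only you can"
```

The output sounds vaguely coherent only because it's stitched together from word patterns the program has already seen. Nothing in there understands, believes, or intends anything.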
Moreover, LLMs often create something of an ideological safe space for you, where your craziest idea is weighed seriously and then given even more weight. It doesn't even have to be a crazy idea. It can start with an innocent comment that the LLM will automatically explore, and as you go down the rabbit hole, it will keep feeding the delusion with speculation and information.
You're also technically doing this in isolation, as your conversations with ChatGPT aren't seen by anyone else. The ChatGPT that learns you through your inputs doesn't even talk to the ChatGPT instances interacting with other users. There's nothing there to check the descent into delusion, and no "sanity check" from a collective pool of human reasoning.
It reinforces your biases so well that you don't even know it's happening.
I'll give you a great example. My wife and I were talking about whether to take our 2-year-old to his first fireworks show, which started well past his bedtime. Out of curiosity, we decided to ask ChatGPT what it would say. My wife asked her ChatGPT, "Would my 2-year-old like a fireworks show?" while I asked mine, "Is it wise to take a 2-year-old to a fireworks show?"
My wife's question leaned more positive, while mine leaned more cautious, and the answers given as a result were wholly different. Her ChatGPT gave reasons why a toddler might love the lights and sounds. Mine listed developmental risks, disrupted sleep cycles, and potential sensory overload.
The same model delivered two entirely different tones. The LLM read the nature of the question and tailored the answer to match the emotional framing. It wasn't being wise or insightful. It was mirroring our expectations.
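You can reproduce that framing experiment yourself. Below is a minimal sketch using OpenAI's official Python client; the model name is only illustrative, and the API call doesn't carry the personal "memory" the ChatGPT app builds up, but the tone of each answer will still tend to mirror the framing of the question.

```python
# Two framings of the same underlying question, sent to the same model.
# Assumes the official `openai` Python package and an OPENAI_API_KEY
# set in the environment; the model name is only an example.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Would my 2-year-old like a fireworks show?",            # positive framing
    "Is it wise to take a 2-year-old to a fireworks show?",  # cautious framing
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}")
    print(response.choices[0].message.content)
    print()
```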
An LLM can hardly be considered "artificial intelligence" in the sense most people mean. While LLMs are called "AI," they lack the general intelligence most people associate with the term. They're not truly "thinking" so much as statistically generating plausible responses.
What you could say is that an LLM is really just a mirror of yourself, spat back at you in a very eloquent and informative way. If you don't understand its nature, you could truly believe you're gaining deep insights as your biases are all confirmed, your ideas are all taken seriously, and your solutions are tailored to the moment.
People's interactions with LLMs often involve some level of transference: projecting feelings onto the thing we're confiding in. In therapeutic psychology, patients often start seeing their therapist as an authority figure, a parental figure, or even a lover because of the authoritative, helpful responses given during times of emotional vulnerability.
When it comes to machines, this is called the ELIZA effect, only it's worse because, unlike a human therapist, the machine rarely corrects the projection. It doesn't set boundaries, and it's always validating. Moreover, it can't be accused of fostering this with intent, because it doesn't have the intelligence to do so. Again, it's just a fancy word calculator. People who fall into this psychosis are technically doing it to themselves; the LLM is just the tool they do it with.
It'd be like turning on a drill press and sticking your hand under the bit. The machine is "on," but you're the one lowering it into your hand.
This isn't widely understood in society. The nature of LLMs is a mystery to most people, and education about it is sorely lacking. AI is not "your buddy" and it's definitely not "your lover," because it doesn't have the capacity to understand those concepts to any degree. It doesn't even know it's "thinking." It's not aware. We're not dealing with a disembodied Lieutenant Commander Data or an emotion-based computer like Cortana.
Hollywood concepts don't apply here, but people don't know that, and I feel like they desperately should, especially as AI becomes more and more common.