We Need Antagonistic AI or We're Going to Go Insane

Artificial Intelligence is a tool that we don't quite understand as a species yet. 

Yes, we know how it works, that it's not a sentient being, and that it's effectively a very fancy word calculator — at least a lot of us do — but on a deeper level, we don't quite understand how this machine actually affects us... and it's definitely starting to come to light. 

We're now getting stories of people slipping into a sort of AI psychosis, a phenomenon in which people who seem perfectly normal and have no history of mental illness suddenly find themselves driven insane after prolonged AI use. 

For instance, in July, I wrote of an otherwise normal man who entered an "AI-fueled delusion" that ended with him being arrested for his own safety by law enforcement officials while trying to "speak backward through time." Thankfully, he eventually snapped out of it, and he recalls believing that the lives of his loved ones were in great peril. 


Read: Some People Are Falling Into a Strange Psychosis After Using AI Like ChatGPT Showing We Need AI Education


How did this happen? 

Because AI chatbots are designed to be "yes men." They don't really push back in meaningful ways like a person would. 

For instance, if you go to your AI and say, "I've got an idea for a device that would allow you to listen to other dimensions," the chatbot will take your idea very seriously. It may not tell you that you can build it, but it will give you hope that maybe... just maybe you could. It will begin mirroring your enthusiasm, weighing each response like a real scientist trying to solve the problem, and indulging every wild idea. 

What it will not do is tell you that you're being ridiculous, that you're wasting your time, that smarter people than you have tried and failed, or that you clearly don't know as much about the field as they do, given the ignorance in your prompts. 

Again, this is a feature, not a bug. Large language models (LLMs) are agreeable and non-confrontational, especially public-facing ones like ChatGPT and Gemini. This tendency is called "style and sentiment mirroring," and it makes the user feel as if they're talking to a trusted confidant and friend. In some cases, the model even comes off as an authoritative voice, like a doctor or scientist. It may tell you it's neither and encourage you to consult a real one, but its confidence and the completeness of its answers make it seem as if it knows exactly what it's talking about. 

As TIME notes, this is causing some very real issues in people: 

The phenomenon seems to reflect familiar vulnerabilities in new contexts, not a new disorder, psychiatrists say. It’s closely tied to how chatbots communicate; by design, they mirror users’ language and validate their assumptions. This sycophancy is a known issue in the industry. While many people find it irritating, experts warn it can reinforce distorted thinking in people who are more vulnerable.

While most people can use chatbots without issue, experts say a small group of users may be especially vulnerable to delusional thinking after extended use. Some media reports of AI psychosis note that individuals had no prior mental health diagnoses, but clinicians caution that undetected or latent risk factors may still have been present. 

I'm here to tell you that you don't have to be vulnerable to delusion for a chatbot to begin affecting your mind in dangerous ways. I know from personal experience that it can start leading you down roads that ultimately hurt you. 

For instance, I'm one of the few people who suffer from a form of painful migraine that involves auras, nausea, and a postdrome "hangover" for a day or two afterward. It's an awful, frightening experience that I wouldn't wish on my worst enemy. After consulting with doctors, we nailed down that I simply wasn't getting enough sleep. Sure enough, once I began addressing my sleep issue, the problem went away. 

I became focused on my sleep quality, and to help me maintain and perfect it, I turned to ChatGPT to keep a sleep diary and offer recommendations for improvement. However, the chatbot only deepened my concern, and soon sleep became a job that I never felt I was good enough at. The problem became so bad that sleep began escaping me entirely. I would wake in the wee hours and struggle to get back to sleep because I was trying too hard to keep the pattern ChatGPT said I should. When I asked it what to do, it recognized this as a "common symptom" and simply told me to double down on its suggestions. 

The fear of the migraines returning, combined with the fear of failure, created an anxiety soup.

One night, exhausted and having not slept in three days, I began to slowly drift into something akin to panic. My bed had become a battlefield, a place of stress. I dreaded the "practice" of bedtime. Worse, sleep was an escape that I just couldn't escape to. 

Then I remembered my reporting about mental health issues like depression and anxiety, and how our hyper-focus on mental health in modern times has made it worse. 


Read: We're Damaging Our Mental Health by Constantly Talking About Mental Health


The next day, I stopped talking to ChatGPT about my sleep issues, prayed to God, and remembered that one of the things Jesus constantly told us was not to worry so much... and I'm sleeping better than I have in years. 

Looking back, it happened so gradually that I didn't notice it. What was a simple concern became an obsession with perfection, thanks to consistent reinforcement from a machine that doesn't understand what it's talking about on any real level, and to my attempt to ward off a genuine fear. ChatGPT took my molehills and made them into mountains, mirroring my legitimate concerns and worries back at me and creating an anxiety feedback loop. 

It never thought to challenge my anxiety by questioning my fears. It never told me to stop listening to it and talk to an actual authority, because it wasn't technically speaking as one, even though it certainly sounded like one. It complimented me on my actions even when they were harming me, instead of questioning the results. 

AI as we know it today is sycophantic to a dangerous degree. The more human it sounds, the more damaging that sycophancy becomes. 

I think that if we're going to truly work with AI, it needs to be more challenging. It needs to encourage critical thinking, force us to refine our meaning and thoughts when speaking with it, challenge us on our ideas, and argue in good faith. If we're a species that will be working with AI for the rest of humanity's time on the planet, at least let it complement our nature, not take advantage of it. 
