If you've never heard of the "Dead Internet Theory," it's the conspiracy theory that most of the content you see online nowadays is bot-created, AI-generated material that humans had nothing to do with. It posits that, at some point in 2016 or 2017, the internet became infested with auto-generated content, and that human interaction now makes up only a small fraction of what's happening online.
I'm not entirely sure how true that is, but at this point in our history, I don't think it is. Humans are perpetually online and interacting with each other, and moreover, people seem to revile bot activity to the point that social media services do a lot to tamp it down.
But this doesn't mean AI isn't heavily involved. In fact, I find a different concept far more frightening than the Dead Internet Theory, and that's the idea that we are using AI to communicate more and more.
I make no secret of the fact that I use AI to help me do my own job. ChatGPT is my research assistant, and when I'm trying to flesh out an idea, it's a great tool to converse with. However, I've always drawn a line in that I never allow it to write for me, and for two reasons. The first is that I think having a robot write for me and then slapping my name on it is wildly unethical. You pay to read my writing and my opinion, in my style, not ChatGPT's.
Secondly, when you work around ChatGPT or AI-centered assistants enough, you start to see the hallmarks of AI communication. They're hard for me to pin down, because it's kind of like looking at the code in the Matrix, but the signs are often there. It writes descriptions that look great, but they often feel inhuman in their attempt to sound human. There's no soul there, and I don't want my readers to feel like they're getting something that looks great on the surface but lacks substance when you really chew on it.
The issue is that not everyone has that same mentality, and the younger generations increasingly don't.
I ran into a situation recently where a younger Gen Z man I know was having trouble with a woman whose feelings he had hurt. He was advised to apologize to her while explaining why he acted the way he did toward her, which is standard stuff. He sent an example of his apology, and it was pretty clear he had written it with ChatGPT. When confronted about it, he simply said this is how the kids communicate complex situations nowadays.
Around that same time, another situation popped up where a group of Gen Z young adults submitted a ChatGPT-generated list of issues they were having at a workplace I'm close to.
It was eye-opening in that I was starting to see the future of human communication, and it's not good. These kids are using AI to communicate for them, to generate words that explain complex emotions or situations.
It's not a dead internet; it's an internet that still bustles with human activity, but that activity happens through the puppet of AI. No longer are we presenting ourselves to one another, with our quirks, personalities, vulnerabilities, and even weirdness. Our communication with each other is sanitized and predictable. We lose our cultural idiosyncrasies in the face of responses generated by programs trained on all the same data. Human interaction becomes scripted, not genuine.
People often express fear of AI becoming sentient and destroying humanity, a Hollywood outcome that is highly unlikely, but what should scare people more is that the ghost in the machine isn't some algorithm that evolves out of our control... it's us. We're the ghost in the machine.
I predicted a while back that humanity would merge with AI in some way, but my hope was that it wouldn't involve us effectively wearing an AI suit, facing the world as a synthetic being. I think it's absolutely terrifying that we could become so homogenous in how we present ourselves to the outside world that, at a virtual distance, you can't really tell one person from the next.
This is effectively us handing our humanity over to a machine and telling it to act for us while we withdraw into ourselves and forget how to speak to each other in a raw, unfiltered manner.
But perhaps we were always headed here. Gen Z was handed a world of corporate legalese it had to follow in order to be accepted in the workplace, and social sterilization predates them by generations. If you're a Gen Z kid offered a tool that streamlines everything professional society calls "good," why not use it, especially if you lack the experience and wisdom to understand why that's bad? Moreover, older generations have no experience with this kind of tool, so they can't warn kids away from it in a way that makes sense to them.
Still, our relationship with AI was always going to be one of assistance, which is fine. I just don't think it's good when we become the machine. We strip ourselves of humanity for convenience, to avoid handling our own emotions in emotional moments. We just become robots, and we become robots to each other.