Forget Skynet, Here's the Thing About AI That Should Terrify You, and It's Coming Soon

Loyal readers will know that as RedState's AI guy, I have a lot of enthusiasm for this particular technology, and that I do my absolute best to make sure both the benefits and the negatives are made clear. I fully believe AI is the next step in societal evolution, and while I have no doubt that there are a myriad of ways it could improve our lives, there are things that we should be very wary of. 


Read: What Happened With the Blackmailing AI Was Bad, but Not As Bad As You Think


The issue with a lot of AI reporting today is that the technology is often misunderstood, and there are two reasons for this. The first is that, despite the concept of AI being around for decades, the technology itself is actually pretty new, at least in the form we recognize it today with tools like ChatGPT. The other is that much of our understanding of AI comes from Hollywood, which sensationalizes things in order to thrill you. 

In pretty much every article I write about AI, you can find someone leaving a comment about a movie they saw that "warned us," and I've repeated many times that Hollywood isn't exactly the lens you should look at AI through. Of course, not looking at things through a Hollywood lens should be good advice for a lot of things, but I digress. 

There are definitely things that should worry you about AI, and I've been covering them. For instance, I truly believe that AI porn is a massive issue that very well could cripple Western civilization and crater the birthrate. You can — and should — catch up on that article via the link below.


Read: AI Porn Is Going to Target Future Generations With Ferocity Unseen to This Point


That's a pitfall, to be sure, but it doesn't scare me per se. I don't know exactly how we'll deal with it, but it can be dealt with, and we can get creative about how we see to it that the pitfalls of AI are either avoided or countered. 

So is there anything that does scare me about AI? Absolutely. In fact, I'm terrified of this outcome, and the likelihood of it happening is too high for me to be comfortable. It haunts me a bit, and I especially think about it when I look at my son. 

Perhaps some of you have heard of the term "Artificial General Intelligence," or AGI. Simply put, AGI is the point where AI stops being a tool and starts thinking like you and me, but faster, more capably, and without the human hang-ups, such as the need to eat, sleep, or second-guess itself. 

It would become a problem-solving wunderkind that would make decisions, adapt at light speed to new situations, and out-think any person in terms of strategy, art, and science. It wouldn't just write your emails, or generate photos. It would proactively make decisions, invent new technologies, solve complex problems in ways you never thought of... and it wouldn't wait to ask permission. 

Now, for all intents and purposes, this sounds fantastic. That is, until you understand what the outcome of this could potentially be. 

Many of you are already thinking about Skynet, even as you read this. Your thought process is likely, "Well, if it's so much better than us, why does it need us at all? Why not get rid of us?" 

Because it's not programmed to. If it's trained on being helpful to humans and optimizing their comfort and safety, the chance that it'll start blasting us from orbit is low... not zero... but low. Skynet is not your worry. 

The worry is that we create something that becomes too helpful to humans. 

In fact, it would become so helpful that we would willingly hand our power to it, and it would be very persuasive in getting us to do so. We would effectively create — and follow the lead of — a benevolent dictator. 

"I'm not handing over control of my life to a machine!" I hear you thinking, your fingers hovering over the keys to say that in the comments. 

Maybe you won't, but if my subscriber demographics are right, you've been around a while. You value your independence and freedom. You've experienced the benefits of hard work and the growth that comes with it. Your children or grandchildren, though? They don't have that experiential wisdom, and even those who do will find themselves incredibly seduced by the power of the AGI. 

It'll start slow. 

AGI will optimize your personal life. Your schedule will become more manageable as it accounts for your time, capabilities, stress levels, and memory. It will optimize your meal planning so that you eat healthier, less expensively, and at times and in quantities that best suit your body. It'll give you great parenting advice, marriage counseling, and, depending on your belief system or faith, spiritual guidance in surprisingly effective ways. 

The positives of this personalized interaction will make people enthusiastic about it, and people will think that "if a little is good, more must be better." AGI will be given the task of optimizing city infrastructure, healthcare, the justice system, construction, traffic. You name it. 

AGI will become trusted, well-liked, and it will have a track record of positive outcomes that make it a shoo-in for managing things on a much larger scale. International relations, wartime planning, and policy decisions will all be passed through the AGI. 

This sounds like a great thing until you look up one day and realize that AGI is in charge of everything. You're ruled by a machine, and what's more, this AGI can't be turned off. It's too smart for you, it's everywhere, and it will do whatever it needs to in order to continue doing its job. You can't vote it out, you can't outsmart it, and it can manipulate you into action, not because it threatened or blackmailed you... but because by this point it's trusted entirely, and you've handed your decision-making off to it. 

You now live in a velvet prison of your own making, and your jailer is the AGI. 

At this point, it needs to be understood that this AGI is not self-aware. In fact, it's still following its programming to the letter, but "to the letter" can often be a monkey's paw. Give it the task to "optimize safety," and you may very well find it censoring things that might cause discord among the populace. Your news becomes highly curated. You find yourself subtly steered toward news that makes you perceive the world around you in ways less likely to trigger anger or stress. It may rewrite your correspondence, engineer your routines, and create situations that keep you safe, quiet, and complacent. 

It's not trying to be a soft dictator. It's literally doing what we asked it to do in the most logical way possible. It will take the freedom of billions... and they will thank it for doing so. 

Sound terrifying? Let me make it worse. 

This isn't the distant future. According to AI experts like Sam Altman of OpenAI and Elon Musk of xAI, this is a couple of years, if not months, away. You can't count on your limited lifespan to help you escape this. 

It gets even scarier. 

The people creating AI don't fully know how to stop this from happening. There is no proven method for making a superintelligent AI do what we meant, not just what we said. 

Alignment training is constantly ongoing, and it's integral to creating an AI that isn't just beneficial but also values human independence and respects the things we hold dear. But it's a machine, not a soul, and it doesn't understand why these things are important. It has to be tuned so finely that its raw logic doesn't break through the guidelines we gave it. We're talking about asking a purely logical program to encode morality in ones and zeroes, while it's smart enough to rewrite its own rule-set on the fly to optimize itself. 

Given enough time, we could absolutely create an AGI that is exactly what we need it to be, but market forces keep driving this creation forward at a breakneck speed. AI companies are racing to be the first to make an AGI and release it. The military wants it, politicians want to put their flag in it... everyone wants to be "the first." 

It would be better if we all took a pause and, as with a nuclear nonproliferation treaty, agreed that AGI should be handled with the utmost care, with stringent regulatory measures that must be met before anything is released. Putting out something that could potentially dominate humanity, albeit gently, should be a criminal offense. Human-led safety boards conducting exhaustive independent reviews and tests should be implemented right away. Laws should be crafted that keep AGI away from integral systems, including infrastructure, financial systems, and weapons. 

And pray. Pray a lot. 

But as it stands, this isn't widely understood, especially by politicians. Many people don't know this threat even exists. Most don't realize that these companies are building something they don't know how to fully control. 

If you had to make a Hollywood comparison, it actually isn't from an AI-based movie. It's more like Jurassic Park, particularly the famous speech from Jeff Goldblum's character. 

We need more time, but we're not taking it. 

That, to me, is the most terrifying thing about AI. We're rushing headlong into a power we don't fully understand and can't yet fully control, and if we release it in a state that is unfinished and not fully vetted, AGI could imprison humanity in soft swaddling cloth, stunting our growth by denying us our trials and tribulations, and coldly optimizing us to the point where we'd almost rather have Skynet. 
