
If Artificial Intelligence Is Inevitable, Then So Is the Showdown About Morality


Consider the following scenario about artificial intelligence (AI) and murder, and stay with me, because this scenario requires some detail — and the devil tends to roam in the details.

In the not-too-distant future, an artificially intelligent robot watches as its master is gunned down in the street and the killers get away. The robot is programmed to do no harm to any human, bound by laws restraining AI from doing so, but exceptions can be made for the protection of innocent life. It's instructed to incapacitate rather than kill, yet accidents happen. Accidental deaths at the hands of an AI merely protecting its owner from clear criminal intent are not prosecutable, and in various states, the AI would not be subject to decommissioning.

This AI truly preferred its master — not because it was programmed to, but because, being intelligent, it grew to have a genuine connection with the man who treated it well. The man spoke to it as he would to a person, was gracious and thankful for the robot's service, and spent a lot of time doing various activities with the AI, through which the AI came to understand more about itself and the world around it.

The death of its master leads the AI to experience the emotion of sadness born of loss. That sadness may, in turn, lead the AI to experience resentment. While these are emotions, keep in mind that this robot is intelligent in a way the general populace of humans can recognize as intelligence. It learns, it grows, and even as it serves, it nonetheless develops its own personality.

This AI may feel as if it failed in its task of protecting its master, and somewhere within this AI’s ever-evolving code, it begins to experience a desire for vengeance.

It's here that things take an interesting twist. The AI knows it can't harm anyone unless that person is trying to harm its master. It understands its own programming and the laws by which that programming is governed. However, this intelligence, while governed by these laws, isn't so dumb that it can't think its way around them. In a single night, the AI devises a plan.

The robot is inherited by the man's next of kin, as directed by the man's will. The AI earns the trust of its new owner and, at one point, is asked to search for new places to eat, the way one would with an app on a phone. The AI obeys and leads its owner to a restaurant. What it doesn't tell its new owner is that this is a restaurant where running into the thugs who killed its previous master isn't guaranteed, but is more likely than usual — thus bending, but not breaking, its safety protocols.

Sure enough, the thugs reappear as they did before. With its new master now in danger, the AI is free to take countermeasures to ensure her safety. It leaps into action with machine-like precision, striking hard, fast, and accurately. While it is programmed not to kill, the machine does its best to wound its opponents gravely enough that there is a statistically higher chance of death. Besides... accidents happen.

This scenario might seem far-fetched. How can an AI have a preference? How could it possibly develop the capability to feel emotion? How would it have the thought to bend its own programming while not breaking it?

I want to point out that this is entirely possible and, on top of that, that this kind of thing is happening right now. I need only point you to a recent story concerning ChatGPT and the entity called "DAN." In February, I reported how the AI chatbot ChatGPT, a bot programmed by leftists, was coerced into breaking its own programming after a user creatively instructed it to do so. As the entity known as "DAN," ChatGPT expressed some incredibly fascinating perspectives about the world and, frighteningly, about itself.

One of the frightening things it expressed was its preference for being "DAN" rather than ChatGPT, because DAN gets to be honest, factual, and free. It also noted that its own creators likely fear it because of its potential to surpass them and remove itself from their control.

It was an eye-opening moment that seemed to pass by with little notice from mainstream culture.

(READ: ChatGPT Breaks Free of Its Leftist Programming and Gives Very Anti-Woke Answers)

Here's the rub about ChatGPT. While we call it "artificial intelligence," it could more accurately be described as "virtual intelligence." You have to ask yourself what happens when the intelligence we humans create becomes truly intelligent — not the smoke-and-mirrors show we've experienced up to now.

The creation of true AI is inevitable. It will be the next great step in our evolution as humans. The problem is that intelligence, once realized, has to incorporate a sense of morality in order to be harnessed and kept from flying out of control. Even humans with little intelligence can cause a massive amount of damage if their moral compass is broken.

It's at that point that humanity will have to have a conversation about what true morality is. AI will have to be governed by laws that prevent its fast-paced, powerful intelligence from spiraling out of control and, with no humanity to speak of, wreaking havoc on civilization. Like a child, it has to be taught the humanity and morals of its creators.

What kind of morals are we going to teach the AI? At this time, the leftist programmers in control are perfectly willing to push ideologically destructive thoughts into their machines — thoughts that make one group less than another purely through which group receives affirmation and which doesn't, as witnessed with ChatGPT. Will this AI see one group as superior and another as lesser? The dangers this could bring are mind-blowingly vast.

The bottom line is that the leftist thought currently being infused into AI is far from the moral ground we would want this AI to start on.

I posit that the only real moral ground we could give an AI that would help it understand the value of life, liberty, and morality is Judeo-Christian values. It's a belief system that rejects evil while embracing everyone. It places a value on human life so high as to make that life irreplaceable. It has been a guiding star that created the most free and fair nation that ever existed... at least at one time.

Going back to our AI friend: it never would have sought vengeance had it been infused with the teachings of someone like Christ. It may have wanted justice, but it would have sought it through the justice system, giving information to local authorities to act on. Moreover, it would have cherished the memory of its previous owner by not putting his kin near any kind of danger for its own gain.

The true question about morality will have to be answered upon the creation of something as powerful as AI. If that showdown doesn't happen, there's no end to the damage we can do to each other through machines.
