Artificial intelligence was always going to be used by the military once the technology was mature enough. There was never any avoiding that.
Of course, why would we try to avoid it at all? The idea of machines fighting in place of humans seems like a no-brainer. Sons coming home to wives and mothers, no one having to worry about being killed in some third-world hell-hole you don't care about, and the cost of war becoming more economic than personal: it all sounds like a great idea, and in many ways, it is.
As The Federalist recently reported, Department of War Secretary Pete Hegseth said AI warfighting is a goal and that these systems should be made in America:
The Federalist traveled with Hegseth to California over the weekend where he delivered the keynote address outlining the forthcoming National Defense Strategy (NDS) at the Reagan National Defense Forum. But before he gave that speech, the secretary was briefed on the latest in military technological innovation and toured some industrial facilities.
The facilities are part of the fourth pillar of the NDS, aimed at reviving the American defense industrial base to innovate and build domestically, instead of relying on foreign countries, and often adversaries like China, to build American weapons systems for the U.S. military.
“If we don’t revive the defense industrial base, if we can’t make the things we need at scale, then we can’t deter, we can’t defend,” Hegseth told the staff at Anduril Industries. “Lethality is something you’re focused on delivering every day, but we also need to inject, alongside the warrior ethos, urgency, real urgency, not Washington, D.C., old Defense Department, Pentagon, bureaucracy, urgency — which is we’re going to deliver this in two years after we coordinate it across 19 different agencies no one ever heard of, and we create 14 prototypes that sit on a dusty shelf until they get approved by some regulator — that’s not the kind of urgency we’re going to deliver.”
But as with most things involving AI, I worry about the human element, and I think literal war machines are going to have unintended side effects.
The first thing many people will think of is, naturally, the Terminator. Hollywood has preconditioned us to see war-bots as a problem our species will eventually have to deal with, probably through a massive war featuring skulls trampled by metal feet and red eyes scanning for human victims. I'm not scared of that scenario because it's not exactly a high-probability outcome. It makes for a good movie or television show, but in real life, things are rarely that dramatic.
What I'm worried about is a bit more subtle, but it will have massive consequences.
For starters, removing the human cost of war makes it abstract and far too easy to wage. War should be a last resort because its human cost is tragic, no matter what side you're on. War is horrific, and we should never treat it as cheap.
AI already severs the link between a person and the rest of humanity, even outside of war. More and more people rely on it for everything from information to advice to companionship. That hasn't been good for us as a species; it has encouraged us to minimize the peer-to-peer interaction we genuinely need.
So what happens when we put a mindless machine between us and lethality? We abdicate any feeling of moral responsibility while still bearing the weight of it. When a human pulls the trigger, even if they're in a booth flying a drone remotely, they feel the consequence of that action. A machine does not. We're effectively setting ourselves up to replace something we discuss too little, the tragic but sometimes necessary weight of war, with cold technology.
Moreover, stripping the soul out of war leaves little to no room to resist the abuses that soulless warfighting would invite. Machines have to follow orders. They can't say no. A human can refuse out of mercy, restraint, or moral hesitation; a machine feels none of those things. It would carry out its orders to the letter, whether it should or not. On the battlefield, things change in an instant, but a machine wouldn't recognize that a line had been crossed or that a target is no longer a valid target in light of new information.
At the end of the day, a person with a conscience is safer and more moral than an algorithm without any concern for morality.
And I think that's what bothers me most about AI warfighting. We're effectively automating decisions that humans should be part of every single time. Machines cannot process or understand the weight of death. AI typically replaces morality with efficiency, and efficiency is often antithetical to virtue.
To be clear, I don't want anyone to die who doesn't have to. I'm not advocating that we keep sending young Americans into mortal danger. But making war any more inhumane than it already is could lead us to treat it like it's no big deal, and I fear the consequences of that will be far more dead than necessary, along with the loss of our understanding of war. The ends will be all that matter because we won't feel the means. That may be beneficial in the short term, but I'm not sure how it will affect us in the long term.
Gen. H. R. McMaster once said, “War is profoundly human. The greatest mistakes in history come from assuming it can be reduced to a technical problem.”
I think he's right. We might be protecting our men and women in uniform, but what are we opening the door to in exchange? These are things we don't think about enough. What effect will this have on our understanding of war, human suffering, and morality? What will we normalize?
“You cannot outsource moral responsibility. If you do, you’re no longer conducting war — you’re committing atrocity by algorithm,” said Gen. John R. Allen.
Food for thought.