
The Upcoming AI War


Imagine that one day you get a text from a family member. They ask how things are going. You answer, and the two of you strike up a conversation. They ask about the kids by name and are amazed at how fast they're growing. They ask how the job has been going since you got the promotion and talk about how much fun they had hanging out with you at the game last weekend.

After a bit of back-and-forth, this family member asks a huge favor. They just need $100 to help them get a new tire, and they'll pay you back as soon as they can. Being helpful, you send them $100 through a cash transfer app, such as Venmo. They thank you profusely, and you move on with your day. However, when you later ask how the new tire is working out, the family member has no idea what you're talking about. They never needed a new tire. In fact, they never texted you at all.

It's clear you got scammed, but how did the scammer have all this information about you? How did they get access to your family member's phone number? 

It's because you were interacting with a malicious AI cooked up by a scammer. The AI targeted you, dug into your social media history, learned everything it could about you from various websites, then struck up a conversation through what it perceived as your weakest point: a family member you love. It learned everything it could about that family member, too: how they talk, down to their texting mannerisms; where they like to hang out; what their interests are; and, in particular, what their interactions with you look like.

It was able to do this in no time flat. As it spoke with you, it learned about your history on the fly, adapting to your responses as needed to keep you convinced. It did all this so fast and so efficiently that you never thought for one second you were talking to anyone but your family member.

And by the way, that's just the least scary scenario. A sufficiently powerful AI wouldn't even need to talk to you. It would just access your account and take your money directly. How? By pretending to be you. If it needs a facial profile to access your account, it'll generate one. It'll mimic your voice on the phone. It'll find a way to crack your password. It will create a deepfake version of you that can gain access to everything from your Facebook profile to your phone.

You cannot fight it. You are hopelessly outmatched. Your opponent makes strategic decisions in the blink of an eye, learns at an unfathomable pace, and adapts just as fast. Your only hope is to fight fire with fire: you need an AI that can protect you from AI.

Sadly, there's no avoiding this future. The AI vs. AI war is inevitable. In fact, it's already happening; you just don't hear about it often because it's still in its beginning stages. It will be fought silently around you at all times, and the only time you'll hear about it is when the companies that create these "protective" AIs tell you how they're improving their models so you can trust them and purchase their services.

As Radware's director of threat intelligence, Pascal Geenens, told Security Info Watch back in 2024, the "AI arms race" is already on: 

We are in an AI arms race. You can think about it as the modern version of the nuclear arms race during the Cold War era. Military research and development have access to deep budgets in addition to the means and knowledge to advance new technology at a pace that could outstrip the cybersecurity community.

Even if governments keep to their promise of being ethical in developing new applications and technologies, there is always that rogue player who goes one step too far, forcing the other players to keep up. The importance of a global AI watchdog promoting ethical use of the technology cannot be overstated.

CrowdStrike reported in August that this war is already hot, with adaptive AI capable of nefarious activity, and these AIs are very good at their job:

Adversaries increasingly adopted GenAI throughout 2024, the CrowdStrike 2025 Global Threat Report found. GenAI tools are, for example, being used to create deepfake audio and video: A $25.6 million USD business email compromise used the cloned voice and likeness of a CFO, CrowdStrike observed. An arXiv study found phishing emails generated by large language models had a 54% click-through rate — significantly higher than the 12% rate for likely human-written messages — underscoring genAI’s effectiveness in social engineering.

Adversary use of GenAI is evolving. They are now manipulating threat indicators to obscure attribution, mimicking the tactics, techniques, and procedures (TTPs) of known threat actors to confuse analysts and delay response. These AI-powered campaigns adapt dynamically and automate deception at scale, making traditional detection increasingly unreliable. AI also enables fully autonomous attacks that identify vulnerabilities, craft exploits, and launch multi-stage campaigns with little to no human input. The speed and complexity of these operations continue to evolve beyond the reach of legacy defenses.

A lot of work is underway at companies, including major tech corporations like Meta, X, and OpenAI, to create AI capable of defending people; however, this is a game of cat and mouse. These companies are building defensive AI to fight opponents that are constantly looking for backdoors, loopholes, and ways to trick AI into holding the door open.

And the battle isn't going that well. To give you an example, on Tuesday, I wrote about how AI porn sites are plaguing Meta's platforms. Meta uses an AI to detect pornographic activity, but these AI porn companies are constantly finding ways to trick it into allowing pornographic ads on places like Instagram. The threat is so overwhelming that Meta's AI, which Meta says is one of the best on the planet, can't keep up with it. 

If Meta's "advanced" AI can't even keep porn off Instagram, how will it keep malicious AI out of your accounts? This is a major tech company that has invested heavily in AI, and yet it can't stop an explicit AI companion app from being advertised to your son?

Since we can't put AI back in Pandora's box, our only choice is to spur the development of more powerful AI to defend us from malicious AI, whether that's a scammer's program or a hostile foreign AI looking to harm our infrastructure and people. This is a frightening prospect: AI training is still a developing practice, and even beneficial AI can be dangerous if not properly aligned. But the AI race is on, and we need to start throwing serious muscle behind it.

The future I described in the opening paragraphs is right around the corner and, in many ways, already here. You're in the war now. We need to get serious about fighting it.
