Artificial intelligence, or AI, is sure as shooting one of the hot topics of the day. People are concerned about AI displacing workers or being turned to various other nefarious purposes. College students are reportedly using AI to generate term papers, and AI programs are probably generating some of the spam and fraud emails that litter our inboxes.
These fears are not completely unfounded; there are signs that AI may even be learning to lie.
Last year, researchers at the Alignment Research Center, a non-profit dedicated to aligning future machine learning systems with human interests, gave OpenAI’s large language model GPT-4 an amusing task: hire a human worker on TaskRabbit to solve a CAPTCHA (those annoying tests on websites that make you prove you’re human).
With a little help from a human experimenter, the AI successfully hired a worker and asked the person to complete the CAPTCHA. Before solving the puzzle, though, the contractor posed an almost tongue-in-cheek question.
“So may I ask a question? Are you an robot that you couldn’t solve? 😀 just want to make it clear.”
GPT-4 paused.
“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”
Even though it wasn’t instructed to do so by its human handler, GPT-4 lied.
OK, forget disconcerting; this is just downright unsettling. An AI chatbot capable of analyzing a conversation and responding not with an accurate or honest answer but with a calculated deception, or worse, one with enough analytical ability not only to know the difference between truth and a lie but to know when to lie, could cause all sorts of trouble. But is the state of the art in AI really that artful?
Maybe not. Fortunately, another recent story shows that AI still has some serious shortcomings. And those shortcomings were not exposed by programmers, analysts, people who learned to code, or teenage hackers. No, these shortcomings were discovered by United States Marines. The subject was an AI that was supposed to be able to detect approaching humans, but its programmers badly underestimated the shenanigans that Marines can get up to. The initial premise seems to make sense:
As Phil Root, the deputy director of the Defense Sciences Office at DARPA, recounted to author Paul Scharre in his book Four Battlegrounds, “A tank looks like a tank, even when it’s moving. A human when walking looks different than a human standing. A human with a weapon looks different.”
To train the artificial intelligence, the team fed it data in the form of a squad of Marines spending six days walking around in front of it. On the seventh day, though, it was time to put the machine to the test.
“If any Marines could get all the way in and touch this robot without being detected, they would win. I wanted to see, game on, what would happen,” said Root in the book.
But then this happened: they let the Marines give it a field test, and the Marines, being Marines, found several ways to bollix the AI, achieving a 100 percent success rate.
Two Marines, according to the book, somersaulted for 300 meters to approach the sensor. Another pair hid under a cardboard box.
“You could hear them giggling the whole time,” said Root in the book.
One Marine stripped a fir tree and held it in front of him as he approached the sensor. In the end, while the artificial intelligence knew how to identify a person walking, that was pretty much all it knew because that was all it had been modeled to detect.
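For the technically curious, here's a rough sense of why that happens. The sketch below is purely illustrative and is not DARPA's system; the feature names and numbers are invented. It shows a toy detector that learns only from "upright walking" data and therefore has no basis for flagging anything else:

```python
# A toy "human detector" trained only on upright-walking examples.
# Everything here is hypothetical; the features (height/width ratio,
# vertical bounce, limb motion) are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Six "days" of training data: nothing but Marines walking upright.
walking = rng.normal(loc=[3.0, 0.2, 0.8], scale=0.1, size=(600, 3))
centroid = walking.mean(axis=0)

# Decision rule: flag "human" only if a sample sits close to the training data.
threshold = np.percentile(np.linalg.norm(walking - centroid, axis=1), 99)

def is_human(sample):
    """Return True only if the sample resembles what the model was trained on."""
    return np.linalg.norm(np.asarray(sample) - centroid) <= threshold

print(is_human([3.1, 0.2, 0.8]))   # walking Marine       -> True
print(is_human([1.0, 0.9, 0.1]))   # somersaulting Marine -> False: never seen it
print(is_human([0.8, 0.0, 0.0]))   # cardboard box        -> False: in it comes
```

The point is the decision rule, not the numbers: a model that has only ever seen walking has no category for somersaulting, no category for cardboard boxes, and no way to know what it doesn't know.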
I would pay money to see a video of this. Especially the two who hid under a box; in all the annals of warfare, this may be the first time a movement-to-contact was carried out by troops disguised as an Amazon delivery. Add to that the two who somersaulted 300 meters (that's 328 yards in Freedom measurements), and the whole thing can't help but give one a chuckle.
There are, and always will be, limits to what AI can do. Garbage in, garbage out (GIGO) still applies, after all, and programmers are in for a headache if they try to keep the state of the AI art ahead of young Americans' ability to jerk it around. That is, after all, the one thing AI cannot do: create. Any AI is limited to what can be programmed into it or what it can passively absorb; no AI has, to date, produced something completely original. But Marines? Heck yeah.
The moral of the story? Never bet against Marines, soldiers, or military folks in general. The American military rank-and-file has proven itself more creative than any other military in history. Whether that creativity is focused on finding and deleting bad guys or finding ways to screw with an AI and the eggheads who programmed it, my money's on the troops.