Anyone who has ever seen a Hollywood dystopian action flick knows that the more automated life becomes, the closer we are to huddling in giant, underground cities while an elite few humans battle the machines for dominance of the planet.
When it comes to law enforcement activities, the use of artificial intelligence takes on a particularly sinister feel. San Francisco recently suspended its killer robot program after Americans discovered it had an actual killer robot program. Robots, not unlike those sent in to examine and contain bombs, had been approved for use against violent suspects in some “extreme situations.” Sending a robot in to assess and react to a very human situation seems like a very good way to make some very bad mistakes. Machines are not infallible, after all, and they cannot replicate the nuances and complicated processes of human judgment and interaction.
One Georgia man recently discovered the dark side of facial recognition technology when he was arrested on a warrant from Louisiana. Randall Reid, 28, was picked up in DeKalb County, Georgia, last November. Authorities had connected him to a string of purse thefts in Jefferson Parish and Baton Rouge, Louisiana.
Reid insisted he’d never been to Louisiana in his life and didn’t even know what “Jefferson Parish” was. He couldn’t have done it. The problem is, the computer said he did.
“They told me I had a warrant out of Jefferson Parish. I said, ‘What is Jefferson Parish?’” Reid said. “I have never been to Louisiana a day in my life. Then they told me it was for theft. So not only have I not been to Louisiana, I also don’t steal.”
Facial recognition software connected surveillance images to Reid’s Georgia identification records, and an arrest warrant was issued by Baton Rouge authorities. Georgia authorities executed the warrant and jailed him.
Reid was later released after authorities noticed significant discrepancies between him and the actual suspect. Reid had a mole on his face, while the suspect did not, and there was at least a forty-pound difference in weight between the two men. The one feature they had in common was that both are black.
That fact has renewed fears around the inherent danger in relying on technology in law enforcement situations. Facial recognition software is known to misidentify black people and other minority groups at a much higher rate than white people.
An MIT study of three commercial gender-recognition systems found they had error rates of up to 34% for dark-skinned women — a rate nearly 49 times that for white men.
A Commerce Department study late last year showed similar findings. Looking at false positives — instances in which an algorithm wrongly identified two different people as the same person — the study found that error rates for African men and women were two orders of magnitude higher than for Eastern Europeans, who showed the lowest rates.
It isn’t just the racial vulnerabilities that make the technology creepy. It’s just creepy. Recently, a New York City lawyer was turned away from a holiday Rockettes performance after facial recognition technology in the lobby identified her as a lawyer at a firm involved in lawsuits against the parent entertainment company, Madison Square Garden Entertainment. The woman was only accompanying her daughter’s Girl Scout troop for a day of fun, but had to leave the group after being flagged by the software.
One might say the show had the right to reject people who could be there gathering information to bolster a lawsuit, but using facial recognition software to do it creates more problems than it solves, particularly when it comes to privacy.
China already uses facial recognition to crack down on citizens in nearly every aspect of life. That thought alone should be enough to give any American pause when it comes to the technology.