Police AI Rollout: Big Brother's New Crystal Ball?

Some years back, I read a great short story by legendary science fiction writer Isaac Asimov. The story, titled "All the Troubles of the World," took place in a far future where a global computer, Multivac, had taken over all of humanity's issues, from health care to education to, well, everything. This wasn't the internet; not even the great Asimov foretold that, at least not completely. No, this was one computer, an artificial intelligence, that every person could interact with. And in the story, Multivac had just been given the task of predicting crimes before they happened, following which the police would simply go talk to whoever Multivac had predicted was a probable perp, and crime dropped, worldwide, to near-zero.

Then a startling prediction emerged: A man was predicted to commit a murder. The police talked to him, and the probability of the murder went up. They put him under house arrest, and the probability went up. They put him in jail, and the probability went up. Meanwhile, his minor son asked Multivac how to help his father, and Multivac spat out a card with detailed instructions: Go to this place, tell the guards this to be allowed in, go down this corridor, to this terminal in this room, and when a light turns red, flip this switch.

Back to that in a moment. In the United Kingdom, that place I often use to illustrate what America's Left would like to do, the possibility of using AI to predict crimes is under evaluation right now.

In a recent interview with the Telegraph, Sir Andy Marsh, the head of the College of Policing, said that police were evaluating up to 100 projects in which officers could use AI to help tackle crime. These include such things as “predictive analytics” to target criminals before they strike, redolent of the 2002 film Minority Report. The aim, according to Home Secretary Shabana Mahmood, is to put the “eyes of the state” on criminals “at all times.” This is to be outlined further in an upcoming white paper on police reform.

The expansion of AI use in British policing is continually being sold as innovation, efficiency and protection. But in reality it marks a decisive step towards a society in which liberty is treated as a risk to be managed. Wrapped in the language of safety and reform, AI represents a quiet but profound transformation of the state’s relationship with its citizens: from upholder of the law to permanent overseer of behaviour.

Think about that for a moment. Targeting criminals before they strike. Now, the British don't have our Bill of Rights, but I can't help but suspect that a large number of Brits are thinking, "Wait, what? Cor blimey, that doesn't sound right." And it doesn't. Any structure of basic human rights should prohibit any interference with a person who has committed no crime, regardless of what they may or may not do in the future, regardless of what some faceless AI thinks they may be capable of, somewhere in the depths of its algorithms.

How far might this be taken? In the United Kingdom today, people are being jailed for what they say on the internet. Could this be taken to the extreme where someone faces some infringement on their liberty and property because of what some AI thinks they might say? I'm no doomsayer when it comes to AI, even though I make scant use of it myself. It's a genie that's not going back in the bottle. But this proposal seems fraught.

What about the presumption of innocence? What about unreasonable search and seizure? What about the right to face one's accuser, when your accuser is a program? Would anyone think it a good idea to actually arrest someone because a program indicated they are likely to commit some infraction? At what level of probability would a person be interrogated? Imprisoned?

There's nothing good about this idea. But there are plenty of people out there who would want to do just this, and make no mistake, plenty of those people have political intent. Fortunately for us, we have a Constitution that protects us from many of these possibilities. But the rest of the world?

This is a bad idea that should never see the light of day.

So what happened in the Asimov story? Turns out that because the boy was a minor, he counted under his father's Multivac account. Two programmers figured out what was going on, and asked Multivac what it had told the lad — being a computer, it gave them the information, and they rushed to the place in question and stopped the boy just in time. The programmers realized, you see, that if the boy had flipped that switch, Multivac would have been burned out and destroyed.

Realizing that they had, indeed, placed all the troubles of the world on this AI, this world-spanning computer, one of the programmers typed into his console, "What do you, Multivac, want more than anything in the world?"

Multivac hummed and spat out a card. The programmers looked at it. It said: I WANT TO DIE.

We can't predict what an AI might do. We certainly can't bet our safety and security on it. Fortunately for us here in the United States, it won't pass constitutional muster. But elsewhere?
