Holy Big Brother, Batman!
A program being piloted in the United Kingdom could have some scary ramifications for the future of law enforcement. Several counties are reportedly developing artificial intelligence technology to improve their policing efforts. But critics worry about how this could further expand the nation’s surveillance state.
A pilot program in the UK that used artificial intelligence (AI) to enhance police capabilities has proven successful, leading experts to warn that it could pave the way for predictive policing. The AI-powered system helps to catch drivers committing road violations, such as driving without a seat belt or using a phone while driving.
During the trial period, around 239 drivers were caught breaking road rules, while another trial caught 590 drivers failing to wear seat belts over a 15-day period. The program is not fully automated; it reportedly involves human review to check for errors.
Fox News reported:
British police in different communities have experimented with an artificial intelligence-powered (AI) system to help catch drivers committing violations, such as using their phones while driving or driving without a seat belt. Violators could face a fine of £200 ($250) for using a phone while driving.
Experts have raised concerns that surveillance-heavy countries such as the UK could invest more heavily in AI for law enforcement, which could move them closer to authoritarianism as an unintended, or intended, consequence.
Experts have been warning about potential ramifications of such technology. Christopher Alexander, CCO of Liberty Blockchain, told Fox News: “I really think it is the predictive analytics capability that if they get better at that, you have some very frightening capabilities.”
Safer Roads Humber is an organization that works with Humber police to “provide courses on driver safety and publish information on our engagement activities,” among other things, according to its website. Ian Robertson, partnership manager for the group, told Fox News: “Personally, I believe a mobile solution would work best as it would ensure road users change their behavior at all times rather than just at a static point.”
As if that wasn’t terrifying enough, Brian Cavanaugh, a visiting fellow at the Heritage Foundation’s Border Security and Immigration Center, cautioned that countries like the United Kingdom, which is quite fond of surveilling its citizens, could use this technology to become even more authoritarian than they already are.
“I absolutely see this as a slippery slope,” Cavanaugh said. “You’re going from an open and free society to one you can control through facial recognition [technology] and AI algorithms – you’re basically looking at China.
“The U.K. is going to use safety and security metrics to say, ‘Well, that’s why we did it for phones and cars.’ And then they’re going to say, ‘If you have, say, guns … what’s next on their list of crimes that you crack down on because of safety and security?'” he continued. “All of a sudden, you’re creating an authoritarian, technocratic government where you can control society through your carrots and sticks.”
There are several potential pitfalls with law enforcement entities employing artificial intelligence to enhance their predictive policing capabilities.
The use of AI in law enforcement poses significant risks to privacy, civil liberties, and fundamental human rights. AI algorithms are only as good as the data on which they are trained, and if that data is biased, the algorithm will be biased too. The potential for discriminatory outcomes, including racial profiling and unjust targeting of marginalized communities, is a significant concern with the use of AI in policing.
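To make the "biased data in, biased decisions out" point concrete, here is a minimal toy sketch (all numbers and neighborhood names are hypothetical, not taken from any real system): if historical arrest counts reflect where police patrolled rather than where offenses actually occurred, a naive model trained on those counts simply reproduces the patrol bias.

```python
# Toy illustration of biased training data: the true offense rate is
# identical in both hypothetical neighborhoods, but historical patrols
# were skewed toward "A", so recorded arrests inherit that skew.
true_offense_rate = {"A": 0.05, "B": 0.05}
past_patrols = {"A": 80, "B": 20}

# Arrests recorded are roughly proportional to patrols * offense rate,
# so the data reflects patrol intensity, not underlying behavior.
arrests = {n: past_patrols[n] * true_offense_rate[n] for n in past_patrols}

def predict_next_patrols(arrest_history, total_patrols=100):
    """Naive 'predictive' allocation: patrols proportional to past arrests."""
    total = sum(arrest_history.values())
    return {n: total_patrols * a / total for n, a in arrest_history.items()}

next_patrols = predict_next_patrols(arrests)
print(next_patrols)  # the 80/20 patrol bias is reproduced exactly
```

Worse, the new patrols generate the next round of arrest data, so the skew can compound over time even though actual offense rates never differed.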
Moreover, the use of AI in law enforcement is a slippery slope toward a future of predictive policing. Predictive policing relies on algorithms to predict where crimes will occur and who will commit them, which is not only unreliable but also a violation of due process rights. It effectively criminalizes people before they have even committed a crime, leading to a society where people are punished for things they might do, rather than for crimes they have actually committed. It is a real-life version of the “pre-crime” concept depicted in the movie “Minority Report.”
Finally, the use of AI in law enforcement raises significant concerns about accountability and transparency. AI algorithms can be opaque, making it difficult to understand how they work and to identify errors or biases in their decision-making processes.
Law enforcement agencies must be accountable to the people. The use of AI undermines that accountability by removing human judgment and decision-making from the process. Ultimately, the use of AI in law enforcement is a dangerous path that could lead to a future where civil liberties will likely be curtailed, and society is governed by algorithms rather than by people.
The U.K. appears to be on the way to this type of society. But that doesn’t mean the trend will stay there. Sure, the U.K. does not have the Constitutional protections that Americans have. However, we have already seen that our government is more than willing to seek other ways to violate our natural rights. Moreover, there are likely plenty of individuals in the elite class who would relish having better ways to conduct surveillance on American citizens at their disposal.
The opinions expressed by contributors are their own and do not necessarily represent the views of RedState.com.