AI Perception Gap: AI Policing Reinforces And Exacerbates Societal Inequality
The AI We Fear vs. The AI We Miss
“When the robots take over, I’m blaming you!”
I’d just mentioned my plans to pursue postgraduate studies in Artificial Intelligence whilst sitting in the bar of a Mexican hostel. As I wrote earlier, my fellow backpackers consistently responded to my AI ambitions with a chorus of dystopian predictions. The pattern was always the same — educated twenty-somethings, raised in tech-savvy countries, all convinced that AI meant the end of the world. Through these conversations, two things became clear: Gen Z saw AI as tomorrow’s problem, and as tomorrow’s threat.
Two thousand miles north and a decade earlier, a different kind of AI story was already unfolding in Reading, Pennsylvania. In 2011, the city had a poverty rate of 41.3%, the highest in the USA. Inequality and crime were endemic. Just as the need for policing grew, the economic aftermath of the 2008 financial crisis forced the department to cut its head count by forty-five officers. Desperation led police chief William Heim to hunt for efficiency gains, and he found what looked like the perfect solution: an artificial intelligence system designed to increase police efficiency, reduce crime, and — rather impressively — remove human prejudice from policing. A year after deployment, Chief Heim was overjoyed to report massive efficiency gains. Yet beneath these impressive reports lay a troubling reality: the system was silently perpetuating discrimination against certain ethnic groups.
The gap between perception and reality is stark. In Mexico, travellers feared AI catastrophes and expected them tomorrow. Meanwhile, in Reading (and other cities in the USA and UK), an AI system built with noble intentions was silently reinforcing societal inequalities today. The true threat of AI isn’t in science fiction — it’s in the seemingly benign algorithms already operating in our communities.

Reading’s Digital Police Force
PredPol: Birth of an Algorithm
Reading’s police force wasn’t seeking a technological revolution — they were simply trying to survive budget cuts. What they got was crime prediction software — from the Californian big-data start-up PredPol — that transformed their city into a grid of colour-coded, sports-pitch-sized cells, each cell’s colour corresponding to the predicted probability of crime occurring there. The logic was compelling: let AI direct a smaller police force to where it’s needed most, maintaining public safety despite budget cuts.
Beyond efficiency gains, PredPol promised to revolutionise the fairness of policing itself. Human officers, shaped by years of media exposure and societal influences, inevitably carry unconscious biases that can affect their policing decisions. By replacing human intuition with AI-driven deployment, Reading’s police hoped to eliminate that bias. PredPol’s designers built their system on just two apparently neutral features: the location and timing of previous crimes. The individuals behind those crimes, along with their race and ethnicity, were deliberately excluded from the algorithm, in what the designers trumpeted as a foolproof safeguard against bias.
Numbers Don’t Lie (Or Do They?)
The system was an immediate success, with burglaries in Reading down 23% within a year of deployment. PredPol’s success quickly resonated beyond Reading’s borders. In 2013, police in Kent, UK, adopted PredPol’s system and within a year reported its targeted predictions to be ten times more effective than random patrolling. The system’s elegant interface made it instantly accessible to officers, and it seemed to deliver on its promise of fairer policing, replacing human intuition with data-driven decisions.
On paper, it seemed flawless. A data-driven system making objective decisions based purely on time and location. But as we’ll see, this apparent success story would reveal one of AI’s most insidious capabilities: the power to automate and amplify existing social biases while maintaining an illusion of objectivity.

Beneath the Algorithm
Before we dig into PredPol’s mistakes, here’s a quick technical lesson. AI systems are like young children learning to play a game: they are given an objective (what to achieve) and training data (examples to learn from). They analyse the examples (often millions of them) to learn how to achieve the objective.
The Devil in the Data
Consider the simple example of chess. After analysing millions of games, an AI learns which sequences of moves increase the likelihood of winning. If two chess players who had never learned that knights can jump over other pieces played millions of games, the resulting dataset would hide one of chess’s most powerful tactics. Any AI learning from this data would be blind to the knight’s power in defence and attack. AI simply cannot learn patterns it is never exposed to (some AI systems can generalise a little beyond their training data, but the technology PredPol used cannot).
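To make that concrete, here is a deliberately crude sketch of the idea (not how chess engines, or PredPol, actually work): a purely frequency-based “player” that rates moves only by how often they appeared in winning training games. The moves and results are invented for illustration; the point is simply that a tactic absent from the data looks worthless to the model.

```python
from collections import defaultdict

# Invented training set: (move played, did that game end in a win?)
# Crucially, no game in the data ever contains a knight jump.
training_games = [
    ("pawn push", True), ("pawn push", False),
    ("bishop pin", True), ("bishop pin", True),
    ("rook lift", False),
]

stats = defaultdict(lambda: [0, 0])          # move -> [wins, games seen]
for move, won in training_games:
    stats[move][0] += int(won)
    stats[move][1] += 1

def estimated_win_rate(move: str) -> float:
    """Rate a move purely by its win rate in the training data."""
    wins, games = stats[move]
    return wins / games if games else 0.0    # unseen moves look worthless

print(estimated_win_rate("bishop pin"))      # 1.0 -> looks strong
print(estimated_win_rate("knight jump"))     # 0.0 -> invisible to the model
```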
Imagine two identical crimes: one committed in a heavily policed, disadvantaged neighbourhood, the other in an affluent area with minimal police presence. The first crime becomes a data point; the second remains invisible to the system. Multiply this effect across decades of policing, and you have a dataset that paints certain communities as inherently criminal while others appear crime-free. PredPol’s system can’t learn about crime patterns in areas where historical data is sparse. Instead, it simply reinforces existing patrol patterns, guiding police back to the same disadvantaged communities, which in turn produces more crime data points, creating a self-perpetuating cycle of surveillance and enforcement.
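A toy simulation makes the feedback loop visible. Everything below is invented for illustration: two neighbourhoods with identical true crime rates, patrols sent only to the apparent hotspot, and only patrolled crime being recorded. It is not PredPol’s model, just the dynamic described above.

```python
# Two neighbourhoods with identical true crime rates. "A" starts with more
# recorded crime only because it was patrolled more heavily in the past.
TRUE_CRIMES_PER_DAY = 10
recorded = {"A": 60, "B": 40}

for day in range(365):
    # The "model": send today's patrols to the apparent hotspot.
    hotspot = max(recorded, key=recorded.get)
    # Only crime that happens in front of a patrol becomes a data point;
    # crime in the unpatrolled neighbourhood goes unrecorded.
    recorded[hotspot] += TRUE_CRIMES_PER_DAY

share_a = recorded["A"] / sum(recorded.values())
print(f"After a year, {share_a:.0%} of recorded crime sits in neighbourhood A,")
print("even though true crime was identical in both neighbourhoods.")
```

Under these toy assumptions the initial 60/40 skew hardens into near-total concentration within a year, and no amount of extra data corrects it, because the missing data is never collected.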
PredPol’s Flawed Goals
The training data’s flaws were just the beginning. PredPol’s fundamental objective proved equally problematic. While a chess AI’s goal is trivial (win the game), choosing an objective in real-world settings is far more nuanced. Consider a self-driving car with only one instruction: avoid collisions. The simplest way to achieve that? Never leave the garage. AI systems do exactly what they are trained to do, and PredPol trained its AI to maximise arrests by optimising police patrols.
But do more arrests mean safer streets? Not necessarily. The system’s singular focus on sending officers where arrests are likely just reinforces old patterns. Real policing isn’t just about cracking down — it’s about building trust and keeping communities safe before crimes happen. The role of a data scientist is to encode qualitative objectives like trust numerically (an extremely difficult challenge), so that an AI system can reflect real-world goals. Instead, PredPol’s model designers took the easy route and used the readily quantifiable objective of arrests. Wealthy, degree-holding data scientists took the path of least resistance, and the cost of their convenience fell squarely on the poor.
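The design choice described above can be sketched in a few lines. The patrol plans, the numbers, and the “trust proxy” are all hypothetical; the sketch only shows that whichever objective the data scientist encodes is the one the optimiser will faithfully pursue.

```python
# Hypothetical one-week outcomes for two candidate patrol plans.
plans = {
    "concentrate on historical hotspots": {
        "expected_arrests": 40,
        "trust_proxy": 0.2,   # imagined survey-based community-trust score, 0-1
    },
    "spread patrols and community visits": {
        "expected_arrests": 25,
        "trust_proxy": 0.7,
    },
}

def arrests_only(outcome):
    # The easy, readily quantifiable objective.
    return outcome["expected_arrests"]

def arrests_and_trust(outcome, trust_weight=50):
    # One of many possible ways to fold a qualitative goal into a number.
    return outcome["expected_arrests"] + trust_weight * outcome["trust_proxy"]

for name, objective in [("arrests only", arrests_only),
                        ("arrests + trust", arrests_and_trust)]:
    best = max(plans, key=lambda plan: objective(plans[plan]))
    print(f"Objective '{name}' picks: {best}")
```

Same data, different objective, opposite “optimal” plan; the hard part isn’t the optimisation but deciding, and defending, the number placed on trust.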

White Collar, Dark Data: PredPol’s Blind Spots
Now imagine if this algorithmic intensity targeted white-collar crime.
In the 2000s, financial leaders orchestrated frauds that shattered the global economy, leaving millions without jobs, homes, and healthcare. These crimes, planned in pristine offices rather than dark alleys, caused more devastation than any street offence PredPol was built to prevent. Yet from their training to their bulletproof vests, police forces are equipped to operate on the streets, not in boardrooms. Agencies chasing white-collar crime — like the FBI — have struggled for years to get close to bankers. The finance industry’s wealth and powerful lobbies ensure it remains significantly under-policed, while AI systems like PredPol’s continue directing resources towards already over-policed communities. Rich bankers dodged arrest in the past, wealthy data scientists craft AI systems that embed those historical arrest patterns in the present, and disadvantaged communities continue to face discrimination in the future. Inequality becomes code, poverty becomes data — and Silicon Valley celebrates its “efficiency gains” over $200 bottles of Dom Pérignon.
The Price of Automated Policing
Back in that Mexican hostel bar, my fellow travellers painted vivid pictures of superintelligent machines rising against humanity. While they envisioned spectacular science fiction scenarios, real AI systems like PredPol’s had already been reshaping communities through the quiet power of algorithmic bias for a decade.
PredPol’s designers proudly built a system that ignored race and ethnicity and focused on geography instead. Yet in our highly segregated cities, they overlooked a critical detail: geography itself serves as a powerful proxy for race. The system dutifully directs officers to the same neighbourhoods, generating more arrests, creating more data points, and perpetuating a self-fulfilling prophecy of “high-crime areas”.
The AI challenge isn’t spotting killer robots; it’s developing the nuanced understanding needed to recognise and address hidden algorithmic biases before they become embedded in our social fabric. The future my fellow travellers feared may never come to pass, but the present they missed is already here.

Appendix - Weapons of Math Destruction
Mathematics Ph.D. Cathy O’Neil coined the term Weapons of Math Destruction (WMDs) to describe mathematical models like PredPol’s. WMDs are designed with the very best intentions, but due to misunderstandings during design they unintentionally punish the poor, increase inequality, and threaten democracy. They operate at scale and are cheap to run, multiplying their negative impact on society. To learn about more examples of WMDs in credit scoring, insurance applications, and job hunting, I point you to O’Neil’s New York Times best-seller Weapons of Math Destruction, Chapter 5 of which is about PredPol!