Today, predictive policing is one of the biggest — and most hotly debated — topics in the field of criminal justice. Police departments have begun to augment traditional forecasting with computer algorithms to determine where crime is likely to happen, and who is likely to commit it.
There are two approaches to predictive policing, and they can be used separately or in tandem. The first looks at geographic data to discover hotspots where crime might be likely to occur in the future, allowing police departments to increase surveillance in those areas. The second analyzes social networks and behaviors to identify potential criminals or, conversely, identify those who might be at greater risk of becoming a victim.
Early studies say this analytical, statistics-based approach is working; however, critics warn that predictive policing could open a Pandora's box of problems. Here are four of the potential pitfalls of predictive policing.
Problem 1: Increased Racial Profiling
The business case for predictive policing is straightforward: It proposes to help police departments focus their limited resources in the areas where they are most needed. However, civil liberties groups say the algorithms may be crunching data that is outdated, wrong or biased.
Faiza Patel, co-director of the Liberty and National Security Program at the Brennan Center for Justice at New York University Law School, urges caution for police departments using this new technology. “It undermines the constitutional requirement that police should target people based upon an individual suspicion of wrongdoing, not statistical probability,” she writes.
Research shows that using historical data to infer trends can cause police departments to concentrate their efforts in certain neighborhoods; the increased enforcement there produces crime statistics that are skewed relative to other areas. The cycle then continues: more crime reports in an area result in greater enforcement, which leads to still more reports, and so on. This can reinforce racial biases and strain relationships between law enforcement and minority communities.
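This feedback loop can be sketched in a few lines of code. The simulation below is a minimal, hypothetical illustration, not a model of any real department: all numbers (crime rates, detection rates, the initial imbalance in the records) are assumed for the example. Two neighborhoods have identical actual crime, but one starts with slightly more recorded incidents, so it keeps attracting the extra patrol, and the recorded gap widens.

```python
# Hypothetical illustration of the enforcement feedback loop:
# both neighborhoods have the SAME amount of actual crime.
actual_crimes_per_day = {"A": 10, "B": 10}

# Assumed starting point: neighborhood A has slightly more
# recorded incidents, e.g. from historically heavier enforcement.
recorded = {"A": 55, "B": 45}

for day in range(30):
    # "Predictive" step: send the extra patrol wherever the
    # historical record shows the most crime.
    hotspot = max(recorded, key=recorded.get)

    # Feedback step: patrolled areas detect (and record) a larger
    # share of their crime (assumed rates: 60% vs. 30%).
    for n in recorded:
        detection = 0.6 if n == hotspot else 0.3
        recorded[n] += round(actual_crimes_per_day[n] * detection)

print(recorded)  # → {'A': 235, 'B': 135}
```

After 30 days, neighborhood A accounts for roughly 64 percent of recorded crime despite committing exactly half of the actual crime, and the data-driven patrol allocation looks justified by the very statistics it produced.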
Problem 2: Privacy Threats
Using data to determine hotspots may not be a privacy concern, but using data to identify individuals is. A recent Verge article describes a case in Chicago in which a man was flagged as being at risk of committing crimes — he was put on what the city calls its "heat list" — even though he didn't have a violent criminal record. Police contacted the man to tell him he was at risk and being monitored, even though he hadn't done anything wrong.
At the heart of the privacy issue is a single question: Do the privacy expectations that exist in the physical world also extend to online spaces? The legal limits online are not as clear as they are in the real world.
Problem 3: Overreliance on Technology
There's always a tendency to believe new technology will solve old problems. This is rarely the case, however, as technology is just a tool. Predictive policing software may be able to analyze data, but humans must still interpret the output in a way that's actionable and understandable. A recent report by the RAND Corporation's Safety and Justice Program says that focusing on the accuracy of the information, instead of its usefulness, could pose significant problems.
Another issue might be the quality of the data itself — input that is outdated, biased or just wrong will not produce accurate or useful results.
Problem 4: Misunderstanding of Causal Relationships
The human element is also critical to understanding the relationship between the analysis and the reasons for it. Too often, people might be quick to jump to simple cause-and-effect conclusions without determining whether the cause actually leads to the effect. Consider, as an example, what it means when many crimes are reported between 7 a.m. and 8 a.m. Can you infer that the crimes are really taking place at these times, or is the explanation that this is when people are waking up or opening their businesses and first noticing they have been affected?
Some police departments are taking steps to look beyond the analytics to discover root causes. The Marshall Project, in an article about predictive policing, says commanders in the St. Louis County Police Department meet weekly to discuss why certain areas are more prone to particular types of crime. This kind of communication — looking at the why and not just the what — is critical to making the best use of predictive policing software.
Criminal justice professionals must remember that predictive policing is promising, but still in its infancy. While some police departments are reporting early success with the software, other reports are less promising. A study in the Journal of Experimental Criminology found that individuals who had been identified through predictive analytics as potential victims of gun violence were no more or less likely to become victims than people who were not identified. One expert, John Hollywood, has said that predictive policing provides only an "incremental" advantage over other best practices.