Quantifying Injustice


Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan

What is predictive policing?

Predictive policing is a law enforcement technique in which officers choose where and when to patrol based on crime predictions made by computer algorithms. This is no longer the realm of prototype or thought experiment: predictive policing software is commercially available in packages with names such as HunchLab and PredPol, and has been adopted by police departments across the United States.

Algorithmic advice might seem impartial. But decisions about where and when police should patrol are part of the edifice of racial injustice. As the political scientist Sandra Bass wrote in an influential 2001 article, "race, space, and policing" are three factors that "have been central in forwarding race-based social control and have been intertwined in public policy and police practices since the earliest days" of United States history.

One potential problem with predictive policing algorithms is the data used as input. What counts as a crime? Who is willing to call the police, and who is afraid to report? What areas do officers visit often, and what areas do they avoid without a specific request? Who gets pulled over, and who is let off with a warning? Just as a YouTube algorithm might recommend videos with more and more extremist views, machine learning techniques applied to crime data can magnify existing injustice.

Measuring bias in predictive policing algorithms

Dr. William Isaac

In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. This program is based on research by the anthropologist P. Jeffrey Brantingham, the mathematician Andrea Bertozzi, and other members of their UCLA-based team. The PredPol algorithm was inspired by efforts to predict earthquakes. It is specifically focused on spatial locations, and its proponents describe an effort to prevent "hotspots" of concentrated crime. In contrast to many other predictive policing programs, the algorithms behind PredPol have been published. Such transparency makes it easier to evaluate a program's effects and to test the advice it would give in various scenarios.

Dr. Kristian Lum

Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve this problem, they turned to a technique used in statistics and machine learning called the synthetic population.

The term "synthetic population" brings to mind a city full of robots, or perhaps Blade Runner-style androids, but the actual technique is simpler. The idea is to create an anonymized collection of profiles that has the same demographic properties as a real-world population.

A synthetic population isn't about Blade Runner. (Photo by Zach Chisholm, CC BY 2.0.)

For example, suppose you are interested in correlations between choices for a major and favorite superhero movies in a university's freshman class. A synthetic population for a ten-person freshman seminar might look something like this:

  1. Education major; Thor: Ragnarok
  2. Education major; Wonder Woman
  3. History major; Wonder Woman
  4. Math major; Black Panther
  5. Music major; Black Panther
  6. Music major; Black Panther
  7. Music major; Thor: Ragnarok
  8. Undeclared; Black Panther
  9. Undeclared; Thor: Ragnarok
  10. Undeclared; Wonder Woman

This is a toy model using just a couple of variables. In practice, synthetic populations can include much more detail. A synthetic population of students might include information about credits completed, financial aid status, and GPA for each individual, for example.
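
To make this concrete, here is a minimal sketch (in Python) of how one might generate such a toy synthetic population by sampling from assumed marginal distributions. The proportions, like the seminar itself, are invented for illustration:

    import random

    random.seed(0)

    # Hypothetical marginal distributions for the freshman seminar;
    # every proportion here is invented for illustration.
    majors = {"Education": 0.2, "History": 0.1, "Math": 0.1,
              "Music": 0.3, "Undeclared": 0.3}
    movies = {"Thor: Ragnarok": 0.3, "Wonder Woman": 0.3,
              "Black Panther": 0.4}

    def sample(dist):
        """Draw one category according to its probability weight."""
        return random.choices(list(dist), weights=list(dist.values()))[0]

    # Each synthetic student is an anonymized profile: no names, just
    # attributes distributed like the real population's.
    seminar = [{"major": sample(majors), "movie": sample(movies)}
               for _ in range(10)]

    for student in seminar:
        print(student["major"], "|", student["movie"])

A sketch like this only matches each variable's marginal proportions; real synthetic-population methods, such as iterative proportional fitting, aim to reproduce joint distributions estimated from survey microdata.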

Lum and Isaac created a synthetic population for the city of Oakland. This population incorporated information about gender, household income, age, race, and home location, using data drawn from the 2010 US Census. Next, they used the 2011 National Survey on Drug Use and Health (NSDUH) to estimate the probability that somebody with a particular demographic profile had used illegal drugs in the past year, and randomly assigned each person in the synthetic population to the status of drug user or non-user based on this probabilistic model. They noted that this assignment included some implicit assumptions. For example, they were assuming that drug use in Oakland paralleled drug use nationwide. However, it's possible that local public health initiatives or differences in regulatory frameworks could affect how and when people actually use drugs. They also pointed out that some people lie about their drug use on public health surveys; however, they reasoned that people have less incentive to lie to public health workers than to law enforcement.
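
The assignment step can be pictured as a Bernoulli draw for each synthetic resident, with success probability taken from the survey estimates for that resident's demographic profile. Below is a minimal sketch under that reading; the profiles and probabilities are made up, not taken from the NSDUH:

    import random

    random.seed(1)

    # Hypothetical past-year drug use probabilities by demographic
    # profile; Lum and Isaac estimated theirs from the 2011 NSDUH.
    use_prob = {
        ("18-25", "under $25k"): 0.17,
        ("18-25", "$25k and up"): 0.15,
        ("26-64", "under $25k"): 0.12,
        ("26-64", "$25k and up"): 0.10,
    }

    # A few synthetic residents; the real synthetic population covered
    # the whole city of Oakland, built from 2010 Census data.
    residents = [
        {"age": "18-25", "income": "under $25k", "tract": "A"},
        {"age": "26-64", "income": "$25k and up", "tract": "B"},
        {"age": "18-25", "income": "$25k and up", "tract": "B"},
    ]

    # Bernoulli draw: each resident is marked as a drug user with the
    # probability attached to their demographic profile.
    for person in residents:
        p = use_prob[(person["age"], person["income"])]
        person["drug_user"] = random.random() < p

    print(residents)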

A West Oakland transit stop. (Photo by Thomas Hawk, CC BY-NC 2.0.)

According to Lum and Isaac's probabilistic model, individuals living anywhere in Oakland were likely to use illegal drugs at about the same rate. Though the absolute number of drug users was higher in some locations than others, this was due to greater population density: more people meant more potential drug users. Lum and Isaac compared this information to data about 2010 arrests for drug possession made by the Oakland Police Department. Those arrests were clustered along International Boulevard and in an area of West Oakland near the 980 freeway. The variations in arrest levels were significant: Lum and Isaac wrote that these neighborhoods "experience about 200 times more drug-related arrests than areas outside of these clusters." These were also neighborhoods with higher proportions of non-white and low-income residents.

The PredPol algorithm predicts crime levels in grid locations, one day ahead, and flags "hotspots" for extra policing. Using the Oakland Police crime data, Lum and Isaac generated PredPol crime "predictions" for every day in 2011. The locations flagged for extra policing were the same locations that already had disproportionate numbers of arrests in 2010. Combining this information with their demographic data, Lum and Isaac found that Black people were roughly twice as likely as white people to be targeted by police efforts under this system, and people who were neither white nor Black were one-and-a-half times as likely to be targeted as white people. Meanwhile, estimated use of illegal drugs was similar across all of these categories (white people's estimated drug use was slightly higher, at just a bit more than 15%).
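
The published research behind PredPol models crime in each grid cell as a self-exciting point process: a background rate, boosted by recent events whose influence decays over time, much like aftershocks following an earthquake. The sketch below illustrates that general idea with one exponentially decaying kernel per cell; the parameter values, cell names, and event times are all invented, whereas the real model fits its parameters to data:

    import math

    # Assumed parameters for an exponentially decaying triggering kernel.
    MU = 0.1     # background event rate per cell per day
    THETA = 0.5  # expected number of follow-on events each event triggers
    OMEGA = 0.2  # decay rate of an event's influence

    def intensity(event_days, t):
        """Conditional intensity of one grid cell at day t: background
        rate plus decaying contributions from all past events."""
        return MU + sum(THETA * OMEGA * math.exp(-OMEGA * (t - s))
                        for s in event_days if s < t)

    # Invented past event times (in days) for three toy grid cells.
    cells = {
        "cell near International Blvd": [1, 3, 4, 6, 8, 9],
        "cell in West Oakland": [2, 5, 7, 9],
        "cell elsewhere": [4],
    }

    # Predict one day ahead and flag the highest-intensity cells.
    today = 10
    rates = {name: intensity(days, today) for name, days in cells.items()}
    hotspots = sorted(rates, key=rates.get, reverse=True)[:2]
    print(rates)
    print("flagged for extra patrols:", hotspots)

Because the intensity is driven entirely by past recorded events, cells that already produced many arrests keep being flagged, which is exactly the mechanism Lum and Isaac's experiment probes.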

A poster reports police brutality on International Blvd. in Oakland. (Photo by Evan Hamilton, CC BY-NC 2.0.)

This striking disparity appears even under the assumption that increased police presence does not increase arrests. When Lum and Isaac modified their simulation so that targeted "hotspots" generated additional arrests, they observed a feedback effect: the algorithm predicted more and more crimes in the same places, which in turn directed more police presence and more intense surveillance at just a few city residents.
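
A toy simulation shows how such a loop can run away. Here two areas have identical underlying offense rates, but patrols go wherever past arrests are highest, and patrolled areas record a larger share of their offenses; every number is invented:

    import random

    random.seed(2)

    TRUE_RATE = 0.5        # daily offense probability, identical in both areas
    CATCH_PATROLLED = 0.9  # chance an offense is recorded with patrols present
    CATCH_OTHER = 0.3      # chance an offense is recorded otherwise

    counts = {"A": 1, "B": 0}  # historical arrests; A starts one ahead

    for day in range(365):
        flagged = max(counts, key=counts.get)  # today's predicted "hotspot"
        for area in counts:
            if random.random() < TRUE_RATE:  # an offense actually occurs
                catch = CATCH_PATROLLED if area == flagged else CATCH_OTHER
                if random.random() < catch:
                    counts[area] += 1  # recorded, and fed back into the data

    print(counts)

Over a simulated year, area A's one-arrest head start compounds into a large recorded disparity, even though nothing about the underlying behavior differs between the two areas.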

In a follow-up paper, the computer scientists Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian worked with a pair of University of Utah undergraduate students to explore feedback effects. They found that if crime reports were weighted differently, with crime from areas outside the algorithm's "hotspots" given more emphasis, intensified surveillance on just a few places could be avoided. But such adjustments to one algorithm cannot solve the fundamental problem with predictions based on current crime reports. As Lum and Isaac observed, predictive policing "is aptly named: it is predicting future policing, not future crime."
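
One way to picture the reweighting idea is as inverse-propensity weighting: each recorded incident counts for more when it was less likely to be observed, canceling the extra discoveries produced by patrols. This is a sketch of that general idea applied to the toy simulation above, not the exact scheme from the paper:

    import random

    random.seed(3)

    TRUE_RATE = 0.5
    CATCH_PATROLLED = 0.9
    CATCH_OTHER = 0.3

    weighted = {"A": 1.0, "B": 0.0}  # weighted incident counts

    for day in range(365):
        flagged = max(weighted, key=weighted.get)
        for area in weighted:
            if random.random() < TRUE_RATE:
                catch = CATCH_PATROLLED if area == flagged else CATCH_OTHER
                if random.random() < catch:
                    # Weight each incident by 1 / (chance of recording it),
                    # so patrol-driven discoveries count for less.
                    weighted[area] += 1.0 / catch

    print(weighted)

In expectation, both areas now accumulate weighted reports at the true offense rate, so the predictor no longer amplifies its own patrol decisions; but, as Lum and Isaac's observation makes plain, no reweighting can turn biased arrest records into an unbiased measure of crime.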

Further reading

  • Sandra Bass, "Policing Space, Policing Race: Social Control Imperatives and Police Discretionary Decisions," Social Justice, Vol. 28, No. 1 (83), Welfare and Punishment In the Bush Era (Spring 2001), pp. 156-176 (JSTOR.)
  • Danielle Ensign, Sorelle A. Friedler, Scott Neville, Carlos Scheidegger, and Suresh Venkatasubramanian, "Runaway Feedback Loops in Predictive Policing," Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), 2018.
  • Kristian Lum and William Isaac, "To predict and serve?" Significance, October 10, 2016. (The Royal Statistical Society.)
  • Cathy O'Neil, Weapons of Math Destruction, Crown, 2016.
