Predictive Policing and Automated Peace

If you’re a teenager in Los Angeles planning a party and looking forward to a night of drink, debauchery and chemical highs, you’d better keep quiet about it.

The twenty-four-seven “eComm Unit,” a police division established in 2012 as a kind of online patrol squadron, is combing the web for people like you—especially if you mention underage sex or nitrous oxide. The eComm team is made up of very busy people: its “Social Media Dispatchers” (or “law enforcement technicians,” as the L.A. Sheriff’s Department calls them) work around the clock looking for crimes in the planning stages, identifying “everything from illegal parties advertising illicit drugs to assaults and arsonists.”

This brand of citizen surveillance (or “strategic listening”) is just one example of a growing field of “predictive policing,” which employs emergent technologies to update the traditional role of police. Instead of simply waiting for crime to happen and then responding, a mixture of high tech and a helpful informing public seeks to nip deviant activity in the bud before it ever takes place.

The commercial world wants a slice of the action too, of course. A brochure from global tech outsourcing company Accenture urges law enforcement to “Tap the Power of Social Media to Drive Better Policing Outcomes,” while start-up ECM Universe touts itself as a “social media surveillance solution” offering software that crawls sites “rife with extremism” in order to “alert investigators of warning signs.” “We got 2 drug dealers and 1 drug party in just the first 3 days of the free trial!” the website boasts. Private companies such as X1 Discovery are also helping police and prosecutors capture metadata on suspects.
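
What would such a “surveillance solution” actually involve under the hood? ECM Universe’s software is proprietary, but the basic mechanism it advertises, scanning posts against a watchlist of alarming terms and alerting a human when enough of them pile up, can be sketched in a few lines of Python. The terms, weights, and threshold below are all invented for illustration:

```python
# A toy sketch of watchlist-based "strategic listening". The real products are
# proprietary; the terms, weights, and alert threshold here are invented.
WATCHLIST = {
    "nitrous": 2,
    "drug party": 3,
    "flash mob": 1,
}
ALERT_THRESHOLD = 3

def score_post(text: str) -> int:
    """Sum the weights of every watchlist term appearing in a post."""
    lowered = text.lower()
    return sum(w for term, w in WATCHLIST.items() if term in lowered)

def should_alert(text: str) -> bool:
    """True if a post crosses the threshold and gets routed to a dispatcher."""
    return score_post(text) >= ALERT_THRESHOLD

print(should_alert("huge drug party saturday, bringing nitrous"))  # True
print(should_alert("bake sale saturday"))                          # False
```

Crude as this is, it captures the shape of the thing: a list of words somebody decided were suspicious, and a threshold somebody decided was alarming.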

From a law enforcement angle, this all makes perfect sense. After all, companies like Facebook already hold way more data on citizens than the police do, with a built-in ability to monitor their users at an unprecedented scale and—helpfully—without having to go through the courts to do it. For instance, Facebook routinely screens its users’ activity for signs of “grooming,” the early stages of child molestation; in 2011, it began using PhotoDNA, which compares uploaded photos against a database of known child-exploitation images maintained by the National Center for Missing & Exploited Children. And it regularly cooperates with police subpoenas by handing over reams of user data including photos, posts, messages, contacts, and past activity: possibly the most elaborate snitch dossier imaginable. (Twitter, by contrast, has fought the authorities in refusing to hand over user data.)
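
PhotoDNA itself is proprietary Microsoft technology, so what follows is only a rough analogue: a simple “average hash,” which boils an image down to a 64-bit fingerprint that survives resizing and recompression, then compares fingerprints by counting differing bits. A minimal sketch, assuming the Pillow imaging library:

```python
# Illustrative stand-in for hash-based image matching; PhotoDNA's actual
# signature is far more robust than this simple "average hash".
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to an 8x8 grayscale image; set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_image(upload: str, known_hashes: set, threshold: int = 5) -> bool:
    """Flag an upload whose fingerprint is near any hash in the known database."""
    h = average_hash(upload)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```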

In cases involving murderers or pedophiles, this all seems relatively reasonable. But if cops can use computing technology to monitor potential criminals-in-the-making, does it also make sense for them to use it to predict crimes that have not yet been committed? This is the question law enforcement agencies all over the U.S. and in parts of Europe have been putting to the test, with a raft of new algorithmic crime-prediction packages.

Availing themselves of all the exciting opportunities of “big data,” these prediction platforms—such as the PredPol software already employed in L.A., and the NYPD’s Domain Awareness System—analyze past trends in crime statistics to anticipate future ones. The numbers are then crunched to provide suggestions, such as PredPol’s pink squares: potential crime hotspots deemed worthy of increased patrols. While it’s not quite Minority Report yet, the statistics seem impressive: PredPol boasts somewhat hazily of reducing aggregate crime in two areas of Atlanta by 19 percent since 2010, while in a Los Angeles trial the program correctly predicted 6 percent of crimes, compared with human analysts’ 3 percent.
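
PredPol’s actual model is proprietary (its founders have described self-exciting point-process models borrowed from earthquake aftershock forecasting), but the general shape of turning past incidents into pink squares is easy to sketch. In the toy version below, the grid cell size, the decay half-life, and the input format are all assumptions:

```python
# A minimal sketch of grid-based hotspot scoring. Real systems are far more
# sophisticated; the cell size, half-life, and data format here are invented.
import math
from collections import defaultdict

CELL = 150.0      # grid cell size in meters (assumed)
HALF_LIFE = 30.0  # an incident's influence halves every 30 days (assumed)

def score_cells(incidents):
    """incidents: iterable of (x, y, days_ago). Recent incidents weigh more."""
    scores = defaultdict(float)
    for x, y, days_ago in incidents:
        cell = (int(x // CELL), int(y // CELL))
        scores[cell] += math.exp(-math.log(2) * days_ago / HALF_LIFE)
    return scores

def top_hotspots(incidents, k=10):
    """The k cells a patrol map would shade pink."""
    scores = score_cells(incidents)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Note what the model never sees: why crime was recorded where it was, and who was doing the recording, which is exactly the blind spot at issue below.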

Algorithms have a way of appearing neutral—merely scientific, rather than political. But blind trust in the impartiality of these numbers can lead to dangerous justifications of entrenched social bias. What if the pink squares happen to repeatedly coincide with black or poor areas? A human decision to increase patrols there would be open to the charge of profiling; if PredPol suggests it, any resultant bias can simply be blamed on the software. Moreover, this kind of smart statistical intelligence requires a large number of electronic eyes and ears, with troubling implications for privacy. Take Oakland’s ShotSpotter system, a large network of microphones that detect gunfire, map its location, and send an alert to patrol officers within twenty seconds. That does sound a bit more like Minority Report.
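
ShotSpotter’s pipeline is likewise proprietary, but the core trick, multilateration, is textbook physics: microphones at known positions hear the same bang at slightly different times, and the offsets pin down the source. A deliberately naive sketch, with invented sensor positions and none of the acoustic classification a real system needs:

```python
# A brute-force time-difference-of-arrival (TDOA) fit over a square area.
# Sensor layout, area size, and grid step are all invented for illustration.
import math

SPEED_OF_SOUND = 343.0  # meters per second, roughly, at 20 degrees C

def locate(sensors, arrival_times, area=1000.0, step=5.0):
    """Grid-search the point whose predicted arrival-time differences
    best match the observed ones."""
    observed = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    n = int(area / step)
    for i in range(n + 1):
        for j in range(n + 1):
            x, y = i * step, j * step
            dists = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
            predicted = [(d - dists[0]) / SPEED_OF_SOUND for d in dists]
            err = sum((p - o) ** 2 for p, o in zip(predicted, observed))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Invented demo: four sensors, a shot at (400, 250).
sensors = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
shot = (400.0, 250.0)
times = [math.hypot(shot[0] - sx, shot[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate(sensors, times))  # ~ (400.0, 250.0)
```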

Such implications become more worrying still when similar algorithms are employed not at the anonymous, macroscopic level, but to determine the fate of an actual person. Software already used in Baltimore and Philadelphia aims to automate the decisions of a parole board by algorithmically predicting which parolees are likely to commit murder after being released. This is deciding the course of someone’s life the same way Amazon calculates which books to recommend to an online shopper.
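
Reporting on the Philadelphia system describes random-forest models built by the criminologist Richard Berk; the deployed software isn’t public, and the sketch below, with wholly invented features and data, is only meant to show how generic the machinery is: the same off-the-shelf statistical tooling that powers product recommendations.

```python
# A hedged sketch of an actuarial risk model of the kind reported; the
# features, training data, and labels below are entirely invented.
from sklearn.ensemble import RandomForestClassifier

# Invented features: [age_at_release, prior_offenses, age_at_first_arrest]
X_train = [[23, 4, 15], [41, 1, 30], [19, 7, 13], [35, 2, 22]]
y_train = [1, 0, 1, 0]  # toy labels: 1 = later charged with a violent offense

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The "automated parole board": a probability attached to a person.
risk = model.predict_proba([[27, 3, 16]])[0][1]
print(f"Predicted risk: {risk:.0%}")
```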

Sneaking technology like this into law enforcement effectively makes Big Tech responsible for aspects of civil governance (at a time when Silicon Valley is already under fire for secretive encryption practices). Do we really want to live in a world where the mere mention of illegal substances, protests, or other trigger words might flash up an alert somewhere—especially in light of some recent disastrous false positives? (Last year’s news story that cops visited an innocent couple who had searched Google for pressure cookers was later debunked, but notably, it was met with little surprise from readers.)

Few of us would disagree with law enforcement using social media to help expose sex offenders or murderers. Community outreach groups that watch social media and offer real-time responses to, say, suicidal messages are providing an important service. But we should keep an eye on who’s doing the watching, and who’s being watched. Constant online monitoring by both police and the private sector can make the idea of round-the-clock hidden surveillance seem natural, and it opens the door to policing any behavior seen as deviant. (The “crimes” hunted by the eComm unit include self-harm, flash mobs, and “unsanctioned protests.”)

As for the grownups spying on teenage social media feeds for mentions of “drug parties,” quite frankly, one wonders if maybe those on both sides of the equation should just stop spending so much time on Facebook and get out a bit more.