Face Value

It’s not only the state’s use of facial recognition technology we should worry about

Recently, some of my friends have started using their faces to pay for things. Not by charming strangers in bars, but by using the iPhone Face ID feature, which has users “glance” at their phone to make a contactless payment. The glance is, in reality, usually a close encounter of phone and face.

Face ID uses Automated Facial Recognition (AFR): every time you use it to make a payment, the Apple TrueDepth camera projects more than thirty thousand infrared dots onto your face, builds a depth “map” of it, and then compares this to the facial data enrolled on your phone. Every time you use Face ID, Apple also anonymously collects data including, but not limited to: the product purchased, the approximate price paid, where and when you bought it, and your facial expression while doing so. You might have preferred to keep using your fingerprint, since the “glance” is inconvenient, but on the new iPhones without a home button, Face ID is the default way to use Apple Pay.

I thought of this during the start of an ongoing UK court case, brought by a man named Ed Bridges against South Wales police over their use of AFR. Bridges says he was photographed without warning by a police patrol conducting a trial on a street in Cardiff; UK police have been conducting these unregulated trials, during which they photograph members of the public and compare them to watch lists, for several years. The process is opaque, and their criteria for putting together watch lists are questionable. Bridges’s legal team has argued that the way police use the technology breaks data protection and equality rules. A ruling is expected later this year.

At around the same time, San Francisco became the first city in the United States to ban non-federal government agencies, including police, from using facial recognition technology. Nearby Oakland, along with Somerville, Massachusetts, is already considering a similar ban. These developments have been hailed as a preliminary step toward regulating invasive facial recognition surveillance, but they apply only to government use of the technology. The growing prevalence of its use by private companies, as with Apple’s Face ID, raises questions about how effective this type of legislation can be.

During Bridges’s three-day hearing in Cardiff, police described a remarkably clunky approach to using this supposedly cutting-edge technology. There is, for example, no standardized process for deciding where and when to deploy it. Police choose somewhere busy, like a sporting event, concert, or protest, then turn up in a van and park in the middle of a crowd, hoping their machine will photograph as many faces as possible to match against their watch list. The approach is highly error-prone, thanks to the shadows that fall across faces in crowded spaces and to people wearing scarves and hats, as they often do at games.

The hearing also demonstrated the limits of the South Wales police officers’ knowledge of the inner workings of their software. They use a device developed by NEC, a Japanese company, which comes to them as a “black box.” When the question of bias arose, the police said they could not access the training set—the database of faces—used to build their tool because it is a trade secret protected by NEC. They claimed that if a face doesn’t match their watch list, it is deleted within milliseconds, but how can they be sure the NEC software does not automatically save its own copy to use in further training sets? Such questions demonstrate the fundamental difficulty of regulating government surveillance when private companies are involved.

Meanwhile, while many celebrated the new San Francisco rule as a “victory for privacy,” it effects virtually no practical change. Police in the area don’t use facial recognition technology, so the law essentially banned something that doesn’t exist. Bodies operated by the federal rather than local government are not covered by the new rule, so there will still be facial recognition at the airport. AFR surveillance also persists in the form of personal home devices like the Google-owned smart doorbell Nest Hello. And it might soon be coming to Amazon’s popular Ring doorbells, which are installed on the front doors of homes to record any activity that happens within thirty feet of the device. This footage can be stored for the homeowner or shared with others in the community via a “Neighbors” app. Such devices are legal in California, where people walking around in public spaces are apparently deemed to have no reasonable expectation of privacy. CNET recently reported that over fifty police forces across the United States have partnered with Ring to offer free or discounted doorbells to citizens, in some cases using taxpayer-funded subsidies. And last year, Amazon filed a patent for a system that would use AFR to compare the surveillance footage from Ring doorbells to police watch lists and then notify law enforcement automatically if it found matches. It’s a technique similar to the one used in current UK policing operations, but covering much more ground, and much, much slicker.

Regulations for facial recognition technology in the private sector are scarce at present. A bill currently making its way through the early stages of Congress, the Commercial Facial Recognition Privacy Act, would change that—but even if it passes, it would only require companies to seek affirmative consent before “using facial recognition technology to identify or track an end user” and again before sharing that data with third parties. In other words, companies would only have to make you tick “yes” on one of those digital permissions forms that appear when you open an app, and could then track your face for as long as they’d like. Anyone who owns a phone knows this is hardly a barrier. Nobody really has a choice about the way they interact with these platforms: you let them use you in certain ways or you lose access.

The geographic limits of laws, set against the omnipresence of the internet, also make it hard to design localized rules governing this kind of technology that actually work. Some states, such as Texas, do have laws that prevent companies from collecting facial data without permission; in December 2017, when Google added a feature to their Arts & Culture app that used AFR to match users’ selfies with famous paintings they resemble, it was not available in Texas. But, not wanting to be kept out of the fun, local news outlets like the Houston Chronicle ran cheery articles explaining how “some folks have had good luck” subverting the rules designed to protect their civil liberties by turning off their location services or using a VPN.

The fundamental question of the South Wales trial was: does the usefulness of this technology outweigh the expense and privacy violations involved in using it? Police argued yes, since it may help them catch dangerous wanted criminals and terrorists they would have no other way of finding. But they have also claimed that any potential privacy violations are negated by the fact that they publicize when they will be using the technology. Now, I’m not a terrorist or a wanted criminal, but if I were, I would probably get into the habit of checking the South Wales Police Twitter feed before heading out to big events. The technology is either needlessly invasive, useless, or both. A report by Cardiff University academics evaluating the effectiveness of AFR as used by South Wales police found no conclusive evidence that it had reduced repeat offending, measurably increased detections, or improved community cohesion. It concluded that the force had not fully anticipated “the effort and investment required to get AFR to regularly contribute to tangible policing outcomes.”

Apply this same test to the private sector. Facial recognition technology seems like the sort of thing that might have lots of exciting futuristic uses, and in the spaceships and spy lairs of popular films, for example, it does. In the real world, my friends find it slightly less convenient than using a fingerprint to buy things, and slightly more convenient than using a bank card. It can also be used to, say, mass-tag people in Facebook photo uploads, or be built into security doorbells and some of the other smart home features made by Nest, among others.

It’s hard to argue that any of these represent a life-altering usefulness, but part of their function is to train the technology so that sleeker, more responsive systems can be put to other uses. A system like the one behind the Apple TrueDepth camera used in Face ID can only learn to pick out faces in a crowd by first practicing on lots of faces, one glance at a time. At CES 2019, the yearly consumer technology trade show, the Procter & Gamble-owned skincare brand SK-II showcased a new “Future X Smart Store,” in which facial recognition is used to track people walking around the shop and guide them toward recommended products based on their skin and apparent age. Meanwhile, some shopping malls in the UK and Australia already use surveillance cameras with facial detection technology installed in digital billboards to identify the age, gender, and mood of shoppers and serve them advertisements accordingly. Frankly, who wants this, other than the advertisers themselves?

The conspiratorial line of thinking that targeted advertisements have sinister mind control powers may be overblown, but it certainly doesn’t do us any good to move through the world in a constant, swirling vortex of fragments of information we have had some vague virtual interaction with. I don’t want to walk around shops and have pictures of weird jokes or things I clicked on accidentally flashed at me constantly on the off chance I might spend some money. Something I want even less is to be monitored while I shop so that pictures of things I pick up or glance at can haunt me on the internet for weeks afterward. A popular privacy campaign slogan is “the right to be forgotten”; we should also have the right to forget about products we don’t want and don’t care about.

There is one group that stands to benefit from the widespread adoption of facial recognition technology, however: the very wealthy, with their smart home digital assistants like Google’s Nest Hub Max, which uses facial recognition to display different family members’ messages, schedules, and music recommendations. In the future, such products could even adjust temperature and light to different family members’ preferences (presumably family members in such a home have very particular and diverging preferences and never all find themselves in the house at the same time). Notice the inequity here: those who can’t afford a fancy surveillance doorbell live under the watchful eye of those who can, and the data of the masses is harvested to make minor improvements to the smart homes of the tiny proportion of people who can afford them.

Orwellian is the word most often used by those railing against AFR used by the state; you can find it in many of the articles about the South Wales police case and the new San Francisco law. It is worth remembering that when George Orwell wrote 1984, in the late 1940s, the UK state was in the midst of a massive public housing initiative, and the energy and transportation industries were being rapidly nationalized. If Orwell foresaw some of our own bleak future, he certainly didn’t predict how powerful the private sector would become. Other critics have invoked the example of China as a warning, where facial recognition technology has been used both as a method of controlling the largely Muslim Uighur minority and to monitor people in public places as part of a misguided attempt to foster cohesion through a social credit system. It’s not clear to me that this is fundamentally different from using the technology to follow people around constantly, prodding them to spend more money, or to scan their faces at football games, except as a matter of scale.

None of this is to suggest that we shouldn’t fight against the use of facial recognition by law enforcement agencies; in many ways it represents the worst of policing, with vast amounts of money and resources spent monitoring low-impact crimes like pickpocketing, and police effectively acting like an elastic band to keep anyone ever involved in petty crime bouncing back to the courts again and again. But without a corresponding legal focus on private companies, the problem of surveillance and privacy doesn’t go away.

In fact, it might even get worse: the South Wales case demonstrates how police who work with the private sector can hide behind the curtain of trade secrets and “black box systems” when probed in court. A report on algorithms in policing by The Law Society of England and Wales stated that the lack of transparency rules governing private and public sector partnerships is one of the key barriers to legally challenging technology like AFR. Companies are not covered by rules like the Freedom of Information Act that help hold the government accountable; privately held companies, which many tech companies are, don’t even have to answer to shareholders. Not that this necessarily makes a difference: an attempted shareholder revolt over Amazon’s decision to continue selling its Rekognition tool to police recently failed, attracting less than 3 percent of the vote.

When the law draws an arbitrary line between the public and private sector’s use of this technology, it creates an “everything the light touches” style of regulation, with some elements of facial recognition up for scrutiny and others arbitrarily protected. Police are then incentivized to team up with the private sector in the knowledge that some of their surveillance methods will remain safely beyond the reach of the law. A state which outsources its surveillance efforts to private companies is still a surveillance state, just one with very little oversight.