
Body Language

We have a lot more to fear from biometric surveillance than facial recognition alone

In Lincolnshire, England, the local police force has indicated that they will be testing out a kind of “emotion recognition” technology on CCTV footage which they collect around Gainsborough, a town in the county. Lincolnshire’s police and crime commissioner, Marc Jones, secured funding from the UK Home Office for this project, but the exact system that the police will use, and its exact supplier, are still not confirmed. All that the police force knows is that they want to use it.

It may seem like an overreach for a provincial police force to roll out a nebulous new technology, even the description of which sounds sinister, but I’ve started to think there is a less obvious reason behind their choice. Facial recognition has been the subject of much scrutiny and regulation; in the UK, a man named Ed Bridges took the South Wales Police to court over the use of automated facial recognition and won. But the South Wales Police force has indicated that they will continue to use their system, with the chief constable Matt Jukes saying that he was “confident this is a judgement we can work with.”

Legislative attempts to push back against surveillance tech are in the works outside of the UK too, and some are more expansive in their targets. One such bill, introduced by Democratic lawmakers in the United States, would ban federal agencies from using or spending money on not just facial recognition but also gait recognition and voice recognition without the explicit authorization of Congress, and it would impose financial penalties on states that don’t pass their own bans. But legislation that attempts to eliminate only one kind of harm—such as the use of facial recognition by law enforcement—can lead to workarounds, particularly as private companies are not subject to such limitations. It also doesn’t account for the many other ways that our physical bodies make us legible to systems of surveillance.

The consequences of a narrow focus on facial recognition (FR) are already visible. Some police forces may not want to find themselves ensnared in the same kind of costly legal battle that the South Wales Police fought for years. For them—and for many other organizations—alternative kinds of algorithmic surveillance are springing up to work in tandem with FR, or to take its place, particularly in cities or regions where a moratorium on FR has been imposed. Startups and established technology companies alike have expanded their research into identifying the parts of the body and kinds of behavior that make people uniquely identifiable.


In 1997, Ann Davis wrote an article called “The Body as Password” in Wired, pointing out the many ways the body was already being decoded. She cites the example of Coca-Cola using “hand geometry” to stop workers from “buddy-punching” a late coworker’s time card. Davis also notes that demand for digital finger imaging was growing more rapidly than demand for other biometrics because of pressure from law enforcement. The drive for biometric technologies was already well underway before Davis wrote about it, and in the piece she predicts, with some sense of wonder, that there would be more than fifty thousand “biometric devices” in use by the year 2000.

More than twenty years on, companies like the Japanese tech firm NEC (which has contracts with law enforcement in more than twenty states and around the world) have developed earprint recognition technology, the logic being that the contours of the inner ear are hard to modify and often just as uniquely identifiable as a fingerprint. The Chinese company Watrix has developed gait recognition, a kind of anthropometric surveillance system that claims to identify people from the way they walk or move. They say that they can identify someone from as far as fifty meters away; as Watrix’s CEO told the AP in 2018, “you don’t need people’s cooperation for us to be able to recognize their identity.” Companies like Affectiva have pioneered emotion recognition, which purports to discern how someone is feeling through indications like the movement of their mouth or the corners of their eyes and the inflections in their speech. Amazon is working to introduce payments via palm print at their Amazon Go stores, in service of a “frictionless” experience. And a relatively mysterious company called MITRE works in research and development for U.S. federal agencies; one of their services is identifying people’s fingerprints via Facebook photos.

Between federal agencies, police departments, and private companies, the spread of these technologies can be difficult to pin down. If one form of surveillance becomes publicly reviled—or at least, subject to some level of scrutiny—then two others will pop up in its place. Activists and researchers who work in the area have frequently emphasized that facial recognition just happened to be the surveillance technology that matured the most quickly. When I spoke to the sociologist Simone Browne, author of Dark Matters: On the Surveillance of Blackness, she pointed out that facial recognition is part of an industry that prides itself on finding workarounds.

“The amount of money that’s being spent on biometrics, from 2010, to 2015, to 2020, is exponential,” she says. “All of the things that we post, all of the things we do—whether that’s behavioral or bodily things—it’s all part and parcel of the identity industrial complex. Whether it’s pattern recognition, using people’s acne scars or tattoos, or facial recognition, they work in tandem.”

Browne’s “identity-industrial complex,” a term she coined in 2005, refers to the growth of for-profit identity management technologies, which are often deployed by immigration agencies processing visas or companies that regulate arrivals into airports or seaports. The state, military, and the private sector all get bound up in the provision and management of biometric services—like fingerprint and iris scans—and seek to make the whole of a person identifiable through these markers.

But identity-based markers aren’t only legible through complex technological systems. During anti-police brutality protests in early June 2020 in Philadelphia, a woman allegedly set fire to two cop cars. Weeks later, the FBI tracked her down using footage of her forearm tattoo and the slogan on her T-shirt, the latter of which they traced to Etsy, then to her LinkedIn, where they found contact and location information. Etsy may not be a conscious part of the identity-industrial complex, and social media platforms are well-established sites of data leakage, but this kind of detective work would nonetheless have been close to impossible, or at least required a warrant and significant time, before 2005, when Etsy was founded.

The Philadelphia example isn’t an outlier: a collaborative project between Purdue University and the Indianapolis Metropolitan Police Department led to the development of a bespoke app that interprets and maps gang graffiti and tattoos, based on a large image database compiled by law enforcement agencies and the Indiana prison system. Tattoo recognition is still a developing field, but research has been underway since at least 2014, when a project called Tatt-C (supported by the FBI, among other organizations) began collecting photos of tattoos, many of which were taken of prisoners around the United States, to train computer systems to match and “decode” them. Tattoos are already commonly used by agencies like ICE to determine whether someone should be deported, and it’s not a stretch to imagine how much more profiling and suffering could result from federal agencies actively using tattoo recognition algorithms.

The Covid-19 pandemic has also legitimized and hastened the spread of some kinds of surveillance that might otherwise have been met with more pushback. Software to monitor students as they take tests remotely has become ubiquitous in the last year, as has software to monitor employees working from home. And as we’re told that it is crucial to get back to workplaces, shopping malls, and restaurants, a whole reopening industry has trafficked heavily in forms of biometric surveillance. As lockdowns around the world began to ease, at least for a short period in the summer, many corporations and governments relied on biometric technology like fever scanners at the entrances to office buildings, even though some of these measures don’t have a significant impact on the spread of Covid-19.


Simone Browne writes about the phenomenon of “digital epidermalization,” where digital technologies are thought to produce a “truth” about the body that can’t be revealed by the person alone. We see this kind of thinking in police forces that might blindly trust an algorithm’s assignment of guilt without bothering to check someone’s alibi, leading to a wrongful arrest. But however ineffective it may be at its intended purpose, biometric surveillance has successfully created a hierarchy of inherent worth, one underpinned by outdated ideas about criminality and desirable behavior from centuries ago, when physiognomy was viewed as a legitimate science—a way of telling whether someone was trustworthy or likely to commit a crime based on the way their forehead or nose was shaped. Obviously, these hierarchies don’t spring up fully formed; they reflect preexisting biases and systemic oppression within societies. But biometric surveillance in its current form only reifies these ideas.

These systems also work in tandem with techno-utopian beliefs about the way that the world works: the idea that fundamentally political matters, like how to make our cities better places to live in, can be solved with indiscriminate data collection. The lure of Silicon Valley solutions is that they present as objective and rational, running on nothing but numbers and cold, hard reality.

Of course, the fact is that many of these technologies are unproven in the wild, and they are often used on unwitting individuals in their very first excursions. The accuracy rate of the automated facial recognition that has been rolled out by police across the UK is just 19 percent, according to an independent review of two years of FR trials. Voice recognition systems, which already exist in commercial devices like Alexa, misidentify words spoken by black people up to 16 percent more often than they do words spoken by white people. But whenever there’s an opportunity to make a profit, Silicon Valley’s emphasis on the infallibility of numbers seems to fade away. “Most industries use some form of specialist expertise, which is then held accountable through public engagement; in public services, usually that’s measured by impact, and in technology, it’s usually measured by adoption,” writes Sean McDonald in “Technology Theatre.” The success of technologies like facial recognition or palm print recognition isn’t measured by their accuracy or even the value of their use cases—just by how many people end up using them.

Such hypocrisy is not exclusive to tech startups. But biometric surveillance isn’t just harmful because it’s often inaccurate. These systems also legitimate the viewpoint that any data collected, even without a specific reason, can end up being valuable at some point—even if it means inventing a use for it. Emotion recognition systems, for example, are trained on the assumption that certain tics or facial movements registered by computer vision systems naturally betray the actual emotions a person is feeling. Of course, people smile when they see their friends, or laugh when they watch something funny, but we smile for other reasons too, like discomfort or out of politeness. This seems obvious—surely anyone who has been outside or interacted with another person knows it—so there must be some other reason that Amazon is rolling out Halo, a fitness wristband that can apparently detect emotion in users’ voices. Firms and startups that deploy emotion recognition technology aren’t trying to understand people better, or to help them improve their social skills. Rather, they want to use this information to modify people’s moods, or to monitor their reactions to a particular product in the hopes of making it more desirable.

Writing in OneZero, Jathan Sadowski suggests taking a Marie Kondo-esque approach to dismantling surveillance infrastructure, from predictive algorithms to hidden stingrays: if a technology doesn’t improve human well-being or social welfare, then it’s worth getting rid of. But decision-makers in a police department, or a congressman running for re-election in a town that’s been rocked by protests, are likely to view “social welfare” through a different lens than I do. And as Os Keyes points out in Real Life Mag, it’s not enough to just get rid of individual technologies, because “leaving the skeleton of the surveillance infrastructure intact means quite simply that it can be resurrected.” Tearing this entire infrastructure down to its studs—and making it impossible to build on—is the only way forward.