When we talk about automation, algorithmic decision-making, and artificial intelligence, we often think about the future, rather than the present. The media and political class talk breathlessly about the possibilities of such advanced technology—driverless cars, integrated workplaces, and just-in-time customer experiences. Meanwhile, for the dystopians among us, artificial intelligence evokes the prospect of faceless machinery dominating human society—the robots will eat all the jobs and use our human bodies as batteries to power their ruthless and sociopathic new order. Even tech boosters have worried about the future of superintelligence. As author Kate Crawford pointed out a few years ago, a sector of affluent white men who pondered “the singularity” imagined an existential threat to human life. “Perhaps for them, the biggest threat is the rise of an artificially intelligent apex predator,” she wrote. But the present problem is that engineers are already building all kinds of discrimination into algorithms: “For those who already face marginalization or bias, the threats are here.”
In reality, automated decision-making systems already influence millions of people’s lives every day, often in insidious ways and without accountability. Perhaps most obviously, the content that we see in many popular online spaces is not the same as that seen by our neighbors. Online spaces are curated to suit us, or at least a version of us as homo consumptio. Companies like Google and Facebook collect as much data as possible because the more finely grained the information they hold about users, the more precisely these platforms can segment audiences for advertising. Advertisers can select from targeted audiences that are automatically generated, without human oversight. As a result, Facebook has permitted digital redlining by allowing advertisers to exclude people from particular ads based on race. Google’s automatically generated ad categories have enabled companies to advertise to those with affinities for hate speech. No doubt there is plenty more of this discriminatory and illegal behavior going on that has thus far avoided scrutiny.
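To make the mechanism concrete, here is a minimal sketch, in Python, of how an automated audience builder can reproduce redlining. The data, field names, and the “multicultural” affinity category are all hypothetical, standing in for the kinds of inferred attributes platforms generate; the point is that the exclusion happens automatically, with no human reviewing what the rule encodes.

```python
# A minimal sketch (hypothetical data and field names) of automated audience
# generation: the platform segments users on inferred attributes and applies
# an advertiser's exclusion rule with no human review of what it encodes.
user_profiles = [
    {"id": 1, "inferred_affinity": "multicultural", "interests": ["apartments"]},
    {"id": 2, "inferred_affinity": "none",          "interests": ["apartments"]},
    {"id": 3, "inferred_affinity": "multicultural", "interests": ["gardening"]},
]

def build_audience(profiles, include_interest, exclude_affinities):
    """Return user ids matching an interest, minus excluded affinity groups."""
    return [
        p["id"] for p in profiles
        if include_interest in p["interests"]
        and p["inferred_affinity"] not in exclude_affinities
    ]

# A housing advertiser asks for apartment-seekers but excludes an "affinity"
# category that functions as a proxy for race: digital redlining, produced
# automatically and at scale.
print(build_audience(user_profiles, "apartments", {"multicultural"}))  # -> [2]
```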
The secondary data market, where companies buy and sell data sets in pursuit of insights, has generated the capacity to build or buy automated tools that reach customers outside of these platforms. Predatory industries, like payday lenders, gambling companies, and for-profit private education providers, extensively manipulate web users—using automated processes to seek out those who are vulnerable and exploit that vulnerability for profit. Gambling companies purchase data from banks to help them find new punters. For-profit schools post fake job ads that are then shown to particular categories of people so that when they call up about the job, they can be sold a dodgy degree. Meanwhile, Christian activists have made use of this secondary data industry for their own ends: a vigilante group used advertising data from Grindr (purchased legally) to dox a closeted priest. It is only a matter of time before anti-choice fanatics try something similar against those seeking or providing abortions. Such stories cast a fresh hue on the term “surveillance capitalism.”
Automated decision-making is also influencing access to basic services, like mainstream banking and credit. Job applications are routinely processed by machines, which have, on various occasions, shown biases against women and people with English as a second language. Once at work, automated processes govern more and more of our existence. It is ironic that the trope of the automation revolution was always that robots would take all the jobs, when in fact they have often ended up displacing middle management. Most obviously, in Amazon fulfillment centers, tasks are allocated and managed through machines, and employees have been fired automatically for non-compliance. But more commonly used technologies, like Microsoft Office, permit highly granular surveillance of worker productivity. Or consider that opaque performance algorithms deployed in public school systems in the United States have resulted in highly competent teachers losing their jobs for supposed performance failures, as documented by Cathy O’Neil in her book Weapons of Math Destruction.
From decisions about welfare entitlements to child protection, governmental authorities use algorithms to make all sorts of life-altering determinations. In her book Automating Inequality, Virginia Eubanks argues that current models of data collection and algorithmic decision-making create what she calls a “digital poorhouse,” which serves to control collective resources, police our social behavior, and criminalize non-compliance. And this is a global phenomenon, in which “citizens become ever more visible to their governments, but not the other way around,” in the words of a 2019 United Nations report.
While the automation revolution has been directed at optimizing penny-pinching when it comes to social services, it has heralded a gilded age for law enforcement. Police budgets are being spent on scaling up capacity with sophisticated technology. Predictive policing systems have been rolled out in major cities across the United States, at considerable expense. Algorithms also dominate the lives of those in the justice system, including decision-making tools to suggest sentencing, predict recidivism, and govern probation. In 2016, ProPublica investigated the “risk scores” assigned by an algorithm to more than seven thousand people in Florida with arrest records. “The score proved remarkably unreliable in forecasting violent crime: only 20 percent of the people predicted to commit violent crimes actually went on to do so.” Moreover, the risk scores came with “significant racial disparities,” labeling black defendants as future risks “at almost twice the rate” of white defendants. A New York Times report in early 2020 noted that one of the problems groups have in challenging apparent bias is that automated systems are “taking humans and transparency out of the process.” The algorithm that ProPublica investigated in Florida, for example, was developed by Northpointe, a for-profit company that marketed one of the most widely used risk assessment programs in the nation. Now known as Equivant, the company considers its software to be proprietary.
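The disparity ProPublica documented can be made concrete with a small sketch of the kind of audit they performed: compare how often people who never went on to reoffend were nonetheless labeled high risk, broken down by race. The handful of records below are invented purely for illustration.

```python
# A toy sketch of a disparate-impact audit: compute the rate at which people
# who did NOT reoffend were nonetheless labeled high risk, by group.
# The records below are invented for illustration only.
records = [
    # (race, labeled_high_risk, reoffended)
    ("black", True,  False), ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(rows, group):
    """Share of non-reoffenders in `group` who were labeled high risk."""
    non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(records, group), 2))
# With this invented data, black defendants who never reoffended are flagged
# at twice the rate of white defendants -- the pattern ProPublica reported.
```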
Last but very much not least, there is the automation of warfare. Automated weaponry is an obvious example, but this is much more than attaching a bomb to a drone (which is hardly a challenge for most militaries). It includes collecting and labeling data for training object identification algorithms. It also involves using algorithms to identify patterns of human interactions or social graphs, to permit sophisticated targeting of adversaries. For example, the NSA ran a program ominously called Skynet, which used mobile phone data to track the movements of people in places like Pakistan, mapping whom they engaged with and whether they were known to be associated with terrorism. One of the key individuals identified by the program as a potential target was in fact an Al Jazeera journalist—an absurd outcome.
The Skynet is Falling
In other words, the “artificially intelligent apex predator” is not a future problem. These systems are ubiquitous and inscrutable, and there are few, if any, barriers to their widespread implementation. The assumption justifying this trend is that computers make better decisions than humans. It’s hardly difficult to debunk this in practical terms, with countless examples of algorithms gone wrong. Facial recognition tools are a common example, given their repeated failure to correctly identify faces, especially those of racial minorities. Indeed, former Detroit police chief James Craig estimated that facial recognition failed 96 percent of the time. More generally, research indicates that over 85 percent of artificial intelligence projects used in business fail. The failure rates of these automated systems are a stunning indictment of a technology that is the object of such significant financial and political investment. While there is often some reason to use automated systems, there are stories of human misery and mistakes pretty much everywhere you look.
One answer to this, of course, is that we just need to improve these systems with more data and more refinement to reduce the failure rates. But there is also a bigger question about what exactly we mean by the term fail. Consider a 2019 report from Georgetown Law’s Center on Privacy and Technology: through a CVS store’s surveillance system, an NYPD officer obtained a blurry photo of a man reportedly stealing beer. The photo itself produced no matches in the facial recognition database, but the officer noted the suspect looked like the actor Woody Harrelson. He ran a photo of Harrelson through the database, found a match, and that person was arrested for the offense. For the NYPD (and their institutional political backers), facial recognition helped them find a suspect and solve a crime, which looks like the system working. For the rest of us, it’s an appalling reminder of the fragility of rights in the face of technology and power.
As Stafford Beer liked to remind us, the purpose of a system is what it does. At present, systems of automation support a lucrative industry and provide cover to governments that prefer to spend money on technology-as-magic rather than grapple with social inequality and dysfunction. This is politically possible because our present system allows the benefits of the automation revolution to be privatized and the harms to be socialized. The price of both success and failure in various automation experiments is paid by the poor and marginalized, upon whom these systems are tested and refined, even as they are denied the resources and capacity to challenge them.
Would it be better if these systems worked well and failed less? On one level, it’s hard to avoid answering yes. But it is equally important not to let the parameters of this debate be set by inbuilt assumptions about the utility and necessity of this technology, and about how it is produced. Given we live in a digital dystopia of decision-making gone haywire, should we embrace a counter-utopian ideal of being failure-free, or a realist rejection of the idea that such a thing is possible?
Magical Irrealism
One way to start to untangle this conundrum is to pull back the curtain on these automated systems and look to the material components of their creation. Arthur C. Clarke observed that “any sufficiently advanced technology is indistinguishable from magic.” Like the final reveal in the Emerald City, an examination of the industry shows how, too often, the automation revolution is less a formidable phenomenon than a matter of a white guy pulling levers, projecting omnipotence while lacking self-awareness. Understanding automated systems as human creations, rather than magic, is a good place to start.
The first point to make is that for a computer to help you make a decision, you need a data set to train it on. An original sin of this field is that these data sets are often either improperly obtained or inherently biased, or both. Clearview AI, for example, famously scraped Facebook for images on which to train its facial recognition technology. This was illegal, a plundering in plain view with Facebook too uninterested to notice, but that hasn’t stopped Clearview from raising $30 million from investors in a Series B round. The secondary data market is built from countless data sets, usually collected for an entirely different purpose and then deidentified to avoid privacy regulation. The uses made of this data extend far beyond what most people would expect. You might never have heard of Acxiom, Epsilon, and Experian, some of the biggest secondary data brokers in the world, yet chances are they hold a profile about you.
But even legal data sets have their limitations. Perhaps surprisingly, a key data set for training computers in natural language processing is the trove of emails and documents collated and made publicly available in the litigation arising from the collapse of Enron. That corporate culture was hardly a model of good behavior by managers, nor representative of broader views. An alternative data set for such work is Google News archives, which come with their own problems, arising from the fact that news reports reflect the preoccupations of journalists and those in power. What does it mean for natural language processing that a significant amount of news media in recent decades has been obsessed with the nexus between Islam and terrorism, for example? As Crawford observes, “Datasets in AI are never raw materials to feed algorithms: they are inherently political interventions.”
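A toy illustration makes the point. The mini-corpus below is invented, and the counting is far cruder than anything a real language model does, but the principle carries over: simple co-occurrence statistics, the raw material of many language systems, inherit whatever the source texts were preoccupied with.

```python
# A toy illustration (invented mini-corpus) of why "datasets are political
# interventions": word co-occurrence counts reflect what the source texts
# chose to write about, not the world as it is.
from collections import Counter
from itertools import combinations

corpus = [
    "islam linked to terrorism in report",
    "terrorism suspect linked to islam says official",
    "islam community celebrates festival",
]

cooccurrence = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        cooccurrence[(a, b)] += 1

# In this tiny corpus, "islam" co-occurs with "terrorism" twice as often as
# with "festival" -- a skew inherited directly from the corpus itself.
print(cooccurrence[("islam", "terrorism")])   # 2
print(cooccurrence[("festival", "islam")])    # 1
```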
Once you have a corpus for training purposes, next you need something to train. This involves hardware that is built by real people. From children mining cobalt and coltan in the Democratic Republic of Congo, to the builders of machines in the factories of Shenzhen, human laborers are the creators of computing power. Such supply chains are invariably political. When Elon Musk insisted, “we will coup whoever we want!” in response to the U.S.-backed coup in Bolivia, it betrayed how Silicon Valley’s innovative mindset is underpinned by an imperialist materiality. Bolivia holds some of the world’s largest reserves of lithium, most commonly used in batteries. The upshot is a highly unequal distribution of computational power. In 2018, OpenAI found that the amount of computational power used to train the largest AI models had doubled every 3.4 months since 2012. That kind of computing power rests in the hands of a very small number of companies, like Amazon, Microsoft, Facebook, and Google, or the U.S. military. The automation revolution has made plain that those who wield the most advanced technology wield the most power.
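Some back-of-the-envelope arithmetic shows what a 3.4-month doubling time implies. The six-year span in the sketch below is an assumed window for illustration, not OpenAI’s exact measurement period.

```python
# Back-of-the-envelope arithmetic on the OpenAI figure cited above: if the
# compute used to train the largest models doubles every 3.4 months, the
# cumulative growth over a multi-year span is staggering.
doubling_period_months = 3.4
span_months = 6 * 12          # assumed six-year window, for illustration only

doublings = span_months / doubling_period_months
growth_factor = 2 ** doublings
print(f"{doublings:.1f} doublings -> roughly {growth_factor:,.0f}x more compute")
# About 21 doublings, i.e. on the order of millions of times more compute --
# a scale of resources available only to a handful of companies and militaries.
```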
The other key materialist input into the production of automated tools and artificial intelligence is human labor. The application of computing power to data sets requires the bootstrapping of models, from labeling and categorizing data, to training the tool, to correcting errors and incorporating feedback. These tasks are often performed by microtask workers all over the world on platforms like Amazon Mechanical Turk. The exploitation is rife: these workers rarely receive the minimum wage. Indeed, a 2018 UN report found that the paltry sums are undermined by the unpaid work necessary to receive them:
On average, workers spent twenty minutes on unpaid activities for every hour of paid work, searching for tasks, taking unpaid qualification tests, researching clients to mitigate fraud and writing reviews.
Nine out of ten microtask workers also had their tasks “rejected” or payment refused entirely, often without recourse. Such work raises a raft of philosophical questions, too: What does it mean to label faces by gender or race? And is such a task likely to be influenced by the cultural context of the person doing the labeling work? Even if we put these complex questions aside, it is clear that those doing the labor that goes into automation technologies are routinely some of the most exploited workers in the world.
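The arithmetic behind that quoted finding is simple but damning. The nominal rate in the sketch below is an assumed figure for illustration, not one drawn from the report.

```python
# A quick calculation based on the figures in the UN report quoted above:
# twenty minutes of unpaid work for every paid hour quietly cuts the
# effective wage well below the advertised rate.
nominal_hourly_rate = 5.00            # assumed advertised rate, USD (illustrative)
unpaid_minutes_per_paid_hour = 20     # figure from the report quoted above

total_minutes_worked = 60 + unpaid_minutes_per_paid_hour
effective_hourly_rate = nominal_hourly_rate * 60 / total_minutes_worked
print(f"Effective rate: ${effective_hourly_rate:.2f}/hour")   # $3.75/hour
# A quarter of the nominal wage disappears before rejected tasks and refused
# payments are even taken into account.
```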
As an observer of the industrial revolution, Karl Marx was optimistic about the future of productive work. He imagined the possibility of a workday that lasted six hours, rather than the twelve that were the norm in his time. The consequence of large-scale industrial machinery, Marx imagined, would be the transformation of the role of human labor to the point where productive work became simply “conscious linkages” between the “mechanical and intellectual organs” of automated machinery. It’s certainly possible to imagine this kind of future arising from the automation revolution, but it is also hard to escape the reality of the injustice that is integral to its materialist predicates. The pursuit of fully automated luxury communism runs the risk of glossing over such injustices and, invariably, reproducing them.
Too Big to Succeed
How should we approach the automation revolution? Kate Crawford, in her careful and detailed book Atlas of AI, argues for a politics of refusal. In much the same way that a realistic goal of the climate movement is to pursue a low-growth, more egalitarian society, there is much to be said for approaching the AI industry from the perspective of seeking to apply the brakes to the runaway train. Instead of asking how we can optimize artificial intelligence, Crawford argues we should be asking why we need to apply it at all. Campaigns to ban the use of facial recognition technology have gained momentum around the world, and they could be a foundation for similar moratoriums.
Equally, the international supply chains that go into producing automation technology lend themselves to integration with broader movements for justice at earlier stages of production. Movements for climate justice, struggles for indigenous self-determination, and labor organizing all challenge the technology industry’s capacity to keep building and centralizing the material resources behind the automation products that are imposed upon the poor and marginalized.
But at the pointy end of deployment, I would argue that there will always, stubbornly, be a role to play for a more capacious idea of rights—including the right to appeal, the right to privacy, and concepts of individual human dignity. Such claims often come across as embarrassingly unambitious, as though they accept as a given the foundational problems of the automation revolution instead of embracing a more visionary idea of liberation. But while it might be unfashionable or conservative, talking about the regulation of automation from a rights-based perspective is both urgent to deal with the problems of the present and a foundational requirement for technological systems based on justice.
If we think there is a role to play for automated technologies—if we have answered “yes” to Crawford’s why-question—and aspire to a world where productive work is minimized as a precondition of human liberation, then we have to accept that these technological systems must be built and maintained. To that end, it is worth remembering that the point of an algorithm is to discriminate. The aim can never be to eliminate this, but rather to ensure the discrimination is intentional. If the purpose of a system is what it does, we need to impart intention into our use of automated technologies by building in systems of rights for those who experience these systems in unintended ways. Automated technology will always encounter unprecedented situations and generate unintended consequences. The point is not to prevent the unpreventable; it is to ensure that these moments are opportunities for learning and feedback, rather than an offloading of liability. One of the most powerful tools we have against a system working in unintended ways is to ensure that individuals will always have meaningful recourse when these problems arise.
Calls for accountability and transparency might sound technocratic and unambitious. But the right to an explanation of how an automated decision-making tool arrived at a particular outcome militates against the idea that such technology can be dismissed as magical rather than material. The right kind of regulation would require documentation, controlled experimentation, caution, and consideration. That is, it would emphasize the “consciousness” of the conscious linkages between machines that Marx imagined. Auditing and impact assessments are not just bureaucratic, legal demands at the fringes of the issue; they pose a challenge to the business model of proprietary automation in the present and lay a foundation for more democratic technologies in the future. In a world in which governments and industry are taking every opportunity to process us as raw material and treat us as deidentified subjects defined by our metadata, there is a political potency in demanding that people be treated with individual dignity and respect.
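What might that documentation look like in practice? Here is a minimal sketch of a decision record that carries enough context to be explained, audited, and appealed. The field names and the benefits-eligibility scenario are illustrative assumptions, not drawn from any particular regulation or system.

```python
# A minimal sketch of documentation for an automated decision: every
# determination carries enough context to be explained, audited, and
# appealed. Field names here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str        # whom the decision affects
    model_version: str     # which model or rule set produced it
    inputs: dict           # the data the decision was based on
    outcome: str           # what was decided
    explanation: str       # human-readable reason for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    appeal_lodged: bool = False   # hook for meaningful recourse

    def lodge_appeal(self) -> None:
        """Flag the decision for human review: recourse built in, not bolted on."""
        self.appeal_lodged = True

# Hypothetical benefits-eligibility decision, recorded and then appealed.
record = DecisionRecord(
    subject_id="applicant-042",
    model_version="benefits-eligibility-v3",
    inputs={"declared_income": 18000, "dependents": 2},
    outcome="denied",
    explanation="Declared income above threshold for household size",
)
record.lodge_appeal()
print(record.outcome, record.appeal_lodged)
```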
A more democratic vision of the automation revolution is therefore not one in which technological failures never happen, but one in which failure is collectively accounted for and learned from. A lot less fail fast, and a little more fail slow. A rejection of the “black box” and an insistence on a box pried open.