The regulation of technology capitalism is now a mainstream topic of discussion. Privacy scandals such as the misuse of Facebook data by Cambridge Analytica, the rise of right-wing extremism, and the diminishing quality of public spaces in the digital age have generated enough public outcry that the Overton window is wide open. Aspirants for the Oval Office, like Senator Elizabeth Warren, are talking openly about breaking up big tech. She even placed a billboard in the heart of Silicon Valley with that slogan—the equivalent of a middle finger emoji at the elite of the technology industry.
Companies themselves, in anticipation of this threat, have begun to talk about regulation as well. Mark Zuckerberg penned an op-ed for the Washington Post talking about the importance of regulation and how he, as the head of Facebook, would like to see it implemented. Amazon is on the defensive, resiling from some of its most aggressive marketing tactics and voluntarily raising the minimum wage for its workers. Google developed its own Artificial Intelligence Principles and, in March of this year, set up its own ethics board, the Advanced Technology External Advisory Council. The council was designed to examine how the company’s declared principles applied to technology under development, such as facial recognition, and to issues like fairness in machine learning.
A week later, Google’s council was disbanded. Part of the controversy arose over the selection of members, which included the president of the conservative Heritage Foundation, Kay Coles James, as well as Dyan Gibbens, CEO of a drone company that has been involved in military applications of this technology. Associate Professor Joanna J. Bryson, an expert in artificial intelligence, chose not to resign as the council began to fall apart. “Believe it or not,” she tweeted, defending her decision in response to criticism about James’s involvement, “I know worse about one of the other people.”
It’s the kind of rollout that makes the Fyre Festival look like a well-organized military operation. But this was not just a problem with execution. There is a substantive problem afoot. Google designed its council without a budget for remuneration and scheduled its meetings too infrequently to allow it to act as a meaningful source of accountability. It looked a lot like it was bound to fail.
Self-regulation can work in professional settings. Most notably in fields like law, journalism, and medicine, the judgment of peers plays an important disciplinary role. But in a relatively nascent industry, where the products and services are ever more omniscient, powerful, and profitable, it is not clear at all that the same rules apply. For starters, Google’s AI principles remain so broad and malleable as to be virtually meaningless. Words like “harm,” “unfair,” and “common good” are used without nuance or detail. This remains a problem, too, in the field of machine learning, where it can be hard to anticipate drawbacks before they arise, and yet a competitive market puts pressure on companies to move fast, even if they end up breaking things.
The systems theorist Stafford Beer liked to remind us that the purpose of a system is what it does. The purpose of Google’s attempt to establish its own system of self-regulation was to allow the company to continue to develop technology for profit and circumvent formalized attempts at effective, democratic accountability that might put this at risk. All the talk about regulating the tech sector reflects a changing public appetite for regulation; users qua citizens do not think the current system is working. At this juncture, working out the relative merits of various regulatory proposals becomes more urgent.
In the development and production of digital technology, ethical questions are complex and far ranging, and we will need to apply nuanced substantive principles to resolve them. We cannot expect companies to do this themselves if we want the program to last longer than a week. But before we engage in that task, we need to develop an understanding of who has the capacity to enforce the rules we make. Regulation of technology companies is an issue that is less about the ethics of sophisticated technology and more about power.
For navigating these problems, history can be a powerful guide. Too often, technology capitalism seems without context or precedent; with its endless novelty and entrepreneurialism, it can appear to have sprung from nothing. By technology capitalism I mean the leading edge of the technology industry: a system led by a class of people who are focused on orienting digital technology toward market-based systems of profit. In the context of the digital revolution, technology capitalism treats technology almost like a force of nature, without an agenda, inevitable and unstoppable. This framing binds us to the status quo.
Yet we have examples from history that can illustrate how organized people can collaborate to hold power accountable, as consumers, voters, and workers. Campaigns framed in these various ways have the potential to build power and to serve as examples of what Nick Srnicek and Alex Williams have characterized as “non-reformist reform”—a viable project that draws on pre-existing tendencies, exacts concessions from capitalist systems, and opens up discussions about how to design things better. This is our chance to take lessons from our past and apply them in fresh ways to our present.
Consumer Frights
Until relatively recently, the regulation of technology was largely discussed through the prism of contract. Technology is often sold or accessed as a proprietary product, licensed through contract to the user. Clickwrap terms of service on major service platforms, for example, allow companies to operate broadly on a take-it-or-leave-it basis.
The law has traditionally respected the rights of private parties to make agreements on their own terms, and courts have been reluctant to intervene and set aside these freely bargained arrangements except in the most extreme cases. But many of the contracts we enter into on an almost daily basis have little in common with the context in which the law of contract developed, not least because some of the central foundations that have traditionally supported the relevant jurisprudence are absent. Modern digital contracts are characterized by grossly unequal bargaining power and by the absence of any meeting of the minds. There is no genuine consent or understanding among users about the rights and obligations of each party. It is formalized exploitation of our digital lives for profit.
Moreover, seeing consent as something that the individual is empowered to offer is something of a category error. As service platform companies collect data, they come to know not only what users have told them but also what users have not. Put differently, companies can make inferences about the data they have not collected from the data they have. If a company has sufficient intelligence about a certain class of people, it can draw conclusions about anyone who fits that demographic, on the basis that they are part of a lookalike audience. It is not possible to opt out of this; we all end up bound by decisions made by others to consent to invasive data collection practices. In some ways it is like buying a car with faulty brakes: a consumer choice that not only puts you at risk but also makes the road less safe for everyone.
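To make the mechanics concrete, consider a minimal sketch of lookalike inference. The data, field names, and similarity measure below are hypothetical illustrations, far cruder than what platforms actually deploy, but they show how an attribute a user never disclosed can be predicted from users who did disclose it.

```python
# A toy model of "lookalike" inference. All data and field names are
# hypothetical; real systems use far richer signals and models.
from collections import Counter

# Users who consented to data collection, with a disclosed attribute.
consenting_users = [
    {"age": 34, "zip": "94103", "politics": "liberal"},
    {"age": 36, "zip": "94103", "politics": "liberal"},
    {"age": 62, "zip": "73301", "politics": "conservative"},
    {"age": 58, "zip": "73301", "politics": "conservative"},
]

def similarity(a, b):
    """Crude demographic similarity: shared zip code plus closeness in age."""
    score = 1.0 if a["zip"] == b["zip"] else 0.0
    score += 1.0 / (1.0 + abs(a["age"] - b["age"]))
    return score

def infer_attribute(target, population, attribute, k=2):
    """Predict an undisclosed attribute from the target's k nearest lookalikes."""
    lookalikes = sorted(population, key=lambda u: similarity(target, u), reverse=True)[:k]
    votes = Counter(u[attribute] for u in lookalikes)
    return votes.most_common(1)[0][0]

# This user never disclosed their politics; it is inferred anyway.
holdout = {"age": 35, "zip": "94103"}
print(infer_attribute(holdout, consenting_users, "politics"))  # -> "liberal"
```

The point is not the sophistication of the method but the structure of the problem: the holdout’s refusal to share data offers no protection, because the consent of others supplies the signal.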
Of course, this raises the question: Why would anyone choose to buy a car with faulty brakes? No one would think this was a choice that reflected a functional marketplace. The invisible hand is cuffed; markets are fettered in all sorts of ways we often take for granted. Consumers, therefore, possess certain powers to hold technology capitalism accountable. One of Saul Alinsky’s rules for radicals was to “make the enemy live up to their own book of rules.” Such an approach has drawbacks, but it also has political strength.
Consumers have a right not to be sold dangerous or poorly designed products. Yet the market, and its commitment to the bottom line, incentivizes cutting corners and cultivates a carelessness toward the potential for harm. One way to appreciate this is to look at other examples of successful consumer advocacy. One of the most compelling is the regulation of the automotive industry in the late 1960s in the interests of safety.
Half a century ago, car crashes were generally blamed entirely on the driver. The prevailing attitude was that drivers were individually responsible for their vehicles, despite cars being subject to few safety standards. The automotive industry lobbied hard to preserve this mindset, claiming that safety was incompatible with selling cars. “Self-styled experts with radical and ill-conceived proposals,” warned John F. Gordon in 1961, “[think] the only practical route to greater safety [is] federal regulation of vehicle design.” Coming from the president of General Motors, such contempt for calls for regulatory reform is hardly surprising. More remarkable is that he was speaking to the National Safety Congress and was received warmly.
Through the leadership of activists, lawyers, and journalists, and by agitation among consumers themselves, this situation was transformed in a relatively short space of time. The introduction of mandatory federal safety standards led to innovation and improvements in all aspects of car design. These reforms literally saved millions of lives.
But these initiatives did not germinate within companies. The task of improving safety could not be left to industry, as market incentives militated against such an investment. “A democratic government is far better equipped to resolve competing interests and determine whatever is required [to improve safer transport] than are firms whose all-absorbing aim is higher and higher profits,” wrote Ralph Nader in 1965 in his seminal book, Unsafe at Any Speed. These safety problems were fixable; they were problems of design rather than individual responsibility. But fixing them required centrally imposed rules.
There are strategic limits to this logic. Arguments framed around consumer rights still rely on assumptions about the inherent value of the free market and a commitment to making it a more functional mode of relations. But if we neglect this field, we lose important ground in the public debate about regulation. Even the most committed libertarian would struggle to justify abolishing the Food and Drug Administration on the basis that it limits individual freedom. No one seriously thinks an ideal society would require people to test their own food to check that it has not been poisoned. We expect a well-run society to have a centrally administered process in place to enforce the relevant rules as efficiently as possible.
There is no reason why technological products could not be subject to similar testing and approval. Biased algorithms could be identified, and automated processes that produce perverse outcomes could be stopped before they ship. The process of finding these examples would itself create a platform for public debate about how to respond to them. A consumer protection lens can help us think of other potential reforms. These might include prohibiting the use of data (including its sale) for any purpose other than the one for which the user provided it. This is what consumers currently expect, but not what companies actually deliver.
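What might such testing look like in practice? Here is a minimal sketch under assumed rules of my own invention: a regulator sets a maximum permitted gap in approval rates across demographic groups, and a product fails its pre-release audit if it exceeds that threshold. The data, groups, and threshold are all hypothetical, and real fairness auditing involves many competing metrics; the point is only that such checks are mechanically straightforward to impose.

```python
# A toy pre-release bias audit. The threshold, groups, and data are
# hypothetical illustrations, not an actual regulatory standard.

def approval_rate(decisions):
    """Fraction of positive (1) decisions in a sample."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approved) for two demographic groups.
audit_sample = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0],  # 75% approved
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 25% approved
}

MAX_ALLOWED_GAP = 0.2  # an assumed regulatory threshold

gap = demographic_parity_gap(audit_sample)
if gap > MAX_ALLOWED_GAP:
    print(f"FAIL: parity gap of {gap:.0%} exceeds {MAX_ALLOWED_GAP:.0%}; do not ship.")
else:
    print(f"PASS: parity gap of {gap:.0%} is within the threshold.")
```

A regime like this would not settle the debate about what fairness requires, but it would force that debate into the open before a product reaches the public, rather than after the harm is done.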
Consumer rights advocacy is not a cure-all for the problems of technology capitalism. But the implementation of rigorous laws that hold companies accountable for dangerous products in the name of protecting consumers has worked well in the past. If used shrewdly in the present, such advocacy has the potential to transform how we understand some of the worst excesses of technology capitalism, giving us a crucial opportunity to arrest them.
Gardens of Kingly Delights
Consumer protection law is not the only field in which the power of corporations can be curtailed. The law has also long recognized the problems associated with monopolistic tendencies within markets. Democratic governments have the power to both make and enforce rules at an industry level, and it is possible to convince them to use this power on the basis that doing so is necessary to protect the integrity of our democracy.
This is the starting point for Senator Warren’s proposal to break up tech platforms. First, she argues that large tech platforms be treated like utilities and prohibited from competing in marketplaces they host. That is, in offering its search function, Google should not be permitted to prioritize its own reviews over those from Yelp, for example, and Amazon should not be allowed to function as a marketplace while also undercutting independent sellers with its own line of products. Second, Senator Warren argues that mergers that generate monopolistic advantages should be reversed. Amazon should not be allowed to own brick-and-mortar stores like Whole Foods, and Facebook should have to separate from platforms it currently owns, like WhatsApp and Instagram. “I want to make sure that the next generation of great American tech companies can flourish,” she argues. “Healthy competition can solve a lot of problems.”
These proposals have generated significant backlash from commentators and think tanks, often sponsored by technology capitalism. A great number of these concerns are unjustified or alarmist. It remains important to debunk them, but it is also worth critically reflecting on the motivations behind Senator Warren’s proposals.
The idea that optimizing competition is a way of optimizing capitalism is as old as the writings of Adam Smith. Improving competition as an object of regulatory reform can conveniently appear reasonable and radical depending on the audience, with the overriding effect of inoculating the electorate against considering more radical alternatives. Facebook co-founder Chris Hughes, who has since left the company, recently penned an op-ed in the New York Times with the headline “It’s Time to Break Up Facebook.” Hughes encouraged readers to “imagine a competitive market in which they could choose among one network that offered higher privacy standards, another that cost a fee to join but had little advertising and another that would allow users to customize and tweak their feeds as they saw fit.” This idealized vision of technology capitalism is unrealistic, and there are good reasons why privacy should not be only for those who can afford it.
However, both Senator Warren and Hughes are correct in their claim that enormous companies like Facebook are a serious threat to democracy. The Sherman Act, the key piece of antitrust legislation in U.S. history, was not just about markets but about power. Legal scholar Lina M. Khan argues that antitrust law was traditionally understood not just in economic terms but in political ones. Legislators were animated by an understanding that the “concentration of economic power also consolidates political power,” she writes. Monopolies that vest control of markets in a single person create “a kingly prerogative, inconsistent with our form of government,” declared Senator Sherman in 1890 when he proposed the bill that would become his eponymous act. “If anything is wrong this is wrong. If we will not endure a king as a political power we should not endure a king over the production, transportation, and sale of any of the necessaries of life.” Khan has observed that interpretations of antitrust law over the last half century have given ever more weight to consumer welfare, often understood in the form of lower prices. Because platforms typically offer their services for free, platform monopolies fall outside the frame of antitrust protection in its modern iteration.
For example, at the recent Facebook developer conference F8 the presentations were focused on getting people onto Facebook-owned apps (including Instagram and WhatsApp) and making it so they never need to leave. Facebook wants us to buy things, find a date, and apply for a job without ever leaving Facebook. If Zuckerberg gets his way, and it is hard to think that he will not, users will also be able to pay for things with Facebook’s cryptocurrency. (With a potential market of almost three billion users, this could easily become the largest traded currency in the world.) This corporate domination strategy is about creating a private version of the web, where a significant portion of our online lives is mediated through a company. “In a lot of ways Facebook is more like a government than a traditional company,” Zuckerberg has said. A lot of ways, except that, critically, its constituents are disenfranchised. These are the foundations of corporate totalitarianism, where billions of people are made subservient to the whims of a boardroom dictator.
Within the current mainstream proposals to tackle this problem with antitrust law, there is a tendency toward crude nationalism. Senator Warren claims that her proposals “will allow existing big tech companies to keep offering customer-friendly services, while promoting competition, stimulating innovation in the tech sector, and ensuring that America continues to lead the world in producing cutting-edge tech companies.” Other commentators have raised a comparison with China, arguing that Facebook, for all its flaws, evinces a laudable inclination toward respecting freedom of speech when compared to its Chinese counterparts. Such thinking is a culturally relativist electric dream—American capitalism propagates its own tolerance for cruelty and indifference to suffering, notwithstanding its formal commitment to liberal rights like freedom of speech. That said, it remains possible for social democracies to make demands on platform monopolies without falling into nationalist tropes. An international user base could be the foundation for greater internationalist solidarity, rather than a retreat to chauvinist attempts at dominance.
One possible alternative is to consider socialization or nationalization of major platforms. The centralization of users is a key feature of a successful platform like Facebook, but now that the technology has been built and there is a critical mass of users, it is possible to argue that the benefits of public ownership might outweigh those of private ownership. It is possible to imagine a process whereby users are given control, like shareholders in a company, to appoint people to run the enterprise, or alternatively, an accountable authority of some description becomes responsible for managing the platform, like a public broadcaster.
Government procurement practices could be another way to undermine monopolies and clear space for newcomers with alternative approaches. Imagine, for example, that software products were required to meet certain ethical and open source design criteria before being used by public bodies. This could foster a culture of collaboration and keep the internet open, bringing down the walls of proprietary gardens like Facebook.
These approaches have the potential to open up space for thinking about technological development differently. Imagine if the web were less about titans of industry jostling for domination of the market, and more about improving public participation, inclusion, and community organizing. Such revolutionary ideas from our past have renewed potential in the digital age. Which leads us to the one key base of power over technology capitalism we have yet to consider: workers.
Big Tech’s Gravediggers
“Labor produces for the rich wonderful things—but for the worker it produces privation,” wrote Karl Marx. “It replaces labor by machines—but some of the workers it throws back to a barbarous type of labor, and the other workers it turns into machines. It produces intelligence—but for the worker idiocy, cretinism.” Tech workers have routinely been stereotyped as elitist bros, but this is now starting to change. With recent rumblings in the industry, it’s starting to look a little less Mark Zuckerberg, a little more Marx. This radicalization within the belly of the beast is significant. Blacksmiths make swords, and they also have the capacity to beat them into ploughshares.
Organized workers have a vital role to play in holding their employers politically accountable. They often understand their industry’s power to influence public policy better than outsiders do, and they have access to information about corporate behavior that is not available to others. For these reasons, workers are often some of the first people to raise concerns about corporate conduct, especially when it sits at odds with a company’s public image.
The Google Walkout in November 2018 highlighted this potential, with thousands of workers abandoning their desks in a globally coordinated action to protest the company’s handling of sexual harassment claims, gender inequality, and systemic racism. The lightning rod was a New York Times story about payments made to former senior executive Andy Rubin upon his departure from the firm, despite the company finding that sexual misconduct claims against him were credible. Google’s chief executive, Sundar Pichai, argued that the company had taken a “hard line” on sexual misconduct. The behavior of his workers exposed the falsity of this corporate image.
These workers are not going away. The Tech Workers Coalition was established in 2015, and one of its aims is to ensure that debates about technology are not conducted solely on terms dictated by the captains of industry. “We want to give a voice to tech workers as a separate entity from their companies and their corporate PR, as often rank-and-file ‘techies’ are lumped in with the CEOs and entrepreneurs of the industry,” said an organizer with the coalition, Ares Geovanos. Nor is this movement confined to the United States. India has seen a flourishing of worker radicalism in recent times; in Brazil an organization called Infoproletários is growing, as are unions organizing digital workers in the United Kingdom.
Events like the Google Walkout highlight the challenges of cultivating a diverse workforce, and organized workers are one of the few forces capable of generating the pressure necessary to meet them. Increasing their ranks remains a critically important objective, given that these are the people who will build the technology used by billions. But this organizing matters in other ways too. It opens up space to talk about other kinds of ethical considerations in the development of digital technology. Already this is happening, with protests around military uses of technology at Google and the scourge of fake news at Facebook.
This kind of organizing is the latest iteration of a long tradition of expansive, political unionism. African American slaves abandoned plantations during the Civil War and found other ways to limit their productive work in order to deliberately undermine the Southern cause. The Chartists in Britain advocated for political reform and the expansion of the franchise. In France in May 1968, workers went on a general strike in solidarity with protesting students. Australian workers rallied and conducted widespread boycotts against Indonesian industry in solidarity with East Timor in 1999. Industrial action for political objectives has a long and rich tradition throughout global history, and workers today are drawing upon it in the heart of technology capitalism.
Their work has implications for us all. Ethical design considerations can and should serve as an industrial and political organizing tool. “Technological professionals are the first, and last, lines of defense against the misuse of technology,” argues Cherri M. Pancake, president of the world’s largest organization of computer scientists and engineers, the Association for Computing Machinery. In 2018, the organization published an updated code of ethics, which requires developers to identify possible harmful side effects or the potential for misuse of their work, consider the needs of a diverse set of users, and take special care to avoid the disenfranchisement of groups of people. In feedback, the ACM received this comment from a young programmer: “Now I know what to tell my boss if he asks me to do something like that again.”
Workers in the technology industry are some of the best placed to identify problems with their work before they cause harm to others. They also have the power to do something about it. Their efforts to organize, both to improve their immediate conditions and to improve their industry more generally, serve as a bulwark against the predatory practices of technology capitalism.
At present, the development of technology is oriented largely around the pursuit of profit. The digital revolution holds immense potential to help us address some of the key problems facing society, like climate change, the democratic deficit, and wealth inequality. But it will only be put to such uses if we can wrest power away from executives and investors, who view human ingenuity and hard work as raw material for exploitation rather than common resources for collaborative problem solving. There are sources of power at our disposal. This is our chance to use them.