The Techies Who Said Sorry

Can the master's apps dismantle the master's house?

Who will help us fix our broken internet? Who will free us from the tyranny of the smartphone? Over the last year a cadre of tech experts has formed, armed with the knowledge of the industry’s profound failures—and how to fix them. Call them the penitent techies. Positioning themselves as would-be Cassandras, jaded by years in the Silicon trenches, these insiders decry the hold that screens maintain over our lives. They worry about divided attention, fractured relationships, and, since the 2016 election, the tearing of the fabric of democracy. They warn of mind control on a vast scale, billions subordinating their whims to the opaque decision-making of big tech’s proprietary algorithms. The message has resonated widely, especially in the Bay Area, where mindfulness retreats and digital sabbaths bloom like spring flowers. Even Mark Zuckerberg has committed to “fixing” Facebook this year.

Of course, as they sound their perfervid warnings, most of these industry partisans show little awareness that numerous academics, scholars, and activists—from major universities to the ACLU to lone voices in the digital wilderness—have been decrying the internet’s inequities for years. But perhaps we shouldn’t be too hard on the tech industry’s late arrivals to the shores of conscience. There’s plenty of room here. If these seasoned Valley veterans suddenly care about the problems permeating our digital economy, maybe it’s time to listen. So, what do they have to say?

The most prominent among this new wave of woke techies is probably Tristan Harris, a former Google product manager and design ethicist. Skipping around the media landscape from 60 Minutes to NPR to TED Talks, Harris has become the leading voice calling for what might be termed a more humanistic digital ecosystem. His foremost complaint is with how tech companies compete for, and control, human attention, promoting device addiction in the process. As Harris said last year in an interview with Wired, “the problem is the hijacking of the human mind.” Whereas some critics speak in vague bromides, Harris offers plain examples of features that embody his concerns: autoplaying videos, endless notifications, Snapchat “streaks,” alerts that someone has looked at your profile, and all the other unbidden features that keep you online and watching/clicking/consuming.

Not long ago, people voicing this kind of complaint were met with condescension, especially from the tech industry itself. It was thought of as an individual consumer’s problem—not an emblem of society-wide concern—if someone spent too much time on their phone. That seems to be changing. There’s an increasing sense that the social, mental, and economic costs of these systems are all too real. Sometimes it feels as if we are surrendering control over basic aspects of our lives. After all, when the CEO of Netflix, Reed Hastings, announces that one of his company’s greatest competitors is sleep, it’s easy to think that we are dealing with forces greater than simple human choice. These are powerful tools of coercion, whose negative externalities we’re only beginning to understand. And as Hastings implied of his battle against sleep, the big players like Netflix are winning.

This growing sense of alarm is finding its way to the managers of these platforms. Designers should consider the consequences of their creations, goes the new conventional wisdom, while those responsible for user data should think carefully about what data they need and how they’ll protect it. Maybe, they propose, a constant stream of notifications isn’t compatible with a settled mind. Along these lines, Harris and some confederates have established the Center for Humane Technology, a think tank dedicated to “reversing the digital attention crisis and realigning technology with humanity’s best interests.”

And what are humanity’s best interests? The Center for Humane Technology, like other similarly inspired ventures, displays some diagnostic aptitude. Ranging from AI to the influence of micro-targeting and customized advertisements, its website offers a useful primer on some of the problems facing the digital economy. But like other concerned techies, the Center suffers from a hopelessly narrow focus, bracketing off issues of digital coercion and control from the larger, structural forces of political economy. The problem, in short, is far bigger than any of these people is willing to admit. Rather than trim the edges here and there with a touch of regulation or a spot of design ethics, our broken panopticon needs a total overhaul, a full teardown and rebuild.

The fundamental problem is the system of economic exchange we’re dealing with, sometimes called surveillance capitalism. It’s surveillance capitalism that, by tracking and monetizing the basic informational content of our lives, has fueled the spectacular growth of social media and other networked services over the last fifteen years. Personal privacy has been annihilated, and power and money have concentrated in the hands of whoever owns the most sophisticated machine to collect and parse consumer data. Because of the logic of network effects—according to which services increase in value and utility as more people use them—a few strong players have consolidated their control over the digital economy and show little sign of surrendering it.
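To see why that logic is so hard to escape, consider Metcalfe’s law, a back-of-the-envelope heuristic offered here purely as illustration (it is not one the penitent techies themselves invoke): a network’s notional value scales with the number of possible pairwise connections among its users, and therefore grows roughly as the square of the user count.

\[
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2},
\qquad
\frac{V(2n)}{V(n)} \;=\; \frac{2(2n-1)}{n-1} \;\longrightarrow\; 4 \quad (n \to \infty).
\]

By this crude accounting, doubling a platform’s user base roughly quadruples its value, and an incumbent ten times the size of a challenger offers a hundred times the connective pull, which is why competition on features alone rarely dislodges one.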

It wasn’t supposed to be this way. For years, tech executives and data scientists maintained the pose that a digital economy run almost exclusively on the parsing of personal data and sensitive information would not only be competitive and fair but would somehow lead to a more democratic society. Just let Facebook and Google, along with untold other players large and small, tap into the drip-drip of personal data following you around the internet, and in return you’ll get free personalized services and—through an alchemy that has never been adequately explained—a more democratized public sphere.

While these promises provided the ideological ballast for the tech revolution of the last decade or two, they turned out to be horribly wrong. There is nothing neutral, much less emancipatory, about our technological systems or the data sloshing through them. They record and shape the world in powerful, troubling ways. The recent clutch of stories in the New York Times, the Guardian, and elsewhere about Cambridge Analytica, the favored data firm of the Trump campaign, provides a sobering example of how personal data can be used to manipulate voter populations. This essential truth has been known at least since 2012, when a University of California, San Diego study found that a few nudges on Facebook appreciably increased voter turnout. From there, it’s only a small jump to isolating and bombarding millions of potential Trump voters with customized appeals, as Cambridge Analytica did.

As some critics have noted, the Trump/Cambridge Analytica story is less about a few rogue data scientists getting hold of millions of Facebook users’ data and more about Facebook being used exactly as it’s designed: pairing people with ads based on their behavior. While its acquisition of user data may have violated Facebook’s terms and conditions, Cambridge Analytica did what countless advertising firms, not to mention Facebook itself, have been doing for years: cataloging internet users and appealing to them with specialized ads. This is what the system was designed to do. This is the logical end-product of surveillance capitalism. To fix this state of affairs, we don’t need, as so many newly minted design ethicists are now arguing, to build a better mousetrap. We need to demolish the house entirely and try to imagine a new, more just world to live in.