The Taming of Tech Criticism

BOOK REVIEWED
The Glass Cage: Automation and Us, by Nicholas Carr, W. W. Norton, $26.95

What does it mean to be a technology critic in today’s America? And what can technology criticism accomplish? The first question seems easy: to be a technology critic in America now is to oppose that bastion of vulgar disruption, Silicon Valley. By itself, however, this opposition says nothing about the critic’s politics—an omission that makes it all the more difficult to answer the second question.

Why all the political diffidence? A critical or oppositional attitude toward Silicon Valley is no guarantee of the critic’s progressive agenda; modern technology criticism, going back to its roots in Germany at the turn of the twentieth century, has often embraced conservative causes. It also doesn’t help that technology critics, for the most part, make a point of shunning political categories. Instead of the usual left/right distinction, they are more comfortable with the humanist/anti-humanist one. “What if the cost of machines that think is people who don’t?”—a clever rhetorical question posed by the technology author George Dyson a few years ago—nicely captures these sorts of concerns. The “machines” in question are typically reduced to mere embodiments of absurd, dehumanizing ideas that hijack the minds of poorly educated technologists; the “humans,” in turn, are treated as abstract, ahistorical émigrés to the global village, rather than citizen-subjects of the neoliberal empire.

Most contemporary American critics of technology—from Jaron Lanier to Andrew Keen to Sherry Turkle—fall into the cultural-romantic or conservative camps. They bemoan the arrogant thrust of technological thinking as it clashes with human traditions and fret over what an ethos of permanent disruption means for the configuration of the liberal self or the survival of its landmark institutions, from universities to newspapers. So do occasional fellow travelers who write literary essays or works of fiction attacking Silicon Valley—Jonathan Franzen, Dave Eggers, Zadie Smith, and Leon Wieseltier have all penned passionate tracts that seek to defend humanistic values from the assault of technology. They don’t shy away from attacking Internet companies, but their attacks mostly focus on the values and beliefs of the companies’ founders, as if the tech entrepreneurs could simply be talked out of the disruption that they are wreaking on the world. If Mark Zuckerberg would just miraculously choose a tome by Isaiah Berlin or Karl Kraus for his ongoing reading marathon, everything could still go back to normal.

Meanwhile, a more radical strand of tech criticism, confined mostly to university professors, barely registers on the public radar. Those—like Robert McChesney or Dan Schiller or Vincent Mosco—who work on technology, media, and communications within Marxist analytical frameworks, hardly get any attention at all. The last radical critics to enrich the broader public debate on technology were probably Murray Bookchin and Lewis Mumford; for both, technology was a key site for struggle, but their struggles, whether for social ecology or against hierarchical bureaucracy, were not about technology as such.

That radical critique of technology in America has come to a halt is in no way surprising: it could only be as strong as the emancipatory political vision to which it is attached. No vision, no critique. Lacking any idea of how sensors, algorithms, and databanks could be deployed to serve a non-neoliberal agenda, radical technology critics face an unenviable choice: they can either stick with the empirical project of documenting various sides of American decay (e.g., revealing the power of telecom lobbyists or the data addiction of the NSA) or they can show how the rosy rhetoric of Silicon Valley does not match up with reality (thus continuing to debunk the New Economy bubble). Much of this is helpful, but the practice quickly encounters diminishing returns. After all, the decay is well known, and Silicon Valley’s bullshit empire is impervious to critique.

Why, then, aspire to practice any kind of technology criticism at all? I am afraid I do not have a convincing answer. If history has, in fact, ended in America—with venture capital (represented by Silicon Valley) and the neoliberal militaristic state (represented by the NSA) guarding the sole entrance to its crypt—then the only real task facing the radical technology critic should be to resuscitate that history. But this surely can’t be done within the discourse of technology, and given the steep price of admission, the technology critic might begin most logically by acknowledging defeat. Changing public attitudes toward technology—at a time when radical political projects that technology could abet are missing—is pointless. While radical thought about technology is certainly possible, the true radicals are better off theorizing—and spearheading—other, more consequential struggles, and jotting down some reflections on technology along the way.

The Self-Driving Critic

Nicholas Carr, one of America’s foremost technology critics, is far from acknowledging defeat of any sort—in fact, he betrays no doubts whatsoever about the relevance and utility of his trade. In his latest book, The Glass Cage, Carr argues that we have failed to consider the hidden costs of automation, that our penchant for delegating mundane tasks to technology is misguided, and that we must redesign our favorite technologies in such a way that humans take on more responsibility—of both the moral and perceptual varieties—for operating in the world.

Carr makes this case using his trademark style of analysis, honed in his previous book, The Shallows. Drawing on the latest findings in neuroscience and timeless meditations from various philosophers (Martin Heidegger stands next to John Dewey), he seeks to diagnose rather than prescribe. The juxtaposition of hard science and humanities is occasionally jarring: a deeply poetic section, which quotes Maurice Merleau-Ponty and Robert Frost, is abruptly interrupted to inform us that “a study of rodents, published in Science in 2013, indicated that the brain’s place cells are much less active when animals make their way through computer-generated landscapes than when they navigate the real world.”

The Glass Cage is subtitled “Automation and Us,” and Carr tries hard to direct his critique toward the process of automation rather than technology as such. His material, however, repeatedly refuses such framing. Consider just three of the many examples that appear under “automation”: the automation of driving via self-driving cars, the automation of facial recognition via biometric technologies, and the automation of song recognition via apps like Shazam, which identify a song after just a few seconds of “listening.” They do look somewhat similar, but differences abound as well. In the first example, the driver is made unnecessary; in the second example, technology augments human capacity to recognize faces; in the third example, we create a genuinely new ability, since humans can’t identify songs they don’t already know. Given such diversity, it’s not obvious why automation—rather than, say, augmentation—is the right framework to understand these changes. What are we automating with the song identification app?

Carr’s basic premise is sound: a little bit of technology and automation can go a long way in enabling human emancipation but, once used excessively, they might result in “an erosion of skills, a dulling of perceptions, and a slowing of reactions.” Not only would we lose the ability to perform certain tasks—Carr dedicates a whole chapter to studying how the introduction of near-complete automation to the flight deck has affected how pilots respond to emergencies—but we might also lose the ability to experience certain features of the world around us. GPS is no friend to flaneurs. “Spell checkers once served as tutors,” he laments. Now all we get is dumb autocorrect. Here is the true poet laureate of First World problems.

Carr doesn’t try very hard to engage his opponents. It’s all very well to complain about the inauthenticity of digital technology and the erosion of our cognitive and aesthetic skills, but it doesn’t take much effort to discover that the very same technologies are also widely celebrated for producing new forms of authenticity (hence the excitement around 3-D printers and the Internet of Things: finally, we are moving from the virtual to the tangible) and even new forms of aesthetic appreciation (the art world is buzzing about the emergence of “The New Aesthetic”—the intrusion of imagery inspired by computer culture into art and the built environment). Why is repairing a motorcycle deemed more pleasurable or authentic than repairing a 3-D printer?

Carr quickly runs into a problem faced by most other contemporary technology critics (the present author included): since our brand of criticism is, by its very nature, reactive—we are all prisoners of the silly press releases issued by Silicon Valley—we have few incentives to exit the “technological debate” and say anything of substance that does not already presuppose that all communications services are to be provided by the market. It’s as if, in articulating a program, Silicon Valley had also articulated all the possible counter-programs, defining a horizon of thought that even its opponents could never transcend.

As a result, Carr prefers to criticize those technologies that he finds troubling instead of imagining what an alternative arrangement—which may or may not feature the technology in question—might be like. His treatment of self-driving cars is a case in point. Carr opens the first chapter with a rumination on what it was like to drive a Subaru with manual transmission in his youth. He notes, with his usual nostalgic flair, how the automation of driving might eventually deprive us of important but underappreciated cognitive skills that are crucial to leading a fulfilling life.

This argument would make sense if the choice were between a normal car and a self-driving car. But are those really our only options? Is there any evidence that countries with excellent public transportation systems swarm with unhappy, mentally deskilled automatons who feel that their brains are underused as they board fully automated metro trains? One wonders if Nicholas Carr has heard of Denmark.

Note what Carr’s strand of technology criticism has accomplished here: instead of debating the politics of public transportation—a debate that should include alternative conceptions of what transportation is and how to pay for it—we are confronted with the need to compare the cognitive and emotional costs of automating the existing system (i.e., embracing the self-driving cars that Carr doesn’t like) with leaving it as it is (i.e., sticking with normal cars). Disconnected from actual political struggles and social criticism, technology criticism is just an elaborate but affirmative footnote to the status quo.

The latent conservatism of Carr’s approach is even more palpable when he writes about the automation of work. He starts from the depressing premise that we are all, somehow, born alienated, and that the best way for us to overcome this alienation is by . . . working. Carr draws on research in psychology—Mihaly Csikszentmihalyi’s notion of “flow” is crucial to his argument—to posit that challenging, engaging work does make us happier than we realize. Its absence, on the other hand, makes us depressed:

More often than not . . . our discipline flags and our mind wanders when we’re not on the job. We may yearn for the workday to be over so we can start spending our pay and having some fun, but most of us fritter away our leisure hours. We shun hard work and only rarely engage in challenging hobbies. Instead, we watch TV or go to the mall or log on to Facebook. We get lazy. And then we get bored and fretful. Disengaged from any outward focus, our attention turns inward, and we end up locked in what Emerson called the jail of self-consciousness. Jobs, even crummy ones, are “actually easier to enjoy than free time,” says Csikszentmihalyi, because they have the “built-in” goals and challenges that “encourage one to become involved in one’s work, to concentrate and lose oneself in it.”

Thus, as our work gets automated away, we are likely to get stuck with far too many unredeemed alienation coupons! Carr’s argument is spectacular in its boldness: work distracts us from our deeply alienated condition, so we have to work more and harder not to discover our deep alienation. For Carr, the true Stakhanovite, work is a much better drug than the soma of Huxley’s Brave New World.

As with the transportation example, something doesn’t quite add up here. Why should we take the status quo for granted and encourage citizens to develop a new ethic to deal with the problem? In the case of work, isn’t it plausible to assume that we’d get as much “flow” and happiness from doing other challenging things—learning a foreign language or playing chess—if only we had more free time, away from all that work?

Were he not a technology critic, Carr could have more easily accepted this premise. This might also have prompted him to join the long-running debate on alternative organizations of work, production, and life itself. Carr, however, expresses little interest in advancing this debate, retreating to the status quo again: work is there to be done, because under current conditions nothing else would deliver us as much spiritual satisfaction. To be for or against capitalism is not his game: he just comments on technological trends, as they pop out—in a seemingly automated fashion—of the global void known as history.

And since the march of that history is increasingly described with the depoliticized lingo of technology—“precariousness” turns into “sharing economy” and “scarcity” turns into “smartness”—technology criticism comes to replace political and social criticism. The usual analytical categories, from class to exploitation, are dropped in favor of fuzzier and less precise concepts. Carr’s angle on automated trading is concerned with what algorithms do to traders—and not what traders and algorithms do to the rest of us. “A reliance on automation is eroding the skills and knowledge of financial professionals,” he notes dryly. Only a technology critic—with no awareness of the actual role that “financial professionals” play today—would fail to ask a basic follow-up question: How is this not good news?

Nicholas Carr finds himself at home in the world of psychology and neuroscience, and the only philosophy he treats seriously is phenomenology; he makes only a cursory effort to think in terms of institutions, social movements, and new forms of representation—hardly a surprise given where he starts. Occasionally, Carr does tap into quasi-Marxist explanations, as when he writes, repeatedly, that technology companies are driven by money and thus are unlikely to engage in the kind of humanistic thought exercises that Carr expects of them.

But it’s hard to understand how he can square this realistic stance with his only concrete practical suggestion for human-centered automation: to push the designers of our technologies to embrace a different paradigm of ergonomic design, so that, instead of building services that would automate everything, they would build services that put some minor cognitive or creative burden on us, the users, thus extending rather than shrinking our intellectual and sensory experiences. Good news for you, office drones: your boring automated work will be made somewhat less boring by the fact that you’ll have to save the file manually by pressing a button—as opposed to having it backed up for you automatically.

This user-producer axis exhausts Carr’s political imagination. It also reveals the limitations of his techno-idealism, for his proposed intervention assumes, first, that today’s users prefer fully automated technologies because they do not know what’s in their best interest and, second, that these users can convince technology companies that redesigning their existing products along the lines of Carr’s suggestions would be profitable. For if Carr is sincere in his belief that technology companies are driven by profit, there’s no other way around it: he is either a cynic for advocating a solution that he knows wouldn’t work, or he really thinks that consumers can renounce their love of automation and demand something else from technology companies.

Carr firmly believes that our embrace of automation comes from confusion, infatuation, or laziness—rather than, say, necessity. “The trouble with automation,” he explains, “is that it often gives us what we don’t need at the cost of what we do.” In theory, then, we can all live without relying on the wonders of modern technology: we can cultivate our cognitive and aesthetic skills by ditching our GPS units, by cooking our own elaborate dishes, by making our own clothes, by watching our kids instead of relying on apps (au pairs are so last century). What Carr fails to mention is that all of these things are much easier to do if you are rich and have no need to work. Automation—of cognition, emotion, and intellect—is the intolerable price we have to pay for the growing corporatization of everyday life.

Thus, there’s a very sinister and disturbing implication to be drawn from Carr’s work—namely, that only the rich will be able to cultivate their skills and enjoy their lives to the fullest while the poor will be confined to mediocre virtual substitutes—but Carr doesn’t draw it. Here again we see what happens once technology criticism is decoupled from social criticism. All Carr can do is moralize and blame those who have opted for some form of automation for not being able to see where it ultimately leads us. How did we fail to grasp just how fun and stimulating it would be to read a book a week and speak fluent Mandarin? If Mark Zuckerberg can do it, what excuses do we have?

“By offering to reduce the amount of work we have to do, by promising to imbue our lives with greater ease, comfort, and convenience, computers and other labor-saving technologies appeal to our eager but misguided desire for release from what we perceive as toil,” notes Carr in an unashamedly elitist tone. Workers of the world, relax—your toil is just a perception! However, once we accept that there might exist another, more banal reason why people embrace automation, it’s not clear why automation à la Carr, with all its interruptions and new avenues for cognitive stimulation, would be of much interest to them: a less intelligent microwave oven is a poor solution for those who want to cook their own dinners but simply have no time for it. But problems faced by millions of people are of only passing interest to Carr, who is more preoccupied with the non-problems that fascinate pedantic academics; he ruminates at length, for example, on the morality of Roomba, the robotic vacuum cleaner.

Carr’s oeuvre is representative of contemporary technology criticism both in the questions it asks and in the issues it avoids. Thus, there’s the trademark preoccupation with design problems, and their usually easy solutions, but hardly a word on just why it is that startups founded on the most ridiculous ideas have such an easy time attracting venture capital. That this might have something to do with profound structural transformations in the American economy—e.g., its ever-expanding financialization—is not a conclusion that today’s technology criticism could ever reach.

From There and Thou to Here and Now

A personal note is in order, since in surveying the shortcomings of thinkers such as Nicholas Carr, I’m also all too mindful of how many of them I’ve shared. For a long time, I’ve considered myself a technology critic. Thus, I must acknowledge defeat as well: contemporary technology criticism in America is an empty, vain, and inevitably conservative undertaking. At best, we are just making careers; at worst, we are just useful idiots.

Since truly radical technology criticism is a no-go zone for anyone seeking a popular audience, all we are left with is debilitating faux radicalism. Some critics do place their focus squarely on technology companies, which gives their work the air of anti-corporate populism and, perhaps, even tacit opposition to the market. This, however, does not magically turn these thinkers into radicals.

In fact, what distinguishes radical critics from their faux-radical counterparts is the lens they use for understanding Silicon Valley: the former group sees such firms as economic actors and situates them in their historical and economic context, while the latter sees them as a cultural force, an aggregation of bad ideas about society and politics. Thus, while the radical critic quickly grasps that reasoning with these companies—as if they were just another reasonable participant in the Habermasian public sphere—is pointless, the faux-radical critic shows no such awareness, penning essay after essay bemoaning their shallowness and hoping that they can eventually become ethical and responsible.

In a sense, it’s just a continuation of the old battle between materialism and idealism. At the very start of my career as a technology critic, I fell into the idealistic trap, thinking that, with time, good ideas could crowd out bad ones. As Silicon Valley was extending its reach into domains that were only lightly touched by information technology—think of transportation, health, education—these fields were suddenly overflowing with half-baked, stupid, and occasionally dangerous ideas. Those ideas could and should be documented, studied, and opposed. This, I thought, was the true calling of the technology critic.

Serious technology criticism, I thought, could tie the tongues of our digital gurus, revealing their simplistic sloganeering for the cheap dross that it is. All that hankering for frictionlessness and eternal bliss, the cult of convenience and total transparency, the thoughtless celebration of self-reliance and immediacy: so much in Silicon Valley’s master plan smacked of teenage naiveté. Instead of waxing lyrical about the utility of apps—the bailiwick of conventional technology criticism—the technology critic could reveal the political and economic programs that they helped to enact. Thus, I thought, it was possible to be neither romantic nor conservative while keeping politics and economics front and center.

To pick an example from my own work: A smart trashcan that uploads snapshots of its contents to Facebook—yes, it exists—might be read as an experiment in getting our online friends to police our behavior. Or it might be read as an extension of political consumerism to the most banal domestic chores. Placed under the right theoretical lens, even mundane objects could help illuminate the contemporary condition. Moving between such objects and ideologies, the technology critic could reveal how important, critical questions are not being asked and how certain marginal interests are being sidelined. To recover these lost perspectives and continue a debate that would otherwise be closed prematurely: this is what the best kind of technology criticism could accomplish.

Well, goodbye to all that. Today, it’s obvious to me that technology criticism, uncoupled from any radical project of social transformation, simply doesn’t have the goods. By slicing the world into two distinct spheres—the technological and the non-technological—it quickly regresses into the worst kind of solipsistic idealism, paying far more attention to drummed-up, theoretical ideas about technology than to real struggles in the here and now.

In a nutshell, the problem is this: given enough time, a skilled technology critic could explain virtually anything, simply by assuming that somebody, somewhere, has confused ideas about technology. That people have confused ideas about technology might occasionally be the case, but it’s a case that ought to be made, never taken for granted. The existence of Facebook-enabled trashcans does not necessarily mean that the people building and using them suffer from a severe form of technological false consciousness. Either way, why assume that their problems can be solved by poring over the texts of some ponderous French or German philosopher?

Alas, the false consciousness explanation is the kind of low-hanging fruit that no technology critic wants to pass up, as it can magically transport us from the risky fields of politics and economics to the safer terrain of psychology and philosophy. It’s so much easier to assume that those trashcans exist due to humanity’s inability to peruse Heidegger and Merleau-Ponty than to investigate whether the inventors in question simply tapped into available subsidies from, say, the European Commission.

Such investigations are messy and might eventually prompt uncomfortable questions—about capital, war, the role of the state—that are better left unasked, at least if one doesn’t want to risk becoming that dreadful other type of critic, the radical. It’s much safer to interpret every act or product as if it stemmed from some erroneous individual or collective belief, some flawed intellectual outlook on technology.

Take our supposed overreliance on apps, the favorite subject of many contemporary critics, Carr included. How, the critics ask, could we be so blind to the deeply alienating effects of modern technology? Their tentative answer—that we are simply lazy suckers for technologically mediated convenience—reveals many of them to be insufferable, pompous moralizers. The more plausible thesis—that the growing demands on our time probably have something to do with the uptake of apps and the substitution of the real (say, parenting) with the virtual (say, the many apps that allow us to monitor kids remotely)—is not even broached. For to speak of our shrinking free time would also mean speaking of capital and labor, and this would take the technology critic too far away from “technology proper.”

It’s the existence of this “technology proper” that most technology critics take for granted. In fact, the very edifice of contemporary technology criticism rests on the critic’s reluctance to acknowledge that every gadget or app is simply the end point of a much broader matrix of social, cultural, and economic relations. And while it’s true that our attitudes toward these gadgets and apps are profoundly shaped by our technophobia or technophilia, why should we focus on only the end points and the behaviors that they stimulate? Here is one reason: whatever attack emerges from such framing of the problem is bound to be toothless—which explains why it is also so attractive to many.

If technology criticism were solely about aesthetic considerations—Is this gadget well made? Is this app beautiful?—such theoretical narrowness would be tenable. But most technology critics find themselves in a double bind. They must go beyond the aesthetic dimension—they are decidedly not mere assessors of design—but they cannot afford to reveal the existence of the rest of the matrix, for that, too, risks turning them into something else entirely.

Their solution is to operate with real technological objects—these are the gadgets and apps we see in the news—but to treat the users and manufacturers of those objects as imaginary, theoretical constructs. They are “imaginary” and “theoretical” inasmuch as their rationale is imposed on them by the explanatory limitations of technology criticism rather than grasped ethnographically or analytically. In the hands of technology critics, history becomes just a succession of wise and foolish ideas about technology; there are usually no structures—social or economic ones—that get in the way.

Unsurprisingly, if one starts by assuming that every problem stems from the dominance of bad ideas about technology rather than from unjust, flawed, and exploitative modes of social organization, then every proposed solution will feature a heavy dose of better ideas. They might be embodied in better, more humane gadgets and apps, but the mode of intervention is still primarily ideational. The rallying cry of the technology critic—and I confess to shouting it more than once—is: “If only consumers and companies knew better!” One can tinker with consumers and companies, but the market itself is holy and not to be contested. This is the unstated assumption behind most popular technology criticism written today.

Well, suppose consumers and companies did know better. This would mean, presumably, that consumers would change their behavior and companies would change their products. The latter does not look very promising. At best, we might get the technological equivalent of fair-trade lattes on sale at Starbucks, a modern-day indulgence for the rich and the doubtful.

The first option—getting consumers to change their behavior—is much more plausible. But if the problem in question wasn’t a technology problem to begin with, why address it at the level of consumers and not, say, politically at the level of citizens and institutions? The lines demarcating the technological and the political cannot be drawn by those forever confined to think within the technological paradigm; one needs to exit the paradigm to get a glimpse of both alternative explanations and the political costs of framing the issue through the lens of technology.

Thus, technology critics of the romantic and conservative strands can certainly tell us how to design a more humane smart energy meter. But to decide whether smart energy meters are an appropriate response to climate change is not in their remit. Why design them humanely if we shouldn’t design them at all? That question can be answered only by those critics who haven’t yet lost the ability to think in non-market and non-statist terms. Technological expertise, in other words, is mostly peripheral to answering this question.

But most of our technology critics are not really interested in answering such questions anyway. Liberated from any radical inclinations, they take the institutional and political reality as it is, but, sensing that something is amiss, they come up with an ingenious solution: Why not ask citizens to internalize the costs of all the horror around them, for that horror probably stems from their lack of self-control or their poor taste in gadgets? It is in this relegation of social and political problems solely to the level of the individual (there is no society, there are only individuals and their gadgets) that technology criticism is the theoretical vanguard of the neoliberal project.

Even if Nicholas Carr’s project succeeds—i.e., even if he does convince users that all that growing alienation is the result of their false beliefs in automation and even if users, in turn, convince technology companies to produce new types of products—it’s not obvious why this should be counted as a success. It’s certainly not going to be a victory for progressive politics (Carr is extremely murky about his own politics). Information technology has indeed become the primary means for generating the kind of free time that, in the not-so-distant past, was at the heart of many political battles and was eventually enshrined in laws (think of limits on daily work hours, guaranteed time off, the free weekend). Such political battles are long gone.

In the past, it was political institutions—trade unions and leftist parties—that workers had to thank for the limited breaks they got from work. Today, these tasks fall squarely on technology companies: the more Google knows about you, the more time you will save every day, as it personalizes everything and even completes some tasks (like retrieving boarding passes) on your behalf. At best, Carr’s project might succeed in producing a different Google. But its lack of ambition is itself a testament to the sad state of politics today. It’s primarily in the marketplace of technology providers—not in the political realm—that we seek solutions to our problems. A more humane Google is not necessarily a good thing—at least, not as long as the project of humanizing it distracts us from the more fundamental political tasks at hand. Technology critics, however, do not care. Their job is to write about Google.