
Excavating the Internet

Is digitally connected life an age-old idea?


The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning by Justin E. H. Smith. Princeton University Press, 208 pages.

Towards the end of Don Quixote, the knight-errant, already famous for his misadventures, arrives in Barcelona and is invited to stay in the house of a wealthy man, Don Antonio Moreno. Don Quixote’s host says that he has something to show him: a bronze head, “fabricated and made by one of the greatest enchanters and wizards the world has ever seen . . . which has the property and virtue of responding to any question spoken into its ear.” When the guests ask the head questions, it answers back: it tells them how many people are in the room, dispenses advice about virtue, and offers vague predictions of the future. After this first astonishment of the inanimate come to life, the secret comes out: the head is hollow, and Don Antonio’s nephew, “an astute and clever student,” is in a chamber underneath, supplying the answers the questioners would like to hear. Cervantes, while inventing the modern novel in all its encyclopedic richness, also becomes one of the first tech skeptics.

Has the internet been with us all along, waiting for technology advanced enough to deliver on its idea? Cervantes’s talking head is a parody of automata that appear in works dating back to antiquity. In particular, he draws on the “brazen head” supposedly designed in the thirteenth century by the philosopher and alchemist Roger Bacon. By answering yes-or-no questions, the head served as “a medieval Siri, if you will,” as historian of science and philosopher Justin E.H. Smith facetiously puts it in his book The Internet Is Not What You Think It Is. The bulk of current thinking about the internet is future-facing, swinging between techno-optimism and techno-pessimism: Are we on the road to the bliss of the Singularity, or to abysses as seen in The Matrix? In his peripatetic survey, Smith asks the reader to take a harder look backward. Rather than forecast, he is a genealogist looking to unearth precedents. Although he agrees the present landscape is dire, there is a certain deadpan cheer in insisting there is nothing truly new under the sun. Smith brings to the conversation a historian’s confidence that “no predicament is so bleak as to be unimprovable by a historical investigation into its origins and development.”


The book is prominently subtitled: A History, a Philosophy, a Warning. While the first two elements are relatively clear (though perhaps unlikely to be delivered in fewer than two hundred pages), the third is murky. What exactly are we being warned against? What exactly did we think the internet was? The phrase itself has been rhetorical quicksand for as long as people have been trying to theorize it. Evgeny Morozov, perhaps the last decade’s most prominent tech critic, argues that “internet” ultimately obscures more than it reveals, lumping together everything from social networks to system architecture—and thereby saying nothing. Exactly because of this, the internet has been susceptible to grand pronouncements since the early days of cyberculture: the internet is a third industrial revolution, a transformation of everyday life, a way to “bring the world closer together,” as Mark Zuckerberg would have it. The utopian feeling that for years colored big-picture thinking about the internet has become background noise, diluted to the point of easy parody in the empty refrain of HBO’s comedy Silicon Valley: “Making the world a better place.”

This is not to say that the impact of the internet has been overstated, but that for a long time, a starry-eyed imprecision let certain realities slip. Then, in the middle of the last decade, uncertainty reemerged in the popular consciousness. The shift was gradual, though one might point to Edward Snowden’s revelation of the NSA’s PRISM surveillance program, along with the controversies of the 2016 presidential election. By 2018, in the fallout of the Cambridge Analytica scandal, best remembered for Mark Zuckerberg’s glassy-eyed appearance in front of Congress to explain the data breach, the old idea of the internet as a place of liberation had been permanently tarnished. There have always been naysayers to the techno-utopians, but their ranks had swelled into visibility—although their pronouncements often contained the same grandiose tones used by those they opposed. According to the technologist Jaron Lanier, we’re at risk of becoming a gadget: “A new generation has come of age with a reduced expectation of what a person can be, and of who each person might become.” In the words of the artist and writer James Bridle, we are entering a New Dark Age, an era in which we will only have faith in the algorithms that control our lives. Another critic, Richard Seymour, characterizes the new outlook of social media in his book The Twittering Machine with characteristic grimness:

The benefits of anonymity became the basis for trolling, ritualized sadism, vicious misogyny, racism and alt-right subcultures. Creative autonomy became “fake news” and a new form of infotainment. Multitudes became lynch mobs, often turning on themselves. Dictators and other authoritarians learned how to use Twitter and master its seductive language games, as did the so-called Islamic State whose slick online media professionals affect mordant and hyper-aware tones. The United States elected the world’s first “Twitter president.” Cyber-idealism became cyber-cynicism.

In one of the most widely discussed recent books trying to frame the new tech regime, retired Harvard professor Shoshana Zuboff popularized the term “surveillance capitalism” to describe the current system of economic domination born in and exported from Silicon Valley—briefly, one in which the internet’s monopoly platforms like Facebook and Google harvest the maximum of user information and extract from it the maximum revenue. As this view solidifies into a commonplace, tech-negative critiques have flourished in, though are not exclusive to, leftist circles and publications. And while these critiques have done much to clarify the challenges facing those who really would like to make the world a better place, there is also a lingering sense of futility: despite those who sound the alarm, the pace of acceleration only increases. What historian and philosopher Lewis Mumford described as the “magnificent bribe” of technology—the trade of autonomy for convenience—continues to lock us into a way of life from which we now find it impossible to extricate ourselves.

Smith begins in a similar spirit, writing that, fundamentally, today’s internet—by which he largely means social media—limits our freedom: “If we could put it on trial, its crime would be a crime against humanity.” He spotlights the groupthink of social media platforms and their failed promises of political change. He writes that the internet is addictive, that it destroys the powers of attention that are central to human flourishing. While his descriptions of online life’s ills are relatively nuanced, there’s little that hasn’t been examined at length in many of the previous works mentioned. However, Smith’s goal is to be critical without joining the ranks of the pessimists; he is “concerned to show that the greatest problem is not one of unstoppable technological determinism, or of a determinism that can only be countered by ‘flipping the off switch,’ but rather in clarifying the nature of the force with which we are contending, and understanding the limits of thinking that proceeds by analogy between human beings and machines.” Clarifying, for Smith, means introducing historical context—instead of radical ruptures, he tells us that the current predicament is part of a story as long as technological change, or human desiring, itself.

What can we learn from this historically minded, “ontological” approach? Smith has assembled a cabinet of curiosities that enlarge the internet’s ancestry. His scope is vast, beginning from the informational signals exchanged by plants and animals and continuing to early historical visions of interconnection, such as the second-century novel A True History, which conceives of a device on the moon that might hear everything that goes on down on earth. In one delightfully odd case, he describes Jules Allix, a nineteenth-century Frenchman who evangelized the technological marvel of the “snail telegraph,” “that is, a device that would communicate with another paired device at a great distance thanks to the power of what Allix called ‘escargotic commotion.’” Allix’s snail mail was a fraud, but Smith argues that even in his failure, there was a seed of invention, a desire to bind all things together that had to wait another hundred-plus years to be realized.

It’s not impossible to believe that Mark Zuckerberg has something in common with a snail-telecom charlatan, but the breadth of Smith’s argument can occasionally feel overextended. While it’s entertaining to roam with Smith across the centuries, and there’s something tantalizing, almost spiritual, in the idea that the concept behind the internet is somehow innate, Smith also performs a curious bit of argumentative jiu-jitsu to make his point. The internet, he writes, is an “excrescence” of human nature itself, “the most recent permutation of a complex of behaviors that is as deeply rooted in who we are as a species as anything else we do.” If fungi or moths have communicated for millennia across great distances, Smith wonders, why shouldn’t humans share in their desire? There is something tautological about this line of thought, a determinism that creeps in and naturalizes the present. By his own logic, it becomes hard to understand why the recent ugly spell of online life should be considered anything more than a blip on the radar—we’re asked to believe that all of human history led us to this moment, but also that things fell off sometime around 2011, as Smith claims. The book also conflates this “perennialist” vision of the internet, as Smith calls it, with active debates, the outcomes of which will have very different consequences for all users. Men dreamed of flight long before the invention of airplanes—perhaps Icarus is essential to understanding the human longing to become airborne, but will he help us decrease the volume of air travel? The connective tissue is missing between Smith’s curious examples and the kind of public-intellectual influence he wishes to court (after all, a warning is usually intended to have an imminent effect). We do need to know what the internet is, or has been, but we also need to know what we want it to be.

Smith is on surer footing when analyzing particular examples of the proto-internet, as when he discusses the philosopher G.W. Leibniz, a polymath who, in addition to being one of calculus’s inventors, designed one of the first calculating machines. In Smith’s account of Leibniz, the philosopher’s machine is something deeply different from a mind—Leibniz believed that consciousness was fundamentally irreducible to mechanistic actions. Instead, the machine is fruitful precisely because of how unlike a mind it is: “The more drudging and uninteresting operations of the mind might be outsourced to machines: they’ll do the math and the analysis of arguments, so that we might ‘think big,’ contemplating ideas and synthesizing the results that machines give us into new and original arguments.” Today the difference between mind and machine is increasingly blurred—how often now do we think of one in terms of the other? Smith argues that a return to Leibniz’s hard distinction might alert us to a time when the two could clearly be seen as separate, and so better see how one might complement the other.


Metaphors take a central place in Smith’s story of the internet. When we liken the mind too much to a computer, the computer starts to seem like a better mind than any human one could ever be. He is not the first to point out metaphor’s importance in narratives of the digital age: Meghan O’Gieblyn’s God, Human, Animal, Machine draws detailed historical links between Christian theology and ideas of artificial intelligence and the Singularity; Markus Krajewski’s The Server traces the history of work performed for masters being transferred from human subordinates all the way to unobserved computer programs. The internet is something so vast and so complex that it’s nearly impossible to talk about it without recourse to metaphor. But by repetition, the language of nets and webs (and the weaving they require) becomes naturalized, as does the technology itself—and the death of a metaphor can shut off our access to alternatives. Smith suggests that a renewed awareness of these terms could draw us away from short-term problems and help us see the bigger expanse. While the emphasis on metaphor feels intuitively correct, his account of what it means to engage with metaphor is strangely underdeveloped. At one point Smith suggests that “we are always free to trade in our metaphors when others come along that better satisfy our desire to make sense of things, or that simply fit better with the spirit of the times.” However, it’s not so obvious that we even have control over this vocabulary—whether we are the makers of our language, or whether our language unconsciously makes us, especially as a seemingly endless volume of it accrues and merges on the internet’s vast frontiers.

The idea that we need to change language to create change is, however necessary, a familiar cry, one often belying a reluctance to consider a more committed political program. Overall, the question of change—what meaningful change is and why it happens—bedevils Smith’s book. To some, his project may smack of “Whig history,” the tendency (usually applied disparagingly) to interpret history in the light of the present. If the “internet” was something we were always bound to create, then how could it have ever developed differently? Smith’s book is not advertised as a thoroughgoing history, but it’s worth noting the importance of more granular accounts—Joanne McNeil’s Lurking, for instance—in showing how different the internet has already been at various points in its brief lifespan. The scholar Michael D. Gordin has suggested that the role of the historian of science is to act not as the Whig, but instead as the “Tory”—that is, to conserve an understanding of differences between historical periods, so as to see the true diversity of human ways of life. To see difference across time is to see change, and to see change is to ask why events happened as they did (and, perhaps, how they might have happened otherwise). If nothing is fundamentally different, then the scope for action is diminished.

Smith’s form of cautious acceptance, his admission that “the present situation is intolerable, but there is no going back,” is indicative of thinking of the internet as a Platonic idea, not as something malleable: a product of choices by people, corporations, and governments. By contrast, the critic Jonathan Crary’s recent book Scorched Earth casts the “internet complex” as something fundamentally material and destructive in a resource-depleting sense, as it consumes vast amounts of energy and requires the toxic extraction of minerals and metals from around the world. Crary believes that the internet must, in essence, be turned off in order to imagine new forms of life to follow. While Smith is likely right that the off switch isn’t going to be pulled soon, his book ultimately obscures the fact that our technological relations are still choices, and that we have many of them left to make.