While the brightest minds of Silicon Valley are “disrupting” whatever industry is too crippled to fend off their advances, something odd is happening to our language. Old, trusted words no longer mean what they used to mean; often, they don’t mean anything at all. Our language, much like everything these days, has been hacked. Fuzzy, contentious, and complex ideas have been stripped of their subversive connotations and replaced by cleaner, shinier, and emptier alternatives; long-running debates about politics, rights, and freedoms have been recast in the seemingly natural language of economics, innovation, and efficiency. Complexity, as it turns out, is not particularly viral.
Fortunately, Silicon Valley, that never-drying well of shoddy concepts and dubious paradigms—from wiki-everything to i-something, from e-nothing to open-anything—is ready to help. Like a good priest, it’s always there to console us with the promise of a better future, a glitzier roadmap, a sleeker vocabulary. This is not to deny that many of our latest gadgets and apps are fantastic. But to fixate on technological innovation alone is to miss the more subtle—and more consequential—ways in which a clique of techno-entrepreneurs has hijacked our language and, with it, our reason. In the last decade or so, Silicon Valley has triggered its own wave of linguistic innovation, a wave so massive that a completely new way to analyze and describe the world—a silicon mentality of sorts—has emerged in its wake. The old language has been rendered useless; our pre-Internet vocabulary, we are told, needs an upgrade.
Silicon Valley has always had a thing for priests; Steve Jobs was the cranky pope it deserved. Today, having mastered the art of four-hour workweeks and gluten-free lunches in outdoor cafeterias, our digital ministers are beginning to preach on subjects far beyond the funky world of drones, 3-D printers, and smart toothbrushes. That we would eventually be robbed of a meaningful language to discuss technology was entirely predictable. That the conceptual imperialism of Silicon Valley would also pollute the rest of our vocabulary wasn’t.
The enduring emptiness of our technology debates has one main cause, and his name is Tim O’Reilly. The founder and CEO of O’Reilly Media, a seemingly omnipotent publisher of technology books and a tireless organizer of trendy conferences, O’Reilly is one of the most influential thinkers in Silicon Valley. Entire fields of thought—from computing to management theory to public administration—have already surrendered to his buzzwordophilia, but O’Reilly keeps pressing on. Over the past fifteen years, he has given us such gems of analytical precision as “open source,” “Web 2.0,” “government as a platform,” and “architecture of participation.” O’Reilly doesn’t coin all of his favorite expressions, but he promotes them with religious zeal and enviable perseverance. While Washington prides itself on Frank Luntz, the Republican strategist who rebranded “global warming” as “climate change” and turned “estate tax” into “death tax,” Silicon Valley has found its own Frank Luntz in Tim O’Reilly.
Tracing O’Reilly’s intellectual footprint is no easy task, in part because it’s so vast.[*] Through his books, blogs, and conferences, he’s nurtured a whole generation of technology thinkers, from Clay Shirky to Cory Doctorow. A prolific blogger and a compulsive Twitter user with more than 1.6 million followers, O’Reilly has a knack for writing articulate essays about technological change. His essay on “Web 2.0” elucidated a basic philosophy of the Internet in a way accessible to both academics and venture capitalists; it boasts more than six thousand references on Google Scholar—not bad for a non-academic author. He also invests in start-ups—the very start-ups that he celebrates in his public advocacy—through a venture fund, which, like most things O’Reilly, also bears his name.
A stylish and smooth-talking self-promoter with a philosophical take on everything, O’Reilly is the Bernard-Henri Lévy of Route 101, the favorite court philosopher of the TED elites. His impressive intellectual stature in the Valley can probably be attributed to the simple fact that he is much better read than your average tech entrepreneur. His constant references to the learned men of yesteryear—from “Archilochus, the Greek fabulist” to Ezra Pound—make him stand out from all those Silicon Valley college dropouts who don’t know their Plotinus from their Pliny. A onetime recipient of a National Endowment for the Arts grant to translate Greek fables—“Socrates is [one of] my constant companions”—he has the air of a man ready to grapple with the Really Big Questions of the Universe (his Harvard degree in classics certainly comes in handy). While he recently told Wired that he doesn’t “really give a shit if literary novels go away” because “they’re an elitist pursuit,” O’Reilly is also quick to acknowledge that novels have profoundly shaped his own life. In 1981 the young O’Reilly even wrote a reputable biography of the science fiction writer Frank Herbert, the author of the Dune series, in which he waxes lyrical about Martin Heidegger and Karl Jaspers.
Alas, O’Reilly and the dead Germans parted ways long ago. These days, he’s busy changing the world; any list of unelected technocrats who are shaping the future of American politics would have his name at the very top. A Zelig-like presence on both sides of the Atlantic, he hobnobs with government officials in Washington and London, advising them on the Next Big Thing. O’Reilly’s thinking on “Government 2.0” has influenced many bureaucrats in the Obama administration, particularly those tasked with promoting the amorphous ideal of “open government”—not an easy thing to do in an administration bent on prosecuting whistle-blowers and dispatching drones to “we-can’t-tell-you-where-exactly” destinations. O’Reilly is also active in discussions about the future of health care, having strong views on what “health 2.0” should be like.
None of this is necessarily bad. On first impression, O’Reilly seems like a much-needed voice of reason—even of civic spirit—in the shallow and ruthless paradise-ghetto that is Silicon Valley. Compared to ultra-libertarian technology mavens like Peter Thiel and Kevin Kelly, O’Reilly might even be mistaken for a bleeding-heart liberal. He has publicly endorsed Obama and supported many of his key reforms. He has called on young software developers—the galley slaves of Silicon Valley—to work on “stuff that matters” (albeit preferably in the private sector). He has written favorably about the work of little-known local officials transforming American cities. O’Reilly once said that his company’s vision is to “change the world by spreading the knowledge of innovators,” while his own personal credo is to “create more value than you capture.” (And he has certainly captured a lot of it: his publishing empire, once in the humble business of producing technical manuals, is now worth $100 million.) Helping like-minded people find each other, sharpen their message, form a social movement, and change the world: this is what O’Reilly’s empire is all about. Its website even boasts of its “long history of advocacy, meme-making, and evangelism.” Who says that spiritual gurus can’t have their own venture funds?
O’Reilly’s personal journey was not atypical for Silicon Valley. In a 2004 essay about his favorite books (published in Tim O’Reilly in a Nutshell, brought out by O’Reilly Media), O’Reilly confessed that, as a young man, he had “hopes of writing deep books that would change the world.” O’Reilly credits a book of science fiction documenting the struggles of a young girl against a corporate-dominated plutocracy (Rissa Kerguelen by F. M. Busby) with helping him abandon his earlier dream of revolutionary writing and enter the “fundamentally trivial business [of] technical writing.” The book depicted entrepreneurship as a “subversive force,” convincing O’Reilly that “in a world dominated by large companies, it is the smaller companies that keep freedom alive, with economics at least one of the battlegrounds.” This tendency to view questions of freedom primarily through the lens of economic competition, to focus on the producer and the entrepreneur at the expense of everyone else, shaped O’Reilly’s thinking about technology.
However, it’s not his politics that makes O’Reilly the most dangerous man in Silicon Valley; a burgeoning enclave of Randian thought, it brims with far nuttier cases. O’Reilly’s mastery of public relations, on the other hand, is unrivaled and would put many of Washington’s top spin doctors to shame. No one has done more to turn important debates about technology—debates that used to be about rights, ethics, and politics—into kumbaya celebrations of the entrepreneurial spirit while making it seem as if the language of economics was, in fact, the only reasonable way to talk about the subject. As O’Reilly discovered a long time ago, memes are for losers; the real money is in epistemes.
The Randian undertones in O’Reilly’s thinking are hard to miss, even as he flaunts his liberal credentials. “There’s a way in which the O’Reilly brand essence is ultimately a story about the hacker as hero, the kid who is playing with technology because he loves it, but one day falls into a situation where he or she is called on to go forth and change the world,” he wrote in 2012. But it’s not just the hacker as hero that O’Reilly is so keen to celebrate. His true hero is the hacker-cum-entrepreneur, someone who overcomes the insurmountable obstacles erected by giant corporations and lazy bureaucrats in order to fulfill the American Dream 2.0: start a company, disrupt an industry, coin a buzzword. Hiding beneath this glossy veneer of disruption-talk is the same old gospel of individualism, small government, and market fundamentalism that we associate with Randian characters. For Silicon Valley and its idols, innovation is the new selfishness.
O’Reilly got his start in business in 1978 when he launched a consulting firm that specialized in technical writing. Six years later, it began retaining rights to some of the manuals it was producing for individual clients and gradually branched out into more mainstream publishing. By the mid-1990s, O’Reilly had achieved some moderate success in Silicon Valley. He was well-off, having found a bestseller in The Whole Internet User’s Guide and Catalog and having sold the Global Network Navigator—possibly the first Internet portal to feature paid banner advertising (“the first commercial website” as O’Reilly describes it today)—to AOL.
It was the growing popularity of “open source software” that turned O’Reilly into a national (and, at least in geek circles, international) figure. “Open source software” was also the first major rebranding exercise overseen by Team O’Reilly. This is where he tested all his trademark discursive interventions: hosting a summit to define the concept, penning provocative essays to refine it, producing a host of books and events to popularize it, and cultivating a network of thinkers to proselytize it.
It’s easy to forget this today, but there was no such idea as open source software before 1998; the concept’s seeming contemporary coherence is the result of clever manipulation and marketing. Open source software was born out of an ideological cleavage between two groups that, at least before 1998, had been traditionally lumped together. In one corner stood a group of passionate and principled geeks, led by Richard Stallman of the Free Software Foundation, preoccupied with ensuring that users had rights with respect to their computer programs. Those rights weren’t many—users should be able to run the program for any purpose, to study how it works, to redistribute copies of it, and to release their improved version (if there was one) to the public—but even this seemed revolutionary compared to what one could do with most proprietary software sold at the time.
Software that ensured the aforementioned four rights was dubbed “free software.” It was “free” thanks to its association with “freedom” rather than “free beer”; there was no theoretical opposition to charging money for building and maintaining such software. To provide legal cover, Stallman invented an ingenious license that relied on copyright law to suspend its own most draconian provisions—a legal trick that came to be known as “copyleft.” GPL (short for “General Public License”) has become the most famous and widely used of such “copyleft” licenses.
From its very beginning in the early 1980s, Stallman’s movement aimed to produce a free software alternative to proprietary operating systems like Unix and to proprietary software like Microsoft Office. Stallman’s may not have been the best software on offer, but some sacrifice of technological efficiency was a price worth paying for emancipation. Some discomfort might even be desirable, for Stallman’s goal, as he put it in his 1998 essay “Why ‘Free Software’ is Better Than ‘Open Source,’” was to ask “people to think about things they might rather ignore.”
Underpinning Stallman’s project was a profound critique of the role that copyright law had come to play in stifling innovation and creativity. Perhaps inadvertently, Stallman also made a prescient argument for treating code, and technological infrastructure more broadly, as something that ought to be subject to public scrutiny. He sought to open up the very technological black boxes that corporations conspired to keep shut. Had his efforts succeeded, we might already be living in a world where the intricacies of software used for high-frequency trading or biometric identification presented no major mysteries.
Stallman is highly idiosyncratic, to put it mildly, and there are many geeks who don’t share his agenda. Plenty of developers contributed to “free software” projects for reasons that had nothing to do with politics. Some, like Linus Torvalds, the Finnish creator of the much-celebrated Linux operating system, did so for fun; some because they wanted to build more convenient software; some because they wanted to learn new and much-demanded skills.
Once the corporate world began expressing interest in free software, many nonpolitical geeks sensed a lucrative business opportunity. As technology entrepreneur Michael Tiemann put it in 1999, while Stallman’s manifesto “read like a socialist polemic . . . I saw something different. I saw a business plan in disguise.” Stallman’s rights-talk, however, risked alienating the corporate types. Stallman didn’t care about offending the suits, as his goal was to convince ordinary users to choose free software on ethical grounds, not to sell it to business types as a cheaper or more efficient alternative to proprietary software. After all, he was trying to launch a radical social movement, not a complacent business association.
By early 1998 several business-minded members of the free software community were ready to split from Stallman, so they masterminded a coup, formed their own advocacy outlet—the Open Source Initiative—and brought in O’Reilly to help them rebrand. The timing was right. Netscape had just marked its capitulation to Microsoft in the so-called Browser Wars and promised both that all future versions of Netscape Communicator would be released free of charge and that its code would also be made publicly available. A few months later, O’Reilly organized a much-publicized summit, where a number of handpicked loyalists—Silicon democracy in action!—voted for “open source” as their preferred label. Stallman was not invited.
The label “open source” may have been new, but the ideas behind it had been in the air for some time. In 1997, even before the coup, Eric Raymond—a close associate of O’Reilly, a passionate libertarian, and the founder of a group with the self-explanatory title “Geeks with Guns”—delivered a brainy talk called “The Cathedral and the Bazaar,” which foresaw the emergence of a new, radically collaborative way to make software. (In 1999, O’Reilly turned it into a successful book.) Emphasizing its highly distributed nature, Raymond captured the essence of open source software in a big-paradigm kind of way that could spellbind McKinsey consultants and leftist academics alike.
Even before the coup, O’Reilly occupied an ambiguous—and commercially pivotal—place in the free software community. On the one hand, he published manuals that helped to train new converts to the cause. On the other hand, those manuals were pricey. They were also of excellent quality, which, as Stallman once complained, discouraged the community from producing inexpensive alternatives. Ultimately, however, the disagreement between Stallman and O’Reilly—and the latter soon became the most visible cheerleader of the open source paradigm—probably had to do with their very different roles and aspirations. Stallman the social reformer could wait for decades until his ethical argument for free software prevailed in the public debate. O’Reilly the savvy businessman had a much shorter timeline: a quick embrace of open source software by the business community guaranteed steady demand for O’Reilly books and events, especially at a time when some analysts were beginning to worry—and for good reason, as it turned out—that the tech industry was about to collapse.
In those early days, the messaging around open source occasionally bordered on propaganda. As Raymond himself put it in 1999, “what we needed to mount was in effect a marketing campaign,” one that “would require marketing techniques (spin, image-building, and re-branding) to make it work.” This budding movement prided itself on not wanting to talk about the ends it was pursuing; except for improving efficiency and decreasing costs, those were left very much undefined. Instead, it put all the emphasis on how it was pursuing those ends—in an extremely decentralized manner, using Internet platforms, with little central coordination. In contrast to free software, then, open source had no obvious moral component. According to Raymond, “open source is not particularly a moral or a legal issue. It’s an engineering issue. I advocate open source, because . . . it leads to better engineering results and better economic results.” O’Reilly concurred. “I don’t think it’s a religious issue. It’s really about how do we actually encourage and spark innovation,” he announced a decade later. While free software was meant to force developers to lose sleep over ethical dilemmas, open source software was meant to end their insomnia.
The coup succeeded. Stallman’s project was marginalized. But O’Reilly and his acolytes didn’t win with better arguments; they won with better PR. To make his narrative about open source software credible to a public increasingly fascinated by the Internet, O’Reilly produced a highly particularized account of the Internet that subsequently took on a life of its own. In just a few years, that narrative became the standard way to talk about Internet history, giving it the kind of neat intellectual coherence that it never actually had. A decade after producing a singular vision of the Internet to justify his ideas about the supremacy of the open source paradigm, O’Reilly is close to pulling a similar trick on how we talk about government reform.
To understand how O’Reilly’s idea of the Internet helped legitimize the open source paradigm, it’s important to remember that much of Stallman’s efforts centered on software licenses. O’Reilly’s bet was that as software migrated from desktops to servers—what, in another fit of buzzwordophilia, we later called the “cloud”—licenses would cease to matter. Since no code changed hands when we used Google or Amazon, it was counterproductive to fixate on licenses. “Let’s stop thinking about licenses for a little bit. Let’s stop thinking that that’s the core of what matters about open source,” O’Reilly urged in an interview with InfoWorld in 2003.
So what did matter about open source? Not “freedom”—at least not in Stallman’s sense of the word. O’Reilly cared for only one type of freedom: the freedom of developers to distribute software on whatever terms they fancied. This was the freedom of the producer, the Randian entrepreneur, who must be left to innovate, undisturbed by laws and ethics. The most important freedom, as O’Reilly put it in a 2001 exchange with Stallman, is that which protects “my choice as a creator to give, or not to give, the fruits of my work to you, as a ‘user’ of that work, and for you, as a user, to accept or reject the terms I place on that gift.”
This stood in stark contrast to Stallman’s plan of curtailing—by appeals to ethics and, one day, perhaps, law—the freedom of developers in order to promote the freedom of users. O’Reilly opposed this agenda: “I completely support the right of Richard [Stallman] or any individual author to make his or her work available under the terms of the GPL; I balk when they say that others who do not do so are doing something wrong.” The right thing to do, according to O’Reilly, was to leave developers alone. “I am willing to accept any argument that says that there are advantages and disadvantages to any particular licensing method. . . . My moral position is that people should be free to find out what works for them,” he wrote in 2001. That “what works” for developers might eventually hurt everyone else—which was essentially Stallman’s argument—did not bother O’Reilly. For all his economistic outlook, he was not one to talk externalities.
According to this Randian interpretation of open source, the goal of regulation and public advocacy should be to ensure that absolutely nothing—no laws or petty moral considerations—stood in the way of the open source revolution. Any move to subject the fruits of developers’ labor to public regulation, even if its goal was to promote a greater uptake of open source software, must be opposed, since it would taint the reputation of open source as technologically and economically superior to proprietary software. Occasionally this stance led to paradoxes, as, for example, during a heated 2002 debate on whether governments should be required to ditch Microsoft and switch to open source software. O’Reilly expressed his vehement opposition to such calls. “No one should be forced to choose open source, any more than they should be forced to choose proprietary software. And any victory for open source achieved through deprivation of the user’s right to choose would indeed be a betrayal of the principles that free software and open source have stood for,” O’Reilly wrote in a widely discussed blog post.
That such an argument could be mounted reveals just how much political baggage was smuggled into policy debates once “open source software” replaced “free software” as the idiom of choice. Governments are constantly pushed to do things someone in the private sector may not like; why should the software industry be special? Promoting accountability or improving network security might indeed disrupt someone’s business model—but so what? Once a term like “open source” entered our vocabulary, one could recast the whole public policy calculus in very different terms, so that instead of discussing the public interest, we are discussing the interests of individual software developers, while claiming that this is a discussion about “innovation” and “progress,” not “accountability” or “security.”
To weaken Stallman’s position, O’Reilly had to show that the free software movement was fighting a pointless, stupid war: the advent of the Internet made Stallman’s obsession with licenses obsolete. There was a fair amount of semantic manipulation at play here. For Stallman, licenses were never an end in themselves; they mattered only as much as they codified a set of practices deriving from his vision of a technologically mediated good life. Licenses, in other words, were just the means to enable the one and only end that mattered to free software advocates: freedom. A different set of technological practices—e.g., the move from desktop-run software to the cloud—could have easily accommodated a different means of ensuring that freedom.
In fact, Stallman’s philosophy, however rudimentary, had all the right conceptual tools to let us think about the desirability of moving everything to the cloud. The ensuing assault on privacy, the centralization of data in the hands of just a handful of companies, the growing accessibility of user data to law enforcement agencies who don’t even bother getting a warrant: all those consequences of cloud computing could have been predicted and analyzed, even if fighting those consequences would have required tools other than licenses. O’Reilly’s PR genius lay in having almost everyone confuse the means and the ends of the free software movement. Since licenses were obsolete, the argument went, software developers could pretty much disregard the ends of Stallman’s project (i.e., its focus on user rights and freedoms) as well. Many developers did stop thinking about licenses, and, having stopped thinking about licenses, they also stopped thinking about broader moral issues that would have remained central to the debates had “open source” not displaced “free software” as the paradigm du jour. Sure, there were exceptions—like the highly political and legalistic community that worked on Debian, yet another operating system—but they were the exceptions that proved the rule.
To maximize the appeal and legitimacy of this new paradigm, O’Reilly had to establish that open source both predated free software and was well on its way to conquering the world—that it had a rich history and a rich future. The first objective he accomplished, in part, by exploiting the ambiguities of the term “open”; the second by framing debate about the Internet around its complex causal connections to open source software.
“Open” allowed O’Reilly to build the largest possible tent for the movement. The language of economics was less alienating than Stallman’s language of ethics; “openness” was the kind of multipurpose term that allowed one to look political while advancing an agenda that had very little to do with politics. As O’Reilly put it in 2010, “the art of promoting openness is not to make it a moral crusade, but rather to highlight the competitive advantages of openness.” Replace “openness” with any other loaded term—say “human rights”—in this sentence, and it becomes clear that this quest for “openness” was politically toothless from the very outset. What, after all, if your interlocutor doesn’t give a damn about competitive advantages?
The term “open source” was not invented by O’Reilly. Christine Peterson, the cofounder of Foresight Institute (a nanotechnology think tank), coined it in a February 1998 brainstorm session convened to react to Netscape’s release of Navigator’s source code. Few words in the English language pack as much ambiguity and sexiness as “open.” And after O’Reilly’s bombastic interventions—“Open allows experimentation. Open encourages competition. Open wins,” he once proclaimed in an essay—its luster has only intensified. Profiting from the term’s ambiguity, O’Reilly and his collaborators likened the “openness” of open source software to the “openness” of the academic enterprise, markets, and free speech. “Open” thus could mean virtually anything, from “open to intellectual exchange” (O’Reilly in 1999: “Once you start thinking of computer source code as a human language, you see open source as a variety of ‘free speech’”) to “open to competition” (O’Reilly in 2000: “For me, ‘open source’ in the broader sense means any system in which open access to code lowers the barriers to entry into the market”).
Unsurprisingly, the availability of source code for universal examination soon became the one and only benchmark of openness. What the code did was of little importance—the market knows best!—as long as anyone could check it for bugs. The new paradigm was presented as something that went beyond ideology and could attract corporate executives without losing its appeal to the hacker crowd. “The implication of [the open source] label is that we intend to convince the corporate world to adopt our way for economic, self-interested, non-ideological reasons,” Eric Raymond noted in 1998. What Raymond and O’Reilly failed to grasp, or decided to overlook, is that their effort to present open source as non-ideological was underpinned by a powerful ideology of its own—an ideology that worshiped innovation and efficiency at the expense of everything else.
It took a lot of creative work to make the new paradigm stick. One common tactic was to present open source as having a much longer history that even predates 1998. Thus, writing shortly after O’Reilly’s historic open source summit, Raymond noted that “the summit was hosted by O’Reilly & Associates, a company that has been symbiotic with the Open Source movement for many years.” That the term “open source” was just a few months old by the time Raymond wrote this didn’t much matter. History was something that clever PR could easily fix. “As we thought about it, we said, gosh, this is also a great PR opportunity—we’re a company that has learned to work the PR angles on things,” O’Reilly said in 1999. “So part of the agenda for the summit was hey, just to meet and find out what we had in common. And the second agenda was really to make a statement of some kind [that] this was a movement, that all these different programs had something in common.”
What they had in common was disdain for Stallman’s moralizing—barely enough to justify their revolutionary agenda, especially among the hacker crowds who were traditionally suspicious of anyone eager to suck up to the big corporations that aspired to dominate the open source scene.
By linking this new movement to both the history of the Internet and its future, O’Reilly avoided most of those concerns. One didn’t have to choose open source, because the choice had already been made. As long as everyone believed that “open source” implied “the Internet” and that “the Internet” implied “open source,” it would be very hard to resist the new paradigm. As O’Reilly—always the PR man—wrote in a 2004 essay, “It has always baffled and disappointed me that the open source community has not claimed the web as one of its greatest success stories. . . . That’s a PR failure!” To make up for that failure, O’Reilly had to establish some causal relationship between the two—the details could be worked out later on.
“I think there’s a paradigm shift going on right now, and it’s really around both open source and the Internet, and it’s not entirely clear which one is the driver and which one is the passenger, but at least they are fellow travellers,” he announced in his InfoWorld interview. Compared to the kind of universal excitement generated by the Internet, Stallman’s license-talk was about as exciting as performing Mahler at a Jay-Z concert. As O’Reilly himself acknowledged, his “emphasis in talking about open source has never been on the details of licenses, but on open source as a foundation and expression of the Internet.” When something is touted as both a foundation and an expression of something else, the underlying logic could probably benefit from more rigor.
Telling a coherent story about open source required finding some inner logic to the history of the Internet. O’Reilly was up to the task. “If you believe me that open source is about Internet-enabled collaboration, rather than just about a particular style of software license,” he said in 2000, “you’ll see the threads that tie together not just traditional open source projects, but also collaborative ‘computing grid’ projects like SETI@home, user reviews on Amazon.com, technologies like collaborative filtering, new ideas about marketing such as those expressed in The Cluetrain Manifesto, weblogs, and the way that Internet message boards can now move the stock market.” In other words, everything on the Internet was connected to everything else—via open source.
The way O’Reilly saw it, many of the key developments of Internet culture were already driven by what he called “open source behavior,” even if such behavior was not codified in licenses. For example, the fact that one could view the source code of a webpage right in one’s browser has little to do with open source software, but it was part of the same “openness” spirit that O’Reilly saw at work in the Internet. No moralizing (let alone legislation) was needed; the Internet already lived and breathed open source. What O’Reilly didn’t say is that, of course, it didn’t have to be this way forever. Now that apps might be displacing the browser, the openness once taken for granted is no more—a contingency that licenses and morals could have easily prevented. Openness as a happenstance of market conditions is a very different beast from openness as a guaranteed product of laws.
One of the key consequences of linking the Internet to the world of open source was to establish the primacy of the Internet as the new, reinvented desktop—as the greatest, and perhaps ultimate, platform—for hosting third-party services and applications. This is where the now-forgotten language of “freedom” made a comeback, since it was important to ensure that O’Reilly’s heroic Randian hacker-entrepreneurs were allowed to roam freely. Soon this “freedom to innovate” morphed into “Internet freedom,” so that what we are trying to preserve is the innovative potential of the platform, regardless of the effects on individual users.
Stallman had on offer something far more precise and revolutionary: a way to think about the freedoms of individual users in specific contexts, as if the well-being of the mega-platform were of secondary importance. But that vision never came to pass. Instead, public advocacy efforts were channeled into preserving an abstract and reified configuration of digital technologies—“the Internet”—so that Silicon Valley could continue making money by hoovering up our private data.
Lumping everything under the label of “Internet freedom” did have some advantages for those genuinely interested in promoting rights such as freedom of expression—the religious fervor that many users feel about the Internet has helped catalyze a lot of activist campaigns—but, by and large, the concept also blunted our analytical ability to balance rights against each other. Forced to choose between preserving the freedom of the Internet and that of its users, we were supposed to choose the former—because “the Internet” stood for progress and enlightenment.
In the late 1990s, O’Reilly began celebrating “infoware” as the next big thing after “hardware” and “software.” His premise was that Internet companies such as Yahoo and E-Trade were not in the software business but in the infoware business. Their functionality was pretty basic—they allowed customers to make purchases or look up something on a map—so their value proposition lay in the information they delivered, not in the software function they executed. And all those fancy Internet services that made infoware possible were patched together with open source software. By showing that infoware was the future and that open source software was its essential component, O’Reilly sought to reassure those who hadn’t joined the movement of their pivotal role in the future of computing, if not all human progress.
The “infoware” buzzword didn’t catch on, so O’Reilly turned to the work of Douglas Engelbart, the idiosyncratic inventor who gave us the computer mouse and hypertext, to argue that the Internet could help humanity augment its “collective intelligence” and that, once again, open source software was crucial to this endeavor. Now it was all about Amazon learning from its customers and Google learning from the sites in its index. The idea of the Internet as both a repository and incubator of “collective intelligence” was very appealing to Silicon Valley, not least because it tapped into the New Age rhetoric of the 1970s, but the dotcom crash briefly forced O’Reilly to put his philosophizing on hold. When the tech bubble burst, the demand for manuals and conferences—the bulk of O’Reilly’s business—shrank, while he also had to deal with some unpleasant litigation concerning his office headquarters in Sebastopol, California. He fired a quarter of his staff, and things looked pretty dire.
Then, in 2004, O’Reilly and his business partner Dale Dougherty hit on the idea of “Web 2.0.” What did “2.0” mean, exactly? There was some theoretical ambition to this label—more about that later on—but the primary goal was to show that the 2001 market crash did not mean the end of the web and that it was time to put the crash behind us and start learning from those who survived. Given how much rhetorical capital had been spent on linking the idea of the web with that of open source, the end of the web would also mean the end of so many other concepts. Tactically, “Web 2.0” could also be much bigger than “open source”; it was the kind of sexy umbrella term that could allow O’Reilly to branch out from boring and highly technical subjects to pulse-quickening futurology. “We normally have lots of technical talks focusing on how to use new software, building our conferences for the hackers who are inventing the future, and the early adopters who are taking their work to the next stage,” O’Reilly wrote in a blog post announcing his very first Web 2.0 conference. “In contrast, Web 2.0 is our first ‘executive conference’—a conference aimed at business people, with the focus on the big picture.”
Thus, a high-profile conference was born, aimed explicitly at helping VIPs in the Valley “see the shape of the future,” to be followed by many others. O’Reilly soon expanded on the idea of Web 2.0 in an essay that he coauthored with writer and entrepreneur John Battelle. O’Reilly couldn’t improve on a concept as sexy as “collective intelligence,” so he kept it as the defining feature of this new phenomenon. What set Web 2.0 apart from Web 1.0, O’Reilly claimed, was the simple fact that those firms that didn’t embrace it went bust. All Silicon Valley companies should heed the lesson of those few who survived: they must find a way to harness collective intelligence and make it part of their business model. They must become true carriers of the Web 2.0 spirit.
O’Reilly’s explanation of the crash is curious. First of all, some tech companies that did go under (Global Crossing comes to mind) couldn’t harness collective intelligence, as they were in the telecommunications business. Most memorable dotcom failures—cases like Pets.com—went under because they were driven by foolish business models and overly exuberant investors. (Pets.com would have made an even worse proposition if it had followed O’Reilly’s playbook and become a Web 2.0 company.) Furthermore, companies that didn’t follow the Web 2.0 mantra—like Barnes & Noble, which O’Reilly singled out as a company that, unlike Amazon, wasn’t learning from collective intelligence—didn’t go under at all.
By 2007, O’Reilly readily admitted that “Web 2.0 was a pretty crappy name for what’s happening.” Back in 2004, however, he seemed pretty serious, promoting this concept left and right. The label caught on; like “open source,” it was ambiguous and capacious enough to allow many alternative uses and interpretations. O’Reilly’s partners in organizing the conference duly trademarked the term “Web 2.0,” but this news wasn’t well received by their fellow travellers (a similar effort to trademark “open source” by the Open Source Initiative failed). Once “Web 2.0” was established as a term of cultural reference, O’Reilly could venture outside Silicon Valley and establish its relevance to other industries. Much as “open source software” gave rise to “open source politics” and “open source science,” so did “Web 2.0” expand its terminological empire. O’Reilly eventually stuck a 2.0 label on anything that suited his business plan, running events with titles like “Gov 2.0” and “Where 2.0.” Today, as everyone buys into the 2.0 paradigm, O’Reilly is quietly dropping it. Last year his “Where 2.0” conference on geolocation was rebranded as just “Where.” The exceptional has become the new normal.
Sorting through the six thousand or so academic papers that cite O’Reilly’s essay on Web 2.0 is no easy feat. It seems that anyone who wanted to claim that a revolution was under way in their own field did so simply by invoking the idea of Web 2.0 in their work: Development 2.0, Nursing 2.0, Humanities 2.0, Protest 2.0, Music 2.0, Research 2.0, Library 2.0, Disasters 2.0, Road Safety 2.0, Identity 2.0, Stress Management 2.0, Archeology 2.0, Crime 2.0, Pornography 2.0, Love 2.0, Wittgenstein 2.0. What unites most of these papers is a shared background assumption that, thanks to the coming of Web 2.0, we are living through unique historical circumstances. Except that there was no coming of Web 2.0—it was just a way to sell a technology conference to a public badly burned by the dotcom crash. Why anyone dealing with stress management or Wittgenstein would be moved by the logistics of conference organizing is a mystery.
O’Reilly himself pioneered this 2.0-ification of public discourse, aggressively reinterpreting trends that had been happening for decades through the prism of Internet history—a move that presented all those trends as just a logical consequence of the Web 2.0 revolution. Take O’Reilly’s musings on “Enterprise 2.0.” What is it, exactly? Well, it’s the same old enterprise—for all we know, it might be making widgets—but now it has learned something from Google and Amazon and found a way to harness “collective intelligence.” For O’Reilly, Walmart is a quintessential Enterprise 2.0 company simply because it tracks what its customers are buying in real time.
That this is a rather standard practice—known under the boring title of “just-in-time delivery”—predating both Google and Amazon didn’t register with O’Reilly. In a Web 2.0 world, all those older concepts didn’t matter or even exist; everything was driven by the forces of open source and the Internet. A revolution was in the making!
This was a typical consequence of relying on Web 2.0 as the guiding metaphor of the age: in the case of Enterprise 2.0, a trend that had little connection to the Internet got reinscribed in the Internet frame, as if attaching the label of 2.0 was all that was needed to establish the logical parallels between the worlds of retail and search. This tendency to redescribe reality in terms of Internet culture, regardless of how spurious and tenuous the connection might be, is a fine example of what I call “Internet-centrism.”
And soon Web 2.0 became the preferred way to explain any changes that were happening in Silicon Valley and far beyond it. Most technology analysts simply borrowed the label to explain whatever needed explaining, taking its utility and objectivity for granted. “Open source” gave us “the Internet,” “the Internet” gave us “Web 2.0,” “Web 2.0” gave us “Enterprise 2.0”: in this version of history, Tim O’Reilly is more important than Tim Berners-Lee. Everything needed to be rethought and redone: enterprises, governments, health care, finance, factory production. For O’Reilly, there were few problems that could not be solved with Web 2.0: “Our world is fraught with problems . . . from roiling financial markets to global warming, failing healthcare systems to intractable religious wars . . . many of our most complex systems are reaching their limits. It strikes us that the Web might teach us new ways to address these limits.” Web 2.0 was a source of didactic wisdom, and O’Reilly had the right tools to interpret what it wanted to tell us—in each and every context, be it financial markets or global warming. All those contexts belonged to the Internet now. Internet-centrism won.
In his 1976 book Crazy Talk, Stupid Talk, Neil Postman pointed to a certain linguistic imperialism that propels crazy talk. For Postman, each human activity—religion, law, marriage, commerce—represents a distinct “semantic environment” with its own tone, purpose, and structure. Stupid talk is relatively harmless; it presents no threat to its semantic environment and doesn’t cross into other ones. Since it mostly consists of falsehoods and opinions “given by one fallible human being about the remarks of another fallible human being,” it can be easily corrected with facts. For example, to say that Tehran is the capital of Iraq is stupid talk. Crazy talk, in contrast, challenges a semantic environment, as it “establishes different purposes and assumptions from those we normally accept.” To argue, as some Nazis did, that the German soldiers ended up far more traumatized than their victims is crazy talk.
For Postman, one of the main tasks of language is to codify and preserve distinctions among different semantic environments. As he put it, “When language becomes undifferentiated, human situations disintegrate: Science becomes indistinguishable from religion, which becomes indistinguishable from commerce, which becomes indistinguishable from law, and so on. If each of them serves the same function, then none of them serves any function. When such a process is occurring, an appropriate word for it is pollution.” Some words—like “law”—are particularly susceptible to crazy talk, as they mean so many different things: from scientific “laws” to moral “laws” to “laws” of the market to administrative “laws,” the same word captures many different social relations. “Open,” “networks,” and “information” function much like “law” in our own Internet discourse today.
Postman’s thinking on the inner workings of language was heavily influenced by the work of Alfred Korzybski, a Polish count now remembered—if at all—for his 1933 book Science and Sanity. Korzybski founded a movement called general semantics. While it has inspired many weird and dangerous followers—Scientology’s L. Ron Hubbard claimed to have been a fan—it also earned the support of many serious thinkers, from cyberneticians like Anatol Rapoport to philosophers like Gaston Bachelard. For Korzybski, the world has a relational structure that is always in flux; like Heraclitus, who argued that everything flows, Korzybski believed that an object A at time x1 is not the same object as object A at time x2 (he actually recommended indexing every term we use with a relevant numerical in order to distinguish “science 1933” from “science 2013”). Our language could never properly account for the highly fluid and relational structure of our reality—or as he put it in his most famous aphorism, “the map is not the territory.”
Korzybski argued that we relate to our environments through the process of “abstracting,” whereby our neurological limitations always produce an incomplete and very selective summary of the world around us. There was nothing harmful in this per se—Korzybski simply wanted to make people aware of the highly selective nature of abstracting and give us the tools to detect it in our everyday conversations. He wanted to artificially induce what he called a “neurological delay” so that we could gain more awareness of what we were doing in response to verbal and nonverbal stimuli, understand what features of reality have been omitted, and react appropriately.
To that end, Korzybski developed a number of mental tools meant to reveal all the abstracting around us; he patented the most famous of those—the “structural differential”—in the 1920s. He also encouraged his followers to start using “etc.” at the end of their statements as a way of making them aware of their inherent inability to say everything about a given subject and to promote what he called the “consciousness of abstraction.”
There was way too much craziness and bad science in Korzybski’s theories for him to be treated as a serious thinker, but his basic question—as Postman put it, “What are the characteristics of language which lead people into making false evaluations of the world around them?”—still remains relevant today.
Tim O’Reilly is, perhaps, the most high-profile follower of Korzybski’s theories today. O’Reilly was introduced to Korzybski’s thought as a teenager while working with a strange man called George Simon in the midst of California’s counterculture of the early 1970s. O’Reilly and Simon were coteaching workshops at the Esalen Institute—then a hotbed of the “human potential movement” that sought to tap the hidden potential of its followers and increase their happiness. Bridging Korzybski’s philosophy with Sri Aurobindo’s integral yoga, Simon had an immense influence on the young O’Reilly. Simon’s rereading of general semantics, noted O’Reilly in 2004, “gave me a grounding in how to see people, and to acknowledge what I saw, that is the bedrock of my personal philosophy to this day.” (In 1976 the twenty-two-year-old O’Reilly edited and published notebooks by Simon after the latter died in an accident; even by the highly demanding standards of the 1970s, those notebooks look outright crazy.)
O’Reilly openly acknowledges his debt to Korzybski, listing Science and Sanity among his favorite books and even showing visualizations of the structural differential in his presentations. It would be a mistake to think that O’Reilly’s linguistic interventions—from “open source” to “Web 2.0”—are random or spontaneous. There is a philosophy to them: a philosophy of knowledge and language inspired by Korzybski. However, O’Reilly deploys Korzybski in much the same way that the advertising industry deploys the latest findings in neuroscience: the goal is not to increase awareness, but to manipulate. If general semanticists aimed to reveal the underlying emptiness of many concepts that pollute the public debate, O’Reilly is applying some of Korzybski’s language insights to practice some pollution of his own.
O’Reilly, of course, sees his role differently, claiming that all he wants is to make us aware of what earlier commentators may have overlooked. “A metaphor is just that: a way of framing the issues such that people can see something they might otherwise miss,” he wrote in response to a critic who accused him of linguistic incontinence. But Korzybski’s point, if fully absorbed, is that a metaphor is primarily a way of framing issues such that we don’t see something we might otherwise see.
In public, O’Reilly modestly presents himself as someone who just happens to excel at detecting the “faint signals” of emerging trends. He does so by monitoring a group of überinnovators that he dubs the “alpha geeks.” “The ‘alpha geeks’ show us where technology wants to go. Smart companies follow and support their ingenuity rather than trying to suppress it,” O’Reilly writes. His own function is that of an intermediary—someone who ensures that the alpha geeks are heard by the right executives: “The alpha geeks are often a few years ahead of their time. . . . What we do at O’Reilly is watch these folks, learn from them, and try to spread the word by writing down (or helping them write down) what they’ve learned and then publishing it in books or online.”
The name of his company’s blog—O’Reilly Radar—is meant to position him as an independent intellectual who is simply ahead of his peers in grasping the obvious. Some regular contributors to the Radar blog have titles like “correspondents,” giving the whole operation a veneer of objectivity and disinterestedness, with O’Reilly merely a commentator knowledgeable enough to provide some context to busy Silicon Valley types. An Edwin Schlossberg quotation he really likes—“the skill of writing is to create a context in which other people can think”—is cited to explain his willingness to enter so many seemingly unrelated fields. As Web 2.0 becomes central to everything, O’Reilly—the world’s biggest exporter of crazy talk—is on a mission to provide the appropriate “context” to every field.
In a fascinating essay published in 2000, O’Reilly sheds some light on his modus operandi. The thinker who emerges there is very much at odds with the spirit of objectivity that O’Reilly seeks to cultivate in public. That essay, in fact, is a revealing ode to what O’Reilly dubs “meme-engineering”: “Just as gene engineering allows us to artificially shape genes, meme-engineering lets us organize and shape ideas so that they can be transmitted more effectively, and have the desired effect once they are transmitted.” In a move worthy of Frank Luntz, O’Reilly meme-engineers a nice euphemism—“meme-engineering”—to describe what has previously been known as “propaganda.”
The essay’s putative goal is to show how one can meme-engineer a new meaning for “peer-to-peer” technologies—traditionally associated with piracy—and make them appear friendly and not at all threatening to the entertainment industry. Leading by example, O’Reilly invokes his success in rebranding “free software” as “open source.” The key to success, he notes, was to “put a completely different spin on what formerly might have been considered the ‘same space.’” To make that happen, O’Reilly and his acolytes “changed the canonical list of projects that we wanted to hold up as exemplars of the movement,” while also articulating what broader goals the projects on the new list served. He then proceeds to rehash the already familiar narrative: O’Reilly put the Internet at the center of everything, linking some “free software” projects like Apache or Perl to successful Internet start-ups and services. As a result, the movement’s goal was no longer to produce a completely free, independent, and fully functional operating system but to worship at the altar of the Internet gods.
Another apt example of O’Reilly’s meme-engineering is his attempt to establish a strong intellectual link between the development of Unix—a proprietary operating system that Stallman sought to replace with free software—and the development of open source and the Internet. Thus, for instance, O’Reilly claimed that Unix was built and improved in the spirit of open source because its academic cheerleaders were already swapping code with each other in the early 1970s. That such exchanges were just a regular part of the freewheeling academic culture and had little to do with philosophical attitudes toward code doesn’t weaken the argument; in fact, this is recast as an advantage, as now the open source model can be presented as just a natural extension of the scientific method. (Since O’Reilly himself played an important role in the production of Unix manuals, his own contribution to the Internet and open source suddenly looks even more significant.)
But O’Reilly’s meme-engineering around Unix doesn’t just stop at the purely discursive level. In his talks and writings, O’Reilly often points to one highly technical 1984 book—The Unix Programming Environment—as proof that, at least with respect to collaboration, Unix was some kind of proto-Internet. Indeed, the Wikipedia page for the book states that “the book is perhaps most valuable for its exposition of the Unix philosophy of small cooperating tools with standardized inputs and outputs, a philosophy that also shaped the end-to-end philosophy of the Internet. It is this philosophy, and the architecture based on it, that has allowed open source projects to be assembled into larger systems such as Linux, without explicit coordination between developers.”
Could it be that O’Reilly is right in claiming that “open source” has a history that predates 1998? Well, Wikipedia won’t tell us much here: in a recent Berkeley talk, O’Reilly admitted that he was the one to edit the Wikipedia page for the book. O’Reilly is perfectly positioned to control our technology discourse: as a publisher, he can churn out whatever books he needs to promote his favorite memes—and, once those have been codified in book form, they can be easily admitted into Wikipedia, where they quickly morph into facts. What’s not to like about “collective intelligence”?
Seen through the prism of meme-engineering, O’Reilly’s activities look far more sinister. His “correspondents” at O’Reilly Radar don’t work beats; they work memes and epistemes, constantly reframing important public issues in accordance with the templates prophesied by O’Reilly. Recently, for example, O’Reilly has been interested in the meme of “the industrial Internet,” forming a partnership with GE to participate in events and cover the company on the blog. Once “the industrial Internet” meme is out of the bag, only a lack of imagination prevents O’Reilly’s writers from seeing it absolutely everywhere. Here is how one of them describes a company that might not otherwise fit the boundaries of the meme: “I’m sure [its founder] wouldn’t use the words ‘industrial Internet’ to describe what he and his team are doing, and it might be a little bit of a stretch to categorize 3Scan that way. But I think they are an exemplar of many of the core principles of the meme and it’s interesting to think about them in that frame.” Five years down the road, would you be surprised if there is, in fact, something called “the industrial Internet” and if the primary goal of most activism around it is to defend the freedom of GE to “innovate” on it as it pleases?
Or take O’Reilly’s meme-engineering efforts around cyberwarfare. In a recent post on the subject, he muses on just how narrowly we have defined the idea of “cyberwarfare” and suggests we expand it to encompass conflicts between states and individuals. Now, who stands to benefit from “cyberwarfare” being defined more broadly? Could it be those who, like O’Reilly, can’t currently grab a share of the giant pie that is cybersecurity funding? If O’Reilly’s meme-engineering efforts succeed, we might end up classifying acts that should be treated as crime, espionage, or terrorism under the ambiguous label of “war.” Such reframing would be disastrous for civil liberties and privacy and would only exacerbate the already awful legal prosecution of hacktivists. It probably won’t be long before a “cyberwarfare correspondent” is added to O’Reilly’s media empire.
In his 2007 bestseller Words That Work, the Republican operative Frank Luntz lists ten rules of effective communication: simplicity, brevity, credibility, consistency, novelty, sound, aspiration, visualization, questioning, and context. O’Reilly, while employing most of them, has a few unique rules of his own. Clever use of visualization, for example, helps him craft his message in a way that is both sharp and open-ended. Thus, O’Reilly’s meme-engineering efforts usually result in “meme maps,” where the meme to be defined—whether it’s “open source” or “Web 2.0”—is put at the center, while other blob-like terms are drawn as connected to it.
The exact nature of these connections is rarely explained in full, but this is all for the better, as the reader might eventually interpret connections with their own agendas in mind. This is why the name of the meme must be as inclusive as possible: you never know who your eventual allies might be. “A big part of meme engineering is giving a name that creates a big tent that a lot of people want to be under, a train that takes a lot of people where they want to go,” writes O’Reilly. Once the meme has been conceived, the rest of O’Reilly’s empire can step in and help make it real. His conferences, for example, play a crucial role: “When you look at any of our events, there’s ultimately some rewriting of the meme map in each of them. Web 2.0 was about distinguishing companies that survived the dotcom bust from those that didn’t. Strata is about defining the new field of data science. Velocity is about making clear that the applications of the web depend on people to keep them running, unlike past generations of software that were simply software artifacts.”
There is considerable continuity across O’Reilly’s memes—over time, they tend to morph into one another. Thus, as he puts it, “‘open source’ was a great reshaping of the meme for its day, moving us off some of the limitations of ‘free software,’ but it may not be the end of the story.” O’Reilly has gradually lost interest in “open source” and “Web 2.0,” moving on to new memes: “government as a platform” and “algorithmic regulation.” We can only guess what comes next. Such dexterity not only helps in organizing new events and investing in cool start-ups; it also, as those six thousand papers that cite Web 2.0 attest, leaves a huge imprint on our culture.
All the familiar pathologies of O’Reilly’s thinking are on full display in his quest to meme-engineer his way to “Government 2.0.” The free software scenario is repeating itself: deeply political reform efforts are no longer seen as “moral crusades,” but are reinvented as mere attempts at increasing efficiency and promoting innovation.
Before O’Reilly went searching for a big-tent meme, there was little cohesion to the many disparate efforts to use technology to transform government. Some hoped that digitization would help reduce bureaucracy and allow everyone to fill out tax returns online. Others awaited the arrival of electronic town halls that would permit citizens to deliberate on the substance of policies that affect them. Yet another group hoped that digitization might make governments more transparent and accountable by forcing them to put some of the documents obtained through the Freedom of Information Act (FOIA) online. Finally, there were those who believed that increasing the availability and liquidity of government information would lead to new entrepreneurial projects and boost the economy.
Many of these efforts started long before the web and had no obvious connection to Internet culture, let alone Web 2.0. Occasionally, these four efforts—aiming at greater efficiency, deliberation, transparency, and innovation—overlapped, but mostly they have been driven by two very different agendas. One cohort, interested in increasing efficiency and spurring innovation, pursued campaigns that were mostly economic in character; these folks were not particularly interested in the political nature of the regimes they were seeking to reform. Singapore—where anyone can file their paperwork in minutes—was their role model.
The other cohort, interested in deliberation and transparency, was primarily concerned with transferring power from governments to citizens and increasing the accountability of public institutions. They argued that citizens have a right to obtain information about how their governments operate. Such explicitly political demands became the cornerstone of various Right to Information campaigns. This second group wouldn’t accept authoritarian Singapore as a role model, since most of its e-innovations do very little to promote meaningful citizen participation in policy-making or increase accountability.
Most modern governments, not surprisingly, prefer the economic aspects of digitization reform to the political ones. Innovative schemes, like smart parking systems, can help at election time; lengthy disclosures of government deliberations are likely to cause headaches. Right-leaning governments have an extra reason to celebrate the economism of the first cohort: publishing aggregate information about the performance of individual public service providers may help convince the electorate that those services should be provided by the private sector.
By the early 2000s, as O’Reilly and his comrades were celebrating open source as a new revolutionary approach to everything, their discussions wandered into debates about the future of governance. Thus, a term like “open government”—which, until then, had mostly been used as a synonym for “transparent and accountable government”—was reinvented as a shortened version of “open source government.” The implication of this subtle linguistic change was that the main cultural attributes of open source software—the availability of the source code for everyone’s inspection, the immense contribution it can make to economic growth, the new decentralized production model that relies on contributions from numerous highly distributed participants—were to displace older criteria like “transparency” or “accountability” as the most desirable attributes of open government. This redefinition of the “open government” buzzword was meant to produce a very different notion of openness.
Initially, O’Reilly had little role in this process; the meme of “open source” was promiscuous enough to redefine many important terms without his intervention. But in 2007, O’Reilly hosted yet another summit, attended by technologists and civic hackers, to devise a list of key principles of open government. The group came up with eight principles, all focused on the purely technical issue of how to ensure that, once data was released by the government, nothing would hold it back. As long as this “open data” was liquid and reusable, others could build on it. Neither the political process that led to the release of the data nor its content was considered relevant to openness. Thus, data about how many gum-chewers Singapore sends to prison would be “open” as long as the Singaporean government shared it in suitable formats. Why it shared such data was irrelevant.
With Obama’s election, Washington was game for all things 2.0. This is when O’Reilly turned his full attention to government reform, deploying and manipulating several memes at once—“Gov 2.0,” “open government,” and “government as a platform”—in order to establish the semantic primacy of the economic dimension of digitization. A decade earlier, O’Reilly had redefined “freedom” as the freedom of developers to do as they wished; now it was all about recasting “openness” in government in purely economic and innovation-friendly terms while downplaying its political connotations.
O’Reilly’s writings on Gov 2.0 reveal the same talented meme-engineer who gave us open source and Web 2.0. In his seminal essay on the subject, O’Reilly mixes semantic environments without a shred of regret. Both Web 2.0 and Gov 2.0, he argues, return us to earlier, simpler ways, away from the unnecessary complexity of modern institutions. “Web 2.0 was not a new version of the World Wide Web; it was a renaissance after the dark ages of the dotcom bust, a rediscovery of the power hidden in the original design of the World Wide Web,” he writes. “Similarly, Government 2.0 is not a new kind of government; it is government stripped down to its core, rediscovered and reimagined as if for the first time.”
Once it’s been established that new paradigms of government can be modeled on the success of technology companies, O’Reilly can argue that “it’s important to think deeply about what the three design principles of transparency, participation, and collaboration mean in the context of technology.” These were the very three principles that the Obama administration articulated in its “Open Government Directive,” published on the president’s first day in office. But why do we have to think about their meaning in “the context of technology”? The answer is quite simple: whatever transparency and participation used to mean doesn’t matter any longer. Now that we’ve moved to an era of Everything 2.0, the meaning of those terms will be dictated by the possibilities and inclinations of technology. And what is technology today if not “open source” and “Web 2.0”?
Here, for example, is how O’Reilly tries to reengineer the meme of transparency:
The word “transparency” can lead us astray as we think about the opportunity for Government 2.0. Yes, it’s a good thing when government data is available so that journalists and watchdog groups like the Sunlight Foundation can disclose cost overruns in government projects or highlight the influence of lobbyists. But that’s just the beginning. The magic of open data is that the same openness that enables transparency also enables innovation, as developers build applications that reuse government data in unexpected ways. Fortunately, Vivek Kundra and others in the administration understand this distinction, and are providing data for both purposes.
Vivek Kundra is the former chief information officer of the U.S. government who oversaw the launch of a portal called data.gov, which required agencies to upload at least three “high-value” sets of their own data. This data was made “open” in the same sense that open source software is open—i.e., it was made available for anyone to see. But, once again, O’Reilly is dabbling in meme-engineering: the data dumped on data.gov, while potentially beneficial for innovation, does not automatically “enable transparency.” O’Reilly deploys the highly ambiguous concept of openness to confuse “transparency as accountability” (what Obama called for in his directive) with “transparency as innovation” (what O’Reilly himself wants).
How do we ensure accountability? Let’s forget about databases for a moment and think about power. How do we make the government feel the heat of public attention? Perhaps by forcing it to make targeted disclosures of particularly sensitive data sets. Perhaps by strengthening the FOIA laws, or at least making sure that government agencies comply with existing provisions. Or perhaps by funding intermediaries that can build narratives around data—much of the released data is so complex that few amateurs have the processing power and expertise to read and make sense of it in their basements. This might be very useful for boosting accountability but useless for boosting innovation; likewise, you can think of many data releases that would be great for innovation and do nothing for accountability. The language of “openness” does little to help us grasp key differences between the two. In this context, openness leads to Neil Postman’s “crazy talk,” resulting in the pollution of the values of one semantic environment (accountability) with those of another (innovation).
O’Reilly doesn’t always coin new words. Sometimes he manipulates the meanings of existing words. Cue his framing of “participation”:
We can be misled by the notion of participation to think that it’s limited to having government decision-makers “get input” from citizens. This would be like thinking that enabling comments on a website is the beginning and end of social media! It’s a trap for outsiders to think that Government 2.0 is a way to use new technology to amplify the voices of citizens to influence those in power, and by insiders as a way to harness and channel those voices to advance their causes.
It’s hard to make sense of this passage without understanding the exact meaning of a term like “participation” in the glossary of All Things Web 2.0. According to O’Reilly, one of the key attributes of Web 2.0 sites is that they are based on an “architecture of participation”; it’s this architecture that allows “collective intelligence” to be harnessed. Ranking your purchases on Amazon or reporting spammy emails to Google are good examples of clever architectures of participation. Once Amazon and Google start learning from millions of users, they become “smarter” and more attractive to the original users.
This is a very limited vision of participation. It amounts to no more than a simple feedback session with whoever is running the system. You are not participating in the design of that system, nor are you asked to comment on its future. There is nothing “collective” about such distributed intelligence; it’s just a bunch of individual users acting on their own and never experiencing any sense of solidarity or group belonging. Such “participation” has no political dimension; no power changes hands.
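To see just how thin this architecture is, consider a minimal sketch (a hypothetical toy of my own, not drawn from any real Amazon or Google system): each user flags spam for purely selfish reasons, and the “collective intelligence” amounts to nothing more than a running tally.

# A hypothetical toy, not any real system: "participation" as nothing more
# than individual feedback signals aggregated into a filter.
from collections import Counter

class NaiveSpamFilter:
    def __init__(self, threshold=3):
        self.reports = Counter()      # sender -> number of user complaints
        self.threshold = threshold

    def report_spam(self, sender):
        # Each user flags a message purely to clean up their own inbox.
        self.reports[sender] += 1

    def is_blocked(self, sender):
        # The "collective intelligence": block senders that enough users flagged.
        return self.reports[sender] >= self.threshold

spam_filter = NaiveSpamFilter()
for _ in range(3):
    spam_filter.report_spam("offers@example.com")
print(spam_filter.is_blocked("offers@example.com"))   # True

The users in this sketch never talk to one another, never see the tally, and never get to ask whether blocking is the right policy in the first place; that, roughly, is the sum of what “participation” means in the Web 2.0 glossary.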
Occasionally, O’Reilly’s illustrations include activities that demand no actual awareness of participation—e.g., a blog that puts up links to other blogs ends up improving Google’s search index—which is, not coincidentally perhaps, how we think of “participation” in the market system when we go shopping. To imply that “participation” means the same thing in the context of Web 2.0 as it does in politics is to do the very opposite of what Korzybski and general semantics prescribe. Were he really faithful to those principles, O’Reilly would be pointing out the differences between the two—not blurring them.
So what are we to make of O’Reilly’s exhortation that “it’s a trap for outsiders to think that Government 2.0 is a way to use new technology to amplify the voices of citizens to influence those in power”? We might think that the hallmark of successful participatory reforms would be enabling citizens to “influence those in power.” There’s a very explicit depoliticization of participation at work here. O’Reilly wants to redefine participation from something that arises from shared grievances and aims at structural reforms to something that arises from individual frustration with bureaucracies and usually ends with citizens using or building apps to solve their own problems.
As a result, once-lively debates about the content and meaning of specific reforms and institutions are replaced by governments calling on their citizens to help find spelling mistakes in patent applications or use their phones to report potholes. If Participation 1.0 was about the use of public reason to push for political reforms, with groups of concerned citizens coalescing around some vague notion of the shared public good, Participation 2.0 is about atomized individuals finding or contributing the right data to solve some problem without creating any disturbances in the system itself. (These citizens do come together at “hackathons”—to help Silicon Valley liberate government data at no cost—only to return to their bedrooms shortly thereafter.) Following the open source model, citizens are invited to find bugs in the system, not to ask whether the system’s goals are right to begin with. That politics can aspire to something more ambitious than bug-management is not an insight that occurs after politics has been reimagined through the prism of open source software.
Protest is one activity that O’Reilly hates passionately. “There’s a kind of passivity even to our activism: we think that all we can do is to protest,” he writes. “Collective action has come to mean collective complaint. Or at most, a collective effort to raise money.” In contrast, he urges citizens to “apply the DIY spirit on a civic scale.” To illustrate the DIY spirit in action, O’Reilly likes to invoke the example of a Hawaiian community that, following a period of government inaction, raised $4 million and repaired a local park essential to its livelihood. For O’Reilly, the Hawaiian example reveals the natural willingness of ordinary citizens to solve their own problems. Governments should learn from Hawaii and offload more work onto their citizens; this is the key insight behind O’Reilly’s “government as a platform” meme.
This platform meme was, of course, inspired by Silicon Valley. Instead of continuing to build its own apps, Apple built an App Store, getting third-party developers to do all the heavy lifting. This is the model that governments must emulate. In fact, notes O’Reilly, they once did: in the 1950s, the U.S. government built a system of highways that allowed the private sector to build many more settlements around them, while in the 1980s the Reagan administration started opening up the GPS system, which gave us amazing road directions and Foursquare (where O’Reilly is an investor).
O’Reilly’s prescriptions, as is often the case, do contain a grain of truth, but he nearly always exaggerates their benefits while obfuscating their costs. One of the main reasons why governments choose not to offload certain services to the private sector is not because they think they can do a better job at innovation or efficiency but because other considerations—like fairness and equity of access—come into play. “If Head Start were a start-up it would be out of business. It doesn’t work,” remarked O’Reilly in a recent interview. Well, exactly: that’s why Head Start is not a start-up.
The real question is not whether developers should be able to submit apps to the App Store, but whether citizens should be paying for the apps or counting on the government to provide these services. To push for the platform metaphor as the primary way of thinking about the distribution of responsibilities between the private and the public sectors is to push for the economic-innovative dimension of Gov 2.0—and ensure that the private sector always emerges victorious.
O’Reilly defines “government as a platform” as “the notion that the best way to shrink the size of government is to introduce the idea that government should provide fewer citizen-facing services, but should instead consciously provide infrastructure only, with APIs and standards that let the private sector deliver citizen facing services.” He believes that “the idea of government as a platform applies to every aspect of the government’s role in society”—city affairs, health care, financial services regulation, police, fire, and garbage collection. “[Government as a platform] is the right way to frame the question of Government 2.0.”
One person who is busy turning the “government as a platform” meme into reality is David Cameron in the U.K. Cameron’s “Big Society” idea is based on three main tenets: decentralization of power from London to local governments, making information about the public sector more transparent to citizens, and paying providers of public services based on the quality of their service, which, ideally, would be measured and published online, thanks to feedback provided by the public. The idea here is that the government will serve as a coordinator of sorts, allowing people to come together—perhaps even giving them seed funding to kick-start alternatives to inefficient public services.
Cameron’s motivation is clear: the government simply has no money to pay for services that were previously provided by public institutions, and besides, shrinking the government is something his party has been meaning to do anyway. Cameron immediately grasped the strategic opportunities offered by the ambiguity of a term like “open government” and embraced it wholeheartedly—in its most apolitical, economic version, of course. At the same time that he celebrated the ability of “armchair auditors” to pore through government databases, he also criticized freedom of information laws, alleging that FOI requests are “furring up the arteries of government” and even threatening to start charging for them. Francis Maude, the Tory politician who Cameron put in charge of liberating government data, is on the record stating that open government is “what modern deregulation looks like” and that he’d “like to make FOI redundant.” In 2011, Cameron’s government released a white paper on “Open Public Services” that uses the word “open” in a peculiar way: it argues that, save for national security and the judiciary, all public services must become open to competition from the market.
Here’s just one example of how a government that is nominally promoting Tim O’Reilly’s progressive agenda of Gov 2.0 and “government as a platform” is rolling back the welfare state and increasing government secrecy—all in the name of “openness.” The reason why Cameron has managed to get away with so much crazy talk is simple: the positive spin attached to “openness” allows his party to hide the ugly nature of its reforms. O’Reilly, who had otherwise praised the Government Digital Service, the unit responsible for the digitization of the British government, is aware that the “Big Society” might reveal the structural limitations of his quest for “openness.” Thus, he publicly distanced himself from Cameron, complaining of “the shabby abdication of responsibility of Cameron’s Big Society.”
But is this the same O’Reilly who once claimed that the goal of his proposed reforms is to “design programs and supporting infrastructure that enable ‘we the people’ to do most of the work”? His rejection of Cameron is pure PR, as they largely share the same agenda—not an easy thing to notice, as O’Reilly constantly alternates between two visions of open government. O’Reilly the good cop claims that he wants the government to release its data to promote more innovation by the private sector, while O’Reilly the bad cop wants to use that newly liberated data to shrink the government. “There is no Schumpeterian ‘creative destruction’ to bring unneeded government programs to an end,” he lamented in 2010. “Government 2.0 will require deep thinking about how to end programs that no longer work, and how to use the platform power of the government not to extend government’s reach, but instead, how to use it to better enable its citizenry and its economy.” Speaking to British civil servants, O’Reilly positions open government as the right thing to do in times of austerity, not just as an effective way to promote innovation.
After The New Yorker ran a long, critical article on the Big Society in 2010, Jennifer Pahlka—O’Reilly’s key ally, who runs an NGO called Code for America—quickly moved to dismiss any parallels between Cameron and O’Reilly. “The beauty of the government as a platform model is that it doesn’t assume civic participation, it encourages it subtly by aligning with existing motivations in its citizens, so that anyone—ranging from the fixers in Hawaii to the cynics in Britain—would be willing to get involved,” she noted in a blog post. “We’d better be careful we don’t send the wrong message, and that when we’re building tools for citizen engagement, we do it in the way that taps existing motivations.”
But what kinds of “existing motivations” are there to be tapped? O’Reilly writes that, in his ideal future, governments will be “making smart design decisions, which harness the self-interest of society and citizens to achieve positive results.” That, in fact, is how his favorite technology platforms work: users tell Google that some of their incoming email is spam in order to improve their own email experience. In other words, it’s self-interest through and through. “The architecture of Linux, the Internet, and the World Wide Web are such that users pursuing their own ‘selfish’ interests build collective value as an automatic byproduct,” writes O’Reilly. This is also how the likes of Eric Raymond explain the motivation of those contributing to open source projects—they do it for strictly selfish reasons. “The ‘utility function’ Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers,” Raymond writes in The Cathedral and the Bazaar. He goes on to say that “one may call their motivation ‘altruistic’, but this ignores the fact that altruism is itself a form of ego satisfaction for the altruist.” If it sounds like Ayn Rand, that’s because Raymond explicitly draws on her crazy talk.
When pressed, O’Reilly the good cop refuses to acknowledge that his thinking about open government is not very different from Raymond’s thinking about open source software. When earlier this year Nathaniel Tkacz, a media academic, noted these similarities, O’Reilly complained that he was “a bit surprised to learn that my ideas of ‘government as a platform’ are descended from Eric Raymond’s ideas about Linux, since: a) Eric is a noted libertarian with disdain for government b) Eric’s focus on Linux was on its software development methodology.” Well, perhaps O’Reilly shouldn’t act so surprised: as Tkacz points out, O’Reilly’s writings on “government as a platform” explicitly credit Raymond as the source of the metaphor. O’Reilly in 2011: “In The Cathedral & the Bazaar, Eric Raymond uses the image of a bazaar to contrast the collaborative development model of open source software with traditional software development, but the analogy is equally applicable to government.”
But is it really? Applied to politics, all this talk of bazaars, existing motivations, and self-interest treats citizenship as if it were fully reducible to market relations—yet another form of crazy talk. And it doesn’t easily square with the aspirations to active citizenship implicit in the “DIY spirit on a civic scale.” Of course, with some clever PR, one can say that the Hawaiians who rebuilt their park had some “existing motivations,” like having to earn a living to stay alive. But if the bar for “existing motivations” is set so low, then there are no limits to dismantling the welfare state and replacing it with some wild DIY hacker culture. Why do we need an expensive health care system if people have “existing motivations” to self-monitor at home and purchase drugs directly from Big Pharma? Why bother with police if we can print out guns at home—thanks, 3-D printers!—and we are already highly motivated to stay alive?
Once we follow O’Reilly’s exhortation not to treat the government as “the deus ex machina that we’ve paid to do for us what we could be doing for ourselves,” such questions are hard to avoid. In all of O’Reilly’s theorizing, there’s not a hint as to what political and moral principles should guide us in applying the platform model. Whatever those principles are, they are certainly not exhausted by appeals to innovation and efficiency—which is the language that O’Reilly wants us to speak.
The fundamental problem with O’Reilly’s vision is that it pulls in two directions at once. On the one hand, it’s all about having the private sector build new services that were unavailable when the government ran the show; it’s all about citizen-consumers, guided by the Invisible Hand, creating new value out of thin air. On the other hand, O’Reilly likes to invoke “DIY spirit on a civic scale” to call on citizens to take on functions that were previously performed by the government (even if poorly); here, we are not building new services—we are outsourcing public services to the private sector. O’Reilly’s logic in a nutshell: the government didn’t have to build its own Foursquare—hence, disaster response should be delegated to the private sector. Is the government meant to be a platform for providing services or for stimulating innovation? It’s certainly both—but the principles that ought to regulate its behavior in each case are very different.
For O’Reilly, the memes of “Government 2.0” and “government as a platform” serve one major function: they make him relevant to the conversation about governance and politics, allowing him to expand his business into new territories. The Internet and open source have become universal connectors that can relate anything to anything. “Just as the interstate highway system increased the vitality of our transportation infrastructure, it is certainly possible that greater government involvement in health care could do the same,” he writes. Got it? But what if the dynamics of building highways are different from those of providing health care? What then?
O’Reilly’s attempts to meme-engineer how we think about politics are all the more disturbing for the deeply reductionist, anti-democratic flavor of his own politics. Positivist to his core, O’Reilly believes that there is just one right answer to policy dilemmas, and that it’s the job of the government (for him, it’s all just “government”) to produce legislation that gets at this “right” answer and then pass the necessary measures to make it happen. The means don’t much matter; it’s all about the ends—and the ends are perfectly knowable, as long as we have the data.
O’Reilly’s latest meme, which he calls “algorithmic regulation,” was inspired by—what else?—the Internet. This idea, writes O’Reilly, “is central to all Internet platforms, and provides a fruitful area for investigation in the design of 21st century government.” This is how he explained it in a recent talk at the Long Now Foundation:
If you look at, say, the way spam is regulated on the Internet, that’s the beginnings of a kind of an immune system response to a pathogen and works a lot like biology: you recognize the signature of something new and hostile and you fix it. . . . You compare that to how government regulation works, and you go: “It’s just badly broken!” Somebody puts out some rules, and there’s no method of enforcement.
Not a very sharp definition yet, but this is how many of O’Reilly’s memes start. Once he’s cornered the meme, his “correspondents” will do the rest, highlighting it in their blog posts and reports. (“In the future, better outcomes might come . . . through adopting what Tim O’Reilly has described as ‘algorithmic regulation,’ applying the dynamic feedback loops that web giants use to police their systems against malware and spam in government agencies entrusted with protecting the public interest,” writes Alex Howard, the “Government 2.0” correspondent of O’Reilly Radar.)
Quite appropriately, the only political institution that corresponds to O’Reilly’s vision for “algorithmic regulation” is a central bank. Central banks have very clear, numerical targets—they know what’s “right” and don’t have to bother with deliberations—and they try to meet those targets with just a few specific tools at their disposal. They love feedback and think like Google. According to O’Reilly, the way they regulate is “kind of like the way Google regulates. They kind of say: I have an outcome in mind and a couple of knobs and levers. Periodically, I might get a few new knobs and levers, and I tweak them to get the outcome. I don’t just sort of say: This is a rule and I’m going to follow it regardless of whether it has a good outcome or a bad outcome.” Central banks are elegant and simple; they just do stuff, instead of succumbing to, well, politics. “[In central banks] we have a couple of levers, and we keep tweaking them to see if we can get where we want to go. And that’s really how I would like to see us thinking about government regulatory processes.”
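For what it’s worth, the “knobs and levers” picture O’Reilly gestures at is simply a feedback control loop. Here is a deliberately crude sketch (invented for illustration; O’Reilly specifies nothing this concrete): a regulator with one numerical target and one lever, nudged each period in proportion to the last miss.

# A crude, invented illustration of "algorithmic regulation": one target,
# one lever, periodic tweaking based on the latest measurement.

def regulate(target, observe, lever=0.0, gain=0.5, periods=20):
    # Nudge a single lever until the observed outcome sits at the target.
    for _ in range(periods):
        observed = observe(lever)     # measure the outcome of the current setting
        error = observed - target     # how far we are from the "right answer"
        lever += gain * error         # tweak the knob in proportion to the miss
    return lever, observe(lever)

# Toy outcome: the measured rate (spam, inflation, potholes per mile) falls
# as the lever is raised. The numbers are arbitrary.
def observe(lever):
    return 0.06 - 0.8 * lever

final_lever, final_outcome = regulate(target=0.02, observe=observe)
print(round(final_lever, 3), round(final_outcome, 3))   # roughly 0.05 and 0.02

Note what the loop takes for granted: someone has already decided what the target is. That is precisely the political question the metaphor assumes away, which is the point of the Pelosi anecdote that follows.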
Expanding on this notion of “algorithmic regulation,” O’Reilly reveals his inner technocrat:
I remember having a conversation with Nancy Pelosi not long after Google did their Panda search update, and it was in the context of SOPA/PIPA. . . . [Pelosi] said, “Well, you know, we have to satisfy the interests of the technology industry and the movie industry.” And I thought, “No, you don’t. You have to get the right answer.” So that’s the reason I mentioned Google Panda search update, when they downgraded a lot of people who were building these content farms and putting low quality content in order to get pageviews and clicks in order to make money and not satisfy the users. And I thought, “Gosh, what if Google had said, yeah, yeah, we have to sit down with Demand Media and satisfy their concerns, we have to make sure that at least 30 percent of the search results are crappy so that their business model is preserved.” You wouldn’t do that. You’d say, “No, we have to get it right!” And I feel like, we don’t actually have a government that actually understands that it has to be building a better platform that starts to manage things like that with the best outcome for the real users. [loud applause]
Here O’Reilly dismisses the entertainment industry as just “wrong,” essentially comparing them to spammers. But what makes Google an appropriate model here? While it has obligations to its shareholders, Google doesn’t owe anything to the sites in its index. Congress was never meant to work this way. SOPA and PIPA were bad laws with too much overreach, but to claim that the entertainment industry has no legitimate grievances against piracy seems bizarre.
Underpinning O’Reilly’s faith in algorithmic regulation is his naive belief that big data, harnessed through collective intelligence, would allow us to get at the right answer to every problem, making both representation and deliberation unnecessary. After all, why let contesting factions battle it out in the public sphere if we can just study what happens in the real world—with our sensors, databases, and algorithms? No wonder O’Reilly ends up claiming that “we have to actually start moving away from the notion that politics really has very much to do with governance. To the extent that we can fix things without politics, we’d be much better off.” It’s the ultimate conceit of Silicon Valley: if only we had more data and better tools, we could suspend politics once and for all.
The magic “feedback” that O’Reilly touts so passionately is really the voice of the market—and occasionally he lets that slip: “Government programs must be designed from the outset not as a fixed set of specifications, but as open-ended platforms that allow for extensibility and revision by the marketplace. Platform thinking is an antidote to the complete specifications that currently dominate the government approach not only to IT but to programs of all kinds.” But we prefer to have complete specifications at the outset not because no one had thought of building dynamic feedback systems before O’Reilly but because this is the only way to ensure that everyone’s grievances are addressed before the policies are implemented.
His treatment of feedback as essentially an Internet phenomenon is vintage O’Reilly. As long as “algorithmic regulation” is defined against a notion like Web 2.0, O’Reilly feels no need to engage with the vast body of thought on feedback systems and the sociology of performance indicators. That most of the ideas behind algorithmic regulation were articulated by the likes of Karl Deutsch and David Easton in the 1960s would probably be news to O’Reilly. Nor is his intellectual equilibrium perturbed by the fact that the RAND Corporation was pitching something very similar to “algorithmic regulation” to American cities in the late 1960s in the hopes of making city governance more cybernetic. The plans, alas, didn’t work; the models could never account for the messy reality of urban life.
A decade before he wrote Science and Sanity, Alfred Korzybski wrote another weird book—Manhood of Humanity. He, too, was very keen on feedback. “Philosophy, law and ethics, to be effective in a dynamic world must be dynamic; they must be made vital enough to keep pace with the progress of life and science,” he proclaimed. Korzybski’s solution, surprisingly, also lay in turning government into an algorithmically driven platform: “A natural first step would probably be the establishment of a new institution which might be called a Dynamic Department—Department of Coordination or a Department of Cooperation—the name is of little importance, but it would be the nucleus of the new civilization.” Like O’Reilly’s “government as a platform,” this new department would aspire to enable citizens. “Its functions,” wrote Korzybski, “would be those of encouraging, helping and protecting the people in such cooperative enterprises as agriculture, manufactures, finance, and distribution.”
Korzybski envisioned this new scientific government as consisting of ten sections, ranging from the Section of Mathematical Sociology or Humanology (“composed of at least one sociologist, one biologist, one mechanical engineer, and one mathematician”) to the Section of Mathematical Legislation (“composed of (say) one lawyer, one mathematician, one mechanical engineer”) and from the Promoters’ Section (“composed of engineers whose duty would be to study all of the latest scientific facts, collect data, and elaborate plans”) to the News Section (its task would be “to edit a large daily paper giving true, uncolored news with a special supplement relating to progress in the work of Human Engineering”).
For all his insight into the nature of language and reality, Korzybski was a kooky technocrat who believed that science could resolve all political problems. He would certainly agree with O’Reilly that there is one right way to decide on pending legislation and that any issues and controversies that come up in deliberations are just semantic noise—clever meme-engineering by the parties involved. Scientism is still scientism, even when it’s clothed in the rhetoric of big data.
At least O’Reilly is perfectly clear about how people can succeed in the future. Toward the end of his Long Now Foundation talk, he admits that
[the] future of collective intelligence applications is a future in which the individual that we prize so highly actually has less power—except to the extent that that individual is able to create new mind storms. . . . How will we influence this global brain? The way we’ll influence it is seen in the way that people create these viral storms . . . . We’re going to start getting good at that. People will be able to command vast amounts of attention and direct large groups of people through new mechanisms.
Yes, let that thought sink in: our Mindstormer-in-Chief is telling us that the only way to succeed in this brave new world is to become a Tim O’Reilly. Anyone fancy an O’Reilly manual on meme hustling?
[*Author’s note] In researching this essay, I tried to read all of O’Reilly’s published writings: blog posts, essays, tweets. I read many of his interviews and pored over the comments he left on blogs and news sites. I watched all his talks on YouTube. But I decided against interviewing him. First of all, I don’t believe in interviewing spin doctors: the interviewer learns nothing new while the interviewee gets an extraordinary opportunity to spin the story even before it’s published. Second, my goal in writing this essay was not to profile O’Reilly. Of course, I could have told you all about the wonderful jams—plum, blackberry, raspberry, peach—that he likes to make in his spare time. I left out such trivia on purpose, as my main interest has been O’Reilly the thinker, not O’Reilly the human being. Serious thinkers can be judged by their published output alone. Third, the only two emails that I ever received from him hinted at his penchant for heavy-handed manipulation of the media. The first email arrived long before I started working on this essay. It was a complaint about something I had written about him in the past, a throwaway line in a long essay—a complaint I believe to be without merit. The second email came right after I finished writing the first draft, which, by coincidence, happened to be on the very day that O’Reilly and I had a brief but feisty exchange on Twitter (he initiated it). In that second email, he offered to explain all his positions to me face to face—an opportunity I turned down, having just spent three months of my life reading his tweets, blog posts, and essays. That said, I have no doubt that everything in this essay will be meme-engineered against me.