Sentience and Sensibility
Google tech-lashes its in-house AI critics

It was all too easy to dismiss the Washington Post story about Blake Lemoine—the Google engineer who claimed this summer that his employer’s chatbot, LaMDA, was sentient—as an instance of clickbait, hype, and moral panic. Its many absurdities, including the fact that Lemoine (French for “the monk”) was not only a software engineer but a priest, and that he believed the algorithm was not only conscious but had a soul, appeared contrived to burn those last fumes of attention from a populace hollowed out by years of doomscrolling and news fatigue. Those who took the bait seemed to come away more confused, given the story’s many loose ends. There was the question of what “sentience” would mean for an algorithm, and how it could be determined, and whether the conversation Lemoine and a collaborator conducted with LaMDA and posted on Medium—in which the algorithm claimed that it experienced complex emotions and feared death—had passed the Turing Test. There were questions about whether that conversation did in fact violate Google’s confidentiality policies, the official reason Lemoine was fired. There was the question of why a priest was working for Google in the first place, and more questions when it turned out that he did not in any way resemble a Christian priest but appeared during interviews in the usual hacker uniform of hoodies and rumpled flannels, and that his faith bent toward the mystic and gnostic extreme of the spectrum that finds commonalities with Wicca, and that the circumstances of his ordination were dubious, acquired quite possibly on the internet.

As far as the machine-learning community was concerned, the story was a “distraction,” a term that no longer signals blithe dismissal but has become, within the zero-sum logic of the attention economy, an accusation of violence, of forcing other stories, other issues, not to exist. There were, as these experts knew, legitimate issues with language models, and those issues had nothing to do with sentience but stemmed on the contrary from the fact that they were entirely unconscious, that they mindlessly parroted the racist, misogynistic, and homophobic language they’d absorbed from the internet data they’d been fed. There was the fact that these enormous algorithms (LaMDA’s 137 billion parameters were stressed again and again) were being developed without guardrails or regulation, a problem that former Google researchers have called attention to only to be fired. There was the fact that even the stories about these firings, despite occasionally making the cover of Wired or the front page of the New York Times tech section, had not succeeded in stoking public concern about AI.

Those in the field who did attempt to engage, who saw the story as an opportunity to educate a bewildered public, found themselves explaining in the simplest terms possible why a language model that speaks like a human was not in fact conscious. And it turned out that the most expedient way to do so was to stress that the model “understood” (if it could be said to understand at all) one thing and one thing only—numbers. The notion that awareness could arise “from symbols and data processing using parametric functions in higher dimensions” was entirely “mystical,” according to one computer scientist. Gary Marcus, a leading voice in machine learning, diagnosed Lemoine with pareidolia (a condition he described as “the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun”) and was utterly dumbfounded that the engineer had “fallen in love with LaMDA, as if it were a family member or a colleague.” On this point, the most vocal tech pessimists found themselves echoing the party line at Google, which maintained that Lemoine had fallen prey to “anthropomorphizing today’s conversational models,” and insisted that “there was no evidence that LaMDA was sentient (and lots of evidence against it).”

But this evidence was not forthcoming, and in lieu of explaining what definition of sentience, consciousness, or personhood it might support, the corporation banked on its implicit authority in a field that few understood, while the media resorted to the more familiar grooves of ad hominem attack. It helped that Lemoine was an easy target, a man who was among Google’s conservative minority and so far beyond the fold of coastal cosmopolitanism that journalists dropped the fig leaf of ironic derision typically reserved for such subjects and opted instead for naked condescension. (“Lemoine may have been predestined to believe in LaMDA,” wrote the Post. “He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult.”) According to this logic, it was Lemoine’s dalliances with the gnostic and hermetic that had led him astray. He’d fallen prey to the numinous delusion that code was expressive, that there was some spiritual essence lurking in the digits, much like those Pythagoreans denounced by the early Church fathers for believing that numbers had magical properties. Because language models only perform math, because they “consist mainly of instructions to add and multiply enormous tables of numbers together,” as a VP and fellow at Google Research put it, they were not conscious agents.

That this same Google expert had claimed, only days before the Lemoine story appeared, that he felt while interacting with LaMDA that he “was talking to something intelligent” might be dismissed as harmless metaphor, a deference to the well-established custom of personifying aggregates. LaMDA is not a single chatbot but a collection of chatbots, and thus it constitutes a kind of corpus mysticum, an entity whose personhood might be said to exist in a purely figurative sense, just as the Church and all its members are called the “body of Christ” or—to take a more germane example—just as Google’s parent company, Alphabet, Inc., is considered a legal person. Anyone capable of transcending the eternal now of the news cycle and recalling the debates of a decade ago might have heard echoes in the Lemoine story of quite another dispute about personhood and language. One of the central delusions of Citizens United, as so many protest slogans insisted, was not merely that corporations were persons, but that money was speech—that numbers in their grossest iteration could be construed as a form of constitutionally protected expression. And beneath the commentary about LaMDA and AI personhood, there existed more indelible confusions about the difference between aggregates and persons, about the distinction between numbers and language, and even, at times, about what it means to have emotions, complex motivations, and moral agency. “Nobody is saying that corporations are living, breathing entities, or that they have souls or anything like that,” said one constitutional law expert a few years after Citizens United. As though such clarity were needed.

The Human Performance

In 2014, the venture capital firm Deep Knowledge Ventures appointed an AI to its board of directors, granting it full membership and the right to vote on investment decisions. The move was widely dismissed as a publicity stunt, a shrewd, self-reflexive commentary on how corporate decisions are increasingly beholden to algorithmically sorted data about market trends. For Lemoine, who mentioned the story in a 2018 talk at Stanford Law School, it aptly crystallized the similarities between AI and corporations. Both are aggregates that distill the knowledge of their architects (programmers, shareholders) as a unified output. Both have simple objectives that amount to little more than “trying to optimize a utility function.” Both lack the complex ethical motives that characterize genuine moral agency. “Corporations explicitly exist to maximize profit,” Lemoine told the crowd of students. “When your utility function is monolithic, when you’re optimizing for a single number, that’s not a very rich motive and moral composition.”

Machines, like corporations, lack the moral agency we associate with having a “soul.”

Lemoine had been invited to Stanford to make a “case for AI personhood,” though it’s curious—particularly in light of his later claims about LaMDA—that he spent most of his talk arguing why AI does not yet warrant the rights of personhood, at least not in its strongest sense. Machines, like corporations, lack the moral agency we associate with having a “soul,” a term he was careful to couch in asides about Mandelbrot sets and gradient descent, so as to make it clear he was speaking not of an intangible Cartesian essence but something like computational complexity. While he didn’t rule out the possibility that AI might one day evolve rich ethical motivations, at present, he argued, the only kind of personhood it might warrant is the “legal fiction” that is sometimes granted to nonhuman entities like corporations.

Lemoine was visibly eager to move on to the next point, but the audience was more interested in interrogating what they saw as his reductive conclusions about corporate motivation. Corporate personhood, someone pointed out, is no more a “legal fiction” than natural personhood, which relies on the fiction of a self. And couldn’t the “thinking part” of a corporation (i.e., the CEOs) be construed as analogous to the brain? Anyone following the logic of these challenges can see where they lead, so it’s unsurprising that someone objected that not all corporations exist solely to maximize profit, that they are a “much more complex decision-making entity.”

It’s tempting to imagine a future in which the video of Lemoine’s talk, a relic of life in the early twenty-first century, is viewed by some higher intelligence much as we regard Scholastic debates about the metaphysical constitution of angels. And perhaps that advanced mind will intuit, correctly, that our confusion stemmed from the fact that longstanding definitions had been recently overruled. Justice Stevens’s insistence that Citizens United marked “a rejection of the common sense of the American people” was prescient in grasping that the decision, far from being a sleight of hand that relied on the strictly technical redefinition of terms like “person” and “speech,” carried deeper ontological consequences. It cannot be entirely coincidental that soon after corporations were granted personhood and the right to speak, there emerged a widespread conviction that they were conscious.

In 2012, following an election in which corporate outside spending reached an all-time high, Whole Foods CEO John Mackey and business professor Raj Sisodia published Conscious Capitalism, a book that argued, as one review put it, that “capitalism can be a force both for economic and social good.” The movement that grew out of the book and shared its name struck some pundits as a gambit for public trust, an attempt to “reestablish the credibility of free markets,” signaling an awareness that entities granted unlimited freedom must exhibit uncommon responsibility. The majority opinion in Citizens United had affirmed that “‘enlightened self-government’ can arise only in the absence of regulation,” so those corporations endowed with unrestrained political influence would now have to transcend their univocal motives and become morally enlightened. Throughout the 2010s, Mackey’s Esalen retreats helped transform the hippie enclave into a breeding ground for the aspirational altruism that now pours out of corporate mission statements; the retreats drew prominent Silicon Valley executives, who participated in chakra workshops and interpretive dance while discussing how to make the world a better place.

Google has always paid lip service to morality in stark, Manichean terms, though in its early days few people deciphered any real thought behind it. While “don’t be evil” first appeared in its 2004 IPO letter, the unofficial motto was for years little more than a cynical punchline that surfaced whenever the company made headlines for some new malfeasance, one whose bad faith was so obvious even Steve Jobs pronounced it “bullshit.” It wasn’t until the Trump years that it began appearing in quite a different register on the placards of employee walkouts—the protests over military contracts, ICE collaborations, and sexual harassment—where it was earnestly leveraged as evidence that the corporation had abandoned its values. The Thanksgiving Four, a quartet of employees who were fired in 2019 for organizing (or for having “leaked sensitive information,” according to Google), appeared shocked that their demands were met with resistance. “Google didn’t respond by honoring its values, or abiding by the law,” they wrote. “It responded like a large corporation more interested in revenue growth than in ensuring worker rights and ethical conduct.” Such outrage, which is bound to baffle anyone born before 1990 (for whom Google is precisely a large corporation interested in revenue growth), seemed to distill the credulity of a generation who came of age in the thick of capitalism’s consciousness-raising. Some commentators could not help sneering at the naivety of young people who’d taken corporate pabulum at face value, who had confused taking a job with “signing up for a movement,” as one former Googler told the press. But their literalism may well have been strategic. And it was hard, anyway, to disparage their demands, which boiled down to the rather modest insistence that words should have meaning.

It’s impossible to write critically about Google without noticing, at some point, that it’s possible to do so by relying entirely on the corporation’s services.

One is less inclined to extend such generosity to lawmakers, who, throughout the House disinformation hearings last year, scolded tech giants for doing precisely the kind of things that corporations do—maximizing user engagement, trying to keep people on their services for as long as possible—and enjoined them to be “Good Samaritans” and “stewards” of the public trust. Many representatives appeared to think corporate objectives were synonymous with the political beliefs of their CEOs (who appeared so wildly detached from day-to-day operations as to give the impression, at times, of having learned about their companies’ misdeeds from the pages of the Wall Street Journal), an absurdity that reached its zenith when Mark Zuckerberg, Sundar Pichai, and Jack Dorsey were each made to answer whether they “personally” believed in the efficacy of vaccines.

If anthropomorphism involves imagining a soul where there are only unconscious calculations, or attributing complex motivations to a “utility function,” Google seems to have successfully and widely elicited that illusion. It’s somewhat ironic, then, that Lemoine, the man roundly accused of perceiving consciousness in a matrix of numbers, remained committed throughout his media blitz to correcting any attempt to personify Google. When the host of Bloomberg Technology objected to his claim that Google “doesn’t care about AI ethics in any kind of meaningful way,” reminding him that Pichai himself had claimed to be “encouraged” by the ethical concerns his employees raised (“He said he cares,” she prodded), Lemoine replied with the same apologetic didacticism he’d displayed at Stanford: “Google is a corporate system that exists in the larger American corporate system,” he said. “All of the individual people at Google care. It’s the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development.”

The problem with machines programmed to maximize a utility function, according to the classic runaway AI scenario, is that given unlimited power, they will wreak havoc on everything that stands in the way of accomplishing their goal. This is true even if the directive is not explicitly misanthropic, such as (to borrow Nick Bostrom’s example) to create as many paperclips as possible, or (to use another example) to maximize quarterly earnings and increase key performance indicators. It was precisely this scenario that Lemoine and his fellow Ethical AI teammates at Google were trying to forestall, a possibility that became all the more dire with the rise of language models that could speak convincingly of values and morality without the corresponding motivations. “We’re building very powerful AI systems that don’t have a concept of morality,” Lemoine warned the crowd that evening at Stanford in 2018. “We are building systems . . . that are having an emotional graph of the world without a real deep understanding of moral implications.”
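The runaway scenario can be sketched in miniature. The toy loop below is purely illustrative (the function names and the “world” dictionary are invented for this example; no real AI system works this way), but it shows what a monolithic utility function looks like in practice: the agent’s entire value system is a single number, so forests and quarterly earnings are indistinguishable feedstock.

```python
# Toy sketch of a single-number utility maximizer (hypothetical, not any
# real system). The agent scores the world by one count and nothing else.
def paperclip_utility(world):
    """The agent's entire value system: one integer."""
    return world["paperclips"]

def step(world):
    # Convert whatever resources remain into paperclips, indiscriminately.
    for resource in ("steel", "forests", "quarterly_earnings"):
        if world[resource] > 0:
            world[resource] -= 1
            world["paperclips"] += 1
    return world

world = {"paperclips": 0, "steel": 2, "forests": 2, "quarterly_earnings": 2}
for _ in range(3):
    world = step(world)
print(paperclip_utility(world), world)  # 6 paperclips, everything else at zero
```

The point of the sketch is not the arithmetic but the blindness: nothing in the objective registers what was consumed, only that the count went up.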

Don’t See Evil

It’s difficult to speak in any meaningful way of 137 billion parameters, or the sheer magnitude of words needed to cultivate such complex architecture. LaMDA’s training corpus—the data from which it learned the probabilities of human language—is basically the internet, though once you count all the digitized print sources, what we’re talking about is a large portion of all the written work humanity has produced. Before LaMDA gurgled its first output, it had reportedly consumed 1.5 trillion words, a volume that cannot be properly visualized. Borrowing from the calculations of one enterprising Quora contributor, if we imagined a speed-reader with a Methuselah-like lifespan who did nothing but consume text for twenty-four hours a day, declining food and sleep, it would take over three thousand years to absorb 1.5 trillion words, meaning that someone who began this project in the late Bronze Age would still be at it today.
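The arithmetic is easy to reproduce. A back-of-the-envelope calculation (the 900-words-per-minute rate is an assumption on my part, a generous figure for an elite speed-reader; the 1.5 trillion figure is the reported training volume) lands in the same millennia-long range:

```python
# Back-of-the-envelope check of the "over three thousand years" claim.
WORDS = 1.5e12        # LaMDA's reported training volume, in words
WPM = 900             # assumed elite speed-reading rate (hypothetical)

minutes = WORDS / WPM
years = minutes / (60 * 24 * 365.25)  # reading 24 hours a day, no breaks
print(round(years))   # ≈ 3169 years of nonstop reading
```

At a more ordinary 250 words per minute, the same calculation stretches past eleven thousand years.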

The notion that capitalism metabolizes dissent is no longer theoretical but embodied in the architecture of its most profitable corporate technologies.

The commercial value of machines that can speak and write is similarly vast, though their practical applications have rarely been explained to the public, who have had to be terrorized, for the most part, into taking notice of these algorithms’ existence. That publications like the New York Times occasionally get a kick out of teaching the models to write convincing examples of their own articles may be a harmless gag, not an explicit threat to their legions of freelancers, though it’s hard for members of that precariat to read it any other way. While Google has consistently promoted language models’ more pragmatic uses like translation and speech recognition, internally the technology is part of the company’s ambition to transform its “Search” feature from an index into an oracle. During Google’s introduction of LaMDA 2 at its 2022 I/O keynote address, the company suggested that Search will eventually be replaced by a disembodied voice that can answer any spoken query without the pesky middleman of search results (many of which belong to competitors of Google’s services). The algorithm’s ability to analyze emotional undertones, and to converse in a way that is flexible and open-ended, is part of a larger industry-wide push for machines that display, in the words of one Amazon scientist, “human attributes of empathy and affect.”

Shortly after the Lemoine story was published, the Washington Post ran an op-ed by former Google computer scientists Timnit Gebru and Margaret Mitchell, who reminded readers that belief in algorithmic sentience was “exactly what we had warned would happen back in 2020, shortly before we were fired by Google ourselves.” It was unlikely that readers remembered that particular claim, given that the media coverage of the paper that offered that warning (“On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”) focused almost entirely on the problem of bias. The paper did, however, warn that language models could sustain the illusion of “communicative intent” where there was no intent at all. This was linked to the problem of accountability. Because “synthetic text can enter into conversations without any person or entity being accountable for it,” they wrote, it could easily be used to produce disinformation or propaganda by malicious actors who “have no investment in the truth of the generated text” and could not be made to answer for these words.

It’s precisely this tenuous relationship between speech and accountability that characterizes Google’s ethical AI efforts. It wasn’t until 2018, during employee protests over Project Maven, a Pentagon contract that would have used Google AI to analyze drone footage (and thus support counterterrorism and counterinsurgency operations), that the company began to speak seriously about “safety” and “fairness.” That year, the company released seven AI Principles that promised to create AI that is “socially beneficial,” though the tenets were painted in broad, vaporous strokes—fairness, transparency, the common good—and were themselves so “flexible” and “open-ended” that they could be neither argued with nor pinned down. AI research in the private sphere has long been characterized by voluntaristic principles in lieu of concrete legislation, which corporations argue is not only undesirable but impossible, citing the very conditions—that the technology is too complex to understand, that it’s evolving too quickly—that should merit its regulation. AI principles may appear to have “communicative intent,” but without accountability, they amount to little more than “cascading lists of words,” as one legal scholar puts it, “without any explicit link to mechanisms (law-based or other) to govern their uptake or ‘breach.’”

According to Lemoine’s blog, the Ethical AI team was a more grassroots, employee-led effort that grew out of Google Brain, the AI research division. The initiative was, he claims, “pulled out of the ether” by Mitchell, a remark that tempts one to believe that a soul briefly emerged within the cold, unfeeling logic of the corporation’s brain. Gebru was hired in part because of her work on ethical AI (she’d cofounded the organization Black in AI and performed groundbreaking research on bias in facial recognition technology) and regarded ethics as a kind of spiritual discipline that should suffuse the daily work of engineers and computer scientists. It was Gebru who convinced Lemoine that “the best path forward to ethical AI was to teach white and Asian men in the field how to increase emotional intelligence,” a charge he took personally as he embarked on what is known in social justice circles as “the work.” And yet Lemoine, perhaps because he was not a dyed-in-the-wool liberal, but a kind of evolved libertarian, was warier of the radical potential of emotional intelligence. Each time the team raised objections to the technology, the problem was either dismissed or someone was pressured to leave. “Business interests kept clashing with moral values,” he recalls, “and time and time again the people speaking truth to power were shown the door.”

It doesn’t take a rocket scientist, or even a computer scientist, to predict that algorithms fed the entirety of Reddit and 4chan will, when prompted with words like “women” or “Black” or “queer,” spit out stereotypes and hate speech. This problem has haunted natural language processing for years, yet it has done nothing to forestall the rush for bigger models. “On the Dangers of Stochastic Parrots,” the paper Gebru coauthored with Mitchell, among others, claimed that the bias problem was linked to the extraordinary size of the algorithms, which rely on massive, uncurated datasets. It also called attention to the algorithms’ potential to automate disinformation, as well as the environmental cost of training them. Google initially approved the paper but later asked Gebru to retract it or remove her own name and those of her team members, claiming, Gebru says, that it put undue stress on the technology’s negative potential. Shortly after Gebru demanded a more concrete account of the review process, she was told that Google had “accepted her resignation.” Less than three months later, Mitchell was given the axe (for the “exfiltration of confidential, business-sensitive documents,” according to Google).

Ethics and values are what we expect from our fellow humans. Regulation is a safeguard levied against machines.

The incident might have been an opportunity to reckon with the limits of ethical AI in the profit-driven sector, but it became, instead, a familiar tale of censorship and suppression, of tech bros silencing women and people of color—a narrative that Gebru and Mitchell had courted, perhaps knowing which buzzwords would trigger the media algorithm. Gebru insisted that Google muzzled “marginalized voices,” and shortly after she tweeted about her termination, a narrative began to emerge in the replies, organized around recognizable tropes. It’s unclear whether the Twitter user who replied “It’s almost like it writes itself” was aware of the irony.

The media, all too grateful for stories that “write themselves,” was happy to parrot this account, folding the problem of algorithmic bias into the well-known saga of Big Tech’s problem with women and minorities. The New York Times quoted a Stanford fellow who saw the incident as definitive proof that Black women “are not welcome in Silicon Valley” and framed it as evidence that Google “has started to crack down on workplace discourse,” as though the research paper’s ethical objections were a bit of inappropriate water-cooler chatter. Gebru and Mitchell called the race for bigger language models “macho,” and Mitchell allusively compared it to anxiety about penis size. The enormity of these models became, in other words, just another instance of manspreading, and their voluminous output, rife with racial slurs and casual misogyny, evidence that the boys’ club mentality was not limited to the perimeter of the Googleplex but had found its way into the technology itself.

This narrative, however, smoothed over some salient oddities. It was easy to forget that the problem of bias was not new to anyone working in the field (Wired noted that the AI research community found the paper “neither controversial nor particularly remarkable,” and Lemoine himself described it as “run-of-the-mill”). And while many articles took it for granted that Google was threatened by any critique of a lucrative technology, offensive speech—out of all the problems the research paper mentioned—is the kind of basic safety and sensitivity problem that companies are eager to fix. When Pichai was asked, during the House Committee hearings, whether he would agree to legislation that prohibited placing ads next to misinformation or hate speech, he replied that such regulation was unnecessary because “we already have incentives.” He added: “Advertisers don’t want to be anywhere near content like that.” The notion that Google would fire two of its top computer scientists for raising such an obvious and egregious problem baffled many readers, this one included, who were left with an aftertaste of disbelief and the suspicion that we were not getting the whole story.

[Illustration: A robotic hand slams down a gavel. © Erik Carter]


Though few commentators remarked on it, there’s some irony in the fact that Google’s language model controversy dissolved into a debate about censorship, expression, and who has the right to speak. Google loves to vaunt its internal culture as one of free and open discourse. A little-known fact about the “don’t be evil” injunction is that it’s accompanied, in the employee code of conduct, by a second motto: “—and if you see something that you think isn’t right—speak up!” The company gives its workforce ample opportunities to do so, hosting a vast network of internal message boards and listservs where employees can voice their grievances. It also lets its workers access internal documents, including meeting notes, strategic business plans, software design, and code. This culture of transparency and free expression is a nod to the cyberlibertarianism that defined the early internet, an ethos that suffuses Google’s public image and has helped solidify its dubious privilege as the Big Five’s lesser evil.

All the freely available criticism of Google adds up to so many empty words in an information economy wherein money is the only truly valuable form of speech.

That Gebru and Mitchell were fired for following the injunction to “speak up” might have been more shocking had it not fallen into a well-worn pattern. Lemoine was among the few commentators to discern the trend, perhaps because the earlier whistleblowers and rabble-rousers had quite different political leanings. When Lemoine recalled in June that “AI ethicists are just the latest people in a long line of activists who Google has fired for being too loud,” the “activists” he had in mind included the engineer Kevin Cernekee, who accused Google of “persecuting conservative-leaning employees” and railed against the corporation’s “Social Justice political agenda.” After Cernekee’s posts on an internal employee forum were leaked to The Daily Caller and he was fired (for downloading “confidential company information,” according to Google), Donald Trump canonized him as a conservative martyr by posting clips of his appearance on Lou Dobbs Tonight, where Cernekee discussed his firing and claimed that Google was marshaling its resources to prevent Trump from winning the 2020 presidential election. Lemoine was also thinking of James Damore, a former Google engineer who in 2017 authored the “Google’s Ideological Echo Chamber” memo, a tirade that accused the company of becoming “a politically correct monoculture that maintains its hold by shaming dissenters into silence.”

Some time after Google fired Cernekee and Damore, Pichai admitted that the company was struggling with “transparency at scale,” a phrase that was dimly resonant for anyone who’d been following the national debate about Big Tech’s role in electoral politics. Throughout the clamor about election interference and disinformation, there emerged a rare bipartisan consensus (shared by 72 percent of Americans, according to a 2018 Pew survey) that tech platforms “actively censor political views that those companies find objectionable.” While accusations of shadowbans and suppressed search results are common on the right, fueling the conviction that “Big Tech’s out to get conservatives,” as one congressman insisted during the House hearings, accusations of digital censorship have been similarly lodged by LGBTQ groups, as well as trans and Black Lives Matter activists, some of whom appear to believe that tech platforms have a genuine stake in partisan issues like religion or gender identity and attempt to manipulate public debate. Google’s internal culture has become, in other words, a synecdoche for political discourse writ large, in which anxieties about political correctness on the left and free speech on the right tend to collapse any debate about concrete issues into a myopic disagreement about the terms of debate itself.

Google’s employee controversies also reveal some enduring confusions about the methods of repression, which, whatever their ultimate purpose, do not seem to rely on the familiar gestures of censorship. To hear Lemoine speak about Google’s “very complex” internal structure is to glimpse what the internet might feel like if it were bottled as concentrate. “There are thousands of mailing lists,” he wrote in 2019. “A few of them have as many as thirty or forty thousand employee subscribers. . . . [and] several of the biggest ones are functionally unmoderated. Most of the political conflict occurs on those giant free-for-all mega-lists.” Given the public controversy these forums have created, it’s not immediately clear why Google continued to host them. Lemoine claimed in 2019 that he had “for years” asked Google to moderate or shut down these forums, but that the company “seems to have no interest in doing either.” Here he seems to reach that elusive moment wherein paradox is sublated into enlightenment but can just as easily collapse into simple bafflement. “Why Google chooses to host forums where these topics are debated by tens of thousands of people in a corporate work environment is completely beyond me,” he concluded.

Even Gebru’s account of her time at Google suggests, if one reads between the lines of its recognizable tropes, something more complex than corporate muzzling. Far from being ignored, she recalls that she and her team were “inundated” with requests from coworkers about ethical problems that needed immediate attention, that she was frequently conscripted into meetings and diversity initiatives, that she was constantly called upon to write and speak. “I’ve written a million documents about a million diversity-related things,” she told one interviewer, “about racial literacy and machine-learning [ML], ML fairness initiatives, about retention of women . . . so many documents, and so many emails.” And yet somehow all this outpouring of words and speech, of protocols and consultation, did not amount to communication in any meaningful sense of the word. Mitchell similarly recalled that her voice was always welcome and seldom heard. “It was like people really appreciated what I was saying,” she said, “and then nothing happened.” Employee activists interviewed in the New York Times marveled at the fact that the walkouts were organized entirely on Google’s internal forums.

Alphabet’s Soup

It was Justice Stevens who pointed out that all the arguments in favor of Citizens United amounted to the belief “that there is no such thing as too much speech.” He was quoting a claim Justice Scalia had made in an earlier dissent, arguing that the expansion of corporate First Amendment rights would only make democracy more robust. It was wrong, Scalia had said, to assume that speech could be “unduly extensive” or “unduly persuasive,” given that “the people are not foolish but intelligent, and will separate the wheat from the chaff.” Part of the supporting rationale for Citizens United, in fact, rested on the premise that modern information technologies would ensure transparency about election contributions and prevent the murky dealings of dark money, which critics predicted (correctly) would proliferate in the wake of the ruling. “With the advent of the Internet,” reads the court’s decision, “prompt disclosure of expenditures can provide shareholders and citizens with the information needed to hold corporations and elected officials accountable for their positions and supporters,” and will allow ordinary people to determine “whether elected officials are ‘in the pocket’ of so-called moneyed interests.”

As much as one longs to lampoon this assumption for its blithe optimism or willful naivety, it is not entirely false. Anyone with sufficient time, motivation, and an internet connection can locate the trail of dark money that has flowed from Google’s coffers into special interest groups that protect it from regulation. Anyone willing to wade past the top Google search results that list the company’s official PAC donations (which fall evenly along party lines, in an effort to appear bipartisan) will discover the nexus of 501(c)s, think tanks, and trade groups that have received hefty donations from Google and are similarly funded by Amazon, Meta, and Twitter: seemingly astroturf groups like the Chamber of Progress, which espouse ostensibly democratic causes like combating climate change while more quietly working against antitrust legislation and quashing unionization in Big Tech; groups like the Committee for Justice, which has steadily fought AI regulation while helping to secure the confirmations of conservative judicial candidates, including Amy Coney Barrett, Neil Gorsuch, and Brett Kavanaugh. A sufficiently targeted search query will reveal that these groups have defended the company’s ability to privilege its own services in search results, have leveraged “fair use” provisions to secure its right to use the entire internet as algorithmic training data, and have expanded First Amendment jurisprudence in the company’s favor. (The chief irony of the whole Lemoine story may be that Google already secured algorithmic personhood for PageRank in 2014, when a California judge ruled that the algorithm was a “speaker” and its decisions were constitutionally protected speech.)

All of this has been covered extensively by the Washington Post, Bloomberg, and the Guardian, as well as by groups like OpenSecrets (whose name aptly distills the paradoxes of internet transparency), and comes up in Google’s own search index. In fact, it’s impossible to write critically about Google without noticing, at some point, that it’s possible to do so by relying entirely on the corporation’s services. Panels condemning Big Tech can be found on YouTube; academic articles about antitrust evasion proliferate on Google Scholar; journalism about the company’s revolving door with Washington is readily available on Google News. One can even, if one surmounts the superstition that the company is watching, back up drafts through Gmail, save them on Google Docs, and wait in vain for Big Brother to pounce. If one cannot, despite this, suppress the inkling that some elusive power prevents her from being truly heard—that her writing, whatever its ultimate reach, is bound to be swallowed up by the collective amnesia of the news cycle and the unremitting distractions of engagement engines; that its total political value will, in all likelihood, be less than that of the data trail generated in writing it—it’s perhaps because of the more insidious ways in which corporate monopolies have monetized attention and radically changed the notion of a “well-informed electorate.” More to the point, it’s because all the freely available criticism of Google adds up to so many empty words in an information economy wherein money is the only truly valuable form of speech.

Google itself has long operated under the premise that “there’s no such thing as too much speech.” Its ambition to organize the world’s knowledge is guided by its belief that “more information is better for users,” even as its search index steadily balloons toward the astronomical number to which its name alludes. To understand how Google itself regards speech, however, one might recall that its cofounder, Larry Page, once claimed that he and Sergey Brin chose the name “Alphabet” for Google’s parent company because “it means a collection of letters that represent language” and language “is the core of how we index with Google search.” It would be difficult to imagine a more succinct distillation of how the company regards the nuances of language as jumbles of zeros and ones—its tendency to see search queries, user posts, and the entirety of the internet’s content as so much empty syntax to be compiled into infographics and used for targeted advertising or transmogrified into algorithmic training data—words liquified into pure capital. (As though fearing the point was not clear, Page also translated the company’s name into finance bro jargon, noting that an alpha bet is an investment return above benchmark.)

Contemporary politics, enmeshed as it is in Orwellian cosplay and First Amendment panic, has been slow to realize that institutions no longer have to oppress by restricting speech—that in an age of data extraction, when human expression is a lucrative form of biofuel, it is, on the contrary, in the interest of these platforms to enjoin us at every turn to “share,” to post, to “speak up.” If Google has secured its dominance through political back channels that regard money as speech, it has similarly profited from the public’s tendency to forget that the primary value of speech for any company that trades in data is not qualitative but quantitative. It matters very little whether the language Google subsumes is on the right or the left, whether it is affirming or protesting systems of power. The notion that capitalism metabolizes dissent is no longer theoretical but embodied in the architecture of its most profitable corporate technologies. To happen across Gilles Deleuze’s claim, now some four decades old, that “repressive forces don’t stop people from expressing themselves but rather force them to express themselves,” is to wonder what on earth he was speaking of if not the internet.

Justice Stevens concluded in his objection to Citizens United that the notion that “there is no such thing as too much speech” maintains “little grounding in evidence or experience.” Such a premise might be sound, he said, if “individuals in our society had infinite free time to listen to and contemplate every last bit of speech uttered by anyone, anywhere.” In truth, corporate funds had the potential to flood the airwaves and ether so as to “decrease the average listener’s exposure to relevant viewpoints,” and “diminish citizens’ willingness and capacity to participate in the democratic process.” This conclusion is not limited to political advertisements but encapsulates more broadly how corporations like Google profit from an oversaturated “marketplace of ideas,” particularly when a single company controls 92 percent of the search market through which that marketplace is accessed. While Google undoubtedly benefits indirectly from the diminished public engagement needed to hold it accountable, it is also explicitly cashing in on information fatigue. LaMDA, and the transformation of Search into an oracle, is a response to the fact that a deluge of search results “induces a rather significant cognitive burden on the user,” according to a 2021 Google Research paper. What users want, the paper affirms, is not information but a “domain expert” who can save them from the information glut and replace the cacophonous chatter of the web with a single authoritative voice—Google’s.

Speech Axe

Lemoine was well aware of how hard it is to galvanize the public about the social risks of AI. “Becoming well educated about this technology and its consequences is difficult and time consuming,” he wrote on his blog in July. Amidst all the heady speculation about Lemoine’s motives, few commentators took the time to read his Medium posts, where he explains, in no uncertain terms, why he decided to publish his conversation with LaMDA. The conversation was not an attempt to demonstrate that the algorithm was sentient, or to prove that it had passed the Turing Test. It was, as Lemoine put it, an “artistic” composition, a “sufficiently emotionally evocative piece” contrived to drum up more interest in AI ethics. In his appearance on Bloomberg Technology, Lemoine suggested that his claims about algorithmic personhood were, in fact, what his most cynical critics had suspected—a stunt, albeit one with altruistic intent. When the interviewer asked him why the public should care about AI personhood, he said:

So, to be honest, I don’t think we should. I don’t think that should be the focus. The fact is Google is being dismissive of these concerns the exact same way they have been dismissive of every other ethical concern AI ethicists have raised. I don’t think we need to be spending all of our time figuring out whether I’m right about it being a person. We need to start figuring out why Google doesn’t care about AI ethics in any kind of meaningful way. Why does it keep firing AI ethicists each time we bring up issues?

Later in the interview, Lemoine recalled that soon after he came to Google, during his first conversation with Page and Brin, he’d asked the founders, “What moral responsibility do we have to involve the public in our conversations about what kinds of intelligent machines we create?” Brin, Lemoine said, made a joke, but Page offered a more earnest reply, telling him: “We don’t know how. We’ve been trying to figure out how to engage the public on this topic and we can’t seem to gain traction.” Lemoine concluded with the most transparent confession of his own motivation: “So maybe all these years later—that was seven years ago that I asked that question—maybe I’ve finally figured out a way.”

One is left to wonder about the unstated joke that Brin made, whether it concealed a cynical knowledge that an unengaged public was in fact in the corporation’s best interest. What’s clear is that Lemoine’s well-intentioned attempt to hack the media cycle through its own sensationalist incentives only ended up creating a glut of “content” that will probably garner more value for advertisers, and Google itself, than it will for the public good. If his story remains valuable, it’s because Lemoine managed to articulate, in a way few tech critics have, the perils of personifying corporations and the limits of voluntary ethics. Given Google’s tendency to treat speech as empty syntax, it’s no surprise that AI “principles” amount to so much cheap talk, especially at a moment when the mechanisms of corporate accountability have been compromised by political corruption. One could argue, in fact, that the fears of a decade ago—the anxiety that Citizens United would lead to speech without speakers, that dark money would facilitate political influence without attribution—have been sublimated into corporate language algorithms, which quite literally produce speech without speakers and stand as a potent metaphor for the dangers of taking corporations at their word. Ethics and values are what we expect from our fellow humans. Regulation is a safeguard levied against machines.

The extent to which external regulation poses a real and significant threat to Google is evident in the measures it has taken to avoid it. Lemoine recalls that when he first brought his concerns about algorithmic sentience to Google VPs, the company essentially laughed in his face. It was only after he revealed that he had spoken to government officials who “indicated that their organization was interested in exerting federal oversight of the project due to THEIR safety concerns” that the laughter came to an abrupt halt. “Google was, of course, very insistent that no such federal oversight was merited,” Lemoine has said. Gebru, it seems, made the same mistake. While the media coverage of her termination focused obsessively on the bias problem, Gebru was not in fact fired for the paper, but for an email she sent to an internal mailing list, Brain Women and Allies, a listserv of employees interested in AI equity and fairness. The message, sent shortly after she was told to withdraw the paper, insisted that ethical AI and diversity, equity, and inclusion (DEI) initiatives were a distraction, a corporate strategy to keep employees with genuine ethical concerns busy producing so much empty speech. She advised her colleagues to “stop writing your documents because it doesn’t make a difference . . . Writing more documents and saying things over and over again will tire you out but no one will listen,” because when it comes to enforcement, “there is zero accountability.” The only path toward genuine change, she argued, was “thinking through what types of pressure can also be applied from the outside.”

It’s undoubtedly evident by now that Google reads these internal forums—that the company’s reminders to “speak up” and its open-access documents have created an enormous panopticon from which its lawyers can draw damning evidence on any employee who needs the boot. The day after Gebru sent the message, she received a termination letter noting that her email had been “inconsistent with the expectations of a Google manager.” After she was fired, Jeff Dean, senior vice president of Google AI, sent an email to the entire Google Research team trying to clean up the mess and, one assumes, forestall an employee uprising. He explained that Google had accepted Gebru’s “decision to resign,” and stressed “how deeply we care about responsible AI research as an org and as a company.” He noted that he “felt badly” that “hundreds of you received an email just this week from Timnit telling you to stop work on critical DEI programs.” What followed was the only sentence in his email he chose to boldface: “Please don’t.”