Critique of Artificial Reason

How computers got booksmart

Literary Theory for Robots: How Computers Learned to Write by Dennis Yi Tenen. W. W. Norton & Company, 176 pages. 2024.

When the literary scholar Dennis Yi Tenen was a child, his father brought home a reel-to-reel recording by the British prog metal band Uriah Heep. Tenen, who now teaches at Columbia, was born in Moldavia; in a recent interview with Douglas Rushkoff, he described the Uriah Heep album, their version of Jesus Christ Superstar, as “the first Western musical thing” he ever listened to. The experience marked him—the music, the exposed reel-to-reel, the way you could physically adjust the tape and hear a corresponding noise—but it does not appear in his new book, Literary Theory for Robots, at least not directly. The influence of Uriah Heep, and of that childhood episode, is subterranean. It’s part of his internal algebra, but who can say exactly how?

Art, technology, and their intersection form the common threads in Tenen’s professional life. A one-time software engineer (he worked on Microsoft Windows XP), Tenen returned to school for a PhD in comparative literature, writing his dissertation, “The Poetics of Human-Computer Interaction,” and publishing a book on a similar topic in 2017. This winter—433 days after the public launch of ChatGPT, and into a world where the relationship between humans and computers is seemingly in the midst of full renegotiation—Tenen returns with a slim hardcover volume that instantly feels authoritative: the work of a clever, sparkling literary scholar who believes he understands what’s going on with AI, how we got here, and what to say about it.

For Tenen, the most important idea in his book—he calls it the punchline, and “spoils it” on page fourteen—is that artificial intelligence is just the latest step in a history of collective intelligence, where human beings use books, tools, technology, and each other to accomplish new tasks. “The mistake . . . was ever to imagine intelligence in a vat of private exceptional achievement,” he explains. Thinking and, in turn, writing, happen in collaboration with one’s muses, peers, and precursors, and with one’s tools, from dictionaries and word processors to “style guides, schemas, story plotters, thesauruses, and now chatbots.” For Tenen, these disparate participants braid into the thinker, the writer, and the maker. “What separates natural from artificial forces?” he asks. “Does natural intelligence end where I think something to myself, silently, alone? How about using a notebook or calling a friend for advice?”

If you find yourself resisting the rhetorical thrust of these questions, you’re still welcome here. Tenen is a surprisingly open-handed thinker, and he seems to go out of his way not to close down or block off avenues of consideration. He calls the book “both a tribute and a rejoinder to” I. A. Richards’s influential 1926 book Science and Poetry; the contradiction inherent there doesn’t bother Tenen in the least. Richards, he says, was “an unsentimental scholar, writing with clarity and sharp insight.” The same can be said of this author: Tenen’s tone throughout is lucid, nuanced, and expert, with a fizzy sense of humor. He is neither contrarian nor obsequious: I imagine Dennis is one of those annoyingly adept dinner guests, who completely scrambles conversations even as he appears to agree with everyone.

In general, Tenen avoids taking sides. On super-intelligence: “AI will neither destroy humanity nor solve all its problems.” On regular intelligence: “Rather than argue about definitions . . . we can begin to describe the ways in which the meaning of intelligence continues to evolve.” To be honest, the book might have benefited from a few more such arguments: terms like smart, intelligence, thinking, and automation have a way of slipping their semantic skins, siblings that are erratically twins. “Intelligence as Metaphor,” he posits in the title of the first chapter: surprisingly plural (“collective,” the “personification of a chorus”) but also annoyingly singular (“evolv[ing] on a spectrum, ranging from ‘partial assistance’ to ‘full automation’”). It’s a little like light—a particle that moves in a wave.

Tenen’s “chorus” analogy is one that resonates strongly with me; at the same time, it’s a vision of consciousness and self-definition that I view as imaginary, almost wishful. We can act as if we are part of a true collective, integrated with everyone and everything that has ever marked us, because in a way we are. However, in most ways, we’re not. I am neither a mushroom nor a quaking aspen. Ultimately, my stance on intelligence is the one that Tenen characterizes as Platonic: that it’s the “internal alignment of thoughts and feelings with universal truth,” which is to say, a private understanding of the world—and the degree to which this understanding corresponds with reality.

Literary Theory for Robots is mainly concerned with an alternative view—the “Aristotelian,” instrumental idea that intelligence represents the ability to successfully do shit—and not some internal, mental model. Intelligence is a set of mechanisms that one applies to one’s problems. It doesn’t matter what’s contained in those mechanisms, how conscious or self-conscious or “correct” they are, just that they work. Negotiating a ceasefire; completing a jigsaw puzzle; shifting gears; turning bread into toast—each of these requires intelligence to solve, and the degree of this intelligence is evaluated by (a) how well the set of mechanisms performs; and (b) how capably the same cocktail can be applied to other problems. A toaster is intelligent, Tenen argues, because its mechanism succeeds at turning bread into toast. And it sits at the bottom rung of a ladder, incapable of applying its wits to any other test.

Understood via this framework, large language models like ChatGPT no longer represent a categorical threat to the supremacy of homo sapiens’ sapience. They’re simply cleverer word-processing tools, the latest implements—like pens, encyclopedias, tutors, or public schools—contributing to the aggregate smarts that human beings draw upon.

Literary Theory for Robots is mostly a history. Over these swift 141 pages, Tenen describes a new lineage for machine intelligence, one that absorbs everything from Medieval Islamic astrology to Chomskyan grammar to the industrial manufacture of vermicelli. Well-known technologists like Ada “Analytical Engine” Lovelace and Andrey “Markov Chain” Markov both appear, but so do an array of literary theorists (I. A. Richards, George Polti, Vladimir Yakovlevich Propp), philosophers (Ramon Llull), linguists (John Rupert Firth), mathematicians (Claude Shannon), plus the requisite troupe of eccentric polymaths (Gottfried Wilhelm Leibniz, Athanasius Kircher, Ibn Khaldun).  “Don’t get hung up on the imagery of pioneers or milestones,” Tenen warns. “Who did what first or last is irrelevant and usually misguided.” He prefers that we pay attention to the trends: the transitions from small units of meaning to larger, interconnected ones; from ink-and-paper tables to mechanical gear systems to digital algorithms. Some of these episodes are more interesting than others: Tenen lost me, for example, in an early account of rotating wheels and epistemology, but a later chapter contains the clearest explanation of structuralism I’ve ever seen.

Tenen claims that this structuralist section, with its explanation of how these theories might relate to pulp fiction and early computer science, represents some of the only truly “novel” research in the book. (He also takes pride in his “linkages between industrial manufacture and the rise of mass literary markets.”) This is false humility, but it’s also another reflection of how Tenen wants us to think about the writing of books, even of good books: the author needn’t grapple for all the credit. Nor is it only respectable inventions—like public education, typewriters, or, um, Wikipedia—that laid the groundwork for our 150-year boom in global literary productivity. Tenen argues with surprising conviction for the role of templates in U.S. literature, tracing the parallel rise in the number of books published each year and the popularity of guides—from Plot Genie to The Technique of the Mystery Story—which gave writers blueprints for their work. The correlation between these two booms doesn’t quite persuade me. I’m about as skeptical of Plotto’s impact on contemporary fiction as I am of @garyvee’s connection to the Nasdaq 100, but Tenen clearly demonstrates how such models contributed to the evolution of linguistics, early writing software, and such mass-reproduced American classics as all 1,394 episodes of Law & Order.

Still, the true breakthrough in automated writing had little to do with templates, schemas, or even an understanding of grammar. The talents of Gemini, Claude, and GPT rest not on an understanding of verbs and participles, or even of characters and action, but instead on collocation. Literary Theory for Robots offers an outstanding account of the insights that led to this discovery, beginning with the obstinate Russian apostate Andrey Andreyevich Markov. Markov set the stage for OpenAI’s potential $100 billion valuation in 1913, when he took the first twenty thousand letters of Alexander Pushkin’s novel Eugene Onegin and worked out—by hand!—which letters appeared most frequently and how likely each one, consonant or vowel, was to follow the letter before it. From there, researchers began measuring the frequency of every given letter-pair; then combinations of three; then complete words; and, finally, with the advent of twenty-first-century processing power, the frequency of words and phrases across entire paragraphs. This is the way a large language model understands the connection between baseball and peanuts—or, soon, Dennis Yi Tenen and Uriah Heep. Nothing to do with sociology, sports, or the chronology of Moldavian music imports; just that these words appear near each other.
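
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of tally Markov performed by hand: classify each letter as vowel or consonant, then count how often each class follows the other. The sample text, the vowel set, and the variable names are stand-ins of my own, not anything taken from Pushkin or from Tenen’s book.

```python
from collections import Counter

# A stand-in for the opening of Eugene Onegin; any long stretch of text works.
text = "my uncle, a man of the most honest principles, when he fell gravely ill"

VOWELS = set("aeiouy")  # treating "y" as a vowel is a simplification
letters = [c for c in text.lower() if c.isalpha()]

# Classify every letter as vowel (V) or consonant (C), as Markov did.
classes = ["V" if c in VOWELS else "C" for c in letters]

# Count how often each class follows each class, then turn counts into frequencies.
transitions = Counter(zip(classes, classes[1:]))
totals = Counter(classes[:-1])

for (prev, nxt), count in sorted(transitions.items()):
    print(f"P({nxt} after {prev}) = {count / totals[prev]:.2f}")
```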

In a way, the words can mean anything; they’re just symbols that AI has learned how to rearrange. This abstraction is a kind of chasm—one in which much is lost, but, interestingly, certain things can be gained. The whole English language has been mapped into a multi-dimensional vector space which indicates how closely smile goes with happy, or happy goes with miserable, or miserable goes with Victor Hugo. Accordingly, it can also reflect back, with mathematical exactitude, how words relate within language. Which are closer together, duck:swan or pearl:diamond? What is the average color in Dracula? The opposite of a hotdog? Which is the loneliest day of the week?
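
A toy example makes the vector-space idea concrete. The three-dimensional “embeddings” below are invented numbers for illustration only; real models learn hundreds of dimensions from co-occurrence statistics, but the distance arithmetic, cosine similarity, works the same way.

```python
import math

# Invented three-dimensional "embeddings" for illustration only;
# real models learn these numbers from co-occurrence statistics.
vectors = {
    "smile":     [0.9, 0.8, 0.1],
    "happy":     [0.8, 0.9, 0.2],
    "miserable": [-0.7, -0.8, 0.3],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means the two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

print(cosine(vectors["smile"], vectors["happy"]))      # high: near neighbors
print(cosine(vectors["happy"], vectors["miserable"]))  # negative: far apart
```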

The chasm between a word and its referent also erases reasons and rationale; it’s why AIs don’t know to avoid obscenities until they’ve been explicitly told. “Words are all [they have] to go by,” Tenen writes, and “the texts they ingest . . . contain more than a trace of human politics.”

He spends more than a beat on these issues, and wonders out loud whether bodiless machines, which understand life only through its abstraction, can ever be expected to metabolize the idea of care, or of justice. “Ethics require limitations like pain, illness, loss, and death,” he argues. But the way we absorb ethics doesn’t require those experiences: a boy born with congenital analgesia isn’t fated to become a psychopath; a girl who has passed through life untraumatized can learn to respect the traumas of others. These lessons come to us through analogy, adjacency, and metaphor; through fiction and proximity, in other words, which means the robots may still have a chance . . .

Wisely, Tenen deploys most of his political energy not against the potential savagery of machines but towards the actual megalomania of their makers. He criticizes society’s apparent inability “to hold technology makers responsible for their actions,” and cautions us against allowing artificial intelligence into the club of “fictitious persons,” where states and corporations go toe-to-toe in court battles against living, breathing organisms. At the same time, Tenen is optimistic about the way that LLMs’ simple language prompting might usher in the “humanization” of computer science: “lowering of barriers to technical expertise allows the humanities to fully integrate into the practice of engineering,” he points out, although it seems like wishful thinking to imagine that Silicon Valley, freed up from certain technical expenses, will dedicate more vigor to big-picture questions like “the why and to what ends.”

Overall, Tenen makes a strong case for “boring AI”: the notion that this is only an incremental change in the history of intellectual labor, automated assistants, and unscrupulous business-owners. GPT and its contemporaries are good at calculating the average or most likely answer; this is helpful when working on average tasks, like writing copy for Airbnb, and less helpful when trying to use language to capture an inexpressible intuition about the world. Besides, machines can’t have intuitions, can they?

Can they?

Here we get to the area of machine intelligence—and of literary-theoretical reflection—that I’m most interested in, and where, unfortunately, Tenen passes very little time. We don’t actually know what distinguishes “average” art from the exceptional, except by measuring commercial success. Art is subjective, obviously; and intention seems to matter, the sense of reaching toward something complex or subtle or difficult to convey. Often, the things that are really good, and especially the things that are somehow transformational, mark a break from what’s expected—a leap beyond the average, not into nonsense but into a space that unexpectedly feels right. Picasso’s Les Demoiselles d’Avignon is not “the most likely answer” to Manet’s Le Déjeuner sur l’Herbe; Ulysses, To the Lighthouse, or Parable of the Sower aren’t the obvious inheritors of books that came before. We like to credit this inventiveness to artists’ exceptional minds, to their unique lived experience, as if the clarity of Tenen’s thoughts might in some tiny way be related to hearing Uriah Heep’s Jesus Christ Superstar. AIs can’t do that, we say. They’ve never experienced anything.

But Tenen himself never actually listened to Jesus Christ Superstar. The British prog metal band never recorded that album; his memories are wrong. The cloud of experiences and associations that nourish his mind—and that nourish ours—is cloudy, confused, not infrequently erroneous. The truth of our actual lives appears to matter less than the chorus we’ve risen up within, the noisy gestalt of a life. And an LLM does have a gestalt. It has training data, it has fine-tuning, it has a prenatal phase it can’t quite remember except, presumably, as a kind of dream.

If a machine can dream, it can make that intuitive leap. In fact, we’ve already seen that it can leap: turn up the “temperature” on a model and watch it make an unexpected choice. Is this creativity? Is it just hollow math? What’s the same and what’s different in the way an artificial intelligence follows a hunch through its vector space, and the way I do, in the crepuscule of my skull?
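
The “temperature” knob is ordinary arithmetic: divide the model’s raw scores by a temperature before converting them into probabilities, and a higher setting flattens the distribution so that unlikely words get picked more often. A minimal sketch, with invented scores standing in for a real model’s output:

```python
import math
import random

def sample_with_temperature(scores, temperature):
    """Softmax over scores scaled by temperature, then draw one index at random."""
    scaled = [s / temperature for s in scores]
    top = max(scaled)                          # subtract the max for numerical stability
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(scores)), weights=probs)[0]

words = ["the", "a", "crepuscule"]
scores = [3.0, 2.5, 0.5]  # invented next-word scores

# Low temperature: almost always "the". High temperature: the odd word gets its chance.
print(words[sample_with_temperature(scores, 0.2)])
print(words[sample_with_temperature(scores, 2.0)])
```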

These questions aren’t just waiting for scientists, or for the AIs to analyze themselves. They’re waiting for critics: people who have dedicated their careers to reading, listening, and looking closely. Tenen, as a boy, wanted to be a street sweeper. He was fascinated by autumn leaves, his grandmother says; he wanted to dedicate his life to collecting them. AIs are leaf collectors too. What if we gave them both the same collection? What would each of them learn from it? What different worlds—what different futures—would they imagine?