Think Nothing of It
Whenever a new technology drops, humans tend to panic. In Plato’s Phaedrus (written in the fourth century BCE), Socrates tells the story of an Egyptian god who invents the art of writing and offers it to humanity as a gift. The recipients are less than thrilled. “This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories,” Thamus, a local king, complains. “They will appear to be omniscient and will generally know nothing . . . having the show of wisdom without the reality.”
Twenty-four centuries later, amid the disorientation of the AI boom, this argument is relevant once again. The arrival of ChatGPT in late 2022 sparked a debate over whether the technology will supercharge human creativity and critical thinking or effectively replace them as it automates an increasingly broad range of cognitive tasks. Tech companies breathlessly promote AI as an end to menial drudgework, liberating our attention for more engaging and meaningful pursuits. Some of its more devoted proselytizers go further, claiming it’ll be a near-panacea for humanity’s most pressing problems: cancer, climate change, loneliness, poverty, etc.
Any “societal disruptions” encountered along the way are scrupulously downplayed as the justifiable costs of a worthwhile future. “There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before,” OpenAI’s Sam Altman avowed last year in a blog post. Altman and other AI boomers often invoke what they regard as a key lesson of history: that every disruptive technology—the printing press, photography, calculators, the internet—unsettles some portion of the population when it first appears (hence the “disruptive” part), but gradually everyone adjusts to a new equilibrium. That which initially seemed scary soon becomes mundane, and eventually it’s hard for us to imagine what life must’ve been like without it. So, too, with AI, the true believers argue: “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be,” Altman continued.
There’s an implicit assumption behind this techno-optimistic historical narrative, and it’s now being drilled harder than ever: This is the unavoidable next stage in technological evolution, so you may as well just accept it.
It’s a line of thinking born less from an accurate reading of history than from financial pressure: AI developers like OpenAI desperately need to keep attracting capital and users at a time when many of them have yet to become profitable, and when the future of their industry—all the bluster notwithstanding—is very much an open question. It also elides another important, and far less comforting, historical lesson: the mass adoption of any new technology, by definition, entails the loss of some aspect of human agency. Thamus had a point: Even if few of us today would argue that we should jettison writing altogether, there’s a very strong case to be made that we moderns don’t have the capacity for memory that our ancient ancestors did, with their reliance upon vast oral traditions and mythologies passed down from one generation to the next. For better or worse, technology becomes a prosthesis. “Men have become the tools of their tools,” Thoreau wrote at the peak of the Industrial Revolution.
To be clear, I’m neither a Luddite nor an AI doomer. We’re a technological species; inventiveness is intrinsic to human nature. And given enough time, I believe we inevitably would’ve found a way to build intelligence, or at least something that looks a lot like it, into machines. I also consider AI legitimately useful in some respects. But like many others, I often find the steady creep of AI into the fabric of everyday life deeply unsettling. Not so much for the obvious reasons, like its tendency to hallucinate or to be comically obsequious—those are bugs that are being worked out with every new model. The really disturbing thing is how well it often works, the mind-numbing convenience it affords.
AI can already automate many of the day-to-day workflows of professionals working in fields like software engineering, finance, and customer service, and some of the technology’s more vocal proponents—like Altman and Anthropic CEO Dario Amodei—predict that in the not-so-distant future most businesses will require far fewer human employees as they outsource critical tasks to semiautonomous “agents.” In February, Block CEO Jack Dorsey announced he was laying off over four thousand employees—close to half the company’s total headcount—due to the growing capabilities of AI: “We’re already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company,” he wrote in a post on X, “and that’s accelerating rapidly.” It will be years before the consequences of this rising tide of automation for the economy become clear, but its effects on human psychology are already beginning to show. Plato’s words ring like a warning: They will appear to be omniscient and generally know nothing.
Friction, a catchall term for anything that slows down or complicates the user experience, has become a pejorative in the modern tech industry: something to be rooted out and eliminated, like weeds from a garden. Chatbots and agents, we’re endlessly told, can remove hassle and frustration not only in our jobs but also in our personal lives. The ultimate vision of artificial general intelligence, or AGI—the goal toward which every major AI lab is ostensibly moving—is to automate any cognitive task the human brain can do (or, according to another definition, all economically valuable labor). By that logic, any kind of challenge a human being might grapple with, be it psychological or physical, is recast as a market opportunity. Friction-removal is big business, and business is a-booming.
Big tech’s promise of AI as the end of friction, of course, hinges on the presumption that friction is inherently a bad thing. This is yet another argument we’d be wise not to accept at face value. Some studies, in fact, have already strongly suggested the opposite—that friction is healthy and that we get rid of it at our peril.
In January of last year, Michael Gerlich, a professor at the Swiss Business School, published the results of a study that found “a significant negative correlation between frequent AI tool usage and critical thinking abilities,” especially among younger users. Gerlich interviewed over 650 people across a range of education levels and ages and concluded that while AI can provide some educational benefits (by generating personalized lesson plans, for example), it also undermines more reflective thought through “cognitive offloading,” a phrase coined almost a decade earlier by psychologist Evan Risko and cognitive neuroscientist Sam Gilbert—basically, outsourcing the most difficult, and therefore most productive, parts of thinking to a machine. “The pervasive availability of AI tools, which offer quick solutions and ready-made information, can discourage users from engaging in the cognitive processes essential for critical thinking,” Gerlich writes. A few months later, that conclusion was echoed in a report published by a team of researchers from none other than Microsoft, which showed that the more confidence “knowledge workers” placed in AI, the less likely they were to exhibit strong critical thinking skills. Dependence upon the one seemed to be coming at the expense of the other.
Another recent study, which has yet to be peer-reviewed, took things one step further by actually peering into the brains of people actively using AI. The researchers used electroencephalography (EEG) to map the brain activity of three separate groups of subjects, all of whom were asked to write a short essay. The first group had access to ChatGPT, which they could interact with as they wrote; the second could use Google and other online search engines (with AI-generated responses deactivated); and the third, a “brain-only” group, had access to neither. The results showed more connectivity between brain regions in the third group than in the first two, indicating that its members were getting more cognitive exercise. The LLM-assisted group also reported a lower sense of ownership over what they’d written, and many of them weren’t able to accurately quote a single passage from their essay. The use of AI to work through some of the more difficult parts of writing, the researchers concluded, induced a “metacognitive laziness”: a diminished motivation to think about how we’re thinking.
“AI tools that generate essays without prompting students to reflect or revise can make it easier for students to avoid the intellectual effort required to internalize key concepts, which is crucial for long-term learning and knowledge transfer,” the researchers wrote. In other words, using AI to bypass the time and effort required to grapple with an intellectual problem might feel productive in the short term, but those cognitive shortcuts add up. Eventually, you could find yourself standing in front of a gap you can’t cross without the help of a chatbot.
This is all a bit discouraging, but it isn’t surprising. Of course a tool built to shoulder cognitive work on our behalf is going to cause some degree of mental “laziness,” at least among its more active users. Again, this has been a theme of technology through the ages: tool use takes its toll on human agency.
Speaking of human agency, it’s important to bear in mind that AI per se doesn’t erode critical thinking skills, memory, or any other ability that humans pride themselves on. It’s how we use it that matters. And despite the inevitability narrative being pushed by Silicon Valley, we still have a choice in this regard.
Another team of researchers, from the Georgia Institute of Technology and the University of California San Diego, has shown that when used to strategically cultivate friction rather than bypass it, AI can nurture reflective thinking and cognitive flexibility. In a paper posted in September, they described their experiment with an AI tool called “Socratic Mind,” designed not to provide users with quick answers but to draw out their own reasoning capabilities through the style of questioning pioneered by Socrates.
A cohort of students interacted with Socratic Mind while completing an online computer science course, and the system responded to them with follow-up questions meant to be constructive. Just as Socrates tried to lead his interlocutors to a definition of justice or wisdom through dialectic (elenchus in Greek), this AI system would respond to, say, a question about debugging a line of code with another question intended to spark deeper thought. (Think: “And why do we get this specific error message?”) The researchers found that this friction-friendly use of AI improved the quality of the students’ work, underscoring the technology’s “promising role in fostering deep engagement, personalized learning, and scaffolding when used interactively.” This hints at a growing trend in the age of AI: intentionally fostering certain forms of friction to preserve cognitive autonomy, a kind of mental moat against the encroachments of automation.
In early 2024, behavioral design researchers Zeya Chen and Ruth Schmidt posted a paper laying out what they call a “positive friction model” for human-AI interaction. They argue that through the use of “behavioral speed bumps”—design features that prompt momentary, reflective pauses—AI developers can build products that foster well-being rather than dependency. Chen and Schmidt draw upon real-world experiments aimed at intentionally slowing down processes that might otherwise be ripe for automated acceleration, such as Dutch kletskassas, or “chat checkouts,” in which supermarket customers take their time and interact with a human cashier rather than rushing through an automated checkout. So-called “friction-maxxing” is a thing now too.
But phrases like positive friction and friction-maxxing come off as oxymoronic in a culture that places a premium on convenience at every turn. Chatbots, at least as they’re currently trained, optimize for engagement, not psychological health. Developers might eventually start prioritizing AI tools trained to leave room for a healthy amount of friction à la Socratic Mind, but that seems unlikely to happen anytime soon.
At the very least, the positive friction model can plant a seed, helping us envision an alternative future far more worthwhile than the one being pushed by the AI boomers. The need to invent new technologies may indeed be innate to human beings, and it’s true that we have an unrivaled ability to normalize the abnormal. But it’s a very big leap to take those two traits of human nature as justification for the claim that AI is a historical, even evolutionary inevitability.
This is one of the more disturbing qualities of the so-called AI boom: in a rhetorical sleight of hand that makes a mockery of human agency, it’s presented as if it were unavoidable. Silicon Valley likes to play the part of that god from Plato’s parable, the bringer of new technology from on high. But there is no god of technology, only human choices; no inevitabilities in the future of AI, only clever marketing schemes trying to convince us otherwise.