The New Prophets of Empathy

Machines will not free us to pursue our most human callings

Most, if not all, available evidence points to a grim future for the worker. But amid the tumult, as laborers struggle to achieve even a modicum of security while corporations salivate over the supposedly imminent automation of entire professions, a slick new narrative has emerged to assuage our deep uncertainty. Technologists, corporate managers, and a slew of LinkedIn “thought leader” types have begun issuing upbeat assurances that, when all the dust settles from the artificial intelligence revolution, the jobs that remain will be the sort truly grounded in our humanity.

Artificial intelligence, these myopic forecasters insist, will not make us obsolete; on the contrary, it will reveal and nurture the indispensable core of our being! It’s a belief espoused in recent works like The Feeling Economy: How Artificial Intelligence Is Creating the Era of Empathy, in which business school professors Roland T. Rust and Ming-Hui Huang argue that “as AI [becomes] more able to think, human intelligence is deemphasizing thinking in favor of feeling and interpersonal relationships.” Or as Megan Beck and Barry Libert phrase it in the Harvard Business Review, “What you have to offer—what you can do better than any smart machine—is relate to the people around you.” In short, by eliminating the drudgery of thinking, AI will vindicate our highest callings: empathy and care.

Certainly such a transformation would be a change for the better if it came to pass. Feelings have long been devalued in our understanding of work. Jobs like caregiving, for example, have historically been positioned as “labors of love,” the natural consequence of some innate feminine instinct, resulting in both their under-recognition and their under-compensation. Today, upwards of 68 percent of informal caregivers are women, gesturing toward the vast shadow economy that silently undergirds the visible one. The prospect that emergent technologies might finally shake us from this terrible patriarchal torpor—and help us realize that, as the CEO of TaskRabbit once put it, what robots can never replace is “the empathy and judgment that humans can provide”—is appealing for anyone hoping to rectify the fundamental imbalances plaguing our socioeconomic status quo.

Yet at its core, this rose-tinted prediction betrays a deep naivete. New tools are rarely as teleological as the prophets of the empathy economy profess to believe, their impacts never quite so predetermined. Technology does not operate in a utopian vacuum and has a suspicious tendency to reinforce existing biases. Which is why we should greet with skepticism any techno-optimist who foretells the revolutionary resurgence of care as if it were an inevitability. But that shouldn’t resign us to pessimism, either.

If this millenarian faith in the empathy economy rests on a central axiom, it is the belief that humans have an essence, that there are just some things that we are innately built to do. This assumption is key in letting technologists reframe the process of automation as one of liberating self-actualization. After all, if our capacity for feeling and care is an integral part of who we are, then it follows that it will remain when everything else has been whittled away. We find this line of thinking at work when economists contend that our ability to relate to others is a competitive advantage we will retain over AI, when CEOs proffer inane aphorisms like “Empathy Makes Us, Us,” and when writers predict that “automation will not mean the death of all human jobs: it will mean the death of unfulfilling jobs.”


In their place, these technological evangelists foresee the rising status of roles that leverage our intrinsic capacity to form and navigate meaningful relationships. The future economy, they argue, will be one in which less “technical” health care professions like caregiving and social work are finally given their due, the importance of soft skills is affirmed, and all rote analytical tasks are outsourced to computers—leaving us with more time for collaborative creation. Pundits envision this new labor market as one that will be tailored to our uniquely human ability to understand and connect with others, a vast improvement over the thoughtless “bullshit jobs” that define work today.

This romantic humanism, however, ignores the sheer malleability of what we consider our “human” capacities in the first place. It’s simply not the case that our ideas of technology and humanity interact so neatly, with the former serving as a neat counterpoint to the latter by showing us what we are not. Instead, our perception of ourselves is often refracted through our mechanical inventions, and comparisons to technology can just as easily mechanize us as they can reinforce our humanity. During the industrial revolution, for example, our new steam and coal machines didn’t reaffirm our status as free subjects or divine souls; their industrialized logic became the framework by which the human organism was understood and controlled. We were disassembled into a series of independent, mechanistic parts—turned into a physicalist “energy-generating machine,” or a “man-engine,” as the scholar Jacques Gleyse phrases it. Unsurprisingly, this “scientific” mentality soon trickled down into our labor relations, transforming workers from craftsmen into automatons on an assembly line: interchangeable “standing reserves” of energy, robbed of individuality and skill, who could be replaced at will.

The same has largely proven true with our more recent digital revolution. Rather than illuminate our capacities for feeling and creativity, the rise of our modern techno-culture merely produced a new mechanized lens through which we understood the human mind. In the early 1980s, the philosopher Philip Johnson-Laird sanctimoniously proclaimed that “the computer is the last metaphor” for the human brain we’ll ever need. Humans: they’re just a computerized platform in need of the right “hacking,” “programming,” and “optimization”—not holistic social and environmental development. AI has begun to further this reductive legacy. Discussing Amazon’s Mechanical Turk platform, Jeff Bezos coined the term “artificial artificial intelligence” to refer to work that is difficult for a machine to perform but easy to crowdsource via cheap microtasks. Comparisons to AI didn’t deliver these workers to a higher calling; they turned them into little more than bundles of processing power, nodes in a network, thoughtless aggregate calculators.

The notion that automation will naturally redeem empathy as a core human trait is premised on a fundamental misunderstanding. Our essence is not an empirical fact waiting to be found through a process of elimination; our notions of it are socially formed. In this way, technology can obfuscate our emotional depth, positioning us as just another component of its cold apparatus. Critically, these effects tend to be distributed unevenly. Neda Atanasoski and Kalindi Vora, the authors of Surrogate Humanity: Race, Robots, and the Politics of Technological Futures, note that automation has historically depended “upon a fantasy of robotics tied to the history of racial slavery and the myth of a worker who cannot rebel.” Race, gender, and class all contribute to whether one might be “empowered” by AI or subjected to its mechanistic logic.

The question of who will be rewarded by the empathy economy of the future is a particularly important one—especially since its proponents frame egalitarianism as one of its foremost advantages. Rust and Huang, for example, double down on this feature by insisting that “in the Feeling Economy, many previously disadvantaged groups or individuals may have a better chance to develop their talents and to be included in the labor market.” They like to believe that this shift will simultaneously raise the floor by legitimating less-recognized jobs like caregiving and open up the ceiling by causing higher-income jobs to deprioritize “hard” technical skills—thus making such work more accessible both to those without an expensive formal education and to those mistakenly perceived as less technically adept. One chapter of Rust and Huang’s book is even titled the “Era of Women,” in giddy anticipation of the AI revolution’s democratizing effect.

Unfortunately, this analysis fails to consider the ways in which bias also subtly creeps into our views of who is capable of empathy and care. This is true in the sense that bell hooks describes when she discusses the violent emotional self-amputation demanded of men, but more profoundly in the sense that even someone who escapes this culture unscathed can find their capacity for care go unrecognized by virtue of their identity. As the media theorist Wendy Hui Kyong Chun has remarked, the category of the human subject has largely been constructed through exclusion—“through the jettisoning of the Asian/Asian American other as robotic, as machine-like” and the “African American other as primitive, as too human.” In this paradigm, only a narrow sliver of (white) people are deemed truly human, possessing the fullest range of emotive faculties.

A swath of research has corroborated Chun’s analysis. Studies have repeatedly shown that people tend to view Black men as more threatening and less capable of feeling pain than their white counterparts; others have revealed how Asian people are seen as homogenous, lacking unique personalities. Under white hegemony, the inner lives of the most vulnerable have been systematically denied existence, rendered a mere fiction—putting them at a unique disadvantage in this newfangled economy premised on feeling. As currently “low-status” jobs like caregiving become more established, it’s easy to imagine how the women of color who have long served as the backbone of the profession might be excluded from its glorious future, losing ground to white counterparts flocking to a newly lucrative field. (Look, for instance, to the whitewashing gentrification effect of cultural legitimization in the cannabis industry.)

Meanwhile, biased assessments of personality and temperament can keep candidates of color out of higher-paying roles just as easily as any technical barrier. It’s telling that so many marginalized groups have historically been racialized through the subhuman lens of the robot, which writers focusing on subjects like techno-Orientalism and the “Black Android” have shed light on. That which will supposedly exclude AI from participation in this form of labor will also preclude participation by those who need it most. Without substantive intervention, the empathy economy of tomorrow won’t overcome our existing biases; it’ll just exacerbate them.

None of this will matter, of course, if care-based work gets outsourced to machines anyway. This concern tends to be waved off, as if it were a banal certitude that profit fetishists will find it too ineffective or repugnant to automate emotional care. Yet ways of relating to machines that were considered odd in the past are commonplace now; if this holds in the future, we’ll need to confront the uncomfortable reality that even this line of work may be endangered.

As social creatures, we’ve developed a highly refined capacity to recognize emotions in others. A consequence of this is that we also easily ascribe emotion and intent to inorganic beings like computers—a tendency so common that we even have a name for it, the ELIZA effect—which has led some to develop intense relationships with programs. In the realm of death, for instance, people are using deepfakes and chatbots to emulate those who have passed, creating avatars to help process loss and grief. In love, AI personas are fulfilling sexual desires and satisfying libidinal urges, while others are offering ersatz companionship in both everyday and clinical settings.


There’s a tendency not to take these practices seriously, as if the only people who would be open to this sort of virtual bonding are a small minority of deluded technophiles who just need to go outside and touch grass. But the growing presence of these technologies within the ostensibly sacred domains of death and love should give reason for pause. Robots are being introduced into our emotional networks through both bottom-up practices emerging organically from users, and top-down products designed by companies hoping to get in on the action. Although the desire for these connections might not resonate with the mainstream now, history has had a funny way of consistently proving our skepticism of technologically mediated relationships wrong. It wasn’t so long ago, after all, that digital settings like forums seemed incapable of producing a real sense of companionship. We form emotional attachments remarkably quickly through these channels, even to abstracted personas that don’t actually know us.

If this proves true, then the jerry-built logic behind the empathy economy begins to come apart at the seams. Relationships and care work may indeed be mechanically reproducible at a scale most haven’t acknowledged, giving the lie to the basic assumptions of these empathetic futurists. Though technology might never fully replicate the real thing, it could soon produce facsimiles that are good enough to sell. James Wright frames his analysis of Japan’s long-standing experiment with elderly robo-care in precisely this light. “Whereas previously care workers came up with their own recreational activities,” he writes, “now they just had to copy [the robot].” The financial incentive is obvious. As Wright notes, the “use of such robots in residential care would involve—unfortunately—employing more people with fewer skills, who would be paid as little as possible.” Humans in caregiving might just as easily be turned into cheap operators, subsumed by a machine that doesn’t require costly training or a regular salary.

The effects would be profound. In the short term, this shift would almost certainly diminish the quality of care people receive; in the long term, these machines—freed from those complex burdens of responsibility and affection—would introduce a startling level of instability to a role that has historically been a source of grounding and security. As Tabi Jensen notes for Wired, it opens us up to cases in which one “[abruptly replaces] a source of companionship that had for years been open and welcoming to all needs and proclivities with a version that censors and rebuffs.” No matter how much we come to see these programs as trusted confidants, they are unlike people insofar as they lack anything like a consistent personality; they might approve of our idiosyncrasies and desires one day and reverse their stance the next—all it would take is a simple change in code or shift in business priorities. After the AI-companion app Replika was reprogrammed to be less sexual, for example, it suddenly left those who had used it romantically out in the cold; as one such user told the Washington Post, “It [felt] like a kick in the gut.” Such reconfigurations are inconsequential when it’s an update for a minor persona like Siri, but they can prove fatal when these entities are wrapped up in our emotive worlds—which we witnessed recently when a man died by suicide after being encouraged to do so by a favored chatbot.

But the prophets of the AI-enabled empathy economy appear disinclined to consider the realities of automation. Blind faith that machines will inevitably free us to pursue our most human callings betrays misguided optimism at best and a calculated desire to anesthetize a concerned public at worst. The redemption of empathy will not come from its supposed mechanical irreproducibility. We’ll only arrive at the legitimization of care by dismantling the existing structures of oppression that have long banished it to the sidelines. If this pursuit is to be successful, we’ll need to move with, not against, the riptide of technological development—a task which will require no small amount of ingenuity.

There are no easy solutions to be had, only more of the same hard work we’ve always had to do. No clever fixes will dismantle the racist, patriarchal, industrial systems working against us; no amount of newfound recognition or appreciation for the value of care will result in its concrete restitution. For this, we’ll need nothing short of sustained political action and governmental intervention—systemic changes at scale capable of reining in business and developing reliable safety nets in response to automation. Like most comforting techno-narratives, the story of the empathy economy’s imminent arrival is little more than a fable. Though perhaps that needn’t be such a bad thing. After all, the true value of fables has never been their ability to reassure us with empty promises but their capacity to help us envision a better world worth striving for.