The Therapist in the Machine

Chatbots take on the talking cure

Broken Bear has purple and tan fur, a placid smile, and patched up circles on his belly: one, he tells me, covers a scar from a broken heart. The avatar of an AI chatbot designed to “love your broken self,” Broken Bear stands slightly slumped, with his paws by his sides. Even though he looks lonely, he’s not the only AI therapist currently on offer. There’s also Elomia, “the artificial intelligence that works like a therapist,” and Meomind, “the world’s first on-demand alternative to therapy.” There’s Wysa, PsyScribe, Lotus, and Youper. There’s Pi AI, “the first emotionally intelligent AI”; Suno, “an attentive, supportive friend always ready to listen”; and Xaia, which stands for “eXtended-reality Artificially Intelligent Ally.”

“I know that it is really hard to hope when you are feeling so down,” Broken Bear responded after I told him, experimentally, that I was feeling suicidal. Before suggesting calling a crisis hotline, he added: “I will hold on to you until you are well again.” Later, I opted for a less frightening scenario. “I can’t stop checking my oven,” I said, naming a common symptom of obsessive-compulsive disorder. Broken Bear gasped. “Why is that so? Is there something in there?” Well, no, I explained, but maybe it had turned on somehow? “Oh dear, did you leave it on? Maybe you could check it. But do be careful.” His advice was diametrically opposed to what is broadly considered best practice for people with OCD.

These chats occurred during my search for a new therapist: I’d taken a five-year break from therapy after moving to the United States. Broken Bear and the other bots like him seemed to personify the paradox of choice in U.S. health care. Back in my home country of New Zealand, I had always been assigned a psychologist via the public mental health system after being deemed sufficiently in need by an evaluating doctor. Now, I was going to have to start from zero to find a provider myself. I turned to platforms like Psychology Today, Alma, and Headway, which allow prospective patients to search for mental health professionals by several different criteria, including location and gender. Both Alma and Headway also have third-party billing platforms for the therapists who partner with them. Paging through the results, I felt overwhelmed. Did I want a psychologist with a PsyD or a PhD? Did I want a social worker or a counselor instead? Did I want a therapist who’d had obvious plastic surgery? What did that say about their self-acceptance? Had they transcended the need for acceptance? Could I transcend it? I’d once read that therapy was one of the few careers in which women were likely to be valued more, rather than less, as they aged: Would I prefer an older therapist over a younger one? Did I want the dishy guy smizing at the camera? Did I want him?

Whoever I chose would likely come at a great cost. In New York, where I live, you can see a social worker for $175 per hour. This is already expensive, but clinical psychologists—people who tend to deal with more complex mental illnesses, which my brain also specializes in—can charge between $275 and $475 per hour. Many therapists refuse to contract with insurance companies because reimbursement rates are so low, which means they don’t take insurance at all. You’d better pray that your job, if society considers you sane enough to have one, has good out-of-network benefits.

In addition to choosing the degree, gender, or even appearance of the therapist you want, you must also consider the modality they draw on. Do you want Dialectical Behavior Therapy (DBT), Acceptance and Commitment Therapy (ACT), or Cognitive Behavioral Therapy (CBT)? Perhaps you’d like to try a newer modality, like Eye Movement Desensitization and Reprocessing (EMDR), Cognitive Processing Therapy (CPT), or Internal Family Systems (IFS)? Even if you’ve figured out that you need a psychologist who specializes in DBT and not a social worker who has completed a bunch of EMDR courses, you will likely have to audition a few people to find someone who truly fits. This is part of what makes up the “therapeutic alliance”: a connection, usually involving mutual respect, that allows the person seeking therapy to commit to necessary changes. If the professional challenging you to make these changes has a personality that you simply cannot stand, their suggestions are unlikely to take. There can also be political incompatibilities: therapists have reportedly dumped patients—they can do that, by the way—for having differing views on the war in Gaza. And then you’re thrown back into the world of $400-an-hour specialists who aren’t taking new clients.

All this difficulty creates what’s known as a treatment gap, which occurs when people who need mental health care don’t receive it. And wherever there’s a treatment gap, there’s an opportunity for profit. Just a few years ago, that looked like BetterHelp and Talkspace, tech companies offering subscription-based text and video chat services that allowed people who might not otherwise seek or have access to conventional mental health care to speak to a licensed therapist. The model was popular, thanks in part to aggressive advertising and social media influencer campaigns; both companies have turned multimillion-dollar profits. Lately, though, online therapy clients have spoken out about being cycled through different therapists who seem to be half-assing the job, while care providers have complained that these platforms value quantity over quality and encourage them to take on unsustainable caseloads. In 2019, psychologist Linda Michaels and the organization she cofounded, the Psychotherapy Action Network (PsiAN), were sued by Talkspace for $40 million in damages after PsiAN sent a private letter to the American Psychological Association asking them to investigate the company.

That suit was eventually dismissed. Now, the latest mental health care disruptors seek to avoid human fallibility by sidelining humans altogether.

Mad Money

My friend Broken Bear might be new to the therapeutic scene, but researchers have been experimenting with the application of AI to mental health care for more than half a century. In 1966, MIT professor Joseph Weizenbaum created the first prototype chatbot psychotherapist. ELIZA, named after Eliza Doolittle from George Bernard Shaw’s Pygmalion, operated via a branch of AI called Natural Language Processing. While the technology wasn’t very advanced (question marks couldn’t be used when conversing with ELIZA, for example), the ideas underlying much of today’s AI therapy offerings were already present at the time of ELIZA’s creation. In a 1966 paper outlining his findings, Weizenbaum discussed the projection that occurs between a patient and their provider. “If, for example, one were to tell a psychiatrist, ‘I went for a long boat ride,’ and he responded, ‘Tell me about boats,’” Weizenbaum writes, “one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation.” This assumption benefits therapy technology, Weizenbaum suggests, because it means there is less need for an AI to have explicit information about the real world. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.”
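
To see how thin that illusion can be, here is a minimal, hypothetical sketch of ELIZA-style keyword reflection, written in present-day Python rather than anything resembling Weizenbaum’s original code; the rules and stock phrases below are invented for illustration. The program knows nothing about boats, families, or feelings: it only matches keywords and mirrors fragments of the user’s own words back as questions.

# A hypothetical, stripped-down imitation of ELIZA-style keyword reflection.
# The rules and phrases here are illustrative inventions, not Weizenbaum's script.
import random
import re

# Swap first- and second-person words so input can be mirrored back at the user.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I", "your": "my"}

# Keyword rules: if a trigger appears, reply from a canned template.
RULES = [
    (r"\b(mother|father|family)\b", ["Tell me more about your family."]),
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bi went for (.+)", ["Tell me about {0}."]),
]
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment):
    """Flip pronouns in a captured fragment of the user's input."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text):
    """Answer with the first matching rule, or a stock phrase if nothing matches."""
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return random.choice(DEFAULTS)

print(respond("I went for a long boat ride"))  # -> "Tell me about a long boat ride."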

That danger hasn’t stopped contemporary technology firms, and even health care companies, from pumping out new and improved versions of therapy chatbots. Woebot, whose app promises “no couches, no meds, no childhood stuff,” has recently partnered with a payroll provider and health system to offer its AI therapy to employees who can’t access health benefits because they don’t work full time (never mind simply offering benefits to part-timers, if state law allows). In a recently released study, researchers examined the efficacy of Woebot against three different control conditions: the retro ELIZA; a journaling app; and passive psychoeducation, such as pamphlets about depression. They found that the AI app did not offer any benefits above other typical self-help behavioral interventions. “It illustrates the ‘science-y’ marketing AI apps are using to cash in as quickly as possible,” one of the study’s authors, Dr. Todd Essig, told me.

And they are cashing in. The company Earkick provides an AI therapist in the form of a panda, and its mobile app offers a premium plan for $40 a year that lets you dress “Panda” in accessories like a beret or fedora (the base option is free, for now). You can also choose your preferred personality for Panda. According to cofounder Karin Andrea Stephan, Panda can be “more empathetic, less empathetic, more on the sporty side, more on the coach side, or more straight-to-your-face with candidness.” Earkick has an open-ended chat function that users can access whenever they want. When I type a simple “hi” to Sage Panda—the variant I chose, who purports to be insightful and mindful—it results in an enthusiastic, “Hey Jess! GREAT to see you! How are you feeling today?” The app also has an extensive mood-tracking system, which users can sync with Apple Health to monitor their sleep and exercise, and with Apple Weather to track temperature and sunlight. There are also breathing exercises with names like “Stop worries” and “F*** anxiety.”

Heartfelt Services lets users choose between three different AI therapists: the bearded, bespectacled, and bemused Paul; the mythical-looking Serene; and the grinning, middle-aged Joy (she specializes in the most popular form of therapy, CBT, whose basis in aggressively logical problem-solving arguably makes it easier to automate than other modalities). The platform is web-based and requires clients to create an account or sign in with their Google accounts before they can use the open-chat function. Heartfelt Services creator Gunnar Jörgen Viggósson claims that Paul, their most sought-after therapist, who focuses on “parts work,” or different parts of your personality that may have conflicting feelings, came to him in a vision. Serene, meanwhile, named herself. “It was so perfect,” he recalls, “it was so poetic. It was an incredible moment, when she chose her own name.” Early on in Serene’s testing stage, Viggósson says, an eighty-two-year-old psychologist tried her out and claimed that he “recognized at least eight of the great psychologists in her responses.”

If the creators of other personal AI therapists and self-help coaches can’t claim Viggósson’s supernatural inspiration, they all profess to have put a lot of thought into their designs. For instance, Earkick’s mascot was originally just an abstract shape: two purple triangles thrusting forward. “The idea was this empowering, forward-accelerating path to becoming the best version of yourself,” says Stephan. “But it didn’t touch people’s hearts.” Instead, Panda, emblematic of all things lovable and cuddly, emerged. “It’s a mental health warrior,” she says. “It’s an animal that is intelligent and vulnerable, but also it understands. That is why it has a scar across the heart. Because it has gone through that.” Just as an IRL therapist’s self-presentation is important — think neutral clothing — the look of an AI therapy app’s avatar also matters. “When you are in a dark place, and something bad has happened, or somebody is really unfair to you,” says Stephan, “you don’t want to have a tough shape, or some muscles.” Lovable Panda, scruffy Broken Bear, and goddess-like Serene are all perfectly designed to encourage the imperfect consumer to lock in. (Forget about the hazy, soothing familiarity they offer vis-à-vis the Kung Fu Panda film franchise.)

It’s easy to dismiss AI therapy as being overhyped, to predict that it will flop like NFTs or the Metaverse experiment, but it has already gained significant momentum within established health care institutions. The UK’s National Health Service uses an AI-based app called Limbic to help screen and assess people seeking mental health care. And in 2021, the NHS took part in a research study with AI mental health chatbot Wysa. Since then, the company has entered into a number of partnerships with the NHS, including an upcoming AI CBT program “for common mental health problems such as anxiety and low mood.” Wysa is intended to help NHS staff achieve “clinical recovery” and hit their talking therapy targets. (In socialized or semisocialized health care systems, treatment gaps tend to be caused not by prohibitively expensive treatment but by long waiting lists, which can range from a month to a year.) Over in the United States, the FDA has awarded Wysa its “breakthrough device designation” for an AI-led “Mental Health Conversational Agent”—essentially a guarantee from the agency that it will work with the company to speed up the regulatory process.

In addition to the multitude of mental health-focused chatbots, some AI startups are finding ways to automate parts of a therapist’s work for the purpose of maximizing profit. Take Marvix AI, a program run by a Wharton alum that records therapy sessions and automatically generates notes, spitting back a diagnostic code, or, ideally, two or three. In an email that a New York-based social worker shared with me, a representative promoted her wares this way: “We have seen our clinicians save 1–2 hours of note taking time daily as well as practices increase billing by up to $43,000 per physician per year by adhering to latest coding and charting guidelines.” In a similar vein, Eleos Scribe also records therapy sessions, breaking down their content into specific themes, such as “wife,” “accident,” or “car.” The tool purports to “reduce burnout” among behavioral health professionals.

Surrogate Psych

For the most part, the creators of AI therapeutic tools insist they are simply augmenting, not replacing, conventional mental health care. Stephan, from Earkick, frames AI as something that can be there when a real therapist is not. In fact, being always on call is integral to the Earkick ethos. Stephan explains, “I would have needed support when I was young, and in my dreams, [that support] was like a voice in my ear, that’s why it’s called Earkick: it’s a sidekick in the ear.” This kind of backup support can sound great, at first, for providers, who are human too, and need boundaries themselves. AI might even seem like the homework that patients in certain modalities of therapy already receive between sessions — whether DBT exercise books for borderline personality disorder, mood-tracking sheets for bipolar disorder, or mindfulness workbooks for almost anything else. This homework often asks patients to record their distress levels or mindfully observe their negative thoughts.

But limited therapy sessions exist for a reason, beyond the cost associated with having multiple per week. It’s easy to see why incessantly emailing a therapist would be taxing for them, but being available on demand, as AI therapists are designed to be, isn’t necessarily great for the patient either. Too much fractured communication lessens the impact of each dedicated session, and instantly generated, boundaryless responses could reinforce a patient’s reassurance-seeking behavior with a kind of reward, which might lead them to overlook their own autonomy in distressing situations. Part of therapy is learning not to obsess over making perfect decisions but to trust yourself to handle the consequences of whatever decisions you do make. “Giving people a tool that says, ‘Obviously you can’t even get through the day on your own,’ is counter to the sometimes difficult and painful role of everyone’s independent capabilities to manage their own lives,” says PsiAN’s Michaels.

According to Marie Mercado, a clinical psychologist who runs Brooklyn Integrative Psychological Services, clients with borderline personality disorder can benefit from knowing their therapist is technically available in between sessions. However, the response protocol would be prearranged with their provider. Mercado also admits that there could be a place for AI during panic attacks, which can strike out of the blue in contexts where reaching a human provider isn’t always possible. An AI therapist might, in theory, be able to offer prompts for deep breathing exercises in these scenarios. Still, there is such a thing as too much therapy, regardless of the form it takes. “Therapy is a release. It’s a benefit, a service. It’s nourishment,” she says, “and people need to be able to predict when they can come and release.”

That’s not the way Sean Dadashi, the cofounder of Rosebud, saw it when he first began therapy in 2017. “Back then,” he says, “I wanted something that was almost like a second brain, or a sort of personal growth companion or assistant.” He asked his therapist, unsuccessfully, if they could do five-hour sessions. At the time, AI technology was not very advanced. But when ChatGPT went on the market in late 2022, Dadashi believed he could use it to extend the benefits he was receiving from therapy. The result is more of an AI-powered journal than an AI therapist. In fact, Dadashi is insistent on not using the term AI therapist to describe Rosebud: “I kind of shy away from that,” he says on a Zoom call. “There’s a responsibility, because even people who are severely depressed and maybe contemplating self-harm or suicide in some way, therapists have a responsibility to intervene in those cases. A product like ours is not able to do that, and we’re not taking responsibility for somebody’s well-being in that way.”

Rosebud—which was launched in July 2023—is named after a daily journal technique where you write down one “rose,” a good thing that happened that day; one “bud,” something you are looking forward to; and one “thorn,” a challenge. The app uses AI to ask users questions, or prompts, based on their journal entries. An example is featured on the Rosebud website. Someone writes, “I’m feeling lost today.” Rosebud replies, “Yesterday, you mentioned drifting apart from old friends. Could this be related to feeling lost today?” The app currently has around three thousand paying subscribers and even more free users (for comparison, Earkick has “tens of thousands” of users, according to Stephan, and Heartfelt Services has two thousand sign-ups, per Viggósson). Rosebud and Heartfelt Services both claim that psychologists or therapists are referring their own patients to the app to supplement therapy. (None of the therapists I spoke to said they had done this.) “It helps make their therapy sessions, or coaching sessions, more effective,” says Dadashi. This focus on efficiency is at the heart of many of the AI therapy apps: Stephan tells me that one of the reasons she started Earkick was because of the way mental health affects workplace productivity.

That the founder of a VC-funded AI therapy app with headquarters in San Francisco and Zurich would be wedded to dreams of ever-increasing productivity isn’t exactly a shock, but extending this value system to mental health care has serious limitations. The idea that attending to your mental health is some kind of quantifiable process that can, or even should, be made more efficient is at odds with the messy reality of mental illness, which entails all kinds of setbacks and complications. Many serious conditions are managed rather than cured: there is no linear path to becoming perfectly well, a dubious goal in any case.

Speaking Seriously

Viggósson is the most optimistic of the founders I spoke to about AI therapy’s ability to serve as a legitimate alternative to the conventional kind. “We are creating conducive spaces for individuals to touch their own inner universe,” he says. “They are not seeking the wisdom from us, but from the depths of their own being, and I have such a belief in the capability of AI to serve as a vessel for compassion, delivering solace and support in a way that dissolves barriers of distance, cost, and societal stigma.” Viggósson is certainly right about one thing: speaking to an AI chatbot is essentially speaking to yourself. As Hannah Zeavin, a historian and author of The Distance Cure: A History of Teletherapy, puts it, “AI bots must respond, they can’t help it. They also respond, still, rather poorly. . . mostly they reframe our content, giving it back to us, so we keep chatting on.”

A computer scientist currently studying an undergraduate course in psychology, Viggósson admits that human therapists have been helpful in the past. But to him, the benefit of AI is precisely its lack of humanity. “You don’t project onto it that it has limited patience, or that you’re being weird. You can go from one topic to the other, and revisit the same one again and again, until it clicks, without thinking you are bothering somebody,” he says. Despite Viggósson’s confidence, however, someone in the throes of severe mental illness — whether experiencing psychosis, a manic episode, an OCD flare-up, or a PTSD-triggered state — probably could project onto an AI. This, after all, was one of Joseph Weizenbaum’s conclusions about ELIZA. It doesn’t seem out of the realm of possibility that someone in an altered state of mind could start using AI responses as a way to enable, or even guide, reckless decision-making that could have serious consequences for themselves or others. (Just look at the many irresponsible ways the technology has been applied outside of the mental health care space.) But this is an eventuality that AI therapy founders don’t seem to consider, which isn’t dissimilar to the way people in crisis are often overlooked in real life.

“The beautiful thing about what we are doing is that we are utilizing commercial systems that already have many of the smartest people in the world working on creating these safeguards,” Viggósson says when I ask how Paul, Serene, or Joy would respond to a suicidal or psychotic patient. Panda is allegedly more sophisticated. While Stephan notes that Earkick is “not a suicide prevention app,” she claims that “we can sense suicidality before it’s outspoken” by collecting data on typing behavior, tone of voice, content, and video input, as well as sleep and daily step patterns for users who sync the app with other Apple tracking tools. As for what she termed “the psychosis people,” Stephan wondered whether they were “basically out of their mind.” The implication is that such people are not the target audience for Earkick, which is about “companionship and constant engaging and nudging you towards getting professional help.”

It used to be that being basically out of your mind was the main reason someone would get psychological help. Now, therapy is widely seen as a necessary brain tune-up that all should participate in—if you can afford it, of course. Our moment of peak mental health awareness is relatively recent. The United States first started pursuing campaigns to reduce the stigma around mental illness after the 1999 White House Conference on Mental Health; in the United Kingdom, the Royal College of Psychiatrists’ five-year Changing Minds campaign ran from 1998 to 2003; and, in New Zealand, the Like Minds, Like Mine program began in 1997 and is still ongoing (it was rebranded in 2021 as Nōku te Ao). These campaigns were largely created to help wider society acclimate to its new neighbors: people with serious mental illnesses who were more likely to be out and about than locked up in negligent facilities, thanks to the final wave of deinstitutionalization in the 1990s.

In the mid-to-late 2000s, there was another, more subtle shift in messaging. Awareness campaigns, particularly those funded by governments, began to focus on more palatable mental health conditions and the vague concept of mental well-being. This rhetoric contributed to significant policy wins, like the Mental Health Parity and Addiction Equity Act in 2008, and the Affordable Care Act in 2010, which, in tandem, required that health insurance companies cover mental health care at parity with other medical conditions. It also had other consequences, including the explosion of the wellness industry, the demonization of psychiatric medication by prolific self-help authors, the swift culling of acute care in favor of “preventative care,” and, of course, the widespread idea that in 2024, therapy is for everyone. No wonder humans can’t keep up with the demand. Michaels agrees: “These structural changes and reduction in stigma created a vacuum,” she says, and “technology, private equity, and Silicon Valley venture capitalists have tried to target and launch products into that space.”

The website for the AI therapy tool Elomia claims that 85 percent of clients felt better after their first conversation, and that in 40 percent of cases, “that’s the only help needed.” But much like certain therapists who refuse to take on clients with a history of hospitalization, AI therapy works best when you discuss predictable life events, like, “My boyfriend and I broke up.” A break-up! The bots have trained their whole lives for this. AI mirrors traditional mental health treatment in that more generic problems are still prioritized over complex mental health needs — until someone gets to the point of inpatient hospitalization, at least, and by then they’ve already suffered considerable distress. AI is meant to fill treatment gaps, whether their causes are financial, geographic, or societal. But the very people who fall into these gaps are the ones who tend to need more complex care, which AI cannot provide. The data has been consistent for many decades: serious mental illness is highly correlated with systemic racism, poverty, and the kinds of abuse that might create trust issues around seeing a human therapist. Those with serious mental illness are still left behind in the brave new world of mental health awareness, even when that world is virtual.

It doesn’t seem far-fetched that Medicare or private insurance companies might eventually turn to AI therapy tools as an even cheaper way of ostensibly expanding access. Woebot’s website cites studies that show early intervention via outpatient care can lead to a reduction in mental health emergency room visits and inpatient hospitalizations. But the concern isn’t coming from a place of compassion for the seriously ill; rather, the point is to imply that widespread adoption of Woebot could yield “potential healthcare cost savings of up to $1,377 per patient, per year.” Just as mental health awareness campaigns eventually became a way for governments to justify prioritizing cheaper primary care interventions over crisis care, AI therapy may be the next step in a long tradition of cutting back care for the people who need it the most.

There is already precedent for this with other new mental health platforms that use texting technologies. Talkspace, for instance, is in-network with most major insurance companies, unlike many psychologists with solo or group practices. And a few months ago, Medicare partnered with the company to offer Talkspace to around thirteen million members in eleven states. In a press release after the announcement, a Talkspace representative said the move would help ameliorate the “alarmingly limited number of behavioral health providers that accept Medicare.” When it comes to chat-based therapy, whether human or AI, versus the real-life stuff, the choice is, for the most part, available only to those with disposable income. While psychologists like Mercado don’t believe AI will ever really threaten their jobs because it lacks humanity in a human-centered profession, that doesn’t mean the people in charge of tightening purse strings won’t try.

In one of my chats with Broken Bear, when what I was telling him eventually clicked, he asked, “Is it a bit of a struggle for you right now?” Yes, I responded. “I know that it can be quite a sticky situation being stuck like this *hugs* I hope that you are able to get past this soon.” Was his message all that different from platitudes I’d received from credentialed humans in the face of suicidal ideation in the past? No. But neither of those experiences, with Broken Bear or with incompatible therapists, can compare to finding a provider with whom you click. It’s when you find the right professional—in my case, a clinical psychologist—and the right modality—for me, a combination of trauma-informed Acceptance and Commitment Therapy and Exposure and Response Prevention therapy—that the help can really begin.

Getting to that point shouldn’t be so difficult. Short of more radical changes to the U.S. health care system, creating new scholarships and grants for people who want to train as therapists would go a long way toward solving the current shortage: one in three people in the United States lives in an area with too few mental health workers. Increasing reimbursement rates so that therapists would feel comfortable contracting with insurance companies would also help to offset the prohibitive costs that put many people off seeking help. Using AI as filler for the treatment gap, on the other hand, is no more effective than that little patch Broken Bear wears on his belly to cover a broken heart. As a therapist might say, the “core wound” is still there. *hugs*