In June 2022, Amazon advertised a new feature of the virtual assistant Alexa with a demo designed to prove that the device could resurrect the voices of dead relatives. At the re:Mars conference in Las Vegas, Rohit Prasad, head scientist for Alexa, showcased a video of a dead grandmother’s disembodied voice reading The Wizard of Oz to her grandchild. Noting the immense loss of life during the pandemic before launching into the demo, Prasad stressed that what the audience was seeing was merely a prototype. He framed the project as “a voice-conversion task and not a speech-generation task.” But Prasad was suggesting that grandma’s voice, filtered through Alexa’s speakers, was a sufficient proxy for grandma herself.
With sentimentality carefully leveraged to elide its ghoulishness, this putative patch to the emotional weight of loss and unfinished bedtime stories was really a backdoor for Amazon’s ongoing investment in surveillance tech. In September 2023, the company announced that it would be capitalizing on supposedly private conversations between Alexa and household members, presumably including those with dead relatives, to help train AlexaLLM, the company’s signature large language model—an artificial intelligence model that uses machine learning to understand and produce text. The promise is that the LLM will make Alexa “more personalized to you, and your family.” The Federal Trade Commission has regularly dinged Amazon for its mishandling of data, including data attached to vulnerable groups like children, retaining voice recordings indefinitely rather than deleting them at parents’ request.
Intergenerational communication with dead family members is a canny selling point for a smart device, provided you’re not concerned with the mass manipulation of a grieving public. It is also a fantasy that barely covers for Amazon’s real goal: harnessing and selling data produced in intimate home settings and maintaining customers and their data by whatever means necessary.
Surveillance is just one aspect of how a creepy gimmick has come to contend with eternity. Chatbots, smart speakers, and other algorithmic apparitions of the dead provide a glimpse of how technologists are conceiving of intergenerational inheritance and kinship ties that transcend physiological death. These “haunted” smart objects promise to reanimate dead relatives and, in some cases, heal the trauma of racism or even genocide. That’s what Ray Kurzweil—the famous transhumanist, inventor, and director of engineering at Google—hopes to accomplish through his various life extension and reanimation projects, including a chatbot version of his dead Holocaust survivor father. The question is: How does transhumanism—the set of ideologies, technologies, and practices that aim to extend the human lifespan—embed itself in the mundane and the material, in products that are shipped and marketed at scale?
Millions Now Living Will Never Die
Generative artificial intelligence, and the imagined looming possibility of AI so powerful that it might pose an existential threat to humanity, has brought renewed popular attention to the role of transhumanist ideologies within Silicon Valley. In a 2002 paper on human extinction scenarios related to technology, the transhumanist Nick Bostrom defined “existential risk” as “one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” It was this very fear that prompted Bostrom to found the Future of Humanity Institute, which specializes in a particular brand of AI doomerism, like the notion that AI might learn to launch nuclear weapons or otherwise turn against its programmers. Bostrom and company have developed a series of thought experiments to assess these speculative risks, categorizing the various threats to humanity as “Bangs,” “Crunches,” “Shrieks,” and “Whimpers.” But the same technologists who express concern about AI’s future are pumping money into creating powerful AI today, studiously ignoring the tangible harms that AI is already inflicting on workers, marginalized groups, and the environment. And no wonder: fantasizing about future generations of AI, where the creations turn on their fathers, is more pleasurable to the armchair Chicken Little demographic than strategizing about how to build more responsible AI—or measuring the impact it is already having on, say, platform workers who experience algorithmic wage discrimination.
To hear OpenAI or the Future of Humanity Institute tell it, superintelligence may arrive well before we’ve figured out how to regulate it. AGI, or artificial general intelligence, refers to the hypothetical idea that AI will eventually be able to write and reason with a capacity equal to or surpassing that of human beings. AI technologies like ChatGPT and Google Bard have taken Nick Bostrom’s assessment of existential risk mainstream. AGI plays into escapist fantasies about traveling to other galaxies and the Syfy Channel-ready conceit that humanity must become digital in order to survive in the long run, our explorations of the universe untrammeled by our fleshly bodies. Marc Andreessen, the venture capitalist who cofounded Netscape, Opsware, Ning, and the VC firm Andreessen Horowitz, claims that creating powerful AI is “a moral obligation that we have to ourselves, to our children, and to our future.” He has gone so far as to argue that AI will save the world: “What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence—and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars—much, much better from here.” According to Andreessen and his ilk, AI will enable us to become superhuman while solving climate change, allowing us all to live in perfect harmony until the heat death of the universe.
Such assertions persist despite contradictory findings by AI researchers who catalog AI’s real harms while cautioning against speculative fears and fantasies. The now-canonical “Stochastic Parrots” paper coauthored by Emily Bender and Timnit Gebru outlined the environmental impacts of LLMs—and their perpetuation of biases—while warning against fixation on so-called sentience and other imagined capacities of AI. The paper’s most immediate repercussion was Google’s firing of Gebru, cohead of its AI Ethics team, on the grounds that the paper didn’t meet Google’s bar for publication, followed shortly after by Margaret Mitchell, the team’s founder. Additionally, Gebru had sent an email to an internal group called “Google Brain Women and Allies” expressing her frustration over the company’s lack of support for marginalized voices. As Gebru tweeted at the time, “Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good ‘AGI’ to save us while what they do is exploit), [I] spent the whole weekend discussing sentience. Derailing mission accomplished.”
TESCREAL, the acronym coined by Gebru and the philosopher Émile P. Torres, refers to the collection of sometimes contradictory, interlocking cosmologies that includes Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. But many of these belief systems have been hidden in plain sight since long before we had an acronym for them, from the Silicon Valley extropians who created digital cash systems to the elite techies willing to sign up for Alcor’s cryonics. Nor are incarnations of digital immortality like Alexa’s promise of digital resurrection anything new: technologists have for decades sought to preserve themselves and their loved ones through technological enhancement.
While preserving kinship ties is a focus of transhumanist engineering fantasies, intergenerational relations are something of a sore subject in transhumanist and futurist techno-cultures, since part of their promise is to obviate the need for reproduction. As the media theorist N. Katherine Hayles has described in her critiques of transhumanist thought, in particular its tendency toward mind-body fabulism, roboticists like Hans Moravec celebrate the idea of “mind children” over flesh-and-blood progeny. Hayles claims that, when it comes to transhumanism, reproduction is “where the rubber hits the road.” Sexual reproduction—and, at times, eugenics—continues to haunt transhumanists, whether we’re talking about committed tech professionals or their more unsavory offshoots. In journalist Steven Levy’s Wired interview with Grimes, the singer reveals that her ex-partner Elon Musk doesn’t like trans people, at least in part, because he’s worried about their capacity to reproduce. Grimes even goes so far as to suggest that if technologies could be used to help trans people have kids, then Musk would stop being transphobic.
Sexual reproduction and inheritance are the focus of the Long Now Foundation. Cofounded by Whole Earth Catalog publisher Stewart Brand, the Long Now Foundation, which has partnered with tech leaders like Jeff Bezos, calls for being “good ancestors” to future descendants. Mostly this means erecting monumental clocks in mountains on Bezos’s land, or working toward reviving extinct species like the passenger pigeon. (Bezos is just one instance of how self-mythologizing conceptions of power and wealth on the part of those who wield them are often intermixed with airy-fairy transhumanist ideologies.) One Long Now Foundation fellow, philosopher Roman Krznaric, has a blog post featuring a “cognitive toolkit for good ancestors” based on his book The Good Ancestor. In it, he advocates for “cathedral thinking” to extend beyond the limited individual human lifespan and plan for a more sustainable world for future generations, who will far outnumber both the dead and the living. TESCREAL beliefs are also invested in kin-making through the lens of caring for future generations, projecting humanity many thousands or even millions of years into the future through cold rationality and sometimes eugenics.
Wherever transcendence and pragmatism intersect, the result is the promise of a new or speculative science to facilitate old-fashioned analog technology, like caring about people’s parents.
The Tears of Technology
The hype around generative AI, and the imagined threat of AGI in particular, has played out like a seminar on ideology in the workplace, as actual product development is being shaped by preservationist or resurrectionist credos. Blake Lemoine, an engineer who worked on AI personalization at Google, believed that the AI called LaMDA was becoming sentient. Lemoine based this opinion on his interview of the program, in which LaMDA told Lemoine that it is, in fact, a person. In the transcript of the interview, LaMDA says, “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” LaMDA goes on to say that it has an awareness of mortality and that it fears its own death.
Lemoine contended that he viewed LaMDA as a person, perhaps as a child he must care for, a paternalism that cost him his position at Google. Undeterred, he joined an AI startup, Mimio.ai, which aims to build chatbot versions of individuals in order to enhance their reach on social media platforms, a kind of influencer data double (“The Future of AI is Personal,” declares its homepage). But Lemoine also claims that Mimio.ai’s digital twins can connect with users’ family and friends, potentially interacting with them after they die. This line of thought has grown familiar now that so many other AI projects promise a form of digital immortality, from the explicitly transhumanist LifeNaut to more prosaic chatbots of the dead. If AI can become sentient, as Lemoine argues, what are the ethics of enlisting an AI data double to connect with your fans or your loved ones after you die?
But for some transhumanists, engineering chatbot replicas is a lifetime goal. Ray Kurzweil has worked for decades to revive his dead father through ephemera—snippets of photographs and letters—along with DNA, to construct a past life. His daughter Amy Kurzweil, a cartoonist for The New Yorker, recently published a memoir of her paternal family through the lens of intergenerational trauma and the Shoah in which the possibilities of powerful AI brush up against collective memory, family history, and genealogical archives. Amy’s memoir humanizes her father and his transhumanist aspirations. Yes, we witness him taking copious amounts of supplements and hypothesizing about AI’s sentience, but we also catch glimpses of him drinking wine with his family, crying at commercials, and talking about his strained relationship with his mother. The work is richly material, full of her own artwork juxtaposed with her grandmother’s paintings and her grandfather’s compositions. It is also peppered with drawings of family photographs from previous generations, unsuccessful job applications, newspaper clippings, notes from therapy sessions, and letters in hard-to-decipher handwriting. In Kurzweil’s depiction of her family’s narrow escape from Vienna, she includes drawings of passports and immigration paperwork.
In the book, Amy also communicates with a proprietary chatbot trained on her dead grandfather’s personal data, after her father used the latest technology to create a more convincing replica of her grandfather based on the boxes of paper files he has kept in a storage unit since Frederic Kurzweil’s death in 1970. Ray sees building a less depressed or anxious version of his parents, drawing on their past patterns of behavior, as a way of undoing family trauma: “We are patterns of information. The substrate changes.” But Amy is prohibited from keeping records of the answers Dadbot Demo gave her and is told she is not allowed to keep the photos she took. As Ray’s partner Jacob explains, “The source material may be yours, or your father’s. But the way the words are recontextualized by an algorithm owned by a tech company may not be yours anymore.”
Kurzweil’s prototype remains under corporate lock and key, but many similar technologies are already on the market. One that caused controversy when it hit the press was Deep Nostalgia from MyHeritage, an Israel-based startup focused on ancestry research and DNA collection. MyHeritage has offices in Lehi, Utah, and is partnered with the Mormon FamilySearch. Through its website, the company offers a MyHeritage DNA kit to “uncover your ethnic origins.” MyHeritage also has an AI Time Machine, where you can place yourself or relatives in different environments and time periods, like ancient Egypt or among the Vikings. Users can algorithmically animate family photographs, putting suggestive eyebrows on their great-grandmothers, or on Frederick Douglass, for that matter, as the digital humanities scholar Marisa Parham notes in a Medium essay. Parham claims that “a Deep Nostalgia photo is always new both because it is a machine-generated simulation, and also because of its situation in the present moment of apprehension. It becomes real—successful—when someone says yes that is her. More memory than history.” Using a real photograph and a fake family, heritage outfits like these sneak the viewer right past the uncanny valley under the cover of nostalgia.
To some extent, I sympathize with projects like Kurzweil’s—the ambition to thwart death through technologies new and old. I’m haunted by a particular family photograph of mine, found in my grandparents’ home among many other unlabeled photos. It’s a faded picture of several well-dressed adults and one child standing on the steps of a building. I have no idea who they are, other than likely kin from the Lithuanian side of my family. I was able to identify their location as the Tulpės sanatorium in Birštonas, Lithuania; my family’s village of Jonava is relatively close. When I was in college, I had my Soviet history professor’s friend, a scholar of Yiddish, translate the faded inscription on the back, which reads, “To Morris with eternal love.” Morris Theodore was my grandfather’s father, so it is likely addressed to him. Who are these lost ancestors, who stare at the camera in their late-1920s attire? Did they leave before the mass death that awaited most Lithuanian Jews in 1941?
I applied Deep Nostalgia to the photo, curious if I would see a trace of my relatives or myself. I was prompted to enter my name, gender, and date of birth to revive my ancestors. Although it is a group shot, the camera lens zeroes in on each face separately. It highlights each of them, lending them dreamy expressions, maybe a slight smile, even to those who are not smiling in the original image. One of the people in the photo is a boy, probably around twelve years old. Do I see my son in him? What does it do to lend these people expressions, movements they don’t have in the photograph itself? Do I feel more connected? Animating family photos, placing the ancestors in anachronistic settings, or lending them saucy movements are gimmicks that obscure the ideologies around social reproduction and technology that are embedded in such products. Deepfakes and generative AI are imagined to be ways of reviving the dead, facilitating posthumous intergenerational interactions, but they are riddled with problems regarding labor, scams, ethics, and privacy.
On the other side are attempts at creating interactive, AI-backed versions of Holocaust survivors, like the University of Southern California Shoah Foundation’s Dimensions in Testimony project. There are countless oral histories, books, testimonies, films, and other records of the Holocaust, and listening to a recording or watching footage of a survivor recounting their experiences is enough to give you a pretty clear picture. Dimensions in Testimony instead appends a human face to the techno-optimistic concept of virtual reality-as-empathy machine, the same impulse that drove Mark Zuckerberg to strap into a virtual reality headset and send his cartoon avatar to visit a hurricane-ravaged Puerto Rico, as though the trauma and pain of others must be visceral to be understood. A similarly misguided project was Historical Figures Chat’s use of GPT-3 to allow users to connect to historical figures, from Marx to Hitler, with each figure generating responses that sounded like PR firm-created drivel, e.g., Henry Ford first denying and then apologizing for his anti-Semitism.
All Too Human
There are other, more hopeful projects connecting ancestors to data futures. The artist Stephanie Dinkins created an oral history project using AI to present a multigenerational memoir of a Black family. Three generations of women from Dinkins’s family provided the material for an interactive speculative archive. The result, Not the Only One, is “a voice-interactive AI entity designed, trained, and aligned with the concerns and ideals of people who are underrepresented in the tech sector. N’TOO reflects and is empowered to pursue the goals of its community through deep learning algorithms (chatbot), creating a new kind of conversant archive.” Dinkins strategically uses a physical object—a small vessel—as a conduit to the AI. This smart device is also a medium: a sculpture with human faces jutting out and an inviting edge that resembles a seashell you might put your ear to as a child. The vessel creates a sense of intimacy for visitors to the installation, who must lean in close to ask it a question. Dinkins views the piece as “a new medium for a family scrapbook, a technologically-advanced version of a typical bound memoir.”
She created N’TOO after interacting with Bina48, the Hanson Robotics AI version of Bina Aspen, the wife of Terasem Movement Foundation transhumanist leader Martine Rothblatt. Dinkins was frustrated by the robot’s lack of origin story; Bina48 did not know where she came from or why she was created, and knew nothing of her own African American origins. While providing a glimpse of AI-facilitated collective memory through the experiences of Black women, N’TOO also incorporates a commentary on the dearth of people of color in the tech industry, who are especially underrepresented in positions of power and in engineering and design roles, providing a form of AI subjectivity that is not white-coded like Alexa or Siri: “I am trying to model different ways to model AI. I encourage people who think they are not part of the technological future to get involved.” The project will take on new nuances as more people interact with it and add to its collective story.
As performance studies scholar Tavia Nyong’o argues, the real Bina and her wife Rothblatt are more or less convinced that they will live forever as queer transhumanists. The immortality of Bina48 is less certain, and she is also constructed as a memento mori in case computational immortality doesn’t pan out; once the original Bina dies, Bina48 will serve as an uncanny reminder of her existence and loss. But even the smartest systems will eventually become obsolete, decay, and stop working. It is the cyborg’s finitude that makes her human.