One cool morning last fall I followed a canal from my hostel in Amsterdam to the chic DeLaMar Theater. The theater was once a Big Data archive, a custodian of the records of a Nazi labor program that sent Dutch civilians to toil in German fields and factories, and was firebombed by underground resisters in 1945. Five suspects were executed for that crime against quantification, now long forgotten.
A prim doorman waved me inside the lobby, where I gave my name and claimed my lanyard. The other arrivals, mostly middle-aged and business-suited, chatted cheerfully. By all appearances, no one was in the mood for grim reminders of the barbarity beneath the surface of civilization. No, today’s production at the DeLaMar was about the bold, boundless future soon to spread, eagle-like, across the horizon of the starry sky—a future in which every square inch of the planet will be recorded, measured, and analyzed; in which a race of genetically enhanced cyborgs will rule as omniscient philosopher-kings, mining asteroids, hoarding bitcoins, and going for years without bathing, thanks to an odor-eating microbiotic spray.
Those who will have to hit the proverbial dusty highway include skeptical journalists, “linear-thinking” academics who deny the new, “exponential” reality, organized labor, government regulators, conservationists, Luddites, and other obstacles to the reign of positive-thinking, entrepreneurial doers. In their future, after all, there will be no want, only abundance—a corporate cornucopia. Work will be voluntary. Thanks to ever greater extensions of technology, the doorman will be a friendly robot, and I won’t need a lanyard because a computer chip embedded in my flesh will continually record my identity and my emotional state. Sabotage will be impossible; resistance, futile.
Such heady visions made my caffeine panic acute. It was quite the ambitious agenda for a mere two-day conference, but unrealistic ambition was the coin of the realm at the Singularity Summit, which sprawled throughout six rooms of 7,330 square feet at the DeLaMar, not counting the 950-seat grand hall that was decorated like a Tron set for the occasion. The summit, which takes place once or twice a year, is a roving recruitment seminar for the California-based Singularity University (SU).
SU was founded in 2008 by two friends in futurism: Peter Diamandis, founder of the X Prize Foundation and the would-be Commodore Vanderbilt of private space colonization, and Ray Kurzweil, a noted inventor, big-selling author, and prophet of a technological end times. Kurzweil’s 672-page fever dream, The Singularity Is Near (2005), lent the institution both a name and a doctrine. By page twenty-one, the book predicts the imminent physical and metaphysical merger of humanity and computers, culminating in “epoch six” of the universe, in which the disembodied hive mind of what was once Homo sapiens transcends the laws of physics to unify the cosmos in eternal ecstasy.
Well, what can I say? Had I paid tuition for the full package, a ten-week SU summer program at a corporate park outside Mountain View, California, I would have been looking at a $30,000 price tag. The $2,500 entry fee for the Amsterdam summit was a steal—and I even got comped. My press badge also granted access to the VIP area upstairs, where I found coffee. A weary Dutch publicist hired for the summit told me nine hundred people had registered, including seventy-five journalists. It was the largest SU event ever. As someone who hopes to live in the future, I wasn’t the only one feeling called to, you know, climb the summit of omniscience with those who claim to see the path forward so clearly.
As I had contemplated my trip, I had pictured a modest gathering of earnest hacker types, too New Agey for Comic-Con and yet too dweeby for South by Southwest. My naïveté was instantly punctured by the crystal chandeliers and red velvet and custom tailoring inside the venue. I should’ve known the future would be copacetic with corporate luxury. Other attendees, I learned, shared my fascination with the looming Singularity, but some seemed drawn by the prospect of scouting for fresh intelligence on new technologies. At least that was the rationale for their employers to pick up the tab for a midweek conference in Europe’s adult Disneyland.
The summit’s opening video had all the technomystical bombast of a Syfy original series, and quickly gave way to a banal, buzzword-packed, thanks-to-our-sponsors speech in the customary style of the expense-account set. The massive multinational auditing firm Deloitte sponsored this summit because, in a tall glass tower somewhere, accounting majors were debating the actuarial implications of indefinite lifespans and running cost-benefit analyses on extraplanetary mining expeditions. Things got more interesting when SU chief executive Rob Nail took the stage to perform an initiation of sorts. He described himself as “one of those geeky Silicon Valley guys” and dressed the part in black denim jeans, a plaid blazer, and a pair of Google Glass that pinned back his shoulder-length hair. A Stanford-educated robotics designer, Nail sold his company, then “gave up . . . to go surfing.” Attending an SU executive program ended this idle period and changed his life, like, forever.
“One of the caveats of getting involved with Singularity University is it’s very difficult to get back out,” Nail said. “You’ll find out why in a moment.”
The screen behind him displayed a black-and-white pattern like a melting checkerboard—an optical illusion. “In this photo is an image of a dog. Anybody see the dog? I’ll give you a quick hint. There’s the dog,” Nail said, showing little patience for stragglers. “Everybody see the dog now?”
“Research actually shows that in thirty years if you go back and see this image, you’ll still see the dog. In fact, you can’t not see the dog,” he went on. “So, without your permission, I literally just physically rewired your brain.” He paused to let that sink in. A few people chuckled. “Over this next two days, we’re going to do that on a totally different level. We’re going to totally rewire your brain.”
“You have a choice now,” Nail continued. “I know most of you have paid a lot of money, so you’re not going to walk out the door now—but this is the red pill, blue pill question, okay?” It was taken for granted that everyone had seen Laurence Fishburne dose Keanu Reeves in The Matrix. “Everybody signed up for the blue pill? Or the red pill, I guess,” Nail said, fumbling the reference. “If you take the blue pill, you can leave now. Thank you very much. We can send you the notes. Okay. Everybody signed in. That was my disclaimer, so you can’t complain to me later.”[*]
“I figure, since you’ve taken the red pill, why don’t we just jump straight into the rabbit hole,” Rob Nail went on in his metaphor-mixing exec-speak. “What if we could just disrupt reality completely?” Nail’s profound hypothetical might’ve inspired an interesting discussion about the limits of perception, or at least some fun stoner talk, like, What if The Matrix is real, bro? Instead, it segued into an overview of virtual reality gaming headsets—a product pitch. What do you know, a product from Google, an SU sponsor, came out the winner.
“I think it’s probably the most exciting time that all of humanity has ever lived,” Nail said. He held up his smartphone. “This is not a phone,” he said solemnly. “This is a teacher, this is a doctor, this is so many other things. And the fact that there’s tens of millions of apps now means we’re in a totally different realm altogether . . . which creates a lot of opportunities for all of us.”
“Ultimately,” Nail concluded, “we’re here not just to help you forecast where the future is going, but to engage in steering towards one that we all want to live in.”
For what it’s worth, I think they mean well. That is, I wouldn’t attribute exceptionally nefarious motives to anyone I met from SU. I merely suggest that their program pursues undesirable objectives in ways likely to produce disastrous outcomes.
Nail’s high-blown lunacy was unexceptional among the SU speakers. What set his enthused disrupto-babble apart was its relative benignity. Later that first morning, however, I felt the tenor of the proceedings change from silly to scary. The turning point came with the onstage arrival of SU past president Neil Jacobstein, a wry veteran of the revolving-door establishment who boasts consulting stints at NASA, the Pentagon, and large weapons contractors like Boeing. Jacobstein’s matter-of-fact delivery belied the startling contents of his speech on artificial intelligence. He had a way of normalizing the outlandish and of creating the feeling that it’s crazy to cling to romantic biological defects like mortality.
“Let’s talk about augmentation. Do we really need to augment our brains? The answer is yes,” Jacobstein said, brooking no incredulity. “The human brain hasn’t had a major upgrade in over fifty thousand years. And if your cellphone or your laptop hadn’t had an upgrade in, say, five years, you’d be appropriately concerned about that.”
On the other hand, I thought, could you trust an Apple Store Genius with your mind?
But Jacobstein was already galloping off to address planned upgrades to the organization of work. He cited a recent study claiming that within the next twenty years, 47 percent of U.S. jobs would be subject to some kind of automation. Certain professions, he noted, have obstinately resisted the trend—lawyers, teachers, and doctors. “I would say that assuming zero new technology breakthroughs, professional work—white-collar work—is ripe for disruption,” Jacobstein said. “White-collar workers often have the reaction, ‘Well, all jobs can be automated—except ours, of course.’ But they’re not immune either.” Doctors, for instance, might be put out by the X Prize to develop Star Trek-style medical “tricorders” that can diagnose diseases with a wave of the hand. At first, Jacobstein said, “we’ll probably deploy them in places with low guild protection, like Africa, rather than places with high guild protection, like the U.S.”
Then Jacobstein shared with us an illustration of a ginormous silicon brain floating through a cloudy blue sky. Perplexingly, a bright red line crossed the brain. There was no caption. But Jacobstein helpfully explained:
If you unwrap my neocortex or yours, it would be about the surface area of a large dinner napkin. One could imagine, without having the confines of the human skull, that we could build an artificial neocortex with the surface area of this auditorium. Or Amsterdam. Or Europe. Or the planet. And you might think, “Wow, that’s a little excessive.” But it turns out it’s not excessive, and the reason is . . . the accelerating wave of human knowledge.
See, in order to contain the unforeseeable side effects of this nigh-infinite computational superpower, Jacobstein went on, society must adapt, and soon. We must have the courage to think up antivirus-style surveillance programs that could, say, be deployed to monitor the world for “anomalies and misbehavior,” preserving order and preventing sabotage.
Notwithstanding the dubious record of desktop antivirus software—and putting aside the contentious matter of what constitutes misbehavior—this struck me as a bad idea. To be sure, I could see the appeal of law enforcement via algorithm, at least from the perspective of the rulers of an inescapable totalitarian superstate. For them, it’d be snazzy.
When I interviewed Jacobstein later, we had this exchange:
Me: One of your slides was a superintelligent artificial neocortex the size of a planet.
Me: Maybe it’s my feeble human brain, but I just can’t imagine what sort of governing structure could contain such a thing.
Jacobstein: I don’t think that the issues are [that] simple. . . . People, because we’re not familiar with that kind of computing power, tend to be afraid of it. . . . That doesn’t mean that there aren’t real risks associated with AI. It does mean that before we decide we need to be afraid of something, what we should first do is do our best to understand it, to focus on the upside potential and what it can do in implementing our own concepts of morality. When I look at the world today, I see a lot of problems that offend me morally. . . . All of those problems are failures of our ability to implement our own moral code. Exponential technologies like AI are going to give us an opportunity to improve our response.
Maybe I should have taken the blue pill and skedaddled.
Not Nearly Near
As the discourse burbled on and slideshows ticked ahead, one thing became clear: Singularity University was no such thing. As an unaccredited, for-profit enterprise with a pedagogy of salesmanship, SU could pass for a university only in a society that sees the institutions of knowledge as fountains of cash. Hence SU founders flaunt their elite credentials to the point of exaggeration. For instance, one SU brochure lists its chancellor as “Ray Kurzweil, PhD.” Kurzweil does not have a PhD, though he has used the appellation elsewhere. He does have (at last count) eighteen honorary doctorates. These are awarded by administrators, not faculty, often to secure a speaking engagement. (Mike Tyson, Jon Bon Jovi, and Kermit the Frog also have honorary doctorates.)
What is SU, then, if not a university? In keeping with the style of the symposium, I shall present my findings in bullet points:
• An investment hustle run by mad scientists for credulous suits.
• A religious revival for confirmed atheists and progressive capitalist utopians.
• A blue-sky strategy session for cold-blooded technocrats who see futurism as a game not of predictions, but of power.
Fine, bone-deep American traditions, all.
Founded as a nonprofit, SU was later restructured as a California “public benefit” corporation. This meant seeking funders beyond the Silicon Valley faithful and hitting up hidebound corporations like Deloitte. Based on ticket prices and attendance, I figured SU grossed $2 million on the Amsterdam summit, hardly enough to witness the future being born. According to Nail, SU counts three thousand alumni and six hundred-plus faculty speakers (none tenured, obviously). The university runs a startup incubator, and its corporate friends enjoy generous in-class promotion.
Not a university, not destined to be a great business, SU was not imminently dangerous, either, in the manner of a Heaven’s Gate or an Aum Shinrikyo. I spent a lot of time looking around during my visit but found no body parts in the summit’s freezer rooms.[**] I simply came to think that the Singularity holds more interest as a cultural byproduct of our epoch of stagnation than as a scientific theory or business proposition. Taken as pure entertainment, it’s even enjoyable. Imagine! A perpetual revelry of wishes fulfilled; a never-ending party with all of your favorite people, engrossed in far-out, hyperintellectual discussions and enjoying previously impossible sexual combinations. What’s more, this amazing future definitely makes room for you. Just like heaven, everyone can get in (except maybe Luddites who deny the gospel). Scottish author Ken MacLeod was onto something when he mocked the Singularity as “the Rapture for nerds.”
Kurzweil’s Singularity is an overheated white paper by a zealot for the American dream of luxury and convenience. Certainly, his success owes something to the American fondness for homespun prophets. His specific focus—artificial life—springs from a much older and apparently undead tradition. Ilia Stambler, an Israeli academic who has written a book-length history of “life-extensionism,” names as precedents the Greek myth of Prometheus, the Golem of Jewish folklore, the occultism of alchemy and early Freemasonry, crude androids “allegedly constructed by Albertus Magnus and Descartes,” and “Wolfgang von Kempelen’s mechanical chess-player (proved to conceal a man).”
Stambler is himself a transhumanist who belongs to a number of the same clubs as Kurzweil and other SU faculty, so it’s noteworthy that he concludes that the scientific pursuit of immortality is a “fundamentally conservative” enterprise. Kurzweil’s forerunners in this area were a truly mixed bag of nuts. Consider Nikolai Fedorovich Fedorov, a scary Russian Orthodox librarian whose 1913 treatise, Philosophy of the Common Task, made a Christian case for physical immortality and the resurrection of the dead—all of them. Fedorov wrote that humanity would achieve victory over its enemy, death, “only with absolute, patriarchal monarchy, with a King, standing in place of the [Heavenly] Fathers.”
Yet Kurzweil’s closest precursor may be Charles Asbury Stephens, a doctor and writer of young adult fiction who, with philanthropic backing, founded a clinic in Maine in 1888 to pursue his death-defying gerontological research. Stephens also published a number of books on his cellular theories of aging, including Salvation by Science and Immortal Life. He died in 1931.
To assess their record in scientific terms, the immortalists have a failure rate of 100 percent.
Sources aside, Kurzweil’s vision inspires strong feelings of wonder or terror, depending upon your persuasion. I imagine these are the same feelings that inspired phrases like “fear of god” and “old-time religion.” In the consumer culture that nurtured Kurzweil’s fame and regards his ersatz university as an improvement on the real thing, awe is a new iPhone; heaven is a holodeck.
And hell? Hell is not having the best new toy.
The Creepshow to Come
The show-stealer in Amsterdam was a big, Texas-born biotech scientist and engineer named Raymond McCauley. Prior to joining SU, McCauley cofounded BioCurious, a Sunnyvale, California, nonprofit with a mission to normalize technologies like genetic engineering by providing a low-cost “community biology lab for amateurs, inventors, entrepreneurs, and anyone who wants to experiment with friends.”
In his opinion, humanity was headed for subspeciation—a proliferation of separate and unequal races creating something like a cyberpunk version of Tolkien’s Middle Earth. He predicted that the prevention of childhood disease would open the back door to widespread human genetic modification. “I don’t know how long before we stretch that word therapeutics to mean enhancement,” he said. “That thrills and concerns me.”
McCauley himself inspired some thrills and concerns when he theatrically summoned two young transhuman activists to surgically implant an RFID chip—a short-range tracking device capable of storing information—into his hand. I think I speak for many in the audience when I say it was the most titillating act of cyber-exhibitionism I had ever seen. Rising from the operating chair, a still-bloodied McCauley fielded audience queries on the implications of the procedure, including one from some spoilsport who asked pithily, “No more natural selection?” McCauley replied:
We’re seeing a form of natural selection to select for success, economically, to select for people who are high-performance individuals. . . . We’re having an arms race with natural selection. Genetics is pretty slow, and individuals don’t advance, races do. But we’re now more in control.
I had a chance to sit down with McCauley later. I found him to be the most genuine and engaging person I met from SU. He had recovered quickly. He showed me the stout bandage on his hand and the smartphone app he used to control the embedded chip. The app bore an all-caps warning: “DO NOT FORGET YOUR PASSWORD!”
McCauley was still figuring out what to do with his new implant. “I could have some spot on the wall that looks like a blank spot, and I can use this to unlock a hidden door or passage,” he said. I asked him about the vibe at the SU campus. “In some ways, it’s exactly like being a sophomore in college with the three smartest guys and girls in the dorm, and it’s four o’clock in the morning and no one wants to go to bed,” he said. “It’s eighty people who are the smartest people in their countries, and they’re all riffing on each other, coming up with new ideas and trying to solve the world’s problems—and make a buck while doing it.
“Man—it’s so powerfully addictive, it’s almost like a drug,” he said. “It’s almost like being in a cult. If we wore silvery jumpsuits, I would be really worried.”
“Really?” I asked, scribbling furiously in my notebook. “You don’t go without food and then do a bunch of exercise, or do repetitive chanting—right?”
“We have plenty of protein,” McCauley replied. “Groupthink tends to be very discouraged. . . . But there is this real belief in technological positivism that sometimes approaches an almost religious fervor.”
There have been signs that the Singularitarians want to up their game. Last year, an SU staffer filed papers with the Federal Election Commission to form three separate fundraising committees, each of the kind known as super-PACs. However, the applications were withdrawn after a Center for Public Integrity reporter started asking questions. SU then claimed its employee had gone rogue and filed the papers without permission, contradicting the employee’s earlier statements that SU leadership was on board with the plan.
Campaign donations are for buying access to government. They are largely unnecessary when your people are already in government. Last year, Obama appointed former Google executive Megan Smith, who contributed a chapter to a recent SU book, as the White House chief technology officer. Google chairman Eric Schmidt was deeply involved in the 2012 Obama reelection campaign. “On election night he was in our boiler room in Chicago,” David Plouffe, a former White House adviser now working as a political fixer for Uber, told Bloomberg Businessweek. In December 2014, the Democratic National Committee invited Schmidt to a “victory task force” that will set strategy for coming elections. This is all relevant because Googlethink is, for all practical purposes, SUthink. The company remains a key university sponsor, and SU cofounder Kurzweil leads Google’s AI efforts. The company’s expansion from online advertising toward more ostentatiously futuristic projects, such as driverless cars and wearable computers, reflects its founders’ enthusiasm for Kurzweil’s vision.
Superhaving It All
Kurzweil’s partner and SU cofounder, Peter Diamandis, who has laid out his own vision of a world of Abundance (his first book), a world without haves and have-nots—only “haves and superhaves”—closed the first day of the Amsterdam summit. Already, the relentless optimist shtick had grown a touch predictable, and paying customers sitting in the audience had some questions, like, what about killer robots in the future? “I’m not worried about AI,” Diamandis said confidently. “Intelligence is a good thing.” What about wars, terrorism, civil unrest? “I’m not talking about happy, la, la, la, whatever. There will be turbulence on the road to abundance.”
“In the past we had revolutions,” one man asked. “What are the new solutions?”
Did you guess technology? Congratulations. “Governments don’t get disrupted gracefully,” Diamandis said. “The technologies you’ve been hearing about today are hard to regulate and they’re impossible to stop.” Although privacy was long gone, in his view, the new world of “perfect knowledge” and universal surveillance would—somehow, magically, counterintuitively—end tyranny.
We’re going to enter a period of time where little will be done in secret. We have three private orbital satellite constellations today; that will grow to at least five. We’ll have ubiquitous drones flying, whether or not they’re legal. We’ll have . . . cameras woven into your clothing. . . . The flip side of [the loss of privacy] is a more peaceful world and a world with less oppression.
The Orwellian dystopia is actually a utopia. Got it? The final question—and I’m sorry it wasn’t mine—was, “What if Google goes crazy?” Or edits the “don’t” out of its motto?
Diamandis said that he trusted Google to guard itself against “terrorists.” Anyway, he added, “There are other search engines, if Google went down.” I was dumbfounded by his ability to miss the point. Diamandis then threw out a few lines about the democratizing power of the Internet, before musing once more on the obsolescence of the current order:
All of our governmental systems were designed for governments of one hundred, two hundred years ago. . . . We had representative democracy, we didn’t have an actual democracy. It could exist right now, but it would basically destroy the political-legal system and the people in power today to do that—and so they will not let go.
Who did he think “the people in power” were, I wondered, if they were not his billionaire pals?
“We’ve run out of time,” the emcee cut in. “A huge round of applause for Peter!”
Diamandis spoke again at the summit’s keynote—which I missed, because I was conducting an interview in a side room. Later, I learned that he, too, had gotten an RFID implant onstage. I wondered if he had felt upstaged. Maybe this is how the future gets done, chip by chip. No problems. Everyone is doing it.
As I left the DeLaMar, I considered what to do with my final day in Amsterdam. I thought I might visit the Anne Frank museum, to cheer myself up.
[*] I would recall this roguish admission later, when I came across something Singularity’s seigneur Ray Kurzweil wrote about his own career: “One of the advantages of being in the futurism business is that by the time your readers are able to find fault with your forecasts, it is too late for them to ask for their money back.”
[**] Which isn’t to say the proverbial fridge is without a whiff of chicanery. Kurzweil has a questionable side business peddling unproven dietary supplements of the sort he consumes by the dozens every day. More troublingly, the FBI and SEC found massive accounting fraud at his former software company, Kurzweil Applied Intelligence, in the early 1990s, which led to prison terms for the co-chief executive and another top employee. Kurzweil was not charged. If he was truly oblivious to the dirty dealings at his namesake company, as he claimed, then he demonstrated cluelessness sufficient to discredit his prognostications.