“Sandhogs,” they called the laborers who built the tunnels leading into New York’s Penn Station at the beginning of the last century. Work distorted their humanity, sometimes literally. Resurfacing at the end of each day from their burrows beneath the Hudson and East Rivers, caked in the mud of battle against glacial rock and riprap, many sandhogs succumbed to the bends. Passengers arriving at the modern Penn Station—the luminous Beaux-Arts hangar of old long since razed, its passenger halls squashed underground—might sympathize. Vincent Scully once compared the experience to scuttling into the city like a rat. Zoomorphized, we are joined to the earlier generations.
But, I explained to my work colleagues as the Princeton local pulled out from platform eight and late-arriving passengers swished up through the carriages in search of empty seats, both the original Penn Station and its unlovely modern spawn were seen at their creation as great feats of engineering. Perhaps this was the price of engineering greatness: to reduce everyone in its orbit to mere animals. By this stage, only Sarah the Quant was paying attention. It was early June. We were on our way from Penn Station to our AI startup’s annual retreat in Princeton, New Jersey.
“Yeah that’s true,” Sarah said, nodding. “I guess I hadn’t thought of that before.”
Everyone loved Sarah. She wore black combat boots, listened to thrash metal, and was proficient in financial modeling. Sarah served as a bridge between the startup’s two main factions: the clever boys and the sandhogs. The clever boys (there were no women) were the engineers, most of them recent graduates of Princeton University’s program in operations research, responsible for designing the company’s tech “platform”; the sandhogs were the non-engineers tasked with making sense of the platform and getting people to buy it.
Sarah was mostly a sandhog—her job was to write case studies showing how data pulled from the platform could make investors money in the capital markets—but her stats chops and taste in music meant she was taken seriously among the clever boys. It did not hurt that Sarah was very smart. We all knew this because Jim Shinn, the startup’s CEO, reminded us of her intelligence almost daily. She was, he pointed out in emails, over Slack, at group meetings, and on client calls, “our clever quant,” “very clever,” “our brilliant FX strategist.”
To be fair, Jim, a tall sixty-something man who held his limbs loosely and always looked like he’d just moisturized, said similar things about everyone. Immoderation in the praise of his employees was the central plank of Jim’s management philosophy, perhaps the only plank. The young, twenty-something Princetonians who formed the core of the engineering team: “brilliant,” he said, “exceptionally talented.” The chief data scientist: “possibly the smartest guy I’ve ever met,” Jim told me—never mind that the company’s Rhodes scholar summer intern was also, possibly, the smartest guy Jim had ever met. The head of European operations, who resigned four months into the job because he realized that the company, not being ready to expand to Europe, had no need for a European head of operations: “formidable,” Jim assured us, and possessed of “superlative strategic clarity of vision.”
Bush-Branded Brilliance
Even I was touched by the grace of Jim’s exaggeration, transformed suddenly from a mid-thirties drifter with a LinkedIn account in the bottom quartile of profile views into “our very clever Aaron,” the “very talented (Aussie) Aaron Timms,” a “guru who is both a fine writer and a skillful modern media publicist in his own right,” “a very fine FX and yields journalist,” “among the most nuanced observers of data sets,” and “very, very smart.” Beyond the unarguable fact of my Australianness, none of this was even remotely true. My observation of data sets usually stops the moment I’ve noticed they’re there.
But Jim spoke with authority: the authority of age, the authority of wealth. Jim Shinn was the establishment; government, Silicon Valley, and the Ivy League were his ballasts. A graduate of Princeton, Jim got his start in the 1970s, as an analyst in the State Department’s East Asia Bureau. In 1983 he left the public sector to start a telephony company that was, people around the startup said, a pioneer in the development of voicemail. In the 1990s the company went public; later it was sold to Intel for nearly a billion dollars. Newly wealthy, Jim earned a pair of higher degrees (a Princeton PhD, a Harvard MBA), then returned to government, working as a national intelligence officer for Asia at the CIA and then as assistant secretary of defense for Asia under George W. Bush. In 2012 he’d been part of Mitt Romney’s national security advisory team. Having given back, he was now again ready to take. Predata, the artificial intelligence startup for which we all worked, was to be the glorious concluding chapter of this textbook study in American success. And here he was, a multimillionaire and higher-degree hobbyist, telling us that we were brilliant.
In this overheated economy of praise, we all had to work hard to stand out, to prove our intelligence was real and not a mere artifice of rhetoric. Everyone in the company had an intelligence-signaling gimmick. Sarah demonstrated her brilliance by turning the company’s mostly haphazard and nonsensical data into semi-coherent financial trading models. The clever boys larded their conversation with reminders that they went to Princeton. The head of financial sales frequently referred to his old workplace, a successful political risk firm run by celebrity political scientist Ian Bremmer. (“I loathe Ian,” Jim once told me. “His combination of arrogance and ignorance drives me batty.”)
My own trick was less subtle. I posed as the company’s speaker of uncomfortable truths. This involved swearing a lot and engaging in long, mock-academic soliloquies about engineering and the state of the world. The “sandhogs” speech, delivered at the center of New York’s crumbling mass-transit hub, was my latest variation on the theme.
Implied in the contest between these grasping bids for approval was a bigger question: in a world of ubiquitous brilliance, what was intelligence? If everyone in the company was brilliant, no one was. Jim was a Bush 43 Republican, part of the generation whose “intelligence” gave us Saddam’s WMDs and the war in Iraq. Were we truly accomplished, or only in the “Mission Accomplished” sense? Were we doing a heck of a job, or a “heck of a job” like Brownie? Artificial intelligence was the company’s reason for being. But what if this company, and others like it, was not at the vanguard of human progress but catapulting us instead toward a new era of idiocy? What if the company, this collection of CEO-stamped super-talents, was like the GOP class of ’04 from which its leader issued: heroically, generationally dumb, thrust into the center of the action not by the quality of its ideas but by the immovable reality of American power and wealth?
Smart, Bombed
Picking Silicon Valley’s future winners is still, some six decades since the birth of modern venture capital, more art than science. Most VC-backed startups don’t make it past the first round of funding; a good number of the AI companies financed this year will be dead by the fall. The story of Silicon Valley is as much about donkeys as unicorns, entrepretendeurs as entrepreneurs. Like all good stories, this story has the capacity to surprise. Many of the tech industry’s most memorable flops were at one point seen as great successes. Juice machine startup Juicero attracted $134 million in venture capital funding before a story by Bloomberg mocking its “juice pack” technology sent the company crashing; blood testing startup Theranos, once valued at $9 billion, is now worth less than 10 percent of that figure and has only dodged bankruptcy thanks to an emergency loan of $100 million. Thousands of tech ventures founded this year will meet a similar end to these high-thrust flameouts but will avoid the scrutiny: no media reports, no dragging tweets, no trial by meme. Failure, when it comes, will be quiet and anonymous. This part of Silicon Valley’s story remains little told.
Jim founded Predata on the bold promise to help the world get better at predicting the future. It’s been almost two and a half thousand years since the oracle at Delphi, with Persian forces approaching, advised Themistocles and the Athenians to “await not in quiet the coming of the horses” and retreat instead to Salamis. Human predictions have struggled to maintain that early high standard. Forecasting is a notoriously tough business. Most of last year’s market predictions from financial analysts turned out to be wrong, and almost every political scientist of repute misjudged the outcomes in the 2016 Brexit referendum and the U.S. election. Many of the deadliest terrorist attacks of recent times—Paris, London Bridge, Brussels Airport—have been cast as intelligence “misses,” as failures of prediction.
Predata’s technology proceeded from a simple observation: the Arab Spring showed that communities (communities of protesters, in this case) organize online before they organize on the ground. (Though this insight, too, is at least as much urban legend as demonstrated fact.) The idea of turning this observation into a business came not from Jim, he told me, but from Daniel Nadler, CEO of much-hyped (and much-hated) financial technology startup Kensho, writer of imagined ancient love poems, and semi-professional avoider of mirrors. Sometime in 2014, Nadler, in whose startup Jim had invested, came up with the idea of building a signal based on social media pages about the mining sector in Chile, the world’s largest copper producer. By observing fluctuations in the signal, Jim said, Nadler was able to anticipate mining strikes and buy copper ahead of expected supply disruptions, which would be likely to push the commodity’s price higher. This, apparently, earned Nadler a handsome profit. I was never able to confirm this story with Nadler, and Jim himself became more vague on the origin story once Predata got off the ground.
From misty origins a business emerged. The company’s main epistemological bet was that the history of social media could be studied to predict the future. Through the application of machine learning, a branch of artificial intelligence, social media signals could be used to predict events—all sorts of events, from protests and strikes to terrorist attacks, election outcomes, and financial market moves—before they materialized in real life. The system’s algorithm was trained to get better at recognizing the characteristic patterns of online activity that precede events, and to alert users whenever those patterns were beginning to recur. This “predictive intelligence” tool, as Jim called it, was then to be packaged and sold to government intelligence and defense agencies, hedge funds and investors, and a host of other deep-pocketed corporate worthies.
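To make the mechanics concrete, here is a minimal sketch of the kind of pipeline that description implies—invented data, invented parameters, and no claim to be Predata’s actual code: a classifier trained on trailing windows of a social media activity series, labeled by whether a known event followed, then used to score the present.

```python
# A sketch, not Predata's system: featurize trailing windows of a daily
# activity series, label each day by whether a known event occurred,
# train a classifier on history, then score the most recent window.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 14  # days of signal preceding each candidate date (an assumption)

def featurize(window: np.ndarray) -> np.ndarray:
    """Summarize one window of daily activity counts."""
    return np.array([window.mean(), window.std(), window[-1] - window[0]])

def build_dataset(activity, event_days):
    X, y = [], []
    for t in range(WINDOW, len(activity)):
        X.append(featurize(activity[t - WINDOW:t]))
        y.append(1 if t in event_days else 0)  # did an event fall on day t?
    return np.array(X), np.array(y)

# Toy history: two years of chatter, with ramps of pre-event activity
# inserted before four known event dates.
rng = np.random.default_rng(0)
activity = rng.poisson(20, size=730).astype(float)
event_days = {100, 250, 400, 600}
for d in event_days:
    activity[d - WINDOW:d] += np.linspace(0, 30, WINDOW)

X, y = build_dataset(activity, event_days)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The "alert": does the latest window resemble historical pre-event patterns?
risk = model.predict_proba(featurize(activity[-WINDOW:]).reshape(1, -1))[0, 1]
print(f"pre-event pattern probability: {risk:.2f}")
```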
In July 2016, fresh from a flukishly correct prediction of the outcome in the Brexit referendum, the company raised more than $3 million in venture capital. That’s a small sum by Silicon Valley standards, but for a company that was little more than an idea and some people, it represented a major declaration of market faith. Predata’s funding was part of a surge in capital toward the machine learning and neural net technologies considered the core of modern artificial intelligence. In recent times, these technologies have given us self-driving cars, cancer-detection imaging, and instant translation earbuds; they are now the roost of Silicon Valley’s wildest imaginings.
Future Shtick
Global venture capital funding for artificial intelligence startups has increased more than twenty-five times over since 2012. In 2017 it reached $15.2 billion, according to research firm CB Insights, with half of that money flowing to startups in the United States. This represents an extraordinary comeback for a technology that, by the early 2000s, was something of a museum piece. The original ambition of AI was to build machines that could replicate the intelligence of a human being. After promising early developments in the 1960s and 1970s, the field stagnated through the final decades of the twentieth century—a period known as the “AI winter”—as developers struggled to realize that ambition. Improvements in computer processing power over the last decade, however, have brought the original vision back to life.
A computer system that can predict the future fits seamlessly within AI’s new hosanna narrative. Predata began life dreaming big: big ideas, big buzzwords, big markets. Investors in the financial industry alone spend billions of dollars every year on technologies that can help them get an edge in the markets. Many of these technologies, like Predata’s, will be unfamiliar and unproven. Predata’s financial backing came from a consortium of VC firms and private investors, including the Dallas-based hedge fund manager Kyle Bass. Bass is a name in the financial industry, best known for being a “China bear” boldly forecasting the coming collapse of China’s high-growth global expansion. His investment carried more than mere monetary value; it signaled that Predata was taken seriously among people with power.
But by this early summer day in 2017, a year on from the funding round, that early goodwill was gone. “Retreat” was not just the name given to the overnight gathering at Jim’s Princeton house; it also perfectly described the state in which the company, running out of money and walking into repeated rejections, was fixed. What Predata had: a thesis, a platform virtually no one used, data, twenty employees, an office in downtown Manhattan. What it didn’t have: more than one customer, a product. Eventually, it became clear that the company’s real problem was the indulgent relationship between its absentee CEO and a group of callow, entitled engineers allergic to criticism. We were, I repeated to the group assembled on the train, cooked.
Innovation for What?
Josh, the head of business development, interjected. Because of his brief—selling our hypothetical suite of software products to a vast array of prospective clients, who proved to be still-more hypothetical—Josh exuded a naïve optimism about all things Predata. “This could be a really good opportunity to think about the data,” he said. “I think the dev team have been working on some really awesome new features.”
An opportunity to think about the data. Everyone nodded as Josh said these words, heads bobbing in time with the train. But “thinking about the data” was probably something Predata’s founders should have done before they incorporated, started hiring, and told people they had data worth buying. The name “Predata” was intended as an elision of “predictive data.” In another sense, though, the company was authentically “pre-data,” in the way that startups ahead of financing rounds are often said to be “pre-money.”
Jim spoke with the kind of lozenged patrician purr that suggests extreme wealth and a dangled, slightly weary cosmopolitanism. Whether explaining “how anticipatory intelligence works,” offering his views on the war in Afghanistan to Charlie Rose, or enticing Princeton undergrads to take his class on “radical innovation in global markets,” Jim put the purr to good use. He was obscure enough not to be a fully public figure, but sufficiently well connected to have access to the halls of American influence.
Though the original inspiration for the company came from the mirror-avoiding poet Nadler, it was with one of the computer science majors who’d been a student in his class on radical innovation that Jim started Predata in mid-2014. Later, he brought on an old sidekick from the voicemail company as his operational enforcer, as well as a handful of Princeton engineering grads. Sagacious grayheads in the corporate suite, young talent on the factory floor. One of the engineers devised a tagline for the company: “The future may surprise you. It shouldn’t.” Who doesn’t want to predict the future, especially if the prediction can help prevent a terrorist attack? Whether hedge funds or government agencies, the intended consumers of this technology were uninteresting. The technology itself was anything but—not quite a barnburner, but at least a barn warmer.
Mixed Signals
There was only one problem. It didn’t work. It turns out that historically, there’s only been one instance in which a community has organized online to any significant degree before organizing on the ground: the Arab Spring. For the other types of events that Predata was interested in predicting, the clever boys soon discovered that the connection between online discussion and real-world activity is slim, at best, and unpredictably distributed.
I joined Predata in late 2015. By that point I’d known Jim for a couple of years, mostly through Institutional Investor, the desert-dry financial publication we both wrote for (I as a staff reporter, Jim as a regular contributor of long screeds about geopolitics and the markets). Jim emailed me regularly to offer compliments on the turgid, little-read pieces I produced for the magazine. “You are a remarkably talented analyst!” he wrote in 2013. “Great article!” came another message in early 2015. When Jim introduced his new venture to me over email in September 2015, I was caught doubly by surprise: first, by the sudden break in the oiled chain of compliments to which I’d become accustomed, and second, by the grandness of Predata’s claims about its technology. “Interesting,” I replied, “though I do wonder how genuinely anticipatory the signals are.” “Come on up to Predata some time and I’ll show you how it works,” Jim wrote back. “It really does, it’s amazing.”
A few months later I left Institutional Investor and joined Predata as a consultant; within the year I was a full-time employee. There was a non-disclosure agreement attached to my consulting contract. The letter offering me full-time employment in mid-2016 superseded all previous agreements between me and the company, and stated that I would be sent a new NDA. I never received the new NDA.
As “director of research,” a grandiose title to which I had no real right, my task was to “tell the company’s story,” Jim explained. In theory, this meant I would turn Predata’s signals into coherent political and market analysis. At the time I joined, the company had no customers—a not unusual state of affairs for a young startup—and was being financed out of Jim’s pocket. Then came the first customer: Bloomberg, the market data behemoth. Then came millions in venture capital funding.
By the time we arrived at the Princeton house, though, the $3 million was almost gone, doobied away on a hiring spree and the $500 tasting-menu dinners that sustained a pointless attempt to expand into Europe. Predata had one customer—and that contract was about to end. The clever-boy engineers were still no closer to validating the thesis of the company, that machine learning can help to accurately predict real-world events. The pressure was beginning to tell.
Money in Repose
Jim’s house was set in the woods outside Princeton, a Northeastern pastoral latticed with narrow, empty roads. Elegant entrances to long, tapering gravel driveways snaked off these back routes, which survived as if under perpetual threat of invasion by thick platoons of conifer and oak. Money looked different out here. The smaller and more discreet the driveway entrance, the wealthier the house’s occupants.
Jim’s place, by contrast, was fairly ostentatious—the home of a man who’d arrived at money by his own hand, and possibly by accident. A sign at the entrance to the driveway announced, in capital letters, “SHINN VILLA.” The original core of the house was built in the 1720s. Many owners had extended the residence and added their own flourishes in the centuries since.
“Welcome to la estancia,” Jim announced on our arrival, his hands full with secateurs and chopped stems as he greeted us at the door. He elongated the final “a”: estanciaah. “I’m doing my favorite thing—cutting flowers!” Jim told us SHINN VILLA had once been the home of Christian Gauss, thesis adviser and mentor to the undergraduate F. Scott Fitzgerald. “I bought it off a hedge fund guy who was really into horticulture,” he added. “The peonies in the front yard are one hundred years old.”
Once we were inside, Jim—still sweating from his gardening exploits—invited us to move outside again, this time to the patio, a stone apron with bird shit-splattered deck chairs and iron furniture from which it was possible to take in the full sweep of the property’s grounds: sloping lawns, a tennis court, a pool, a violent and unexpected eruption of roses. It was a New World pantomime of English old money sullied by sudden intrusions of gauche Californiana (an organic vegetable garden, a hot tub).
There was a silence as we all took up positions on the deck chairs and Jim disappeared back inside. No one drew attention to the fact we had begun the retreat by arranging ourselves on a surface of dried shit. Jim’s full-time residence was a multi-story apartment on Fifth Avenue on Manhattan’s Upper East Side. John, the head data scientist, who’d taken Jim’s class on radical innovation as an undergrad, told us that Jim only came down to Princeton and stayed in the Gauss house the nights before he taught classes. The rest of the time the house was empty, save for Leni, the Latina housekeeper, and Jim’s dog, a fifteen-year-old diabetic cairn terrier-pug cross-breed called Toast.
“Apparently the garden has some of the oldest Asian trees in North America,” said Matt, the head of financial sales.
“Who maintains the grounds?” I wondered aloud.
“Money,” answered John.
The Easy Way
At the beginning of my time with Predata, the data scientists and I had clashed over the best way to interpret the company’s largely uninterpretable signals. John had the puppy-fat face of a child, though by temperament and manner (irascible and impatient) he was closer to an old man. He had a pronounced and peculiar antipathy for BuzzFeed technology reporter Will Alden, whom he referred to exclusively, with scare quotes, as “journalist Will Alden.” From what I could gather, this animus derived from Alden’s bothersome professional habit of writing stories about Palantir, the secretive Peter Thiel-led tech startup, now valued at $20 billion, that performs data analysis for much of the U.S. federal government—and for which John had previously worked. I found John irritating; I’m sure he thought the same of me. Over recent months, however, as I’d come to see that the company’s technology was largely bullshit and moderated the ambition of my analytical claims accordingly, I began to sympathize with John’s position—a position born of skepticism about the power of the platform he himself had built. A détente set in; the head data scientist and I were almost friends.
Jim reappeared, carrying green cushions for the shit chairs. “These cushions fit the deck chairs perfectly,” he said, placing a couple down for his own use and offering none for anyone else. A dozen of us—the company’s “brain trust,” according to Jim—had assembled for the retreat, at which we were to take stock of the company’s progress and plan for the year ahead. Jim arranged himself to better face us, his arms placed above his head, wrists cocked. He was, he explained, hung over. The previous day, his wife had graduated from the PhD program in architectural history at Columbia University; the two of them, with their family and friends, had spent the night celebrating at the PhD terrace atop the Dream Hotel—“the one in midtown, not downtown,” Jim added.
“What did she write her dissertation on?” someone asked.
“I guess I should know this . . .” he replied, smirking, before letting the back half of the sentence slide out the side of his mouth, unvoiced.
The smirk disappeared and Jim turned serious. He explained that even though the company was just two months away from exhausting the last of its capital, there was no need for alarm. Further funding would soon be secured. “We’re doing this the easy way, by going back to our existing investors and asking for more money,” Jim continued.
Sitting up, he concluded, “We might have to take some pain on the valuation, but the good thing is we still own over 80 percent of the company.” As the words “80 percent of the company” came out of his mouth, he snort-smiled without showing any teeth and his eyes, widening, darted wildly around the group in search of approval. He seemed pleased. In time, it was not hard to see why. Jim got to his feet. “Lunch will be served in ten minutes.”
Among the Romantic Egotists
“The test of a first-rate intelligence,” F. Scott Fitzgerald said, “is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.” Fitzgerald embodied a certain idea of the prodigy in early twentieth-century America. His semi-autobiographical first novel, This Side of Paradise (published when he was twenty-three), which told the story of a young writer’s education at Princeton, brought him sudden, near-universal acclaim. The rest of his career, even through the Gatsby years, became a long and increasingly frustrated attempt to rekindle that first precocious blaze of success. Today our image of the Jazz Age—as an era of giddy, libidinous self-enrichment and cultural exploration—is inseparable from the figure of Fitzgerald, the brilliant young man who lived, if only briefly, his generation’s most interesting life.
From his correspondence it’s clear that Fitzgerald intended his definition of intelligence as little more than a self-description. Shot through the letters he sent as a young man to family, friends, editors, and agents are equal measures of self-doubt and self-regard; he saw himself as both a generational talent and a cultural waste of space, often within the same page. One moment he’s lamenting his “flabby semi-intellectual softness,” the next he’s saying of his (rejected) early manuscript, The Romantic Egotist, “no one else could have written so searchingly the story of the youth of our generation.” Chronic equivocation—on the value of a Princeton education, on the politics of selling out, on the meaning of success, on the quality of his own work—was the distinguishing mark of Fitzgerald’s intelligence.
These days, literary prodigies are fairly rare; the English-speaking world has produced few so far this century. It’s instead to the world of technology that we must turn for the richest examples of what it means to be young, brilliant, and successful today. Jeff Bezos, Mark Zuckerberg, Larry Page, and Sergey Brin . . . Silicon Valley is an empire of aging prodigies. By the power of their example these demiurges have come to dominate our collective sense of what it takes to be smart: a mastery of numbers, proficiency in STEM, the subordination of empathy to data. Intelligence today is their type of intelligence: tech-telligence. But where was the intelligence when Zuckerberg—or Zuckerberg’s button-bright cartoon avatar—took to Facebook Live late last year and introduced his company’s new VR tool against a backdrop of hurricane devastation in Puerto Rico? What kind of intelligence guided Marc Andreessen toward the claim that colonization was good for India, or Elon Musk to his bizarre crusade against public transport? The fuss over these snafus was brief and quickly forgotten. In response, Andreessen issued a smiley face-adorned tweet of apology—a classic of the “I’m sorry if you were offended” genre. Musk weakly fought back. Zuckerberg said virtually nothing. Not one of them recognized, in public at least, that what he’d done was not simply insensitive and regrettable but also, and above all, supremely idiotic.
From the evidence of these examples, all three men would fail Fitzgerald’s test of intelligence. “Any city gets what it admires, will pay for, and, ultimately, deserves,” the New York Times editorialized in October 1963, as demolition of the old Penn Station began. “We want and deserve tin-can architecture in a tinhorn culture.” Perhaps the intelligence of Silicon Valley—vacant, arrogant, unfeeling, artificial—is simply the intelligence we deserve. But that intelligence cannot flourish without enablers.
Perfect Nonsense
Jim’s eyes were half-closed and he was shaking his head with a disbelief that verged on the erotic. “That is probably the most perfect description I’ve ever heard of the challenge facing this company,” he said.
We’d left the shit chairs, eaten lunch, and were seated in a circle in the library of the Princeton house. Dakota, the company’s twenty-two-year-old data scientist, had finished explaining the point of data science in a world of unstructured information. “A good way to view the job of data science is as a means of turning clients’ attention—their cognitive and organizational resources—and intention—their desire to fill holes in a knowledge base—into information,” he’d said, stressing the hard “a” at the beginning of “attention” and the “in” of “intention.” Earlier John had argued that Predata needed to start calling itself “a data science company.” Dakota’s sermon was designed to explain what that involved.
With an IQ greater than 180, Dakota had been deemed “profoundly gifted” since childhood. As an eight-year-old, he tried explaining the concept of absolute zero to his mother; a year later, he began attending undergraduate classes at university. Local newspapers covered him like they might the birth of an exotic species in a zoo.
Dakota exercised at his desk throughout the working day by shoulder pressing a set of small dumbbells. I liked him. He was widely viewed as the cleverest of the company’s clevers. But much of what he said was needlessly convoluted; Dakota was used to being treated as a very smart person but not, apparently, to being challenged. On this occasion, there was a simpler way to convey what he meant by his impressive-sounding distinction between “attention” and “intention”: the challenge facing the company was to give people shit as a way to help them figure shit out.
But in the delicately coded universe of Predata-speak, things could never be put quite so plainly. Even the company’s management style was a kind of evasion. Every entrepreneur today wants to be agile, every startup lean. Jim was so agile he’d managed to pull off the trick of running a company while almost never appearing in its place of work. The company’s office was spread across a handful of claustrophobic rooms in the WeWork in Soho, the fashionable neighborhood in downtown Manhattan that Jim described in emails to clients and visitors, variously, as “trendy Soho,” “barefoot Soho,” and “breathlessly chic Soho.” On the rare occasions he materialized in the office, he never stayed much longer than an hour. Easily bored, he would often lose the thread of conversation in meetings, becoming absorbed instead in some news app or the emoji keyboard on his phone, or walk out of meetings altogether within minutes of them starting. Later, you’d find him holed up in his office in the dark, answering emails or eating soup, or fixed in place at the second floor urinal, legs spread wide, letting out a deep and satisfied sigh—the pissing style of a man with no one left to impress.
Jim’s basic formula for succeeding in business was simple: hire clever boys (rarely girls), tell them they’re brilliant, and let them figure everything out. In April, it was decided that the company’s public website needed a redesign. My job was to come up with words to express our unique “mission.” I messaged Jim on Slack, “I’m rewriting this website copy and want to get your thoughts: in five sentences or less, what is Predata and what do you think the company will be doing in five years? In other words, what’s the big vision here? Where’s this all going? Who are we? Why do we exist?” Jim replied, “Geez, I’m not that good at Big Vision. Please take a crack at it and I will gladly, and minutely, edit and kibitz.” Here was a startup so lean that its CEO had stripped away all explanation of why it existed. The attention/intention binary was sure to be a showstopper in a world of such little manifest thought.
Bad Brains
AI does not want for critics. In Silicon Valley, the big cats have drawn their claws. Elon Musk is one of several tech luminaries, including Jack Ma and Bill Gates, who believe AI poses a mortal danger to human civilization. Last year Musk compared the work of building AI to “summoning the demon.” Mark Zuckerberg labeled Musk’s intervention in the AI debate irresponsible. Musk shot back with a subtweet. “I’ve talked to Mark about this,” he wrote. “His understanding of the subject is limited.”
When Musk speaks of AI, he’s mostly referring to the technology as it was originally conceived in the 1950s—as a system of symbolic logic to enable the creation of self-aware machines with similar cognitive sophistication to the human brain. This is what’s commonly referred to as “artificial general intelligence” or “strong AI.” In practice, there are no technologies or companies grouped under the rubric “AI” today that meet this description. The term “artificial intelligence” is instead used loosely to refer to a diverse group of less ambitious technologies, some of which have little in common: machine learning, deep learning and neural networks, robotics.
If the ambition of the field is to model the human brain in machine form, artificial general intelligence has made little progress in the six decades since it emerged. The scope of what can be done in AI as we understand it today—the looser, lesser AI—remains limited. Machine learning, the pattern-recognition branch of AI on which Predata was built, is little more than a technology to process data and program reactions to recognized patterns. Some argue that it should not be considered part of AI at all.
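A toy example makes the deflationary point. A nearest-neighbor classifier, one of the simplest machine learning techniques, “learns” by memorizing past patterns and reacting to anything new according to whichever old pattern it most resembles—data processing and programmed reaction, nothing more. The data below is invented.

```python
# Machine learning as pattern reaction: a 1-nearest-neighbor model answers
# new questions by looking up the most similar old one. Toy data only.
from sklearn.neighbors import KNeighborsClassifier

# Past "patterns" (feature vectors) and the reactions recorded for them.
past_patterns = [[0.1, 0.2], [0.9, 0.8], [0.5, 0.5], [0.2, 0.9]]
reactions = ["ignore", "alert", "watch", "watch"]

model = KNeighborsClassifier(n_neighbors=1).fit(past_patterns, reactions)

print(model.predict([[0.85, 0.75]]))    # close to a known pattern -> ['alert']
print(model.predict([[100.0, -50.0]]))  # unlike anything in history, yet still
                                        # forced into one of the old reactions
```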
Even the most eye-catching successes claimed for AI in recent times have been, on closer inspection, relatively underwhelming. The idea that an autonomous superhuman machine intelligence will spontaneously spring, unprogrammed, from these technologies is still the stuff of Kurzweilian fantasy. Forget Skynet; at this stage it’s not certain we’ll even get to Bicentennial Man.
The failure to make any progress toward the development of strong AI, physicist David Deutsch has argued, stems from the AI community’s broader inability to “recognize that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be.” Human intelligence, says Deutsch, cannot be encoded by any known programming technique, yet AI developers continue to approach the problem of AI as if it can. The human mind is not a behavioristic function of inputs and outputs that can be optimized according to a defined system of logic; nor is it a neural network of intelligent, self-correcting connections. These techniques might replicate discrete functions of a human mind, but they cannot capture the mind’s totality or what makes it unique: its creativity, its genius for emotion and intuition. There’s something else going on. What that something else is, we don’t yet know.
AI researchers dismiss Deutsch as an outsider with no understanding of how the technology really works—a rote rebuttal in data-engineering circles. But his basic point is correct. The field of AI continues to limp along with no real understanding of what makes the human brain unique and with no agreed definition of “intelligence.” If ideal AI is “strong,” ours is the age of weak AI.
As a result, intelligence today is defined not by the properties of the human brain but by association. Intelligence is the thing intelligent people do. Since intelligence remains undefined in AI, the whole field is arguably misconceived, for now at least. A machine built to model an organ we don’t yet understand is bound to fail. More than a pedantic definitional point, this goes to the heart of how the VC billions get allocated in this new boom sector. Faced with the impossibility of determining whether a technology is intelligent or not—since we don’t know what intelligence is—Silicon Valley’s funders are left instead to judge the merit of a new idea in AI according to the perceived intelligence of its developers. What did they study? Where did they go to school? These are the questions that matter.
You Gotta Work for Your Unemployment
The library was two stories tall with a vaulted, dark wood ceiling. Shelves packed with business books and dated guides to computer programming covered one wall. A canvas depicting what looked like an enormous gerbil recast as a galaxy of asteroids was hung over the fireplace.
The data scientists still had the floor. By this point it was clear that the whole retreat had been arranged as a stage for them to lay out their vision for the company’s future, a vision in which data science was the sole component of consequence. If we were to successfully bridge the gulf between intention and attention with information, John said, it would be necessary to begin work on building an ontology. He didn’t mean ontology in the sense of Plato, Hegel, or Hamlet; this was no philosophical disquisition on the nature of being. He meant ontology in the way that engineers use the word, as a conceptual framework for understanding the relations between things.
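In code, an ontology of this engineering sort can be as humble as a set of typed relations between entities that a machine can traverse. The entities and relations below are invented for illustration; they are not Predata’s.

```python
# A sketch of "ontology" in the engineers' sense: typed relations between
# entities, machine-traversable. All entities and relations are invented.
from collections import defaultdict

triples = [
    ("copper_mine_strike", "is_a", "labor_action"),
    ("labor_action", "is_a", "civil_event"),
    ("copper_mine_strike", "affects", "copper_supply"),
    ("copper_supply", "drives", "copper_price"),
]

index = defaultdict(list)
for subj, rel, obj in triples:
    index[(subj, rel)].append(obj)

def ancestors(entity):
    """Follow 'is_a' links upward: the relations the ontology encodes."""
    found, stack = [], [entity]
    while stack:
        for parent in index[(stack.pop(), "is_a")]:
            found.append(parent)
            stack.append(parent)
    return found

print(ancestors("copper_mine_strike"))  # ['labor_action', 'civil_event']
```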
Like all machine learning systems, the company’s platform was trained on data. Data-based inputs helped the system, via the alchemy of engineering, generate outputs such as signals and predictions. There were two main types of data fed into the system: online sources (web pages drawn from social media) and date-tagged historical sets of events (protests, strikes, terrorist attacks, missile tests, big single-day movements in financial securities, and so on). The feeding had to be done manually, by human users of the system: a tedious and repetitive process, classic sandhog work. With time and a growing pool of data, the theory went, the system would mature. Eventually, it would be able to produce better and more accurate predictions.
Josh and the company’s three in-house analysts had, for much of the previous twelve months, worked busily to add new sources and events to the system. The analysts, all of them recent Princeton graduates with degrees in humanities and the social sciences, classified these inputs according to a taxonomy—“small mining strike,” “right-wing medium-sized civil protest,” “inner-city activist protest,” etc.—that was always expanding and never fully comprehensible to anyone, least of all the analyst team. But taxonomization was yesterday’s business. The future, the data scientists asserted, was ontologification. If only the sandhog-analysts could tag sources more intensively and more intelligently (which is to say, more ontologically), we would—or rather, the machine would—be able to start seeing the hidden relationships and patterns embedded in all things. Eventually this work would become automated. In the meantime, the analysts were sweating it out every day to ensure their eventual obsolescence.
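In practice, the tagging might have looked something like the sketch below—one hand-entered record at a time, each checked against the taxonomy of the moment. The labels and fields here are invented, and the real taxonomy was far larger.

```python
# A sketch of the manual tagging work: each historical event gets a date
# and a label drawn from an ever-growing taxonomy. Labels/fields invented.
from dataclasses import dataclass
from datetime import date

TAXONOMY = {
    "small_mining_strike",
    "right_wing_medium_civil_protest",
    "inner_city_activist_protest",
}

@dataclass
class TaggedEvent:
    when: date
    label: str
    country: str

    def __post_init__(self):
        if self.label not in TAXONOMY:  # in practice the taxonomy just grew
            raise ValueError(f"unknown taxonomy label: {self.label}")

# Sandhog work: one hand-entered record at a time.
events = [
    TaggedEvent(date(2011, 7, 22), "small_mining_strike", "CL"),
    TaggedEvent(date(2016, 11, 12), "inner_city_activist_protest", "US"),
]
print(len(events), "events tagged")
```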
The Jargon of Inevitability
Did any of this work need to be done? Was this technology—or, for that matter, any technology—needed to better anticipate terrorist strikes or North Korean nuclear tests? No one, as far as I could tell, was clamoring for a social media-derived signals-processing tool to predict world events. Say you accept the validity of the company’s foundational problem: that we need to come up with better ways of predicting real-world events. Why is AI, why is tech, the best way to achieve that objective? I looked over at Jim. He’d let his head drop. His eyes, from what I could tell, were closed, and his microfiber Birkenstock clogs dangled from the ends of his feet like scabs clinging to fresh skin. Asleep or merely feigning sleep, he contributed nothing to the conversation.
I wandered off, as if on my way to the bathroom. In the kitchen, Jim’s housekeeper Leni was grinding together different pills into a paste for the diabetic dog Toast. (“Leni makes very good New American food,” Jim told me later that night, his tone so hushed and confessional it seemed he was letting me in on a vast and terribly important secret.) I kept walking. The house felt ironed out and unlived in, even when it was filled, as it was for the retreat, with people. Whoever decorated it had tried to make the house look like a home. The rooms were all designed as if from a catalogue. Cream couches, navy throws, gray drapes.
By the time I made it back to the library, Jim’s head was upright and his eyes had reopened. Raymond, the lead product developer, a linguine-thin Princeton grad with a permanent frown and an aversion to conversation, was in the middle of a speech about “ground truth” and “executive dashboards.” John interjected that what we really needed to do was “capture the mental models of our analysts, then put them into the system.” He gave the example of a protocol he and his “team” used to follow when he worked at Palantir, the company covered so offensively by “journalist” Will Alden.
“At Palantir,” he continued, “we spent most of our time trying to understand data. We need to do that way more. We need to figure out whether the data’s any good.”
But data was not the problem at all. For an analyst, real wisdom comes from seeing radical, unexpected change, not from identifying patterns, as Predata stolidly, without firm empirical cause, set out to do across the social mediasphere. As any student of pattern recognition can readily affirm, the valence of patterned phenomena necessarily becomes better known and loses force with repetition. Intelligence comes from creativity and adaptability. The clever boys had designed a system incapable of either.
Unable to build a computer system with true intelligence, the clever boys had built the next best thing: an unintelligent system. The platform itself consisted of a series of black screens featuring inscrutable, colored squiggly lines. These signals expressed, according to the clever boys, the “digital volatility” associated with online conversations, a concept that made little sense to any potential users of the company’s products. To include these signals in any kind of analysis, users needed to adopt the arcane vocabulary developed by the clever boys to explain what was going on, a confusion of terms that sounded like clothing measurements (“best fits,” “online fits”), food (“sector rollups”), or scene descriptions from a bad science fiction film (“anomaly detection dashboard”). No intelligence or financial analyst has ever pronounced the words “sector rollup” in the course of the working day.
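What might “digital volatility” amount to, mechanically? One plausible reading—sketched below with invented data; the company’s own definition was never this explicit—is a rolling dispersion measure over daily counts of online activity, rescaled for display.

```python
# One plausible mechanical reading of a "digital volatility" signal:
# a rolling dispersion measure over daily counts of online activity,
# rescaled to [0, 1]. The data and window length are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2017-01-01", periods=180, freq="D")
comments = pd.Series(rng.poisson(50, size=180).astype(float), index=days)

vol = comments.rolling(14).std()                      # 14-day dispersion
signal = (vol - vol.min()) / (vol.max() - vol.min())  # the squiggly line
print(signal.dropna().tail())
```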
The Engineers of Human Souls
Predata had built a system designed to augment tasks and functions entirely of the engineers’ own imagining. The “mental model” the clever boys were busy importing into the system bore no relation to any known cognitive process of a human analyst. It was the equivalent of building an instant translation tool that converted all known languages into a language no one had ever heard of, then trying to get users to speak in that new, redundant language instead of their own. Joined in this miscreation were two devolved forms of narrow intelligence: the narrow intelligence of weak AI and the narrow, self-consumed intelligence of data scientists.
“We’re wasting time on things that don’t matter,” I argued to the group. Sector rollups, best fits, anomaly detection dashboards . . . no one needed any of this stuff. This was not a rogue critique I’d invented on my own. We’d heard it from the company’s would-be customers as well. The data scientists disagreed.
“If you showed a Usenet listing to a focus group in 1985, I guarantee no one would tell you that they want Google, and many of them would initially prefer the listing to a poorly executed search engine,” John said. “But I am fairly certain that they’d be happy if presented with Google even as of 2001.”
“Our responsibility as data scientists is to be the ultimate arbiters of quantitative truth,” Dakota added. System built, their task now was to come to grips with the data, to scrub it, clean it, crunch it, parse it, interrogate it, manipulate it, massage it, to get to know it, and to care for it. The data scientists were like gardeners fussing to clear a manicured path in the middle of a desert—their very own Sonderweg. What mattered to them was the quantitative purity of their system, the perfection of their path, not whether it was useful or made sense in the wider world of non-engineers.
After a year in the company I discovered, to my horror, that I had begun to think about politics and policy almost exclusively in this bizarre, engineer-created jargon. “A spike in the sector rollup suggests heightened political risk over the coming two weeks, though judging by the relatively muted activity in the microeconomic subsector signal, it’s unlikely economic policy will be a driver”—that kind of thing. Just as the laborers working on Penn Station succumbed to a physical distortion at the end of each day—the bends—I began to feel the effects of a mental distortion: the delusion that says, yes, this is an acceptable way to analyze and think about politics. Inexorably, I was turning into a sandhog. Contact with technology was stripping away my creativity, my capacity for independent reason. I was becoming less, not more, capable. I was becoming less human. With time I understood that the point of the company was not to train and optimize the algorithm on human modes of thinking. It was not to humanize the machine, as the classical vision of strong AI would have it. Rather, it was to mechanize the human—to make the human learn to be more like the machine, a rigid taskmaster stripped of initiative and organic thought.
This thought, in turn, opened onto a no less terrifying scenario. Perhaps, for all its claustral, data-determinist folly, Predata really is foreseeing the human future—just not in the way it thinks it is. It’s possible that the market for a user-hostile data system that inaccurately predicts the future and turns its human operators into automatons exists after all, and is large. These scattered rustlings of a company lost in the dark might eventually reveal themselves to be the footsteps of a baby unicorn. But what will be sacrificed along the way?
After our four-hour session of revolutionary self-criticism concluded with the usual outpouring of superlative praise from Jim, the group fanned out into the back patio to decompress. I drifted away to the wine table. Were the clever boys really geniuses, or moderate intellects unable to see the world beyond their professional parapet? Jim approached and commented approvingly on my choice of the vinho verde. “I mostly drink New World wines,” he said. Then, a giddy skip in his voice, nodding toward the clever boys: “These guys are brilliant. All we need to do is just give them the space to do their thing, and we’ll unlock a lotta alpha.” At the words “a lotta alpha,” he swung his hips slightly.
At the meeting, two weeks later, at which I resigned, Jim thanked me for my service and affirmed, eyes closed, that I was “exceptionally talented.” Two weeks after I’d left the company, he sent me an email, unprompted, to thank me again. The subject line: “Mr. Timms Talent.” Three weeks on, another email dropped. “You made a big contribution to Predata, for which we’ll remain very grateful,” he wrote. “You are extremely talented.”
Unintelligent Design
Jim offered to give the four of us left in the house the next morning a lift to the station. It was time for the sandhogs to return to the city. A brisk eighteenth-century symphony struck up on the car stereo the moment we began the long journey down the driveway out of SHINN VILLA. “The car links up with my phone and magically plays a different piece of music every time,” Jim said. “I don’t know how it works.” Abruptly, the music stopped. Everyone laughed. These people loved Jim. Even when name-dropping over email (“In full disclosure, Ben Bernanke taught me Open Economy Macro at Princeton”), Jim was fun. Likability applied to the task of extracting millions of VC dollars in the pursuit of a sort-of interesting, not-really-there idea: this, of course, was its own type of genius. We continued to drive in silence.
At dinner the previous night—“The only rule is that you must drink all the wine!” Jim had shouted as we took our seats—I’d sat between the two data scientists, John and Dakota.
“If we don’t know whether the data’s any good, everyone who’s not an engineer should be fired and the company should go back to being an academic research project,” I said.
John shrugged. “Maybe. We need to build an ontology, then make predictions great again.”
I asked the data scientists what they really thought Jim was hoping to achieve with the company. Why was Jim bothering, when it would have been so easy for him to drink vinho verde and eat Leni’s New American food untroubled to the end of his days? At an age when most of his moneyed peers were sitting on boards or playing golf, he was typing emails out of a dim WeWork office in downtown Manhattan and getting schooled about GIF protocol on Slack by a group of twenty-five-year-olds. Freed for life from the yoke of labor by virtue of his immense wealth, Jim continued to impersonate productive employment. What was the point?
I already knew the answer. Jim’s identification of a foundational “problem” was as hazy as the company’s origin story. Predata did not exist to satisfy a need in the market, or so its founders could tick off some professional accomplishment. It existed for the same reason that any startup in America built on other people’s money exists: because it could. The company’s facticity was its transcendence. John shrugged again, then coughed out a defensive laugh. “It sounds like you don’t want to work here anymore,” he said. I realized he was right.
Once we reached the outskirts of Princeton proper, the music in the car resumed. Jim opened up again, offering capsule histories of different structures and landmarks dotted around the campus. This was his domain, the university he’d attended twice as a student, then come back to teach at, once a wise old bag of business. As we passed a gargoyled tower, he said, “They had to close access to the roof because stressed-out grad students were jumping off the top.” Then, past a mansion: “John Nash used to live there.” Jim paused, giving us all time to appreciate this biographical nugget: the Nash of the Nash Equilibrium once ate and breathed and revolutionized decision theory mere feet away from us. “One night as an undergrad I was walking across campus after a night out, and I had to pass through the basement of the math department building. What’s that building called?”
“Fine Hall,” answered one of the clever boys.
“So the corridors of Fine Hall are lined with chalkboards,” Jim continued. “All of a sudden, as I’m walking through the basement, I hear this tak tak tak, and there’s John Nash, writing out long lines of mathematical equations on the walls. I was stoned out of my mind, so I didn’t talk to him.” We pulled into the car park at Princeton station. “He was a really talented guy.”
Three months later, Predata secured a second round of venture capital funding.