Yasha Levine, March 21, 2018

The Cambridge Analytica Con

How media coverage misses the mark on the Trump data scam

Recent reports fail to explain that the Cambridge Analytica scandal is standard practice for companies like Facebook. / The Baffler
word factory
“The man with the proper imagination is able to conceive of any commodity in such a way that it becomes an object of emotion to him and to those to whom he imparts his picture, and hence creates desire rather than a mere feeling of ought.”

—Walter Dill Scott, Influencing Men in Business: Psychology of Argument and Suggestion (1911)


This week, Cambridge Analytica, the British election data outfit funded by billionaire Robert Mercer and linked to Steve Bannon and President Donald Trump, blew up the news cycle. The charge, as reported by twin exposés in the New York Times and the Guardian, is that the firm inappropriately accessed Facebook profile information belonging to 50 million people and then used that data to construct a powerful internet-based psychological influence weapon. This newfangled construct was then used to brainwash-carpet-bomb the American electorate, shredding our democracy and turning people into pliable zombie supporters of Donald Trump.

In the words of a pink-haired Cambridge Analytica data-warrior-turned-whistleblower, the company served as a digital armory that turned “Likes” into weapons and produced “Steve Bannon’s psychological warfare mindfuck tool.”

Scary, right? Makes me wonder if I’m still not under Cambridge Analytica’s influence right now.

Naturally, there are also rumors of a nefarious Russian connection. And apparently there’s more dirt coming. Channel 4 News in Britain just published an investigation showing top Cambridge Analytica execs bragging to an undercover reporter that their team uses high-tech psychometric voodoo to win elections for clients all over the world, but also dabbles in traditional meatspace techniques as well: bribes, kompromat, blackmail, Ukrainian escort honeypots—you know, the works.

It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet—and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.

But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.

So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society, going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.

Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.

Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass. It’s Lockheed Martin. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.

All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.

What do these companies know about us, their users? Well, just about everything.

Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or test results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status, is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.

On the whole, Google’s profiling philosophy was no different from Facebook’s, which also constructs “shadow profiles” to collect and monetize data on people who never registered a Facebook or Gmail account.

It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. That even includes cute games like Angry Birds, developed by Finland’s Rovio Entertainment and downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data on its players—ethnicity, marital status, and sexual orientation, with options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of service for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.

In launching Android, Google made a gamble that by releasing its own operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple’s iPhone or Microsoft’s Windows Mobile like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smartphone market and dominates the mobile for-profit surveillance business.

These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.

Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.

The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.

The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”

Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies, and bots.

Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star is NGP VAN. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver target voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pin-point accuracy based on their beliefs and views.

Here’s how The National Journal’s Andrew Rice described i360 in 2015:

Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.

Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”

In 2008, the hip young Blackberry-toting Barack Obama was the first major-party candidate on the national scene to truly leverage the power of internet-targeted agitprop. With help from Facebook cofounder Chris Hughes, who built and ran Obama’s internet campaign division, the first Obama campaign built an innovative micro-targeting initiative to raise huge amounts of money in small chunks directly from Obama’s supporters and sell his message with a hitherto unprecedented laser-guided precision in the general election campaign.

Obama came to power on the back of Facebook’s profiling and targeting technology. The company had such a powerful influence on the 2008 presidential race that pundits took to calling it the “Facebook Election.”

Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted, Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.

What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection with a certain topic or event. Company engineers call this feature “emotional contagion”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained the company’s lead scientist on this study.

On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.

So our present-day freakout over Cambridge Analytica needs to be put in the broader historical context of our decades-long complacency over Silicon Valley’s business model. The fact is that companies like Facebook and Google are the real malicious actors here—they are vital public communications systems that run on profiling and manipulation for private profit, without any regulation or democratic oversight from the society in which they operate. But, hey, let’s blame Cambridge Analytica. Or better yet, take a cue from the Times and blame the Russians along with Cambridge Analytica.


There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still desperately want to believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.

This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.

The truth is that the internet has never been about egalitarianism or democracy.

The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.

The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.

Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes, which sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defence in places like Afghanistan and Nepal, where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.”

In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recall one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course to a mathematical formula.

He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.

These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.

Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’s work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.

At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence analysts to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”

With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society.

This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.

Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just as they would a car or a bar of soap.


Simulmatics and its first-generation imitations are now ancient history—dating back to the long-ago time when computers took up entire rooms. But now we live in Ithiel de Sola Pool’s world. The internet surrounds us, engulfing and monitoring everything we do. We are tracked and watched and profiled every minute of every day by countless companies—from giant platform monopolies like Facebook and Google to boutique data-driven election firms like i360 and Cambridge Analytica.

Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote.

Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.


And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to the status quo, when everything was good.

Cambridge Analytica has been one of the lesser bogeymen used to explain Trump’s victory for quite a while, going back more than a year. Back in March 2017, the New York Times, which now trumpets the saga of Cambridge Analytica’s Facebook heist, was skeptically questioning the company’s technology and its role in helping bring about a Trump victory. With considerable justification, Times reporters then chalked up the company’s overheated rhetoric to the competition for clients in a crowded field of data-driven election influence ops.

Yet now, with Robert Mueller’s Russia investigation dragging on and producing no smoking gun pointing to definitive collusion, it seems that Cambridge Analytica has been upgraded to Class A supervillain. Now the idea that Steve Bannon and Robert Mercer concocted a secret psychological weapon to bewitch the American electorate isn’t just a far-fetched marketing ploy—it’s a real and present danger to a virtuous info-media status quo. And it’s most certainly not the extension of a lavishly funded initiative that American firms have been pursuing for half a century. No, like the Trump uprising it has allegedly midwifed into being, it is an opportunistic perversion of the American way. Employing powerful technology that rewires the inner workings of our body politic, Cambridge Analytica and its backers duped the American people into voting for Trump and destroying American democracy.

It’s a comforting idea for our political elite, but it’s not true. Alexander Nix, Cambridge Analytica’s well-groomed CEO, is not a cunning mastermind but a garden-variety digital hack. Nix’s business plan is but an updated version of Ithiel de Sola Pool’s vision of permanent peace and prosperity won through a placid regime of behaviorally managed social control. And while Nix has been suspended following the bluster-filled video footage of his cyber-bragging aired on Channel 4, we’re kidding ourselves if we think his punishment will serve as any sort of deterrent for the thousands upon thousands of Big Data operators nailing down billions in campaign, military, and corporate contracts to continue monetizing user data into the void. Cambridge Analytica is undeniably a rogue’s gallery of bad political actors, but to finger the real culprits behind Donald Trump’s takeover of America, the self-appointed watchdogs of our country’s imperiled political virtue had best take a long and sobering look in the mirror.

Yasha Levine is an investigative journalist and a former editor of Moscow-based newspaper The eXile. He is the author of Surveillance Valley: The Secret Military History of the Internet.
