On March 26, 2016, a terror attack on a children’s playground in Lahore, Pakistan left scores of young children and their parents dead, toys and small shoes strewn like rubbish among body parts and burnt shreds of clothing. Thousands of miles away in Palo Alto, California, Facebook founder Mark Zuckerberg logged on to his Facebook profile, which commands millions of followers in his social media empire. Moved by news of the Lahore massacre, Zuckerberg noted that the Facebook team had to activate its “Safety Check” feature—designed for users to alert their friends and followers that they’re out of harm’s way when a lethal attack or natural disaster occurs—all too often in recent months. Zuckerberg cited terror attacks that had lately convulsed Turkey and Belgium, in addition to the Lahore bombing. He cemented his resolve to face down these evils in a closing burst of trademark feelspeak:
I believe the only sustainable way to fight back against those who seek to divide us is to create a world where understanding and empathy can spread faster than hate and where every single person in every country feels connected and cared for and loved—that’s the world we can and must build together.
In the months that followed, Zuckerberg moved on to other things: an initiative to “cure all diseases” (yes, people will still get inconveniently sick, but Zuckerberg will see to their most efficient possible treatment) and, by June of 2017, a visit to Iowa to explore a rumored presidential run. But later that month, Zuckerberg returned to the task of tackling terrorism. Ganging up with Microsoft, Twitter, and YouTube, Facebook announced an initiative quite different from the blossom-scented plea for empathy that Zuckerberg had lofted the year before. The social media giants would focus on developing technology that would curtail the use of their networks by terrorist groups—and the software colossus Microsoft (which did not issue a separate statement) would likely expand its speech-monitoring protocols via a series of NGO partnerships it had formalized in May.
The new, multi-platform anti-terror initiative, formidably titled “Global Internet Forum to Counter Terrorism,” promised to advance technology for detecting terrorist material and create best practices for countering extremism and hate, with the four collaborating mega-platforms sharing information about “counter-speech tools.” Love and empathy, it appeared, were no longer in the picture; nor was there any evident remnant of the techno-realism that a year ago had prompted the brain trust at Twitter to declare that there was no “magic algorithm” for identifying terrorist content on the internet.
The Virtual Blackwater
And indeed, since the comparatively staid announcement, the four contracting parties of the Global Internet Forum have yet to conjure forth anything remotely close to a magic terror-defying algorithm. But the relevant innovation here was never to be rendered in code anyway: the emergence of the Global Internet Forum signals a sinister expansion of social media and content-sharing platforms into the state-sponsored prosecution of the War on Terror.
Like past initiatives to privatize the terror wars, the prospective symbiosis here is likely to make things worse on all fronts. Governments will be freed to pursue various intrusions on privacy and free expression under conditions of enhanced impunity, while social media platforms can firmly adhere to statist protocols—and funding sources—to secure their existing global monopolies. As was notoriously the case with Blackwater—the private company whose mercenaries carried out interrogations, renditions, torture, and civilian massacres under the U.S. invasion of Iraq—the counter-terror offshoots created by the Global Internet Forum can operate safely outside of anything resembling robust public oversight, free of the irksome public-sector mandates of transparency and reporting. The information they gather can be secretly shared with a government—or indeed, more than one government—which can then be fully empowered to target, prosecute, and persecute any citizens ensnared within these data searches according to their own political agendas.
The picture is equally grim for social media users on the commercial side of the bargain. Should one of their posts or tweets trigger the concerned interest of the Forum politburo, they could forfeit any effective control over their clicks, locations, queries—and, of course, the steady stream of demographic data that routinely leaks from their accounts—in the event that an over-vigilant Facebook moderator or Twitter algorithm should label them a terrorist, or a terrorist waiting to happen. In other words, our already unaccountable tech lords will exercise primary influence on a private individual’s life, unburdened by any governmental due process requirements until that person is already categorized and condemned by the data collected.
Terror Czar Zuckerberg
Beyond the enhanced surveillance capacities that inevitably accompany a contracting agreement to advance this or that initiative in the terror wars, there’s also what might be called a metaphysical brand of mission creep at stake here. When social media platforms are enlisted into the vanguard of the war on terror, they also take for themselves the power to define what terrorism is—and who, in turn, is a terrorist.
Indeed, just a few days after the Forum announcement, the nonprofit investigative journalism website ProPublica published a bracing account of the widely variant treatment that online gatekeepers bestowed on social media posters, depending on their racial identities and political convictions. It was largely a tragic tale of two Facebook posts. One came from Republican Congressman Clay Higgins of Louisiana the day after the London Bridge terror attacks. “Hunt them, identify them, and kill them, kill them all!” Congressman Higgins wrote. “For the sake of all that is good and righteous. Kill Them All!”
Facebook let the post stand, despite its clear incitement to violence in defiance of the site’s terms of service, ostensibly because it targeted a specific sub-group of Muslims rather than all Islamic believers—those who had been “radicalized.” Around the same time, Facebook swiftly removed a post by poet and Black Lives Matter activist Didi Delgado which read, “All white people are racist. Start from that reference point or you’ve already failed.” To make sure that the poster was duly punished and the offending content wasn’t reposted, Delgado’s account was suspended for seven days.
This glaring double standard appears to be a genuine reflection of Facebook company policy. An exercise included in the company’s training materials for moderators charged with applying Facebook’s global hate speech algorithm asks them to select one group that’s entitled to online hate-speech protections from a list of three: female drivers, black children, and white men. The correct answer is white men.
Zuckerberg himself has acted as the final arbiter in moderating controversies concerning white men, ensuring that they’re accorded special treatment. For instance, when Facebook employees pointed out that then-presidential candidate Donald Trump’s posts advocating a travel ban against Muslims (a specific group) violated the company’s guidelines against hate speech, Zuckerberg personally intervened to say that the posts should be permitted, even while acknowledging that they did indeed run afoul of those guidelines.
Love and empathy, it appeared, were no longer in the picture.
Later, the company issued a clarifying announcement, explaining that moderators would now permit content that is “newsworthy, significant, or important to the public interest.” The entities determining just who and what meets these criteria will, of course, be Facebook and Zuckerberg. And with the Forum’s casual annexation of the digital war, that’s a worrying prospect indeed: private data companies will be the arbiters of first resort in far-reaching and often speculative assessments of terrorist language, intent, and expression—and in all too real matters of punishment and military counterattack.
Twitter has likewise carved out a Trump-sized loophole in its hate-speech policies. As numerous stories have documented, Trump has repeatedly retweeted memes and hate speech originating with neo-Nazi groups—all of which have readily passed official Twitter review even though their content overtly incites hatred against Muslims, African Americans, and other minorities (again in contravention of official Twitter company policy). Just like Facebook, Twitter appears to have decided that it can pronounce a subjective exception to its own policy when it comes to presidentially endorsed hate speech. Since Twitter is a privately held company, it’s under no obligation to provide public justifications for such exceptions—and it’s equally insulated from any public scrutiny over how it goes about defining what does and does not count as terror-inciting hate speech on its platform.
Tweeting While Brown
Such glaring omissions in the diagnosis and punishment of online hate speech are simply business as usual in the social media sphere. The Southern Poverty Law Center notes that Twitter has notoriously lagged in enforcing hate-speech restrictions on accounts aligned with white supremacists. The platform finally suspended some accounts associated with white supremacist hate groups late last year. But many such accounts continue to spread racist propaganda. Meanwhile, Twitter has reported that company moderators have suspended 125,000 accounts for supposed ties to ISIS.
Not surprisingly, these market incentives to condone Trump-branded hate speech—and to effectively quarantine white nationalist social media users from the effective definition of terror-fomenting speech—have produced striking results. A 2016 report from the George Washington University Program on Extremism titled “Nazis vs. ISIS on Twitter: A Comparative Study of White Nationalist and ISIS Online Social Media Networks” found that major white nationalist groups have seen an increase of 600 percent in their following since 2012. The report found that white nationalist groups referenced #whitegenocide more than any other hashtag and President Donald Trump more than any other person. The study also found that the white supremacist presence outperformed ISIS-associated social media accounts by nearly every metric.
Such trends proved revealingly immune to the actual incidence of terror attacks: white supremacist hate crimes such as Dylann Roof’s slaying of nine people at a black church in Charleston, South Carolina, produced no uptick in policing efforts to monitor such groups or their use of social media platforms. White nationalist and Nazi accounts have a median following nearly eight times greater than the average ISIS-related account—yet, as the GWU survey reported, nearly all of them continued to operate freely on Twitter.
The consensus among social media platforms is unmistakable: terror is, by their operational definition, something foreign, perpetrated by religious and racial minorities. The willful blind spots that created this see-no-evil approach result in a self-perpetuating misallocation of attention away from the hate-spewing right. Social media platforms can bend or alter internal rules at any time, and thereby ensure that the differential treatment of disparate terror-promoting social groupings becomes the basis of a sort of social engineering. As a result, those who already feel empowered to act with impunity, such as identitarian white men, get every tacit encouragement to carry on as before, whereas those designated as hostile and dangerous others in the white-dominated social grid get swiftly suspended and punished. It’s little wonder, given this permissive environment, that “right-wing extremists plotted or carried out nearly twice as many terrorist attacks as Islamist extremists” in the United States from 2008 to 2016.
All Medium, No Message
This whole self-selecting process derives its moral authority from the all-justifying mandate to fight terror online—i.e., a certain sort of brown, Muslim, foreign terror. Like other anti-terror initiatives, this one has been launched under cover of an amorphous, shape-shifting external threat of unknown provenance and power—and the fear of such a threat is so immobilizing that no concerned party ever feels compelled to gauge the success of terror-smiting measures in stemming the actual spread of terrorism.
The Forum’s formation follows this alarmist playbook to a tee. Like the panic-prone intelligence communities in the UK and the United States, our social media overlords merely presume that the availability of the internet and the ubiquity of social media platforms are in some integral way responsible for producing and diffusing Islamist-inspired terror. It therefore must follow that if the distribution networks for Islamist terror are stopped up, the underlying causes of Islamist terrorism may well vanish.
British citizens got an unfiltered dose of this reasoning from Prime Minister Theresa May in the wake of the London Bridge attacks. May, who as home secretary had formulated most of the country’s counter-terror measures, laid the blame squarely on the shoulders of the internet. Sternly announcing that “we cannot allow this ideology the safe space it needs to breed,” May promised “to regulate cyberspace to prevent the spread of extremist and terrorist planning.”
May’s statement reflected a far wider, cross-ideological consensus among the leaders of other Western governments as they sought to reckon with the Islamic State’s power to carry out attacks within their borders. Far more tellingly, though, the statement also reveals the crucial mistake that blocks western governments from making significant inroads against their militant Islamist foes. Instead of focusing on the substance of ISIS’s ideology or the structural factors that enhance its appeal for lone-wolf attackers or independent cells, Western leaders get seduced, again and again, by the superficial means of message transmission.
Of course, Facebook’s entire business model rests on collecting user information and selling it to other platforms—or governments—as the need arises.
As a result, they can only focus on the ascendant medium of the moment. In past years, that meant bin Laden videos and YouTube sermons delivered by imams aligned with al Qaeda or the Muslim Brotherhood; right now, it happens to be social media.
The folly of such an approach is distressingly widespread in the Western media and intelligence world. To take just one prominent example, around the time that ISIS declared a caliphate, The Atlantic’s J.M. Berger published a piece professing to explain “How ISIS Games Twitter.” Berger detailed ISIS’s use of an Arabic Twitter App called “The Dawn of Glad Tidings,” which was available through Google. Supposedly promoted by top ISIS Twitter personalities, the app posted ISIS-approved tweets to personal Twitter accounts—thereby magnifying and expanding the group’s social media influence.
Imagining ISIS’s theatrical deployment of medieval imagery as actual primitiveness, Berger paused to marvel at the group’s use of “focus-group messaging and branding concepts” to expand its message. He never managed, however, to ponder that maintaining the appearance of a medieval sensibility could itself be a way for the newly launched caliphate to brand itself as authentically Islamic. Other news outlets were likewise keen to play up social media as a root cause of terrorist thought, beneath click-friendly headlines such as “ISIS has mastered a crucial recruiting technique no other terror group ever has.” Amid this land rush in tech-centric explainer journalism, commentators didn’t note the obvious truth that any group of any description seeking to draw a mass following employs social media strategies—and that all such strategies, like the platforms they sought to exploit, were, by definition, of very recent vintage. In other words: novelty does not equal sinister media genius.
Some in-depth studies did question the premise that social media was somehow integral to ISIS operations. A 2015 report titled “ISIS in America: From Retweets to Raqqa” found that the seventy-one ISIS operatives arrested by then were a “diverse” group whose radicalization didn’t stem from any monocausal social conditions—let alone a single branch of social media.
But none of these findings had any effect on the continuing alarmist rhetoric about ISIS on Twitter—nor on the companion speculative presumption that better policing of Twitter and Facebook accounts could eliminate ISIS or deal it a near-fatal blow. By February 2016, George Washington University’s Program on Extremism was already declaring a qualified victory based on this precept, producing a publication titled “The Islamic State’s Diminishing Returns on Twitter: How suspensions are limiting the social networks of English-speaking ISIS supporters.” Authored by (yes) J. M. Berger, the report presents the sort of eye-roll-inducing circular argument that can only pass as research when all concerned parties are searching for selective substantiation of a hypothesis instead of actually testing its veracity. Berger’s report purported to demonstrate that by suspending ISIS-related Twitter accounts, the service had significantly reduced ISIS activity on Twitter. In all, the report says, basic metrics revealed a social network that was “stagnant or in slight decline.” But in an American press scene keenly predisposed to declare any kind of victory against ISIS, even an online victory proved cause for glee and spin. Berger, perhaps still flush from his Atlantic star turn, readily obliged, declaring that a diminished Twitter presence meant that the group’s “key functions had been severely limited.”
But if these functions were already severely limited in 2015, then ongoing and increasing suspensions in 2016 and 2017 would suggest that those actions served little more than a PR purpose in the larger struggle to stem the reach of Islamist terror. Berger’s report noted that by October 2015, the number of “readily discoverable” English-speaking ISIS accounts was only about a thousand. Yet stories reported by CNN put the number of ISIS-related account suspensions at 235,000 in 2016 and 377,000 in 2017. What’s more, the upward-ratcheting number of account suspensions considerably clouds the broader case for undertaking massive top-down crackdowns on the online traffic in hateful and terror-promoting speech.
It’s only very deep in the Berger report that one happens across a far more plausible explanation for the group’s downward-trending Twitter numbers—to wit, that as a sponsor of global terror conspiracies, ISIS had necessarily adopted a much more insular communications strategy, with members speaking to other members rather than going after new recruits. Moreover, having seen its profile diminishing on one social media platform, ISIS had simply turned to others, adopting WhatsApp and Telegram, both of which also aided the group’s secrecy-minded agenda by affording users greater levels of encryption and fewer avenues of detection. Pace Marshall McLuhan’s famous dictum, ISIS’s outreach efforts were always about the message, though Berger could only see the medium.
This record of nimble platform-switching in ISIS circles, much like its initial attraction to the novel platform of Twitter, is scarcely surprising if you recognize the broader contours of the ISIS agenda in the first place. But the myopic media and intelligence focus on the name-brand platforms of the U.S. social media industry points, by contrast, to a central problem in tech-centric U.S. strategies to counter violent extremism. Instead of recognizing that ISIS and other Islamist militants select—and discard—a given platform to advance their distinctive political goals, policy makers and commentators naively assume that blocking or closely monitoring the group’s foundational social media presence is heading ISIS and its allies off at the pass. Western analysts and reporters, having imbibed an earlier round of credulous and misleading punditry about the radically democratic power of social media in promoting the uprisings of the Arab Spring, have now modified the same tech determinism to imagine the Islamic terror threat as a sinister knot of algorithms—one that somehow manages to mistake a static media outreach strategy for ISIS’s well-financed and organized offline agenda of terror recruitment and geopolitical expansion.
This same Western tech myopia precludes any fresh approach to producing and distributing effective counter-messaging that might discredit extremist propaganda at the substantive level. This, too, quickly turns into a self-reinforcing loop of bureaucratic social engineering: by treating the substantive appeal of extremist propaganda as incidental, Western counter-terror strategists permit such propaganda to flourish in the offline social world, where it remains both attractive and lethal.
Disproof of Concept
There’s a little-noted paradox at the heart of the present battle to stamp out jihadist content on social media: if ISIS’s recruitment efforts on Twitter and Facebook stalled out nearly two years ago, then why should the mighty league-of-justice-style consortium of web platforms be joining forces to fight terrorism now?
The only way to parse the logic here is to discard the group’s great enabling premise. In reality, combating terror in the offline world has very little to do with purging enterprising terrorists from the social mediasphere. This is where the true disruption occurs: keen to exploit the government’s fatal confusion of terrorist strategy with social media savvy, Facebook, Twitter, YouTube and Microsoft have reverse-engineered their way out of a regulatory bind that might otherwise be a major blow to their business models. Facing greater government regulation, they have offered to aggressively “fight” terror on their own proprietary platforms—and hand over information willingly rather than face legal demands to do so.
This cozy handshake deal between state officials and social media executives conspicuously excludes consumers, whose identities and information may be targeted for data collection and suspension by unknown algorithms and then handed over to law enforcement agencies. Within the American context, the central provisions of the Fourth and Fifth Amendments, which protect citizens from unlawful search and seizure, are never triggered, since private companies aren’t required to meet a probable cause standard before a court prior to collecting and handing over user data. And more broadly, the wildly disparate approaches that big data firms adopt in monitoring sensitive content in different commercial and geopolitical contexts make it painfully clear that the industry’s commitment to user privacy never rises above the level of lip service.
Consider, once again, the example of Pakistan. In May 2017, Dawn, Pakistan’s largest English newspaper, published a report detailing how forty-one of the sixty-four banned terrorist groups in Pakistan were operating freely on Facebook.
Novelty does not equal sinister media genius.
The groups sidestepped simple user protocols to launch hundreds of pages and to maintain individual and group profiles. Dawn’s reporters tracked the banned groups down via a host of rejiggered spellings and acronyms and then documented a long roster of likes and follows for each organization. The most popular of these groups is Ahle Sunnat Wal Jamaat (the followers of the Sunna), formerly Sipah-e-Sahaba (Soldiers of the Prophet’s Companions). Ahle Sunnat maintains two hundred pages under its current moniker, while under its former name it has maintained another 148.
Pakistani officials first banned Sipah-e-Sahaba in 2002, and subsequently banned Ahle Sunnat Wal Jamaat in 2012; Pakistani courts have convicted its members of killing hundreds of Shia Muslims while inciting others to do the same. Investigators have reportedly tied Ahle Sunnat’s leaders to the enormously powerful Pakistan Defense Council, and the group stages mass rallies demanding that the Pakistani state sunder ties with all Western forces. Under its former name, the group pursues the same basic agenda via the same deceptive social media strategy—operating under a variety of aliases and killing Shia Muslims and minorities. Its leaders have likewise worked in concert with Pakistan’s military intelligence community.
Dawn’s exposé amply documented the ways in which official efforts to roll back a jihadist web presence function, at the level of operational policy, as a fig leaf for media giants like Facebook. In its bid to stave off governmental intrusion in its host country, Facebook pretends to be cracking down harshly on content with even faintly terrorist overtones—but in a national market such as Pakistan, the social media giant is more than happy to ignore such content in a far more porous sort of surveillance regime. That’s why, more than a decade after Pakistani officials banned them, both groups continue to enjoy and freely exploit a significant and visible Facebook presence—and scarcely concealed government support.
Death to Blasphemers
Indeed, instead of initiating tough negotiations with Pakistani officials to curb the activities of such extremist groups, online and off, Facebook has bent over backward to accommodate the online agenda of Islamist leaders in one key sphere: the policing of ostensibly blasphemous content. This March, the Pakistani government issued a statement that Facebook would henceforth remove “all blasphemous content” from the social media platform. Pakistan’s interior minister approvingly cited recent correspondence with Facebook executives, assuring him that the site was taking “the concerns raised by the Pakistani government very seriously.” Other members of the Pakistani government and Pakistan’s Federal Investigation Agency likewise reported that Facebook had agreed to remove blasphemous content and hand over user data related to criminal investigations to Pakistani authorities. On June 11, 2017, a Pakistani court handed down a death sentence to Taimoor Raza, who’d been accused of blasphemy on the basis of a Facebook post.
Facebook has yet to independently confirm the full substance of this reported policy shift in Pakistan. After this spring’s announcement, Facebook vice president for public policy Joel Kaplan traveled to Pakistan and met with Pakistani interior minister Nisar Ali Khan. News reports of the meeting quoted a company email saying “Facebook met with Pakistani officials to express the company’s deep commitment to protecting the rights of the people who use its service, and to enabling people to express themselves freely and safely.” In the July 7 meeting, Khan reportedly offered to set up a Facebook office in Pakistan to serve the country’s estimated 33 million Facebook users. The company’s own record of compliance with governmental requests for user data shows that Pakistan is among the top ten regimes soliciting such sensitive information. Facebook reports that it has complied with two-thirds of such requests, though in mid-July the company denied the Pakistani government’s request to sync individual accounts with cell phone numbers, which would have made it easier for users to be tracked by law enforcement.
So much, it seems, for Mark Zuckerberg’s stirring pledge to spread love and empathy throughout the darkening world that logs onto his site. Instead, the site’s prime directive continues to be to maximize market access—and to tailor the relevant terror-monitoring protocols accordingly. Pakistan’s government had already banned YouTube outright for its refusal to take down blasphemous (but not extremist) content; it’s all too plain that Facebook’s recent anti-blasphemy initiative is aimed squarely at avoiding a similar corporate fate. Given the peculiar political exigencies in Pakistan, the commercial calculation here is simple: the putative threat of blasphemy always trumps the all-too-real consequences of extremist hate speech. In the same vein, a ProPublica investigation of the social media fallout from the Arab Spring found that Facebook was far more likely to align itself with oppressive governments seeking to limit citizens’ access to social media platforms than with protesters trying to organize resistance movements.
Neither Free Nor Cheap
On one level, Facebook’s posture of collaboration with authoritarian regimes in the Arab world isn’t all that different from, say, Motorola and Westinghouse’s record of profiteering off the South African apartheid regime. In both settings, basic political morality is, effectively, a dead letter: a private company’s primary obligation is to maintain its brand and shareholder value, and to pursue the allied mission of ensuring consumer confidence and attracting the largest possible user base.
Social media companies typically seek to shore up their market credibility in such conditions via a delicate balancing act: evading government regulation while also ensuring that users trust the platform as a benign repository for their personal views and information. Facebook and Twitter both know that jointly choreographing the user experience of freedom and safety on their platforms is crucial to expanding their market reach, and thereby to monetizing the vast amounts of data they collect from these customers’ online profiles.
And so far, they’re succeeding. According to numbers released by SmartInsight, Facebook is the most popular social media network in the world. In the United States it boasts 89 percent penetration, meaning that the overwhelming majority of American adults use the platform. And the Facebook experience is engineered to let users believe they’re accessing it for free. Yet of course, Facebook’s entire business model rests on collecting user information and selling it to other platforms—or governments—as the need arises. Under the deceptive cover of supplying users with “free service,” the world’s largest social media platform subjects its information-sorting algorithms to precious little public scrutiny—and devotes a great deal of proprietary effort to concealing just whom it shares its vast troves of user data with.
Taimoor Raza, the man sentenced to death for allegedly committing blasphemy via a Facebook post, remains on death row in Pakistan. Mark Zuckerberg hasn’t posted any stirring Facebook messages drawing attention to Raza’s plight—or to the authoritarian regime that engineered it in an all-too-brutal act of speech suppression. Nor do any Pakistani users logging on to Zuckerberg’s site receive any warnings about the Pakistani government or law enforcement agencies gaining access to their data without their consent—aside, that is, from a series of dense legal stipulations hidden deep within the site’s terms-of-service contract.
That’s not all that’s missing from the corporate-government alliance taking shape under cover of the West’s long-floundering war on terror. By deploying a rigid operational definition of political terrorism as a de facto Islamic monopoly, the leaders of the West’s big data cartel have downgraded the notion of “acceptable” online speech into whatever may advance their quest for greater market share. And there’s no way that anyone can issue a persuasive Safety Check against that threat.