Deadly Slop

In recent months, critics have taken issue with the internet. More specifically, they have taken issue with the shit that has filled up the internet. Jia Tolentino’s feeds are flooded with “fake stories about real things” and “real stories about fake things.” The New York Times’ Michelle Goldberg, meanwhile, likens her Instagram to “a bottomless well of mushy, purposeless, dissociated slop.” Op-eds with titles like “Maybe AI Slop Is Killing the Internet After All” have graced the opinion pages of Bloomberg, the Guardian, and the Financial Times.
I’m sympathetic to this complaint. I am a millennial with a phone addiction. Tasks I used to find mind-numbing, like watching three episodes of Sex and the City in a row without getting up to check my phone, now feel exalted. When I do check my phone, I encounter reels of children in Gaza pulled from the rubble after another Israeli airstrike followed by AI-generated soft-core porn followed by ads for AI-generated zombie games. I am not surprised that the word slop only narrowly lost out to brain rot as the Oxford Word of the Year.
Yet many of these essays also strike me as incomplete. Slop is certainly characteristic of our present political moment, in which tech moguls are capitalizing on AI applications that allow creators to flood social media platforms with AI-generated content. But slop is also the correlate and counterpart of contemporary war. Across the world, rising authoritarians are hurtling into intractable military conflicts with the assistance of artificial intelligence, which is busily churning out an endless supply of both questionable military targets and low-grade memes—a deadening and deadly mess of synthetic material saturating battlefields and content streams alike.
Indeed, since the early 2000s, but accelerating over the last decade, corporate conglomerates and tech startups have flocked to active conflict zones, which make for ideal innovation labs. There, dragnet surveillance, mass incarceration, and unfettered drone warfare serve as the AI hype cycle’s underbelly. Death and destruction double as opportunities for data collection and refinement, allowing the private technology sector to stake out a monopoly over AI development in both military and commercial domains. The end result is our present of endless war, in which those of us lucky enough not to live under aerial bombardment or with ICE agents knocking at the door are force-fed Studio Ghibli illustrations of famine, deportation, and death. It’s no wonder that Hito Steyerl, in her new essay collection Medium Hot, describes our present conjuncture as a “eugenicist necropolitics flanked by awful memes.”
Slop surged into popular parlance last year, roughly two years after OpenAI released ChatGPT and made commercial large language models suddenly available to any user. News feeds promptly filled up with trolls and bots offering an unending stream of AI-generated images, articles, and comments. Surfing the web now entails wading through webpages full of hallucinatory content—synthetic people wearing synthetic clothes saying synthetic things, a sideshow of the data economy’s inhumanity. The “enshittification” of the online experience is well theorized. Internet founders once dreamed of “virtual communities” where users would inhabit an expansive civic space, debating, venting, learning, and pontificating to the general benefit of society. “A coffee house with a thousand rooms,” in the words of Howard Rheingold, an early theorist of online community. But it was only a matter of time before emerging tech conglomerates tore it down and erected the internet equivalent of Starbucks. These firms monopolized cyberspace, monetized our data and attention, and locked us into increasingly unsatisfying worlds with an endless stream of content.
The release of ChatGPT and a host of other commercial generative AI models turbocharged this process. As Jason Koebler and Max Read document, sloppers now fork over monthly subscriptions to OpenAI, Google, and Meta, which in turn churn out a bonanza of fraudulent material. There are AI-generated books to solve your quarterlife crisis, made-to-order porn, advertisements for fake products collated from your Google search queries. This content isn’t really for you; it’s for the tech titans who rake in profits from the very slop their tools generate. “A thoroughly commercialized, surveilled and authoritarian space where basic functions are seconded to the extractive appetites of the monopolies overseeing the system” is how Baffler contributing editor Jacob Silverman recently described the internet.
The commercial generative AI frenzy quickly spread to militaries. Palantir released its own AI platform in 2023. Those with clearance to operate the system can type their queries into a text box, and answers are generated instantly in succinct blocks of sans serif type. Around the same time, the IDF began building a chatbot that integrates troves of data taken from the occupied Palestinian territories to guide military incursions. Last year, the U.S. Navy began relying on generative AI applications to sift through intelligence and offer recommendations for operations throughout the Pacific. Each emulates the seamless—and at this point banal—experience of asking a commercial chatbot to draft a cover letter.
Military AI models are built according to the same logic of scale as their civilian prototypes. They are engineered to recommend people and places for aerial bombardment and routes for ground raids by culling and synthesizing troves of information at a speed our rotting human brains cannot comprehend. These systems can do so because their makers trained them on information provided by other AI systems, like predictive analytics that generate recommendations of who or what constitutes a military target, biometric and object recognition systems that automatically identify people and things, algorithms that translate Palestinian-dialect Arabic into Hebrew, or Russian into Ukrainian, and location monitoring systems that can send alerts when someone enters a specific locale, like their family home.
This is problematic, technically speaking. The component parts of these systems are known to glitch, hallucinate, and mislead. AI applications that Israeli troops have used have reportedly mistranslated words (“payment” to “grip on a rocket launcher”), misrecognized objects (pipes for guns), miscategorized locations (evacuated when inhabited), and, according to the Israeli journalist Yuval Abraham in an investigation published last April, placed the wrong person on a kill list 10 percent of the time. Even Ukrainian military officials leading what Time calls a “war lab” have questioned just how effective all these emerging AI weapons systems are. But where such systems are technically flawed, they are ideologically robust. For militaries set on destruction, transparent and precise intelligence is less important than inflicting damage at scale. And for technology companies intent on vacuuming up the world’s data, opportunities to prototype new products are more valuable than ensuring such systems work as precisely as advertised.
Nowhere has this confluence of lethality and saturation been more evident than in Gaza over the last twenty-one months. For example, the Israeli Air Force attacked 1,500 targets in Gaza in the first forty-eight hours after October 7. Prime Minister Benjamin Netanyahu was not satisfied with that number. According to the Israeli daily Yedioth Ahronoth, he erupted in anger in a closed cabinet meeting on October 9. “Why not five thousand?” he demanded of the IDF’s then chief of staff, Herzi Halevi. “We don’t have five thousand approved targets,” Halevi replied. “I’m not interested in targets,” Netanyahu retorted. “Take down houses, bomb with everything you have.” A suite of AI-assisted targeting systems churned out that many and more, lending a veneer of algorithmic efficiency to a largely indiscriminate bombing campaign.
One year later, five thousand targets had become over forty thousand. They were generated and synthesized with computing infrastructure and component parts supplied by civilian AI companies—which is why tech giants saw their partnerships with the Israeli military surge in the first months of the war. According to internal documents leaked to the Guardian and Dropsite News, Microsoft offered Israel access to large language models through its Azure OpenAI service, along with Azure cloud infrastructure. Israeli soldiers logged onto Amazon cloud servers to coordinate airstrikes. The IDF requested AI applications like Gemini from Google as its forces experimented with automating decision making.
Although militarized AI is billed as an expedient war machine, its effect is often anything but. According to the New York Times, in October 2023 the Israeli military rolled out an AI audio analysis tool to locate Hamas commander Ibrahim Biari. The tool yielded only approximate coordinates, and the resulting airstrike killed him along with 125 uninvolved civilians. In June, +972 Magazine reported that intelligence units were relying on crude algorithmic analyses of phone usage patterns over a wide area to determine whether civilians had evacuated neighborhoods before authorizing an airstrike. Since Israel broke the ceasefire in March 2025, many of those attacks have killed only civilians.
Israeli bombs have reduced much of Gaza to rubble, displaced 1.5 million people, and taken the lives of upwards of sixty thousand, according to conservative estimates. The Israeli Air Force has expanded its operations into Lebanon and Iran as well. Meanwhile, the same Silicon Valley firms monopolizing the AI landscape have rolled back limitations on weapons development to increase the flow of data and cash from other military contracts.
Microsoft Azure offers AFRICOM access to cloud computing for warfighting while its Bing Image Creator saturates Facebook feeds with Shrimp Jesus. OpenAI powers many of the sex bots filling up your inbox and recently partnered with the defense startup Anduril to produce artificial intelligence “solutions” for military targeting. Amazon floods the marketplace with AI-generated e-books and provides U.S. intelligence agencies with access to Anthropic’s generative AI model Claude. Meta pays content creators for uncanny renditions of sea monsters and will soon supply the U.S. Army with AI systems to control drones and automate decision making. Whether churning out military targets or memes, most generative AI applications are deployed to prioritize volume and scale rather than authenticity and accuracy.
Innovations in atrocity and distraction have long gone hand in hand. In her 1992 essay “Aesthetics and Anaesthetics,” Susan Buck-Morss reminds us that “sensory addiction to a compensatory reality” is characteristic of modernity. The nineteenth century’s industrial revolution gave form to urban arcades, opera houses, and opium dens where patrons sought refuge from the grisly body count of mass production. The twentieth century birthed popular cinema, theme parks, and shopping malls—controlled environments that blocked out the mounting bloodshed of a globalizing world. I would like to situate slop within this trajectory: a substance wrung from the horrors of our contemporary moment, and one that hobbles the consumer’s ability to respond politically.
But even more than with these precursors, the profiteering and death undergirding today’s slop unfold in plain sight. I have spent the last year watching tech CEOs grinning alongside weapons manufacturers, social media moguls eroding constitutional protections and gutting the federal workforce, billionaires penning tracts hyping up an automated war machine. All of it sandwiched between reels of AI-generated babies impersonating Shakira and AI-generated Pilates instructors urging me to lose ten pounds. Although I know my phone addiction doubles as a source of revenue for the corporate CEOs turned military hawks, I find it incredibly hard to stop watching.
Which is why the definitions of slop currently on offer in the newspapers of record seem to miss the mark. Yes, our timelines are filled with an “uncanny stream of words and photos and videos that artificial intelligence spits out,” to quote Michelle Goldberg again. But that’s only half the story. Congealed in this artificial material is a cycle of militarization and monetization—blood, distraction, and gore—that ensures warfare drags on.