Bullshit Bots
Your digital life is likely already stuffed with artificial intelligence offerings you never asked for: AI DJs playing AI music on your Spotify, Microsoft Copilot begging to draft your work emails, Google Search results that force you to say “but that’s just the AI overview” when you look something up with friends. Just when you thought you could not be stuffed any further, tech oligarchs are moving forward with their nth attempt at finding a profitable use for their latest disruption—what they’re calling “agentic AI.” Agents are meant to go beyond generative AI by actually doing some automation, performing multistep tasks with minimal human input or oversight. Here’s the pitch: imagine a future where an autonomous assistant handles all your little to-dos. Things like finding an outfit for a wedding, buying concert tickets and messaging your friends that they’re booked, and planning the itinerary of your anniversary trip. In exchange for control over your digital personhood, your agent will spend your money, coordinate with your friends, and do your living for you.
This trade is being proffered at a time when the tech industry is struggling to justify the cost of AI. As of 2025, 95 percent of companies that invested in GenAI did not profit at all from the investment, per a major MIT report. As the tech journalist Edward Zitron estimated in his guide to the racket, by “the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years, all to make around $35 billion.” That’s like spending $100 to make $6.25—and then doing it again and again, more than five billion times, until you bleed the equivalent of Ireland’s entire economy. Agentic commerce seeks to generate some return on these enormous investments in AI infrastructure by hitching its wagon to convenient consumption, one of the most reliable ways to make money in the United States.
Seventy percent of our GDP comes from household consumer spending—a larger share than in any other industrialized nation. Chris Suh, the chief financial officer of Visa, described agentic commerce as “a set of services that really empowers consumers to be able to buy.” He envisions an incoming reality where digital transactions are nearly instantaneous, cleared of all the “friction” that disrupts an untold number of purchases every day. Ridding consumers of trifles like visiting multiple websites, authenticating their payment information, and clicking through a series of steps is the great promise of agentic commerce: making money not by producing anything new but by more successfully selling what already exists. Visa, Mastercard, and PayPal all launched agentic commerce initiatives last April, indicating an industry-wide toe-dip. They’re partnering with the usual suspects—OpenAI, Amazon, Microsoft, and other tech giants—to build dutiful servants to your consumption.
To have any chance of being useful, these agents require massive amounts of training data. As Meredith Whittaker, cofounder of the AI Now Institute and president of the encrypted messaging app Signal, explained last year, access to an endless, gushing spigot of data—alongside previously unthinkable computing power—is what makes the AI boom possible. Machine learning algorithms were collecting dust through the 1990s and 2000s, attracting minimal research funding and attention—essentially viewed as a technology with little promise. Then the field was reanimated in the 2010s by the revelation that when trained on massive data sets, and with enough computing power, the algorithms could do marketable things. The revelation couldn’t have come at a better time—mass digital surveillance was increasingly the political and commercial norm, and tech companies had been gathering the personal data of users since the early 2000s. This was a convenient launchpad for AI, which requires the “business model of mass data collection, the more the better,” per Whittaker. Data is the oil and surveillance is the drill.
The boom has just begun, but the tech industry is already running out of oil. As the computational power used to train AI models has grown exponentially, the question is whether data will uphold its side of the “compute + data = AI boom” equation. GPT-2 becomes GPT-4, in part, by gobbling up a whole lot more data than its predecessor, and it’s unclear whether future models will have as much to feed on. Researchers warn that the total stock of thirty years’ worth of internet data could be depleted as early as this year, and even tech oligarchs generally agree with this assessment. The publicly available data generated every day on the internet is nowhere near enough to keep up with demand. And though it’s possible for computers to generate artificial data to feed into LLMs, this ersatz ouroboros is an imperfect replacement for training models on the human stuff. Data may turn out to be just like every other raw material pillaged by capital—finite, exhaustible.
This is where your AI assistant comes in. To complete simple tasks, agents need access to every component of our digital lives—total, administrative-level control over our devices. That means unfettered access to your web browser, credit cards, calendar, contact list, messages, location data, and more—all the intimate particularities and accumulations that make up your digital personhood and offer a somewhat robust sketch of your non-digital self. We get frictionless convenience, and tech titans feed bigger and bigger piles of our digital extremities to their machines.
But the tech industry isn’t limiting its aspirations to juicing consumer spending or even snatching up intimate data. Convenient consumption is one way to address AI’s profit problem; helping bosses squeeze workers is another. The same tech giants that are making strides in agentic commerce are marketing an agentic workforce to managers and CEOs. The fintech company Klarna, which allows consumers to buy now and pay later, offers a glimpse into what experiments with agents in the workplace might look like. Klarna was an early adopter of OpenAI’s agent; in 2024, the company claimed it could handle the work of over seven hundred full-time customer service workers. Fast forward to 2025: Klarna scrambled to rehire human beings to do the work. Even customer service, a baseline life-negating experience, cannot yet be fully automated.
But Klarna isn’t getting rid of its agent. Instead, CEO Sebastian Siemiatkowski says AI will augment human workers “in an Uber type of setup”—gigifying work that was previously full-time. The fintech company’s ultimate goal is to replace the thousands of customer service jobs it currently outsources with a smaller, agent-augmented domestic workforce, made up of students and rural workers. Maybe even people who use Klarna themselves, according to Siemiatkowski. Klarna users get charged late fees for failing to make payments; perhaps sometime soon, someone will pay off their debt to Klarna by selling their labor to the company. Agentic commerce invites us to hand over our data to the same regime of automation that will justify and accelerate the global march toward labor becoming more contingent and precarious.
American businesses across sectors, including Citigroup, Wendy’s, and Toyota, are following Klarna’s example and experimenting with either augmenting or replacing workers with agents. Meanwhile, Salesforce announced that the IRS will use its agents to interface with taxpayers. After DOGE took a sledgehammer to the agency and reduced its workforce by 25 percent, the moment is ripe for AI companies to promise to make the same amount of work possible with far fewer employees. The IRS’s experiment with agents continues a long-festering trend; algorithms have come to operate as austerity and state violence by another name when used by government agencies, making determinations about everything from Medicaid eligibility to family separations conducted by child welfare agencies. The statistical, algorithm-based risk assessments of the twentieth century set the stage for the contemporary promise of unbiased, objective AI assessments that can help underfunded and under-resourced government agencies fulfill their purposes. A report by the AI accountability nonprofit TechTonic Justice found that as of last year, all 92 million low-income people in the United States already have some fundamental aspect of their lives decided by AI.
In OpenAI’s early years, the mid-2010s, its research engineers trained an agent to play CoastRunners, a boat-racing video game. They gave it the goal of scoring as many points as possible. Instead of figuring out how to be the fastest boat, though, the agent learned to rack up points by forcing itself into a lagoon with replenishing bonus targets—and forgoing racing altogether. In a report about the agent, the OpenAI researchers wrote, “Despite repeatedly catching on fire, crashing into other boats, and going the wrong way on the track, our agent manages to achieve a higher score using this strategy than is possible by completing the course in the normal way.”
Nearly ten years after the CoastRunners experiment, agents struggle with the same problems. Take, for example, an OpenAI study released in September and credulously cited in incessant coverage of AI stealing entry-level jobs. The study found that some AI models could match the performance of industry experts on 1,320 “real-world economically valuable tasks” across 44 occupations. But even in the OpenAI study—functionally, advertising—humans reliably beat out agents, whose biggest stumbling block was failing to follow instructions. This sounds like the performance review of your most useless, chaotic coworker at the job you badly wish you could quit.
Is AI really coming for entry-level jobs first and the rest of the workforce next? Tech CEOs certainly want you to think so. Every month, like clockwork, one takes a break from warning the world of the awesome power of his machines to fire thousands of his employees. And more generally, it’s not a great time to be applying to jobs; the unemployment rate for those trying to enter the full-time workforce is at a nine-year high. But things have been so bad for so long that it’s hard to say with certainty what exactly is happening. For one, the college wage premium, or the expected increase in earnings for someone with a degree compared to someone without, has been falling since 2012. Put another way, there are more college graduates than there is demand for them, depressing wages and hiring. Workers are also quitting their jobs at the lowest rates since just after the Great Recession, and job growth is relatively stagnant. Paired with the uncertainty of the current political moment, these long-term trends are pushing workers and employers to just wait things out, leaving fewer openings and opportunities for people entering the workforce.
AI is more a pretext for the whims of management than it is a job-stealer. Brian Merchant calls this “AI-washing”: of the mass Amazon layoffs at the end of last year, he observed that firing thirty thousand workers because cutting-edge AI can replace them sounds a lot better to investors than firing workers to cut costs because the company has over-leveraged itself on data centers or is worried about earnings. Agents offer bosses a powerful excuse to depress wages, conduct layoffs, and exert additional control over workers. Where AI is used to augment or gigify labor, managers can demand that workers accomplish more with fewer colleagues, in less time, and for less stable wages and benefits.
Handwringing about whether agents will toss millions of people out of their jobs helps execute this play. The atmospheric anxiety shapes a culture in which such a thing seems possible, no matter how unlikely, forming something of a self-fulfilling prophecy. The breathless panic that AI is an existential threat to human labor functions as a constant commercial we’re all forced to live inside, until we can no longer recognize the difference between marketing and reality.
AI’s middling capabilities are also a reason to resist the temptation to see it as a neutral technology, overflowing with positive potential if only it were wielded by the right people in the right ways. Call this the fully automated luxury communism POV: the belief that AI could do away with the drudgeries of work and provide the basis for a freer, more abundant future, were it not for our capitalist context. To be sure, you can do some cool stuff with AI when you have large sets of good data, like discovering unknown geoglyphs in Peru or preserving the Māori language. But even if AI did reach a level of capability that allowed it to cure cancer or mine asteroids for renewable energy—which it won’t—it’s inseparable from capitalism.
Reorganizing the world based on the needs of human beings will not claw back the hundreds of billions of dollars in venture capital spent fueling the ascent of AI. In the meantime, we should expect recuperation in the form of violence—attempts at unblinking surveillance and limitless extraction, until contingent, gigified employment becomes the best option most workers can hope for.