Cloud Control

Anthropic’s defiance of Trump and the new frontier of corporate governance

Whatever the business advantages of appeasing Donald Trump (and by this point they are well evident), corporate resistance still pays off. In late February, the Pentagon ordered the AI company Anthropic to remove terms-of-service restrictions that bar the government from using the LLM Claude for domestic surveillance and autonomous weapons. For refusing those demands and giving up its Pentagon contract, Anthropic and its chief executive Dario Amodei have emerged as popular darlings, feted by the media as new recruits to the resistance. People celebrated the company for standing up to the Trump administration's aggression, and Claude shot to the top of Apple's App Store. Tech workers, frustrated by Silicon Valley's turn against worker activism, have cheered the company's apparent commitment to safety. Meanwhile, Defense Secretary Pete Hegseth's threats to excommunicate Anthropic from the cushy world of government contracts have been widely denounced. Even a former Trump AI official described the administration's actions, particularly the designation of Anthropic as a "supply chain risk," as "attempted corporate murder" of a fast-rising AI champion.

It’s not often that disputes about government procurement get banner headlines or that an AI company is endowed with folk-hero goodwill. But for all the debate provoked by the incident, most commentary has observed its familiar Trumpian qualities: bullying private companies isn’t how the U.S. government is supposed to do business. The focus on President Trump’s transgression against private industry overlooks just how political markets are by their very nature; that a corporation has enough power to resist the state (for now) doesn’t bode particularly well for democratic self-government. Concentrated private power, even with the noblest of aims, is still a structural threat.

The struggle between the White House and Anthropic raises a basic question: what are the roles of the state and capital in relation to technology, especially technology perceived to be powerful? What happens when private entities sell technologies with the potential to shape all our futures, and how should we respond to private entities acquiring state-like qualities as a result of their leverage over our destiny?


The scuttlebutt behind the Silicon Valley–Washington standoff goes something like this: in 2024, Anthropic reached an agreement with Biden administration officials for the deployment of Claude on government classified systems. That deal, later ratified by the Trump administration during a routine contract renewal, included two restrictions: the government cannot use Claude for mass surveillance of Americans or to power fully autonomous weapons. By late 2025, the Pentagon, now under Hegseth's leadership, had begun straining against its leash. After learning about Anthropic's reservations over Claude's role in the U.S. raid on Venezuela, the DoD insisted on a new contract reflecting its ability to use Claude for "any lawful use." Anthropic rejected the new terms, citing concerns that they would permit the government to cross the company's redlines. In a blog post, Amodei explained that AI used in that fashion might "undermine, rather than defend, democratic values." On a less sentimental note, he also acknowledged that the tech just isn't there yet: such uses are "simply outside the bounds of what today's technology can safely and reliably do."

For its sins, Anthropic got the scorched-earth treatment from the White House. The administration threatened to invoke the Defense Production Act to force the company to work on the Pentagon’s terms. That threat came and went, with Trump instead ordering agencies to stop using tools made by the “RADICAL LEFT, WOKE COMPANY.” The DoD is now in the process of phasing Claude out. It has also designated the company a “supply chain risk,” blocking it from working with businesses in the military supply chain. For this last action—an unprecedented designation for an American company—Anthropic is suing the administration, claiming a violation of its First Amendment rights. (On March 26, a federal district court temporarily enjoined the administration’s actions, finding that Anthropic had shown a likelihood of success in its legal complaint. The government is now appealing that decision.)

For AI companies, exclusion from the U.S. military-industrial complex cuts off an increasingly important source of growth. As worries over an AI bubble build, firms have an incentive to take the guaranteed revenue of government contracts as a hedge against an industry downturn. Without that cushion, firms risk free fall if the complex circular financing driving the sector's world-altering levels of spending implodes and cash flows stop. If things come to a head, total government reliance on private AI technology renders a bailout more or less a fait accompli. By providing the technical architecture of national defense, AI companies make themselves "too essential to fail."

If Anthropic now appears more ethical relative to its competitors for biting the hand that feeds it, the actual operation of its safety commitments remains murky. Since 2024, Anthropic has deployed Claude on top of Palantir’s AI Platform, which enables data from a wide range of sources, including purchased commercial data, to feed into classified government operations. The U.S. military, through its Maven Smart System, has used Claude to identify and prioritize targets for the attack on Iran. It is not clear that any of those uses violates the company’s redlines. Nor does a focus on the permissibility of particular use cases address deeper concerns that LLMs are unfit for purpose in defense systems, as they fail to meet basic and longstanding safety standards.

Even if one accepts the sincerity of Amodei’s concerns over the danger of his company’s technology, there is an even more fundamental question: how should we govern powerful technologies developed by private entities and deployed by public ones? Although Silicon Valley has embraced the Trump administration—each of the “Magnificent Seven” tech companies has contributed funds to President Trump in some fashion—structural tensions between tech capital and the state remain unsettled. The government is increasingly reliant on commercial AI technology developed by the private sector; at the same time, AI firms are prioritizing defense applications of their products. From a business perspective, this is called synergy. From a political economy perspective, this is called codependency.

Mediating these pressures is as much an issue of control as of good governance. As political theorist Zaynab Quadri describes, "The relationship of weapons makers to government contracts is a classic political-economy problem: elected leaders seek to control the industry through funding, while the industry uses that money to try to control them." Control, that is, goes both ways, and ever-increasing private power undercuts democratic governance. In procurement, decades of industry-backed deregulation—spurred by the Clinton-era Democratic Leadership Council to shield military contractor profits—have hamstrung the government's leverage. Contractors frequently withhold even basic cost and pricing data. A former long-serving official in the Office of Federal Procurement Policy and Office of Management and Budget bleakly described the imbalance of power: "Where once government buyers had a mission of getting the best quality, prices, and terms for the public, now they are expected to 'partner' with industry."

If the government's only role in procurement is to be a party to a transaction, the Pentagon's extralegal exercise of power—especially its threat to deploy the Defense Production Act—has been denounced as anathema. The classical liberal architect of Trump's AI agenda, Dean Ball, has rebuked the administration and sided with Anthropic, warning that such actions forebode nothing less than the demise of private enterprise. But private property does not exist separately from laws, institutions, and political choices. All markets are made. As Karl Polanyi famously observed, laissez-faire was planned. And in an unusual twist, Hegseth's illegal actions prove Polanyi's point: procurement, like any state regulatory activity, has material consequences for the economic structure.

On the same day the Anthropic deal fell through, OpenAI did two things: it took over the contract with the DoD and announced a "strategic partnership" with Amazon that expands a $38 billion agreement, signed by the companies only a few months earlier, by $100 billion over eight years. It's unlikely the timing was a coincidence. The Amazon partnership substantially expands OpenAI's integration into Amazon Web Services' cloud and compute infrastructure. OpenAI had previously relied exclusively on Microsoft for cloud services, but since its restructuring in October 2025, the company has been looking to diversify providers. As it is, the Pentagon is structurally dependent on AWS, which contributes the largest chunk of the Pentagon's cloud capacity. The government contracted to deploy Claude in the first place largely because of compatibility: Anthropic's tools are the easiest to deploy on Amazon cloud infrastructure.

That is now changing. Because the federal government enjoys market power as a monopsonistic buyer—especially in national defense—its actions produce incentives for companies that want agencies to buy their products. It is plausible that the government's new opening for business, made possible by terminating its contract with Anthropic, was an incentive for OpenAI to close the deal with Amazon, with repercussions for the industry's dependencies on cloud providers and silicon hardware (the new deal also commits OpenAI to use Amazon's in-house Trainium chips, diversifying the company away from reliance on microchip giant Nvidia). However unintentional, Hegseth's gambit has encouraged interoperability and hardware diversification within the intensely concentrated cloud stack. It has also catalyzed changes within the tight labor market of elite AI talent, with high-profile engineers now leaving OpenAI for Anthropic.

Hegseth probably wasn't thinking about economic statecraft as he criticized Anthropic and "the ideological whims of Big Tech." But the defense secretary's actions make clear what Zephyr Teachout, Lina Khan, and others have emphasized for years: market structure is a site for political action. The market and state are co-constitutive. What many perceive as distinct and separate roles of the state and the market are not set in stone. Especially for a technology like AI, whose sweeping impacts and whose sectoral ambitions cannot be realized without state intervention, the supposed separation of roles reflects no natural distinction.


Well, why not cheer Anthropic? It is, after all, one of very few companies to publicly challenge the White House. Despite extreme disruptions to civic and business life during the second Trump presidency, corporations have largely stayed quiet. What's more, business and technology capital have lurched to the MAGA right, enticed by the administration's commitment to rapidly building AI infrastructure. But beneath the applause lies a deeper cultural shift, one that carries the implicit sentiment that democratic politics has hard limits and that the corporation—by some measures enjoying the most power it has ever had—is our de facto agent of history. This is no cause for optimism. The consequence of this sort of ideology, as the Marxist theorist Fredric Jameson described, is dispiriting: it "assures us that human beings make a mess of it when they try to control their destinies."

We may applaud the fact that Anthropic's redlines were more restrictive than the government's. But there are dangers in elevating policy-by-corporation above policy-by-representative-democracy. However genuine Anthropic's commitment to safety restrictions may be, the company's actions are a corporate end run around public governance. They inscribe the limits of what we can and cannot do. Anthropic is a private company; its behavior is guided by business judgment and the satisfaction of investors and customers. Yet through both its predominant place in the AI market and the unique capabilities of the technology it provides, it has the ability to exert government-like power. It can functionally set policy by making decisions with the potential to impact people's lives in fundamental ways.

The near-universal critiques of Trump's transgressions miss the point that a powerful state, making large political investments, is a prerequisite for structural attempts to shape the digital economy. To challenge oligopoly, rent-seeking, labor disruptions, intellectual theft, and ecological disaster, what is needed is not more courage from our vaunted corporations but a progressive and powerful state with popular legitimacy. To that end, the sociologist Melinda Cooper, borrowing from left activists of the 1960s and 1970s, offers one strategy: working "in and against" the state, expanding the social and environmental protections that only the state can advance, while simultaneously restricting its powers of discipline and violence. In the present context, this would mean constraining the private power of the sector (through antitrust enforcement), changing the structure of production (through equity stakes and public ownership), fostering public innovation (through targeted industrial policy and research investments), and strengthening labor rights and environmental protections. Critically, it would also require curbing the state's powers of surveillance and militarism, deflating bloated defense budgets and subverting the military-Keynesian logic that sustains Big Tech's pivot to war readiness.

There is an irony to the fact that Anthropic earned its flowers for insisting on the terms under which its technology can be used for war. Hours after Anthropic rejected the Defense Department's demands, the United States struck Iran using intelligence processed by the "Claude-with-too-many-guardrails" bemoaned earlier that week by Hegseth. Whether more or less restrictive, safety thresholds and contractual terms do not undo the logic of imperial conquest. Corporations may modify wars and make them more efficient; only popular sovereignty can stop them. All the same, the current juncture has opened up procurement policies to national debate and shown that when the government procures, it can compel private behavior for public welfare. The state has the power, in other words, to express public values and set new directions for technological innovation.

Getting there will take work. The Biden administration’s modest efforts to regulate and pursue industrial policy helped push Silicon Valley to Trump. Where things stand now, progressive statecraft is likely to be received as harshly as the current White House’s confrontation with Anthropic. Let’s say a future U.S. administration demands that AI companies, seeking business with the federal government, change critical operations of their system, from breaking dependencies to mitigating the social and environmental harms of their technologies. If AI companies resisted such requests—and why wouldn’t they?—would anyone cheer their intransigence? And as the government applied pressure to firms, would it be met with universal cries against coercion? If the state looks “overbearing” now, we are very far away from the governance needed to constrain the whims of high technology and take the future into our own hands.