E Pluribus Uh Oh

As American democracy comes apart at the seams, the best minds of our techno-culture have bravely volunteered to halt its undoing. Sure, the Zuckerbergs and Tim Apples of the world may have prostrated themselves before our new authoritarian-in-chief, and, yes, the platforms that they created have been proven to be key drivers of social fracture and reactionary radicalization—but they have a solution! The key to saving democracy isn’t regulation or oversight or even abundance, it’s more technology. And not just any technology, but the one technology poised to rule them all: artificial intelligence.
Last year, researchers from Google’s AI research lab, DeepMind, published a study in Science confidently titled “AI can help humans find common ground in democratic deliberation.” In it, they detail how they trained a large language model to generate consensus statements based on input from people with differing political opinions. The idea was that these systems could use crowdsourced feedback to craft sentences on politically contested topics—like childcare or immigration—that would maximize “group approval ratings.” It came up with such marvelously milquetoast constructions as “In general, free childcare is a good thing, but it is important to consider how it is provided and for which age groups,” which seemingly everyone could get behind. The researchers called this LLM a “Habermas Machine,” after the philosopher Jürgen Habermas, who spent the better part of his life exploring the complexities of communication.
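The published pipeline is more involved than this, but a minimal sketch of the aggregation logic described above might look like the following. To be clear, this is not DeepMind’s code or interface: `draft_candidates` (standing in for the LLM that writes candidate statements) and `predict_approval` (standing in for the learned model that guesses how much each participant would endorse a statement) are hypothetical placeholders assumed here for illustration.

```python
# A minimal, hypothetical sketch of consensus-by-aggregation: draft candidate
# statements from participants' opinions, predict each participant's approval
# of each candidate, and return the statement with the highest mean approval.
# draft_candidates() and predict_approval() are assumed stand-ins, not
# DeepMind's published interfaces.

from statistics import mean
from typing import Callable


def select_consensus_statement(
    opinions: list[str],
    draft_candidates: Callable[[list[str]], list[str]],
    predict_approval: Callable[[str, str], float],
) -> str:
    """Pick the candidate statement that maximizes mean predicted approval."""
    candidates = draft_candidates(opinions)  # e.g., several LLM-generated drafts

    def group_approval(statement: str) -> float:
        # Average the predicted approval (a score between 0 and 1) over all
        # participants' stated opinions.
        return mean(predict_approval(opinion, statement) for opinion in opinions)

    return max(candidates, key=group_approval)
```

Note that everything politically contentious is buried inside the approval predictor and the averaging step: the “consensus” is whatever statement scores best on an aggregate metric, not the product of anyone actually talking to anyone else.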
On the heels of the press this patently stupid tool received, the team over at Jigsaw—another head of the Google hydra—announced that in advance of America’s semiquincentennial, they were partnering with the Napolitan Institute to “convene a series of online conversations with Americans from every congressional district in the nation . . . using AI to scale the profound human insights of a traditional focus group to the size of a nation.” If American mythology once situated its democratic founding in coffeeshop congregations and political pamphlets, then these projects appeared to insinuate that its future would be secured through scalable language models and expanded focus groups.
It’s hard to blame anyone for hoping that some aspect of AI might be directed toward something—anything—better than collective immiseration. Yet if we look past the flag-waving fanfare to consider the underlying technology of Habermas machines and their ilk, something strange begins to happen: it becomes clear that these “pro-democracy” initiatives are no more than ideological cover for the authoritarian-minded crusaders running roughshod over the federal government. In fact, these superficially opposed projects are simply two sides of the same dreadful coin.
Where Trump’s first administration was defined by a generally symbiotic but tenuous relationship with Big Tech, the second would be, from the jump, a rancid union of Silicon Valley ideology and fascist impulse. Most discourse has tended to focus on how tech companies have kowtowed to Trumpian directive: cutting back on DEI initiatives, phasing out fact checking, and calling for more of whatever “masculine energy” is.
In many ways, the shameless about-face comes as the culmination of the tech world’s open embrace of self-advancing cynicism in recent years. Google famously disappeared most mentions of its “don’t be evil” motto from its code of conduct sometime in 2018, and as Jasmine Sun reports for The San Francisco Standard, the students at schools like Stanford, a bellwether for tech culture if there ever was one, have increasingly turned their sights on industries of death and domination. As one college senior interviewed for Sun’s article bewilderingly claims, “my most effective and moral friends are now working for Palantir,” the company that’s looking to become more effective and moral by “Connecting the Supply Chain to the Kill Chain,” among other upstanding endeavors.
However, this focus on Big Tech only gets at half the story, because if Silicon Valley was changed by its contact with Trump, then Trump was changed by his contact with Silicon Valley. To give credit where it’s due, the real estate mogul, reality TV star, and WWE Hall of Famer has always demonstrated a strong intuitive sense of contemporary media’s memetic potential; if FDR had the radio and JFK had television, Trump had all of the above, plus Twitter. Yet his second administration has fully taken the operating logic of Silicon Valley and set it loose on the government.
The most publicized expression of this program—at least in the first few months of the Trump administration—was the infamous Musk-championed DOGE, which applied the “move fast and break things” (emphasis on the break things) philosophy of Silicon Valley management to government. In the name of “efficiency,” the department slashed budgets, announced mass layoffs, and pursued a campaign of institutional deregulation that would line the pockets of the private firms brought in to do the work of a weakened government. Like most startups, DOGE was helmed by a buffoonish leader with more talk than walk, but its operational incompetence shouldn’t distract us from its wins. As one Silicon Valley venture capitalist told Politico, the department “entirely disrupted the way most people think government should work. It’s not a technical victory, but a cultural victory.”
Though DOGE’s slash-and-burn campaign—followed up by the “One Big Beautiful Bill’s” amputations to Medicaid—might make it seem like this “cultural victory” lay in the triumph of small-government libertarianism, it’s more accurate to understand them as the opening moves of an aspirant tech-powered monarchism. This vision of the future, shared by figures like J. D. Vance and his billionaire benefactor Peter Thiel, is less interested in anything to do with a return to states’ rights and federal deregulation, and more focused on transforming the country into a networked corporation steered by a Great Man CEO and powered by private firms and their operating systems. The end goal is platform authoritarianism, wherein citizens are treated as managed users rather than active constituents, and the founder figure has absolute power to rewrite the underlying code of the platform as he pleases—circumventing the Senate to install attorneys, change voting procedures, and so on.
When viewed in these terms, the Trump administration’s real innovation has been its inversion of friction’s role in the political process. Where it was once an essential part of self-styled American democracy—any high schooler could wax poetic about “checks and balances”—Trump 2.0 has learned from the logic of tech design and treated friction as an inefficiency to be removed, a bug in the final product. Like other contemporary platforms, this technologized authoritarianism uses frictionlessness to ensure that people stay on the platform and the platform enriches its owners. Members who can afford Gold Cards, for instance, can access the frictionless experience of a pay-to-play pathway to citizenship, while others are condemned to ICE detention centers. Meanwhile, financial instruments like Trump’s much-beloved crypto provide an expeditious method for foreign governments to directly purchase goodwill from the administration without the hassle of analog bribery.
In this brave new world, proprietary digital technology becomes essential to anticipating, identifying, and removing anything that threatens the “smooth” operation of the system. This strategy undergirds the multimillion-dollar contracts handed out to Palantir to develop surveillance tech for ICE (not to mention the decision to give this warmongering company access to sensitive CDC health data), all but guaranteeing they have the resources they need to deal with dissent before it even coalesces. Moreover, AI systems, fed everything from social media data to visa information, will be essential to scaling the administration’s “Catch and Revoke” campaign targeting non-U.S. citizens protesting genocide. As Erika Guevara-Rosas from Amnesty International notes, these tech-led systems have effectively overwhelmed the mediating procedures and institutions that once structured due process, meeting them with “rapid automated decisions” and “unprecedented speed.”
What’s strangely notable about this broader aspiration toward frictionless systems design, however, is the way that it echoes across the aisle. The modern neoliberal abundance bro, for example, is animated by an analogous desire to remove pesky points of apparent friction like rent freezes and environmental review in the name of productive efficiency. As Malcolm Harris writes in this magazine, “If a hammer thinks every problem is a nail then Abundance must be the work of a plumbing snake” that sees bureaucratic clogs as the only thing keeping us from utopia. Critically, this centrist utopia, as writers like Sandeep Vaheesan have observed, not only resembles what “Trump and Elon Musk are hoping to achieve by taking the chainsaw to federal agencies” but embraces a distinctly Silicon Valley vision for America insofar as it has more to say about projects like “self-driving electric vehicles than . . . public transit.” It betrays a vision of governance similarly biased toward efficiency, speed, and private-sector answers to public problems.
Indeed, as Democratic lawmakers support initiatives to reduce legal recourse against AI, it becomes hard to ignore how this fantasy of frictionless governance—the dream of a politics that provides a seamless user experience only for actors powerful enough to afford a premium membership—unites Trumpian platform authoritarianism and Democratic techno-solutionist centrism. At their core, both revolve around a desire to deploy technology as a means of circumventing the discord once considered essential to democracy while allowing powerful entities to opt out of political obligations and processes deemed too cumbersome.
When seen from this altitude, it becomes clear how the “pro-democratic” AI projects exemplified by the “Habermas Machine” are little more than pernicious agitprop—ideological cover. In their pursuit of a tech-enhanced politics, they replicate the same impulses running through our most recent anti-democratic turn. Their fetishization of ease and efficiency at the expense of deliberation ultimately ends up undermining yet another load-bearing pillar of democracy. Political consensus was never meant to be manufactured by aggregating averages or providing feedback through some seamless digital portal—as if it were another metric to be algorithmically generated like SEO rankings. It can only emerge from debate, education, material resistance, and self-organization. In short, actual political activity.
Ironically, Habermas once wrote about how a “technocratic consciousness” obscures the “practical” interest of mutual, intersubjective understanding in favor of a narrow focus on “the expansion of our power of technical control.” This consciousness removes ethics as a “category of life,” diverting attention away from the human world of value toward technical questions of administration—turning political communication into an engineering problem rather than an exercise embedded in larger questions of how we might want to live and be with one another.
Such a consciousness, as James Gordon Finlayson and Dafydd Huw Rees note, would yield a politics that has atrophied the “popular practice of democratic self-determination . . . to the technocratic administration of public affairs in the hands of small groups of experts.” That is to say, a world increasingly resembling our own. In this state, Habermas writes that “men would make their history with will, but without consciousness.” There’s no better way to describe the prospect of mindless language generators mediating our political discourse and this dream of frictionless governance: all will, no consciousness.
The funny thing is that the people working on this stuff don’t even seem to believe their own bullshit. At a recent invite-only summit, employees from companies like OpenAI and Google DeepMind met to discuss the impact AI might have on the “social contract.” They landed on a few consensus statements, one of which read: “The encroachment of AI systems and the erosion of the value of labor could lead to the increasing disempowerment of most humans, causing a degradation in individual well-being and purpose.” One can only guess whether they needed an AI mediator to arrive at such an illuminatingly obvious statement. Regardless, it appears that the real consensus coalescing around these machines is that they’ll end up reinforcing the feedback loop between an impotent civic culture and a tech-enhanced authoritarianism—defanging political discourse and selling it back to us as a SaaS offering that lets us feel like we’re heard without actually giving us power. But hey, at least when ICE raids your neighborhood and beats you bloody, you’ll be able to tell AI Uncle Sam how your experience was.