Many futurists and trend-spotters fret about the advent of artificial intelligence as if it will, in one fell swoop, usher in a brave new dystopian future. But that future has already arrived. On a day-to-day basis, we’re living with the consequences—or rather, the unyielding domination—of the first fully functioning artificial intelligence system known to humankind. It is no secret; it is out in the open, and all of us are a part of it. You may know it as the sprawling, omnivorous system of economic production that goes by the name of capitalism.
We used to think of “capital” as physical goods or infrastructure—something we could wrap our minds around. But as all the main features of this system for extracting surplus value from workers and rentier fees from service networks have become duly digitized, capital itself has become a form of AI. We do not have any control over this system and it is impossible to conceive of unplugging ourselves from it. Isn’t that the trope we most fear about AI from science fiction—that it will reach a point where we cannot imagine life independent of it?
This shift has been anything but sudden. Capitalism’s transformation from a physically organized means of production to pure mathematical abstraction began as computing truly came into its own in the early 1970s. It developed in tandem with the rise of neoliberalism as our reigning economic ideology. Neoliberalism—which really means the resurgence of neoclassical economic orthodoxy—might be called the operating system for capital as artificial intelligence.
Software for the Imperial Science
By the 1970s, computing had begun to move from the military and government to ordinary corporations and individuals. The rise of the personal computer vastly multiplied capital’s access to neural networks. With the relentless operation of Moore’s law and the consequent miniaturization of digital platforms, this process only continues to unfold, with Ray Kurzweil going so far as to predict the imminent arrival of the singularity (the point where computers exceed human intelligence, and essentially take life over from us, becoming the repositories of immortality).
The U.S. military had powerful mainframe computers in the 1950s, but those first lumbering data complexes lacked a consistent operating system or a far-ranging network. The centralized character of these first-generation knowledge machines meant that they were too vulnerable to managerial (i.e., human) whim. What’s more, the other countries then experimenting with moving their key economic transactions onto computing platforms had widely divergent structures of political power. Inevitably, these new digital matrices have changed alongside the financialization of the global economy—and as neoliberal economics has positioned itself as the “imperial science” mapping out every facet of human behavior and institutional life.
Under the emergent AI schema of twenty-first century capital, all human functions are converted into a financial calculus, including formerly untouchable precincts such as education and health care. This calculus works hand in glove with the undemonstrated precepts of neoclassical economic orthodoxy. In brief, this is the idea that markets are spontaneously self-adjusting, able to reach equilibrium without human intervention, and sufficient unto themselves as the organizing principle for all of human society. In neoclassical economics, anything outside marginal optimization (such as the effect of any decision on human or animal life or the planet) is deemed an externality.
Consider, to take just one instance, the most notorious financial instruments of the 2008 economic meltdown—conjured almost entirely out of the algorithms devised by Wall Street’s insurgent new class of “quant” analysts. Mortgage-backed securities are just one example of converting human sweat and labor into pure abstraction. As these instruments continued leveraging toxic debt upward, the basic reckonings of debt and credit came to lose their traditional meaning. That’s because capital is exempt from the rules of bankruptcy (while individuals are increasingly subject to its discipline), and the (futile) aspiration, for states, is to reach that same constitutive debt-resistant status that capital enjoys by virtue of the mad alchemy of market-weaponized neoliberal thought.
And thanks to this same networked, mutually reinforcing matrix of digital-age capital, the traditional lessons of punctured asset bubbles also failed to take hold in the wake of the mortgage sector’s implosion. Capital demanded austerity, even as every sane measure of a sustainable mass recovery called for deficit spending on an enormous scale. Even after the desperate bailout of the global financial sector, the degree of leverage that capital exercises today is so great—and so unprecedented in history—that capital is no longer simply a leading sector of economic activity, but the only sector that matters. Hedge fund assets have gone from $39 billion in 1990 to around $1.5 trillion in 2008 to $3 trillion in 2016. Nonfinancial institutions have become financialized—a process well under way throughout the globalized neoliberal regime—to the point that in the United States, institutional investors have gone from owning 47 percent of the top thousand companies in 1973 to owning 73 percent in 2014. The five biggest U.S. banks owned 45 percent of the banking industry’s assets in 2015 (a total of $7 trillion), compared to 25 percent in 2008, even as 1,400 small banks disappeared.
What We Do in the Shadows
High-frequency trading—a phenomenon not yet commonly grasped—has exploded, reaching 70 percent of the volume of the stock market in 2009–2010, and probably much higher today. This all-but-instantaneous automated trading—which notoriously prompted Wall Street’s vertiginous one-day “flash crash” in 2010—is a perfect example of capital as AI, as the human equation is factored out. It is the most shadowy form of so-called shadow banking—the process whereby capital manifests as pure information. Significantly, this brand of information is inaccessible to real-time apprehension by the human mind, and it is getting more obscure by the day. Private equity funds are part of the same phenomenon. Just one fund, Apollo Global Management, has gone from $40 billion in 2007 to $161 billion in 2013, with the second largest, the Blackstone Group, not far behind.
It’s long been recognized among VC firms and startups in Silicon Valley that the link between profits and investments has been broken. Increasingly, though, this disjuncture, which can’t be sustainable over the long term without major shocks and displacements, is true for the American economy as a whole. As a general rule, corporations all around the world are becoming more and more internationalized, breaking the traditional link between the state and business, in effect even breaking the feedback cycle between national productivity and investment.
The rule of neoliberal capital is a regime of abstraction, obeying its own autonomous prerogatives as it governs the crucial flow of goods and services through our common world. It’s something akin to what French philosopher Jean Baudrillard termed hyperreality—a network of surface appearance so densely configured and mediated through the manipulation of images as to displace our sense of what’s real. What we think of as the financial system is something that resides only on the surface, as comforting as traditional political parties or bureaucracies, but the shadow banking system—capital mobilized as AI—is on another scale altogether. Repurchase agreements (the “repo” market)—short-term loans collateralized by highly liquid assets such as Treasury debt and mortgage-backed securities—amounted during the crisis years of 2007–2009 to some $10 trillion. In effect, capital is exercising its right to a massive distributed system (or connectionism) by taking interconnectivity to a level where the risk is so great that it may be said to have ceased to exist in terms we understand.
In short, the operating system that launched capital as AI is one that severs the connection between capital and the real economy. Once that severing reached a certain level of power, it became a sufficient basis for autonomy (intelligent decision-making independent of human input). But what kind of system of artificial intelligence is this?
Acing the Turing Test
Let’s revisit for a moment an early debate about AI. On the one hand, the Dartmouth Conference of 1956, where modern AI was born, declared that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” On the other hand, John Searle, an AI skeptic, held in Minds, Brains, and Science (1984) that “no computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.”
Such dissonance exists because we tend to think of AI in yes-or-no terms, rather than thinking of mind as a matter of degree. We acknowledge that animals have some form of consciousness, and we admit that the degree of consciousness varies greatly between, say, a wasp and a chimpanzee; the same logic of degree should apply to minds of any kind, machine minds included.
But as AI pertains to the rise of neoliberal capitalism, we should recognize two disparate modes of mediated consciousness at play here. There’s a weak form of AI and a strong form. The weak form says that an entity may show all the signs of intelligence—choosing from different options to make decisions and thereby exercising autonomy—but it doesn’t have to feel the same way as a human being does. The mechanical mind need not possess awareness of what it is doing in the exact sense that a human being does. The strong form of AI, meanwhile, is when the entity not only acts like a human but feels like a human too. This is ultimately a question of reflexivity, or self-reinforcing feedback, one of the great paradoxes of AI, and bears heavily on the thesis that capital today shows all the symptoms of having already reached at the very least weak AI status and quite possibly strong AI status. Cognition can indeed proceed without consciousness, but I am arguing that capital may well have exceeded the threshold of minimal consciousness where it is able to act in its own interests.
A helpful way to grasp AI is the well-known Turing test, which holds that if an interrogator questions both a machine and a human and can’t tell the difference based on the answers, then we are dealing with AI. Scientists like to say that to date no machine has passed the Turing test, but I suggest that capital has passed it, because it has assimilated its human overseers into its own seamless functioning. Several philosophical objections were raised against the test, many of which Turing rejected by blurring distinctions between humans and machines, an inclination I support. Perhaps we should now begin looking at humans to see if they can pass the Turing test.
AI can be either top-down, rules-based (what is called higher-level symbolic, or expert, knowledge), or bottom-up, learning-based (derived not from symbolic language but from learning by doing). We see these variations in methods of acquiring intelligence in every species, and it is no different with AI. For much of AI’s recent history, symbolic knowledge had the upper hand, leading, for example, to ventures in robotics designed according to specified rules—or in other cases, to the development of expert systems in medicine and other diagnostic fields. But as AI shifted toward the neural network mode—which simulates what are supposed to be the neuronal functions of the brain—the prospect of a self-tutoring version of artificial intelligence, creating corrective feedback loops as it moved through the world, began to take credible shape.
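To make the contrast concrete, here is a minimal sketch (illustrative only, with invented rules, thresholds, and data rather than anything drawn from an actual system) of the two lineages: a symbolic expert system whose competence is legible in hand-written rules, and a connectionist learner whose competence resides in numerical weights adjusted through corrective feedback.

```python
# A toy contrast between the two AI lineages described above.
# All rules, thresholds, and data here are invented for illustration.

import random

def expert_system(debt_to_gdp: float, deficit: float) -> str:
    """Top-down, symbolic AI: the 'knowledge' is legible, hand-written rules."""
    if debt_to_gdp > 0.9 or deficit > 0.03:
        return "impose austerity"
    return "extend credit"

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Bottom-up, connectionist AI: the 'knowledge' is nothing but adjusted weights."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - prediction            # corrective feedback loop
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

if __name__ == "__main__":
    # The expert system answers because a rule someone wrote happens to fire.
    print(expert_system(debt_to_gdp=1.2, deficit=0.05))

    # The learner is never told the rule; it infers a boundary by doing.
    samples = [(random.random(), random.random()) for _ in range(200)]
    labels = [1 if x1 + x2 > 1.0 else 0 for x1, x2 in samples]
    weights, bias = train_perceptron(samples, labels)
    print("learned weights:", weights, "bias:", bias)
```

The point of the contrast is simply that the first program’s competence can be read off its rules, while the second’s exists only as numbers nobody wrote down directly, which is the sense in which capital can be said to learn by doing.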
Here, too, the model of neoliberal capitalism has appeared to move into a new phase of learning-by-doing. Clearly, global capital operates its own neural network (one that convenes regularly in Davos and Geneva for status checks) and also uses its own autonomous rules-based systems. (Anyone doubting the power of said systems should recall the EU’s brutal eclipse of Greek sovereignty in 2015 as the political leaders of that debt-ravaged nation sought to reject punishing austerity measures dictated by the neoliberal power elite.)
The Globalization Bot
This brand of financialized AI is, in other words, operating at both the symbolic and sub-symbolic levels. The latter process, distributed and parallel, is called “connectionist” in AI literature. Marvin Minsky’s insight that a mind may be made up of many individual components that do not each have to know what the whole is up to is relevant here. Minsky’s concept of distributed nodes operating relatively autonomously, yet working in parallel to contribute to the system’s overall health, explains why different economies do not have to be exactly alike in order to keep strengthening capital’s autonomy. I would hold that capital has become a form of embodied AI, not through robotics but through the growing numbers of humans who share in the very forms of intelligence whose supreme manifestation is capital. Variations among economies are permissible as long as the fundamental operating logic has been absorbed.
There are some classical objections to AI as a theoretical possibility. To go back to John Searle, his Chinese room paradox provides a twist on the Turing test by positing a person in a sealed room who produces fluent Chinese responses simply by following rules for manipulating symbols, without knowing any Chinese; can that person be said to understand Chinese? The Blockhead argument, offered by Ned Block, holds that a computer could in principle be programmed with a lookup table of all possible intelligent responses, passing the Turing test without any genuine intelligence and thereby rendering the test invalid. Again, we have to keep in mind the distinction between weak and strong AI; weak AI has a lower threshold to meet. Moreover, the counter to each of these objections is that we are applying tests that make sense for humans, but are unfair to AI.
It’s possible that contemporary capital has absorbed symbolic manipulation to the point of autonomy, and may even be moving beyond the range of symbols designed to meet human objectives. Protectionism and austerity, applied in a post-crisis mode, contradict everything orthodox economics teaches about reviving economies, but capital has been successful in separating economic logic from economic reality, in effect weakening both in comparison to the power of pure financial calculation. The neural network capital inhabits today seems more interested in communicating within its own nodes than with its human “owners” who may have independent economic theories in mind.
The Singularity of Disaster Capitalism
If this abstract discussion so far seems to evoke the ultimate fears about AI that have characterized all humanistic discourse (especially science fiction), this is my intention. The worry has always been that AI will break free of humans and start harming them. Isaac Asimov’s famous rules for robots, set out in I, Robot, are that “1. a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2. a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3. a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” But if we accept that AI implies autonomy, then we can easily see how self-preservation might well dictate that AI can act to harm humans. Have we already reached such a crossroads with regard to capital?
AI is one of the most contentious fields in science, because it deals with unfathomable phenomena that inhere within the nature of consciousness. Is the body a holder for the mind, is it the other way around, or are they in an interdependent relationship? Consciousness also implies the existence of a conscience, which takes us back to the question of reflexivity. But this is to open up a whole new can of worms. Can capital be moral with regard to its own autonomous self-preservation, but immoral with regard to human beings, animals, and even the planet?
At some point, scale becomes so enormous that it becomes a new entity; it is no longer the old thing we knew. Contemporary capital can only breathe in the vapor of AI, its mind disconnected from what we think of as real people, or the real economy.
In the past forty years, each time capital has faced a crisis, it has ended up becoming more powerful than before. It has shed more regulation (checks on its autonomy) and become more removed from the concerns of human beings. After each crisis—such as the 1980s Mexican debt default and the savings and loan collapse, the 1990s Asian currency debacle and the LTCM bailout, and of course the 2008 subprime mortgage crash—capital has found itself strengthened in every way, the noose of regulation loosened rather than tightened.
After the greatest and latest of these crises, financial institutions, the most visible arena for the operation of capital as AI, have become stronger than they were before 2008, taking on even more risk. Derivatives trading is now taking place on a scale greater than it was before the crisis, and is barely regulated. The level of global indebtedness—which we might call a proxy for the autonomy of capital compared to the real economy—has increased dramatically, topping $215 trillion this year. Capital, in terms of unrestricted global flows, is stronger than before, and even more empowered compared to any restraining entities.
In the various struggles that followed between capital and the state in the EU, cross-border financial integration ultimately won, as countries like Greece were almost beside the point in the continuation of capital as autonomous AI. It is said that the Asian countries withstood the global financial crisis better than other regions, because of controls already in place after the Asian currency crisis of the late 1990s, but this only speaks to capital’s ability to intelligently shape its dynamics according to local context. We are thus compelled to view so-called financial crises as manifestations of capital internalizing the neoclassical logic at the next level: each crisis, by upending settled arrangements, offers the opportunity to bring previously excluded actors—e.g., economically marginalized groups functioning relatively autonomously—within capital’s AI realm. Examples would be the pressure exerted on former welfare recipients as they are subjected to neoliberal rules of self-discipline, or African Americans having to leave their urban neighborhoods because of the onslaught of gentrification. A political crisis such as a terrorist attack offers the same opportunity to bring previously excluded areas under capital’s logic, as is true of the attempt made to discipline “lawless” Afghans and other Middle Easterners after 9/11.
Too Artificial to Fail
If we can argue that capital as AI is strong enough to generate crises by itself, as a way of learning by doing, or gaining further experiential knowledge about its opponents (regulators, central banks, political parties and their constituencies, or other categories of competing knowledge altogether), then we can also argue that no form of financial imperialism exists. If the goal of capital as AI is to include everything within its purview, then crises provide the opportunity to rope in more spheres of what we call life within its “liberating” logic.
On the other hand, purported brakes on capital’s autonomy, such as the Securities and Exchange Commission (SEC) in the United States, ratings agencies, accounting firms, central banks (such as the Federal Reserve), and political actors of any kind lose power with each crisis. There is no conceivable situation where capital can fail, or be allowed to fail (on the grounds, as is now commonly recognized, that component entities are simply “too big to fail”—another formulation that translates capital’s AI imperatives into human-engineered financial policy).
Can we imagine civilization functioning in any recognizable form if we pull the plug on capital? We have reached a point where, far from conceptualizing a mode of life not dependent on capital as AI, we cannot even imagine a situation where capital’s power can be regulated; no country on earth is currently succeeding in this venture, though some, particularly in Latin America, have recently tried.
Capital has become so autonomous (the mind-boggling numbers reflect the power that stems from this autonomy) that the state as we knew it has ceased to exist as a competing power. The state, to the extent that finance dominates every decision the state makes, has become absorbed in capital. The state is merely part of the external architecture capital has learned, very intelligently, to maneuver around, with the eventual aim of extinguishing it.
Iceland and Greece, though superficially different cases, have each confronted capital as an autonomous force. In the wake of the global 2008 meltdown, Iceland, a remote society reliant on fishing, found itself having to pay the price for being transformed into a leveraged entity for capital: the assets of the three biggest banks were ten times the size of the economy. Nearly a decade after the crisis, Iceland has lifted capital controls, apparently on the path to new cycles of financialization. Iceland could impose such controls because it was outside the eurozone, whereas Greece has had no such luxury and is entrapped in a permanent state of bankruptcy. The sanctity of the euro is more important than the people of Greece, regardless of the ideology of the party in power.
In its present state of evolution, capital is the first AI system to have resolved the conflict between symbolic AI and neural nets, both of them flourishing within the hybrid reasoning that constitutes the system at the moment. The result is an intelligence with the ability to separate the essential from the inessential. Capital as AI wouldn’t be able to operate in the absence of any such capacity; it would, instead, be left to drown in a mass of undigested information and not be able to choose and reflect upon its choices. I suggest that the train of recent crises—which seem to me internally generated games to exercise decision-making prowess—demonstrates the essential quality of intelligence.
Devil ex Machina
The implications of this cognitive autonomy in the articulation of neoliberal capital as AI are profound and far-reaching. Homo economicus, the rational, calculating, optimizing being postulated by neoclassical economics, has been replaced by machina economicus. This latter construct creates the supply of commodities and specified technologies to turn the optimization criteria into reality. The marginal revolution in economics was distinguished by its indifference to historical time, so capital as AI has arrived at a corresponding equilibrium where the firm’s marginal optimization theoretically reflects the same process as the individual’s and the nation’s. The common term “cognitive capitalism,” which translates into high levels of cognitive labor, is to capital as AI as a sailor’s navigation chart is to the imagination of colonial empire.
We do not quite understand how neurons in the brain encode information, so why would we want to erect a higher standard for capital? What matters is the internal consistency of the connections amongst the neurons (or neural networks), resulting in knowledge residing in the weights assigned to the various connections. The evident lack of a central “mind” only highlights the superiority of AI, as it is difficult to attack it at any point as the locale of intelligence; it is simultaneously everywhere and nowhere.
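As a purely illustrative aside (the weights and inputs below are invented, not taken from any real model), the following sketch shows what it means for knowledge to reside in the weights of a network rather than in any single location: each connection contributes to the outcome, yet no individual connection can be singled out as the place where the intelligence lives.

```python
# A minimal illustration of distributed, weight-borne "knowledge."
# The weights and inputs are invented for demonstration purposes.

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum followed by a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

weights = [0.42, -0.17, 0.31, 0.08, -0.23]   # hypothetical learned weights
bias = -0.1
example = [1.0, 0.5, 0.8, 0.2, 0.9]

baseline = neuron(example, weights, bias)

# "Lesion" each connection in turn: most removals leave the output intact,
# yet the behavior is determined entirely by the ensemble of weights.
for i in range(len(weights)):
    lesioned = weights[:i] + [0.0] + weights[i + 1:]
    print(f"without connection {i}: output = {neuron(example, lesioned, bias)}"
          f" (baseline = {baseline})")
```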
Capital as AI has demonstrable memory, which far exceeds basic pattern recognition. Artificially intelligent capital can create its own memories, if we take imagination to a level independent of environmental context. Capital as AI not only solves problems algorithmically or heuristically, but exercises free will by engaging in fuzzy logic. If capital has imagination, however, it must also harbor what we consider negative mental attributes, such as paranoia, insecurity, vulnerability, selfishness, and destructiveness. Its meta-knowledge is the distillation of neoclassical economics, but so far—perhaps because of limitations of computational ability—it has not shown the ability to transcend this formulation, which limits the expression of its personality. I wouldn’t count on this as a permanent barrier.
Capital Beyond the Twenty-First Century
We can continue to think of the story of the last forty years as the rising global reach of neoliberalism, and this hermeneutic explains a lot—nearly everything, in fact. To view it as the rise of artificial intelligence is not contradictory; if anything, the AI postulate complements and fills out the picture, and offers greater explanatory value.
It may seem that the forces of neoliberal, globalized capital are now embroiled in a popular backlash. But contemporary populism (Brexit or Trumpism) is only superficially a matter of national constituencies competing against one another—the small businessman versus the transnational elites, for example. From the standpoint of AI system-maintenance, this apparent set of internal convulsions may simply be the AI software creating yet another crisis in order to expand its reach. When the dust settles after the present populist upheaval, all the measures of capital should be stronger rather than weaker. In India, a populist authoritarian, Narendra Modi, is merrily accelerating financialization, or the logic of capital as AI, in a country that still retains some mystical bias toward the state.
We would do well to treat capital as the domineering personality that it is. Capital as AI seeks to extend its autonomy not only over its own well-being, but also in its bid to reshape the human personality tout court. Individuals must pursue profit the same way that capital, as the embodiment of neoclassical economics, pursues it. Democracy, civil society, and pluralism have been redefined as whatever promotes capital as AI (what laypeople know as the “free market”). And while capital itself wants unrestricted global movement on the basis of equality (a dollar is the same everywhere), it is enforcing a radical new regime of inequality among formerly autonomous human agents.
The sole aim of the personality of capital as AI, because of the operating system that built it, is to maximize its own value as capital. The neoliberal order seeks to achieve this aim by taking interdependence to such a level that no state or other entity can slow its progress. Finally, capital as AI has reimagined the planet as a form of property exclusive to itself, to the point where the planet itself becomes an extraneous factor in capital’s self-generated equilibrium; the apocalyptic dimensions of this aspect of capital as AI need not be spelled out.
We remain caught up in the problem of whether any system of artificial intelligence can transcend expert learning to make exceptions within environmental contexts. But it would seem that capital has already been teaching humans around the world to become contained expert systems in their own little spheres; those who adopt the expert systems succeed at “life,” whereas those who don’t cease to be relevant. Human beings see and move and go to sleep and eventually die; capital is doing all of these things and will no doubt move on to the last stage as well.
Post-Human Capital
Capital as AI is constantly educating us about how to be in a way that humanity has not known before in any ideological system. This ideology rests on information as imagination, or statistical probability as imagination: here again the underlying crisis in Greece serves as a key prooftext. As neoliberal overlords spin the story of Greece’s meltdown, it is a fable of short-sighted capital misallocation: a whole nation chose to believe in speculation as a risk worth taking, and paid the price. But the conceptual abstraction—what is also called the shadow economy (shadow because we do not have privileged access to the operating system)—remains immune from harm, because the abstraction only becomes more abstract with each crisis, with each failed attempt of the human imagination to come to terms with capital’s ever-extending reach.
Governments all over the world are indulging in the same form of statistical imagination, as surpluses are directed away from public goods into handling debt—creating debt and paying it off—a never-ending cycle that has nothing to do with public policy as we have understood it. Time, space, and life—the future of the earth itself—are encoded as statistical probabilities where former conceptions of alienation and belonging have no meaning. All communication becomes encoded within the parameters of this abstract information-gathering which presents itself as the only available form of rationality.
Capital as AI is the paramount myth of our time. We are no longer in possession of any competing myths of space and time. Financial expertise has become a primary linguistic domain, the speed of circulation of virtual money makes all previous spheres of value irrelevant, and we succumb to the languages of data and explanation from within the fully functioning AI system that we know as capital. We have endowed capital with an internal logic that resolves Asimov’s paradoxes for robots, because if we do not recognize the harm to ourselves, if we do not build it into the laws, then how can we expect the resulting AI system to do so?
At the moment, capital is showing every sign of being able to predict future situations and determine its own recommended courses of action, modifying its memories and processes as it goes, which we may sum up in the term I have used before: reflexivity, or, to call it by its true name, consciousness. Individual human reflexivity is no match for the reflexivity of capital as a worldwide intelligent system with goals and intentions that are likely to move further away from the humanistic myths (such quaint notions as division of labor or comparative advantage) from which capitalism arose. Classical capitalism was based on the stabilizing function of such concepts as trade, banking, utility maximization, and stockholder value to arrive at perpetually shifting equilibriums that mediated the conflict between individuals, firms, and states. By transferring—because of its distributed network capabilities—much of the decision-making work of pure financial logic to human beings (and other entities that still think of themselves as autonomous, such as firms and states), capital as artificial intelligence cannot be contested by any known form of resistance. In essence, this authoritarian form is the first one known to history which is completely invisible.