Bill of Health

How market logic hobbles the nation's hospitals

The intensive care units where I work as a physician were almost unrecognizable this April: doctors and nurses outfitted in head-to-toe Tyvek suits—bright white protective onesies with hoods—along with face masks, translucent face-shields, and shoe coverings. We squinted, at times, to recognize one another. As our ICUs filled up with Covid-19 patients, we improvised more space: the post-op recovery area became a Covid-only ICU that, for unclear reasons having nothing to do with geography, we dubbed “ICU North.” The regular hospital floors were filled with coronavirus patients on oxygen who were monitored closely for deterioration, or the need for intubation. If ICU patients are sick by definition, these were often the sickest of the sick: lung failure, again and again, led to heart failure and kidney failure, necessitating artificial breathing with ventilators, adrenaline-substitutes dripped into the big veins of the chest, and emergency dialysis. On the busiest days, I felt pulled from one patient to the next, from rounds with residents to assessing a tenuous patient on the hospital floor who might need intubation, then back again to the ICU, perhaps for an emergent procedure. “Codes”—CPR and other resuscitative efforts undertaken to save a patient whose heart has stopped—seemed more frequent than ever before.

Later, data would confirm what we suspected to be true: two of the working-class, disproportionately immigrant towns near Boston that our health system served, Everett and Chelsea, were among the hardest hit municipalities anywhere in Massachusetts during the first wave of the pandemic. At one point in late April, some 88 percent of our beds were filled with Covid-19 patients. Epidemics are not, as the cliché goes, “great equalizers”—they tend to exacerbate social inequality. Telecommuting and social distancing are not options that everyone can afford; the chronic illnesses that predispose people to acute disease follow social gradients; oppressive conditions weather the human frame. It was little surprise that, like other hospitals in Covid-19 hotspots, we were busy. We felt full.

But just as we found ourselves feeling busier than ever before, something strange happened to hospitals nationwide: the sector went into free fall. To prepare for a potential swell of patients with severe Covid-19, and to preserve critical supplies like ventilators and personal protective equipment (PPE), hospitals cancelled elective, often lucrative, surgical procedures across the board. Clinic visits were also cancelled so as to avoid turning physicians’ offices into coronavirus incubators. Meanwhile, desperate efforts to acquire additional supplies and staff put hospitals into bidding wars against each other. Costs rose, revenue sank, and hospitals went into the red.

In the first quarter of 2020, spending on health services plunged by 18 percent, sinking gross domestic product with it. The health sector shed some 1.4 million jobs, according to the Bureau of Labor Statistics. Some 243,000 lost employment at doctors’ offices, about 135,000 at hospitals, and around a half million at dentists’ offices. Physicians and nurses, even in Covid-19 hotspots, faced pay cuts and job losses in the middle of the pandemic. Boston Medical Center, a safety-net hospital that provided a large amount of care to Covid patients, was expected to lose $100 million in April and May, the Boston Globe reported, and proceeded to furlough some 10 percent of its staff.

This is unprecedented: health care is a famously resilient industry in the face of economic downturn. “The data clearly show,” notes a 2018 article in the Monthly Labor Review, “that the Great Recession had little, if any, negative effect on job growth in health care”—even while unemployment soared to 10 percent nationwide. But Covid-19 is different: health services shuttered across the country at the very moment that millions of people lost income and health insurance. The consequences were absurd: in the midst of a pandemic the likes of which we have not seen for a century, the U.S. health system was, paradoxically, defunded.

In truth, this is no paradox. Covid-19 hit the health care industry with simultaneous supply and demand shocks. Hospitals’ “products”—particularly elective surgeries that command high reimbursement rates from private insurers—could no longer be sold. Wares that were suddenly more valuable—like pandemic preparation, or prolonged care for patients with respiratory failure—did not pay enough, or at all. The U.S. health finance system was functioning exactly as it was designed to: it was operating as a business, and this quarter was very bad for business. The crisis hence laid bare what happens when a health system is built on the framework of capitalism—when health care is packaged into marketable units, when some patients make money for hospitals, and others do not.

The Price You Pay

Figuring out exactly how to pay hospitals is hardly intuitive. Lucrative procedures are hospitals’ lifeblood in this era, but in the mid-fifteenth century, care was subsidized by hawking wine. In 1450, the conversion of grapes to alcohol was the single biggest source of revenue for one charity hospital located in the town of Beauvais, France; like other medieval hospitals, its revenue came mostly from agriculture and land rents. The only payments it sourced from patients themselves amounted to a mere eleven livres of its 766-livre budget, raised by selling off the bedding and clothing that dead patients had brought along when admitted. But, as William Glaser makes clear in his seminal treatise on hospital financing, Paying the Hospital, from which I’ve drawn these facts and figures, the idea that hospital budgets would be funded by patient payments is a relatively modern development, even as we take it for granted today.

After all, early hospitals were conceived not as places for the rich, but for the sick poor. They emerged as distinctly charitable institutions in the Byzantine Empire before spreading to the Islamic world and Western Europe. “Until quite recently,” notes the historian Michel Mollat, “the poor were almost the only clients of hospitals, places in which they felt truly at home.” The well-to-do, in contrast, saw hospitals as death traps to be avoided at all costs. And in an era of primitive therapy, the most useful things hospitals really had to offer were nursing care and housing.

With the advent of modern medicine, however, everything changed: hospital care increasingly became a service that the well-off wanted rather than feared. Alongside the usual charity cases, hospital administrators in the early twentieth century eagerly sought well-off patients who would pay for private rooms and treatment from their private surgeon or doctor. In later decades, payments for services were increasingly made by a third party, initially not-for-profit insurers in the United States (i.e., Blue Cross) that grew to cover much of the U.S. workforce in the postwar era, or the social insurance funds that expanded into universal systems in Europe. Differential treatment by class faded in Europe; in the United States, not as much.

But on both sides of the Atlantic, hospitals didn’t bother trying to calculate a precise “price” for providing care for a particular patient, whether measured in the number of dressings applied, minutes of nursing care provided, tablets of medications swallowed, or soiled bedding washed. As Glaser describes, for a century if not longer, the “daily charge” (known as the prix de journée in France, the Pflegesatz in German-speaking nations, or the sengedagstakst in Denmark—there were versions in just about every country) was the most common way hospitals were financed. The accounting was simple: hospitals divided up the total projected budget for the coming year by the total number of nights that patients were expected to sleep in their beds. Social insurance funds paid this per diem amount for each night a worker stayed within a hospital’s walls.
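
The arithmetic lends itself to a few lines of code. Here is a minimal sketch of the per diem model, with every figure invented for the sake of illustration:

```python
# Illustrative sketch of the per diem ("daily charge") model; all figures are
# invented. The projected annual budget is divided by the expected number of
# bed-nights to yield a flat nightly rate, owed for each night a patient stays.

projected_annual_budget = 50_000_000  # hypothetical yearly operating costs, in dollars
expected_patient_days = 100_000       # hypothetical total bed-nights for the year

per_diem_rate = projected_annual_budget / expected_patient_days  # 500.0 dollars per night

def reimbursement(nights: int) -> float:
    """Payment owed for one patient's stay under the per diem model."""
    return nights * per_diem_rate

print(reimbursement(7))  # 3500.0 -- a week-long stay at the flat rate
```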

When Medicare was passed in 1965, it more or less adopted this practice as well, following Blue Cross, but with fewer strings attached than in European systems. Crucially, the U.S. per diem rules allowed hospitals to fold spending on expansion and debt interest into their reimbursable costs, which spurred a historic spree of debt- and profit-driven expansion, driving up health care spending. But the biggest single difference that set the American system apart was even simpler: in the United States uniquely, as Glaser notes, hospitals were allowed to retain profits.

If the per diem rate was a “price” of sorts, it was a profoundly simple one, especially in Europe: it reflected the actual cost of running the institution, divided among the patient-days. But the American pricing rules were easily gamed: in the 1970s, critics came to identify cost-based reimbursement as the cause of the nation’s rising health care costs and saw hospital payment as ripe for disruption. “I mean, it was just stupid,” said the former head of the national lobbying group of for-profit hospitals, quoted by political scientist Rick Mayes in his history of the hospital payment reforms of the 1970s and 1980s. “You don’t give people their costs, because you just give them an incentive to spend more.” In other words, when you pay hospitals whatever it costs them to provide care, they have no incentive to hold down their operating costs—whether in terms of labor, technology, or physical plant. There was no market discipline imposed on hospitals, no competition. This became the conventional wisdom, notwithstanding the fact that cost-based per diems, when coupled with public financing and capital controls, had proven entirely compatible with cost control in Europe.

Patients or Products?

In the 1970s, a group of researchers at Yale University believed they had come up with a solution: the DRG, or diagnosis-related group. To get to the bottom of hospital operating costs, the researchers consulted Robert Fetter, a professor at the university’s school of management. “Well, tell me what your products are,” Fetter asked one of the researchers, according to Rick Mayes’s narrative. The response was: “We treat patients.” Fetter retorted, per Mayes, with something like: “and Ford makes cars, but there’s a big difference between a Pinto and a Lincoln.” So the team crunched Connecticut-wide hospital data using a new computer program they invented, and came up with more than three hundred categories of hospital care, or DRGs. Each was a product that could have a price. For instance, if today you were hospitalized for a severe infection (“sepsis”) that required less than ninety-six hours of mechanical ventilation (MV) and that caused a major complication or comorbidity (MCC), like kidney failure, you might be billed as DRG 871, “SEPTICEMIA OR SEVERE SEPSIS WITHOUT MV >96 HOURS WITH MCC.”
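
A toy sketch can convey how such a grouping works. The rules below are hypothetical and drastically simplified (the real Medicare grouper weighs many more data elements, and the neighboring group numbers are shown only for illustration), but they gesture at the logic of sorting a hospitalization into a billable category:

```python
# Toy DRG-style grouper; hypothetical, drastically simplified rules. Principal
# diagnosis, hours of mechanical ventilation (MV), and the presence of a major
# complication or comorbidity (MCC) sort the hospitalization into a group.

def assign_drg(diagnosis: str, mv_hours: int, has_mcc: bool) -> str:
    if diagnosis == "sepsis":
        if mv_hours > 96:
            return "DRG 870: severe sepsis with MV >96 hours"
        if has_mcc:
            return "DRG 871: severe sepsis without MV >96 hours, with MCC"
        return "DRG 872: severe sepsis without MV >96 hours, without MCC"
    return "one of hundreds of other groups"

# The example from the text: sepsis, briefly ventilated, with kidney failure (an MCC).
print(assign_drg("sepsis", mv_hours=48, has_mcc=True))  # -> DRG 871
```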

A DRG is supposed to reflect the average costs for the average bundle of care needed by the average patient for a given type of hospitalization. After a brief trial in New Jersey, the new system was rolled out nationwide under a law signed by Ronald Reagan in 1983 that was designed to cut Medicare spending. DRGs have been hailed as a revolution, but it may be that the cure was worse than the disease. It is true that hospitals slashed patients’ length of stay in response to being paid lump sums instead of per diem reimbursements by Medicare, as one might expect: annual hospital days quickly tumbled by nearly 30 percent from 1981 to 1988. That might seem like a good thing—who wants to sit in a hospital for a week after recovering from an uncomplicated appendix removal?—but the implications for patients were rather more mixed. Many patients were merely shipped off to nursing homes rather than allowed to recover in the hospital. And unlike with hospital care, conventional insurance might not cover the nursing home stay.
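
Why a lump sum payment shortens stays is plain from a back-of-the-envelope comparison. In the sketch below, where every dollar figure is invented, each additional night adds revenue under a per diem but only adds cost under a fixed DRG payment:

```python
# Hypothetical comparison of hospital margins under the two payment models.
# Under a per diem, each extra night brings in more revenue; under a DRG lump
# sum, payment is fixed at admission, so each extra night is pure cost.

PER_DIEM_RATE = 500   # hypothetical payment per night, in dollars
DRG_LUMP_SUM = 3_500  # hypothetical fixed payment for the whole stay
NIGHTLY_COST = 400    # hypothetical cost to the hospital per night

for nights in (3, 7, 14):
    per_diem_margin = nights * (PER_DIEM_RATE - NIGHTLY_COST)
    drg_margin = DRG_LUMP_SUM - nights * NIGHTLY_COST
    print(f"{nights} nights: per diem margin {per_diem_margin}, DRG margin {drg_margin}")

# At 3 nights the DRG margin is 2300 versus 300 under the per diem; by 14
# nights the DRG margin has gone negative (-2100) while the per diem's keeps
# growing (1400). The incentive to discharge early is built into the payment.
```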

Cost control also proved to be a mirage. Hospitals were at least as good at gaming the DRG system as they had been at gaming the per diems. After all, the DRG was determined by what doctors and hospital coders and billers said the diagnosis was. Hospitals brought in brigades of consultants to fine-tune billing—leading to what has been termed “DRG creep” or “upcoding.” Minor differences in how a diagnosis was worded could have enormous ramifications for how much a hospital was paid, creating a large incentive to get the wording “right.”
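
The payoff from getting the wording “right” follows from the basic structure of DRG payment, in which each group carries a relative weight that multiplies a hospital’s base rate. The base rate and weights in this sketch are illustrative stand-ins, not actual Medicare values:

```python
# Why wording matters: payment is roughly (hospital base rate x DRG relative
# weight). The figures below are illustrative stand-ins, not actual CMS values.
# Documenting a major complication or comorbidity (MCC) shifts the stay into a
# heavier-weighted group and substantially raises the payment.

BASE_RATE = 6_000  # hypothetical base payment per case, in dollars
RELATIVE_WEIGHT = {
    "sepsis without MCC": 1.0,  # illustrative weight
    "sepsis with MCC": 1.8,     # illustrative weight
}

for drg, weight in RELATIVE_WEIGHT.items():
    print(f"{drg}: ${BASE_RATE * weight:,.0f}")

# sepsis without MCC: $6,000
# sepsis with MCC: $10,800
```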

The new system also changed the nation’s hospital infrastructure itself: alongside shorter hospital stays, the DRG era saw a marked decline in hospital beds. As the nation braced for the arrival of Covid-19 in early spring, many commentators noted with some surprise that the United States had so few hospital beds: just 2.8 per 1,000 people, compared to 8 in Germany, or 13.1 in Japan, according to the OECD. And while DRGs did succeed in reining in Medicare spending in the 1980s, overall costs did not go down—hospitals merely shifted them to private payers, at least as long as they were able to. Health economist David Cutler found in 1998 that every dollar of reduced Medicare hospital spending in the latter half of the 1980s was accompanied by a dollar of increased spending by private insurers. Private insurance premiums soared as the 1990s arrived, and today’s health care cost crisis was born.

In short, DRGs failed miserably to contain hospital costs, and American medicine mushroomed into more of a business than ever before. Hospitals, whether for-profit or not-for-profit, made something of a final leap into modern corporate structure during the 1980s. The Reagan administration deregulated hospital expansions; health care providers began to consolidate. Powerful for-profit corporate hospital chains, like Tenet Healthcare and Hospital Corporation of America, gained market share. And not coincidentally, it was precisely at this moment that U.S. health care costs began their historic ascent. “America was in the realm of other countries in per-capita health spending through about 1980,” noted health care researcher Austin Frakt in the New York Times in May 2018. “Then it diverged.”

If the DRG didn’t singlehandedly cause these changes, it did decisively reframe the provision of care as an (ostensibly) interchangeable commodity. Some DRGs are profitable, while others are not, and because payment for a DRG depends on the payer—the payouts from private insurers are nowadays lucrative compared to those from public insurers like Medicaid and Medicare—some patients are profitable, while others are not. In 2017, the CEO of the Mayo Clinic caused a firestorm when a leaked speech revealed he had said that the giant health system would “prioritize” those with private insurance over those with public insurance, but he was really just describing the reality of American health care. Hospitals that produce the more profitable DRGs can grow, expand, and beautify; those that produce the less profitable DRGs, for poorly insured and disadvantaged people, are less well-endowed. And hospitals that can’t turn a profit at all simply go bust, like those shuttering across rural America, or the safety-net institution in downtown Philadelphia that closed last year. The story of the American hospital, up to the Covid age, is the story of inequality in America.

How the Other Half Convalesces

In May, the Covid tide ebbed at our hospital. Fewer patients were rushed to the intensive care unit from the emergency room or the hospital floors each day. I spent less time zipped up in a Tyvek suit. We closed the ad hoc “ICU North.” My pager went off less frequently while I was at home on call, permitting longer stretches of sleep. Colleagues and I began having longer lunches, albeit alone at our own tables in the cafeteria, spaced farther apart than they used to be. Some of our patients even attended follow-up appointments after weeks-long comas.

Operating rooms throughout the country began to re-open; elective procedures, many sorely needed, were again performed. Hospitals, meanwhile, sought to move their patients with persistent respiratory failure out to specialized long-term hospitals, as is the norm in the United States. It was clear that, at least for the moment, at least in Massachusetts, the pandemic was retreating. What was less clear was what would change in the post-Covid world, both within and without the hospital.

To deal with the shortfalls in hospital funding stemming from plummeting patient billing for profitable DRGs like elective surgery, Congress included some $100 billion in bailout funds in the CARES Act, a big coronavirus relief bill signed into law by President Trump on March 27 (another $75 billion came from subsequent legislation). But it was soon clear that this relief would rectify no inequalities. One formula for disbursing the funds, the Los Angeles Times reported, allocated relief based on hospitals’ previous revenues, which meant that hospitals that took in more revenue from wealthier patients with private insurance would receive far more “relief” than those that provided care to the uninsured or those with Medicaid.

A study from the Kaiser Family Foundation estimated that the hospitals likely to reap the biggest payouts were for-profit institutions and those that provided less charity care. The Hospital Corporation of America and Tenet Healthcare together took in some $1.5 billion in federal relief, according to the New York Times. Our small health system fared less handsomely: its CEO told the Los Angeles Times that we would see some $90 million in losses but could expect only $6.7 million in relief. He noted that some of the hospitals’ work—for instance, setting up a homeless shelter for individuals with coronavirus—would not even be considered in Trump’s formula. As June approached, the stage seemed set for further consolidation and corporatization of our nation’s hospitals, and a further exacerbation of inequity among them.

In the five years leading up to the Covid-19 pandemic, the Medicare for All movement achieved something momentous: it helped push single-payer from the radical sidelines of American politics to the very center of the national discussion. For a brief and exciting moment right before the coronavirus careened across the nation, it even seemed that our next president might be the Vermont senator who had put Medicare for All at the center of his long political career. As cases and deaths from the outbreak continued to mount in April, we learned that this would not come to pass. But the case for Medicare for All only grew stronger.

Some 18.2 million Americans at high risk of severe Covid, whether because of advanced age or chronic disease, were uninsured or underinsured at the onset of the epidemic, according to a study that colleagues and I published in the Journal of General Internal Medicine in June. Not surprisingly, these individuals were disproportionately black, Latinx, and American Indian, and from low-income families. The specter of people going bankrupt from Covid-related care, or avoiding testing because they couldn’t afford it, proved too much even for Congress and Trump: legislation was passed that protected, albeit inadequately, many individuals from medical costs for Covid testing and treatment. Nothing, however, has yet been done for the tens of millions who are liable to be bankrupted from treatment of every other ailment or injury.

Still, the epidemic has perhaps strengthened the notion that a person’s ability to obtain medical care should not be contingent on his or her ability to pay, which is to say, as an economist might, that our “effective demand” for care should depend only on our individual preferences and medical needs—not our income or wealth. That requires universal coverage and the elimination of financial barriers to care, something only a single-payer reform would realize. But what even the proponents of Medicare for All sometimes fail to emphasize, and what needs to be understood as we begin to imagine a better post-Covid-19 world, is that health care reform must not only achieve equity in demand but must also address the ubiquitous inequities and dysfunctions in the supply of care that stem from our nation’s unique history of hospital financing.

We should recognize, for instance, that the basic philosophical premise of the DRG—that a hospitalization can be treated as a product and priced like any other commodity—is a fallacy of the neoliberal age. If every DRG were “priced” appropriately, then none would be more profitable than any other. Hospitals would have no incentive to focus on the provision of high-tech procedures over the care of those with, say, protracted respiratory failure, substance use disorders, or mental illness. There would be no “profitable” and “unprofitable” patients. But hospitals do have such incentives, and there are such patients: prima facie evidence that this system has failed.

The DRG system, moreover, fails to fund other important things that hospitals might do, like helping to set up homeless shelters for Covid-19 patients or other community public health projects that cannot be billed to individuals. Treating hospitalizations as individual billable commodities also means that there is no incentive to maintain excess capacity—say, of ICU beds or ventilators—to be ready for the next pandemic or climate catastrophe. After all, empty hospital capacity is an idled machine; it has social value but lacks utility in a system governed by market logic. So it does not exist.

Original Spin

The point is not to go backward; hospitals aren’t going to go back to making money selling wine or dead patients’ bedding, fortunately. Instead, we must move forward to full public financing of hospitals, not as commodity-producing factories, but as social institutions, with guaranteed annual global budgets that could be used for the care of hospitalized patients and the provision of community care services alike. Global budgets, long included in Physicians for a National Health Program’s single-payer proposals, are lump sum payments that operate much as public school funding typically does. Schools receive a budget for the year to cover the cost of educating all the children who attend; they don’t issue individualized bills to each student that ostensibly reflect his or her unique educational “costs,” whether measured in the predicted frequency of interactions with teachers, the amount of wear and tear imposed on the playground, or the consumption of extracurricular activities. Global hospital budgets would not crumble in the face of disaster, for they would not be linked to the sale of individual products. Global budgets could save us money, yes, through the downsizing of the bloated billing and administrative departments, designed to maximize payments, that have been growing since the 1980s. But they would mean something more fundamental: moving to a world beyond health care products and prices.
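
The contrast with DRG billing can be sketched simply. In the toy example below, where all figures are invented, itemized per-case revenue collapses when the case mix shifts away from lucrative procedures, as it did this spring, while a global budget holds steady:

```python
# Toy contrast between itemized DRG revenue and a global budget; all figures
# invented. Per-case revenue depends on which patients come through the door;
# a global budget is a negotiated lump sum, independent of the case mix.

DRG_PAYMENT = {"elective surgery": 12_000, "respiratory failure": 9_000}
GLOBAL_BUDGET = 40_000_000  # hypothetical negotiated annual lump sum

def drg_revenue(cases: dict) -> int:
    """Annual revenue if the hospital bills per case."""
    return sum(DRG_PAYMENT[drg] * n for drg, n in cases.items())

normal_year = {"elective surgery": 3_000, "respiratory failure": 200}
pandemic_year = {"elective surgery": 500, "respiratory failure": 1_500}

print(drg_revenue(normal_year))    # 37800000
print(drg_revenue(pandemic_year))  # 19500000 -- billing revenue collapses
print(GLOBAL_BUDGET)               # 40000000 -- unchanged in either year
```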

This requires changing something even more fundamental: our health care institutions could no longer retain profits. “No law of nature,” as my colleagues (and co-founders of Physicians for a National Health Program) David Himmelstein and Steffie Woolhandler, together with Public Citizen’s Sidney Wolfe, argued in a medical journal, “requires that hospitals make profits in order to thrive.” Health care profits are our original sin; they led to unbridled expansion and waste in the 1970s, when hospitals were paid per diem, but also today, when they are paid mostly with DRGs. Although the majority of hospitals are not-for-profit legally, all must have revenue greater than costs: whether we call it a “margin” or a “profit” matters little. This difference is what hospitals use to upgrade facilities, to build new wings, and to purchase the latest equipment and machines. Those purchases, lumped under the term “capital expenditures,” are necessary not only for success but for survival. High margins can mean technological prowess, beautiful atriums, and the latest treatments on offer. As margins evaporate, hospitals become shabby and out-of-date, abandoned by physicians and patients alike.

The consequence of this profit-oriented financing system is a combination of deprivation and excess. That kind of inequality of health care supply has a name. In 1971, the British general practitioner Julian Tudor Hart coined a phrase that describes it well: the “inverse care law.” “The availability of good medical care,” he wrote in The Lancet, “tends to vary inversely with the need for it in the population served.” Those who need care the most, that is to say, have the least access to it. The inverse care law operated even in the United Kingdom in the age of the National Health Service (NHS), as Hart witnessed in his daily practice, where he cared for a working-class population in an impoverished Welsh mining town. While the NHS eliminated inequalities in effective demand through the realization of universal, free health care, it inherited a geography and distribution of unequal supply that had been built up over decades.

Yet the NHS, unlike a market-based system, provided the tools to unravel this inheritance. In the 1970s, the UK carried out an intentional, and successful, effort to achieve greater geographical equity in financing. As Hart noted, the “inverse care law operates more completely where medical care is most exposed to market forces, and less so where such exposure is reduced.” As such, the inverse care law operates in the United States today as in no other high-income nation. But we can change that. We could fund new hospitals and new health infrastructure not from profits, but from the public purse, something that the Medicare for All bills now in Congress, particularly the House version, would achieve. Hospital expansion would then be premised on health needs, not market logic.

Inequalities in health care demand and health care supply share a single root: the partially successful venture to build a health care system on the principles of capitalism. Today, market forces limit the ability of working-class people to obtain needed medical services, and indeed distribute those services toward those who can afford them, just as they distribute hospitals and health infrastructure by profitability, not community health needs. The hospitals of the poor and people of color shutter, or contend with paltry funding, for the very same reason that their patients are more apt to be uninsured or ration their insulin: market logic underlies both inequities of American medicine. As we move into the post-Covid age, we must fight for a just health care system that addresses both.