The Judgment of Paris

Facebook vs. the Communards

In August 2019, Facebook settled an eight-year legal dispute with a French schoolteacher. The complaint, made by Frédéric Durand, accused the platform of deleting his account because he had posted an image of Gustave Courbet’s iconic oil painting, L’Origine du monde. A closely cropped, realistic portrayal of a woman’s genitals, Courbet’s 1866 work was reportedly in violation of Facebook’s no-nudity policy, which led the company to expunge Durand’s account, notwithstanding that he had accumulated eight hundred followers for his posts about art. He had been seeking to have it restored ever since.

It’s easy to ignore such journalistic ephemera of the web, these fleeting moments we observe in our scrolling. Facebook, after all, got off easy in its encounter with the French legal system when both parties agreed to settle by making donations to a street art group. But buried in a mildly captivating headline is an example of what Van Wyck Brooks called a usable past, which is about cutting the cloth of history to fashion a claim for how the present, and the future, could be shaped in a particular image. Courbet’s encounter with Zuckerberg is such a case because it shows us that the seemingly novel problems of the digital age—mass surveillance, automation, algorithmic bias—are often better understood as the latest installments of humankind’s ongoing social project. They are not so much debates about technology as discussions about politics, or, in this case, politics and art.

Courbet’s L’Origine du monde is a captivating painting because it can be read as both shocking and banal, essentially pornographic and yet deferential to the everyday miracle of women and their capacity to create life. This kind of jarring juxtaposition was one of Courbet’s key artistic contributions. His large canvases, a format his contemporaries reserved for religious subjects, were devoted to depictions of peasants. He sought to break the conventions of fine art by populating it with real people, elevating their status and breaking down the divide between viewer and subject. The female nude—that most sacred form of the traditional fine arts—was rendered by Courbet with blistering realism. Reviewing his painting Les Baigneuses, an image of a fat, naked bather, critics were too distracted to consider his use of form or perspective. They were, as one analysis described it, “more outraged with the grime incrusting the dents in her buttocks.” Courbet’s artistic legacy is drawn from his socialism but also his sensationalism. “I painted the picture so that it would be refused,” Courbet remarked of one of his works. “I have succeeded. That way it will bring me some money.” It’s easy to imagine him in the digital age sitting somewhere between a sophisticated meme artist and an expert troll.

For a modern web platform to take down L’Origine du monde on account of indecency is to replicate the behavior of those stuffy gatekeepers at the Salon of the Académie des Beaux-Arts. Both Facebook and the Académie share the experience of being scandalized by Courbet’s relentless boundary pushing. For the Académie this was entirely expected, not least by Courbet himself. But from Facebook we feel entitled to expect something different. Facebook capsized modern business orthodoxy with a casual disregard for accepted wisdom, with an embrace of fast failure, with hoodies and the Californian ideology. And yet this very twenty-first-century phenomenon looks in reality like a nineteenth-century institution.

When the focus is on Courbet’s delicate brushstrokes, the problem comes into sharp relief. Mainstream American sexuality has always held firm to tradition, easily identified in cultural depictions of gender roles and fairy tale love stories. To suggest, culturally or politically, that women might possess subjectivity, or that there might be ambiguity in their status as either Madonna or whore, has often risked generating a 501 error. It’s almost too endearing that a contrarian artist who has been dead for over a century managed to manifest in the digital present and expose the prudery of a modern tech emperor.

So Courbet is survived by his sensationalism, and yet this moment of remembrance also presents an opportunity to consider his socialism. The other part of the painter’s story—too easily forgotten in the gilded galleries where his works are now on display—is his involvement in the Paris Commune. And the Commune, I would argue, is more revealing of our present predicament than anything that can be rendered in oils-as-pixels.

Capacities in Motion

In 1871, for a few short months, Paris was the epicenter of a political earthquake. The city was transformed into an autonomously organized society, a commune that experimented with alternative ways of structuring social and political life based on collaboration and cooperation. Books have been written about its achievements, quite rightly, but some highlights are easy to select. The Commune abolished conscription and the standing army; established a union of women; declared the separation of church and state; converted all church property into national property; made education free; liberated political prisoners; suspended the payment of all rents for six months; and publicly burned the guillotine. It was radical and expansive. It upended restrictive ideas of what democracy could achieve by making the concept concrete. The experiment was not preplanned, nor was it entirely spontaneous; it was the political outcome of organization and struggle.

Our muse for the present analytical exercise, Courbet, was naturally in the thick of it. He joined the Communards and was appointed president of the newly formed Artists’ Federation. The Federation sought to liberate artistic practice from state sponsorship. Art, as a social institution, was reimagined as open and belonging to the public, rather than governed by a curated, officially sanctioned set of rules about aesthetics. The Federation was about helping artists reclaim their autonomy and dignity, and it engaged more actively with the public than with institutions and wealthy patrons. Under the Commune, artistic education was to be administered by artists, who would organize and collaborate freely without the supervision and privilege of state subsidies, and the public was actively invited to discuss aesthetic questions with the Federation. Artistic practice became more of a peer-to-peer exercise, in which collaboration and experimentation were encouraged, and the boundaries between audience and creator were less rigidly policed. Traditional understandings of the social order were repeatedly broken down in the Commune, and one form this took was the democratization of art.

“More important than any laws the Communards were able to enact was simply the way in which their daily workings inverted entrenched hierarchies and divisions,” writes Kristin Ross in Communal Luxury: The Political Imaginary of the Paris Commune. “First and foremost among these the division between manual and artistic or intellectual labour.” At the helm of this movement in artistic terms was Courbet, and perhaps more significantly, the poet Eugène Pottier, who went on to write what became the lyrics to the socialist anthem “The Internationale.” For Pottier, something of the intellectual powerhouse in the Federation, overcoming the division between manual and artistic labor “set capacities in motion,” as Ross describes it, creating the conditions for what he called “communal luxury.”

The “Call for Artists” that had gone out in April to form the basis of the Federation attracted four hundred attendees. The diversity of participants was telling: it included painters, sculptors, and architects, as well as artisan cabinet-makers and decorative artists. A sense of equality among all those who pursued artistic endeavors was a precondition of participation in the Federation, one that deliberately rejected the state-sponsored curation of highbrow culture that had preceded it. Pottier himself began writing poetry while apprenticed as a box-maker in his teens. The Federation represented an attempt to organize society in a way that returned power to those who produced things, including when the things they produced were artistic in nature.

“Solidarity grows through increasing liberty, not through constraint or obligation,” writes Ross. “Personal autonomy and social solidarity do not oppose each other, but instead reinforce each other.” In an age in which online spaces feel more divisive and polarized than ever, perhaps it is time to ponder how we can create conditions of personal autonomy that give rise to greater social solidarity. Perhaps it is the structure of these spaces that is at fault, rather than the individuals within them. Centrally determined “community standards” enforced by automated takedowns and de-platforming might generate tendencies that are more infantilizing than civilizing. A sense of freedom with responsibility in online spaces is unlikely to be cultivated when those who set the boundaries of good taste and political correctness are more interested in applying constraint than promoting solidarity.

A Crisis of Giving a Shit

One way to understand debates about content moderation, therefore, is less as a novel question of the digital age and more as part of a longer struggle between officially sanctioned views and social and political autonomy. Facebook likes to think its semi-autonomous processes of content moderation are designed to be neutral and non-discriminatory. The company’s youthful presentation belies its establishment tendencies. It talks blandly about “connecting people” but has become the bouncer for digital culture, and ultimately politics. The export of American values—individuality, liberty, self-actualization—into the globalized digital space may have seemed noble when discussed in a dorm at Harvard. But at scale, we are starting to see its contours in a harsher light.

Nerdish modesty and white-bread liberalism narrow our cultural horizons, but they also serve our political spaces poorly. Systems of content moderation at scale require decisions that ought to be considered difficult and complex but are treated with all the precision of a sledgehammer. The boundaries of political acceptability might seem straightforward when Trump falsely claims electoral victory. But consider that Facebook has also suspended the accounts of environmental activists as part of a policy to counter disinformation about the climate crisis. In the lead-up to the election, the company also de-platformed the accounts of hundreds of skinheads in one fell swoop, locking out both racists and those deeply committed to anti-racism.

Facebook claimed such actions were in error. But there are alarming implications here. Alexandria Ocasio-Cortez perhaps inadvertently revealed as much in her post-election interview with the New York Times. “If you’re not spending $200,000 on Facebook with fund-raising, persuasion, volunteer recruitment, get-out-the-vote the week before the election,” she declared, “you are not firing on all cylinders.” The left is more dependent than ever on these digital platforms, and trusting executives to make calls about content moderation leaves activists vulnerable. Should Facebook ban Holocaust deniers, for example, it is possible to imagine this leading to a call for a ban on anti-Semitic content, defined so broadly as to include criticism of Israel. Facebook has a track record of working with Israel to tackle what they collectively called “incitement.” In the wake of the violent riots at the Capitol on January 6, 2021, President Biden has vowed to take on domestic terrorists, including by employing those who previously led the War on Terror abroad. It is easy to imagine surveillance of social media being a central part of this project, which, if history is any guide, has serious implications for the civil liberties of people and groups not just on the far right but across the political spectrum.

The silencing of specific voices, accidental or otherwise, is only one part of a much bigger problem. Zeynep Tufekci has argued that a better way to understand these issues is to see them as a “crisis of attention more than a crisis of speech.” By that metric, Facebook’s bland liberalism and indifference have ultimately led it to take on the role of facilitator of right-wing radicalism, intense racism, pervasive bullying, and even genocidal tendencies. As this content has proliferated, the company has resisted accountability, despite criticism from governments, human rights groups, and even its own staff members. In September 2020, BuzzFeed News published a story about an internal memo authored by Sophie Zhang, a former Facebook data scientist who worked on the company’s Site Integrity fake engagement team, doing things like identifying bots and addressing disinformation campaigns in elections. In her memo, she describes how she struggled with a lack of resources: the company preferred to focus on global activity that posed public relations risks, rather than issues of electoral or civic harm. “Facebook projects an image of strength and competence to the outside world that can lend itself to such theories, but the reality is that many of our actions are slapdash and haphazard accidents,” she wrote.

The central problem is that Facebook has been charged with resolving philosophical conundrums despite being temperamentally ill-qualified and structurally unmotivated to do so. It’s not that we should do nothing about this problem. But we should be careful about demanding that companies be charged with this duty. Applying automated processes to define the limits and substance of what we see in our digital lives is not a neutral process, and rarely is it benign. Even very legitimate objectives like limiting violent content can congeal and become muddled on contact with the complexities of humanity. If nudity can be artistic, exploitative, smutty, and empowering, then the depiction of violence can likewise be about hate or about accountability. A video of a shooting can be an expression of deadly bigotry, but it can also expose police wrongdoing. Distinguishing between them requires human decision-making and the resolution of a range of contested ideas. At present, private institutions bear significant responsibility for defining the boundaries of acceptability, and they are not very good at it.

Put differently, what constitutes acceptable content is always a political question, constantly being negotiated and renegotiated by those who hold power and those who do not. Public bodies, like courts and parliaments, are often the forums for such debates, which is why they are a common focus of struggle. In the digital age, however, enormous private entities like Facebook (or Twitter, or Google, etc.) are increasingly the hosts for these discussions. When citizens and policymakers ask Facebook to curate content or design algorithms to do so, the implicit assumption is that people cannot be trusted to have these conversations themselves. Of course, some people are awful online—and this can have real-world consequences, for which we need remedies. We need cultural norms and practices that minimize this behavior, that cultivate shared understanding and mutual respect. But we ought to be careful about assuming that tech companies can achieve this if only we appoint them as cops.

Mur des Fédérés

Private companies are not institutions that are meaningfully accountable to the public, or to any sense of public duty. They are organizations structured around profit. When we collectively authorize private entities to determine what kind of content is acceptable, we entrench particular values and foreclose the potential for more expansive possibilities. These content moderators hammer together the frame of the Overton window in a shape that suits them.

The example of the Commune is tantalizing because it allows us to imagine that an alternative might be possible. Of course, the experience of life online is different from the experience of a Communard over a century ago in all sorts of important ways. But we share a common cause for concern, and there is some merit, in the moment we find ourselves in—when the appetite for reform is growing—in looking at radical and underexplored solutions. By widening the frame, we can have a discussion that is less about the technical capabilities of Facebook to stop us from seeing things that scandalize, offend, or harm us, and more about the universal question of who ought to decide the boundaries of acceptable culture and public discussion. And while there are always extreme examples that justify sanction, there are far more ubiquitous experiences of online life and culture that can be degraded unless we are vigilant about who gets to control their contours.

Perhaps, then, like the Communards, rather than blaming humans wholesale for creating online cultures that are oppressive and unhappy, we might wish to include them in attempts to make better ones. We could start with the assumption that these digital spaces are open and belong to the public. Why not require that the design of the newsfeed algorithm be made transparent? Why not allow people to redesign their content feeds and become active participants in creating their own sense of self rather than having it curated for them by a tech bro? Why not ban the microtargeting that underpins and animates this business model? A data extraction approach to monetization operates by exploiting our emotions to keep us hooked as audiences to be sold to advertisers. Given that these platforms are essential pieces of digital infrastructure, why do we accept that they remain in private hands, beholden to the bottom line?

We could pay moderators to manage groups of a particular size, and make those positions elected and accountable, much in the same way as we might pay district council members or representatives. Imagine a social space on the internet that wasn’t filled with ads! Imagine a web where content moderation decisions were governed by a public charter with an accountable board of elected representatives. Perhaps it is even possible to conjure a platform that doesn’t leave complaints about harm buried in some cyber slush pile, but that actively takes those complainants seriously and designs rules around resolving their concerns. Platforms, services, and tools could be designed not just for the average user but with the most vulnerable user in mind. Maybe you don’t like these ideas (maybe you do), but maybe there are lots of other ones out there, waiting to be articulated, discussed, adopted, tested, or discarded.

By breaking down the divide between action and consequence in online social life, we might start to “set capacities in motion” that aim to rebuild a sense of freedom with responsibility. It is an argument against outsourcing politics to machines and the few who build them, and in favor of greater public participation by the many in rulemaking in the digital age. It’s not to say it would be a seamless experience of delight; it would certainly feature conflict. But it could be a place where people could collectively explore ideas in conditions of freedom, without being organized in a clandestine way by billionaire tech overlords.

The story of Courbet’s painting illuminates how the best decisions around content moderation are those that involve public deliberation, not carefully calibrated algorithms with their implicit biases. We need social platforms to offer cultural and political experiences that are not placating or addictive, but rather are surprising, ambiguous, challenging, and educational. Maybe we need a federation of content makers that seeks to build a culture of public participation, rather than moderate our sense of possibility.

There is no reason to wait—the present is brimming with the possibility of this alternative vision already. Mainstream commentary may be occupied with the de-platforming of world leaders as the central question of content moderation. But meanwhile, people in online spaces every day grapple with what it is to exercise autonomy in the context of collective spaces, how to maintain accountable forms of authority that inspire trust, how to accommodate that which is new or different even when it is challenging. These experiments sometimes fail, but they also create capacity. When the Communards explained the establishment of the Paris Commune in March 1871, they spoke less of a specific moment and more of a preexisting reality, brought to the fore. “Paris had no government,” wrote Communard Arthur Arnould of early 1871, when the official government had fled the city. “We had nothing but anonymous power, representation by Monsieur Tout le Monde. At that moment, and this is a point on which I can’t insist too much, because it’s so important and it seems to have gone unnoticed, the Commune already in fact existed.”

Our online social lives may not always be easy, but they could be built around a sense of engagement with social responsibility. Such digital spaces might be places to boldly ask questions about what we aspire for our lives to be, rather than quietly entrenching a sense of why they are deficient. The internet could be a place in which everyone could be an artist and an organizer, one in which communal luxury becomes possible in a globalized, networked way. Our job is not to look to our Silicon Valley corporate despots to enforce this vision, but together to use those capacities that we already have to try to win it.