Welcome back to the Ethical Reckoner. We’re back after a school/life-necessitated hiatus. Today, we’ll be talking about the Facebook Oversight Board’s decision to uphold Facebook’s suspension of Donald Trump’s access to Facebook and Instagram and what it says about the broader question of who exactly Facebook answers to.
On May 5, the Facebook Oversight Board—an independent but Facebook-funded panel of experts often called the “Supreme Court” of Facebook—upheld Facebook’s decision to suspend Donald Trump from the platform due to his encouragement of the January Capitol riots. However, the Board also ruled that Facebook could not make the suspension indefinite, calling it an “arbitrary penalty” not provided for in Facebook’s rules. The ruling requires* Facebook to review the decision within the next six months and “justify a proportionate response that is consistent with the rules that are applied to other users of its platform.”
On a superficial level, it seems like the Board is here to enforce Facebook’s rules, but this ruling indicates that it is trying to hold Facebook to higher obligations. The impressively thorough ruling says that “The Board’s role is to ensure that Facebook’s rules and processes are consistent with its content policies, its values and its human rights commitments.” So, Facebook ostensibly has three masters: its content policies, its “values,” and some sort of commitment to human rights. We’ll look at each of those in turn. Then, we’ll turn to why none of this solves the real problem: Facebook isn’t actually obligated to any of these higher powers.
The Board agreed that Trump’s posts violated Facebook’s Community Standards (its “content policies”), which prohibit “expressing support or praise for groups, leaders, or individuals” involved in violent events, so Facebook was correct to remove them and bar his access. Because the situation was fluid and significant safety concerns remained in the days following the insurrection, restricting his access longer-term was also justified. However, the Board ruled that Facebook did not follow a “clear published procedure” when imposing an “indefinite” suspension and must rectify this by either imposing a time-limited suspension or permanently disabling the account. In a great zinger, the Board says that “In applying an indeterminate and standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.” The Board wants Facebook to take accountability and enforce its own rules; it will not do so for Facebook. If it complies with the ruling, Facebook will need to come up with some sort of sanction that has precedent within its rules. Of course, it could also come up with a new rule to apply retroactively. Unlike in the American government, Facebook is both the executive and the legislative branch, and it isn’t actually obligated to listen to its new judicial branch, so there is nothing preventing it from creating an ex post facto rule to apply.
Facebook’s “Values” page puts “Voice” at the very top, followed by (in bullet points underneath) “Authenticity,” “Safety,” “Privacy,” and “Dignity.” This would imply that “Voice” is the top priority, though Facebook notes that “when we limit expression we do it in service of one or more” of the other four values. In perhaps the most clear-cut aspect of its decision, the Board notes that the threat to “Safety” justified limiting Trump’s “Voice” in this case, and a minority of the Board added that “Dignity” was also at stake because, by inflaming racial tensions, Trump exacerbated inequalities and violated individual dignity.
Facebook is a signatory to the UN Guiding Principles on Businesses and Human Rights, which establish that businesses should avoid “causing or contributing to human rights harms, in part through identifying possible and actual harms and working to prevent or address them (UNGP Principles 11, 13, 15, 18). These responsibilities extend to harms caused by third parties (UNGP Principle 19).”** The “should” is crucial there, as there is no enforcement mechanism.
The main human right under consideration is freedom of expression; restricting Trump’s access to Facebook risks violating his own right to expression and “the rights of people to hear from political leaders, whether they support them or not.” International law allows for expression to be limited when rules are clear and accessible, legitimate, and “necessary and proportionate to the risk of harm.” The Board finds that under this three-part test, while the indefinite restriction is not permitted, restricting access for a period of time did not violate human rights.
So, according to Facebook’s three guiding obligations, its decision was (mostly) justified. However, I want to dig more into these obligations. The guiding value underlying all three of these seems to be freedom of expression (or Voice, if you will). It’s fairly clear how this informs Facebook’s Values and human rights obligations, and regarding Community Standards, Facebook says that “The goal of our Community Standards is to create a place for expression and give people voice.” However, this raises the question of who gets Voice.
Facebook’s commitment to Voice is not evenly distributed, which may have gotten Facebook into this mess in the first place. Facebook has been bending over backwards to avoid criticism from right-wing sources. Facebook’s fight against misinformation has been hampered by internal figures who fear that cracking down on false content will lead to backlash from conservative figures who accuse the platform of anti-conservative bias. Time and time again, the company has killed features that would fight misinformation because they would disproportionately affect those with right-wing views. For example, expanding a “correct the record” feature that would show users who had interacted with fake news an independent fact-check, currently used for COVID misinformation, was “vetoed by policy executives who feared it would disproportionately show notifications to people who shared false news from right-wing websites.” A proposal to automatically hide posts that violated Facebook’s Community Standards but were deemed “newsworthy” was shot down “due to concerns that outsiders would accuse the company of censoring certain views.” Taken in concert with Trump’s complaints about post shielding, it’s not hard to guess what those “certain views” are, especially given that a former data scientist said that they had seen “a dozen proposals to measure the objective quality of content on News Feed diluted or killed because … they have a disproportionate impact across the US political spectrum, typically harming conservative content more.”
Conservatives cry bias, but misinformation is more of a problem among right-wing users, so measures to counter it will of course impact them more. If a disease is affecting one group more than another, you don’t withhold medicine on the grounds that it’s unfair to the population that’s less sick. You put resources where they’re needed.
Of course, that’s assuming that the impacted group wants the medicine. Right-wing groups, especially American Republicans, claim that online platforms have “anti-conservative bias” or “silence conservative voices” in the hopes that by being the squeakiest wheel, they’ll get companies to stop enforcing their policies and allow content that benefits the Right to spread, even if it’s false or harmful. Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania, said, “Attacking somebody for being biased is effective if you can get them to change their behavior in a way that benefits you. There's a tactical reason to attack the platforms for bias if you increase the likelihood that they're going to let you get away with things as a result because they're trying so hard not to be biased.” By trying not to appear biased, though, Facebook is giving a huge boost to misinformation and harmful content. In doing so, Facebook is showing that its professed values are flexible whenever enforcing them would impact its image and profit.
Facebook isn’t twisting itself into knots for fun; it’s taking pains not to annoy conservatives because of the huge power the Right has over Facebook’s fortunes. The Right’s enormous base of support is also a significant chunk of Facebook’s user base. Right-leaning pages accounted for 45% of political page interactions (and 30% of total posts) on Facebook between January 1 and December 15 of 2020; left-leaning pages were just 25% of political interactions. By advantaging right-wing content to avoid criticism from its supporters, Facebook is working to keep this user base loyal to its platform and thus keep advertising dollars flowing. Beyond rhetorical threats, Republicans have also threatened to repeal Section 230, which grants platforms broad immunity for the content on their sites (even though they no longer control the White House or Congress, they may find a helpful hand in Biden, who wants to review the law). Facebook absolutely does not want this to happen, as it would then be legally responsible for illegal content on its site, which would create huge liability concerns. These tactics are working; by essentially complaining as much as possible, right-wing voices have won kid-glove treatment for their content and even their users, propagated at the highest levels of Facebook.
When Facebook was banning Alex Jones and his conspiracy site Infowars under its dangerous organisations and individuals policy, which would also force the company to remove content that expressed support for either, Facebook CEO Mark Zuckerberg “balked at removing posts that praised Jones and his ideas” and directed the team to create a new designation for Jones that wouldn’t require Facebook to remove posts related to him or Infowars. Facebook told the Board that over the years, 20 pieces of content from Trump’s Facebook or Instagram accounts had been flagged as violating the Community Standards, but “were ultimately determined to not be violations;” the “cross check” process by which it did this was not explained. These interventions give excess Voice to harmful figures at the cost of Safety and Dignity, but because Facebook’s professed values and Community Standards can change at any time, this doesn’t mean anything significant; even if the Oversight Board were called to rule on such a case, Facebook could simply rewrite its rules.
There are signs that the Oversight Board is trying to hold Facebook to a higher standard. In addition to forcing Facebook to commit to a course of action on Trump’s platform access, it asked questions about how Facebook’s algorithms impacted the visibility of Trump’s content and whether Facebook has researched or plans to research how its design decisions impacted the Capitol riots, as well as questions about the suspension of other political figures and the removal of their content. It also made recommendations*** for Facebook to improve its explanations of the rules used when sanctioning influential users, document special procedures, “address widespread confusion about how decisions relating to influential users are made” (especially relating to the “newsworthiness allowance”), and review how Facebook contributed to the tensions and “narrative of electoral fraud” that led to the Capitol insurrection, among others. These speak to its human rights obligation to identify “possible and actual harms and [work] to prevent or address them,” but it disappointingly did not take up the minority’s call to “outline some minimum criteria that reflect the Board’s assessment of Facebook's human rights responsibilities,” leaving it instead to the policy recommendations. My money would be on Facebook ignoring most, if not all, of these recommendations, as committing to clarifying its rules would give it less wiggle room to accommodate the conservative figures it depends on and fears. None of Facebook’s content policies, values, or human rights commitments are meaningfully binding, even when enforced by the Oversight Board. Ultimately, it seems that Facebook can do whatever it wants. What it wants, though, is defined by politically motivated concerns over its image and thus its bottom line.
Facebook’s master is not human rights, its rules, or its values, but profit. This is orchestrated by Mark Zuckerberg, who can apparently change Facebook’s rules at will, though he doesn’t seem to want the responsibility. No matter what he does, he gets criticised from the right and the left, so perhaps he saw the Oversight Board as a chance to punt a decision somewhere else. Unfortunately for him, it’s been punted right back. Facebook will have to decide in the next six months whether or not to allow Trump back on the platform. It will abide by this, but only because the backlash from the Right if it doesn’t and instead leaves Trump in “censorship” limbo would be enormous, not because of any of its guiding principles. It probably will not take action to prevent something like this from happening again.
The Republicans no longer have Congress or the White House, but their influence persists. They still control something extremely powerful: users, and thus clicks, and thus profit. So long as the Right remains effective at crying censorship and inflaming their base, Facebook will take whatever measures it—and especially Zuckerberg—wants to keep those users happy, beholden to no Community Guidelines, “Values,” or human rights obligations.
* These decisions are not actually binding.
** These obligations are not actually binding.
*** These recommendations are even less binding than everything else.
Image generated by DALL-E 2 with the prompt “A painting of a supreme court building in the style of David Hockney but if he was a technophile”. (With apologies to David Hockney.)
Thanks for reading.
Emmie is an MSc student at the Oxford Internet Institute. Check out the archive of past Ethical Reckoner issues here. If you haven’t already, click the big “subscribe” button to get the Ethical Reckoner biweekly(ish), and help the ER grow by spamming all your friends with the “share” button.
Any suggestions or comments? Let me know on Twitter @EmmieHine.