Trump Is Banned. Who Is Next?
Tech giants must not treat their crackdown on the president’s social accounts as an edge case. The social web should be different now.
JANUARY 9, 2021
It happened slowly, and then all at once. After years of sparring, the internet’s most powerful moderators deplatformed their most famous troll: the president of the United States. Facebook has blocked Donald Trump’s account indefinitely. So have Snapchat, Twitch, and Shopify; even one of the Trump campaign’s email providers has cut it off. At the time of writing, Trump still has his YouTube channel, but the company says it is accelerating its enforcement action. It was a Friday Night Massacre of platform bans.
But one ban outstrips all others in its symbolism: @realDonaldTrump has been suspended from Twitter, the platform that has defined this president more than any other.
The story of the past week in content moderation can be told in two ways. The first is the formalistic myth that platforms want us to believe. In this telling, platforms have policies and principles they hew to, and their decisions are neutral, carefully considered applications of those rules to the facts. The second is the realist take, in which the posts and tweets of platform executives and spokespeople can be seen as fig leaves, trying to hide that these were, at bottom, arbitrary and suddenly convenient decisions made possible by a changed political landscape and new business imperatives.
The attempts to dress up their actions as part of a coherent and deliberate decision-making structure were trying to mask an uncomfortable truth about our most important speech forums: Platforms can and will do what they like. This week a small handful of extremely powerful tech executives slowly tiptoed toward the edge, egging one another on, being pushed by commentators and employees, until they agreed to hold hands and jump. This was a display of awesome power, not an acknowledgment of culpability. These were more editorial and business decisions taken under fire than the neutral application of prior guidelines. A tiny group of people in Silicon Valley are defining modern discourse, ostensibly establishing a Twilight Zone where the rules are something between democratic governance and journalism, but they’re doing it on the fly in ways that suit them.
In two weeks, Trump will be out of power, but platforms won’t be. They should be forced to live up to the sentiments in their fig-leaf rationales.
Platforms tied themselves in knots this week trying to tell the first story and make their actions seem consistent with the idea that they were simply making a neutral call based on their existing policies. In one sense, they’re right to say their actions were consistent with the rules they’ve always had. A good argument can be made—indeed, I have made it in The Atlantic—that democracy requires voters to know who their candidates really are and what they believe, even (or, perhaps, especially) when those beliefs are abhorrent. So platforms have long treated the president differently from other users on the grounds that what he says is inherently newsworthy and in the public interest for people to know about even if it violates their rules.
But every platform left a “break glass” escape in the case of incitement to violence. Speech always has to be evaluated in context, and the context this week could not be ignored. The president incited insurrection at the U.S. Capitol, resulting in at least five deaths, and it’s possible more violence is yet to come. Trump’s use of Facebook to condone, rather than condemn, the riots led Mark Zuckerberg to believe the risks of continuing to allow the president to post were too high. The decision wasn’t clear-cut, but “on balance,” another Facebook executive said, the president was deemed to be contributing to, rather than diminishing, the threat of ongoing violence. The company engaged in a careful and principled balancing test, considering all relevant factors.
Facebook’s decision backed Twitter into a corner. Twitter had originally locked Trump’s account for 12 hours, even as calls for the company to ban him entirely grew louder. When the president got his handle back, he shot off some fairly anodyne tweets celebrating his supporters and announcing that he wouldn’t be attending Joe Biden’s inauguration, posts that seemed entirely within the typical Trump genre. Not so, said Twitter in a long and detailed blog post announcing the account’s permanent suspension. The context of these tweets and “specifically how they are being received and interpreted on and off Twitter” meant that they amounted to a glorification of violence. Twitter’s rules have penumbras, apparently, and Trump’s dog whistles to his followers now fall within them.
What gave observers whiplash watching platforms’ actions was that the president’s posts this week were arguably no worse than stuff he’s posted before. The conspiratorial thinking of many in his base means that his comments have long been interpreted in dangerous ways, including, of course, in the lead-up to the deadly riots on Wednesday. Trump’s accounts have survived posting that “when the looting starts, the shooting starts” during the summer’s Black Lives Matter protests. He has repeatedly amplified accounts supporting QAnon, a conspiracy theory that the FBI has labeled a domestic-terror threat. He’s compared the size of his nuclear button with North Korea’s, and still @realDonaldTrump has continued to grace our screens.
Meanwhile, the Taliban’s official spokesperson still has a Twitter account. As does India’s Prime Minister Narendra Modi, even as his government cracks down on dissent and oversees nationalistic violence. The Philippines’ President Rodrigo Duterte’s Facebook account is alive and well, despite his having weaponized the platform against journalists and in his “war on drugs.” The list could go on. Facebook says it has taken action against other world leaders before Trump, but it hasn’t given details.
Against this background, platforms’ explanations seem flimsy and contorted, unsatisfactory answers to the questions “Why him?” and “Why now?”
So we’re left with the realist take on this week’s content moderation. The posts and tweets of platform executives and spokespeople did not mention that, in effect, American voters deplatformed the president in November. None acknowledged that Trump is leaving office in less than two weeks, or that the change in the makeup of the Senate means Democrats, not Republicans, will set the agenda for regulating the platforms going forward. They did not speak to the increased public pressure, including from those who had long supported their decision to maintain Trump’s accounts, or the extent to which their own employees were agitating.
Platforms are probably hoping that this week gets marked down as a weird blip in their history and Trump is once again just dismissed as a special case. But these bans can’t and shouldn’t be dismissed that way. They were the most overt display of these platforms’ power yet, as well as an implicit acknowledgment that platforms not only facilitate self-government, but play a role as a check and balance within it. It’s not unprincipled to look beyond your platform and take context into account when evaluating whether a post should stay up; it’s necessary. Pointing at the failure of other institutions, including Congress or the media, is no excuse; those institutions need a reckoning too, but that can’t absolve platforms of their own responsibility. It’s not enough to say that the leader of the free world will always have a platform; if he wants to incite violence, let him find that platform elsewhere.
Banning Trump should not be seen as the end of an era. It needs to be the start of a new one, in which platforms are held to the myth of principled content moderation they have been telling. This will not be the last time a leader uses these platforms to incite violence; our tech overlords should prove, then, that it wasn’t just political expediency and tech-bro one-upmanship that made them act this week. In doing so, they can develop the coherent and consistent decision-making structure they insist already exists, and tie themselves to more principled masts.
EVELYN DOUEK, a doctoral student at Harvard Law School, is an affiliate at Harvard’s Berkman Klein Center for Internet and Society.