Tweeting vs. Rioting


The explosive video of Minneapolis police officer Derek Chauvin, a white man, with his knee firmly planted for nearly nine minutes on the neck of George Floyd, a local black resident, brings to public attention two forms of immunity from liability.

The first is a police officer’s broad qualified immunity. Floyd, who was detained on suspicion of passing a counterfeit $20 bill, became unresponsive and died shortly thereafter. Several days later, Officer Chauvin was charged with murder on the correct ground that he lost his qualified immunity from prosecution because his actions so manifestly violated established norms of police behavior. That charging decision was met with universal approbation across the political spectrum, but it was preceded by widespread acts of violence in Minneapolis and around the nation that brought massive destruction to the property of innocent residents, violence that only intensified even after the prosecution was announced.

There are many urgent and cogent calls today to reduce the burdens needed to overcome the qualified immunity for police officers, calls that are long overdue. But just as police officers must be held accountable for the damage they cause, so too must the rioters who have opportunistically used Floyd’s killing to inflict further harm on innocent bystanders. The First Amendment’s right of the people “peaceably to assemble” provides no immunity to such acts of violence.

The subsequent rioting indirectly opened up a second front in the immunity wars: the absolute immunity given to social network platforms like Facebook and Twitter under Section 230 of the Communications Decency Act (CDA) for statements made by their users. Section 230(c)(1), the core of the statutory scheme, provides quite simply that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The basic scheme is simplicity itself: internet platforms are home to billions of messages each day, some fraction of which are likely to offend some legal or social norm. It is neither possible nor desirable to ask the platform operator to examine this mass of posted content. Nor is it wise to burden these platforms with a proactive obligation to intercede whenever the publisher or speaker of that information could face various forms of civil or criminal liability.

That notwithstanding, the CDA protects so-called “Good Samaritans” who choose to affirmatively block such material. Section 230(c)(2) exempts platform companies from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”  This provision is more difficult to justify than section 230(c)(1) because, at this point, the platform has deliberately injected itself into a given situation and thus cannot claim that the huge press of business makes it impossible to monitor all possible abuses of its site.

Two qualifications should immediately be noted. First, the immunity under section 230(c)(2) is not absolute, but only applies to cases in which the platform operator acts “in good faith”—a notoriously slippery and otherwise undefined term in the CDA. Second, the last, catch-all term, “otherwise objectionable,” in the series of covered activities introduces a vagueness and subjectivity that could widely expand the scope of this exceptionable immunity.

All of these ambiguities surged to the fore when President Trump made no bones about what he thought of the violent protestors who burned and looted in the wake of Floyd’s death. His pointed tweet stated: “These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!”

The pointed phrase “when the looting starts, the shooting starts” has an unhappy history. It was used by a Miami police chief in 1967 to express his “get-tough” attitude toward crime, a year before race riots would break out in the city. Any attempt to attribute the subsequent riots to the chief’s phrase, however, commits the post hoc, ergo propter hoc fallacy, as there were many other potential causes of the nationwide violence during the summer of 1968. Today, it would be sheer fantasy to claim that Trump’s remarks were the cause of the recent violence, which was self-sustaining and long preceded his tweet. Nonetheless, Twitter announced that the Trump tweet had violated its rules against “glorifying violence,” and he was placed into internet purgatory. The offending tweet remained on the site in order to advance the “public interest,” but it was accessible only after reading a warning about its dangerous content. Further, no comments could be made on the tweet, as if they, too, would shatter Twitter’s artificially induced calm.

No one should praise Trump for his blunt use of the phrase “when the looting starts, the shooting starts.” But those words were uttered to end violence, not to glorify it. Readers are free to attribute a more sinister motive to Trump, but they can make up their own minds without the remark being placed behind a translucent wall to reduce its impact, a move that has ironically made it more cited than it otherwise would have been.

At this point, Twitter’s good faith protection under section 230(c)(2) of the CDA is lost because of its selective intervention. Holding Twitter liable in damages seems far-fetched, but Trump could insist that Twitter remove the warnings around his looting tweet, and refrain from using a similar tactic against his other tweets, mindless or not, going forward. Indeed, Twitter would be well-advised to follow the Facebook line of staying out of these roiled political waters altogether. There are cases where counter-speech is ineffective to address offensive or libelous speech. But not on this occasion, where the torrent of criticism may force Trump to pay a high price.

Trump has not remained inactive. Last week he issued an executive order, titled “Preventing Online Censorship,” to address the problem systematically. The executive order should be considered in the context of a broader debate surrounding the regulation of network platforms. Facebook and Twitter are not government actors, so they have no constitutional duty to avoid viewpoint discrimination. But their large market share has led to constant efforts to subject them to regulation against viewpoint discrimination as if they were common carriers, without any real evidence that such changes would be beneficial.

Twitter’s foolish response to Trump’s tweet strengthens the hand of those who want more regulation, given that, as the executive order notes, such service providers become content creators who engage in what it terms “selective censorship” that no longer deserves CDA protection. And it is a fair question why Trump’s tweet was flagged when others, including, for example, those of Representative Adam Schiff or pro-Chinese firms amplifying Chinese propaganda, were not similarly treated.

The upshot is relatively timid. Trump wants to read section 230(c) narrowly by putting a bit more beef behind its “good faith” provisions. The executive order hits the mark in pointing out: “Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike.”

There is, sadly, no good legislative fix for Twitter’s self-inflicted wounds. On the one hand, legislation that helps control bias is likely to expose decent firms to the very liabilities that section 230(c) immunity was designed to guard against. On the other hand, legislation that gives a wide berth to responsible firms is likely to prove toothless in dealing with various forms of internet abuse. So in the end, only some combination of insistent social pressure and responsible self-governance can work. That task is not made easier by Trump’s gratuitously inflammatory posts, which themselves seem immune to any social pressure.

Indeed, the most important task is to put these tweetstorms in perspective, for the far greater threat to social stability has nothing to do with byzantine questions of internet immunity. As Kyle Hooten reports in the Wall Street Journal, Minneapolis police have taken a “hands-off” approach as looters demolish locally owned, largely black businesses serving some of the poorest parts of the city. It defies comprehension how a city pledged to equal rights under the law can pass on its most fundamental obligation: to supply equal protection of the laws to all its citizens.

Every citizen of the United States, not just Donald Trump, should be aghast at this total, de facto immunity. We have come a long way since Southern police stood aside as gangs of white thugs terrorized black citizens. We must not ourselves regress so that “protestors” can opportunistically use the frustration created by Floyd’s killing to further harm distressed communities. The inappropriate tweets of the President are troubling, but they are at the moment the least of our problems.

© 2020 by the Board of Trustees of Leland Stanford Junior University.


There are 3 comments.

  1. DonG (skeptic) (@DonG):

    The problem for Twitter is that erosion of immunity leads to lawsuits, and as they say, “discovery is a b!tch.” It isn’t just the censorship; there is also shadow banning, the weird unfollowing that conservatives experience, and all the tuning behind the algos.
  2. James Gawron (@JamesGawron):

    Richard Epstein: At this point, Twitter’s good faith protection under section 230(c)(2) of the CDA is lost because of its selective intervention. Holding Twitter liable in damages seems far-fetched, but Trump could insist that Twitter remove the warnings around his looting tweet, and refrain from using a similar tactic against his other tweets, mindless or not, going forward. Indeed, Twitter would be well-advised to follow the Facebook line of staying out of these roiled political waters altogether. There are cases where counter-speech is ineffective to address offensive or libelous speech. But not on this occasion, where the torrent of criticism may force Trump to pay a high price.

    Richard,

    Jack the Twitter Ripper had better watch his act. Trump has patience and a long memory. If, down the road, a large legal rock is dropped on Jack, I told ya so.

    Regards,

    Jim
  3. Gazpacho Grande' (@ChrisCampion):

    I’m not sure how immunity stands up when the provider is advocating for some points of view: allowing Antifa to have a Twitter account, one that posts direct public threats, on the one hand, and editing the president’s tweets on the other. They’re now not just content providers but editors, and selective ones at that.