January 3, 2022

Liability for Amplification of Disinformation: A Law of Unintended Consequences?

Jane E. Kirtley, Silha Professor of Media Ethics and Law, University of Minnesota



This blog is part of ACS’s Blog Symposium remembering and analyzing the January 6, 2021 attack on the Capitol.

The New Year 2022 had barely begun when Twitter announced that it had permanently suspended the personal account of Rep. Marjorie Taylor Greene (R-Ga.), claiming that she had violated its policies against spreading falsehoods about Covid-19.

This wasn’t the congresswoman’s first brush with Twitter’s misinformation policy. Her account was suspended several times for shorter periods during the summer of 2020, also for false claims about the coronavirus and the vaccines that target it.

But a tweet on January 1, 2022, alleging "extremely high amounts of Covid vaccine deaths" was apparently the last straw for the social media company. Its spokesperson, Katie Rosborough, said that Greene's repeated violations added up to five "strikes," which under Twitter's system merit permanent expulsion from the platform.

Greene responded – using the messaging app Telegram – that Twitter "is an enemy to America and can't handle the truth." Predictably, her suspension added fodder to the claim that Greene was the latest political conservative to be "censored" by social media, following in the footsteps of former President Donald Trump, who was banned from several social media sites, including Twitter, in the wake of the January 6, 2021 attack on the U.S. Capitol. Twitter had specifically cited two of Trump's tweets as violating its policy against glorification of violence: one stating that Trump was not going to attend President-elect Biden's inauguration, and the other praising the "75,000,000 great American Patriots who voted for" him. Both statements, Twitter said, conveyed the message that the election was illegitimate, and that Trump would support and protect those who sought to overturn it by force.

In response to the bans on Trump and other conservatives, two states – Florida and Texas – enacted statutes in 2021 prohibiting social media companies from banning users or removing controversial posts, and discouraging them from engaging in content moderation, particularly moderation based on the viewpoint expressed. The Texas law would also require companies to publish detailed moderation reports explaining how decisions were made.

Federal judges in both states enjoined the statutes, ruling that the First Amendment and Section 230 of the Communications Decency Act protected the platforms’ exercise of editorial discretion, with Judge Robert Pitman finding that the moderation reporting requirements in the Texas law would be “inordinately burdensome” and discourage any moderation, even of harmful content.

Both rulings are the latest in a string of cases recognizing that the constitutional protections enjoyed by the mainstream or traditional media extend to social media platforms as well. Under ordinary circumstances, First Amendment scholars would applaud these decisions.

But some have not. Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, together with Institute attorney Scott Wilkens, wrote in the New York Times that the companies’ arguments that they should be immune from the consequences of moderation decisions are “deeply misconceived” and even dangerous. They contend that taken to their logical conclusion, these arguments would foreclose even “carefully drawn laws” that would compel greater transparency in the companies’ use of algorithms or other standards to decide which accounts are suspended, undermining what they call “the integrity of the digital public sphere.”

In a report issued in November by Aspen Digital, a project of the non-partisan Aspen Institute, its Commission on Information Disorder (whose members include Jaffer as well as journalist Katie Couric and Prince Harry, among others) also recommended that social media platforms be required to disclose information about their content moderation policies and practices, and to share their data with “authorized researchers.” And Alan Rozenshtein, an associate professor of law at the University of Minnesota, observed on Lawfare that “government restrictions on content moderation may well be necessary to ensure the health of the digital commons.”

That scholars and think tanks who have long championed First Amendment freedoms of speech and of the press are now advocating government regulation of social media as the way to promote democratic ideals is disconcerting, to say the least. But it is emblematic of a broader concern about fundamental threats to democracy itself. The spread or amplification of misinformation or disinformation through social media is regarded as particularly problematic. After the Capitol attack in January 2021 and the revelations in October by Facebook whistleblower Frances Haugen, lawmakers on both sides of the aisle pounced on Big Tech, threatening to revamp Section 230 to create liability for using algorithms to amplify misinformation or hate speech.

The term “amplification” is an interesting one in this context. Different bills define it in different ways. Some are aimed at the general promotion, or high ranking, of specific kinds of content, such as health misinformation or terrorist tracts, based in part on what will generate advertising revenues or “clicks.” Others focus on the use of targeted algorithms that rank material based on personal data, such as peppering a particular user with numerous posts that reinforce erroneous beliefs about the legitimacy of the 2020 election.

But the goal of all these bills is essentially the same: to curb the virtually unlimited immunity social media companies enjoy when they act as conduits for user-generated content and exercise their own right to either amplify or remove it. The idea is that private companies cannot be trusted as the gatekeepers of information. The supporters of these initiatives argue that the tech companies have had ample opportunity to engage in responsible self-regulation, and have failed to do so, to the detriment of society and, ultimately, to the truth.

In the face of the very real possibility of a repeat of the January 6 insurrection, it is tempting to clamp down on the social media that are viewed as co-conspirators in the spread of disinformation. But as the scholarship of Professor Robert Pape at the University of Chicago has documented, of the 21 million people in the United States who believe that President Biden was not legitimately elected and that the use of force to restore Donald Trump to office is justified, only ten percent get their news from social media like Gab or Telegram. Instead, 42 percent rely on Fox News or other "mainstream conservative media," and the next largest proportion consume media such as CNN or NPR. This suggests that a significant component of the amplification of disinformation occurs when the conventional media repeat it. Should those news media be subject to government regulation as well?

Some argue that they should be, and in some countries, they already are. Anti-"fake news" laws in countries like Malaysia, Venezuela, and Kenya have been used to suppress opposition media. In late December, Stand News, a pro-democracy online news outlet based in Hong Kong, shut down after its facilities were raided by police and members of its staff were arrested on charges of "inciting hatred" against the government.

But so far, in mature democracies, most regulatory efforts are targeted at social media companies, which many in government fear because of their ubiquity and lack of accountability. In the United Kingdom, for example, a pending bill would create potential criminal liability for social media companies that fail to protect users by removing "harmful algorithms." The Conservative MP who chairs the parliamentary committee on the draft online safety bill refers to the social media landscape as "the land of the lawless." In his view, it is past time to impose responsibility upon them.

There is no question that the spread of disinformation threatens the core values of democracy. It is tempting to threaten those who disseminate it with legal consequences. But however well-intentioned these legislative initiatives may be, there are dangers inherent in the government deciding what constitutes harmful content.

Certain narrowly defined categories of speech do not receive First Amendment protection: true threats and incitement to violence are two of them. However, falsehoods – unless they also harm a reputation – are protected.

Nevertheless, there are creative ways to tackle disinformation under existing law. One approach is through libel suits, such as those brought by election equipment manufacturers and election workers against media outlets that falsely claimed manipulation of the 2020 election, thereby defaming them.

These lawsuits may, it is hoped, get to the truth. But in any event, they will proceed with the robust protection of New York Times v. Sullivan and its progeny intact, preserving the right to speak about controversial subjects in good faith without fear of retribution.

The core teaching of Sullivan is essential: we should not leave it to government to be the arbiter of truth. In civil society, we want to presume that government is acting in good faith. But history teaches us that legislation intended to preserve and protect truth could become an instrument to suppress it.

Rather than provide bad actors with such weapons, the best remedy for false speech is more speech. Or as Justice Anthony Kennedy wrote, “The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth.”

First Amendment, Free Speech, National Security and Civil Liberties, Technology Law and Intellectual Property