February 21, 2023

Section 230 Requires a Balanced Approach that Protects Civil Rights and Free Expression

Dariely Rodriguez, Deputy Chief Counsel at the Lawyers' Committee for Civil Rights Under Law
David Brody, Managing Attorney of the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law



This column is the first in an ACS special series examining the First Amendment's guarantee of freedom of speech and its role in the development and evaluation of public policies regarding the regulation of the internet, campaign finance, antidiscrimination laws, and other areas.

On Tuesday, February 21, the Supreme Court heard oral arguments in Gonzalez v. Google, a case asking whether online platforms receive legal immunity when they make targeted content recommendations. In 1996, Congress passed Section 230 of the Communications Decency Act, providing online platforms with limited immunity from liability for publishing third-party content. Gonzalez marks the first time the Supreme Court has weighed in on Section 230.

The Lawyers’ Committee for Civil Rights Under Law, joined by five civil rights organizations, filed an amicus brief highlighting for the Court the ways in which censorship and discrimination manifest online, and urging the Court to adopt a balanced interpretation of Section 230 that does not impede enforcement of critical civil rights laws or the free expression of diverse voices. As amici, we recognize that the internet is particularly important for people of color and others historically excluded from the halls of power because it enables circumvention of traditional economic and political gatekeepers. Section 230 allows people of color, women, LGBTQ communities, people with disabilities, and religious minorities to leverage online platforms for self-expression, community building, civic engagement, and entrepreneurship. Modern civil rights movements, perhaps most notably the Movement for Black Lives, have benefited from reduced gatekeeping on social media.

Our brief cautioned that limiting Section 230 immunity could lead to greater censorship of diverse voices online. Without Section 230, online platforms would minimize the risk of liability for illegal content by engaging in heavy-handed, cost-effective censorship instead of carefully reviewing every piece of content. Current content moderation systems already disproportionately silence Black people and other historically marginalized populations, even when their posts do not violate platform rules. Increased censorship by platforms seeking to evade liability would further silence diverse perspectives on important issues like racial and gender justice.

We also noted that as we conduct more and more of our daily lives online, automated decision-making systems risk reproducing discrimination at scale. Our brief discussed how providing overbroad immunity to platforms would jeopardize enforcement of decades-old anti-discrimination laws like the Fair Housing Act, the Civil Rights Act, and the Voting Rights Act. Specifically, we argued that Section 230 does not immunize defendants for civil rights violations or other illegal conduct that does not involve publishing; and even if the conduct involves publishing, Section 230 does not shield it if the defendant “materially contributed” to the illegality. This two-step framework follows the consensus of the lower courts, where there is no circuit split, and helps ensure that online platforms are held accountable for civil rights violations.

Step One: No Immunity for Non-Publishing Activity

Offline, a publisher (such as a newspaper or book publisher) is legally responsible for any content they publish—including content from third parties such as letters to the editor. Online, however, Congress decided in Section 230 that websites and other entities that publish third-party content online—such as tweets or YouTube videos—generally should not be liable if their users post unlawful content.

But Section 230 immunity is not absolute. Internet companies are not always immune when they use third-party information. Rather, the critical question for immunity is whether the claim brought against the internet company treats it as the publisher of third-party content. Section 230 states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Thus, it is not sufficient for immunity that a defendant is an online publisher in some aspects of its business; the particular claim must seek to treat it as the publisher of the third-party content at issue.

The “publisher” question matters because many civil rights violations that occur through the internet have nothing to do with publishing. There is now extensive documentation of discriminatory decision-making that is conducted or enabled by tech companies, including through discriminatory algorithms. Mortgage approval algorithms disproportionately deny applicants of color compared to similarly situated white applicants; Black applicants are up to 80% more likely to be denied. Mortgage lenders, often using algorithmic pricing, have charged Black and Latino borrowers higher rates than similarly situated white borrowers, costing them $765 million in additional interest annually. Tenant screening algorithms frequently produce flawed reports and disproportionately result in denials of housing to applicants of color. Algorithms used for hiring can “reproduce patterns of inequity at all stages of the hiring process.” Facial recognition technologies have been consistently found to produce inaccurate matches on the basis of sex and race. Sweeping surveillance and data collection also raise significant civil rights concerns. This type of conduct is not publishing information to an audience; it is processing data to make a decision or for some other purpose.

Existing case law supports this position. Lower courts have rejected Section 230 defenses where the unlawful conduct did not derive from publication. The Ninth Circuit put it most clearly in Barnes v. Yahoo: “[W]hat matters is whether the cause of action inherently requires the court to treat the defendant as the ‘publisher or speaker’ of content provided by another.” More recently, in Henderson v. Source for Public Data, the Fourth Circuit held that Section 230 “does not provide blanket protection from claims . . . just because they depend in some way on publishing information.” In Henderson, claims against a background check company for failure to provide Fair Credit Reporting Act disclosures did not treat the company as a publisher because the claims did not hinge on the impropriety of the background checks themselves. Similarly, Section 230 did not bar a product liability claim against Snapchat for encouraging dangerous behavior through its app’s design because the claim was not based on Snapchat’s publishing activity.

Section 230 must not undercut bedrock anti-discrimination protections given the real risks posed by algorithmic technology online. Just because a platform is operating online and using data generated by third parties does not automatically mean Section 230 immunizes its activity. If a claim seeks to hold a platform liable for engaging in discrimination, and publishing is not a core element of the claim, Section 230 does not apply.

Step Two: No Immunity for Material Contributions to Illegality

Critically, Section 230 only protects against liability for content produced by others, not content produced by the defendant themselves. The second prong of the Section 230 analysis acknowledges that even if a platform is engaged in publishing, it can still lose immunity if it “materially contributes” to the illegality. Material contribution entails “being responsible for what makes the displayed content allegedly unlawful,” as the Sixth Circuit put it.

The material contribution test is essential for the protection of civil rights. Complex algorithms are not passive conduits that treat all content or users equally. Datasets often explicitly reflect race, sex, disability, and other protected characteristics, or these characteristics creep in via proxies. Data drawn from a society with a history of systemic inequity reflects that inequity. In turn, algorithms that draw on this data threaten to replicate and perpetuate inequity. Facially neutral tools can still produce discriminatory outcomes.

Publishers can make material contributions by transforming benign third-party content into something illegal. For example, Meta recently settled a Fair Housing Act lawsuit brought by the Department of Justice that alleged that Facebook’s advertising algorithm disproportionately targeted housing ads toward white users and away from Black users, even when advertisers never selected such targeting. Similarly, in the Roommates.com case, the Ninth Circuit denied Section 230 immunity when a rental housing website required users to disclose protected characteristics and roommate preferences in order to steer users towards people who matched their discriminatory criteria. By inducing discriminatory content, a publisher “becomes much more than a passive transmitter of information provided by others.” As noted in Henderson, Section 230 provides no shelter for publishers who “cross the line into substantively altering the content at issue in ways that make it unlawful.”

A platform can also materially contribute to illegality when it affirmatively furthers a third party’s illegal conduct. In National Coalition on Black Civic Participation v. Wohl, the defendants sent tens of thousands of voter intimidation robocalls and were sued for violating the Voting Rights Act and the KKK Act. The court held that their robocall service provider was not entitled to Section 230 immunity because it allegedly helped the defendants target the robocalls to Black neighborhoods.

The material contribution test asks the right question: Is the publisher responsible for what makes the displayed content allegedly unlawful? In the civil rights context, this means that platforms should be considered “responsible” when they take benign content and transform it into discriminatory conduct, or when they take illegal conduct and affirmatively help exacerbate the harm.

*    *    *

The internet’s promise of equality and free expression remains unfulfilled. The law demands a more nuanced interpretation of Section 230, one that brings us closer to an equitable internet protecting both civil rights and free expression. Accountability for civil rights violations by online platforms is critical to achieving equality in a data-driven economy. The Supreme Court must strike the right balance.

_______________

Dariely Rodriguez is Deputy Chief Counsel at the Lawyers' Committee for Civil Rights Under Law

David Brody is Managing Attorney of the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law