Earlier this month, the Supreme Court granted certiorari in two cases involving the hotly debated Section 230 of the Communications Decency Act, a law that protects online platforms and their ability to moderate content without incurring liability. In recent years, Section 230 has come under fire from Republicans decrying “Big Tech” bias and from Democrats concerned about the proliferation of misinformation online. Hopefully, the decisions in these cases will put an end to the political back-and-forth and preserve the internet ecosystem as we know it.
Let’s start with the basics: Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The speaker, not the intermediary, is responsible for their own speech.
In Gonzalez v. Google, the Court will assess whether Section 230(c)(1) “immunize[s] computer services when they make targeted recommendations of information provided by another information content provider, or only limit[s] the liability of interactive computer services when they engage in traditional editorial functions.” In other words, the justices will clarify the scope of the law in relation to recommendation algorithms, such as those used by social media companies to curate content for their users. Gonzalez v. Google stems from the series of heinous attacks ISIS perpetrated in Paris in November 2015. The plaintiffs argue that YouTube’s algorithms promoted hateful videos that enabled terrorist recruitment and radicalization. The Ninth Circuit ruled in favor of Google on the basis of Section 230 protection.
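To make that technical distinction concrete, here is a minimal, hypothetical sketch (in Python, not any platform’s actual code) of the difference between merely hosting third-party posts in reverse-chronological order and making “targeted recommendations” by re-ranking those same posts against an inferred user profile. The latter is the conduct Gonzalez asks the Court to place inside or outside Section 230’s protection.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Set

@dataclass
class Post:
    author: str       # the third-party "information content provider"
    text: str
    posted_at: datetime
    topics: Set[str]  # hypothetical topic tags attached by the platform

def chronological_feed(posts: List[Post]) -> List[Post]:
    """Plain hosting: newest third-party posts first, no judgment about relevance."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def recommended_feed(posts: List[Post], user_interests: Set[str]) -> List[Post]:
    """Targeted recommendation: re-rank the same third-party posts by how well
    they match what the platform has inferred about this user's interests."""
    def score(post: Post) -> int:
        return len(post.topics & user_interests)
    return sorted(posts, key=score, reverse=True)
```

In both functions the speech itself comes from third parties; only the ordering logic belongs to the platform, which is why the question presented turns on the recommendation step rather than on hosting as such.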
A similar question is posed in Twitter, Inc. v. Taamneh. Although this case is not explicitly about Section 230, it will determine whether social media companies can be held liable under Section 2333 of the Anti-Terrorism Act for hosting and recommending radical content. The Ninth Circuit took a different path in Taamneh: “the same panel declined to consider Section 230 at all and instead held that Twitter, Google, and Facebook could be liable for aiding and abetting an act of international terrorism.”
It’s about time the justices weighed in on Section 230. Hopefully, in their deliberations, they’ll take into account the law’s impact on the free and open internet writ large. As noted by professor and Section 230 expert Jeff Kosseff, peeling away the liability protections could have counterproductive effects:
“Increasing the liability for websites that prioritize or personalize content might lead to more harmful content, not less. We likely would see more platforms resort to purely reverse chronological feeds, or avoid technology that automatically filters harmful content or spam. It is hard to imagine search engines—or even a search function for finding content on a single site, like YouTube or Etsy—existing under that rule.”
Kosseff went on to clarify that Section 230 has never “immunized platforms from federal criminal law enforcement … including any actions that the US Justice Department could bring under terrorism laws if it believed the companies were violating them.” The legal shield does not grant the blanket, sweeping immunity that many members of Congress seem to believe it does.
Content moderation is not a black-and-white issue; it has always been complicated, an inherent balancing act between expression and harm. Such complexities mirror those of the free speech debate that has existed in this country since its founding.
No one knows which way the Court will rule, but for the sake of online discourse, let’s hope the justices protect Section 230.