It feels like yesterday that Instagram populated my feed with youthful influencers and their cat-eye tutorials and crop tops. Today, Instagram sends me haggard moms tackling “wake windows” and reviewing “signs a baby is cold at night.” The algorithm did not miss; I am indeed a new mom. But what happens when Insta’s algorithm does me wrong — spreads defamatory lies about my family, exposes my children to anorexia tutorials, or worse?
In Gonzalez v. Google, which the Supreme Court heard on Tuesday, Feb. 21, the court is considering whether a website — YouTube, Facebook, Twitter, etc. — can be sued when one of its suggestions goes awry. In this case, YouTube’s algorithm permitted ISIS recruitment videos to pop up during searches and in its “Up Next” feature. The family of a woman killed during the 2015 Paris terrorist attacks argues YouTube’s actions amount to aiding and abetting terrorism; in other words, YouTube should be liable for murder. But before the family can even make that argument (which would fail), YouTube says a federal law bars the suit from being brought at all, winner or not.
YouTube is right.
The law, Section 230 of the Communications Decency Act, says that websites can’t be sued as “publishers” of information provided by users. Congress enacted it in 1996 with the goal of enabling the internet to exist. If websites could be sued every time someone uploaded arguably false, illegal, or obscene material, all but perhaps the wealthiest would slow content generation to a near halt. Goodbye TripAdvisor, Truth Social, Netflix, Rumble, Yelp, and so on.
Congress kept the plaintiffs’ lawyers at bay — here at least — and, in exchange, we’ve gotten an impressive amount of content: “Every minute, users upload more than 500 hours of video to YouTube,” according to YouTube’s brief.
The victim’s family is not trying to take down the internet, of course. The family agrees Section 230 protects YouTube for hosting bad videos, but says the law does not protect YouTube when it plays a heavier role, like its “Up Next” list or thumbnails of uploaded content. At oral argument, the family’s lawyer said YouTube is basically making an informational catalog and handing it out. That catalog, the family says, is fair game for a lawsuit under current law.
It was a confusing argument the justices could barely understand, perhaps because YouTube’s thumbnails are so far removed from aiding and abetting terrorism that the entire lawsuit stopped making sense.
But also, the family’s argument did not appear to save the internet as we know it. Major websites need to sort information and present it in ways users can digest. Even an innocuous sorting mechanism, like surfacing the most recent videos or the most popular ones, involves a choice by the website. If every algorithmic decision exposed the website to lawsuits, tech giants would face an enormous risk in allowing third-party content at all. Should the Supreme Court usher in this world?
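To make that point concrete, here is a minimal sketch in Python (hypothetical names, not anything from YouTube’s actual codebase) of the two “innocuous” orderings above. Even picking between them is a rule someone at the website wrote:

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    upload_time: float  # seconds since epoch
    views: int

def rank(videos: list[Video], mode: str = "recent") -> list[Video]:
    """Return videos in display order.

    Both 'neutral' defaults below are still editorial choices
    made by the website, not by the users who uploaded the videos.
    """
    if mode == "recent":
        # Newest first: a decision to privilege recency.
        return sorted(videos, key=lambda v: v.upload_time, reverse=True)
    if mode == "popular":
        # Most-viewed first: a decision to amplify what is already big.
        return sorted(videos, key=lambda v: v.views, reverse=True)
    raise ValueError(f"unknown mode: {mode}")
```

If sorting by recency makes the site a “publisher” of the ordered result, so does every other ordering; that is exactly the slope the justices were wary of.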
Justice Elena Kagan cautioned against it. She said she could “imagine a world where … none of this stuff gets protection.” In other words, lawsuits galore! That might be okay, she said, because “every other industry has to internalize the costs of its conduct.” But it might not be okay. These suits might end the internet. “We really don’t know about these things. You know, these are not, like, the nine greatest experts on the internet,” Justice Kagan quipped.
Perhaps a conservative would shrug and say that Big Tech’s algorithms are so biased that endless lawsuits might right the ship. After all, Google employees direct 88 percent of their political donations to Democrats, Netflix employees 98 percent, and Facebook employees 77 percent. On that view, any harm to Big Tech and its algorithms would seem to fall mostly on the Left.
But it’s unclear whether a push toward greater censorship, which is what this lawsuit requests, is the better path.
Conservatives have largely argued that Big Tech censors too much already. Twitter and Facebook blocked a highly newsworthy story about potential corruption by then-candidate Joe Biden. YouTube pulled down thousands of discussions about COVID-19, including a panel with Gov. Ron DeSantis. And recently, it removed a Project Veritas video that raises serious legal and ethical questions about Pfizer.
But the problem could well get worse. As YouTube said in its brief, if websites are pressured to “remove third-party content that might trigger litigation,” they will allow even less “political (including conservative-leaning) speech on hot-button topics.” I don’t love the threat, but it’s grounded in practical reality.
Ultimately, the Gonzalez case raises a line-drawing question. In some scenarios, an algorithm might make Big Tech an active participant in a damaging act; in others, it merely enables users to post and access content. The line will be hard to define, and the 1996 law, written before today’s algorithms, does not do the job. But that debate should take place before elected officials, not unelected justices. Fortunately, that seems to be the direction the case is headed.