Treating social media platforms as state actors will only weaken the resiliency of the First Amendment.

Last week, three former Twitter executives testified before the House Oversight Committee, recalling the missteps that led to the suppression of the Hunter Biden laptop story. In their mea culpa, the employees refused to acknowledge any government interference, arguing that the social media company acted in a manner fully consistent with the First Amendment.

Mark Zuckerberg has said otherwise: In September, the Facebook founder admitted that his platform blocked the Hunter Biden article at the FBI’s prodding. The “Twitter Files” have highlighted even more egregious instances of public officials attempting to remove content. Yet, members of Congress condemning Big Tech as mere subsidiaries of federal agencies are oversimplifying this free speech dilemma in ways that can exacerbate government overreach.

That’s not to say that companies such as Twitter, Meta and Google are innocent parties. The FBI can send a constant stream of emails. The Department of Homeland Security can pre-flag as many posts as it chooses. Ultimately, the platform decides whether to act on these requests. The legal regime that governs online speech is sympathetic to intermediaries that host content but do not create it. Section 230 of the Communications Decency Act of 1996 has cultivated an ecosystem in which companies can moderate user content (or leave it up) without incurring civil liability.

This controversial statute, often viewed as a legal loophole for Big Tech, is at risk of being repealed entirely. Conservatives and liberals alike are eager to dismantle Section 230 through legislation, and the Supreme Court will hear a case later this month about the scope of the law with respect to recommendation algorithms. Justice Clarence Thomas has previously expressed interest in assessing Section 230 and the powers it grants in the context of the First Amendment. Evidence of substantial government influence only further complicates these questions.

Yale Law School professor Jed Rubenfeld has written about the complex relationship between technology companies, the government and Section 230. According to Rubenfeld, public officials use Big Tech as a “back door” to target disfavored but otherwise legal and constitutionally protected speech. He argues that when government pressure is coupled with immunity granted by the statute, the result is state action.

In other words, if a platform accedes to a bureaucrat’s request, then it should also be subjected to First Amendment limitations. In a lawsuit against Twitter, former President Trump’s attorneys put forth a similar argument, noting in a recent brief that “powerful governmental actors are well aware of this vulnerability [afforded by Section 230] and have exploited it by getting social media platforms to do for them what the government cannot do directly.”

But this state actor theory misconstrues the impact of intermediary protection. In the absence of Section 230, the threat presented by government requests becomes even greater. Recall when the White House press secretary publicly singled out the “Disinformation Dozen,” 12 citizens allegedly responsible for the majority of anti-vaccine misinformation on social media. The Biden administration could not restrict these posts on its own, since the First Amendment forbids encroachment on protected speech. More important, the social media companies were under no obligation to respond.

Imagine a world in which there is no Section 230. In this situation, the legal landscape surrounding content moderation is far more litigious, and companies heavily scrutinize user content because they are liable for it. Platforms are more inclined to yield to government requests since inaction would unleash a barrage of civil lawsuits. Consequently, pressure from the White House has greater coercive strength, giving officials more leeway to shape public discourse on social media. Government censorship by proxy is far more likely.

While there are certainly situations where a content moderation decision can rise to the level of quasi-state action, there must be a “sufficiently close nexus between the State and the challenged action.” Section 230 is not this link but, rather, a countervailing force that weakens the power of governmental bullying.