In a recent interview on the “Joe Rogan Experience,” Meta CEO Mark Zuckerberg admitted that, in retrospect, Facebook did not handle the Hunter Biden laptop controversy as well as it should have, to say the very least. Shortly before the 2020 election, the New York Post published an article about the president’s son, claiming that his abandoned laptop contained evidence of corruption and unethical foreign business dealings. When asked by Rogan if he regretted suppressing the New York Post story, Zuckerberg responded:
“Yeah, it sucks. Because it turned out… the fact-checkers looked into it and no one was able to say it was false. So it had this period where it was getting less distribution… it sucks in the same way [as having] to go through a criminal trial but being proven innocent in the end.” Nevertheless, he called the process “pretty reasonable” while noting his company “obviously [doesn’t] want to have situations like that.”
During the October 2020 incident, Twitter's actions garnered the most scrutiny because the company prevented the New York Post article from circulating on its site. At the time, Twitter reasoned that the story violated its "hacked materials" policy and subsequently blocked users from sharing the URL. Then-CEO Jack Dorsey later called the response a "total mistake." On the podcast, Zuckerberg contrasted the two companies' actions, a comparison that ultimately amounted to a distinction without a difference:
“Our protocol is different from Twitter’s. What Twitter did is that they said you can’t share this at all. We didn’t do that… for five or seven days when it was being determined whether it was false, the distribution on Facebook was decreased but people were still allowed to share it… the ranking in News Feed was a little bit less.” He acknowledged that “fewer people saw it than would have otherwise” and that these limitations were “meaningful.”
The admissions from Dorsey and Zuckerberg are noteworthy, in part because they reveal the critical role that social media plays in our society. False positives and poor judgment can have real and tangible consequences. In a Rasmussen Reports poll conducted this year, 48% of respondents said it was unlikely Joe Biden would have been elected president if the media had "fully reported the story about Hunter Biden's laptop before the 2020 election." Other surveys indicate that some Biden voters would have switched their vote had they been aware of the story.
By delegitimizing a critical political story, one with direct implications for then-candidate Joe Biden, Twitter and Facebook impeded the public’s ability to make informed decisions based on all available and truthful information. However, it is important to note that the companies were well within their authority to make these decisions, even though it was—as the tech CEOs have acknowledged—the wrong call.
Their comments highlight the difficulty and tension that exists in content moderation. The COVID-19 pandemic and 2020 election prompted social media companies to take unprecedented steps to combat the proliferation of misinformation. While their actions were legally permissible, the ramifications for political discourse cannot be ignored.
Content moderation exists in both legal and social capacities, so the way we evaluate these practices can and should differ depending on the context. In this case, there is little dispute, even among the social media giants themselves, that they erred in their judgment and halted the spread of legitimate discussion.
As Zuckerberg lamented, false positives “suck.” The next hyperpolitical and polarizing story is just around the corner—let’s hope for a better outcome.