After a long wait, Facebook has rolled out a new tool to flag fake or what it calls “disputed” news. Will it work, or will it just be used as a censoring tool for unpopular views?

Facebook rolled out this new tool over the weekend. It allows users in the U.S. to slap a warning label on posts that seem to have no factual basis. It identifies links to websites known to produce misinformation and cites third-party fact-checking websites such as Snopes and PolitiFact for evidence.

Not every Facebook user has access to this tool yet. Those who do can flag an article as fake by clicking the upper right-hand corner of a post and selecting the disputed tag. Articles flagged by users are routed to a fact-checking organization, and if an article is found to be false, it is labeled “disputed” along with a link to an article explaining why. Facebook’s algorithms are also on the lookout for fake articles.

USA Today reports an example of a “disputed” news piece:

Among the disputed offenders that people spotted on Facebook: A fictionalized story, "Trump's Android Device Believed To Be Source of Recent White House Leaks," from a fictional publication, "The Seattle Tribune." The story carried a disputed label with links to fact-checking services that explained why it was not true.

The website has a disclaimer that it is a "news and entertainment satire web publication." But the story fooled people anyway.

Facebook is intentionally putting the onus on its users to be the arbiters of news trustworthiness. It doesn’t want to exercise editorial judgment over the content that influences people around the world. And rightly so; Facebook is a platform, not a magazine.

The idea of “fake news” originated with the left but was popularized by President Trump, who applies the label to news outlets he thinks have been unfriendly or unfair in their coverage of him. Either way, the 2016 election cycle was the catalyst for the phenomenon. As a writer explains in Forbes:

Former President Barack Obama also acknowledged that the spread of fake news on Facebook became a major problem during Hillary Clinton's campaign trail. “The way campaigns have unfolded, we just start accepting crazy stuff as normal. And people, if they just repeat attacks enough and outright lies over and over again, as long as it's on Facebook and people can see it, as long as it's on social media, people start believing it. And it creates this dust cloud of nonsense,” said Obama during a Hillary for America rally in Ann Arbor last year.

The impact of acting on misinformation came to a head when a man wielding a gun visited Comet Pizza in D.C., believing a bizarre report that claimed Hillary Clinton and her aides were running a human trafficking ring.

Facebook has not always treated the fake news phenomenon as a pressing problem. After last year’s election, CEO Mark Zuckerberg called the idea that fake news on Facebook influenced the results a “pretty crazy idea.” But about a month later, he backtracked, saying that since Facebook is a platform for public discourse, it has a responsibility.

There’s no disputing the rise of content on social media that uses salacious headlines to grab your attention and prompt you to click (a.k.a. clickbait). However, attributing Hillary Clinton’s loss or President Trump’s victory to fake news is a narrative progressives have been pushing and won’t let go of. We have yet to see definitive evidence.

We do know that Americans lean heavily on social media for their news. Last year, Pew Research found that 62 percent of U.S. adults get news from social media, with Facebook the top social media news source. And nearly one in four Americans say they have shared a fake story.

Part of the problem is that the media ignore their own bias in reporting the news, and that bias makes alternative sources of information appealing.

A disputed icon on a “news” piece in your Facebook feed is not a perfect solution, but it is a logical approach and far better than removing the piece or letting Facebook editors make the call. Indeed, Facebook came under fire when it was exposed that its own curators filtered the trending news section to exclude conservative or right-leaning pieces. To retain some semblance of non-partisanship, Facebook is trying to remove itself from passing judgment.

I would rather retain my right to decipher the news than have Facebook attempt to do it for me by letting its staff decide what is fake.

In addition, as bloggers and writers, we don’t always hold the most popular opinions. The risk of our content being tagged as fake by someone who simply disagrees is a threat to the free flow of ideas that makes democracy work.

In the Christian Science Monitor, one professor who specializes in the psychology of social media makes a good point about the gray area in online news:

"The lines between editorial content, bias, half-truths, and blatant lies can be so blurred, I'm not sure that stopping the proliferation of fake news should even be the aim," she writes. "The best-case scenario is that a system like this will raise people’s level of awareness about the presence of misinformation in their news diet."

Not everyone will be pleased that Facebook isn’t doing more, but I’d prefer neutrality from the company. Let users exercise their own judgment. That ensures a content producer isn’t dismissed just because her opinion or prose is unpopular.