Back in 2017, Facebook introduced a “Disputed by Fact-Checkers” tag. The rationale was to provide an unobtrusive but clear warning that the factual accuracy of a shared piece of content was questionable. It proved ineffective. Facebook’s co-founder and CEO Mark Zuckerberg has recently back-pedalled on a claim he made in a Georgetown University speech: “I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100 percent true”.

Now, Facebook has reversed course. In a recent press conference (the complete transcript of which can be found here), Zuckerberg revealed the following:

“We’re announcing a few improvements to our services today. The first is we’re going to show much more prominent labels on content that independent fact-checkers have marked as false. 

We already show labels today, but the new labels will increase transparency of the fact check and ensure that anyone who comes across fact-checked content will see that it has been fact-checked and marked false before tapping through to see the content.

We’re also introducing clearer labelling for fact-checked content on Instagram too. The second thing we’re doing is we’re going to label content coming from state-sponsored media. In the U.S., we have the benefit of a free press here, and because of that, we think it’s especially important to call out transparently when media coming from any country around the world is acting as an organ of the government and not a free press. So we’re going to label them prominently.”

In essence, if Facebook receives signals that a piece of content is false, it will reduce the content’s distribution pending review by a third-party fact-checker. If the content is found to be false, users who encounter it will see the following notification:

[Image: Facebook and Instagram’s fake news warnings]

Facebook also intends to implement the following additional initiatives:

  • Combating inauthentic behaviour, with clarification to come on how they enforce against the “spectrum of deceptive practices” on their platform
  • Labelling state-controlled media on their Facebook Page
  • Banning paid ads suggesting users don’t vote
  • Displaying clearly on pages what country the page is operated from and the legal name of the person or organisation that is operating the page

These initiatives follow the most controversial years in Facebook’s history, during which the platform has inadvertently promoted increasingly polarised political discourse as a side-effect of its updates.

Legitimate publishers need not be concerned: these initiatives are being introduced with the specific purpose of targeting and nixing malicious content that threatens the democratic process. You may wonder whether the ability to flag stories for fact-checking is open to abuse by users who are either careless or consciously attempting to suppress points of view out of line with their own. In theory, this looks like a risk to honest, opinionated publishers. But Facebook has the tools and algorithmic power to distinguish the truly dubious articles from opinionated but innocent journalism, so this needn’t be a worry.

All this effort could be read as an attempt by Facebook to rekindle favour with publishers: a tech giant cleaning up its act as it tries to court publishers with integrity once again. Its News tab offers further evidence of this.