To Combat Fake News, Facebook To Give 'Trust' Ratings To Users Who Identify False Reports

Once a story is flagged, Facebook's fact-checking team will verify its authenticity.

To tackle fake news, false reports and misinformation, Facebook is now planning to privately rate its users on their ‘trustworthiness’ in flagging false information and malicious news stories.

Speaking to The Washington Post about the new credibility system, Tessa Lyons, a product manager at Facebook, said:

“People often report things simply because they disagree with the premise of a story, or because they’re intentionally trying to target a particular publisher.”

But the social media giant won’t rely solely on a user’s activity to assess the authenticity of a news story. Once a story is flagged, Facebook’s fact-checking team will verify it. The company maintains that a user’s contribution in spotting fake news will not be the decisive factor in determining whether a story is authentic.

Facebook will rate each user on a scale from 0 to 1, with 0 being the least reliable. The score is based in part on how a user has responded to fake news in the past: if stories a user flagged as false are later found to be credible, the user’s rating will suffer.

Explaining how this new tool will help fight fake news, Lyons added:

“For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true.”