Facebook has a fake news problem. Its most recent trending news misstep promoted misinformation about the Las Vegas shooting from the anonymous messageboard 4chan. To combat the spread of fake news (and the growing backlash against the company), the social network is testing a new feature that lets users judge for themselves whether a linked article comes from a trustworthy publisher or a more suspect source, via a "more information" button.
Image credit: Mark Court/Getty
The company's new context information button adds relevant details to any linked article shared in your news feed, such as information about the publisher (like its Wikipedia page), additional articles on the topic, and data on how the article is being shared across Facebook. If an article lacks such context -- no Wikipedia page for the publisher, say, or no related stories from other outlets -- Facebook will alert users that this information is missing. According to Facebook, the tool is there to help users "make an informed decision about which stories to read, share, and trust". Fake news spread by Facebook itself has been an increasingly problematic issue since the company disbanded its human-powered trending news team.
However, Facebook's previous effort at identifying fake news for users -- a "Disputed" tag on the article in question -- did little to change the habits of younger Facebook users. We reached out to Facebook to ask how the new tool would be more effective than its predecessor, but had not heard back at the time of writing.