Facebook Will Start Telling You When A Story May Be Fake


Facebook has struggled for months over whether it should crack down on false stories and hoaxes being spread on its site. Now, it has finally come to a decision.

The social network is going to partner with the Poynter International Fact-Checking Network, which includes groups such as Snopes, to evaluate articles flagged by Facebook users.

If those articles don’t pass the fact-checkers’ smell test, Facebook will attach that evaluation as a label whenever they are posted or shared, along with a link to the organization that debunked the story.

“We have a responsibility to reduce the spread of fake news on our platform,” said Facebook’s Adam Mosseri, vice president of product development, in an interview with The Washington Post. Mosseri added that Facebook still wants to be a place where people with all kinds of opinions can express themselves. And Facebook has no interest in being the arbiter of what’s true and what isn’t for its billion users, he said.

The new system will work like this: If there’s a story out there that is patently false — saying that a celebrity is dead when they aren’t, for example — then users will see a notice saying that the story has been disputed or debunked. People who try to share stories that have been found false will also see an alert before they post. Flagged stories will also appear lower in the News Feed than unflagged stories.

Users will also be able to report potentially false stories to Facebook, or message the person posting a questionable article directly.

The company is focusing, for now, on what Mosseri called the “bottom of the barrel”: websites purposely set up to deceive and spread fake news, as well as those impersonating other news organizations.

“We are not looking to flag legitimate organizations,” Mosseri said. “We’re looking for pages posing as legitimate organizations.” Articles from legitimate sites that are controversial or even wrong shouldn’t get flagged, he said.

There is no blacklist of sites whose stories will automatically be sent to fact-checkers, Mosseri said. But Facebook has built a sort of data profile of characteristics fake news articles share — such as low share numbers after the headline is clicked — which it will use to decide when to have something fact-checked.

The company will also prioritize checking stories that are getting lots of flags from users and are being shared widely, to go after the biggest targets possible.

If someone wants to appeal a label, they can direct that complaint to the fact-checking organization that made the call on whether an article was true or not.

Facebook is also trying to crack down on people who’ve made a business out of fake news by tweaking its advertising practices. Any article that’s been disputed, for example, cannot be used in an ad. Facebook is also experimenting with ways to limit links from publishers whose landing pages are mostly ads — a common tactic of fake news websites.

With those measures in place, “we’re hoping financially motivated spammers might move away from fake news,” Mosseri said.

All of these efforts, Mosseri said, are works in progress. Users will start seeing them Thursday, but Facebook is testing different options to see what works best. This round of efforts is only the first of many Facebook will have to make to stay ahead of fake news sites.

“We don’t think it will get us all the way there,” he said. “I expect it to be something we need to invest in on an ongoing basis.”

(c) 2016, The Washington Post · Hayley Tsukayama

{Matzav.com}
