Facebook Wants To Use Artificial Intelligence To Block Terrorists Online

To combat the spread of terror-related content online, Facebook announced Thursday that it will bolster its automated and human-powered efforts to flag and take down extremist posts, and will develop data-sharing systems across its family of social media and messaging apps.

“Making Facebook a hostile place for terrorists is really important to us,” said Monika Bickert, Facebook’s head of global policy management.

Facebook announced earlier this year that it would expand its community operations team by 3,000 people, increasing the number of reviewers who handle flagged posts on the social network, including instances of bullying, hate speech and terrorism. One hundred and fifty Facebook employees count counterterrorism as their primary responsibility, the company said.

Facebook also will deploy artificial intelligence to weed out extremist content. Through image matching, AI can keep flagged images and videos from being uploaded again. The company is also building an algorithm that aims to analyze written text to keep terrorism-related language off the platform. But Facebook acknowledged that human expertise is key to its new measures. “AI allows us to remove the black-and-white cases very, very quickly,” said Brian Fishman, the lead policy manager for counterterrorism at Facebook. But he added that human experts are better at analyzing the context of a post and at grappling with the evolving methods used to bypass Facebook’s counterterrorism measures.
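Facebook did not disclose how its image matching works. The general idea it describes, blocking re-uploads of media that reviewers have already removed, can be pictured as a fingerprint lookup. The sketch below is a minimal illustration under that assumption, not Facebook’s implementation; every name in it is hypothetical, and a production system would use a perceptual hash that survives resizing and re-encoding rather than the exact cryptographic digest used here for simplicity.

```python
import hashlib

# Hypothetical store of fingerprints for media reviewers have already removed.
known_hashes: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Return a digest identifying this exact file."""
    return hashlib.sha256(media_bytes).hexdigest()

def record_removal(media_bytes: bytes) -> None:
    """Called after human reviewers confirm the content violates policy."""
    known_hashes.add(fingerprint(media_bytes))

def allow_upload(media_bytes: bytes) -> bool:
    """Block any upload whose fingerprint matches previously removed content."""
    return fingerprint(media_bytes) not in known_hashes

# Once an image is removed, a byte-identical re-upload is rejected.
record_removal(b"...bytes of a removed image...")
print(allow_upload(b"...bytes of a removed image..."))  # False
print(allow_upload(b"...bytes of some new image..."))   # True
```

This also shows why, as Fishman notes, humans remain essential: a fingerprint match only catches content that reviewers have already judged, while novel posts and borderline context still require human analysis.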

Facebook is also developing systems to block terrorists’ accounts across its flagship social network and its sister apps, Instagram and WhatsApp. The company declined to say what types of customer data will be shared between its apps, but said that the cross-platform systems being developed for counterterrorism purposes are separate from its commercial data sharing.

In recent years, Facebook has been criticized for not doing enough to combat propaganda and extremist content online. After the terrorist attack in London this month, British Prime Minister Theresa May attacked Web companies for providing a “safe space” for people with violent ideologies. Under pressure from governments around the world, the tech industry has responded to this type of criticism before. Facebook, Twitter, Google and Microsoft said they would begin sharing unique digital fingerprints of flagged images and videos to keep them from resurfacing on different online platforms.
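That industry arrangement can be pictured the same way: each participating company contributes fingerprints of content it has removed to a common database, which every member can consult at upload time. The sketch below is a hypothetical illustration of that exchange, not the companies’ actual interface; all names are invented for the example.

```python
class SharedHashDatabase:
    """Hypothetical cross-industry store of fingerprints of removed media."""

    def __init__(self) -> None:
        # Maps each fingerprint to the set of companies that reported it.
        self._reports: dict[str, set[str]] = {}

    def contribute(self, company: str, digest: str) -> None:
        """A member company reports the fingerprint of content it removed."""
        self._reports.setdefault(digest, set()).add(company)

    def is_known(self, digest: str) -> bool:
        """Any member checks an upload's fingerprint before accepting it."""
        return digest in self._reports

# One platform's report lets every other platform block the same file.
shared = SharedHashDatabase()
shared.contribute("PlatformA", "3a7bd3e2360a3d29")  # a reported fingerprint
print(shared.is_known("3a7bd3e2360a3d29"))           # True for all members
```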

In a separate post Thursday morning, Facebook said it will be seeking public feedback and sharing its own thinking on thorny issues, including the definition of fake news, the removal of controversial content, and what to do with a person’s online identity when they die.

(c) 2017, The Washington Post · Hamza Shaban

{Matzav}

