Facebook will begin banning posts, photos and other content that reference white nationalism and white separatism, revising its rules in response to criticism that a loophole had allowed racism to thrive on its platform.
Previously, Facebook had prohibited users only from sharing messages that glorified white supremacy – a rhetorical distinction that, in the eyes of civil-rights advocates, amounted to a loophole. They argued that white nationalism, supremacy and separatism are indistinguishable and that the policy undermined the tech giant’s stepped-up efforts to combat hate speech online.
Facebook now agrees with that analysis, according to people who’ve been briefed on the decision. The new policy also applies to Instagram.
The rise and spread of white nationalism on Facebook were thrown into sharp relief in the wake of the deadly neo-Nazi rally in Charlottesville, Virginia, in 2017, when self-avowed white nationalists used the social networking site as an organizing tool.
The following year, Motherboard, a tech publication owned by Vice, obtained internal documents meant for training and guiding content reviewers that revealed Facebook treated the terms differently: The materials showed that Facebook permitted “praise, support and representation” of both white nationalism and white separatism “as an ideology.” The policy drew sharp rebukes from civil-rights advocates, who have argued for years that the terms are interchangeable.
Facebook’s decision comes one week after the company made another announcement addressing longstanding complaints from civil rights advocates: It prohibited advertisers from excluding minorities and other protected groups from ads for housing, employment and credit.
Civil rights groups applauded the move. “There is no defensible distinction that can be drawn between white supremacy, white nationalism or white separatism in society today,” Kristen Clarke, the president and executive director of the Lawyers’ Committee for Civil Rights Under Law, said on Wednesday.
The organization had pushed Facebook for months to change its policies, pointing to pages such as “It’s okay to be white,” which has more than 18,000 followers and has regularly defended white nationalism. Another, called “American White History Month 2,” often posted white supremacist memes, according to the Lawyers’ Committee. A cached version of the page from late February showed it had more than 258,000 followers before it went offline.
Facebook’s new policy comes as the company continues to struggle to take down other content that attacks people on the basis of their race, ethnicity, national origin and a host of other “protected characteristics.” Between Jan. 1 and Sept. 30, 2018, Facebook took action against eight million pieces of content that violated its rules on hate speech, according to its latest transparency report. Facebook is not legally required to remove this content, but its rules prohibit it.
To help enforce its policies, Facebook has developed and deployed artificial intelligence tools that can spot and remove content even before users see it. But the technology isn’t perfect, particularly when it comes to hate speech. The company removes only about 50 percent of such posts at the moment users upload them, it said last year. As a result, harmful, extremist content still can go viral on Facebook – a reality the company confronted earlier this month when users continued to upload videos of the mass shooting in New Zealand that left 50 people dead. The shooter specifically sought to target Muslims.
Even so, civil-rights groups said Facebook still had considerable work to do to address the spread of hate speech on its platform.
“As we have seen with tragic attacks on houses of worship in Charleston, Pittsburgh, New Zealand, and elsewhere, there are real world consequences when social media networks provide platforms for violent white supremacists, allowing them to incubate, organize, and recruit new followers to their hateful movements,” Clarke said.
(c) 2019, The Washington Post · Tony Romm, Elizabeth Dwoskin