Twitter said today it has shut down more than 235,000 accounts for promoting terrorism since February, far surpassing the 125,000 it had suspended in the previous seven months.
That brings the total number of such suspensions to 360,000 since June 2015.
Since February, “the world has witnessed a further wave of deadly, abhorrent terror attacks across the globe,” the firm said in a blog post. Attacks linked to or inspired by the Islamic State have occurred in France, Germany, Turkey, Iraq and Florida, among other locations. “We strongly condemn these acts and remain committed to eliminating the promotion of violence or terrorism on our platform,” the company said.
For the past two years, the popular social media site has sought to respond to criticism that it wasn’t doing enough to crack down on users who promote or are linked to the Islamic State and other terrorist groups. It began mass suspensions in early 2014.
Last December, after the San Bernardino, California, mass shootings, President Barack Obama called on tech leaders to “make it harder for terrorists to use technology to escape from justice.” Earlier this year, senior national security officials, including the attorney general and FBI director, met with tech firm senior executives to discuss ways to use technology to “disrupt paths to radicalization to violence.”
In its blog post, Twitter reported that daily suspensions are up more than 80 percent since last year, with spikes in suspensions immediately following terrorist attacks.
“Our response time for suspending reported accounts, the amount of time these accounts are on Twitter, and the number of followers they accumulate have all decreased dramatically,” it said. “We have also made progress in disrupting the ability of those suspended to immediately return to the platform.”
The company said it has expanded the teams that review reports around the clock, adding new tools to help detect suspicious accounts and hiring people fluent in different languages. The firm said it also collaborates with other social media companies, sharing information for identifying terrorist content.
“There is no one ‘magic algorithm’ for identifying terrorist content on the Internet,” the company said. But it deploys technologies such as proprietary spam-fighting tools to supplement reports from the public and help identify people who violate Twitter’s user policies. During the past six months, these tools have helped the firm automatically identify more than one-third of the accounts that were ultimately suspended for promoting terrorism, the company said.
Twitter said it works with law enforcement agencies seeking help in preventing or prosecuting terrorist attacks. It reports on government requests for information twice a year in its transparency report.
(c) 2016, The Washington Post · Ellen Nakashima