Facebook announced in March that it would ban white nationalist and white supremacist content on its social media platforms, a departure The New York Times describes as “[bowing] to longstanding demands from civil rights groups who said the tech giant was failing to confront the powerful reach of white extremism.”
Twitter did not follow suit, despite being a platform that white supremacists use “with relative impunity,” as a 2016 study from George Washington University’s Program on Extremism found.
Experts who study online extremism, including the authors of the 2016 George Washington University study, have observed Twitter’s success with using artificial intelligence to suspend ISIS-linked accounts—approximately 360,000 by 2016—and its unwillingness to use the same methods to combat white nationalism. A new story from Motherboard by Joseph Cox and Jason Koebler suggests that Twitter fears those algorithms risk catching Republican politicians.
The question of why Twitter won’t use the same methods to target white nationalists as it does for ISIS members came up in an all-hands meeting on March 22, Motherboard reports. After a staff member, who remains anonymous, asked the question, an executive answered that Twitter simply follows the law.
Another employee, who works on machine learning, explained the trade-offs involved with any content filter: non-extremist accounts are sometimes swept up in attempts to ban ISIS ones. But, as Motherboard paraphrases the machine learning employee, “Society, in general, accepts the benefit of banning ISIS for inconveniencing some others.”
Apparently, that calculus doesn’t extend to inconveniencing Republicans. Cox and Koebler continue:
In separate discussions verified by Motherboard, that employee said Twitter hasn’t taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.
Cox and Koebler emphasized that this isn’t an official Twitter position. A spokesperson from Twitter told Motherboard that their source’s comments are not an “accurate characterization of our policies or enforcement—on any level.”
Twitter has been under fire from civil rights activists for its unwillingness to ban many white nationalists. David Duke, the former Grand Wizard of the Ku Klux Klan, retains a Twitter account. Shortly after the mass shootings at two mosques in Christchurch, New Zealand, Slate found that President Trump has amplified multiple white supremacist accounts, liking and retweeting their content. In March, Motherboard found that even when web hosting platforms have shut down neo-Nazi sites, they still found a home on Twitter, despite the site’s rules against “abuse and hateful conduct.”
Twitter did not comment on why it is able to police ISIS more effectively than white supremacists on the platform. Outside experts suggested to Motherboard that the public may be more willing to accept that innocent accounts might be banned in a broader attempt to get Islamic extremists off the site, but that sweeping up innocent accounts in a crackdown on white supremacy would more likely spark a backlash.
A 2018 VOX-Pol report was blunt about the difficulties of battling white supremacy versus ISIS online: “The task of crafting a response to the alt-right is considerably more complex and fraught with landmines, largely as a result of the movement’s inherently political nature and its proximity to political power.”
This article first appeared on Truthdig