As firefighters worked to contain the fire that ravaged the Notre Dame Cathedral in Paris on Monday, Twitter and YouTube struggled to take down conspiracy theories pushed by both anonymous accounts and verified white nationalists, who spread Islamophobic claims about the disaster.
French officials investigating the fire have ruled out arson and terrorism, saying the fire that led to a roof collapse may have been tied to ongoing repairs at the cathedral.
Images and videos of the flaming cathedral spread rapidly across social media and were quickly seized upon to push Islamophobic narratives that have flourished in far-right politics around the world in recent years. Those sentiments then made their way to more mainstream conservative pundits, who questioned whether the fire had been set on purpose.
Some anonymous users employed editing software to push conspiracy theories on YouTube and Twitter, and their accounts remained active on the platforms — despite repeated debunkings and user reports. One video on YouTube, viewed 40,000 times, superimposes audio of a man yelling “Allahu Akbar” (Arabic for “God is great”) over a video of the Notre Dame fire. The audio comes from a years-old video, which is the first result when a user searches for “Allahu Akbar Scream” on Google. The video has not been removed from YouTube.
The account that posted the video, which features a white supremacist cartoon for an avatar, remains active.
That hoax video was then reposted to Twitter, receiving almost 2,000 retweets and tens of thousands of views before being removed Monday night. The user’s account, however, remains active.
Sophie Bjork-James, an assistant professor of anthropology at Vanderbilt University who studies white nationalism, said the edited videos and talking points are part of a recruitment strategy, but they “also fit into their ideology that the race war is happening.”
“They’re committed to getting more white people to white nationalism, and fanning the flames of Islamophobia is helpful,” she said.
Bjork-James said the onus is on social media companies to limit the use of their platforms as recruitment megaphones. She cited prominent white nationalist Richard Spencer’s tweet Monday that the fire could “spur the White man into action—to sieze (sic) power in his countries, in Europe, in the world.”
“Individual countries can pass laws limiting hate speech, but given the global nature of white nationalism today, and how it inspires violence, a lot of the responsibility falls on social media companies,” she said. “That’s especially true when so often they’re perpetuating lies, not just racism, that portray Muslims as celebrating the burning of the cathedral.”
White nationalist YouTube agitators including Stefan Molyneux and Faith Goldy, who are both verified on Twitter, pushed conspiracy theories about Muslims related to the fire that racked up tens of thousands of retweets. Goldy was banned by Facebook earlier this month in a purge of white nationalist accounts. Molyneux, who lives in Canada and had no first-hand information about the fire, implored the public not to trust any officials regarding a cause of the fire. Goldy and Molyneux’s tweets are still visible.
Molyneux’s talking point was later pushed by more mainstream parts of the conservative media ecosystem. Talk radio host Glenn Beck claimed on his show that “if this was started by Islamists, I don’t think you’ll find out about it” and compared the fire to the 9/11 terrorist attacks.
Twitter also declined to immediately take down some misinformation related to the fire, including a fake CNN account that claimed the fire was an act of terrorism, according to a CNN spokesperson. The account was later suspended.
While social media companies have taken a more active role in moderating their platforms, most have only recently attempted to crack down on certain types of hate speech, such as white nationalism and Islamophobia.
Twitter and YouTube did not reply to requests for comment by press time.
Twitter released a report Tuesday detailing its efforts to remove hate speech. The company said that its automated system took down 38 percent of all abusive content from the platform before it reached public view. The company will soon update its rules “so they’re shorter, simpler and easier to understand,” it wrote in a blog post.