White Supremacists Thrive, Organize And Recruit On Facebook
Members and supporters of the National Socialist Movement, one of the largest neo-Nazi groups in the US, hold a rally in downtown Newnan on April 21, 2018 in Newnan, Georgia. Credit: Spencer Platt/Getty Images
May 27, 2020

If it feels as though white supremacists are coming out of the woodwork everywhere these days, particularly on social media, you’re not alone—and it’s not a figment of your imagination. On Facebook in particular, they are thriving, organizing, and recruiting, and that activity shows no sign of slowing down anytime soon.

A report this week from the Tech Transparency Project (TTP), examining how many designated hate groups maintain a presence on Facebook, turned up some disturbing results. Foremost among them: more than half of the 221 organizations identified by the Southern Poverty Law Center (SPLC) and the Anti-Defamation League (ADL) as hate groups have a continuing and active presence on Facebook, with even previously banned outfits managing to worm their way back onto the platform.

The trend, as the report’s authors explain, is acutely worrying in the midst of a pandemic that is enabling or requiring millions of people to spend most of their time online—which is where nearly all white supremacist organizing and recruitment now occurs.

“With millions of people now quarantining at home and vulnerable to ideologies that seek to exploit people’s fears and resentments about COVID-19, Facebook’s failure to remove white supremacist groups could give these organizations fertile new ground to attract followers,” the report observes.

TTP’s researchers found that 113 of those 221 organizations were on Facebook, with a collective presence spanning 153 Facebook pages and four Facebook groups. Thirty-four of the organizations had two or more such pages.

Some of these pages have been active for more than a decade. A number of them were created by Facebook itself, auto-generated as business pages when users listed one of the groups as their employer in their Facebook accounts.

The report also found:

  • Facebook’s “Related Pages” feature often directed users visiting white supremacist pages to other extremist or far-right content, raising concerns that the platform is contributing to radicalization.
  • One of Facebook’s strategies for combatting extremism—redirecting users who search for terms associated with white supremacy or hate groups to the page for “Life After Hate,” an organization that promotes tolerance—only worked in 14 of the 221 searches (6%) for white supremacist organizations.
  • In addition to the hate groups designated by SPLC and ADL, TTP found white supremacist organizations that Facebook had explicitly banned in the past. One known as “Right Wing Death Squad” had at least three pages on Facebook, all created prior to Facebook’s ban.

After TTP issued its report on Thursday, Facebook apparently began removing a number of the pages the report had identified as belonging to white supremacist groups. Following queries from HuffPost reporter Christopher Mathias, TTP researchers said they noticed that pages for 55 of the white supremacist groups named in their report had been taken down.

“We are making progress keeping this activity off our platform and are reviewing content in this report,” a Facebook spokesperson said in a statement to HuffPost, adding that the company has “banned over 250 white supremacist organizations and removed 4.7 million pieces of content tied to organized hate globally in the first quarter of 2020, over 96% of which we found before someone reported it.”

The Facebook spokesperson added that the company has a team of 350 people devoted to rooting out violent extremists under its “Dangerous Individuals and Organizations” policy, which bans hate and terror groups.

However, the issue the TTP report raises has more to do with what constitutes Facebook’s standards for defining “hate and terror” groups, and how Facebook prioritizes its enforcement. Neither is clear, and there are indications that its underlying standards are badly skewed relative to the realities of terrorism and extremism.

Facebook’s response so far is reminiscent of its approach to dealing with the spread of cultish QAnon conspiracism on its platform: namely, to kick off a few high-profile accounts, apply its standards less than rigorously elsewhere, and act as though the problem has been dispatched and the site’s policies and algorithms have resolved all its previous and ongoing failings.

Facebook’s algorithms are the keys to its profitability, and the company guards them zealously. The result is a remarkably laissez-faire approach to violent and hateful content that is now the subject of international concern, after numerous violent acts, such as the March 2019 terror attacks in Christchurch, New Zealand, were planned, announced, and even livestreamed on its platform. Facebook was, after all, one of the signatories to the Christchurch Call, an international initiative (ignored by the United States) to focus efforts on rooting out violent extremism and its spread on social media.

“Facebook's algorithms are set up to keep users on their platform for the maximum amount of time possible, therefore allowing the company to profit off of additional ad views,” said Daniel E. Stevens, executive director of the Campaign for Accountability, TTP’s parent organization. “Facebook has figured out that the best way to keep users on the site is to know what content might interest them and serve up similar content that will keep users coming back.

“Even though Facebook may not be making money directly from each ‘related-pages’ click, they still make money from other, non-extremist ads when their users spend more time on the site in general. Facebook could address many of these content issues by using additional human moderators instead of algorithms, but it is more profitable for them to rely on software they’ve already developed instead of increasing their payroll.”

All of the major social media platforms—including Twitter, YouTube, and Google—build their revenue-generating operations around “engagement,” which means getting their users to remain on their platforms for as long as possible. As multiple studies have shown, this not only creates perverse incentives to retain violent and hateful content, but also leads these sites’ algorithms to create feedback loops that deepen and accelerate the online radicalization of a broad range of extremists.

“Facebook cannot be trusted to regulate its own platform,” Stevens added. “It was only after our report was made public that Facebook removed dozens of these pages, a clear example of the company being reactive rather than proactive. Facebook cannot be left to its own devices.”

Published with permission of Daily Kos

