December 8, 2021

It’s as predictable as the ocean tides: When Twitter or some other social media giant announces a new rule intended to stamp out abuse of its platform, the very people who specialize in that abuse promptly figure out how to game the rules in a way that targets the people who expose their abusiveness both on social media and in the real world. It happened to me back in 2019.

True to form, Twitter recently announced a new “private information policy” allowing someone whose photo or video was tweeted without their consent to have it taken down upon request. Within hours, white nationalists and neo-Nazis were openly strategizing how to use the rule to have accounts by researchers and antifascists suspended and their posts removed—and then easily succeeded in doing so.

“Beginning today, we will not allow the sharing of private media, such as images or videos of private individuals without their consent,” Twitter Safety announced Tuesday. “Publishing people's private info is also prohibited under the policy, as is threatening or incentivizing others to do so.”

The policy is being enforced retroactively, so a number of the accounts targeted by the flood of “coordinated and malicious” reports from the far right were suspended over posts published in previous years. Several of the suspensions involved content that Twitter had already clarified would not violate the new rule. After being alerted to the problem, Twitter restored some of those accounts, though not all of them.

“It’s going to be emboldening to the fascists,” said antifascist researcher Gwen Snyder, whose account was suspended on Thursday for a 2019 tweet showing photos of Philadelphia mayoral candidate Billy Ciancaglini consorting with Proud Boys at a public event.

Twitter restored Snyder’s account after The Washington Post inquired about her suspension. A Twitter spokesperson told reporter Drew Harwell that an internal review had shown that the tweet did not violate the policy, and that “our teams took enforcement action in error.”

However, the fate of many of the other suspended accounts is less clear. According to the antifascist account @WhiteRoseAFA, “There are dozens of other researchers who were hit in the same 48-hour window, yet to my knowledge not a single other account had their penalties lifted.”

Twitter’s announcement spurred an immediate enthusiastic reaction in far-right circles, both on Twitter and other platforms, notably the encrypted chat platform Telegram. That was where noted white supremacist Tony Hovater, a cofounder of the neo-Nazi Traditionalist Worker Party, went to work.

“Due to the new privacy policy at Twitter, things now unexpectedly work more in our favor as we can take down Antifa … doxing pages more easily,” Hovater wrote to his Telegram followers. “Anyone with a Twitter account should be reporting doxxing posts from the following accounts to deplatform.”

Hovater provided a list of nearly 50 Twitter accounts for his followers to target. Several of them have subsequently been suspended. After researcher Kristofer Goldsmith shared it on Twitter as a warning, Hovater replied: “Yeah and we’ll do it again.”

The organized campaign spread to other platforms, notably the white nationalist-friendly site Gab. Far-right activists boasted there about Twitter accounts they had successfully reported; one of them claimed he had filed more than 50 reports, adding, “It’s time to stay on the offensive.”

Others have taken to Twitter to organize. The owner of one far-right account (subsequently suspended) claimed to have reported dozens of antifascist accounts and admonished his readers: “[Right-wing] Twitter, it is time. I told you yesterday and you had reservations. No more excuses. We have work to do.”

The work they did was immediately effective. In addition to Snyder’s account, a roster of antifascist accounts was summarily hit with suspensions and warnings.

Some of these accounts, such as Redoubt Antifascists, successfully appealed their suspensions. But many others, such as @SkySpider_ and @WANaziWatch, were only able to restore access to their accounts after the offending post was removed. As @WhiteRoseAFA commented: “Accounts like @afainatl, @ExposeDezNat, @MIAagainstFash, @sirtou2, @RuthlessWe, myself, and many others suddenly were pushed to the brink of being permanently suspended and that hasn't changed.”

When announcing the rule, Twitter claimed it would help “curb the misuse of media to harass, intimidate and reveal the identities of private individuals, which disproportionately impacts women, activists, dissidents, and members of minority communities.”

The company also clarified that the rule would not apply to photos that added “value to public discourse” or were of people involved in a large-scale protest, crisis situation or other “newsworthy event due to public interest value.”

Twitter also said that it “will take into consideration whether the image is publicly available and/or is being covered by journalists,” and that “images/videos that show people participating in public events (like large scale protests, sporting events, etc.) would generally not violate this policy.” Yet a number of the subsequent suspensions involved content that fell well within those categories.

The company has tried to backtrack: “We became aware of a significant amount of coordinated and malicious reports, and unfortunately, our enforcement teams made several errors,” Twitter spokeswoman Siobhan Murphy told The Verge. “We’ve corrected those errors and are undergoing an internal review to make certain that this policy is used as intended — to curb the misuse of media to harass or intimidate private individuals.”

Yet, as The Daily Beast’s Kelly Weill reports, the suspensions of antifascist researchers have continued as new groups line up to file false reports. As Weill notes, in some instances the suspended accounts didn’t even originally publish the pictures in question. The Miami Against Fascism account was hit with a brief suspension after it quote-tweeted a Miami journalist’s photo of Proud Boys leader Enrique Tarrio outside a school board meeting.

“I'm appalled that Twitter is continuing to enforce a policy that is vague and obviously flawed,” Snyder said. “Nazis are using this ‘privacy’ policy to silence principled attempts to hold them accountable for violence and stochastic terror, and it's incredibly disturbing that Twitter is knowingly continuing to enable that activity.”

This is a consistent pattern with Twitter, which studies have demonstrated is, like other social media giants, perfectly capable of monitoring and eliminating hate speech from its platform: It just doesn’t want to.

As the Southern Poverty Law Center’s Michael Edison Hayden observed in his devastating July report detailing Twitter’s cozy treatment of far-right extremists:

Twitter does not enforce these rules with any discernible consistency. Dorsey and his staff have in fact enabled some repeat offenders, who post at a high volume on the site and have built up big followings to spread hate and disinformation. Many of these disinformation superspreaders have never faced any meaningful consequences for violating Twitter’s terms of service.

Twitter has also shown little indication that it seeks to limit the proliferation of hate or disinformation on its platform in any systemic way. On the contrary, the Twitter business model appears to hinge on instilling feelings of resentment in people and, to at least some degree, exacerbating mental illness and anxiety. Extremists who terrorize other users and exploit the site to sow chaos keep the billion-dollar corporation’s business model humming.

“Even if the algorithm were removed, Twitter is filled with horrible content,” Megan Squire, a professor of computer science at Elon University and a senior fellow with the Southern Poverty Law Center, told Hatewatch. “Even if they removed the algorithm, the problem is what Twitter allows on its site. Inventing algorithms to promote that content is adding fuel on the fire.”

Longtime antifascist organizer Daryle Lamont Jenkins told Vice that it might be time for researchers in the field to shift away from Twitter and focus on more traditional websites and blogs—as Atlanta Antifascists and WANaziWatch have already begun doing.

"There used to be a time when we all had our own website,” said Jenkins. “We all had our own blogs. We need to go back to that if we're so worried."
