Far right is using Twitter’s new rule against anti-extremism researchers

bnew (Veteran)
https://www.washingtonpost.com/technology/2021/12/02/twitter-media-rule-used-by-extremists/

Researchers fear the new ban on posts sharing people’s private information will be ‘emboldening to the fascists’ eager to keep their identities concealed. ‘Things now unexpectedly work more in our favor,’ one Nazi sympathizer wrote.

Clashes at the Unite the Right rally in Charlottesville in 2017. (Evelyn Hockstein)
By Drew Harwell
December 2, 2021

Neo-Nazis and far-right activists are coaching followers on how to use a new Twitter rule to persuade the social media platform to remove photos of them posted by anti-extremism researchers and journalists who specialize in identifying episodes of real-world hate.

Advocates said they worry the new policy will suppress efforts to document the activities of the far right and will prove to be a gift to members of hateful movements eager to keep their identities concealed.

“It’s going to be emboldening to the fascists,” said Gwen Snyder, an anti-fascist researcher and organizer in Philadelphia.

Snyder’s Twitter account was suspended early Thursday after someone reported a 2019 tweet of hers showing photos of a local mayoral candidate attending a public rally alongside the extremist group the Proud Boys. After The Washington Post asked about the suspension, Twitter spokesperson Trenton Kennedy said the tweet was not in violation and that “our teams took enforcement action in error.”


On Tuesday, Twitter said its new “private information policy” would allow someone whose photo or video was tweeted without their consent to request the company take it down.

Twitter said the rule would help “curb the misuse of media to harass, intimidate and reveal the identities of private individuals, which disproportionately impacts women, activists, dissidents, and members of minority communities.”


The rule, company officials said Tuesday, would not apply to photos that added “value to public discourse” or were of people involved in a large-scale protest, crisis situation or other “newsworthy event due to public interest value.”

In the days since, however, white supremacists on channels such as the encrypted chat service Telegram have urged supporters to use the new policy against activists and journalists who have shared their information or identified them in photos of hate rallies or public events.

“Due to the new privacy policy at Twitter, things now unexpectedly work more in our favor as we can take down Antifa … doxing pages more easily,” a white nationalist and Nazi sympathizer wrote to followers on Telegram on Wednesday night, referring to the anti-fascist political movement whose members often clash with far-right protesters and to the practice of publishing people’s personal information online.

He included a list of nearly 50 Twitter accounts and urged people to report them for suspension under the new rule. At least one of the accounts was suspended by Thursday. Twitter did not respond to a question about why the account had been taken down.

The Telegram post has been viewed more than 10,000 times. After it was shared on Twitter by anti-extremism researcher Kristofer Goldsmith, the Telegram user wrote, “Yeah and we’ll do it again.”

How Twitter will enforce the new policy remains contentious. A Twitter spokesman told The Post this week that the policy would help prevent the unauthorized sharing of photos of rape victims or women in authoritarian countries who could face real-world punishment for going outside without a burqa.

The company said that each report will be reviewed case-by-case and that flagged accounts can file an appeal or delete the offending posts to resolve their suspensions.

Snyder, the Philadelphia anti-fascist researcher, said she believed her reported tweet did not break the rules but deleted it anyway, worried that any appeal she filed would take too long or ultimately fail. She suspects the rule could have a “catastrophic” chilling effect on other researchers working to expose extremists.

Since the violent white-nationalist rally in Charlottesville in 2017, anti-extremism activists have used Twitter to identify previously anonymous members of far-right militias, neo-Nazis and other hate groups, sharing their photos, names and other information.
 

bnew (continued)

In some cases, the exposed people have lost jobs, been reported to law enforcement or faced consequences with co-workers, friends or family. Activists and researchers who have shared their information have also faced death threats and online attacks.

Goldsmith, a researcher with the Innovation Lab at Human Rights First who tracks the far right, said the rule could undermine Twitter’s front-line role in distributing critical information about online and real-world hate campaigns.

Amateur investigators known as “sedition hunters” openly used Twitter to identify rioters at the U.S. Capitol on Jan. 6. Other researchers did the same after Charlottesville, he said. A jury last week ruled that more than a dozen white supremacists and hate groups should pay more than $26 million in damages for acts of intimidation and violence during the rally that left one woman dead.

“A large portion of the evidence that has been presented in these cases came from what Twitter now says is protected or ‘private’ information,” Goldsmith said.

Anti-extremism researchers and photojournalists on Twitter have in recent days posted reports showing suspension notices they'd received under the new rule, even for months-old tweets showing people in public places, to whom the rule would not appear to apply.

Far-right activists have also worked to exploit their newfound power. On Telegram, one far-right activist shared tips on how to find potentially reportable images, using Twitter search queries such as “images fascist exposed.”

On other sites, like the fringe social network Gab, far-right activists said they were aggressively hammering out reports in hopes of taking down anti-fascist Twitter accounts. One said he had filed more than 50 reports in a day, adding, “It’s time to stay on the offensive.”

Some have also attempted to organize on Twitter, with one account saying they had submitted dozens of reports under the rule against anti-fascist accounts, tweeting, “[Right-wing] Twitter, it is time. I told you yesterday and you had reservations. No more excuses. We have work to do.” The account has since been suspended.

Goldsmith said he worried that Twitter’s moderators would not be prepared for a flood of reports from bad actors who could organize on other sites in hopes of blocking or hindering researchers’ work.

“Twitter simply does not have the human power to make these judgment calls,” he said.

Oren Segal, vice president of the Center on Extremism at the Anti-Defamation League, said Twitter needs to provide more clarity into how these rules will be enforced.

“If the intention of the new rules is to help stop doxing and harassment, that is important. But exposing extremists is also important,” Segal said. “Accountability is important. And sunlight can be the best disinfectant when done responsibly.”
 

Jello Biafra
The problem with this new rule on Twitter is the problem with all of Twitter's reporting functions: what matters isn't what is being reported but how often it's reported. If you can get enough people to report something, Twitter acts based on the volume of reports, not on the content of what is being reported.

Most of that comes down to the fact that Twitter (like Facebook and every other social media site) continues to use automated programs to police content instead of human moderators, because those companies refuse to hire the number of human beings needed for a job that only human beings can really do successfully.

The moderators they do have are basically outsourced contractors rather than employees of the companies.
 

Makavalli
So which lobbyist or politician pushed behind the scenes for this new policy? Because it was obviously set up to protect those racists from being put on blast.
 

TEH
They bully, also.

What they're not telling you is how hurt they are that they were stopped …
 