The Change the Terms coalition, which includes the civil rights group Color of Change and the good-governance group Common Cause, released a scathing 19-page analysis on Thursday of the election policies of major tech companies and whether they are keeping their promises to fight back against disinformation before the vote.
The report argues that tech companies’ plans to fight misinformation and connect users to credible information came too late and weren’t aggressive enough to tackle proliferating conspiracies about widespread voter fraud or specific attacks on election officials.
“Election misinformation and disinformation are not anecdotal or seasonal. The lies — especially the brand of election-denial rhetoric that has increased since 2020 — have been rampant online for years, and this crisis has no end in sight,” the groups write.
“Treating ‘election-related’ disinformation in particular as episodic ignores that it is present year-round and shapes the beliefs and opinions that lead to the harassment of election officials, as well as election-related hoaxes and violence.”
The coalition report, which was written primarily by researchers and activists from the media advocacy group Free Press, offers a grim assessment of how tech companies have wielded their power to shape public discourse during a high-stakes campaign season in which Americans will decide who represents them in the House of Representatives, a third of the Senate and many state offices. Millions of voters have already cast their ballots.
“We went to the brink of violence and saw the effect of social media influence on January 6,” said Nora Benavidez, senior counsel and director of digital justice and civil rights at Free Press. “Despite this, companies are not doing any better. They clearly didn’t update their systems in time for the election.”
YouTube spokeswoman Ivy Choi said in a statement that the company disagrees with the report’s characterization of company policies. “Inciting violence against election officials or alleging that the 2020 U.S. presidential election was stolen or rigged is not permitted on YouTube, and we enforce our policies regardless of the speaker,” Choi said.
TikTok spokesperson Ben Rathe said in a statement that the company removes election misinformation and provides access to authoritative election information through its “Election Center, which is available in more than 45 languages.”
Twitter spokeswoman Elizabeth Busby said in a statement that the company has “taken deliberate and meaningful steps to elevate credible and authoritative information about the United States midterm elections and to ensure that misleading information is not amplified.”
A spokesperson for Meta, the parent company of Facebook and Instagram, declined to comment on the report but referred a Washington Post reporter to an August press release outlining Meta’s intention to combat misinformation about how to vote and threats of violence or harassment against election workers.
The report follows a months-long campaign by the coalition to encourage tech companies to tackle hateful, misinformed and violent content on their platforms. Over the summer, the coalition began meeting with executives from the four companies to discuss specific strategies they could adopt to deal with problematic information. Months later, according to the coalition, companies have followed few of its recommendations.
Over the summer, tech companies announced they were largely sticking to strategies they had deployed in previous election cycles to combat misrepresentations about the election process while elevating credible information. They pledged to ban and remove content that misleads users about how and when to vote while promoting accurate information about the electoral process. Twitter, TikTok and YouTube also said they would take action against posts that falsely claim the 2020 election was rigged. Meta has banned such posts only in political ads.
But the report alleges serious shortcomings in the companies’ policies and in the enforcement of their own rules. The advocacy groups were particularly critical of the exceptions the four companies grant when they deem certain content newsworthy or in the “public interest,” according to the report.
Activists say that “any promising protection policy seems to be circumventable with each platform’s arbitrary ‘news’ or ‘public interest’ exception.”
This issue has sparked renewed interest since tech mogul Elon Musk, who is set to become the owner of Twitter on Friday, said he would reverse Twitter’s ban on former President Donald Trump.
Hundreds of GOP candidates have embraced Trump’s false claims about his defeat in the 2020 presidential race, and some are using social media to spread unsubstantiated claims of voter fraud.
Busby said Twitter rarely enforces the public interest exception, and when it does, the tweet is ineligible to be retweeted and is placed behind a notice that provides context about the rule violation.
Choi said that while YouTube “allows content with sufficient educational, documentary, scientific, or artistic (EDSA) context or opposing views,” that allowance “is not a pass to violate our policies” based on news value.
Rathe pointed a Washington Post reporter to TikTok’s content guidelines, which state that the company can apply exceptions to its rules in “certain limited circumstances,” such as material shared for documentary, scientific, or artistic reasons.
The coalition report also urges platforms to strengthen their policies to protect election workers from violence and harassment. Election workers and their families have faced death threats as well as sexual and racial attacks, spurred by their refusal to support Trump’s claims of a rigged election or because they have been caught up in conspiracy theories that wrongly claim they were part of an election-rigging scheme.
Some of the technology companies have policies that prohibit certain types of harassment or the disclosure of personally identifiable information about users, including election workers.
Civil rights groups argue that companies should be more transparent about their efforts to prevent the spread of election workers’ personal information online and should do more to weed out the misinformation that makes election workers a target in the first place.
“The ‘Big Lie’ is spreading across all platforms; examples also abound on Meta and Twitter, where hateful and misleading posts pack a punch: encouraging violence against election workers over patently false allegations that the 2020 election was stolen from Donald Trump,” the groups wrote.