Advocacy groups warn of harassment on Twitter after Musk kills safety board
Elon Musk continues to unravel existing content moderation policies with the disbanding of Twitter's outside advisory council.
The morning after Elon Musk abruptly eliminated a six-year-old Twitter advisory council, children's safety and civil rights groups warned the move will only make it harder to police harassment and hate speech on the platform.
The Trust and Safety Council included experts from across fields, such as digital rights and online harassment, and had been expected to meet for the first time under Musk’s leadership on Monday night. Instead — 40 minutes before the Zoom meeting — members got an email disbanding the council and thanking them for their service.
Dissolving the council will make it more difficult for Twitter “to come up with a system that does protect free speech and at the same time prevent the most egregious forms of harassment and bullying,” according to former council member Larry Magid, the CEO of children’s safety nonprofit ConnectSafely.
Musk has spent the weeks since he bought the platform systematically dismantling much of the company’s previous content moderation apparatus as he’s embraced demands for unfettered speech and railed against what he calls “the woke mind virus.” The elimination of the council — which was strictly advisory — is just his latest rejection of outside input on how he should run Twitter.
“It’s disappointing that the council was dissolved because its members had valuable insights about how to make their platform a safer place for all users,” said Yael Eisenstat, vice president of the Anti-Defamation League’s Center for Technology and Society.
“Twitter will lose the opportunity to learn from a diverse group of experts ranging from free speech absolutists to people who are extremely protective of children and people’s privacy,” added Magid.
GLAAD, an LGBTQ advocacy group, said that absent the council or best practices on hate speech, “the platform clearly remains unsafe not only for LGBTQ users but for brands and advertisers.”
Musk had already fired nearly all of the company's human content moderators, personally attacked Twitter's former head of Trust and Safety, Yoel Roth, and amplified posts from writers Matt Taibbi and Bari Weiss pushing the narrative that former staff suppressed conservative viewpoints in the latest release of the so-called Twitter Files.
First formed in 2016, the council had grown from its initial 40 members to approximately 100 groups around the globe. Its members met periodically to advise on product launches and new initiatives Twitter was working on. And even before it was formally dissolved, three prominent council members quit last week, saying "the safety and wellbeing of Twitter's users are on the decline."
In October, Musk announced he was going to form his own content moderation council and met with civil rights groups to address an apparent spike in hate speech on the platform. But over the next month Musk quickly charted a different course, instead granting “general amnesty” to previously banned accounts like former President Donald Trump and Rep. Marjorie Taylor Greene (R-Ga.), ending enforcement of Twitter’s Covid-19 misinformation policy and releasing internal documents alleging that “activist” employees deplatformed Trump.
More recently, Musk personally attacked Roth, Twitter’s former top safety official, and those attacks produced a wave of trolling and threats to Roth’s personal safety — forcing him and his family to flee their home, according to reports. Musk also claimed the advisory council and Twitter didn’t remove child sexual abuse content for years, an allegation that former Twitter CEO and co-founder Jack Dorsey rejected.
Musk’s salvos have led to criticism from politicians, including Rep. Ted Lieu (D-Calif.) who tweeted that the “Twitter Files” content was “stupid” and that claims that Trump was silenced in 2020 were “pure gaslighting.” Additionally, Sen. Mark Kelly (D-Ariz.) pushed back on Musk’s tweets about Anthony Fauci, the nation's top infectious disease expert, asking Musk not to “mock and promote hate toward already marginalized and at-risk-of-violence members of the #LGBTQ+ community.”
Rep. Ritchie Torres (D-N.Y.) said Musk’s criticism of Fauci means “Elon is no champion of free speech.”
However, no sitting member of Congress has left the platform yet, largely because no equivalent social media site lets them share their messages and quickly reach journalists.
Advocacy groups continue to warn that Musk’s own conduct — and his professed commitment to free-speech absolutism — could lead to harm in the real world.
The Center for Democracy and Technology, a digital rights nonprofit that receives some funding from tech companies, was a member of the council and rejected Musk’s overt embrace of hateful speech on the platform. “We have also been dismayed by Twitter leadership’s irresponsible actions to spread misinformation about the council, which have endangered council members and eroded any semblance of trust in the company,” the group said in a statement.
Without the council and the majority of its human content moderators, Musk's Twitter may be relying on artificial intelligence and algorithms to detect and remove hate speech and extremist content, according to experts. But that won't be enough, former council members say.
“It's unclear what the platform will look like, but I am not optimistic that AI alone can protect speech without human beings who can evaluate the nuances of certain kinds of content to figure out whether or not they violate the terms of service,” Magid said.