Twitter exec says moving fast on moderation, as harmful content surges

A Twitter logo hangs outside the company's San Francisco offices on Nov. 1, 2022. (AP)
Updated 03 December 2022


  • Twitter is restricting hashtags and search results frequently associated with abuse, like those aimed at looking up “teen” pornography

SAN FRANCISCO: Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favoring restrictions on distribution rather than removing certain speech outright, its new head of trust and safety told Reuters.
Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of potential impacts on “benign uses” of those terms, said Twitter Vice President of Trust and Safety Product Ella Irwin.
“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Irwin said on Thursday, in the first interview a Twitter executive has given since Musk’s acquisition of the social media company in late October.
Her comments come as researchers are reporting a surge in hate speech on the social media service, after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or engaged in “egregious spam.”
The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Musk slashed half of Twitter’s staff and issued an ultimatum to work long hours that resulted in the loss of hundreds more employees.
And advertisers, Twitter’s main revenue source, have fled the platform over concerns about brand safety.
On Friday, Musk vowed “significant reinforcement of content moderation and protection of freedom of speech” in a meeting with French President Emmanuel Macron.
Irwin said Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying safety was the company’s top priority. “He emphasizes that every single day, multiple times a day,” she said.
The approach to safety Irwin described at least in part reflects an acceleration of changes that had already been planned since last year around Twitter’s handling of hateful conduct and other policy violations, according to former employees familiar with that work.
One approach, captured in the industry mantra “freedom of speech, not freedom of reach,” entails leaving up certain tweets that violate the company’s policies but barring them from appearing in places like the home timeline and search.
Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach allows for more freewheeling speech while cutting down on the potential harms associated with viral abusive content.
The number of tweets containing hateful content on Twitter rose sharply in the week before Musk tweeted on Nov. 23 that impressions, or views, of hateful speech were declining, according to the Center for Countering Digital Hate, one example of researchers pointing to the prevalence of such content even as Musk touts a reduction in its visibility.
Tweets containing anti-Black language that week were triple the number seen in the month before Musk took over, while tweets containing a gay slur were up 31 percent, the researchers said.
‘MORE RISKS, MOVE FAST’
Irwin, who joined the company in June and previously held safety roles at other companies including Amazon.com and Google, pushed back on suggestions that Twitter did not have the resources or willingness to protect the platform.
She said layoffs did not significantly impact full-time employees or contractors working on what the company referred to as its “Health” divisions, including in “critical areas” like child safety and content moderation.
Two sources familiar with the cuts said that more than 50 percent of the Health engineering unit was laid off. Irwin did not immediately respond to a request for comment on the assertion, but previously denied that the Health team was severely impacted by layoffs.
She added that the number of people working on child safety had not changed since the acquisition, and that the product manager for the team was still there. Irwin said Twitter backfilled some positions for people who left the company, though she declined to provide specific figures for the extent of the turnover.
She said Musk was focused on using automation more, arguing that the company had in the past erred on the side of using time- and labor-intensive human reviews of harmful content.
“He’s encouraged the team to take more risks, move fast, get the platform safe,” she said.
On child safety, for instance, Irwin said Twitter had shifted toward automatically taking down tweets reported by trusted figures with a track record of accurately flagging harmful posts.
Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specializes in child sexual abuse material, said she has noticed Twitter recently taking down some content as fast as 30 seconds after she reports it, without acknowledging receipt of her report or confirmation of its decision.
In the interview on Thursday, Irwin said Twitter took down about 44,000 accounts involved in child safety violations, in collaboration with cybersecurity group Ghost Data.
Twitter is also restricting hashtags and search results frequently associated with abuse, like those aimed at looking up “teen” pornography. Past concerns about the impact of such restrictions on permitted uses of those terms had been set aside, she said.
The use of “trusted reporters” was “something we’ve discussed in the past at Twitter, but there was some hesitancy and frankly just some delay,” said Irwin.
“I think we now have the ability to actually move forward with things like that,” she said.

Spotify and Dubai Culture sign MoU to support local talent development

Updated 26 January 2026


DUBAI: Spotify and the Dubai Culture and Arts Authority signed a memorandum of understanding earlier this month aimed at supporting the growth of local musical talent.

The partnership will include the sharing of insights, data and analytics, as well as practical support to help UAE-based artists sustain and progress their careers, the organizations said.

As part of the MoU, Spotify and Dubai Culture will launch joint programs and develop a series of music-led projects focused on the emirate’s creative community.

Talent development is a core pillar of Dubai Culture’s work, said Her Excellency Hala Badri, director-general of the Dubai Culture and Arts Authority.

She added: “In the music sector, this translates into sustained support that enables musicians to develop, produce, and continue their practice over time. The agreement with Spotify is part of our broader efforts to support artists and creatives at all career stages and to strengthen the professional foundations of the music sector in Dubai.”

For Spotify, the MoU is in line with existing initiatives such as the RADAR Arabia program and the Fresh Finds Arabia playlist, which highlight and support local emerging talent.

As a global hub connecting Asia, Africa and Europe, Dubai is playing an increasingly important role in the region’s music economy, said Gustav Gyllenhammar, senior vice president of markets and subscriptions at Spotify.

Through the collaboration with Dubai Culture, he added, Spotify is “helping build a stronger local music ecosystem, supporting discovery and helping music coming out of Dubai reach listeners around the world.”