Twitter fighting terror posts with ‘zero-tolerance’ measures

George Salama
Updated 25 April 2017

LONDON: New safety measures introduced by Twitter are helping fight extremist and abusive posts, a Dubai-based executive for the company said, amid criticism that technology firms have dropped the ball when it comes to terror content.
As part of its semi-annual “Transparency Report” released in March, Twitter said it had suspended some 377,000 accounts during the final six months of 2016 for “violations related to promotion of terrorism.”
George Salama, head of public policy and government relations for the Middle East and North Africa, told Arab News that new safety measures are also helping in the fight against abuse and terror content.
Measures include a “safer search” feature that removes tweets with potentially sensitive content from users’ timelines, an expanded “mute” feature, and new filtering options for notifications to give people more control over what they see. Twitter is also working to identify accounts engaging in abusive behavior, even if they are not reported.
“It is a complete package of safety tools that is making Twitter a safer platform, whether we are talking about terrorism or any other kind of abuse,” said Salama.
“It’s helping a lot in fighting such phenomena… Safety is No. 1 priority for us.”
But some believe that technology companies such as Twitter are not doing enough to tackle the spread of extremist content online. Two UK ministers last month said Twitter, Google and Facebook must do more to tackle posts that promote terrorism and extremism.
Here Salama explains how Twitter is clamping down on this in the Middle East and beyond.

Q. What is Twitter’s policy regarding terror and extremist-related content on its platform?
We at Twitter clearly condemn the use of our platform to promote any sort of terrorism. The Twitter rules make it very clear that such behavior, or any sort of violent threat, is not permitted on our services.

Q. Is it actually possible to prevent people from posting such content? Surely if you close one account, they can just open another…
We took clear, major steps to update the platform from a safety perspective, which would help to enforce, and empower people on the platform to (make them) feel that they are safe, and engaging in conversation in a much more productive way. The problem (does not only involve) Twitter. The Internet, in general, is public and open, and Twitter itself is public and open.

Q. What kind of interactions do you have with regional governments in the Gulf, and how many requests are they sending regarding account suspensions?
Transparency is part of our DNA at Twitter. We are issuing twice per year our “Transparency Report.” And it has a clear breakdown by country of how many requests we have received by governments, and what has been actioned, and what is not… It is part of my role to raise awareness with governments, regulators and law-enforcement on exactly what to expect when submitting any information request.

Q. Top ministers in the UK recently criticized Twitter — along with Google and Facebook — saying they must do more to tackle content that promotes terrorism and extremism. Do you agree?
There is always room for improvement. And we are working with our industry partners, not only in Europe — it is a global effort to counter violent extremism. We are not only working on taking down accounts. We are working in parallel with different partners in Europe and in the Middle East to raise awareness.

Q. But the British Foreign Minister Boris Johnson has said technology companies are not acting fast enough to remove extremist content when issues have been raised, and need to develop new systems and algorithms to detect it. What is your response to that?
I am not in a place to talk about other industry partners. But what I am confident about is Twitter’s efforts on that front are remarkable, and they are welcomed by many governments regionally and globally.

Q. Does Twitter have any other measures in the pipeline to tackle this problem?
The efforts are ongoing. And (following) the big set of safety updates that we announced earlier, there are always updates coming… With the machine-learning, artificial-intelligence tools and spam-filtering tools it is an ongoing process. We are heavily focusing on that to ensure that Twitter is a safe place for our users.


Disinformation the new enemy in disaster zones, says Red Cross

Updated 05 March 2026

  • “Harmful information and dehumanizing narratives” undermine humanitarian aid and put the lives of aid workers at risk
  • Between 2020 and 2024, disasters affected nearly 700 million people, displaced over 105 million, and killed more than 270,000 — more than doubling the number of people in need of humanitarian aid

GENEVA: The rise of disinformation is undermining humanitarian aid and putting lives at risk, while disasters are affecting ever more people, the Red Cross warned Thursday.
“Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements, and claimed over 270,000 lives,” the International Federation of Red Cross and Red Crescent Societies said.
The number of people needing humanitarian assistance more than doubled in the same timeframe, the IFRC said in its World Disasters Report 2026.
But the world’s largest humanitarian network said that “harmful information and dehumanizing narratives” were increasingly undermining trust, putting the lives of aid workers at risk.
“In polarized and politically-charged contexts, humanitarian principles such as neutrality and impartiality are increasingly misunderstood, misrepresented or deliberately attacked online,” it said.
The IFRC has more than 17 million volunteers in 191 countries.
“In every crisis I have witnessed, information is as essential as food, water and shelter,” said the Geneva-based federation’s secretary general Jagan Chapagain.
“But when information is false, misleading or deliberately manipulated, it can deepen fear, obstruct humanitarian access and cost lives.”
He said harmful information was not a new phenomenon, but it was now moving “with unprecedented speed and reach.”
Chapagain said digital platforms were proving “fertile ground for lies.”
The IFRC report said the challenge nowadays was no longer about the availability of information but its reliability, noting that the production and spread of disinformation was easily amplified by artificial intelligence.

- ‘Life and death’ -

The report cited numerous recent examples of harmful information hampering crisis response.
During the 2024 floods in Valencia, false narratives online accused the Spanish Red Cross of diverting aid to migrants, which in turn fueled “xenophobic attacks on volunteers,” the IFRC said.
In South Sudan, rumors that humanitarian agencies were distributing poisoned food “caused people to avoid life-saving aid” and led to threats against Red Cross staff.
In Lebanon, false claims that volunteers were spreading Covid-19, favoring certain groups with aid and providing unsafe cholera vaccines eroded trust and endangered vulnerable communities, the IFRC said.
And in Bangladesh, during political unrest, volunteers faced “widespread accusations of inaction and political alignment,” leading to harassment and reputational damage, it added.
Similar events were registered by the IFRC in Sudan, Myanmar, Peru, the United States, New Zealand, Canada, Kenya and Bulgaria.
The report underlined that around 94 percent of disasters were handled by national authorities and local communities, without international interventions.
“However, while volunteers, local leaders and community media are often the most trusted messengers, they operate in increasingly hostile and polarized information environments,” the IFRC said.
The federation called on governments, tech firms, humanitarian agencies and local actors to recognize that reliable information “is a matter of life and death.”
“Without trust, people are less likely to prepare, seek help or follow life-saving guidance; with it, communities act together, absorb shocks and recover more effectively,” said Chapagain.
The organization urged technology platforms to prioritize authoritative information from trusted sources in crisis contexts, and transparently moderate harmful content.
And it said humanitarian agencies needed to make preparing to deal with disinformation “a core function” of their operations, with trained teams and analytics.