France fines Google 220 mn euros over online ad dominance

The fine comes as part of a wave of antitrust investigations by the French regulator into tech giants like Google, Apple and Facebook. (File/AFP)
Updated 07 June 2021

  • France fines Google 220 million euros for abusing its dominant position in the online advertising market.
  • Google agrees to pay the fine and to change its business practices worldwide to settle the French probe.

PARIS: France’s competition regulator on Monday fined Google 220 million euros ($267 million) after finding it had abused its dominant market position for placing online ads, as US tech giants face growing pressure in Europe.

The penalty is part of a settlement reached after three media groups — News Corp, French daily Le Figaro and Belgium’s Groupe Rossel — accused Google of effectively having a monopoly over ad sales for their websites and apps.

The competition authority determined that Google gave preferential treatment to its own ad inventory auction service AdX and to DoubleClick Ad Exchange, its real-time platform that lets clients choose and buy ads.

“It is the first ruling in the world to scrutinize the complex algorithmic processes for the auctions that determine online ‘display’ advertising,” the authority’s president Isabelle de Silva said.

Media groups looking to sell ad space on their websites or mobile apps through rival platforms often found Google’s services competing unfairly against those platforms, using a variety of methods.

For example, regulators found that DoubleClick would vary the commission it took when making a sale based on the prices offered by rival ad servers.
At the same time, Google arranged for AdX, its own supply-side platform (SSP), to give preferential treatment to offers coming from DoubleClick, effectively squeezing out competitors such as Xandr or Index Exchange.
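
To make the described practices concrete, consider a deliberately simplified toy model in Python. It is purely illustrative: the function names, fee tiers and prices are invented for this sketch and are not drawn from Google’s systems. The first function mimics an ad server that varies its commission based on rival prices, taking the largest cut that still wins; the second mimics an exchange that applies a preferential boost to offers arriving from an affiliated ad server.

    # Illustrative toy model only; not Google's real auction code.
    def ad_server_net_bid(gross_bid, rival_prices):
        """Vary the commission: take the largest fee that still beats rivals."""
        best_rival = max(rival_prices, default=0.0)
        for fee in (0.20, 0.15, 0.10, 0.05):  # hypothetical fee tiers, highest first
            net = gross_bid * (1 - fee)
            if net > best_rival:
                return net, fee
        return gross_bid * (1 - 0.05), 0.05   # lowest fee as a fallback

    def ssp_pick_winner(offers, favored="house_ad_server", boost=1.10):
        """Rank offers, boosting those from the affiliated ad server."""
        def score(offer):
            source, price = offer
            return price * (boost if source == favored else 1.0)
        return max(offers, key=score)

    net, fee = ad_server_net_bid(1.00, rival_prices=[0.82])
    print(net, fee)  # 0.85 0.15 -- the biggest cut that still wins the sale
    winner = ssp_pick_winner([("rival_exchange", 0.90), ("house_ad_server", 0.85)])
    print(winner)    # ('house_ad_server', 0.85) -- wins despite the lower price

In this toy model the favored path prevails even when a rival exchange offers the publisher a higher price, which is the kind of harm to sellers the authority described.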

“The practices are particularly serious because they penalize Google’s competitors in the SSP market as well as the publishers of websites and mobile apps,” the regulator said in a statement.

Media groups saw their online ad revenues crimped “even as their business model has been strongly undermined by the decline in paper subscriptions and the associated drop in advertising revenue,” it said.

Le Figaro eventually dropped its complaint.

Google did not contest the findings, and the regulator said the company has committed to operational changes including improved interoperability with third-party ad placement providers.

“We are going to test and develop these changes in the coming months before deploying them more broadly, including some on a global scale,” Maria Gomri, legal director at Google France, said in a statement.

The fine represents just a tiny fraction, roughly half a percent, of the $55.3 billion in revenue Google booked in the first quarter of this year alone, mainly from online ad sales.
The ruling comes as American technology firms are drawing closer scrutiny from European authorities, which are giving themselves new resources to better understand the complex workings of fast-evolving markets.

Last week, Germany’s competition regulator said it was expanding an antitrust investigation into Google and its parent company Alphabet to include Google News Showcase, a service aimed at increasing revenue for media publishers.

Facebook also found itself targeted last week by parallel competition inquiries from the European Union and Britain, into whether the social media giant uses data from advertisers to unfairly dominate the online classifieds market.

Google had already been fined 150 million euros by the French regulator in December 2019 over “opaque” operating rules for its advertising platform, which were deemed to be applied in “an unfair and random manner.”

And in December last year, Google and Amazon were fined a combined 135 million euros by France’s privacy watchdog for placing advertising cookies on users’ computers without consent.


Lebanese media minister George Kordahi stirs controversy yet again by defending Houthis

Updated 27 October 2021

DUBAI: Once again, Lebanon’s information minister has triggered a social media frenzy after a video surfaced on Tuesday of him wishing for a ‘temporary military coup’ to restructure the country’s political life.
“I wish that a military coup happens in Lebanon, yet a temporary military coup that comes to organize and reorganize the political life in Lebanon,” Information Minister George Kordahi was heard telling a TV host in the short video.
Megaphone, an independent online media platform, posted the two-and-a-half-minute video on Twitter, where it has so far been viewed by nearly 6,000 users.
According to the Lebanese news portal Annahar, the video was part of an interview conducted on Aug. 5 by a media platform called Barlamanasha3b [People’s Parliament].
At the time, Kordahi had not yet been named information minister in Prime Minister Najib Mikati’s cabinet, which was formed in September.
When the host objected that ‘there is no such thing as a temporary military coup,’ Kordahi maintained: “Yes, there is a temporary military coup for at least five years [in my opinion], then they reappoint the political regime.”
When the TV host of Barlamanasha3b asked him about his position on what is happening in Yemen, Kordahi said ‘they’ [referring to the Houthis] ‘are defending themselves.’
He asked in an exclamatory tone: ‘Them! Are they assaulting anyone?’
“In my opinion, this Yemeni war is absurd and should stop,” he said.
When a co-host asked him about the nonstop drone attacks carried out by the Houthis against Saudi civilians and property, he replied: “Yes but you could also see them as people … and see the damages that are being inflicted upon them while being bombarded at their homes, properties, villages, squares, funerals and weddings by warplanes … it is about time this war comes to an end.”
Kordahi reiterated his view that it is ‘an absurd war.’
The Lebanese minister said: “We cannot compare between the efforts of Hezbollah in liberation and liberating Lebanese lands and the efforts of Houthis who are defending themselves against foreign aggression.”
According to the video, the co-host asked Kordahi if he considers the Saudis and Emiratis a ‘foreign aggression.’
“What?” he replied hesitantly, leaning forward, before the co-host rephrased the question: ‘Do you consider the Saudis and Emiratis a foreign aggression against Yemen?’
“Aggression, for sure there is aggression. Not because it is Saudi or Emirati, but yes, there has been an aggression for the past five or six years, or for however long!” said Kordahi, before the co-host corrected him, saying it is ‘eight years.’
“Eight years [of aggression] continuously against those people! Enough! What couldn’t be achieved within two or three years, you won’t achieve it within eight years. So this has become an absurd war that’s my opinion,” he concluded.
Citing a Saudi source, MTV News posted on Twitter that Saudi Arabia was facing a severe diplomatic crisis because of Kordahi’s offensive statements about Arab countries, ‘regardless of the timing of the interview,’ which the source said ‘indicated his intentions.’
Beirut-based Washington Post correspondent Sarah Dadouch tweeted that the Saudi ambassador to Lebanon had retweeted several stories citing Saudi sources as saying: “We are in front of a sharp diplomatic crisis because of the comments of Media Minister George Kordahi.”
Meanwhile, Emirati Twitter user Hassan Sajwani tweeted: “Lebanese Prime Minister: George Qardahi’s words do not represent the government’s official position on the Yemeni issue. - Al Arabiya TV”
A former television presenter, Kordahi has stirred controversy in the past given his questionable opinions on matters ranging from Syrian President Bashar Assad to his views on harassment in the workplace.
Well known and highly popular among a large segment of the Lebanese population, the 71-year-old media figure rose to fame hosting the pan-Arab version of “Who Wants to Be a Millionaire?” for several years.
Arab News reported earlier that his controversial political opinions might not have mattered then, but they certainly do now that he is a member of Lebanon’s cabinet.


Facebook, YouTube take down Bolsonaro video over false vaccine claim

Bolsonaro, who tested positive for the coronavirus in July last year, had credited his taking hydroxychloroquine, an anti-malarial drug, for his mild symptoms. (File/AFP)
Updated 26 October 2021

  • Both Facebook and Alphabet Inc’s YouTube said the video, which was recorded on Thursday, violated their policies

RIO DE JANEIRO: Facebook and YouTube have removed from their platforms a video by Brazilian President Jair Bolsonaro in which the far-right leader made a false claim that COVID-19 vaccines were linked with developing AIDS.
Both Facebook and Alphabet Inc’s YouTube said the video, which was recorded on Thursday, violated their policies.
“Our policies don’t allow claims that COVID-19 vaccines kill or seriously harm people,” a Facebook spokesperson said in a statement on Monday.
YouTube confirmed that it had taken the same step later in the day.
“We removed a video from Jair Bolsonaro’s channel for violating our medical disinformation policy regarding COVID-19 for alleging that vaccines don’t reduce the risk of contracting the disease and that they cause other infectious diseases,” YouTube said in a statement.
According to the Joint United Nations Programme on HIV and AIDS (UNAIDS), COVID-19 vaccines approved by health regulators are safe for most people, including those living with HIV, the virus that causes acquired immunodeficiency syndrome, known as AIDS.
Bolsonaro’s office did not respond immediately to a request for comment outside normal hours.
In July, YouTube removed videos from Bolsonaro’s official channel in which he recommended using hydroxychloroquine and ivermectin against COVID-19, despite scientific evidence that these drugs are not effective in treating the disease.
Since then, Bolsonaro has avoided naming both drugs on his live broadcasts, saying the videos could be removed and advocating “early treatment” in general for COVID-19.
Bolsonaro, who tested positive for the coronavirus in July last year, had credited his taking hydroxychloroquine, an anti-malarial drug, for his mild symptoms. While Bolsonaro himself last January said that he wouldn’t take any COVID-19 vaccine, he did vow to quickly inoculate all Brazilians.
In addition to removing the video, YouTube has suspended Bolsonaro for seven days, national newspapers O Estado de S. Paulo and O Globo reported, citing a source familiar with the matter.
YouTube did not respond to a separate Reuters request for comment regarding the suspension on Monday night.


Whistleblower Haugen says Facebook making online hate worse

An installation depicting Facebook founder Mark Zuckerberg surfing on a wave of cash and surrounded by distressed teenagers. (AFP)
Updated 25 October 2021

  • Haugen told UK lawmakers how Facebook Groups amplifies online hate, saying algorithms that prioritize engagement take people with mainstream interests and push them to the extremes

LONDON: Former Facebook data scientist turned whistleblower Frances Haugen on Monday told lawmakers in the United Kingdom working on legislation to rein in social media companies that the company is making online hate and extremism worse and outlined how it could improve online safety.
Haugen appeared before a parliamentary committee scrutinizing the British government’s draft legislation to crack down on harmful online content, and her comments could help lawmakers beef up the rules. She testified the same day Facebook was set to release its latest earnings and The Associated Press and other news organizations began publishing stories based on thousands of pages of internal company documents she obtained.
Haugen told UK lawmakers how Facebook Groups amplifies online hate, saying algorithms that prioritize engagement take people with mainstream interests and push them to the extremes. She said the company could add moderators to prevent groups from being used to spread extremist views.
“Unquestionably, it’s making hate worse,” she said.
She said she was “shocked to hear recently that Facebook wants to double down on the metaverse and that they’re gonna hire 10,000 engineers in Europe to work on the metaverse,” referring to the company’s plans for an immersive online world it believes will be the next big Internet trend.
“I was like, ‘Wow, do you know what we could have done with safety if we had 10,000 more engineers?’ It would be amazing,” she said.
It’s her second appearance before lawmakers after she testified in the US Senate earlier this month about the danger she says the company poses, from harming children to inciting political violence and fueling misinformation. Haugen cited internal research documents she secretly copied before leaving her job in Facebook’s civic integrity unit.
The documents, which Haugen provided to the US Securities and Exchange Commission, allege Facebook prioritized profits over safety and hid its own research from investors and the public. Some stories based on the files have already been published, exposing internal turmoil after Facebook was blindsided by the Jan. 6 US Capitol riot and how it dithered over curbing divisive content in India, and more is to come.
Facebook CEO Mark Zuckerberg has disputed Haugen’s portrayal of the company as one that puts profit over the well-being of its users or that pushes divisive content, saying a false picture is being painted. But he does agree on the need for updated Internet regulations, saying lawmakers are best able to assess the tradeoffs.
Haugen has told US lawmakers that she thinks a federal regulator is needed to oversee digital giants like Facebook, something that officials in Britain and the European Union are already working on.
The UK government’s online safety bill calls for setting up a regulator that would hold companies to account when it comes to removing harmful or illegal content from their platforms, such as terrorist material or child sex abuse images.
“This is quite a big moment,” Damian Collins, the lawmaker who chairs the committee, said ahead of the hearing. “This is a moment, sort of like Cambridge Analytica, but possibly bigger in that I think it provides a real window into the soul of these companies.”
Collins was referring to the 2018 debacle involving data-mining firm Cambridge Analytica, which gathered details on as many as 87 million Facebook users without their permission.
Representatives from Facebook and other social media companies plan to speak to the committee Thursday.
Ahead of the hearing, Haugen met the father of Molly Russell, a 14-year-old girl who killed herself in 2017 after viewing disturbing content on Facebook-owned Instagram. In a chat filmed by the BBC, Ian Russell told Haugen that after Molly’s death, her family found notes she wrote about being addicted to Instagram.
Haugen also is scheduled to meet next month with European Union officials in Brussels, where the bloc’s executive commission is updating its digital rulebook to better protect Internet users by holding online companies more responsible for illegal or dangerous content.
Under the UK rules, expected to take effect next year, Silicon Valley giants face an ultimate penalty of up to 10 percent of their global revenue for any violations. The EU is proposing a similar penalty.
The UK committee will be hoping to hear more from Haugen about the data that tech companies have gathered. Collins said the internal files Haugen has turned over to US authorities are important because they show the kind of information Facebook holds, and what regulators should be asking when they investigate these companies.
The committee has already heard from another Facebook whistleblower, Sophie Zhang, who raised the alarm after finding evidence of online political manipulation in countries such as Honduras and Azerbaijan before she was fired.


Australia wants Facebook to seek parental consent for kids

Social media platforms would be required to take all reasonable steps to verify their users’ ages. (File/AFP)
Updated 25 October 2021

  • Australia plans to crack down on online advertisers targeting children by making social media platforms seek parental consent for users younger than 16 years old

CANBERRA: Australia plans to crack down on online advertisers targeting children by making social media platforms seek parental consent before users younger than 16 can join, with fines of up to 10 million Australian dollars ($7.5 million) for noncompliance, under a draft law released Monday.
The landmark legislation would protect Australians online and ensure that Australia’s privacy laws are appropriate in the digital age, a government statement said.
Social media platforms would be required to take all reasonable steps to verify their users’ ages under a binding code for social media services, data brokers and other large online platforms operating in Australia.
The platforms would also have to give primary consideration to the best interests of children when handling their personal information, the draft legislation states.
The code would also require platforms to obtain parental consent for users under the age of 16.
The proposed legal changes come after former Facebook product manager Frances Haugen this month asserted that whenever there was a conflict between the public good and what benefited the company, the social media giant would choose its own interests.
Assistant Minister to the Prime Minister for Mental Health and Suicide Prevention David Coleman said the new code would lead the world in protecting children from social media companies.
“In Australia, even before the COVID-19 pandemic, there was a consistent increase in signs of distress and mental ill health among young people. While the reasons for this are varied and complex, we know that social media is part of the problem,” Coleman said in a statement.
Facebook regional director of public policy Mia Garlick said her platform had been calling for Australia’s privacy laws to evolve with new technology.
“We have supported the development of international codes around young people’s data, like the UK Age Appropriate Design Code,” Garlick said in a statement, referring to British legislation introduced this year that requires platforms to verify users’ ages if content risks the moral, physical or mental well-being of children.
“We’re reviewing the draft bill and discussion paper released today, and look forward to working with the Australian government on this further,” she added.
Australia has been a prominent voice in calling for international regulation of the Internet.
It passed laws this year that oblige Google and Facebook to pay for journalism. Australia also defied the tech companies by creating a law that could imprison social media executives if their platforms stream violent images.


Facebook’s language gaps weaken screening of hate, terrorism

Facebook reported internally it had erred in nearly half of all Arabic language takedown requests submitted for appeal. (File/AFP)
Updated 25 October 2021

  • Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects
  • In some of the world’s most volatile regions, terrorist content and hate speech proliferate because Facebook remains short on moderators who speak local languages and understand cultural contexts

DUBAI: As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.
Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.
For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.
Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.
Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.
In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.
“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”
This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions were reviewed by a consortium of news organizations, including The Associated Press.
In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.
But when it comes to Arabic content moderation, the company said, “We still have more work to do. ... We conduct research to better understand this complexity and identify how we can improve.”
In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.
The Rohingya’s persecution, which the US has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.
Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.
In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.
In one document, the company notes that users linked to Modi’s party had created multiple accounts to supercharge the spread of Islamophobic content. Much of this content was “never flagged or actioned,” the research found, because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.
Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts.
Moroccan colloquial Arabic, for instance, includes French and Berber words and is spoken with short vowels. Egyptian Arabic, on the other hand, includes some Turkish from the Ottoman conquest. Other dialects are closer to the “official” version found in the Qur’an. In some cases, these dialects are not mutually comprehensible, and there is no standard way of transcribing colloquial Arabic.
Facebook first developed a massive following in the Middle East during the 2011 Arab Spring uprisings, and users credited the platform with providing a rare opportunity for free expression and a critical source of news in a region where autocratic governments exert tight controls over both. But in recent years, that reputation has changed.
Scores of Palestinian journalists and activists have had their accounts deleted. Archives of the Syrian civil war have disappeared. And a vast vocabulary of everyday words has become off-limits to speakers of Arabic, Facebook’s third-most common language, with millions of users worldwide.
For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.
Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.
He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.
Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.
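
As a rough illustration of why the trick defeats simple keyword matching, consider the Python sketch below. The banned term and the letter mapping are invented examples for this sketch, not Facebook’s actual blacklists or moderation code: a filter that looks for verbatim strings misses text in which dotted letters have been swapped for visually similar dotless code points.

    # Invented example; not Facebook's real blacklist or moderation code.
    BANNED = {"شهيد"}  # an everyday Arabic word ("martyr") of the kind users avoid

    def naive_flag(post):
        """Flag a post only if a banned term appears verbatim."""
        return any(term in post for term in BANNED)

    # Swap dotted letters for look-alike dotless code points (hypothetical mapping).
    DOTLESS = str.maketrans({"ب": "ٮ", "ت": "ٮ", "ث": "ٮ", "ن": "ں", "ي": "ى", "ش": "س"})

    evasive = "شهيد".translate(DOTLESS)  # nearly identical to a human reader
    print(naive_flag("شهيد"))   # True: the verbatim spelling is caught
    print(naive_flag(evasive))  # False: the dotless variant slips through

The altered post still reads naturally to a human while falling outside anything an exact-match censor knows to look for, which is what makes the old script effective against automated screening.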
But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.
Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the US government equivalent — are grounds for a takedown.
“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”
The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show, resulting in what Facebook employees describe in the documents as widespread perceptions of censorship.
“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.
In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”
“We know our systems are not perfect,” it added.
The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.
Former Facebook employees also say that various governments exert pressure on the company, threatening regulation and fines. Israel, a lucrative source of advertising revenue for Facebook, is the only country in the Mideast where Facebook operates a national office. Its public policy director previously advised former right-wing Prime Minister Benjamin Netanyahu.
Israeli security agencies and watchdogs monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.
“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017. “That forces the system to make mistakes in Israel’s favor. Nowhere else in the region had such a deep understanding of how Facebook works.”
Facebook said in a statement that it fields takedown requests from governments no differently from those from rights organizations or community members, although it may restrict access to content based on local laws.
“Any suggestion that we remove content solely under pressure from the Israeli government is completely inaccurate,” it said.
Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident content for removal.
Raed, a former reporter at the Aleppo Media Center, a group of antigovernment activists and citizen journalists in Syria, said Facebook erased most of his documentation of Syrian government shelling on neighborhoods and hospitals, citing graphic content.
“Facebook always tells us we break the rules, but no one tells us what the rules are,” he added, giving only his first name for fear of reprisals.
In Afghanistan, many users literally cannot understand Facebook’s rules. According to an internal report in January, Facebook did not translate the site’s hate speech and misinformation pages into Dari and Pashto, the two most common languages in Afghanistan, where English is not widely understood.
When Afghan users try to flag posts as hate speech, the drop-down menus appear only in English. So does the Community Standards page. The site also doesn’t have a bank of hate speech terms, slurs and code words in Afghanistan used to moderate Dari and Pashto content, as is typical elsewhere. Without this local word bank, Facebook can’t build the automated filters that catch the worst violations in the country.
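
How much the automated filters depend on that local word bank can be sketched in a few lines of Python. Everything below is hypothetical; the language labels and placeholder entries are invented for illustration. The point is structural: a bank-driven filter with no bank for a language can never flag anything in that language.

    # Hypothetical term banks; not Facebook's actual lists.
    WORD_BANKS = {
        "english": {"exampleslur1", "exampleslur2"},  # placeholder entries only
        # no "dari" or "pashto" bank exists, per the internal report
    }

    def auto_flag(post, lang):
        """Bank-driven filter: an absent bank means nothing can ever match."""
        bank = WORD_BANKS.get(lang, set())
        return bool(bank & set(post.lower().split()))

    print(auto_flag("this contains exampleslur1", "english"))  # True
    print(auto_flag("abusive post written in Dari", "dari"))   # False: empty bank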
When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.
In much of the Arab world, the opposite is true — the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled human moderators, in over their heads, tend to passively field takedown requests instead of screening proactively.
Sophie Zhang, a former Facebook employee-turned-whistleblower who worked at the company for nearly three years before being fired last year, said contractors in Facebook’s Ireland office complained to her they had to depend on Google Translate because the company did not assign them content based on what languages they knew.
Facebook outsources most content moderation to giant companies that enlist workers far afield, from Casablanca, Morocco, to Essen, Germany. The firms don’t sponsor work visas for the Arabic teams, limiting the pool to local hires in precarious conditions — mostly Moroccans who seem to have overstated their linguistic capabilities. They often get lost in the translation of Arabic’s 30-odd dialects, flagging inoffensive Arabic posts as terrorist content 77 percent of the time, one document said.
“These reps should not be fielding content from non-Maghreb region, however right now it is commonplace,” another document reads, referring to the region of North Africa that includes Morocco. The file goes on to say that the Casablanca office falsely claimed in a survey it could handle “every dialect” of Arabic. But in one case, reviewers incorrectly flagged a set of Egyptian dialect content 90 percent of the time, a report said.
Iraq ranks highest in the region for its reported volume of hate speech on Facebook. But among reviewers, knowledge of Iraqi dialect is “close to non-existent,” one document said.
“Journalists are trying to expose human rights abuses, but we just get banned,” said one Baghdad-based press freedom activist, who spoke on condition of anonymity for fear of reprisals. “We understand Facebook tries to limit the influence of militias, but it’s not working.”
Linguists described Facebook’s system as flawed for a region with a vast diversity of colloquial dialects that Arabic speakers transcribe in different ways.
“The stereotype that Arabic is one entity is a major problem,” said Enam Al-Wer, professor of Arabic linguistics at the University of Essex, citing the language’s “huge variations” not only between countries but class, gender, religion and ethnicity.
Despite these problems, moderators are on the front lines of what makes Facebook a powerful arbiter of political expression in a tumultuous region.
Although the documents from Haugen predate this year’s Gaza war, episodes from that 11-day conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.
Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information for many users. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.
“This has restrained me and prevented me from feeling free to publish what I want for fear of losing my account,” said Soliman Hijjy, a Gaza-based journalist whose aerials of the Mediterranean Sea garnered tens of thousands more views than his images of Israeli bombs — a common phenomenon when photos are flagged for violating community standards.
During the war, Palestinian advocates submitted hundreds of complaints to Facebook, often leading the company to concede error and reinstate posts and accounts.
In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.
“The repetition of false positives creates a huge drain of resources,” it said.
In announcing the reversal of one such Palestinian post removal last month, Facebook’s semi-independent oversight board urged an impartial investigation into the company’s Arabic and Hebrew content moderation. It called for improvement in its broad terrorism blacklist to “increase understanding of the exceptions for neutral discussion, condemnation and news reporting,” according to the board’s policy advisory statement.
Facebook’s internal documents also stressed the need to “enhance” algorithms, enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.
“With the size of the Arabic user base and potential severity of offline harm … it is surely of the highest importance to put more resources to the task of improving Arabic systems,” the report said.
But the company also lamented that “there is not one clear mitigation strategy.”
Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.
“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”