Google launches chatbot Bard in Arabic

Along with Arabic and English, Google has developed Bard to serve users in more than 40 languages such as Mandarin, German, Hindi, and Spanish. (AFP/File)
Updated 13 July 2023

  • The ChatGPT-like AI chatbot will allow Arabic users to expand their creativity, learning, and productivity, Google MENA director said

RIYADH and DUBAI: Google launched its latest generative artificial intelligence experiment, Bard in Arabic, on Thursday, after initially introducing it in English in May this year, allowing Arabic-speaking users to tap their creative capabilities and increase productivity.

Google is intentionally calling Bard an “AI experiment” — not a chatbot — allowing the company to explore a “new paradigm in computing,” said Najeeb Jarrar, regional director of marketing at Google MENA.

“We’re learning together how large language models can be helpful and how to minimize poor experiences,” he told Arab News.

The Arabic language consists of several dialects, making it a challenge for AI models. Bard, however, is based on Google’s most recent language model, PaLM2, which can understand information in multiple languages.

It is designed to recognize questions in over 16 Arabic dialects, including Egyptian Spoken Arabic and Saudi Arabian Spoken Arabic, and can reply to questions in Modern Standard Arabic, Jarrar said.

It also understands input that mixes languages, such as Arabic sentences interspersed with other languages, and its user interface supports right-to-left writing.

“I have been using Bard since its release in the Middle East, in the English language. My use for it was to summarize some videos and reports,” said Osamah Essam Eddin, a technical content creator.

He explained how he used both Bard and ChatGPT and compared the two. “I use Bard more for search or (to) lookup updates about a piece of information. It is excellent for anything related to searching such as searching for a specific brand, specific feature, and such,” he said.

Currently, Bard is only available for personal use. When asked about how businesses can use Bard, Jarrar said: “As we launch Bard in new languages including Arabic, our focus will primarily be on users’ experience and how they can benefit more from Bard.”

There is also no news regarding advertising and revenue models for Bard.

It is primarily designed to boost productivity through features like exporting Python code to Replit; sharing Bard chats with friends; and image search.

Google has already integrated products like Lens, Gmail, Docs and Colab into Bard with plans for “further integration,” Jarrar said.

“We are used to thinking of computing (as) narrowing the world’s existing information, and now it’s about applying the information and expanding it into new ways of creation and creativity,” said Jack Krawczyk, senior product director at Google and one of the leads at Bard, during a roundtable earlier this week.

Addressing privacy and misinformation concerns associated with AI, particularly generative AI chatbots, he said that Google is taking a “bold and responsible approach,” which means engaging with privacy regulators before launching.

Image search, for example, is currently only available in English because Google wants to “understand how this new form of creativity operates in a single language” so that it can build systems that essentially “maximize helpfulness and minimize harm” in other languages, Krawczyk said.

“A lot of people talk about the race that’s happening right now in AI and we believe there’s only one race — the race to get it right. And in that race to get it right, we’re taking this responsible approach,” he added.

Arabic was among the more than 40 languages in which Bard launched on Thursday, as the service also rolled out across Europe.


Disinformation the new enemy in disaster zones, says Red Cross

Updated 05 March 2026


  • “Harmful information and dehumanizing narratives” undermine humanitarian aid and put the lives of aid workers at risk
  • Between 2020 and 2024, disasters affected nearly 700 million people, displaced over 105 million, and killed more than 270,000 — doubling the number in need of humanitarian aid

GENEVA: The rise of disinformation is undermining humanitarian aid and putting lives at risk, while disasters are affecting ever more people, the Red Cross warned Thursday.

“Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements, and claimed over 270,000 lives,” the International Federation of Red Cross and Red Crescent Societies said.

The number of people needing humanitarian assistance more than doubled in the same timeframe, the IFRC said in its World Disasters Report 2026.

But the world’s largest humanitarian network said that “harmful information and dehumanizing narratives” were increasingly undermining trust and putting the lives of aid workers at risk.

“In polarized and politically-charged contexts, humanitarian principles such as neutrality and impartiality are increasingly misunderstood, misrepresented or deliberately attacked online,” it said.

The IFRC has more than 17 million volunteers in 191 countries.

“In every crisis I have witnessed, information is as essential as food, water and shelter,” said the Geneva-based federation’s secretary general Jagan Chapagain.

“But when information is false, misleading or deliberately manipulated, it can deepen fear, obstruct humanitarian access and cost lives.”

He said harmful information was not a new phenomenon, but it was now moving “with unprecedented speed and reach.”

Chapagain said digital platforms were proving “fertile ground for lies.”

The IFRC report said the challenge nowadays was no longer the availability of information but its reliability, noting that the production and spread of disinformation was easily amplified by artificial intelligence.

- ‘Life and death’ -

The report cited numerous recent examples of harmful information hampering crisis response.

During the 2024 floods in Valencia, false narratives online accused the Spanish Red Cross of diverting aid to migrants, which in turn fueled “xenophobic attacks on volunteers,” the IFRC said.

In South Sudan, rumors that humanitarian agencies were distributing poisoned food “caused people to avoid life-saving aid” and led to threats against Red Cross staff.

In Lebanon, false claims that volunteers were spreading Covid-19, favoring certain groups with aid and providing unsafe cholera vaccines eroded trust and endangered vulnerable communities, the IFRC said.

And in Bangladesh, during political unrest, volunteers faced “widespread accusations of inaction and political alignment,” leading to harassment and reputational damage, it added.

Similar events were registered by the IFRC in Sudan, Myanmar, Peru, the United States, New Zealand, Canada, Kenya and Bulgaria.

The report underlined that around 94 percent of disasters were handled by national authorities and local communities, without international interventions.

“However, while volunteers, local leaders and community media are often the most trusted messengers, they operate in increasingly hostile and polarized information environments,” the IFRC said.

The federation called on governments, tech firms, humanitarian agencies and local actors to recognize that reliable information “is a matter of life and death.”

“Without trust, people are less likely to prepare, seek help or follow life-saving guidance; with it, communities act together, absorb shocks and recover more effectively,” said Chapagain.

The organization urged technology platforms to prioritize authoritative information from trusted sources in crisis contexts, and transparently moderate harmful content.

And it said humanitarian agencies needed to make preparing to deal with disinformation “a core function” of their operations, with trained teams and analytics.