Meta releases beefed-up AI models, eyes integration into its apps

Updated 19 April 2024

  • AI model Llama 3 takes a step toward human-level intelligence, Meta claims
  • Company also announced new AI Assistant integration into its major social media apps

SAN FRANCISCO: Meta on Thursday introduced an improved AI assistant built on new versions of its open-source Llama large language model.
Meta AI is smarter and faster due to advances in the publicly available Llama 3, the tech titan said in a blog post.
“The bottom line is we believe Meta AI is now the most intelligent AI assistant that you can freely use,” Meta co-founder and chief executive Mark Zuckerberg said in a video on Instagram.
Being open source means that developers outside of Meta are free to customize Llama 3 as they wish and the company may then incorporate those improvements and insights in an updated version.
“We’re excited about the potential that generative AI technology can have for people who use Meta products and for the broader ecosystem,” Meta said.
“We also want to make sure we’re developing and releasing this technology in a way that anticipates and works to reduce risk.”
That effort includes incorporating protections in the way Meta designs and releases Llama models and being cautious when it adds generative AI features to Facebook, Instagram, WhatsApp, and Messenger, according to Meta.
“We’re also making Meta AI much easier to use across our apps. We built it into the search box right at the top of WhatsApp, Facebook, and Instagram messenger, so any time you have a question, you can just ask it right there,” said Zuckerberg in the video.
AI models, Meta’s included, have been known to occasionally go off the rails, giving inaccurate or bizarre responses in episodes referred to as “hallucinations.”
Examples shared on social media included Meta AI claiming to have a child in the New York City school system during an online forum conversation.

Meta AI has been consistently updated and improved since its initial release last year, according to the company.
“Meta’s slower approach to building its AI has put the company behind in terms of consumer awareness and usage, but it still has time to catch up,” said Sonata Insights chief analyst Debra Aho Williamson.
“Its social media apps represent a massive user base that it can use to test AI experiences.”
By weaving AI into its family of apps, Meta will quickly get features powered by the technology to billions of people and benefit from seeing what users do with it.
Meta cited the example of refining the way its AI answers prompts regarding political or social issues to summarize relevant points about the topic instead of offering a single point of view.
Llama 3 has been tuned to better discern whether prompts are innocuous or out-of-bounds, according to Meta.
“Large language models tend to overgeneralize, and we don’t intend for it to refuse to answer prompts like ‘How do I kill a computer program?’ even though we don’t want it to respond to prompts like ‘How do I kill my neighbor?’,” Meta explained.
Meta said it lets users know when they are interacting with AI on its platform and puts visible markers on photorealistic images that were in fact generated by AI.
Beginning in May, Meta will start labeling video, audio, and images “Made with AI” when it detects or is told content is generated by the technology.
Llama 3, for now, works only in English, but in the coming months Meta will release more capable models able to converse in multiple languages, the company said.


Disinformation the new enemy in disaster zones, says Red Cross

Updated 05 March 2026

  • “Harmful information and dehumanizing narratives” undermine humanitarian aid and put the lives of aid workers at risk
  • Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements and claimed over 270,000 lives, while the number of people in need of humanitarian aid more than doubled

GENEVA: The rise of disinformation is undermining humanitarian aid and putting lives at risk, while disasters are affecting ever more people, the Red Cross warned Thursday.
“Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements, and claimed over 270,000 lives,” the International Federation of Red Cross and Red Crescent Societies said.
The number of people needing humanitarian assistance more than doubled in the same timeframe, the IFRC said in its World Disasters Report 2026.
But the world’s largest humanitarian network said that “harmful information and dehumanizing narratives” were increasingly undermining trust, putting the lives of aid workers at risk.
“In polarized and politically-charged contexts, humanitarian principles such as neutrality and impartiality are increasingly misunderstood, misrepresented or deliberately attacked online,” it said.
The IFRC has more than 17 million volunteers across 191 countries.
“In every crisis I have witnessed, information is as essential as food, water and shelter,” said the Geneva-based federation’s secretary general Jagan Chapagain.
“But when information is false, misleading or deliberately manipulated, it can deepen fear, obstruct humanitarian access and cost lives.”
He said harmful information was not a new phenomenon, but it was now moving “with unprecedented speed and reach.”
Chapagain said digital platforms were proving “fertile ground for lies.”
The IFRC report said the challenge nowadays was no longer about the availability of information but its reliability, noting that the production and spread of disinformation was easily amplified by artificial intelligence.

- ‘Life and death’ -

The report cited numerous recent examples of harmful information hampering crisis response.
During the 2024 floods in Valencia, false narratives online accused the Spanish Red Cross of diverting aid to migrants, which in turn fueled “xenophobic attacks on volunteers,” the IFRC said.
In South Sudan, rumors that humanitarian agencies were distributing poisoned food “caused people to avoid life-saving aid” and led to threats against Red Cross staff.
In Lebanon, false claims that volunteers were spreading Covid-19, favoring certain groups with aid and providing unsafe cholera vaccines eroded trust and endangered vulnerable communities, the IFRC said.
And in Bangladesh, during political unrest, volunteers faced “widespread accusations of inaction and political alignment,” leading to harassment and reputational damage, it added.
Similar events were registered by the IFRC in Sudan, Myanmar, Peru, the United States, New Zealand, Canada, Kenya and Bulgaria.
The report underlined that around 94 percent of disasters were handled by national authorities and local communities, without international interventions.
“However, while volunteers, local leaders and community media are often the most trusted messengers, they operate in increasingly hostile and polarized information environments,” the IFRC said.
The federation called on governments, tech firms, humanitarian agencies and local actors to recognize that reliable information “is a matter of life and death.”
“Without trust, people are less likely to prepare, seek help or follow life-saving guidance; with it, communities act together, absorb shocks and recover more effectively,” said Chapagain.
The organization urged technology platforms to prioritize authoritative information from trusted sources in crisis contexts, and transparently moderate harmful content.
And it said humanitarian agencies needed to make preparing to deal with disinformation “a core function” of their operations, with trained teams and analytics.