Prince and Princess of Wales meet with families of dance class stabbing attack
- The royal couple spent 90 minutes meeting privately with the families of the victims
LONDON: Prince William and the Princess of Wales on Thursday carried out their first joint public engagement since the end of Kate’s chemotherapy by meeting the bereaved parents of victims of a stabbing rampage in the seaside town of Southport.
The royal couple spent 90 minutes meeting privately with the families of Bebe King, 6, Elsie Dot Stancombe, 7, and Alice da Silva Aguiar, 9, who were killed at a Taylor Swift-themed dance class on July 29. They also met the girls’ dance teacher.
The couple later met with emergency workers at a community center, and told them how much their efforts had helped the families of the victims.
“I can’t underestimate how grateful they all are for the support you provided on the day,” Kate said. “On behalf of them, thank you.”
William and Kate sat beside each other on a bench and listened to the emergency workers’ accounts of the day. Once the cameras left, Kate offered a hug to responders who were struggling to express their feelings.
“You’re all heroes,” William said. “Please make sure you look after yourselves, please take your time, don’t rush back to work.”
The Princess of Wales revealed in March that she was undergoing treatment for cancer, in a stunning announcement that followed weeks of speculation about her health and whereabouts.
She disclosed her condition in a video message after relentless social media speculation that began when she was hospitalized for unspecified abdominal surgery in January.
In a recent video, Kate said she had completed chemotherapy and planned to slowly return to public duties, “undertaking a few more public appearances” in the coming months.
But she acknowledged that the path to recovery would be long and she would “take each day as it comes.”
UNICEF warns of rise in sexual deepfakes of children
- The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images
UNITED NATIONS, United States: The UN children’s agency on Wednesday highlighted a rapid rise in the use of artificial intelligence to create sexually explicit images of children, warning of real harm to young victims caused by the deepfakes.
According to a UNICEF-led investigation in 11 countries, at least 1.2 million children said their images were manipulated into sexually explicit deepfakes — in some countries at a rate equivalent to “one child in a typical classroom” of 25 students.
The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images.
“We must be clear. Sexualized images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF said in a statement.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The agency criticized AI developers for creating tools without proper safeguards.
“The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly,” UNICEF said.
Elon Musk’s AI chatbot Grok has been hit with bans and investigations in several countries for allowing users to create and share sexualized pictures of women and children using simple text prompts.
UNICEF’s study found that children are increasingly aware of deepfakes.
“In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures,” the agency said.
UNICEF urged “robust guardrails” for AI chatbots, as well as moves by digital companies to prevent the circulation of deepfakes, not just the removal of offending images after they have already been shared.
Legislation is also needed across all countries to expand definitions of child sexual abuse material to include AI-generated imagery, it said.
The countries included in the study were Armenia, Brazil, Colombia, Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.