WEST PALM BEACH: The US military has carried out another lethal strike on alleged drug smugglers in the Caribbean Sea, Defense Secretary Pete Hegseth announced Saturday.
In a social media post, Hegseth said the vessel was operated by a US-designated terrorist organization but did not name the group. He said three people were killed in the strike.
It’s at least the 15th such strike carried out by the US military in the Caribbean or eastern Pacific since early September.
“This vessel — like EVERY OTHER — was known by our intelligence to be involved in illicit narcotics smuggling, was transiting along a known narco-trafficking route, and carrying narcotics,” Hegseth said in a posting on X.
The US military has now killed at least 64 people in the strikes.
President Donald Trump has justified the attacks as a necessary escalation to stem the flow of drugs into the United States. He has asserted the US is engaged in an “armed conflict” with drug cartels, relying on the same legal authority used by the Bush administration when it declared a war on terrorism after the Sept. 11, 2001, attacks.
The strikes come as the Trump administration has deployed an unusually large force of warships in the region.
Venezuelan President Nicolás Maduro has decried the military operations, as well as the US military buildup, as a thinly veiled effort by the US administration to oust him from power.
The Trump administration has yet to show evidence to support its claims about the boats that have been attacked, their connection to drug cartels, or even the identity of the people killed in the strikes.
US carries out new strike in Caribbean, killing 3 alleged drug smugglers
- Defense Secretary Pete Hegseth announced the latest strike in a social media posting late Saturday
- He said the vessel was operated by a US-designated terrorist organization but did not name which group was targeted
UNICEF warns of rise in sexual deepfakes of children
- The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images
UNITED NATIONS, United States: The UN children’s agency on Wednesday highlighted a rapid rise in the use of artificial intelligence to create sexually explicit images of children, warning of real harm to young victims caused by the deepfakes.
According to a UNICEF-led investigation in 11 countries, at least 1.2 million children said their images were manipulated into sexually explicit deepfakes — in some countries at a rate equivalent to “one child in a typical classroom” of 25 students.
The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images.
“We must be clear. Sexualized images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF said in a statement.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The agency criticized AI developers for creating tools without proper safeguards.
“The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly,” UNICEF said.
Elon Musk’s AI chatbot Grok has been hit with bans and investigations in several countries for allowing users to create and share sexualized pictures of women and children using simple text prompts.
UNICEF’s study found that children are increasingly aware of deepfakes.
“In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures,” the agency said.
UNICEF urged “robust guardrails” for AI chatbots, as well as moves by digital companies to prevent the circulation of deepfakes, not just the removal of offending images after they have already been shared.
Legislation is also needed across all countries to expand definitions of child sexual abuse material to include AI-generated imagery, it said.
The countries included in the study were Armenia, Brazil, Colombia, Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.