LONDON: British police said on Tuesday they had charged two men and a woman with identity document offenses after the BBC reported the group were accused of spying for Russia.
The three are Bulgarian nationals alleged to have been working for Russian security services, the BBC said, adding that they had been held as part of a major national security investigation.
London’s Metropolitan Police confirmed five people had been arrested by counter-terrorism officers in February under the Official Secrets Act and three had since been charged with possession of false identity documents with improper intention.
A police statement named them as Orlin Roussev, 45, Biser Dzambazov, 42, and Katrin Ivanova, 31. They appeared at the Old Bailey in London in July and were remanded in custody ahead of a further hearing.
The police declined to comment on whether they were suspected of being Russian spies.
Britain has been sharpening its focus on external security threats, and last month it passed a new national security law aimed at deterring espionage and foreign interference through updated criminal offenses and investigative powers.
The government labeled Russia “the most acute threat” to its security when the law was passed.
Police have charged three Russians, whom they say are GRU military intelligence officers, over the 2018 attempt to murder former double agent Sergei Skripal with the military-grade nerve agent Novichok. Two were charged in 2018 and the third in 2021.
Last year, Britain’s domestic spy chief said more than 400 suspected Russian spies had been expelled from Europe.
Britain has also been one of the strongest supporters of Ukraine since the Russian invasion last year and has imposed a range of sanctions on Russian officials and oligarchs.
Three suspected Russian spies arrested in Britain, BBC reports
- They were held in February under the Official Secrets Act by counter-terrorism detectives at London’s Metropolitan Police
UNICEF warns of rise in sexual deepfakes of children
- The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images
UNITED NATIONS, United States: The UN children’s agency on Wednesday highlighted a rapid rise in the use of artificial intelligence to create sexually explicit images of children, warning of real harm to young victims caused by the deepfakes.
According to a UNICEF-led investigation in 11 countries, at least 1.2 million children said their images were manipulated into sexually explicit deepfakes — in some countries at a rate equivalent to “one child in a typical classroom” of 25 students.
The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images.
“We must be clear. Sexualized images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF said in a statement.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The agency criticized AI developers for creating tools without proper safeguards.
“The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly,” UNICEF said.
Elon Musk’s AI chatbot Grok has been hit with bans and investigations in several countries for allowing users to create and share sexualized pictures of women and children using simple text prompts.
UNICEF’s study found that children are increasingly aware of deepfakes.
“In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures,” the agency said.
UNICEF urged “robust guardrails” for AI chatbots, as well as moves by digital companies to prevent the circulation of deepfakes, not just the removal of offending images after they have already been shared.
Legislation is also needed across all countries to expand definitions of child sexual abuse material to include AI-generated imagery, it said.
The countries included in the study were Armenia, Brazil, Colombia, Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.