Ex-porn actor German spy guilty of trying to share state secrets

This file photo taken on September 05, 2017 shows German defendant Roque M sitting in the courtroom on the start of his trial in Düsseldorf, western Germany, where he said he pretended to be a jihadist planning an attack in online chatrooms because he was bored, as he went on trial for attempted treason. (AFP)
Updated 20 September 2017

BERLIN: A former German intelligence agent and ex-gay porn actor was given a one-year suspended sentence on Tuesday for attempting to share state secrets while posing as a jihadist online.
The 52-year-old, named as Roque M., made headlines when he was arrested last November in what initially appeared to be a case of an Islamist mole at work in Germany’s domestic spy agency.
But he was freed in July after prosecutors dropped most of the charges, finding no evidence of an attack plot or ties to Islamist groups.
He told the court that he pretended to be a jihadist planning an attack in online chatrooms because he was bored.
“I never met with any Islamists. I would never do that. The whole thing was like a game,” the suspect said at the start of his trial in the western city of Düsseldorf.
A former banker and a father-of-four, Roque M. told the court that he monitored the Islamist scene as part of his job for the Office for the Protection of the Constitution (BfV), a role he described as “a lot of fun.”
But he said he grew bored on weekends when he was at home watching his disabled son, and immersed himself in the online world of Islamists, feigning to be one himself.
It was “an escape from reality,” he said in court.
He even went so far as to arrange a meeting with a suspected Islamist at a gym, although Roque M. insisted he never had any intention of going.
He was caught after he offered to share classified information about BfV operations with someone who turned out to be a colleague working undercover.
The case initially sparked outrage, with Germany’s domestic spy agency fending off calls for a complete security overhaul after allowing an “Islamist” who had passed multiple screenings to infiltrate its ranks.
The intelligence agent’s colorful past as a gay porn actor also enthralled the public.
But as no evidence emerged of an actual Islamist plot, prosecutors left Roque M. facing the sole charge of attempting to share state secrets.


UNICEF warns of rise in sexual deepfakes of children


  • The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images

UNITED NATIONS, United States: The UN children’s agency on Wednesday highlighted a rapid rise in the use of artificial intelligence to create sexually explicit images of children, warning of real harm to young victims caused by the deepfakes.
According to a UNICEF-led investigation in 11 countries, at least 1.2 million children said their images were manipulated into sexually explicit deepfakes — in some countries at a rate equivalent to “one child in a typical classroom” of 25 students.
“We must be clear. Sexualized images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF said in a statement.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The agency criticized AI developers for creating tools without proper safeguards.
“The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly,” UNICEF said.
Elon Musk’s AI chatbot Grok has been hit with bans and investigations in several countries for allowing users to create and share sexualized pictures of women and children using simple text prompts.
UNICEF’s study found that children are increasingly aware of deepfakes.
“In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures,” the agency said.
UNICEF urged “robust guardrails” for AI chatbots, as well as moves by digital companies to prevent the circulation of deepfakes, not just the removal of offending images after they have already been shared.
Legislation is also needed across all countries to expand definitions of child sexual abuse material to include AI-generated imagery, it said.
The countries included in the study were Armenia, Brazil, Colombia, Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.