JAKARTA: The student suspected of detonating blasts that injured dozens of people at a mosque in Indonesia’s capital last week was motivated by vengeance and inspired by attacks carried out by white supremacists and neo-Nazis, police said on Tuesday.
The blasts, which struck a mosque at a school complex in Jakarta’s Kelapa Gading area during Friday prayers, injured 96 people.
Police said on Tuesday that Indonesian authorities had found seven homemade explosives in and around the mosque, some of them in Coca-Cola cans.
Some bombs were triggered via remote control and some via fuse, and three did not explode, they said. Police said they also found a toy firearm at the scene with inscriptions, one of which read “vengeance.”
Last week, police said the suspect was a 17-year-old student at an adjacent school. Jakarta police chief Asep Edi Suheri did not name the suspect on Tuesday, referring to him as a “child facing the law.”
The alleged perpetrator was a lone wolf motivated by vengeance and loneliness, said Mayndra Eka Wardhana, an official at the Indonesian police anti-terror unit. He said the suspect had been inspired by attacks carried out by neo-Nazi and white supremacist figures and had joined a social media community glorifying grisly violence, but that he did not appear to subscribe to a specific ideology or be part of any militant network. Police cited the perpetrators of shootings such as the 2019 attack at mosques in Christchurch, New Zealand, and the 1999 shootings at Columbine High School in the United States, as possible inspirations for the blasts.
“That inspired the alleged perpetrator,” Mayndra said. “He felt there was no place to share his complaints, neither with his family nor school.” The suspect, who sustained a head injury at the time of the explosions, is recovering after undergoing surgery.
UNICEF warns of rise in sexual deepfakes of children
- The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images
UNITED NATIONS, United States: The UN children’s agency on Wednesday highlighted a rapid rise in the use of artificial intelligence to create sexually explicit images of children, warning of real harm to young victims caused by the deepfakes.
According to a UNICEF-led investigation in 11 countries, at least 1.2 million children said their images were manipulated into sexually explicit deepfakes — in some countries at a rate equivalent to “one child in a typical classroom” of 25 students.
The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images.
“We must be clear. Sexualized images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF said in a statement.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The agency criticized AI developers for creating tools without proper safeguards.
“The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly,” UNICEF said.
Elon Musk’s AI chatbot Grok has been hit with bans and investigations in several countries for allowing users to create and share sexualized pictures of women and children using simple text prompts.
UNICEF’s study found that children are increasingly aware of deepfakes.
“In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures,” the agency said.
UNICEF urged “robust guardrails” for AI chatbots, as well as moves by digital companies to prevent the circulation of deepfakes, not just the removal of offending images after they have already been shared.
Legislation is also needed across all countries to expand definitions of child sexual abuse material to include AI-generated imagery, it said.
The countries included in the study were Armenia, Brazil, Colombia, Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.