Indonesia parliament approves extradition treaty with Singapore
- People who commit any of 31 types of crime will be liable to extradition
- The extradition treaty will apply to offenses committed up to 18 years ago

JAKARTA: Indonesia’s parliament ratified a bilateral extradition agreement with Singapore on Thursday, a move Jakarta expects will help authorities bring to justice people accused of stashing billions of dollars of state money in the city-state.
The absence of an extradition treaty has been a sensitive issue for Indonesia, which has complained about the difficulty of going after some fugitives accused of embezzling large sums during the 1997-1998 Asian financial crisis.
Under the extradition treaty, signed by the countries’ leaders in January, people who commit any of 31 types of crime will be liable to extradition, and the treaty will apply to offenses committed up to 18 years ago, Indonesia has said.
The agreement would also mean that people would not be able to escape justice by changing their citizenship, it said.
Speaking after parliament’s approval, Indonesia’s Law and Human Rights Minister Yasonna Laoly said the law “would give legal certainty for the two countries in the process of extraditing fugitives.”
Singapore has said the agreement “will also be helpful to Indonesia’s own efforts to prevent suspected criminals from fleeing overseas, and for them to be apprehended in Indonesia.”
Indonesia has set up a so-called “BLBI” task force to recover $8 billion of bailout funds that were given to bank owners and borrowers after the Asian financial crisis of the late 1990s and never repaid.
Indonesia has long sought to bring such a treaty into force.
In 2007, Indonesian President Susilo Bambang Yudhoyono and Singaporean Prime Minister Lee Hsien Loong oversaw the signing of an extradition treaty and defense cooperation agreement, but it was never ratified by Indonesia’s parliament.
UNICEF warns of rise in sexual deepfakes of children
- The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images
UNITED NATIONS, United States: The UN children’s agency on Wednesday highlighted a rapid rise in the use of artificial intelligence to create sexually explicit images of children, warning of real harm to young victims caused by the deepfakes.
According to a UNICEF-led investigation in 11 countries, at least 1.2 million children said their images were manipulated into sexually explicit deepfakes — in some countries at a rate equivalent to “one child in a typical classroom” of 25 students.
The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images.
“We must be clear. Sexualized images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF said in a statement.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The agency criticized AI developers for creating tools without proper safeguards.
“The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly,” UNICEF said.
Elon Musk’s AI chatbot Grok has been hit with bans and investigations in several countries for allowing users to create and share sexualized pictures of women and children using simple text prompts.
UNICEF’s study found that children are increasingly aware of deepfakes.
“In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures,” the agency said.
UNICEF urged “robust guardrails” for AI chatbots, as well as moves by digital companies to prevent the circulation of deepfakes, not just the removal of offending images after they have already been shared.
Legislation is also needed across all countries to expand definitions of child sexual abuse material to include AI-generated imagery, it said.
The countries included in the study were Armenia, Brazil, Colombia, Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.