Handling of UK Afghan data breach ‘alarming’: MP

Updated 21 October 2025

  • Kit Malthouse: Incident dealt with via ‘a few unrecorded meetings and a handshake’
  • Thousands of Afghans evacuated after personal details published online, putting them at risk of Taliban reprisals

LONDON: A UK Ministry of Defense data breach that jeopardized the security of thousands of Afghans was dealt with via “a few unrecorded meetings and a handshake,” an MP has said.
Kit Malthouse described the handling of the incident — which saw the details of Afghans who worked with British forces made available online, prompting a massive evacuation program amid fears that those named in the leak could be targeted by the Taliban — as “alarming.”
The breach, involving 33,000 lines of data, and the subsequent evacuation became public knowledge only two years later, after a superinjunction imposed by the government was lifted by a court.
The UK Information Commissioner’s Office, which was made aware of the breach, chose not to launch an investigation at the time.
It has now emerged that the ICO also failed to keep any notes on the decision not to investigate, saying this was because the case involved classified information.
John Edwards, the UK information commissioner, told the science, innovation and technology committee of the House of Commons on Tuesday that the ICO had relied on the “honesty” of the MoD when choosing not to investigate.
Malthouse, a member of the committee, responded: “What you’ve broadly said to us is that it was dealt with a few unrecorded meetings and a handshake. ‘See ya,’ nothing to see here.
“It seems extraordinary to me given the severity and the impact of it ... The picture you’ve painted of the way the ICO handled it seems alarming.”
MP Lauren Sullivan told Edwards: “It sounds like your method of investigation relies a lot on the honesty of the person you’re investigating.”
Edwards replied: “We didn’t investigate. Yes we were relying on honesty. Had we later found we were misled, we could’ve investigated.”
MP Chi Onwurah, the committee chair, said: “When I saw some of the details of the Ministry of Defense data breach, I was astounded that that could be part of government data practice — (a) 33,000-line Excel file, with top-secret information, bandied about like confetti. This is not an individual failure ... It was an institutional failing.”
Edwards said the ICO, which launched an investigation into a smaller MoD breach involving 245 Afghans, lacked sufficient staff trained to handle top-secret information, but added that this was irrelevant because the regulator did not open an investigation in this case.
“We’re able to investigate top-secret matters. We chose not to because it would’ve tied up resources which would’ve been better used elsewhere,” he said. “We were confident that the ministry was taking it seriously.”


UNICEF warns of rise in sexual deepfakes of children


  • The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images

UNITED NATIONS, United States: The UN children’s agency on Wednesday highlighted a rapid rise in the use of artificial intelligence to create sexually explicit images of children, warning of real harm to young victims caused by the deepfakes.
According to a UNICEF-led investigation in 11 countries, at least 1.2 million children said their images were manipulated into sexually explicit deepfakes — in some countries at a rate equivalent to “one child in a typical classroom” of 25 students.
The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images.
“We must be clear. Sexualized images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF said in a statement.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The agency criticized AI developers for creating tools without proper safeguards.
“The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly,” UNICEF said.
Elon Musk’s AI chatbot Grok has been hit with bans and investigations in several countries for allowing users to create and share sexualized pictures of women and children using simple text prompts.
UNICEF’s study found that children are increasingly aware of deepfakes.
“In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures,” the agency said.
UNICEF urged “robust guardrails” for AI chatbots, as well as moves by digital companies to prevent the circulation of deepfakes, not just the removal of offending images after they have already been shared.
Legislation is also needed across all countries to expand definitions of child sexual abuse material to include AI-generated imagery, it said.
The countries included in the study were Armenia, Brazil, Colombia, Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.