UK’s illegal Albanian migrants reportedly paying up to $3,700 to fake guarantors

Albanians made up around one-third of the 47,755 people who arrived in the UK on small boats in 2022. (Getty Images)
Updated 26 May 2023

  • Guarantors on TikTok offering to remove electronic tags designed to prevent migrants from fleeing

LONDON: Albanians entering the UK illegally on small boats are offering to pay up to $3,700 (£3,000) to fake guarantors to avoid being held at detention centers, the Telegraph reported on Thursday.

Guarantors were promoting on social media that they could provide the migrants with a UK address to get bail and escape detention.

On TikTok, guarantors were also offering to remove the Albanians’ electronic tags designed to prevent them from fleeing once released into the community, the Telegraph said.

The scam comes as the British Home Office tries to expedite the deportation of hundreds of Albanians who crossed the Channel last year. Albanians made up around one-third of the 47,755 people who arrived in the UK on small boats in 2022.

An Albanian interpreter in London, who works freelance for immigration solicitors, said many migrants were trying to get out of detention centers.

“They have got relatives who do not fulfil the criteria to become a guarantor, so the solution has been found inside the Albanian community,” the interpreter told the Telegraph.

“For a payment of up to £3,000, people who have a house are becoming guarantors. Every day, I see people who have no ties at all with the persons who have become guarantors. This is becoming a growing business.

“Courts are not asking at all what sort of relationship the person applying for bail (has) with the guarantor,” they added.

The National Crime Agency was also investigating whether lawyers were assisting people-smuggling groups in abusing modern slavery laws in order to seek asylum for individuals entering the UK. It estimated that “tens” of solicitors could be involved, the Telegraph reported.

Rob Richardson, head of the NCA’s modern slavery and human trafficking unit, said it appeared to be prevalent among Albanian organized crime gangs, where migrants were already being trained on how to make claims to avoid deportation.

“We’ve seen some examples where individuals have got scripts. They’ve been told exactly what to tell policemen to get picked up. And we have concerns about how that works,” he told The Guardian.
 


UNICEF warns of rise in sexual deepfakes of children


  • The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images

UNITED NATIONS, United States: The UN children’s agency on Wednesday highlighted a rapid rise in the use of artificial intelligence to create sexually explicit images of children, warning of real harm to young victims caused by the deepfakes.

According to a UNICEF-led investigation in 11 countries, at least 1.2 million children said their images were manipulated into sexually explicit deepfakes — in some countries at a rate equivalent to “one child in a typical classroom” of 25 students.

The findings underscored the use of “nudification” tools, which digitally alter or remove clothing to create sexualized images.

“We must be clear. Sexualized images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF said in a statement.

“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”

The agency criticized AI developers for creating tools without proper safeguards.

“The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly,” UNICEF said.

Elon Musk’s AI chatbot Grok has been hit with bans and investigations in several countries for allowing users to create and share sexualized pictures of women and children using simple text prompts.

UNICEF’s study found that children are increasingly aware of deepfakes.

“In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures,” the agency said.

UNICEF urged “robust guardrails” for AI chatbots, as well as moves by digital companies to prevent the circulation of deepfakes, not just the removal of offending images after they have already been shared.

Legislation is also needed across all countries to expand definitions of child sexual abuse material to include AI-generated imagery, it said.

The countries included in the study were Armenia, Brazil, Colombia, the Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.