UK to trial fast-track asylum process for Iraqis and Iranians

Iraqi migrants sit near a fire waiting to cross into Britain at a makeshift migrants camp in Dunkirk, northern France. (File/AFP)
Updated 16 May 2023

  • More than 20,000 people to be given questionnaires followed by shorter interviews to determine their right to stay in the country

LONDON: The UK Home Office is to fast-track the asylum applications of more than 20,000 people from Iraq and Iran in an effort to fulfill a pledge by Prime Minister Rishi Sunak to clear a substantial backlog of more than 90,000 claims.

A leaked document, seen by The Guardian, suggests asylum-seekers from the two countries will be asked to complete detailed questionnaires, in English, and return them within 30 days, before appearing for short, in-person interviews with officials. Failure to comply could result in an application being turned down.

The UK had a backlog of 92,601 asylum applications at the end of June 2022. At the end of the year, 20,607 Iraqi and Iranian cases from this backlog remained outstanding, out of 132,000 applications in total. For Iranian applicants, the approval rate is about 80 percent, while 54 percent of Iraqi claims are accepted.

The Home Office described the move as “a new phase in the program to clear the legacy (application) backlog” by grouping applicants into “cohorts.”

It added: “As part of this approach, the first cohorts we will prioritize are legacy claimants from Iran and Iraq, as these are the two highest nationality cohorts of outstanding claims.

“Iranian and Iraqi legacy claimants who have not yet been substantively interviewed will begin receiving questionnaires, which will be tailored to their circumstances, over the next few weeks, helping to reduce the duration of any subsequent interviews.

“Once the necessary information is received, we anticipate that targeted or shorter interviews will be approximately 30 minutes to two hours in length.”

Under a similar scheme launched in February, 12,000 asylum-seekers from Afghanistan, Eritrea, Syria, Yemen and Libya were asked to complete 11-page questionnaires. However, officials said that many of the forms were filled in incorrectly, necessitating lengthy follow-up interviews. A report in The Times newspaper put the proportion of correctly completed forms as low as 10 percent.

Immigration lawyer Colin Yeo told The Guardian: “It looks like good news but premature if they haven’t sorted out the easy cases already.

“It is not clear how this is going to help with more complex cases. Most asylum interviews are about two to three hours anyway, so there’s not much of a time saving if they’re at the upper end of their time estimate.”

Sile Reynolds, the head of advocacy at campaign group Freedom From Torture, said: “We remain concerned that rolling out this policy without further safeguards, including access to legal representation, an interpreter or a full face-to-face interview, could result in survivors of torture being refused protection and returned to their home countries to face persecution.”

The Home Office said: “We need to make sure asylum-seekers do not spend months or years living in the UK, at vast expense to the taxpayer, waiting for a decision. This questionnaire will help us clear the backlog of historic asylum cases by speeding up decisions and allowing case workers to carry out shorter, more focused interviews.

“Individuals who receive one, like all asylum-seekers, are subject to mandatory security checks against their claimed identity, including immigration and criminality checks on UK databases, which is critical to the delivery of a safe and secure immigration system.”


Anthropic CEO says AI company ‘cannot in good conscience accede’ to Pentagon’s demands


WASHINGTON: Anthropic CEO Dario Amodei said Thursday the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow wider use of its technology.

The company said in a statement that it’s not walking away from negotiation but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

The Pentagon’s top spokesman has reiterated that the military wants to use Anthropic’s artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands.

Sean Parnell said Thursday on social media that the Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”

Anthropic’s policies prevent its models, such as its chatbot Claude, from being used for those purposes. It is the last of its peers — the Pentagon also has contracts with Google, OpenAI and Elon Musk’s xAI — not to supply its technology to a new US military internal network.

Parnell said the Pentagon wants to “use Anthropic’s model for all lawful purposes” but didn’t offer details on what that entailed. He said opening up use of the technology would prevent the company from “jeopardizing critical military operations.”

“We will not let ANY company dictate the terms regarding how we make operational decisions,” he said.

During a meeting on Tuesday between Defense Secretary Pete Hegseth and Amodei, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.

Parnell mentioned only two of those consequences in the Thursday post on X and said Anthropic has “until 5:01 PM ET on Friday to decide.”

“Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk,” he wrote.

Anthropic didn’t immediately respond to a request for comment Thursday. It said in a statement after Tuesday’s meeting that it “continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”

Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said Thursday that the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to help us from ourselves.”

“Why in the hell are we having this discussion in public?” Tillis told reporters. “This is not the way you deal with a strategic vendor that has contracts.”

He added, “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”

Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading US company.”

“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance,” Warner said in a statement. It “further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”
While Pentagon officials say they will always follow the law in their use of AI models, Hegseth told Fox News last February, weeks after becoming defense secretary, that “ultimately, we want lawyers who give sound constitutional advice and don’t exist to attempt to be roadblocks to anything.”