GENEVA: A top World Health Organization official estimated Monday that COVID-19 vaccination coverage of at least 80 percent is needed to significantly lower the risk that “imported” coronavirus cases like those linked to new variants could spawn a cluster or a wider outbreak.
Dr. Michael Ryan, WHO’s emergencies chief, told a news conference that ultimately, “high levels of vaccination coverage are the way out of this pandemic.”
Many rich countries have been moving to vaccinate teenagers and children — who have lower risk of more dangerous cases of COVID-19 than the elderly or people with comorbidities — even as those same countries face pressure to share vaccines with poorer ones that lack them.
Britain, which has vastly reduced case counts thanks to an aggressive vaccination campaign, has seen a recent uptick in cases attributed largely to the so-called delta variant that originally appeared in India — a former British colony.
Ryan acknowledged that the data wasn’t fully clear about what percentage of vaccination coverage was necessary to meaningfully affect transmission.
“But ... it’s certainly north of 80 percent coverage to be in a position where you could be significantly affecting the risk of an imported case potentially generating secondary cases or causing a cluster or an outbreak,” he said.
“So it does require quite high levels of vaccination, particularly in the context of more transmissible variants, to be on the safe side,” Ryan added.
Maria Van Kerkhove, WHO’s technical lead on COVID-19, noted the delta variant is spreading in more than 60 countries, and is more transmissible than the alpha variant, which first emerged in Britain.
She cited “worrying trends of increased transmissibility, increased social mixing, relaxing of public health and social measures, and uneven and inequitable vaccine distribution around the world.”
WHO Director-General Tedros Adhanom Ghebreyesus, meanwhile, called on leaders of the developed Group of Seven countries to help the UN-backed vaccination program against COVID-19 boost access to doses in the developing world.
With G-7 leaders set to meet in England later this week, Tedros said they could help meet his target of vaccinating at least 10 percent of the population of every country by the end of September, and 30 percent by year-end.
“To reach these targets, we need an additional 250 million doses by September, and we need hundreds of millions of doses just in June and July,” he said, alluding to the summit involving Britain, Canada, France, Germany, Italy, Japan and the United States.
“These seven nations have the power to meet these targets. I’m calling on the G-7 not just to commit to sharing those, but to commit to sharing them in June and July.”
At a time of continued tight supply of vaccines, Tedros also called on manufacturers to give the “first right of refusal” on new vaccine volumes to the UN-backed COVAX program, or to commit half of their volumes to COVAX this year.
He warned of a “two-track pandemic,” with mortality among older age groups declining in countries with higher vaccination rates even as rates have risen in the Americas, Africa and the Western Pacific region.
WHO: High vaccination rates can help reduce risk of variants
Anthropic CEO says AI company ‘cannot in good conscience accede’ to Pentagon’s demands
WASHINGTON: Anthropic CEO Dario Amodei said Thursday the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow wider use of its technology.
The company said in a statement that it’s not walking away from negotiation but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
The Pentagon’s top spokesman has reiterated that the military wants to use Anthropic’s artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands.
Sean Parnell said Thursday on social media that the Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
Anthropic’s policies prevent its models, such as its chatbot Claude, from being used for those purposes. It’s the last of its peers — the Pentagon also has contracts with Google, OpenAI and Elon Musk’s xAI — to not supply its technology to a new US military internal network.
Parnell said the Pentagon wants to “use Anthropic’s model for all lawful purposes” but didn’t offer details on what that entailed. He said opening up use of the technology would prevent the company from “jeopardizing critical military operations.”
“We will not let ANY company dictate the terms regarding how we make operational decisions,” he said.
During a meeting on Tuesday between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.
Parnell mentioned only two of those consequences in the Thursday post on X and said Anthropic has “until 5:01 PM ET on Friday to decide.”
“Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk,” he wrote.
Anthropic didn’t immediately respond to a request for comment Thursday. It said in a statement after Tuesday’s meeting that it “continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said Thursday that the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to help us from ourselves.”
“Why in the hell are we having this discussion in public?” Tillis told reporters. “This is not the way you deal with a strategic vendor that has contracts.”
He added, “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”
Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading US company.”
“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance,” Warner said in a statement. It “further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”
As Pentagon officials say they always will follow the law with their use of AI models, Hegseth told Fox News last February, weeks after becoming defense secretary, that “ultimately, we want lawyers who give sound constitutional advice and don’t exist to attempt to be roadblocks to anything.”










