The US government has given the AI company behind Claude until 5pm to drop its restrictions on military use. The company says it won’t. But it has already given ground elsewhere
If you have heard of Anthropic, you probably know it as the company behind Claude, one of the most widely used AI systems in the world. What you may not know is that it is also, right now, in the middle of a stand-off with the US government that could reshape how powerful AI gets used by militaries, and who gets to set the rules.
The deadline is today. But there is more to this story than the headline suggests.
How we got here
Last July, the Pentagon awarded contracts worth up to $200 million each to four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI. The idea was to bring frontier AI into national security work.
Anthropic’s deal was notable for a specific reason. Claude became the first AI model cleared to operate on the US military’s classified networks. That was seen as a significant milestone, a signal of how seriously the government was taking Anthropic’s technology, and how seriously Anthropic was taking its government relationships.
Those contracts came with conditions. Anthropic’s acceptable use policy, written into the contract, contained two explicit restrictions: Claude could not be used for mass surveillance of American citizens, and it could not be used in fully autonomous weapons systems, meaning AI making lethal decisions without a human in the loop.
At the time, nobody made much of a fuss.
The Venezuela incident
Things changed in January 2026. Reports emerged that Claude had been used, through Anthropic’s partner Palantir, during a US military operation that resulted in the capture of Venezuelan President Nicolás Maduro. Anthropic had not been fully informed about how its technology was being deployed.
CEO Dario Amodei responded by publicly drawing what he called two “bright red lines.” Mass domestic surveillance and autonomous weapons: both off limits, full stop.
The Pentagon did not like that framing.
The ultimatum
On Tuesday this week, Defence Secretary Pete Hegseth met Amodei at the Pentagon and handed him a demand: sign a document granting the military “full access” to Claude for “all lawful purposes,” with no restrictions attached.
Hegseth and other Trump administration officials labelled Anthropic’s safety guardrails “woke AI.” The Pentagon then issued a deadline: comply by 5:01pm ET on Friday, February 27 (today), or face three consequences.
First, the $200 million contract would be cancelled. Second, Anthropic would be designated a “supply chain risk,” the kind of label normally reserved for companies considered extensions of foreign adversaries.
That designation would pressure defence contractors like Boeing and Lockheed Martin to cut ties with Claude entirely, threatening Anthropic’s much broader enterprise business. Third, the Pentagon threatened to invoke the Defense Production Act, a Cold War-era law that has never been used this way, to legally compel Anthropic to remove its safety limits.
Legal experts have described that last move as unprecedented. Amodei himself pointed out the obvious contradiction: the government was simultaneously calling Anthropic a security risk and insisting its technology was essential to national security.
The safety pledge that disappeared
Here is where the story gets more complicated.
The same week Hegseth delivered his ultimatum, Anthropic published a major revision to its internal safety framework. The company’s original Responsible Scaling Policy, written in 2023, contained a binding commitment: Anthropic would not train more powerful AI models unless it could guarantee in advance that adequate safety measures were in place. That hard limit is now gone.
The new version replaces the categorical pause with something far weaker: Anthropic will delay development only if its leaders simultaneously believe the company is leading the global AI race and judge catastrophic risks to be significant. Both conditions must be true at once, a threshold considerably harder to meet than the original rule.
Anthropic’s chief science officer Jared Kaplan told TIME the change reflected a new reality. Unilateral commitments to pause development while competitors press ahead “wouldn’t actually help anyone,” he said.
The company insists the policy revision is entirely unrelated to the Pentagon dispute. That may well be true. Internal discussions on the change reportedly ran for nearly a year.
But the timing, dropping a core safety promise in the same week a government ultimatum arrived, has attracted exactly the kind of scrutiny Anthropic would rather avoid.
One independent reviewer who assessed the new policy said it signals that Anthropic “believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities.”
Anthropic’s response to the Pentagon
Overnight on Wednesday, the Pentagon sent Anthropic revised contract language it described as a compromise. Anthropic rejected it.
“The contract language we received overnight from the Department of War made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons,” the company said in a statement Thursday. “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”
Amodei’s blog post on Thursday was blunt. Anthropic understands that the military, not private companies, makes military decisions, he wrote.
The company had never raised objections to particular operations or tried to limit its technology on an ad hoc basis. But mass surveillance and fully autonomous weapons “have never been included in our contracts with the Department of War, and we believe they should not be included now.”
“Threats do not change our position: we cannot in good conscience accede to their request,” he added.
The Pentagon’s chief technology officer Emil Michael responded on X by calling Amodei a “liar” with a “God complex” who was “ok putting our nation’s safety at risk.”
What’s actually in dispute
The Pentagon’s public position is straightforward: it says it has no intention of using Claude for mass surveillance or autonomous weapons, and that those restrictions were never triggered in practice. Senior defence analysts say that, by all accounts, the military’s use of Claude has never come close to the limits Anthropic set.
So why does it matter? Because the Pentagon does not want to be in a position where it needs a private company’s permission before acting. Legality, a Pentagon official told CNN this week, is the military’s responsibility, not Anthropic’s.
Anthropic’s view is the opposite. The company argues that some uses of AI are simply unsafe regardless of legality, and that a private company building a powerful technology has both the right and the responsibility to set those limits in its terms.
That is the real disagreement. Not what the Pentagon intends to do, but who gets to decide what the technology can be used for in the first place.
Why this matters beyond the $200 million
Losing the contract would not threaten Anthropic’s survival. The company is valued at around $380 billion and has announced a $14 billion annual revenue run rate. The contract represents a fraction of that.
The supply chain risk designation is a different matter. It would mean any company working with the US military would have to prove it has no exposure to Anthropic products in its work with the Pentagon, which could effectively freeze Claude out of a large part of the enterprise market at the worst possible time. Anthropic is reportedly planning an IPO.
There is also a broader signal at stake. Analysts have noted that the Pentagon’s approach sends a clear message to every other AI company in negotiations with the military: do not attempt to put restrictions on how the government uses your technology. Elon Musk’s xAI signed its classified network agreement without any restrictions. OpenAI removed its explicit ban on military use in early 2024. Google is in accelerated negotiations on similar terms.
Anthropic is, for now, holding its two specific lines. But it has already moved on everything else.
What happens next
As of this writing, the deadline has not passed. It is not clear whether the Pentagon will make a public announcement if Anthropic misses it, or how quickly Claude would be removed from military systems if the contract is cancelled. Contractors that rely on Anthropic products would likely be given some time to assess their exposure rather than being cut off immediately. The Defense Production Act threat remains on murky legal ground and would almost certainly face an immediate legal challenge if invoked.
What is already clear is that the company that built its reputation on being the AI industry’s safety conscience is navigating a moment that tests exactly what that identity is worth. On the Pentagon’s two specific demands, Amodei is holding firm. On the broader commitments that defined Anthropic’s founding promise, the ground has already shifted.
Whether those two things are connected is a question the company would rather not answer.