What the feud between the US government and Anthropic reveals

The Pentagon has blacklisted the AI startup, an open conflict that illustrates the growing rift between the Trump administration and part of the industry.

About ten days ago, Peter Hegseth, the United States Secretary of Defense, presented Anthropic with an ultimatum: grant full access to its technology to the Department of Defense (within the limits of legal uses) or see its contract with the government revoked and its technology banned from the administration.

The young company, at the forefront of agentic AI with its chatbot Claude and its workplace-oriented variants such as Claude Code and Claude Work, refused. Donald Trump then published on his Truth Social network a long, furious message accusing the company of being “woke” and of belonging to the “radical left”. The Department of Defense classified Anthropic as a risk to its supply chain, which bars every branch of government from using it. Applying this status to an American company is unprecedented: it had until now been reserved for Russian and Chinese firms such as Huawei.

Anthropic and the American government: a romance cut short

The startup has until now been at the heart of the American government’s AI adoption strategy. Last July, it was awarded a $200 million contract from the Pentagon to develop AI applications for national security. While the government also works with other AI players, including Google, OpenAI and xAI, Anthropic is the most widely used, notably thanks to its integration with Palantir, and the only one deployed on classified systems. Its technology was reportedly used in the operation to capture Nicolás Maduro in Venezuela.

Things began to go wrong in early January, after Peter Hegseth published a memo announcing a strategy to make the American army an “AI-focused fighting force”, calling for the technology to be used free of any “use policy constraints”. This, combined with the use of its technology in the Venezuela operation, caused internal turmoil at Anthropic: on February 15, one of its AI-safety researchers resigned in protest.

At the same time, the Defense Department began pressuring Anthropic, along with other companies, to authorize military use of their systems for “all lawful purposes,” including weapons development, intelligence and battlefield operations. Anthropic’s reluctance led Peter Hegseth to harden his stance, first imposing a deadline on Dario Amodei, then, in the absence of a compromise, opting for a radical break.

The Trump administration’s accelerationist approach clashes with that of Anthropic

The startup has said it intends to challenge the decision in court. Some American experts, such as Dean Ball, a former Trump administration advisor on AI, see the affair as a dangerous precedent likely to harm the dynamism of the American AI ecosystem, currently locked in fierce competition with China. “Nvidia, Amazon and Google will have to divest from Anthropic if Hegseth wins his case. This is simply an assassination attempt against a company. I could in no way recommend investing in American AI to any investor; nor recommend anyone to create an AI company in the United States,” he posted on his X account.

The Trump administration ran a resolutely techno-libertarian campaign in 2024, winning over many Silicon Valley executives, former Democrats disappointed by what they saw as Joe Biden’s anti-tech policy. Since taking office, it has pursued an unbridled AI adoption policy, opposing any regulation and pushing for rapid uptake of the technology across all branches of government, including Defense.

This laissez-faire approach, however, goes too far for some voices within tech, Anthropic among them: its leader, Dario Amodei, regularly warns of the risks inherent in unregulated AI.

The dangers of unbridled use of AI

The company is not alone in worrying about the rapid adoption the Pentagon wants to push through. Gary Marcus, an American AI expert, is likewise convinced that large language models are currently far too prone to hallucinations and errors to be used for military purposes. “Generative AI is absolutely not reliable enough to make life-and-death decisions at scale,” he writes on his blog. “If we are going to disseminate LLMs everywhere, we must do so in a way that recognizes and takes into account their unreliability. Deploying them everywhere without sufficient precautions could well lead to disaster.”

A recent study from King’s College London will do little to reassure Gary Marcus or Dario Amodei: researcher Kenneth Payne and his team pitted three large language models (GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash) against one another in war games involving international confrontations, including border conflicts, competition for scarce resources and existential threats to governments’ survival. Tasked with organizing a country’s defense, the AIs opted for nuclear strikes in 95% of scenarios. Some experts also suspect that the strike on a school in Iran, which killed 150 children, may have been linked to an AI error.

There are also fears that the Trump administration will use AI for large-scale surveillance at home, for example to make its mass deportations more effective or to monitor political opponents.

Sam Altman’s Judas kiss

Sam Altman, who has a tumultuous relationship with Dario Amodei, himself a former OpenAI employee, stepped into the breach by striking a deal with the Pentagon, after initially voicing support for Anthropic in its quarrel with the authorities. While Altman claims to have obtained safeguards from the Department of Defense, many experts were quick to point out that the agreement seemed too good to be true and that he had evidently agreed to give in to Peter Hegseth’s demands.

The company is not yet approved for classified uses, however, in particular because its technologies are not available via the Amazon cloud used by the US government. That could change following a well-timed partnership between OpenAI and Amazon, under which the e-commerce giant plans to inject $50 billion into Sam Altman’s company.

Several AI experts, such as Timothy B. Lee, have called on Congress to legislate to limit certain uses and avoid a race to the bottom among the sector’s main companies chasing lucrative contracts, even at the cost of throwing ethics overboard. There is little chance, however, that such a law will pass before the midterm elections and a possible return of a Democratic majority to Congress, as the current Republican majority seems determined to follow Donald Trump’s accelerationist approach.

By attacking Anthropic and brushing aside the fears expressed by many AI professionals, including Sam Altman himself, about the technology’s lack of oversight, Trump is nonetheless playing a dangerous game. Silicon Valley entrepreneurs may conclude that they could be next to suffer Anthropic’s fate, and that it is in their interest to return to the Democrats rather than live with a sword of Damocles permanently hanging over their heads.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.