Pentagon Flags Anthropic as Supply Chain Risk Over Claude AI Restrictions
The U.S. Department of Defense has officially labeled artificial intelligence company Anthropic a “supply chain risk,” a move that could force military contractors to stop using the company’s AI tools.
The decision targets Anthropic’s AI chatbot, Claude. Officials say the designation takes effect immediately and could lead to a phase-out of the technology from military systems and defense projects.
The move follows a public disagreement between the company and the Trump administration over how the military should be allowed to use AI technology.
Pentagon Defends Decision to Label Anthropic a Risk
In a statement, the Pentagon said the military must be able to use technology “for all lawful purposes” without restrictions imposed by outside vendors.
Officials argue that when a company limits how its technology can be used, those restrictions could interfere with military operations. In this case, they believe Anthropic’s safeguards around its AI products could prevent soldiers and defense agencies from fully using the technology.
Under U.S. federal rules, labeling a company as a supply chain risk means the government believes there is a potential threat that could disrupt systems, weaken security, or limit operational control.
The rule is typically used to block technology from foreign companies linked to rival nations. Applying it to a U.S.-based AI company is highly unusual.
Anthropic Says Restrictions Are About Ethics, Not Military Limits
Anthropic’s CEO, Dario Amodei, says the company placed limits on how its AI can be used because of ethical concerns.
According to the company, it wanted safeguards to prevent two things:
- mass surveillance of civilians
- fully autonomous weapons that operate without human control
Amodei said those limits apply only to broad policy areas, not to normal military operations. He argues the Pentagon’s decision is not legally justified and says the company plans to challenge it in court.
Anthropic also says it has been working with defense officials to find a compromise that would allow continued use of its AI tools while keeping the safeguards in place.
Defense Contractors May Need to Drop Anthropic’s AI Tools
The designation could have major effects across the defense sector.
Companies that build military systems often rely on AI models from outside tech firms. If Anthropic is officially treated as a supply chain risk, contractors may have to remove its software from their projects.
Major defense contractor Lockheed Martin has already said it will follow government guidance and look for alternative AI providers.
The situation could also benefit rival AI developers. Competitors such as OpenAI and Google offer similar large language models that defense agencies could use instead.
Lawmakers Warn the Move Could Set a Risky Precedent
Some lawmakers and technology experts say the decision raises serious concerns.
Sen. Kirsten Gillibrand, a member of the Senate Armed Services Committee, criticized the move, saying the rule was designed to block threats from foreign adversaries, not American companies.
Former defense officials have also warned that using supply chain restrictions in this way could discourage innovation in the U.S. tech industry.