President Trump ordered all federal agencies to immediately cease using Anthropic’s AI technology, citing the company’s refusal to allow its tools to be used for domestic mass surveillance or fully autonomous weapons, uses sought under a Pentagon contract worth up to $200 million.
The ban, announced via Truth Social, allows a six-month phaseout of Anthropic technology in government systems and follows a standoff between the company and the Department of Defense.
The big picture: Trump criticized Anthropic for “strong-arming” the Pentagon with its terms of service, insisting government agencies will not do business with Anthropic again.
- Anthropic had been told by the Pentagon to lift usage restrictions on its AI model Claude or risk being labeled a “supply chain risk,” effectively blacklisting the company from government contracts.
- The Pentagon threatened to use the Defense Production Act to compel compliance and maintains that federal law and policy already prohibit use of AI for domestic surveillance or autonomous weapons without human oversight.
The other side: Amid rising tensions, Anthropic CEO Dario Amodei publicly stood by the company’s stance, arguing such applications of today’s AI are unsafe and outside ethical bounds.
- OpenAI CEO Sam Altman voiced similar concerns, stating support for “red lines” against military AI use in mass surveillance or autonomous weapons; OpenAI is pursuing negotiations for safety exclusions in its government contracts.
Zoom in: The dispute arises as Anthropic plans for an IPO and faces heightened scrutiny from investors about its relationship with the U.S. government.
- Industry experts note that it is highly unusual for Pentagon contractors to dictate how the government may use their products, but the novelty and potential consequences of AI technology have made this an unprecedented, highly public conflict.