The Department of Defense alleges the AI developer could manipulate models in the middle of war.
Anthropic Denies It Could Sabotage AI Tools During War

By Paresh Dave, Business | Mar 20, 2026, 8:03 PM
Photo-Illustration: WIRED Staff; Getty Images

Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday.
The statement responds to accusations from the Trump administration that the company could tamper with its AI tools during wartime.
“Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations,” wrote Thiyagu Ramasamy, Anthropic’s head of public sector.