Anthropic disputes Pentagon claims that it can control its AI in military systems

Anthropic is disputing the Pentagon's claim that it can manipulate its AI tool Claude once the system is deployed in military networks. In a lawsuit filed last month, the company is fighting a designation that stigmatizes it as a supply chain risk.
Anthropic told a US appeals court that it cannot control Claude once the tool is deployed in Pentagon military networks, rejecting the Pentagon's assertion that the company poses a supply chain risk. Anthropic's lawyers argue that the Pentagon is retaliating against it by applying a designation meant to protect national security systems. The dispute grew out of a contract disagreement over the use of AI in autonomous weapons and surveillance. Anthropic prevailed in a similar case in San Francisco federal court, but the Washington case continues. The Pentagon has since canceled a $200 million contract with Anthropic, and OpenAI subsequently struck a deal to supply its technology to the US military.
This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.