The Pentagon keeps promising to follow the law when using AI, but what are the limits?

The Pentagon has faced scrutiny over its use of AI in targeting decisions during the Iran war, with congressional Democrats demanding answers after a US strike on an Iranian school allegedly killed 168 children. While Defense Secretary Pete Hegseth insists humans make final lethal decisions, legal experts warn the rapid AI-assisted kill chain raises questions about accountability and ethical limits in warfare.
The US military has integrated AI into its operations more extensively than in any previous conflict, drawing on data from satellites, signals intelligence, and software like Palantir's Maven Smart System to help commanders identify potential targets. According to sources familiar with US operations, AI tools such as Anthropic's Claude analyze vast datasets far faster than humans can, flagging targets for strikes in the Iran war.

Defense Secretary Pete Hegseth has repeatedly stated that humans, not AI, make the final call on lethal targeting decisions, emphasizing compliance with the law. However, congressional Democrats, including Reps. Sara Jacobs, Jason Crow, and Yassamin Ansari, have pressed the Pentagon on whether AI contributed to a February strike on an Iranian elementary school that killed at least 168 children, as reported by Iranian state media.

Legal experts argue that while existing law holds commanders accountable for targeting decisions, it sets no explicit limits on AI's role in the kill chain. The speed at which AI accelerates decision-making cycles, known as OODA loops (observe, orient, decide, act), raises concerns about meaningful human oversight. Cory Simpson, a former legal adviser to US Special Operations Command, noted that AI exponentially increases the pace of these loops, giving militaries a tactical advantage but potentially eroding ethical and legal safeguards.

The Pentagon is also embroiled in a legal dispute with Anthropic, after the AI firm demanded restrictions on how its technology could be used in military applications.
Hegseth has publicly criticized Anthropic's CEO as an 'ideological lunatic,' while Gary Corn, a former deputy legal counsel in the Office of the Chairman of the Joint Chiefs of Staff, likened the Pentagon's approach to 'running with scissors.' Meanwhile, Palantir's Maven Smart System, deployed across the Department of Defense, has been praised by the Pentagon's chief digital and AI officer, Cameron Stanley, for transforming targeting processes. The absence of clear legal boundaries on AI's use in warfare continues to fuel debate over accountability, ethics, and the risks of unchecked automation in combat.
This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.