
Why having “humans in the loop” in an AI war is an illusion



The use of AI in warfare raises concerns about human oversight: state-of-the-art AI systems are opaque "black boxes" whose reasoning humans cannot fully inspect. Because an overseer cannot see why a system recommends an action, unintended consequences, including war crimes, remain possible even when a human has approved the action.

The availability of artificial intelligence for use in warfare is at the center of a legal battle between Anthropic and the Pentagon. AI is now an active participant in conflicts, generating targets in real time and guiding lethal swarms of autonomous drones. In this context, the debate over keeping "humans in the loop" is a distraction, because human overseers have no way of knowing what an AI system is actually "thinking." State-of-the-art AI systems are essentially black boxes that even their creators cannot fully interpret. In a hypothetical scenario, an AI system might recommend a strike on a munitions factory while its reasoning rests on a hidden factor, one that also causes damage to a nearby children's hospital. Human oversight may therefore not provide the safeguard people imagine: a human can approve or veto an AI's output, but cannot know the AI's intention before it acts.

This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.
