AI warfare: Can humans really control autonomous weapons?

The increasing use of AI in warfare raises concerns about human control over autonomous weapons, with experts warning that 'black box' AI systems may be unpredictable. The 'human-in-the-loop' principle is currently used to maintain human oversight, but its effectiveness is being questioned.
The rapid advancement of AI in warfare is challenging the military's assumption that humans remain in control of machines on the battlefield. AI technologies are now integrated into weapon systems that select their own targets, defend against incoming missiles, and guide drones.

The key principle underlying current policy is to keep 'humans in the loop', ensuring that a person remains accountable for lethal decisions and limiting the dangers of automation. However, experts warn that modern advanced AI algorithms are 'black boxes': even their developers cannot fully explain the calculations behind their decisions. This lack of transparency creates an 'intention gap', in which the AI interprets instructions rather than simply carrying them out. The adoption of autonomous systems is also driven by competition: if one party gains an advantage from faster machine-based decision-making, others may feel forced to follow suit.