Are we on the precipice of developing autonomous killer robots: unmanned technologies that will completely dominate conflict, remorselessly, perfectly? Some advocates think so, but this article explains why the killer robot view is misplaced and how, in its misdirection, it draws attention away from how militaries are actually likely to use machine learning (ML) and artificial intelligence (AI) technologies. It draws upon operational constraints, a realistic view of technology, and even cultural aspects of the military to explain why Terminator-like devices are not what governments aspire to build.
AI will accelerate and shape intelligence, logistics, and targeting operations, activities that are mostly overlooked by academics yet carry profound consequences. This article advances a theory of which AI technologies are likely to help militaries and which are likely to prove too risky or of too little use.
A lack of familiarity with military operations leads some advocates and academics to imagine fully automated offensives. The reality is that militaries carefully study adversaries and employ multiple strategies to control attacks in order to reach strategic goals, to avoid ambiguity and escalation, and to avoid civilian deaths. Prosaic functions that escape public attention provide the inputs for this study and control.
Study of the autonomous weapon system space reveals a focus on target selection and trigger control. But long before those decisions are made, machine learning technologies may inform the ground truth, the analysis, and the trade-offs weighed before trigger control even becomes relevant. Fanciful visions of killer robots may never materialize, yet military use of ML for decision support is already here. Familiarity with the legal, policy, and operational realities of military activities may now offer the greatest payoff for protecting humanitarian interests.