Are we on the precipice of developing autonomous killer robots: unmanned technologies that will dominate conflict completely, remorselessly, and perfectly? Some advocates think so, but this article explains why the killer robot view is misplaced and how its misdirection draws attention away from the ways militaries already use artificial intelligence and machine learning technologies. The article draws on operational constraints, a realistic view of technology, legal constraints, and even the culture of the military to explain that Terminator-like devices are not what militaries aspire to build.
A lack of familiarity with military operations leads some advocates and academics to imagine fully automated offensives. In reality, militaries carefully study adversaries and employ multiple strategies to control attacks in order to reach strategic goals, avoid ambiguity and escalation, and avoid civilian deaths. Prosaic functions that escape public attention provide the inputs for this study and control.

This article explains the legal, cultural, and operational constraints that shape military procurement and decisionmaking, with a focus on a topic ignored in the killer robot literature: the targeting process. With a better understanding of the military's wants, needs, and limitations, a more aligned view points to opportunities for meliorative intervention.
Table of Contents
By autonomous we really mean automated—highly automated
History of automaticity and autonomy
The Precision Guided Munition revolution: control and contempt
Department of War: Autonomy as control
Why the focus on killer robots misses the target
AWS definitional game theory
The different reasons militaries do not want Terminators
AWSs have complex interactions with the Law of Armed Conflict
The real killer robot: the targeting process
Constructively engaging the AWS debate
The privacy law of intelligence and military operations
From LOAC to law enforcement
New opportunities from AWSs: civilian protection
Alternative policy directions: a draft would likely worsen AWS risk
We might stop worrying and learn to love AWSs
Introduction
The United States (US) military is among the largest, most complex, and most interesting institutions in the world. It uses artificial intelligence and machine learning (AIML, hereafter machine learning or “ML”) to power decision support systems for many purposes, including deciding what and how to attack.[1] Yet many academics who study ML have eschewed the military as an object of analysis. An influential 2016 report, part of a project titled “The One Hundred Year Study on Artificial Intelligence,” explicitly omitted military applications.[2] A seminal exploration of AI safety chose relatively innocuous functions—like the errors a robot maid might make—to illustrate problems.[3]
The reasons for overlooking the military are unclear, because ML is clearly a dual use technology with multifarious applications in conflict.[4] These are obvious to anyone with knowledge of the history of computing and technologies such as computer vision.[5] Perhaps one rationale is a kind of rhetorical and mental distancing from military affairs, meant to keep workers from dwelling on dual use implications.[6] Eliding the military could be motivated by a desire to preserve nation-state competition in ML.[7] It could just as well reflect academia's lack of familiarity with the military as an institution.[8]
At the opposite end of the spectrum, a “killer robots” literature takes the problem of death machines as its focus. The literature is connected with an advocacy movement to “stop killer robots.” These imaginaries envision ruthless, perfect killing machines, like the Terminator. And while this literature is martial in tone, it does not situate its analysis within the legal, political, and social realities of the US military.
This Article addresses that lacuna. It explains that militaries extensively use software, including ML, for myriad functions, such as tracking logistics and force preparedness. But ML is also used to study adversaries, to decide what to attack, and to determine how to attack it. This Article’s primary contribution is an explanation of the deliberate targeting process, one infused with humanitarian concerns. These concerns, a product of cultural, political, and legal commitments along with technological realities, show that the US and aligned militaries are unlikely developers of Terminators. A view more aligned with these realities could help advocates engage and realize a more humanitarian future. To make that argument, this Article proceeds in three Parts.
Part 1 of this Article explains how the development of highly automated, precision weapons systems led, paradoxically, to a situation in which automation began to undermine human control. The “killer robots” debate is a late arrival to these developments in two senses. First, automation has been increasing for decades, with leaps during World War II and, later, in the form of fully computer-controlled defensive systems. Second, “killer robots” focuses on trigger control, the last and most consequential moment of an attack. Thus, the killer robots frame focuses attention late in the “kill chain,” the structure militaries give to the procedures of attack. This elides targeting doctrine, which is quite deliberative.
Having set the stage, Part 2 explains the factors that temper the pursuit of the feared killer robots. In particular, cultural barriers, operational constraints, and legal dynamics make killer robots a bad fit for the needs of the military. It will become clear that the killer robots frame, especially the focus on trigger control, crops out many other consequential yet uncontroversial processes that flow forward to attack decisions.[9] The frame may also be opportunistic (i.e., a beard for broader campaigns of adversarial pacifism) and is often unrealistic (e.g., confecting ban language that proscribes long-used, necessary devices). This Part argues that the targeting process, an intricate, redundant, labor-intensive, legalistic, multidisciplinary investigation into choosing targets, is the military’s killer robot. This robot does not look like the Terminator.
Part 3 turns to a range of interventions that might effectively engage the current debate and head off policy drift toward more lethal forms of offensive autonomy. These range from privacy law (focusing on the integrity of intelligence collection), to reframing the rules of trigger control (imposing law-enforcement-like rules of engagement), to socio-political measures (imposing a draft), to humanitarian measures (implementing ML for civilian protection). This Part concludes with more-likely-than-not scenarios that could cause a takeoff in autonomous weapons, leading to implementations uncontrolled by policy.
Before proceeding, it is important to clarify this Article’s premises. Because it focuses on targeting, it adopts a realist perspective, assuming such targeting occurs in armed conflicts that are lawful under the jus ad bellum.[10] The aim is not to confect new legal constraints for law-abiding states, nor to creatively reinterpret international agreements in ways that inhibit legitimate defense. Rather, this Article begins from the sober assumption that states will continue to rely on coercion in international affairs, and that technological competition—including in military autonomy—will remain a strategic priority.
Further, one needs to approach these subjects with some humility, because the burdens and risks associated with constraints on military action are rarely borne by their advocates. Service members often must imperil their own lives to ensure that others are properly protected under humanitarian norms. Defense officials are more likely to take scholarship seriously if it pursues meliorative reforms pragmatically, with an eye to fairness to the soldier.
[1] AI is a useful marketing term, even for law review articles, but the better term is “machine learning,” which this article will use going forward.
[2] Peter Stone et al., Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence, arXiv preprint arXiv:2211.06318 (2022).
[3] Dario Amodei et al., Concrete Problems in AI Safety (2016), https://arxiv.org/abs/1606.06565.
[4] Andreas Brenneis, Assessing Dual Use Risks in AI Research: Necessity, Challenges and Mitigation Strategies, 21 Research Ethics 302 (2025).
[5] Julia A. Irwin, Artificial Worlds and Perceptronic Objects: The CIA’s Mid-Century Automatic Target Recognition, Grey Room 6 (2024). The Central Intelligence Agency was among the first institutions to experiment with object detection using the Mark I Perceptron, the first neural network computer, in the 1960s.
[6] In 2018, a group of Google employees convinced the company to forgo work on “Project Maven,” a DOW program focused on recognizing entities. A problem with the employees’ objection is that Google’s commercially-driven computer vision research has clear application to military objectives. A Google-Waymo autonomous car needs to be able to sense and recognize people, other cars, animals, and so on. The employees’ campaign was puzzling because Google will never be able to avoid building “warfare technology.” See Letter of Google Employees, n.d.; Vincent Boulanin, Regulating Military AI Will Be Difficult. Here’s a Way Forward, 3 Bulletin of the Atomic Scientists (2021).
[7] Raluca Csernatoni, Governing Military AI Amid a Geopolitical Minefield (2024). Europe’s landmark AI Act does not regulate nations’ militaries.
[8] Herwin W. Meerveld et al., The Irresponsibility of Not Using AI in the Military, 25 Ethics and Information Technology 14 (2023): “A recent literature review on data science and AI in military decision-making found that most of the studies examining these topics originate in social sciences. As a result, the debate about the use of AI for military purposes, although of high strategic importance, appears to be limited in terms of its scope and perspective.”
[9] The House of Lords characterized machine learning use for intelligence, surveillance, and reconnaissance (ISR) as uncontroversial. UK Parliament, House of Lords, Proceed with Caution: Artificial Intelligence in Weapon Systems (2023).
[10] It is possible to have jus in bello-compliant targeting within a conflict that is unlawful under the jus ad bellum.
