Wareham argued that the directive imposes no real limits on the Pentagon.
“It’s not a policy of self-restraint,” said Wareham. “The directive facilitates US development of autonomous weapons systems, as long as it’s done in accordance with existing legal rules and ethical principles. The directive does nothing to curb proliferation.”
Paul Scharre, the director of studies at the Center for a New American Security and a long-time supporter of military drones and AI, noted that the old directive simply said specific systems required a review for approval, but never explained how to actually get one.
“It wasn’t necessarily clear what to do, who do I talk to, how do I instigate this process,” Scharre said.
The revised directive is “establishing the procedures to implement these audits,” according to Scharre, who has laid out a line-by-line breakdown of the changes.
At a January press briefing, the Pentagon’s head of “emerging capabilities policy,” Michael Horowitz, played down the “slight modifications and amendments” to the 2012 version of DODD 3000.09.
The timing, Horowitz said, was driven by the Defense Department’s bureaucratic calendar: a decade had passed since the directive was issued, and the department must reissue, update, or cancel each of its directives every 10 years.
Horowitz was frank that the policy does not block any particular program. “The directive does not forbid the manufacturing of any distinctive weapon system,” he said.
Nor, he made clear, does the directive cover the Science & Technology (S&T) phase of lab work, pre-prototypes, and field experiments. That exclusion matters because even the most advanced autonomous weapons programs, such as the Air Force’s Collaborative Combat Aircraft and the Navy’s Sea Hunter and Overlord unmanned ships, remain in that experimental stage.
Once a weapons program moves out of S&T, Horowitz explained, the policy requires two levels of review: one before formal development (Milestone B) begins and a second before the weapon is fielded. What he did not mention is that the Deputy Secretary of Defense can waive either or both reviews in cases of urgent military need.
Directive 3000.09 now incorporates the Pentagon’s AI ethics principles, adopted in 2020, and tracks the Responsible AI Strategy and Implementation Pathway issued in June 2022. The ethics principles require all AI programs to be “responsible,” “equitable,” “traceable,” “reliable,” and “governable,” guarding against bias and erroneous outputs. The implementation strategy, in turn, lays out criteria, research objectives, and other guidance for putting those principles into practice. Note that “AI” and “autonomy” are not interchangeable, and not all autonomy relies on AI.
The new directive also updates the duties of various officials, since the old Undersecretariat of Acquisition, Technology, & Logistics was split in two in 2018. The most significant change, bureaucratically, is the creation of a standing working group to advise senior officials on how to implement the reviews.
Getting the staffing right is critical to an efficient bureaucratic process, explained Zachary Kallenborn, a policy fellow at George Mason University, because the policy questions involved are often intricate, highly technical, and context-dependent. How, he asked Breaking Defense, do the risk and defensive value differ among an underwater drone built for submarine-hunting, DARPA’s massive swarm-of-swarms, and Spot, the cannon-toting robo-dog?
Building a lethal Spot has gotten easier, Kallenborn noted, thanks to subtle changes in Directive 3000.09. The newest version exempts from senior review weapons that defend unmanned vehicles, as long as they target machines rather than people: exempt systems include “human-supervised autonomous weapon systems used to select and engage materiel targets for defending operationally deployed remotely piloted or autonomous vehicles and/or vessels,” it explains.
“That’s interesting and definitely makes autonomous weapon use much easier in general,” he said. “The word ‘defending’ is doing a ton of work. If a drone is operating in enemy territory, almost any weapon could be construed as ‘defending’ the platform. A robo-dog carrying supplies could be attacked, so giving Spot an autonomous cannon to defend himself would not require senior approvals. If Spot happens to wander near an enemy tank formation, Spot could fight — so long as Spot doesn’t target humans.”
Of course, enemy tanks tend to have human beings inside. “There’s a lot of vagueness there,” Kallenborn said.
Scharre said the updated directive also eases the path for other kinds of autonomous weapons. Anti-aircraft and anti-missile defense systems like the Army’s Patriot and the Navy’s Aegis, for instance, typically have a fully automated mode for use against multiple fast-moving targets: in essence, the overwhelmed human operator tells the computer to pick the targets and open fire immediately, faster than any human could manage. Both the 2012 and 2023 versions of Directive 3000.09 exempt such autonomous defense against “time-critical or saturation attacks” from review, but the 2023 update extends that exemption to “networked defense where the autonomous weapon system is not located nearby.”
In other words, the human supervisor and the autonomous defense system no longer need to be in the same place, allowing the Pentagon to disperse its defenses. But if the network goes down, the remote operator can neither see what is happening nor issue orders, while the weapon would remain authorized to shoot. That could save lives if an enemy jams or hacks the network; it could also cost lives if something glitches, as it did with the Patriot launchers that downed two friendly aircraft in 2003.
From an arms control perspective, Wareham said, the new policy is an “insufficient” response.