At first glance, the Pentagon's recently revamped policy on autonomy in weapons systems may look daunting to defense officials working on robotic weapons. The directive, "Autonomy in Weapons Systems," has grown from 15 pages to 24, adding a list of ethical principles to be observed in new weapons programs, a full-page flowchart of the decisions officials must make, and an Autonomous Weapon Systems Working Group to oversee it all.

But advocates and experts said the revised policy is more like R2-D2: it is there to help. With its clarifications, its flowchart, and a working group to serve as a central hub, the 2023 update of DoD Directive 3000.09 turns the review process first established in 2012 into a step-by-step procedure that officials can actually follow. Notably, it does not impose significant new restrictions on the design of autonomous weapons.

“On one hand, it sounds like this is adding more layers of control and regulation, and that might sound daunting,” said Michael Klare, a senior visiting fellow at the Arms Control Association. “On the other hand, I think it’s meant to give a green light to commanders and project managers, [because] they can proceed with a clear understanding of what they’re going to have to go through and what criteria they’re going to have to satisfy.”

“Under the earlier directive, there was more ambiguity,” he said. “It makes it easier.”

The Pentagon has yet to put a single weapon system through the review process mandated by DoD Directive 3000.09. A spokesperson stated in 2019 that, to that point, no weapon had even been subject to the Senior Review, meaning either that no proposed program met the conditions that would trigger the review, or that any such program received a waiver because of an urgent military requirement. Both the old and new policies allow the Deputy Secretary of Defense to waive the review in such cases.

Mary Wareham, a founding coordinator of the Campaign to Stop Killer Robots, said during a Jan. 30 briefing that Pentagon representatives, when asked, declined to answer questions about how the process has been used, a departure from the explicitness of the 2019 statement.

A flowchart in the 2023 update of DoD Directive 3000.09 lays out the approval process for autonomous weapons programs. (DoD graphic)

Bureaucratic hurdles aside, the updated 3000.09 streamlines the path to approval for building and using autonomous weapons, the experts said. That is reassuring for those worried about the US military being outpaced by Russian and Chinese AI and robotics. It is less reassuring for those whose priority is arms control.

“It’s not a policy of self-restraint,” said Wareham. “The directive facilitates US development of autonomous weapons systems, as long as it’s done in accordance with existing legal rules and ethical principles. The directive does nothing to curb proliferation.”

Paul Scharre, director of studies at the Center for a New American Security and a longtime proponent of military robotics and AI, noted that the old regulation simply stated that certain systems required a review for approval, without explaining how to actually navigate the process.

“It wasn’t necessarily clear what to do, who do I talk to, how do I instigate this process,” Scharre said.

The revised directive is "establishing the procedures to implement these audits," said Scharre, who has published a line-by-line breakdown of the changes in the revised version.

At a January press briefing, the Pentagon's director of emerging capabilities policy, Michael Horowitz, played down what he called the "slight modifications and amendments" to the 2012 version of the directive.

Horowitz noted that a decade had passed since the directive was issued, and the department is obligated to reissue, update, or cancel its directives every 10 years. The timing of the release, in other words, was driven by the Pentagon's bureaucratic calendar.

Horowitz openly acknowledged the policy's limits: "The directive does not forbid the manufacturing of any distinctive weapon system," he explained.

He also made clear that the directive does not cover the Science & Technology (S&T) phase: laboratory work, pre-prototypes, and field experiments. Even the most advanced autonomous weapons programs, such as the Air Force's Collaborative Combat Aircraft and the Navy's Sea Hunter and Overlord unmanned ships, are still in that experimental stage.

For a weapons program to move beyond S&T, Horowitz explained, the policy requires two rounds of review: one before formal development (Milestone B) begins, and another before the weapon is fielded. What he did not mention is that the Deputy Secretary can waive either or both reviews in cases of urgent military need.
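
In rough code terms, the gating Horowitz describes might look like the following Python sketch. Everything here is illustrative: the class, field, and method names are invented, and the logic is only a plain-language reading of the two-review-plus-waiver process described above, not anything published by the department.

```python
# Illustrative sketch of the two-tier review gate in the updated Directive
# 3000.09 as described above: one senior review before formal development
# (Milestone B) and another before fielding, either of which the Deputy
# Secretary of Defense may waive for urgent military need.
# All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AutonomousWeaponProgram:
    name: str
    passed_development_review: bool = False
    passed_fielding_review: bool = False
    waivers: set = field(default_factory=set)  # may hold "development", "fielding"

    def may_enter_development(self) -> bool:
        """Milestone B gate: senior review completed, or waived."""
        return self.passed_development_review or "development" in self.waivers

    def may_be_fielded(self) -> bool:
        """Fielding gate: both hurdles cleared (or waived)."""
        return self.may_enter_development() and (
            self.passed_fielding_review or "fielding" in self.waivers
        )

program = AutonomousWeaponProgram("example-system")
program.waivers.add("development")       # waived for urgent military need
print(program.may_enter_development())   # True: waiver substitutes for review
print(program.may_be_fielded())          # False: fielding review still pending
```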

Directive 3000.09 now incorporates the ethical principles for AI that the Pentagon adopted in 2020, and it follows the Responsible AI Strategy & Implementation Pathway issued in June 2022. The ethics principles require that all AI programs be "responsible," "equitable," "traceable," "reliable," and "governable," to guard against problems such as biased or erroneous outputs; the strategy then lays out criteria, research objectives, and code to implement those principles. It is worth noting that "AI" and "autonomy" are not interchangeable, and not all autonomy relies on AI.

The new directive also updates the duties of various officials, since the Undersecretariat of Acquisition, Technology, & Logistics was split in two in 2018. The most consequential bureaucratic change is the creation of a standing working group to guide senior officials in implementing the reviews.

Zachary Kallenborn, a policy fellow at George Mason University, said the right staffing is critical to an efficient bureaucratic process, because the policy questions involved are often intricate, highly technical, and dependent on context. How, he asked Breaking Defense by way of example, do the risks and the defensive value differ among a submarine-hunting underwater drone, DARPA's massive swarm-of-swarms, and Spot, the cannon-toting robo-dog?

Kallenborn noted that building a lethal Spot is now more straightforward, thanks to small changes in Directive 3000.09. The newest version stipulates that defensive armament for unmanned vehicles is exempt from scrutiny as long as the targets are machines rather than people: "Weapon systems that do not require the senior review [consist of] systems that are managed by operators to protect operationally deployed remotely piloted or autonomous vehicles and/or vessels, and are used to pick out and engage materiel targets," it explains.

“That’s interesting and definitely makes autonomous weapon use much easier in general,” he said. “The word ‘defending’ is doing a ton of work. If a drone is operating in enemy territory, almost any weapon could be construed as ‘defending’ the platform. A robo-dog carrying supplies could be attacked, so giving Spot an autonomous cannon to defend himself would not require senior approvals. If Spot happens to wander near an enemy tank formation, Spot could fight — so long as Spot doesn’t target humans.”

Of course, enemy tanks tend to have human beings inside. “There’s a lot of vagueness there,” Kallenborn said.
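
On Kallenborn's reading, the exemption reduces to a two-condition test. Here is a minimal sketch of that test, with hypothetical function and enum names that are not drawn from the directive itself:

```python
# Illustrative-only reading of the senior-review exemption: a weapon that
# protects a deployed unmanned platform and engages only materiel (non-human)
# targets would skip the senior review. All names are invented for the sketch.
from enum import Enum, auto

class TargetType(Enum):
    MATERIEL = auto()   # machines: drones, missiles, tanks, vehicles
    PERSONNEL = auto()  # human targets

def requires_senior_review(protects_deployed_unmanned_platform: bool,
                           target_type: TargetType) -> bool:
    # Exempt: operator-managed defense of remotely piloted or autonomous
    # vehicles/vessels, engaging materiel targets only.
    exempt = (protects_deployed_unmanned_platform
              and target_type is TargetType.MATERIEL)
    return not exempt

# Kallenborn's scenario: Spot "defending" itself against an enemy tank.
print(requires_senior_review(True, TargetType.MATERIEL))   # False: exempt
print(requires_senior_review(True, TargetType.PERSONNEL))  # True: review required
```

The vagueness he flags lives in the first condition: in enemy territory, almost any engagement can be labeled protection of the platform.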

Scharre said the updated directive also eases the path for other autonomous weapons. Anti-aircraft and anti-missile systems such as the Army's Patriot and the Navy's Aegis, for example, commonly have a fully automatic mode for use against multiple fast-moving targets: in essence, the overwhelmed human operator orders the computer to decide which targets to shoot and to fire without delay, faster than any human could manage. Both the 2012 and 2023 versions of Directive 3000.09 exempt this kind of autonomous defense against "time-critical or saturation attacks" from review, but the 2023 update extends the exemption to "networked defense where the autonomous weapon system is not located nearby."

In other words, the Pentagon aims to decentralize its defenses: the human supervisor and the autonomous defensive system no longer need to be in the same place. If the network goes down, the remote operator can neither see the situation nor give orders, but the weapon would still be authorized to shoot. That could save lives in the event of enemy jamming or hacking, but it could also cost lives if there is a technical glitch, as with the Patriot launchers that downed friendly aircraft in 2003.
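
The tradeoff can be caricatured in a few lines of code. This is a toy sketch with invented names, illustrating only the logic described above, not any real fire-control software:

```python
# Toy model of remotely supervised autonomy for point defense: while the
# network link is up, a distant human supervisor can veto engagements; if the
# link is cut (jamming, hacking, outage), the system may still engage
# time-critical threats on its own, as the expanded exemption appears to allow.
def authorize_engagement(link_up: bool, supervisor_veto: bool,
                         threat_is_time_critical: bool) -> bool:
    if link_up:
        # A human on the network sees the picture and can override.
        return not supervisor_veto
    # No human in the loop once the link drops.
    return threat_is_time_critical

print(authorize_engagement(link_up=True, supervisor_veto=True,
                           threat_is_time_critical=True))   # False: vetoed
print(authorize_engagement(link_up=False, supervisor_veto=False,
                           threat_is_time_critical=True))   # True: fires alone
```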

From an arms control standpoint, Wareham said, the new policy amounts to an "insufficient" response.