The U.S. Department of Defense released a set of guidelines for how artificial intelligence (AI) should be deployed on the battlefield. The guidelines are intended to ensure the ethical use of new technologies that increasingly define modern warfare.

Among the principles laid out is a call for personnel to “exercise appropriate levels of judgment and care.” The guidelines also state that AI must have “explicit, well-defined uses.”

Before the new guidelines, the only requirement was that humans be involved in the decision-making process, an arrangement sometimes referred to as “human in the loop.” Now, members of the military must have “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

In other words, AI will need to have an off-switch.

However, not everyone is convinced that the new guidelines are sufficient or sincere.

One concern is that the definition of “appropriate” is open to interpretation and could be applied inconsistently, with no clear standard for enforcement.

Another possibility is that the Pentagon is seeking to improve its image in Silicon Valley. In 2018, for example, Google chose not to renew a contract with the Department of Defense after a backlash among employees over a project that used machine learning to distinguish between people and objects in drone footage.

Secretary of Defense Mark Esper has been adamant that AI ought to be, and will be, a central component of future U.S. military development.