Technology

Pentagon releases new Artificial Intelligence guidelines

(Image credit: DVIDS)

The U.S. Department of Defense released a set of guidelines for how artificial intelligence (AI) should be deployed on the battlefield. These guidelines are intended to provide for the ethical use of new technologies that are increasingly defining modern warfare.

Among the principles laid out was a call for personnel to “exercise appropriate levels of judgment and care.” The guidelines also stated that AI must have “explicit, well-defined uses.”

Before the new guidelines, the only requirement was that humans be involved in the decision-making process, an arrangement sometimes referred to as “human in the loop.” Now, members of the military will have “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”


In other words, AI will need to have an off-switch.

However, not all are convinced that the new changes are sufficient or sincere.

Among the concerns is that the definition of “appropriate” is open to interpretation and could be applied without a consistent standard.

Another possibility is that the Pentagon is seeking to improve its image in Silicon Valley. In 2018, for example, Google chose not to renew a contract with the Department of Defense after a backlash among employees over a project that used machine learning to distinguish between people and objects in drone footage.

Secretary of Defense Mark Esper has been adamant that AI ought to be and will be a central component in future military developments for the United States.

“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” Esper said in a statement. He acknowledged that “AI technology will change much about the battlefield of the future.”

The weaponization of AI has sparked growing concerns about the pernicious consequences that may arise from its adoption. Figures such as Elon Musk of SpaceX and Demis Hassabis of Google DeepMind have been particularly vocal in their opposition.

Despite this, the integration of AI into the military sphere appears to be continuing unimpeded. In China, numerous start-ups have set up operations in a bid to cash in on technologies that will likely find applications in every major industry and sector of the economy.

In addition, the Marine Forces Special Operations Command (MARSOC) is increasingly exploring the possibility of using AI in its selection process. In particular, it is hoped that machine learning may help identify potential recruits with traits that are seen as beneficial to the unit.

About Naman Karl-Thomas Habtom

Naman Karl-Thomas Habtom is the Senior Vice President of the Cambridge Middle East and North Africa Forum and a writer with a focus on international affairs and security policy. He is currently pursuing graduate studies at the University of Cambridge where he is researching Swedish nuclear weapons policy during the Cold War.
