The Pentagon has been investing heavily in the development and use of artificial intelligence (AI) for military operations. In recent years, the Department of Defense (DoD) has devoted a significant share of its budget to AI-related initiatives, with plans to increase that spending further in the coming years.

The DoD says it is committed to using AI ethically and responsibly, and it has established guidelines to ensure that humans remain in control of decision-making. The Pentagon’s AI strategy focuses on developing advanced capabilities such as large language models and multimodal generative AI, which could support a range of applications, including surveillance, target identification, and autonomous weapons systems.

However, using AI in military operations carries risks. One concern is that AI-powered weapons systems could produce unintended consequences if they fail to reliably distinguish friend from foe. The technology could also create a power imbalance between countries with access to advanced AI and those without it.

“Initial trust can be gained from design and development decisions, through soldier touchpoints and basic psychological safety, and [can be] continuously calibrated through evidence of effectiveness and feedback provided by the system during integration and operation. Challenges exist in measuring warfighters’ trust, which requires additional research and understanding to define what influences it,” said Mike Horowitz, who leads the DoD’s Emerging Capabilities Policy Office.