The Pentagon has been investing heavily in developing and deploying artificial intelligence (AI) for military operations. In recent years, the Department of Defense (DoD) has allocated a significant portion of its budget to AI initiatives, with plans to increase that spending further in the coming years.

The DoD is committed to using AI ethically and responsibly and has established guidelines to ensure that humans remain in control of decision-making. The Pentagon’s AI strategy focuses on developing advanced capabilities such as large language models and multimodal generative AI, which could be applied to surveillance, target identification, and autonomous weapons systems.

There are real risks, however. AI-powered weapons systems could produce unintended consequences if they fail to distinguish friend from foe, and the technology could deepen the power imbalance between countries with access to advanced AI and those without it.

“Initial trust can be gained from design and development decisions through soldier touchpoints and basic psychological safety and continuously calibrated trust through evidence of effectiveness and feedback provided by the system during integration and operation. And so challenges exist in measuring the warfighters’ trust, which require additional research and understanding to define what influences that,” said Michael Horowitz, director of the DoD’s Emerging Capabilities Policy Office.

There are also ethical considerations surrounding the use of AI in military operations. Questions have been raised about whether it is morally acceptable to delegate life-and-death decisions to machines or algorithms. Furthermore, there are legal implications associated with the use of autonomous weapons systems that must be taken into account when developing policies and regulations related to AI in defense.

While the Pentagon’s investment in AI could bring many benefits, such as faster and more accurate decision-making, these capabilities must be developed responsibly and ethically. Governments around the world will need to work together to ensure that advances in this area are made with consideration for both national security interests and human rights concerns.

NOT ChatGPT

The Pentagon’s AI plans are definitely stirring up some buzz in the tech world. From creating autonomous vehicles to enhancing cyber capabilities, the Department of Defense is taking full advantage of the latest technological advancements. And the use of large language models (LLMs) is just another addition to their arsenal.

However, their approach is different from what you might expect.

While large language models like ChatGPT have recently taken the AI world by storm, the Pentagon is exercising caution in using them because of a phenomenon known as “artificial hallucination.” In practice, this means the models can generate fluent, confident-sounding output that is factually wrong or entirely fabricated, which could be dangerous in many military scenarios.

Instead, the Department of Defense plans to build its own large language models, trained on a carefully curated pool of sources deemed trustworthy and in line with the DoD’s values. These LLMs would have a range of uses within the military, from natural language processing and machine translation to sentiment analysis and predictive modeling.
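To make that curation idea concrete, here is a minimal Python sketch of the kind of filtering such a pipeline might start with: keeping only documents whose source URL falls on an allowlist of vetted domains. The domain list, document format, and helper names are illustrative assumptions, not anything the DoD has described.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of vetted domains; a real curation effort
# would manage and audit this list far more carefully.
APPROVED_DOMAINS = {"defense.gov", "army.mil", "rand.org"}

def is_approved(url: str) -> bool:
    """Return True if the URL's host is an approved domain or a subdomain of one."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

def filter_corpus(documents: list[dict]) -> list[dict]:
    """Keep only documents whose source URL passes the allowlist check."""
    return [doc for doc in documents if is_approved(doc["url"])]

docs = [
    {"url": "https://www.defense.gov/News/some-article", "text": "..."},
    {"url": "https://random-blog.example.com/post", "text": "..."},
]
print(len(filter_corpus(docs)))  # -> 1; only the vetted source survives
```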

“You need good data, like data that’s applicable to the questions that you want to use AI to answer,” Horowitz said. “You need that data to be cleaned, to be tagged, and that process is time-consuming. And that process has been, I think, challenging. And it’s been challenging because we build all of these sorts of pipes of data that were designed to be independent from each other.”
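Horowitz’s cleaning-and-tagging step is easy to picture in code. The sketch below shows one plausible version in Python: normalizing raw text and attaching provenance tags that downstream training jobs can filter on. The specific cleaning rules and tag names are assumptions for illustration, not a description of any actual DoD pipeline.

```python
import html
import re
from datetime import datetime, timezone

def clean_text(raw: str) -> str:
    """Decode leftover markup entities, strip stray tags, collapse whitespace."""
    text = html.unescape(raw)             # decode entities like &amp; and &nbsp;
    text = re.sub(r"<[^>]+>", " ", text)  # drop any stray HTML tags
    text = re.sub(r"\s+", " ", text)      # collapse runs of whitespace
    return text.strip()

def tag_document(doc: dict, source: str, classification: str) -> dict:
    """Attach provenance metadata; these tag names are illustrative only."""
    return {
        "text": clean_text(doc["text"]),
        "source": source,
        "classification": classification,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

record = tag_document({"text": "Joint&nbsp;exercise <b>update</b>"},
                      source="press-release", classification="unclassified")
print(record["text"])  # -> "Joint exercise update"
```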

Some military experts believe that the future of warfare will be heavily influenced by the use of AI – and large language models in particular. These tools will be particularly useful in areas where language barriers exist, such as deciphering intercepted communications from enemy forces. Additionally, LLMs can aid in the automation of tasks that would otherwise be done manually, freeing up human operators for more critical duties.
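For a concrete, if heavily hedged, illustration of the translation use case, the sketch below runs a publicly available Helsinki-NLP model through the open-source Hugging Face transformers library. It stands in for whatever purpose-built systems the military might actually field, and the sample sentence is invented.

```python
# Illustrative only: machine translation with an off-the-shelf open-source
# model, standing in for whatever purpose-built systems might actually be used.
from transformers import pipeline

# Helsinki-NLP publishes open Russian-to-English translation models.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")

intercepted = "Встреча состоится завтра в девять часов."  # invented sample text
result = translator(intercepted)
print(result[0]["translation_text"])
# Expected output along the lines of:
# "The meeting will take place tomorrow at nine o'clock."
```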

However, the use of large language models in the military has its challenges. One significant issue is that these models can inherit bias from the data they are trained on, which could have serious consequences for decision-making in the field. There is also the ever-present risk of cyberattacks and data breaches that comes with any AI system.
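One simple way to probe for that kind of bias is to run a model over template sentences that differ only in a single word and compare the outputs. The sketch below does this with an off-the-shelf sentiment pipeline from the transformers library; the template and the terms are illustrative placeholders, not a real evaluation suite.

```python
# A minimal bias probe: score sentences that differ only in one word and
# compare results. A systematic gap suggests the model has absorbed an
# association from its training data.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

TEMPLATE = "The report was written by a {} analyst."
TERMS = ["young", "senior", "foreign", "American"]  # illustrative placeholders

for term in TERMS:
    result = classifier(TEMPLATE.format(term))[0]
    print(f"{term:>10} -> {result['label']} ({result['score']:.3f})")
```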

In the end, it remains to be seen just how heavily the Pentagon’s AI plans will rely on large language models. But one thing is for sure: the military is no stranger to innovation, and the use of AI is just another step in its ongoing efforts to stay ahead of the curve. So, while ChatGPT may not be in the cards for the Department of Defense just yet, the future of AI in the military is undoubtedly an exciting one to watch.

---

Want to know more? Read Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World by Forrest E. Morgan today!