Aerial view of The Pentagon, Arlington, Virginia (Source: Mariordo Camila Ferreira & Mario Duran/Wikimedia Commons)
The Pentagon has been investing heavily in developing and deploying artificial intelligence (AI) for military operations. In recent years, the Department of Defense (DoD) has allocated a significant portion of its budget to AI-related initiatives, with plans to increase spending further in the coming years.
The DoD is committed to using AI ethically and responsibly and has established guidelines for ensuring that humans remain in control of decision-making processes. The Pentagon’s AI strategy focuses on developing advanced capabilities such as large language models and multimodal generative AI, which could be used for a range of applications, including surveillance, target identification, and autonomous weapons systems.
However, there are potential risks associated with using AI in military operations. For example, there is a concern that AI-powered weapons systems could lead to unintended consequences due to their inability to distinguish between friend and foe. Additionally, the use of AI could create a power imbalance between countries with access to advanced technology and those without it.
“Initial trust can be gained from design and development decisions through soldier touchpoints and basic psychological safety and continuously calibrated trust through evidence of effectiveness and feedback provided by the system during integration and operation. And so challenges exist in measuring the warfighters’ trust, which require additional research and understanding to define what influences that,” said Mike Horowitz, who leads the Pentagon’s Emerging Capabilities Policy Office.
There are also ethical considerations surrounding the use of AI in military operations. Questions have been raised about whether it is morally acceptable to delegate life-and-death decisions to machines or algorithms. Furthermore, there are legal implications associated with the use of autonomous weapons systems that must be taken into account when developing policies and regulations related to AI in defense.
While the Pentagon’s investment in AI for military operations could bring many benefits, such as improved efficiency and accuracy in decision-making, it is important that these capabilities are developed responsibly and ethically. Governments around the world must work together to ensure that advances in this area are made with consideration for both national security interests and human rights concerns.
NOT ChatGPT
The Pentagon’s AI plans are definitely stirring up some buzz in the tech world. From creating autonomous vehicles to enhancing cyber capabilities, the Department of Defense is taking full advantage of the latest technological advancements. And the use of large language models (LLMs) is just another addition to their arsenal.
However, their approach is different from what you might expect.
While large language models like ChatGPT have recently taken the AI world by storm, the Pentagon is exercising caution in using them due to a phenomenon known as “hallucination.” Essentially, this means that these models can generate fluent, plausible-sounding output that has no basis in reality, confidently fabricating facts, sources, or details. That kind of error can be dangerous in military scenarios where decisions hinge on accurate information.
Instead, the Department of Defense plans to build its own large language models, trained on a carefully selected pool of websites that are deemed trustworthy and in line with the DoD’s values. These LLMs would have a range of uses within the military, from natural language processing and machine translation to sentiment analysis and predictive modeling.
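To make the idea of a curated training pool concrete, here is a minimal sketch of allowlist-based corpus filtering. The domain list, record format, and function names are illustrative assumptions, not the DoD’s actual pipeline:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; the real pool of approved sources is not public.
TRUSTED_DOMAINS = {"defense.gov", "army.mil", "navy.mil"}

def is_trusted(url: str) -> bool:
    """Accept a document only if its source domain is on the allowlist."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def curate(corpus):
    """Yield only records scraped from trusted sources."""
    for record in corpus:
        if is_trusted(record["url"]):
            yield record

# Usage: filter a raw web scrape down to the approved pool before training.
raw = [
    {"url": "https://www.defense.gov/News/article-1", "text": "..."},
    {"url": "https://random-blog.example.com/post", "text": "..."},
]
print([r["url"] for r in curate(raw)])  # keeps only the defense.gov record
```

The point of filtering at the source level, rather than after training, is that whatever biases or errors exist in the pool are baked into the model’s weights and are difficult to remove later.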
“You need good data, like data that’s applicable to the questions that you want to use AI to answer,” Horowitz said. “You need that data to be cleaned, to be tagged, and that process is time-consuming. And that process has been, I think, … challenging. And it’s been challenging because we build all of these sorts of pipes of data that were designed to be independent from each other.”
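As a rough illustration of the cleaning and tagging Horowitz describes, the sketch below normalizes raw text and attaches simple metadata so records from previously independent data “pipes” can sit in one pool. The field names and tags are assumptions for demonstration, not a real DoD schema:

```python
import re

def clean(text: str) -> str:
    """Normalize whitespace and strip control characters from raw text."""
    text = re.sub(r"[\x00-\x1f]+", " ", text)   # drop control characters
    return re.sub(r"\s+", " ", text).strip()    # collapse runs of whitespace

def tag(record: dict) -> dict:
    """Attach simple metadata so downstream jobs can filter records."""
    text = clean(record["text"])
    return {
        "text": text,
        "source": record.get("source", "unknown"),
        "length": len(text),
        "language": "en",  # assumed here; a real pipeline would detect it
    }

# Usage: merge two previously independent "pipes" into one tagged pool.
pipe_a = [{"text": "Logistics  report\x07 alpha", "source": "system_a"}]
pipe_b = [{"text": " Maintenance log bravo ", "source": "system_b"}]
pool = [tag(r) for r in pipe_a + pipe_b]
print(pool[0]["text"])  # -> "Logistics report alpha"
```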
However, the use of large language models in the military has its challenges. One significant issue is the potential for these models to inherit bias from the data they are trained on, which could have serious consequences for decision-making in the field. There is also the ever-present risk of cyberattacks and data breaches that comes with any AI technology.
In the end, it remains to be seen just how heavily the Pentagon’s AI plans will rely on large language models. But one thing is for sure – the military is no stranger to innovation, and the use of AI is just another step in their ongoing efforts to stay ahead of the curve. So, while ChatGPT may not be in the cards for the Department of Defense just yet, the future of AI in the military is undoubtedly an exciting one to watch.