The following piece first appeared on Warrior Maven, a Military Content Group member website.
—
The Pentagon is preparing to fight large Terminator-type armies of autonomous armed robots, given the pace at which artificial intelligence (AI) is being accelerated and integrated into weapons systems and military technologies. It may seem like Hollywood sci-fi exaggeration, but the technology to do this, or something close to it, is essentially here, and improvements in AI-enabled algorithms are arriving quickly. While the US is carefully weighing the implications of these kinds of emerging technologies and the tactical, ethical, and conceptual complexities they present, there is little assurance that potential adversaries will view these variables through a similar ethical lens. This is widely known and discussed. Therefore, weapons developers, tacticians, technologists, and warfare commanders across all the US services are aware of the need to prepare to fight armed robots.
A significant Army intelligence report adds depth and context to these concerns by pointing to a massive discrepancy between US and Chinese concepts of warfare decision-making as it pertains to advancing technology. The report, titled “The Operational Environment 2024-2034: Large-Scale Combat Operations” (US Army Training and Doctrine Command, G2), describes this juxtaposition as a “dichotomy,” a term used to capture a massive “divergence” between US and Chinese conceptual and doctrinal approaches to the use of AI, computer automation, and autonomy. Portions of the report discuss the many variables related to both the “art” and “science” of war separating Chinese from US strategic and tactical warfare thinking.
Extensive Analysis
The report involved an integrated collection of research examining technological trends, current warfare, concepts of operation, and tactics in an effort to best anticipate the nature of combat in the coming decade. The research relied upon extensive close analysis of current wars, how emerging technologies are being used differently in combat, evolving doctrinal thinking, and a close examination of new concepts of multi-domain networking and Combined Arms Maneuver. The intel report is clear that future warfare environments will not only be more transparent but will also require forces to fight large numbers of robots, unmanned systems, and even drone swarms increasingly guided by AI.
“U.S. Soldiers should be prepared to face the threat from widely proliferated UAS. Soldiers in every type of unit and at every level should be as familiar with employing counter-UAS technologies as they are with firing their own weapons,” the report states.
After integrating various threads of thought, analysis, historical research, and recent warfare experience, the intel report points to what could be called a “paradox,” “juxtaposition,” or divergence separating US and Chinese concepts of warfare decision-making. Looking at current warfare, the report posits that some tactical circumstances may indeed require more “science” than “art,” yet it also underscores the US Army’s emphasis upon the need for human decision-making.
The Chinese, however, appear to be addressing these questions differently, as the report identifies what could be called an “abyss” separating Chinese and US concepts of operation with respect to technology and warfare decision-making. Simply put, the research says that, when it comes to decision-making in warfare, China massively favors “science” above human input, whereas US Army concepts of operation, doctrine, and approaches to warfare networking and AI place fuller emphasis on the decision-making abilities of the individual soldier.
Certainly both the US and China heavily emphasize AI, technological development and the “science” of war, yet the report finds the Chinese approach may disproportionately favor “science” at the expense of the “art” or more intuitive and human variables associated with warfare decision-making. Published information in recent years regarding emerging Chinese uses of AI, robotics, unmanned systems and networking seems to support these findings, as the People’s Liberation Army (PLA) has been “fast-tracking” dollars and emerging technologies into the military applications of AI.
The findings of the Army intel research appear grounded in a broad recognition that the PLA is intensely pursuing what the Army analysis calls “Intelligentized warfare,” a term used by the Pentagon to describe the PRC’s emphasis on networking and analyzing data across a multi-domain force.
The best way to understand the PLA’s “Intelligentized warfare” is to view it in terms of the Pentagon’s Joint All Domain Command and Control (JADC2) effort. Now being implemented, JADC2 is a multi-domain, multi-node approach to joint warfare increasingly infused with high-speed data processing, AI-enabled analysis, and interfaces and gateways that connect otherwise disparate pools or streams of data across multiple transport layers.
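To make the gateway idea concrete, here is a minimal illustrative sketch in Python, assuming a hypothetical common “track” schema and invented feed names; it is not drawn from any actual JADC2 implementation, only meant to show how per-feed translators can normalize otherwise disparate data pools into a single stream.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical common schema for a multi-domain "track"; illustrative only,
# not an actual JADC2 data standard.
@dataclass
class Track:
    source: str           # originating domain (air, land, sea, space, cyber)
    lat: float
    lon: float
    classification: str   # e.g., "drone", "rocket", "unknown"
    timestamp: float

# Each transport layer publishes data in its own native format; the gateway
# registers one translator per feed so disparate pools become one stream.
class Gateway:
    def __init__(self) -> None:
        self._translators: dict[str, Callable[[dict], Track]] = {}

    def register(self, feed: str, translator: Callable[[dict], Track]) -> None:
        self._translators[feed] = translator

    def ingest(self, feed: str, raw: dict) -> Track:
        # Normalize the feed-specific payload into the shared schema.
        return self._translators[feed](raw)

# Two invented feeds with different field names, mapped into the common schema.
gw = Gateway()
gw.register("radar_air", lambda r: Track("air", r["latDeg"], r["lonDeg"], r["type"], r["t"]))
gw.register("ground_emitter", lambda r: Track("land", r["y"], r["x"], r["label"], r["time"]))
track = gw.ingest("radar_air", {"latDeg": 36.1, "lonDeg": 44.0, "type": "drone", "t": 1700000000.0})
```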
China’s Intelligentized warfare appears to be an effort to closely copy or replicate the Pentagon’s JADC2, and it provides the technical framework within which the PLA seeks to implement AI.
“Intelligentized warfare demonstrates the importance China places on integrating AI into its military decision making in the pursuit of decision dominance in all aspects of warfare. China’s leadership is concerned about corruption within the PLA’s ranks, especially at the lower levels, and to the extent possible wants to remove the individual soldier from the decision-making process in favor of machine-driven guidance. This is in stark contrast to the U.S. Army’s way of war, which relies heavily on warfare as an artform, as the report describes. The U.S. Army sees its Soldiers as its greatest advantage in battle and relies on their intuition, improvisation, and adaptation to lead to victory,” the text of the Army’s “The Operational Environment 2024-2034: Large-Scale Combat Operations” states.
China’s disproportionate emphasis upon science in the decision-making process, the researchers determined, carries significant combat implications that need to be recognized and understood.
The US perspective, by contrast, is to fast-track AI and its successful implementation within a larger context of manned-unmanned teaming, ultimately under human supervision. Young Bang, Principal Deputy Assistant Secretary of the Army for Acquisition, Logistics and Technology, explained what could be called the Pentagon perspective quite clearly as he pointed to the importance of blending high-speed AI support with attributes unique to human decision-making.
“We have what we call in leadership is an Art and Science….and for us, right, we wanna enable a lot of the science to really accelerate that speed, whether it’s the data, the visioning, the fusion of data. So we could get insights to enable the leader or the commander to make decisions based on military experience,” Bang told Warrior in an interview. “How do we get algorithms in there that will enable us to do things much faster, efficiently, right, and give our soldiers more fighters, more bandwidth so they’re not doing menial tasks so they could actually do higher performing tasks.”
There can be scaled layers of autonomy optimizing the promise of AI, possibly even including AI-enabled algorithms for purely defensive or non-lethal uses in what the Pentagon describes as potential “out-of-the-loop” AI. However, the Pentagon remains adamant about its “human-in-the-loop” doctrinal requirement when it comes to decisions about the use of lethal force. While such applications may indeed be considered and shown capable of saving lives quickly, the Army seems to retain a healthy skepticism, or ethically driven sense of caution, regarding AI. In one sense, purely AI-driven, out-of-the-loop force could identify threats instantly and fire interceptors at incoming non-human threats such as rockets and drones in milliseconds.
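As a purely hypothetical sketch of how such scaled layers of autonomy might be gated in software, the Python fragment below auto-engages only non-human defensive threats and routes everything else to a human decision-maker. The threat categories and action names are invented for illustration; the fragment reflects only the doctrinal logic described above, not any actual Pentagon system.

```python
from enum import Enum

class ThreatKind(Enum):
    ROCKET = "rocket"
    DRONE = "drone"
    PERSONNEL = "personnel"
    UNKNOWN = "unknown"

# Assumed policy for this sketch: "out-of-the-loop" automation is limited to
# defensive interception of non-human, inbound threats.
AUTO_ENGAGE = {ThreatKind.ROCKET, ThreatKind.DRONE}

def engagement_decision(kind: ThreatKind, human_approved: bool = False) -> str:
    """Return the action for a detected threat.

    Defensive interception of non-human threats proceeds at machine speed;
    lethal force against anything else requires an explicit human decision
    ("human-in-the-loop").
    """
    if kind in AUTO_ENGAGE:
        return "intercept"               # milliseconds, no human delay
    if human_approved:
        return "engage"                  # a human authorized lethal force
    return "hold_and_alert_operator"     # escalate to a human decision-maker

assert engagement_decision(ThreatKind.ROCKET) == "intercept"
assert engagement_decision(ThreatKind.UNKNOWN) == "hold_and_alert_operator"
```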
The speed of life-saving defensive force could be paradigm-changing for force protection, yet the Army report is clear that the value of science and high-speed AI cannot “eclipse” or “replace” the primacy of human decision-making. The concern, however, is that China and other great-power adversaries may not approach these questions with a comparable ethical frame of reference or belief in the less-calculable merits of human consciousness and judgment.
“AI is great and will advance us, but from the military side, we always think about the unintended usages of that right. In this context you talked about humans out of the loop or for defensive purposes. This could easily be repurposed for other unintended reasons as well as, again, if we don’t protect or harden, that becomes a vulnerability that could be taken over,” Bang told Warrior.
The report’s findings align with Bang’s comments and also point to the Army’s cultural ethic of supporting the unique and extremely valuable, if less tangible, elements of soldier judgment, ethics, morality, and decision-making.
Sure enough, the prevailing thinking at the Pentagon regarding AI is to “merge” and “integrate” AI and human decision-making into an optimal blend of attributes, leveraging the best of each as they contribute to a larger “synergized” warfare picture. “Manned-unmanned teaming” has become a favorite term at the Pentagon, as weapons developers seek to harness the speed, processing power, and analytical efficiency of AI and autonomy, along with their ability to rapidly identify and “discriminate” targets, while simultaneously calling upon those faculties specific to human consciousness, emotion, ethics, and cognition to optimize warfare decision-making.
While the Pentagon is of course moving quickly to stay in front of the technological race for AI superiority and is quickly exploring its many applications, the Army intel report is clear in emphasizing that, ultimately, many decisions in warfare are best made with human input and discretion.
“The U.S. Army sees its Soldiers as its greatest advantage in battle and relies on their intuition, improvisation, and adaptation to lead to victory,” the Army intel report says.
The PLA and the Pentagon appear to build their respective war strategies upon different philosophical paradigms; the US is rapidly leveraging AI, yet simultaneously conscious of its limitations and the importance of key ethical parameters, while the Chinese are increasingly allowing science to play a larger determinative role. Clearly the US sees the rapid introduction of AI as critical, yet believes that, in a purely tactical sense, the combination of human faculties with high-speed AI-enabled computing will best position commanders and their forces to prevail in warfare.
The Pentagon is pursuing two clear, interwoven trajectories: one course is to rapidly push the envelope and the art of the possible regarding the merits of AI in combat, while the other is to concurrently prioritize the ethical use of AI, seek to leverage human abilities, and mitigate some of the inherent difficulties and reliability limitations associated with AI.
The US Army has made much progress in this realm, as evidenced in its annual large-scale Project Convergence exercise, which demonstrated paradigm-changing breakthroughs in the realm of human-machine interface in warfare decision-making. In Project Convergence 2020, for example, the Army used an AI system called Firestorm to instantly analyze incoming sensor data and, by bouncing information off a vast database, make recommendations to human decision-makers in a matter of seconds.
Using AI, the warfare sensor-to-shooter curve was shortened from 20 minutes down to a matter of seconds. AI-enabled sensors and computing could immediately identify and verify targets, aggregate and organize information from otherwise incompatible transport layers and pools of collected data, and make an instant “recommendation” to human decision-makers regarding how to “pair” sensors and shooters to achieve the fastest and best warfare effect. This approach maximizes the added value of high-speed, AI-enabled data processing and analysis while sustaining the primacy of human decision-making in warfare and the use of lethal force.
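Firestorm’s internals are not public; as a loose illustration of the sensor-to-shooter pairing logic described above, the sketch below (with invented Shooter and Target types and made-up parameters) recommends the fastest in-range shooter for a given target while leaving final approval to a human.

```python
from dataclasses import dataclass

@dataclass
class Shooter:
    name: str
    range_km: float
    time_to_effect_s: float    # estimated time from tasking to effect on target

@dataclass
class Target:
    name: str
    distance_km: dict[str, float]   # distance from each candidate shooter

def recommend_pairing(target: Target, shooters: list[Shooter]) -> Shooter | None:
    """Recommend the in-range shooter with the fastest time to effect.

    A human decision-maker still approves the pairing before any engagement;
    the algorithm only compresses a search that once took many minutes.
    """
    in_range = [s for s in shooters
                if target.distance_km.get(s.name, float("inf")) <= s.range_km]
    return min(in_range, key=lambda s: s.time_to_effect_s, default=None)

shooters = [Shooter("artillery_battery", 30.0, 90.0),
            Shooter("attack_helicopter", 8.0, 45.0)]
target = Target("hostile_UAS_launcher",
                {"artillery_battery": 22.0, "attack_helicopter": 12.0})
best = recommend_pairing(target, shooters)  # artillery: the helicopter is out of range
```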
The Chinese, as described in the report, appear to heavily emphasize science and AI’s role in warfare not as something to work in tandem with human cognition but rather to function as a superior priority. The research seems to suggest that, unlike the Pentagon, the PLA has less trust in its people to make decisions at the “edge of combat” and instead favors “machine-driven guidance.”
Simply put, Chinese military leaders may have less faith in the decision-making abilities of their individual soldiers and may instead favor purely scientific, AI-driven decisions.
Not only does this present ethical and doctrinal concerns, as it creates possibilities for lethal technology to be employed in an unethical or irresponsible way, but there are also technical reasons why this over-emphasis upon science may be ill-advised in combat. The most immediate consequence is simply that 100-percent computer-driven lethal force can kill non-combatants or the wrong people.
An AI system is only as effective as its database, and experience shows that AI can at times be “spoofed” or confused upon encountering information, evidence, or objects outside that database.
While US industry, academic, and military experts are working intensely to address this and engineer AI-enabled algorithms capable of correctly analyzing more subjective phenomena, AI can still be “spoofed” by an enemy, given false information, or exposed to unfamiliar indications designed to generate a false conclusion.
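One common, if crude, mitigation is to have a classifier abstain when its confidence is low, deferring unfamiliar inputs to a human analyst rather than silently guessing. The sketch below, with an assumed confidence threshold and invented labels, illustrates the general technique; it is not any specific Army system.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    return e / e.sum()

def classify_with_abstention(logits: np.ndarray, labels: list[str],
                             threshold: float = 0.85) -> str:
    """Return a label only when the model is confident; otherwise defer.

    Low peak confidence is a rough proxy for "this input looks unlike the
    training database," routing possible spoofs to a human for review.
    """
    probs = softmax(logits)
    if probs.max() < threshold:
        return "DEFER_TO_HUMAN"
    return labels[int(probs.argmax())]

labels = ["drone", "rocket", "bird"]
print(classify_with_abstention(np.array([4.0, 0.5, 0.2]), labels))  # confident: "drone"
print(classify_with_abstention(np.array([1.1, 1.0, 0.9]), labels))  # ambiguous: defers
```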
The Army is well aware of this, which is why the service has launched 100-day and 500-day “Defend AI” programs to test the limits of AI reliability and uncover solutions capable of optimizing its value, while simultaneously exploring its deficits and vulnerabilities to better “protect” and “harden” AI-enabled systems.
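The program’s actual test methods are not described in the report; as one generic illustration of how AI-reliability limits can be probed, the sketch below estimates how often small random input perturbations flip a toy model’s output, a cheap indicator of fragility under noisy or adversarial conditions.

```python
import numpy as np

def prediction_flip_rate(model, inputs: np.ndarray, noise_scale: float,
                         trials: int = 100, seed: int = 0) -> float:
    """Estimate how often mild random perturbations change a model's output.

    A high flip rate on small noise is one cheap signal that a system needs
    hardening before it can be trusted near the fight.
    """
    rng = np.random.default_rng(seed)
    baseline = model(inputs)
    flips = sum(model(inputs + rng.normal(0.0, noise_scale, inputs.shape)) != baseline
                for _ in range(trials))
    return flips / trials

def toy_model(x: np.ndarray) -> int:
    # Toy stand-in: thresholds the mean of the input vector.
    return int(x.mean() > 0.5)

x = np.full(16, 0.52)   # sits barely above the decision boundary
print(prediction_flip_rate(toy_model, x, noise_scale=0.1))  # fragile: frequent flips
```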
The intel report seems aligned with this Army effort and builds upon it, connecting the rationale of “Defend AI” with research findings that highlight more subjective, yet critical, soldier-driven decision-making as an indispensable element of emerging doctrinal and conceptual human-machine interface.
The TRADOC report seems to connect this thinking with Army doctrine and concepts of operation by finding that the US Army’s approach is quite different from Chinese thinking because “decision-making authority (within the US Army) is often delegated to lower levels as exemplified by the emphasis placed on cultivating a strong NCO Corps in the U.S. Army.”
A key goal of the Army research seems to be to encourage what the text calls “strategic empathy” and prevent “mirror-imaging,” a tendency to see an enemy through one’s own internal, subjective interpretive lens, assumptions, and biases. Certainly Chinese thinking about how science and AI should be used in war appears based upon an entirely different frame of reference.
Chinese tactics, concepts of operation and applications of AI, the report suggests, should be understood with regard to Chinese thinking and not through a narrowly focused US-lens. This “dichotomy” between US and Chinese thinking is something the US Army must understand, the report says.
“For the U.S. Army, understanding this dichotomy will help inculcate strategic empathy and avoid mirror imaging. An accurate depiction of an enemy’s strengths and weaknesses coupled with a thorough understanding of their tendencies and preferred ways of war,” the Army report states.