China’s “intelligentized warfare” appears to be an effort to closely replicate the Pentagon’s Joint All-Domain Command and Control (JADC2) concept, and it provides the technical framework within which the PLA seeks to implement AI.
“Intelligentized warfare demonstrates the importance China places on integrating AI into its military decision making in the pursuit of decision dominance in all aspects of warfare. China’s leadership is concerned about corruption within the PLA’s ranks, especially at the lower levels, and to the extent possible wants to remove the individual soldier from the decision-making process in favor of machine-driven guidance. This is in stark contrast to the U.S. Army’s way of war, which relies heavily on warfare as an artform, as the report describes. The U.S. Army sees its Soldiers as its greatest advantage in battle and relies on their intuition, improvisation, and adaptation to lead to victory.”
So states the text of the Army’s Operational Environment 2024-2034, Large Scale Combat Operations.
China’s disproportionate emphasis upon science in the decision-making process, the researchers determined, carries significant combat implications that need to be recognized and understood.
The US perspective, by contrast, is to fast-track AI and its successful implementation within a larger context of manned-unmanned teaming, ultimately under human supervision. Mr. Young Bang, the Principal Deputy Assistant Secretary of the Army for Acquisition, Logistics and Technology, explained what could be called the Pentagon perspective quite clearly as he pointed to the importance of blending high-speed AI support with attributes unique to human decision-making.
“We have what we call in leadership is an Art and Science….and for us, right, we wanna enable a lot of the science to really accelerate that speed, whether it’s the data, the visioning, the fusion of data. So we could get insights to enable the leader or the commander to make decisions based on military experience,” Bang told Warrior in an interview. “How do we get algorithms in there that will enable us to do things much faster, efficiently, right, and give our soldiers more fighters, more bandwidth so they’re not doing menial tasks so they could actually do higher performing tasks.”
There can be scaled layers of autonomy optimizing the promise of AI, possibly even the use of AI-enabled algorithms for purely defensive or non-lethal missions in what the Pentagon describes as potential “out-of-the-loop” AI. However, the Pentagon remains adamant about its “human-in-the-loop” doctrinal requirement when it comes to decisions about the use of lethal force. While such applications may indeed prove capable of saving lives quickly, the Army retains a healthy skepticism, or ethically driven sense of caution, regarding AI. In one sense, purely AI-driven “out-of-the-loop” force could identify threats instantly and fire interceptors at incoming non-human threats such as rockets and drones in milliseconds.
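The distinction lends itself to a simple illustration. The sketch below is hypothetical and not drawn from any fielded Pentagon system; the threat classes, confidence threshold and function names are all invented for illustration. It shows one way an autonomy “gate” could confine out-of-the-loop firing authority to high-confidence, non-human, inbound threats while routing everything else to a human.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ThreatClass(Enum):
    INBOUND_MUNITION = auto()   # e.g., rocket, mortar round, loitering drone
    MANNED_PLATFORM = auto()    # any target that may contain a human
    UNKNOWN = auto()

class Authority(Enum):
    AUTONOMOUS_INTERCEPT = auto()  # "out-of-the-loop": machine may fire defensively
    HUMAN_REQUIRED = auto()        # "human-in-the-loop": a person must authorize

@dataclass
class Track:
    threat: ThreatClass
    confidence: float  # classifier confidence in the threat label, 0.0 to 1.0

def engagement_authority(track: Track, min_confidence: float = 0.98) -> Authority:
    """Gate lethal force on threat type and classification confidence.

    Only high-confidence, non-human, inbound threats are eligible for
    autonomous defensive interception; everything else, including anything
    ambiguous, is escalated to a human decision-maker.
    """
    if track.threat is ThreatClass.INBOUND_MUNITION and track.confidence >= min_confidence:
        return Authority.AUTONOMOUS_INTERCEPT
    return Authority.HUMAN_REQUIRED
```

In practice any such gate would sit inside a far larger doctrinal, legal and engineering review process; the point here is only that the human-in-the-loop requirement can be expressed as an explicit, auditable rule rather than a vague aspiration.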
The speed of life-saving defensive force could be paradigm-changing for force protection, yet the Army report is clear that the value of science and high-speed AI cannot “eclipse” or “replace” the primacy of human decision-making. The concern, however, is that China and other great-power adversaries may not approach these questions with a comparable ethical frame of reference or belief in the less-calculable merits of human consciousness and judgment.
“AI is great and will advance us, but from the military side, we always think about the unintended usages of that right. In this context you talked about humans out of the loop or for defensive purposes. This could easily be repurposed for other unintended reasons as well as, again, if we don’t protect or harden, that becomes a vulnerability that could be taken over,” Bang told Warrior.
The report’s findings align with Bang’s comments and also point to the Army’s cultural ethic of supporting the unique and extremely valuable, if less tangible, elements of human soldier judgment, ethics, morality and decision-making.
Sure enough, the prevailing thinking at the Pentagon regarding AI is to “merge” and “integrate” AI and human decision-making into an optimal blend of attributes, leveraging the best of each as they contribute to a larger “synergized” warfare picture. “Manned-unmanned teaming” has become a favorite term at the Pentagon, as weapons developers seek to harness the speed, processing power and information-analysis efficiency emerging from AI and autonomy, along with their ability to rapidly identify and “discriminate” targets, while simultaneously calling upon those uniquely human faculties of consciousness, emotion, ethics and cognition to optimize warfare decision-making.
While the Pentagon is of course moving quickly to stay in front of the technological race for AI superiority and is rapidly exploring its many applications, the Army intel report is clear to emphasize that, ultimately, many decisions in warfare are best made with human input and discretion.
“The U.S. Army sees its Soldiers as its greatest advantage in battle and relies on their intuition, improvisation, and adaptation to lead to victory,” the Army intel report says.
The PLA and the Pentagon appear to build their respective war strategies upon different philosophical paradigms; the US is rapidly leveraging AI yet remains conscious of its limitations and the importance of key ethical parameters, while the Chinese are increasingly allowing science to play a larger determinative role. Clearly the US sees the rapid introduction of AI as critical, yet believes that, even in a purely tactical sense, the combination of human faculties with high-speed AI-enabled computing will best position commanders and their forces to prevail in warfare.
The Pentagon is pursuing two clear, interwoven trajectories: one course is to rapidly push the envelope and the art of the possible regarding the merits of AI in combat, while the other is to concurrently prioritize the ethical use of AI, leverage human abilities and mitigate some of the inherent difficulties and reliability gaps associated with the use of AI.
The US Army has made much progress in this realm, as evidenced in its annual large-scale Project Convergence exercise, which has demonstrated paradigm-changing breakthroughs in human-machine interface for warfare decision-making. In Project Convergence 2020, for example, the Army used an AI system called Firestorm to instantly analyze incoming sensor data and, by bouncing information off a vast database, make recommendations to human decision-makers in a matter of seconds.
Using AI, the warfare sensor-to-shooter curve was shortened from 20 minutes down to a matter of seconds. AI-enabled sensors and computing could immediately identify and verify targets, aggregate and organize information from otherwise incompatible transport layers and pools of collected data, and make an instant “recommendation” to human decision-makers regarding how to “pair” sensors and shooters quickly to achieve the fastest and best warfare effect. This approach maximizes the added value of high-speed AI-enabled data processing and analysis while sustaining the primacy of human decision-making in warfare and the use of lethal force.
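Firestorm’s internals are not public, so the following is only a minimal sketch of the general sensor-to-shooter “pairing” pattern described above; the Shooter and Target fields, ranges and timings are invented assumptions. The point is the shape of the logic: the machine ranks feasible pairings near-instantly and surfaces a recommendation, while the firing decision stays with a person.

```python
from dataclasses import dataclass

@dataclass
class Shooter:
    name: str
    range_km: float        # maximum effective range
    time_to_fire_s: float  # estimated seconds from tasking to effect

@dataclass
class Target:
    name: str
    distance_km: float     # distance to shooters, simplified to one value here

def recommend_pairing(target: Target, shooters: list[Shooter]) -> list[Shooter]:
    """Return shooters able to engage the target, fastest first.

    This is a *recommendation* only; a human decision-maker still approves
    any actual use of lethal force.
    """
    capable = [s for s in shooters if s.range_km >= target.distance_km]
    return sorted(capable, key=lambda s: s.time_to_fire_s)

shooters = [
    Shooter("howitzer_battery", range_km=30.0, time_to_fire_s=45.0),
    Shooter("attack_helicopter", range_km=8.0, time_to_fire_s=120.0),
    Shooter("loitering_munition", range_km=40.0, time_to_fire_s=20.0),
]
ranked = recommend_pairing(Target("hostile_radar", distance_km=25.0), shooters)
print([s.name for s in ranked])  # fastest capable option listed first
```

Even this toy version makes clear where the 20-minutes-to-seconds compression comes from: filtering and ranking pairings is exactly the kind of menial, high-volume task machines do faster than staff officers.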
The Chinese, as described in the report, appear to heavily emphasize science and AI’s role in warfare not as something to work in tandem with human cognition but rather to function as the superior priority. The research seems to suggest that, unlike the Pentagon, the PLA has less trust in its people to make decisions at the “edge of combat” and instead favors “machine-driven guidance.”
Simply put, Chinese military leaders may have less faith in the decision-making abilities of their individual soldiers and instead favor purely scientific, AI-driven decisions.
Not only does this present ethical and doctrinal concerns, as it creates possibilities for lethal technology to be employed in an unethical or irresponsible way, but there are also technical reasons why this over-emphasis upon science may be ill-advised in combat. The most immediate consequence is simply that 100-percent computer-driven lethal force can kill non-combatants or the wrong people.
An AI system is only as effective as its database, and experience shows that AI can at times be “spoofed” or confused upon encountering information, evidence or objects that are not part of its database.
While US industry, academic and military experts are working intensely to address this and engineer AI-enabled algorithms capable of correctly analyzing more subjective phenomena, AI can still be “spoofed” by an enemy, given false information or exposed to unfamiliar indications designed to generate a false conclusion.
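One common, if crude, defense against this kind of spoofing is to have a model abstain when its confidence is low and refer the input to a human. The sketch below is illustrative only and is not the Army’s actual method; the labels and threshold are invented, and softmax confidence is just one simple proxy for detecting inputs unlike the training database.

```python
import numpy as np

def classify_with_abstention(logits: np.ndarray, labels: list[str],
                             threshold: float = 0.90) -> str:
    """Refuse to auto-classify when the model is not confident.

    Low softmax confidence is a crude signal that the input is unfamiliar,
    ambiguous or adversarial; rather than acting on a possibly wrong label,
    the system routes the decision to a human.
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = int(probs.argmax())
    if probs[best] < threshold:
        return "REFER_TO_HUMAN"  # unfamiliar or spoofed input suspected
    return labels[best]

# A decoy engineered to sit between classes yields low confidence:
labels = ["civilian_vehicle", "armored_vehicle", "decoy"]
print(classify_with_abstention(np.array([2.1, 2.0, 1.9]), labels))  # REFER_TO_HUMAN
print(classify_with_abstention(np.array([8.0, 1.0, 0.5]), labels))  # civilian_vehicle
```

The obvious limitation, well known in the research community, is that a sophisticated adversary can craft inputs that are both wrong and high-confidence, which is precisely why hardening efforts go well beyond simple thresholds.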
The Army is well aware of this, which is why the service has launched 100-day and 500-day “Defend AI” programs to test the limits of AI reliability and uncover solutions capable of optimizing its value while simultaneously exploring its deficits and vulnerabilities to better “protect” and “harden” AI-enabled systems.
The intel report aligns with this Army effort and builds upon it by connecting the rationale of “Defend AI” with research findings that highlight more subjective yet critical soldier-driven decision-making as an indispensable element of emerging doctrinal and conceptual human-machine interface.
The TRADOC report seems to connect this thinking with Army doctrine and concepts of operation by finding that the US Army’s approach is quite different from Chinese thinking because “decision-making authority (within the US Army) is often delegated to lower levels as exemplified by the emphasis placed on cultivating a strong NCO Corps in the U.S. Army.”
A key goal of the Army research seems to be to encourage what the text calls “strategic empathy” and prevent “mirror-imaging,” a tendency to see an enemy through one’s own internal, subjective interpretive lens, assumptions and biases. Certainly Chinese thinking about how science and AI should be used in war appears based upon an entirely different frame of reference.
Chinese tactics, concepts of operation and applications of AI, the report suggests, should be understood with regard to Chinese thinking and not through a narrowly focused US-lens. This “dichotomy” between US and Chinese thinking is something the US Army must understand, the report says.
“For the U.S. Army, understanding this dichotomy will help inculcate strategic empathy and avoid mirror imaging. An accurate depiction of an enemy’s strengths and weaknesses coupled with a thorough understanding of their tendencies and preferred ways of war,” the Army report states.