What if a US Navy unmanned surface vessel’s vertical towed-array sonar, trailing beneath the surface, detects an enemy submarine in position to attack US surface warships with torpedoes and instantly shares that time-sensitive data with undersea and aerial drones positioned to respond? A forward-positioned undersea drone might either function as an explosive to attack the enemy submarine itself or transmit targeting data to a US Navy submarine able to attack from safer stand-off distances. An aerial drone or helicopter, cued by location data from the surface and undersea drones, might use laser scanning and EO/IR targeting to find and destroy the enemy submarine when it comes close to the surface. Most significantly, the decision to attack and destroy a manned enemy submarine using this processed and networked intelligence can be made by human decision-makers performing command and control from the surface, air or undersea. This kind of scenario, drawing upon both the speed of AI-enabled data processing and human decision-making faculties, is precisely the kind of Concept of Operation now being pursued by US weapons developers both within and across the military services. With the US military advancing these technologies and concepts at lightning speed, many are likely to wonder what US adversaries such as Russia and China are doing in this area.

The Role of AI in Modern Warfare

“Manned-unmanned teaming” and “human-machine interface” could be described as Pentagon “favorite terms” in the realm of weapons development and concepts of operation aimed at preparing for future warfare. Such an approach is deeply grounded in a clear recognition that any future warfare engagement is best approached using a carefully blended or integrated combination of high-speed, AI-enabled analytics, autonomy and robotics, and certain attributes unique to human decision-making. The conceptual, even “philosophical,” foundation of this approach maintains that specific functions, including data organization, analysis, high-speed processing and problem solving, can be performed exponentially faster and more efficiently by computers than by humans. At the same time, Pentagon weapons developers operate with the widespread recognition that there are faculties and attributes specific to human consciousness and cognition that mathematically generated computer algorithms simply cannot replicate.

Chinese AI

This US approach continues to generate promising combinations of next-generation technology and human-envisioned concepts of operation in preparation for future armed conflict, yet will US adversaries approach this critical and nuanced blending of manned-unmanned teaming in a similar fashion? Perhaps not, according to a significant new Army intelligence report publishing research findings on the anticipated combat environment expected to define the coming decade. Among many things, the Army’s “The Operational Environment 2024-2034: Large-Scale Combat Operations” (US Army Training and Doctrine Command, G2) examines robotics, AI, unmanned systems, sensing, weapons usage and evolving doctrinal and strategic adjustments to new threats. Major rivals such as the People’s Republic of China, the Army report maintains, are pursuing manned-unmanned teaming with comparable intensity. In particular, it appears the PLA is attempting to replicate or copy the fast-evolving US progress connecting manned and unmanned systems across multiple domains simultaneously.

“China is focused on developing teaming software that could be used for unmanned underwater and surface vessels under multiple configurations. It is funding research in manned-unmanned teaming, which could provide significant battlefield gains as neither a human nor machine acting on its own is as effective as both working in tandem,” the report states.

The text of the report also examines some of the variations, complexities and differing approaches informing how countries will integrate AI and unmanned systems into their Concepts of Operation. One key finding, according to the report, is not only that future warfare will be driven by AI, unmanned systems and ubiquitous “sensors” creating a “transparent” battlefield, but that major adversaries or rivals such as China appear to be prioritizing the “science” of AI, autonomy and computing above the “art,” or human components, of combat decision-making. This emphasis introduces key implications addressed in the report.

“China’s leadership is concerned about corruption within the PLA’s ranks, especially at the lower levels, and to the extent possible wants to remove the individual soldier from the decision-making process in favor of machine-driven guidance. This is in stark contrast to the U.S. Army’s way of war, which relies heavily on warfare as an artform. The U.S. Army sees its Soldiers as its greatest advantage in battle and relies on their intuition, improvisation, and adaptation to lead to victory,” states the text of the Army’s “The Operational Environment 2024-2034, Large-Scale Combat Operations.”

Can advanced AI-enabled algorithms incorporate the more subjective phenomena fundamental to human decision-making, such as emotion, ethics and the mix of variables informing human psychology? This belief in the primacy of human decision-making, Pentagon weapons developers maintain, is particularly critical when it comes to decisions about the use of lethal force. This does not mean that AI-enabled computing cannot perform time-sensitive warfare tasks with accuracy, precision and speed, but rather that an “optimal” approach to warfighting and modern Combined Arms Maneuver requires a key mixture of the best of both AI-empowered systems and human cognition. Sure enough, advanced US weapons developers and industry partners are making progress on advanced algorithms increasingly able to make what could be called more “subjective” determinations, such as discerning the difference in meaning between a dance “ball” and a tennis “ball” by examining a wide range of variables, including context and surrounding words. This being said, many are of the view that even with advancing, next-generation AI algorithms looking more holistically at a host of variables and indicators in relation to one another in real time, human consciousness simply cannot be “replicated” by computers. This is particularly significant, the Army report indicates, when it comes to decisions about lethal force and the value of humans weighing and analyzing the “art” of war alongside the “science.”
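The dance-“ball” versus tennis-“ball” distinction is, in computing terms, a word-sense disambiguation problem. A minimal sketch of the underlying idea, letting surrounding words select a meaning, is shown below; the keyword sets and function names here are purely illustrative assumptions, not drawn from any actual defense system, and production systems would use learned contextual embeddings rather than hand-built word lists.

```python
# Illustrative word-sense disambiguation: choose the sense of an ambiguous
# word ("ball") by scoring overlap between the surrounding words and a
# hand-labeled keyword set for each candidate sense. The keyword sets below
# are hypothetical examples chosen only to make the sketch self-contained.

SENSE_KEYWORDS = {
    "dance": {"waltz", "gown", "orchestra", "formal", "evening", "dance"},
    "sports": {"tennis", "racket", "serve", "court", "match", "bounce"},
}

def disambiguate(sentence: str, target: str = "ball") -> str:
    """Return the sense whose keyword set best overlaps the sentence context."""
    # Normalize to lowercase words, strip basic punctuation, drop the target.
    context = {w.strip(".,!?").lower() for w in sentence.split()} - {target}
    scores = {sense: len(context & kws) for sense, kws in SENSE_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(disambiguate("She wore a gown to the formal ball"))     # -> dance
print(disambiguate("He hit the tennis ball across the court"))  # -> sports
```

The broader point in the text holds: the program resolves the ambiguity only by mechanical overlap counting, with no understanding of either ballroom or tennis, which is precisely the gap between algorithmic pattern matching and human cognition that the report emphasizes.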