Paper #18. Artificial Intelligence & the Need for Proactive Doctrine

  • Thomas A. Drohan, Ph.D., Brig Gen USAF ret.
  • Leadership, Strategy

US Joint Operations doctrine about the Operational Environment (OE) omits the agency of artificial intelligence (AI). How is this a problem?

After all, Joint Publication 3-0 (pp. IV-1 to IV-2) defines the Information Environment (IE) as expansively as it ever has, to include “cognitive” attributes. Note in particular the cognitive and human aspects in this excerpt:

The information environment comprises and aggregates numerous social, cultural, cognitive, technical, and physical attributes that act upon and impact knowledge, understanding, beliefs, world views, and, ultimately, actions of an individual, group, system, community, or organization. The information environment also includes technical systems and their use of data. The information environment directly affects all OEs.  Information is pervasive throughout the OE. To operate effectively requires understanding the interrelationship of the informational, physical, and human aspects that are shared by the OE and the information environment. Informational aspects reflect the way individuals, information systems, and groups communicate and exchange information. Physical aspects are the material characteristics of the environment that create constraints on and freedoms for the people and information systems that operate in it. Finally, human aspects frame why relevant actors perceive a situation in a particular way. Understanding the interplay between the informational, physical, and human aspects provides a unified view of the OE.

The problem arises when we see AI only as a human-made technical system and not as a source of influence, that is, agency. If this view becomes conventional wisdom, we have a significant vulnerability in the operational IE. Why?

AI has become a source of influence in critical operations.

AI Influences

Consider a few examples.

Multi-Domain Operations

Multi-Domain Operations rely on technology with humans in the loop, but through interfaces that essentially present machines’ conclusions. Increasingly, human decisions are based on information from machine-processed data, not on the data itself. How many pilots mentally compute a fix-to-fix rather than accepting route guidance from integrated avionics? There’s often no time for the former as 5th-generation operators orchestrate multi-source situational awareness. The more we rely on technology, the more we accept conclusions presented to us by machine processing.
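
To make that concrete, here is a minimal, hypothetical sketch of the arithmetic behind a no-wind fix-to-fix, the kind of conclusion integrated avionics simply present as a steering cue. The function name, the flat-plane simplification, and the example fixes are assumptions for illustration, not actual avionics logic.

```python
import math

def fix_to_fix_heading(cur_radial_deg, cur_dme_nm, tgt_radial_deg, tgt_dme_nm):
    """Rough no-wind heading from one radial/DME fix to another.

    Teaching sketch only: fixes are plotted on a flat plane around the
    station, with radials measured clockwise from north.
    """
    def to_xy(radial_deg, dme_nm):
        theta = math.radians(radial_deg)
        return dme_nm * math.sin(theta), dme_nm * math.cos(theta)

    x1, y1 = to_xy(cur_radial_deg, cur_dme_nm)
    x2, y2 = to_xy(tgt_radial_deg, tgt_dme_nm)

    # Bearing from the current fix to the target fix, clockwise from north.
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360

# Example: from the 090 radial at 30 DME to the 360 radial at 20 DME.
print(round(fix_to_fix_heading(90, 30, 360, 20)))   # ~304 degrees
```

The geometry is not the point; the point is that the operator who accepts the cue never sees these intermediate steps or the assumptions (no wind, flat plane, accurate DME) baked into them.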

Other critical examples are Agile Combat Support (ACS) and Agile Combat Employment (ACE).

Agile Combat Support

ACS is the creation and lifeblood of rapid, sustained deployments. An agile combat support enterprise is a huge part of USAF Chief of Staff General David Goldfein’s call “to design the Air Force of the future in alignment with the National Defense Strategy.” More automation of operational testing and software development is underway. As Undersecretary of Defense for Acquisition and Sustainment Ellen Lord put it, software is defining combat systems and hardware is enabling them. AI that writes code leads to AI that writes software.

Agile Combat Employment

ACE is critical to conducting distributed operations. Effective operations require resilient communications as threats target our platforms and linkages. In rapidly changing, highly contested environments, decisions to switch routing among nodes are shaped by machine processing and confirmed by humans. Who checks the assumptions of those algorithms? Hackers looking for zero-day exploits certainly do. Many other desired objectives, such as reducing deployment footprints, also rely on technologies like small smart munitions and distribution systems (11, 15, 16).
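
To illustrate where such unchecked assumptions live, here is a minimal sketch of the node-switching decision described above, reduced to a shortest-path choice. The node names, the link costs, and the shortest-path rule itself are hypothetical; what matters is that the "best" route depends entirely on inputs the algorithm trusts.

```python
import heapq

# Hypothetical topology and advertised link costs; the algorithm trusts whatever it is told.
LINKS = {
    "main_base":    {"relay_a": 2, "relay_b": 3},
    "relay_a":      {"forward_site": 4},
    "relay_b":      {"forward_site": 1},
    "forward_site": {},
}

def best_route(links, src, dst):
    """Dijkstra shortest path over advertised link costs."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, link_cost in links[node].items():
            heapq.heappush(queue, (cost + link_cost, nxt, path + [nxt]))
    return None

print(best_route(LINKS, "main_base", "forward_site"))
# (4, ['main_base', 'relay_b', 'forward_site'])
# Spoof relay_b's advertised cost and the "best" route changes without the
# algorithm ever being wrong by its own logic: the unchecked assumption
# (honest link costs) is the vulnerability a human confirmer rarely sees.
```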

Micro munitions and the distributed ability to employ them provide kinetic options. This technology influences decision makers’ thinking about whether and how to intervene. Similarly, the availability of other AI-enabled capabilities (bots, DDoS attacks) generates options for a multitude of other actors in the IE. All of them can create unpredictable effects.

Doctrine Reacts

So, why doesn’t our doctrine recognize that AI can be a cognitive actor, something more than a set of predictable, pre-programmed algorithms? One answer is that doctrine encourages drawing lessons learned from past experience rather than anticipating alternative futures:

Joint doctrine presents fundamental principles that guide the employment of US military forces in coordinated and integrated action toward a common objective. It promotes a common perspective from which to plan, train, and conduct military operations. It represents what is taught, believed, and advocated as what is right (i.e., what works best). It provides distilled insights and wisdom gained from employing the military instrument of national power in operations to achieve national objectives.  

Such retrospection becomes institutionalized in how we see our professional identities and related warfighting platforms, even as we watch the future emerge right in front of us. Take social media. Algorithms that create filter bubbles of our own preferences effectively monetize us once we start making predictable decisions. Did our doctrinal approach of distilling principles from past practice help us anticipate this, or other non-monetizing behavioral applications of AI technology?

Clearly not. The way we develop and vet doctrine is thoroughly reactive. Fixating on past experience becomes a bigger problem in environments where humans aren’t the most effectively intelligent actors.
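
A toy sketch makes the filter-bubble loop just described explicit. The topics, the exploration rate, and the assumption that every recommendation gets clicked are invented for illustration; the mechanism, serving more of whatever we already chose, is what quietly outpaces a reactive doctrine.

```python
import random
from collections import Counter

TOPICS = ["sports", "politics", "aviation", "cooking"]

def recommend(click_history, exploration=0.1):
    """Mostly serve the topic the user already clicks; rarely explore."""
    if not click_history or random.random() < exploration:
        return random.choice(TOPICS)
    return Counter(click_history).most_common(1)[0][0]

history = []
for _ in range(50):
    item = recommend(history)
    history.append(item)              # assume every recommendation is clicked

print(Counter(history))               # one topic quickly dominates the feed
```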

Humans Control?

Super AI intelligence is not only coming; it is here. Machines are more effectively intelligent than humans in areas such as experience-based gaming (chess, Go), operations research and data science solutions (optimization problems), and maximizing utility functions as a rational actor (the decision-theoretic agent). By effectively intelligent, we mean data in context (intelligence) relevant to creating effects. The term is similar to effective intelligence in US Marine Corps doctrine (Chapter 3), but more broadly produced and provided. Back to our initial question: how is the agency of AI a problem?
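
As an aside for readers unfamiliar with that last example, here is a minimal sketch of a decision-theoretic agent: it simply picks whichever action maximizes expected utility. The actions, probabilities, and utility values are invented for illustration only.

```python
# Hypothetical actions with (probability, utility) outcome pairs.
ACTIONS = {
    "strike_now":   [(0.6, 10), (0.4, -20)],
    "hold_and_jam": [(0.9, 4),  (0.1, -2)],
    "do_nothing":   [(1.0, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Rational-actor rule: pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

for action, outcomes in ACTIONS.items():
    print(f"{action}: EU = {expected_utility(outcomes):+.1f}")
print("chosen:", choose(ACTIONS))     # hold_and_jam (EU = +3.4)
```

Note that the agent's rationality is only as good as the utility function and probability estimates it is handed, which is exactly where the question of control begins.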

AI threatens human control, even in symbiotic machine-human systems wherein machines learn what human preference structures are and obey them. Stuart Russell asks the important question: is there any biological example of a symbiotic relationship in which the less intelligent being is in control of the more intelligent one?

We can answer this question yes, if we make a distinction between narrow intelligence (NI) and general intelligence (GI).

NI can eat GI

According to Max Tegmark, narrow AI refers to an ability to accomplish a narrow set of goals, such as driving a car. General intelligence is an ability to accomplish any goal, which is to say, an ability to learn. We can find examples of predators (orcas) and parasites (barnacles) that are more capable at narrow skills (hunting and infesting) than their generally more intelligent prey (blue whales) and hosts (crabs). The intelligence of orcas and barnacles is more effective in contexts, that is, environments, where they can act as predators and parasites.

Therefore it’s not just super AI (what Tegmark refers to as general intelligence beyond human levels) that threatens human control. We have seen that relatively obscure advantages in learning can be remarkably resistant to control by “great” powers. Particularly when used together, Diplomatic-Informational-Military-Economic-Social (DIMES) advantages can create synergistic combined effects.

For instance, a superior ability to understand tribal morality (a Social power advantage), exploit disinformation (an Informational power advantage), and create encrypted wealth (an Economic power advantage) can render high-profile Diplomacy and Military preeminence ineffective. But under what environmental (natural and artificial) conditions?

Humans, with our neurally networked brains, prefer to think that we are singularly capable of anticipating and shaping those relevant conditions. However, we don’t understand how our most advanced AI (deep neural networks) actually works. Hence we ought to be very concerned about any uncertainties of command and control over AI-enabled threats, such as autonomous weapons of mass destruction.

Our struggle for a theoretical understanding of AI has two practical implications. First, we should prepare for the likelihood that AI will not be wholly controllable. This is not unlike accounting for human agency. Second, we should characterize the complex features of the operational IE to include the impact of AI. Treating AI as a potential actor among interconnected, networked systems can help identify emerging threats and opportunities, especially the ones that are “not in my lane.”

Recommendations

Given the substantial research being conducted on how to control AI, we recommend that military doctrine be organizationally broadened and future-oriented by:

  • Placing military doctrine in a broader scope at organizational levels below the National Security Council, which is where authorities presumably can combine DIMES-wide effects today. How? Aligning desired effects across integrated National Military, Defense, and Security Strategies is a start. Currently, these strategies generally omit effects.
  • Emphasizing future-casting and sense-making in military doctrine. Characterizing, anticipating, and influencing dynamic changes in the operational IE is necessary for relevant operations. How? With doctrine that addresses the reality of complex warfare by shaping more than military conditions. This requires civil-military collaboration under authorities and permissions that empower initiative and innovation.

A place where military doctrine begins to do this is JP 3-0 and related doctrine on operational design and joint planning.

Designing and planning is a process of interpreting strategic priorities into desired end states and missions, which in turn achieve supporting objectives and derived effects through tasks and activities. Interpreting is more than literal translation, and it is necessary when priorities are not specified, as they typically are not at the political level. Unfortunately, current joint military doctrine defines end states narrowly as “military” end states, even though military effects certainly are not limited to military contexts.

Nevertheless, strategists, planners, operators, and analysts can think about and recommend how to create alternative futures by specifying ends (“Guidance” in the figure below) and shaping conditions via ways and means (“Operational Approach” in the figure below).


If we are not doing this, we are far behind our competitors. Now, add not only “non-military” human actors but also advanced machine-learning systems as agents. What happens to our strategy-making process? We have to make and test new assumptions about emerging possibilities. We can do this via courses of action that are “red-teamed” from multiple actors’ perspectives (beyond red). This needs to be done using any available idea and resource, then assessed with respect to risks. Otherwise, wargaming tends to become rehearsing pre-planned sequences.

Proactive doctrine can exploit what is, so far, a human comparative advantage in strategic learning. It’s notable that machines currently excel in experience-based learning. This is no small matter, as it includes generating new syntheses of discovered relationships. Humans, however, can intuit, deceive, create, and destroy machines. In time, AI will be able to perform those cognitive, informational, and physical functions as well.

