This paper is a follow-on to Paper #54, Operations Need Competitive Initiative in a Multi-dimensional Strategy. The focus is on operations in the information environment (IE), where AI can generate inexplicable solutions and problematic strategies.
Principles, lessons learned, and habits often endure well past their competitive potential. Scientific methods can test them for falsifiability, but most people stick to legacy ways and selectively present evidence to “prove” them. AI is changing how humans understand information, intelligence, and knowledge, but not necessarily what we believe. Time-“proven” principles will endure, creating plenty of vulnerabilities and opportunities for advantage.
Consider a few long-accepted principles about the art of war from two venerable strategists—Sunzi and Carl von Clausewitz. They expanded the narrower perspectives of their times and focused on winning wars. This broad approach to specific operations is essential to winning in dynamic, competitive environments. It’s also an effective approach to learning, as in our Information Environment Advanced Analysis course.
Sunzi’s Art of War proposes principles for warfare—the how of war. Most of them exploit deception, such as this extract from Chapter 5 (Energy):
Indirect tactics, efficiently applied, are inexhaustible as Heaven and Earth, unending as the flow of rivers and streams; like the sun and moon, they end but begin anew; like the four seasons, they pass away to return once more.
There are not more than five musical notes, yet the combinations of these five give rise to more melodies than can ever be heard.
There are not more than five primary colors (blue, yellow, red, white, and black), yet in combination, they produce more hues than can ever be seen.
There are not more than five cardinal tastes (sour, acrid, salt, sweet, bitter), yet combinations of them yield more flavors than can ever be tasted.
In battle, there are not more than two methods of attack: the direct and the indirect; yet these two, in combination, give rise to an endless series of maneuvers.
The direct and the indirect lead to each other in turn. It is like moving in a circle—you never come to an end. Who can exhaust the possibilities of their combination?
Clausewitz’s On War is also filled with principles for warfare, but he begins with the why of war. In this regard, Clausewitz emphasized motives and manifestations of war. A fundamental assumption is that victory in war requires breaking an opponent’s will to resist. Three principles are central to this contest: polarity, policy, and the “paradoxical” trinity (see pp 82-89 in the Howard and Paret edition).
Polarity. This principle applies when two commanders seek the same aim, whether an object or a relationship. For instance, polarity applies to destructive victory in battle when both sides seek it. Destroying enemy forces cancels out the enemy’s aim. Polarity does not apply to offense and defense because those are different aims. Not every offensive advantage cancels out a defensive advantage. However, polarity does apply to the higher aims that offense and defense seek, beyond causing and preventing destruction. This crucial point is about winning wars beyond battles and campaigns, and relates to policy.
Policy. War is a continuation of policy by other means, such as offense and defense. Clausewitz held that defense is the stronger form of war; its superiority undercuts the zero-sum nature of polarity, which explains inaction in war and the need to assess the probabilities of achieving one’s aim. Probability or chance is one of the three elements of war’s enduring, diverse nature.
Nature of War. The paradoxical trinity is a concept of why war takes the form that it does. Unlike Sunzi’s prescriptions, it does not prescribe how to conduct warfare. The three elements are: primordial violence, hatred, and enmity; the play of chance and probability; and war’s subordination to reason as an instrument of policy.
Clausewitz asserts the need to keep these three tendencies in balance, a concept consistent with the Newtonian physics of his day.
Note: the trinity is about these dynamic conditions of war, not the agents he illustratively mentions (people, army, government). That is, primordial violence is not limited to the “people”—armies and governments are also prone to escalatory and emotional violence. Chance and probability assessments are not limited to military commanders and operators; people and governments take calculated risks as agents of war, too. Subjecting war to reason is not limited to governments—people and militaries use various hierarchies of effort to align their wartime activities.
AI explodes the principles from Sunzi and Clausewitz into more possibilities.
Sunzi’s endless combinations of tactics are no exaggeration. They are arguably even more valid today, based on many more distinctions, empirical (notes, colors, tastes, and anything that can be sensed) and conceptual (direct, indirect, and inexplicable). This explosion provides more diverse options in warfare.
Clausewitz’s trinity has many manifestations today, but one of its elements is invalid. War no longer requires escalatory primordial (lethal) violence. Other forms of physical and psychological violence, including what international law does not regard as violence, can achieve the effects of war aims.
Think influence activities that change individual and group behavior, grey zone operations that topple regimes, cyberattacks that destroy energy distribution, disinformation that realigns alliances, and hybrid warfare that polarizes hyper-connected societies. The nature of war has changed as technology has created more vulnerable conditions.
Imagine what an AI trained in the above distinctions might advise.
Sunzi’s indirect tactics rely on detectable distinctions and operational ways defined by their relation to strategic goals. Clausewitz provides conditions of war. An AI in the IE will learn patterns from past behavior and recommend new combinations, even syntheses.
This expansion of possibilities is not new. History offers plenty of examples of time-honored principles that adversaries summarily disproved through superior operations.
What’s different in the Age of AI? The speed and scale at which AI can develop inexplicable solutions for complex problems will create new threats and opportunities with unprecedented impact. Nearly every solution creates new problems, particularly if we don’t understand, or don’t have time to learn, how the AI developed the solution.
Are we prepared? The IE has subsumed the operational environment (OE) to the point that information and operations are the same process. Processes explain change and lack of change. Yet we cling to the priority that information supports operations, not the reverse. In any environment, we must learn and assess “info in ops” (as recent doctrine authorizes) and “ops in info” to recognize and win the wars already being waged.
How can we gain advantage in a pervasive, highly competitive Artificial Intelligence Information Environment (A2E)? DoD guidance on training and education expires this year. We are well past the time for a whole-of-government effort to out-perform authoritarian threats. We need competitive and effective strategy beyond our narrow “when-deterrence-fails” mindset.
First, we must recognize the creation of information from data as a contest in learning, and assess it. Think about how we create understanding and why we operate. Both processes are systems of inputs, changes, and outputs.
The operation could be supporting or supported, depending on the concept of influence and desired effects. We should treat activities as inputs that change conditions, generating outputs that inform our subsequent inputs. That operating process can create informational and operational initiative, whether the activities are tactics, operations, strategies, or even policies. A minimal sketch of such a cycle follows.
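Here is a minimal Python sketch of that input-change-output cycle, assuming a toy model; all names (run_feedback_loop, assess, audience_sentiment) are invented for illustration, not drawn from doctrine:

```python
# Minimal sketch of an operation as a system: inputs change conditions,
# outputs feed back into the next input. All names are illustrative.

def run_feedback_loop(conditions, activities, assess):
    """Apply each activity (input) to the conditions, observe the
    output, and let the assessment inform the next input."""
    for activity in activities:
        output = activity(conditions)             # input -> change -> output
        conditions = assess(conditions, output)   # feedback informs next input
    return conditions

# Toy usage: 'conditions' is a dict; each activity nudges a value.
conditions = {"audience_sentiment": 0.0}
activities = [lambda c: 0.1, lambda c: 0.2]       # illustrative effects
assess = lambda c, out: {**c, "audience_sentiment": c["audience_sentiment"] + out}
print(run_feedback_loop(conditions, activities, assess))
```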
Second, operators must be aware of vulnerabilities and empowered to seize opportunities as opponents try to disrupt, transform, or destroy their ops. In a permeable IE, adversaries target our ops’ inputs (such as deployment activities), changes (such as on-site employment activities), and/or outputs (such as digital dust).
Every activity we conduct in any operational stage is an input that changes something and produces an output. Any tactic may have strategically significant consequences. We should be targeting our opponents’ information and operations systematically, too—inputs, changes, outputs, and feedback. This effort requires a competitive level of awareness with appropriate permissions and authorities. That requires whole-of-government collaboration.
We’re back to the classic blend of broad perspectives and specific operations.
So, how can we gain and maintain initiative in our operations, or at the very least in what JP 3-04 emphasizes: information in operations?
Information operates differently in analog and digital contexts. This distinction benefits from Karl Popper’s insight that problems can be analyzed as analog “clocks” (complicated and deterministic) or digital “clouds” (complex and uncertain); see the YouTube video by Thomas Koulopoulos, author of Revealing the Invisible, for a brief introduction.
It’s operationally helpful to conceptualize the IE in terms of continuous (analog, interfaced parts, mechanical clock-type) and discrete (digital, 1’s and 0’s, networked cloud-type) features.
One common comparison of analog and digital features is vinyl media compared to digitized music. The physical pressing of a sound wave on a vinyl record is technically a smooth, continuous reproduction of the music—analog. Digitized audio information is not as continuous because the data is technically discrete—1’s and 0’s that sample the actual sound wave.
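As a minimal illustration of that difference, the Python sketch below samples a continuous tone at discrete instants and quantizes each sample to 8-bit values; the tone frequency, sampling rate, and bit depth are illustrative choices, not standards from the text:

```python
import math

# Sample a continuous 1 kHz sine wave (the "analog" signal) at discrete
# instants, then quantize each sample to 8-bit integers -- the two steps
# that make digitized audio discrete rather than continuous.
FREQ_HZ = 1000        # illustrative tone
SAMPLE_RATE = 8000    # illustrative sampling rate
LEVELS = 256          # 8-bit quantization

def sample_and_quantize(n_samples):
    digital = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                               # discrete instant
        amplitude = math.sin(2 * math.pi * FREQ_HZ * t)   # "analog" value
        code = round((amplitude + 1) / 2 * (LEVELS - 1))  # discrete 0..255
        digital.append(code)
    return digital

print(sample_and_quantize(8))  # one full cycle of the tone, as integers
```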
Some people prefer listening to vinyl records rather than digitized music to hear meaningful nuances for enjoyment’s sake. However, many of us cannot hear differences in audio information across different frequencies, though we can still appreciate the meaning and the more original context. Most people prefer digitized music for its storage and shareability (see The Truth About Vinyl—Vinyl vs. Digital).
By characterizing the IE in terms of analog and digital systems, sub-systems, objects, and subjects, we can understand different ways and means to create meaning and context (data to information). Tailoring the venue of ways and means to fit a targeted audience can be critical to generating initiative that competes well. As we shall see, doctrine is somewhat helpful.
Various doctrine and concept documents depict the IE in three dimensions: physical, informational, and cognitive or human. A recent GAO report replaces “cognitive” with “human.” This depiction, however, omits other sentient beings with cognitive capabilities, such as advanced AI. The report also limits joint force use of the IE to improving understanding, decision-making, and communications for all-domain operations. That may sound expansive, but compare it to an adversary’s use of info in ops for all effects in all domains.
How we depict the IE affects how we choose to compete, and we are competing narrowly.
The physical dimension of the IE consists of the infrastructure that stores, transmits, and receives information. We measure infrastructure in terms of continuous ranges of values — length, width, height, depth, time, energy emissions, etc. These physical aspects are often sensed as analog information, but also can be sensed as increments of discrete, digital information. The major physical components include structures and equipment.
The informational dimension of the IE, which includes cyber and the electromagnetic spectrum, consists of the networks and energy fields where data and information are collected, processed, stored, disseminated, and displayed. However, this conceptualization is redundant because we are using “information” to describe an “information” environment.
This circular reasoning promotes poor assessment. We can’t falsify propositions about the IE if our definition of the IE analyzes that whole into a part that duplicates the whole (“information”). The “information” dimension of the IE is definitionally part of the IE. To see why this logic error matters, consider how we assess information. We often do it in terms of (a) continuous ranges of values (analog) and (b) discrete values (digital). For instance, we can measure the loudness of sound either as a qualitative category along a continuous range (10 dB as “most quiet,” 45 dB as “quiet”) or as a discrete numeric value (a specific decibel reading).
We use analog and digital values to convert qualitative and quantitative data into information. Each type of information has costs and benefits subject to variable contexts, but we still must interpret meaning in those contexts. That process is physical and psychological, so we don’t need an “informational” dimension; we can specify information in physical and psychological terms. Breaking down “information” into something different from “information” is critical to detecting what’s changing and what isn’t—assessment.
Consider analog and digital information. Analog information can carry meaning we attribute to it (such as the interpretation of “most quiet” above). That’s a qualitative or psychological benefit because we get to interpret what the information means, such as a feeling or emotion. In narrative warfare, mobilizing emotions is key to weaponizing feelings for desired effects. Music and the correct choice of media can influence some audiences psychologically. Digital information, on the other hand, is rapidly transferable (such as the ratio of acoustic power we define as a decibel). That form of information has quantitative or physical benefits because we can specify a numeric value and relate it to a physical object. A video that portrays enemy operations with high decibel sounds and friendly activities with lower decibels can create different physical reactions. There’s a psychological aspect to this as well—people might disagree on how “quiet” or “loud” the video is. Regardless, a numerically selected physical effect, such as a certain dB of sound, can shape psychological sentiment.
Most analog information is converted to digits anyway—we set a standard, a meaning, that 10 dB is “most quiet.” Based on our human factor profile of the individuals involved, we might draw different conclusions about what that standard means to them.
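A minimal Python sketch of the two conversions, assuming the standard sound-pressure formula dB = 20·log10(p/p0) with p0 = 20 µPa; the category thresholds echo the illustrative standards above (10 dB “most quiet,” 45 dB “quiet”) and are not authoritative:

```python
import math

P_REF = 20e-6  # standard reference sound pressure: 20 micropascals

def to_decibels(pressure_pa):
    """Digital/quantitative: specify loudness as a numeric dB value."""
    return 20 * math.log10(pressure_pa / P_REF)

def categorize(db):
    """Analog/qualitative: a dB range categorizes. Thresholds follow the
    illustrative standards in the text (10 dB 'most quiet,' 45 dB 'quiet')."""
    if db <= 10:
        return "most quiet"
    if db <= 45:
        return "quiet"
    if db <= 60:
        return "conversational"
    return "loud"

for p in (6.3e-5, 3.5e-3, 0.2):   # illustrative pressures in pascals
    db = to_decibels(p)
    print(f"{db:5.1f} dB -> {categorize(db)}")
```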
Cultural expectations also set their own standards that create meaning and context. Consider two examples—language and propaganda.
In a traditional Japanese language context, 45 dB, physical information we categorized as “quiet” above, is an appropriate psychological level for effective communication. However, in a Chinese context where the language includes tones (such as rising, falling, high, and dipping), 45 dB is too quiet for many listeners to sense the different tones physically. A level of 60 dB is generally more effective, and lesser decibels could be regarded as secretive or deceptive. Again, the physical shapes the psychological, but we need both to explain all ranges of behavior. For instance, a psychological predisposition to hear loud speech as narcissistic gaslighting rather than assertive leadership means that higher decibel speech will probably evoke a negative reaction.
Propaganda also has physical and psychological meaning and context. A Voice of America audio stream that reaches different domestic audiences in China or Russia will be treated as various degrees of propaganda or truth. The physical characteristics of the audio stream must be tailored to target audience preferences. An audio stream may be less effective for some audiences than visual memes. The psychological aspect is much more difficult to gauge. Psychologically, locals who believe or are coerced to comply with Party-government propaganda to distrust VOA will outwardly regard VOA as propaganda.
HOW LOUD?

| D to I (data to information) | Qualitative | Quantitative |
| --- | --- | --- |
| Analog | dB range categorizes | 120–140 dB |
| Digital | dB specifies | 120 … 140 dB |
So what’s the problem with labeling all that stuff “information”? The problem is that we need to go further than what we observe in the IE when we compete with clever opponents.
There are two main reasons. First, if we simply act on what we observe, we’re reactive, not proactive, so more competitors can dictate our range of actions. Second, observations provide us information only if we interpret their meaning in different contexts. Opponents are trying to interpret for us, such as by collapsing the Observe and Orient stages of our OODA Loop into just one step (see ICSL Paper #23).
Real warfare today requires managers, commanders, and other leaders to expand narrow in-my-job-jar perspectives. We must compete against opponents who operate inside our gaps and outside our assumptions.
The new Joint Publication 3-04, Information in Joint Operations (Nov 2022), has the potential to increase our awareness of this competition because, for the first time, our doctrine defines information in a non-tautological manner: data becomes information once somebody gives it meaning and context. Who’s going to do that? Operators need to do this, or we create new seams for adversaries to exploit.
We are in the cognitive, or better yet, psychological (broader than cognitive because it includes behavior) dimension of the IE.
In the above two examples, whether a 30 dB whisper is culturally appropriate and whether VOA has its intended effect depend on technological and human characteristics of the IE. Technology might seem purely physical, but there’s some interface with a human or other sentient being. We need to interpret the meaning and context of the data and anticipate its effect to our advantage. Consider this process in the cyber domain and cognition. Within the cyber domain of machines, Martin Libicki, in Cyberdeterrence and Cyberwar, explains how syntax and semantics interact. He describes two layers of information: the syntactic (the instructions and protocols that tell machines what to do) and the semantic (the content that carries meaning for users).
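A toy Python illustration of the two layers (hypothetical payload and command names, not Libicki’s example): the same bytes are handled syntactically as a machine instruction and semantically as meaning for a listener.

```python
# Toy illustration: one payload, two layers. The syntactic layer tells the
# machine what to do; the semantic layer is the meaning a user assigns.
raw = b"\x01SET_VOLUME:45"

opcode, body = raw[0], raw[1:].decode("ascii")   # syntactic parse
command, value = body.split(":")

print(f"Syntactic layer: opcode={opcode}, command={command}, arg={value}")
print(f"Semantic layer: the listener experiences {value} dB as 'quiet'")
```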
Digital content and instructions reside in a physical infrastructure, except during wireless transfer. Psychological processes in various social contexts alter the available syntax and semantics—the meaning and context that transform data into information. Technological breakthroughs could include device-less storage, which would expand the availability of information for those able to access it. Changed social contexts could include de-population trends, international migration, and global epidemics.
The cognitive aspect of the psychological dimension includes individual factors and group dynamics. These considerations can explain why some people, groups, and organizations create and act upon particular information flows. That information is exceptionally useful for anticipating how a person, group, or organization will likely behave under different conditions.
There are many ways to measure cognition in terms of functional performance (task comprehension, environmental sensing) and impairment (cognitive domain tests). However, how cognition works is only partially understood, though advances in neural information processing have led to artificial neural networks capable of learning tasks. An important question for anticipating events in the IE is how human or AI cognition processes more data and information in new contexts. For instance, what conclusions can we draw if we anticipate the invention of device-less storage and extrapolate current de-population trends (see Eastern Europe)? In answering this question, humans are not the only systems that assign meaning and context to data, thereby creating information.
Machines have become influencers, too. We already rely on their conclusions for navigation, weapons release, troop movements, and sentiment analysis, to name a few. Generative AI, which “learns” beyond merely discovering patterns in the data set that trained it, has gigantic potential. Recognizing context is still a human advantage over AI, so even general AI will be more prone to making mistakes. Those could be erroneous medical breakthroughs that we don’t understand until they are tested or unethical technologies that threaten human extinction.
The need for human initiative that competes with inexplicable AI is urgent.
Recently released Department of Defense ethical principles seek to limit the influence of artificial intelligence (AI). For instance, the principle of being Traceable requires transparent and auditable AI methods, data sources, and design procedures. Enforceable implementation is crucial. An unfettered AI could self-improve in its performance of tasks such as finding relationships among data. So far, AI scientists don’t think that artificial neural networks can intuit.
Each of the above two dimensions of the IE has its own vulnerabilities that we must mitigate to remain competitive.
Physical components are hard to hide, but they can be hardened and their entry controlled. Data and information that reside within them are subject to coding errors, disruption, and deception. Disguised as benign content, viruses can access a system and then release their own instructions.
Psychological cognition and behavior are susceptible to misperceptions, flawed data, selective narratives, various motives, and structured expectations. There is also unpredictable agency, which we traditionally attribute to humans. AI is also capable of unpredictable outputs, good (new molecules for health care) and bad (malicious penetrations).
The physical and psychological dimensions of the IE interact in known clock-like and unknown cloud-like ways. Historically, gaining and maintaining the initiative has conferred informational and operational advantage because it places one’s competitor in the position of having to react or suffer the consequences.
Understanding the complex interactions among systems, sub-systems, objects, and subjects requires modeling the initial conditions that we observe. We return to Karl Popper’s distinction between clocks and clouds.
Some of the components in the IE act like mechanical clocks, and others act like conscious clouds. Popper’s division of the world into these two types of systems is a dialectic of opposites: one orderly and renderable into parts, the other disorderly and irregularly holistic. The dialectic is a simplification of what we observe, of course, but the concept has a digital advantage. “Either this or that” models can be digitized. Moreover, with quantum technology, observables can be this and that.
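In standard quantum notation (a textbook statement, not specific to this paper), a qubit holds “this and that” at once:

```latex
\[
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
```

Measurement collapses the superposition, yielding 0 with probability |α|² and 1 with probability |β|².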
We know what goes into clocks and how their components work together as a closed system to produce results. When the results aren’t what we want, we adjust the parts to fix the problem. The analogy is a bit dated now that clocks and watches are hyper-connected, but the point stands: mechanical clocks are relatively predictable. Clouds are not.
Clouds are dynamic processes affected by many environmental factors such as wind velocity, temperature, and pressure. Lorenz’s “butterfly effect” showed that sensitive environmental conditions produce “deterministic chaos” — such that a butterfly’s flapping in one location could set in motion interactions that cause a change in the weather elsewhere. We can anticipate and prepare for uncertainties, but we cannot predict all failures or random acts. How can we gain advantage in such circumstances?
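A minimal Python sketch of Lorenz’s point, integrating his 1963 system with two nearly identical starting points (standard parameters σ=10, ρ=28, β=8/3; simple fixed-step Euler integration chosen for brevity):

```python
# Lorenz's 1963 system: deterministic equations whose trajectories diverge
# from almost-identical initial conditions ("deterministic chaos").
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT = 0.01

def step(x, y, z):
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return x + DT * dx, y + DT * dy, z + DT * dz

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # the "butterfly": a billionth of a difference

for t in range(3001):
    if t % 1000 == 0:
        gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t={t * DT:5.1f}  separation={gap:.3e}")
    a, b = step(*a), step(*b)
# The separation grows by orders of magnitude: same equations, inputs that
# agree to nine decimal places, very different weather.
```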
We have entrenched principles from the clock world. The problem is that if we operate the same way in the cloud world as the clock world, we will produce different effects. So to gain and maintain advantage in the IE, we need to reconsider conventional wisdom.
Initiative is a good example of a generally accepted principle. “Seize, retain, and exploit the initiative” is a fundamental principle of war, business, and sound management. The Clock-Cloud comparison below summarizes how to seize and maintain the initiative in each type of environment. The following concepts are generally considered ways to gain the advantage of initiative: tempo, momentum, learning, decision, position, and freedom of maneuver. To repeat, in order to gain advantage in a cloud world, we may need to operate differently than in a clock world:
| INITIATIVE | CLOCK-WORLD | CLOUD-WORLD |
| --- | --- | --- |
| Tempo | Speed | Virality and active density |
| Momentum | Mass × velocity | Networking |
| Learning | What to acquire | How to acquire |
| Decision | Planning options: decision points, branches, sequels | Creating & exploiting opportunities: recon-pull |
| Position | High ground | Virtual control |
| Freedom of maneuver | Ops access | Info access |
Our characterization of the IE into clock and cloud characteristics is, to repeat, a simplification. If we take clock-cloud as a dialectic (the clock as the thesis and cloud as antithesis), what’s the resultant synthesis? It’s a blend of both or something entirely different due to systemic interactions. Quantum mechanics holds the potential for modeling such complexity.
AI is indispensable for this. An example is computer-assisted mathematical proofs. Recently, mathematicians showed that long-accepted equations explaining fluid mechanics break down: they “blow up” under more complex initial conditions than their assumptions permit. Since the 1750s, Swiss mathematician Leonhard Euler’s equations have described the flow of liquids in what we could characterize as a clock world. Euler’s equations calculate fluid behavior based on differentials in velocity and density. In 2022, Thomas Hou and Jiajie Chen mathematically proved, with computer assistance, that vortexes in the fluid (initial conditions) created more complex currents than Euler’s equations predicted. Because this proof calculates variable velocity and density, we might be able to relate fluid behavior to achieving tempo and momentum advantages in complex contexts. Much more research remains. Still, this breakthrough illustrates how AI-developed information can predict probabilistic behavior with less uncertainty.
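For reference, a standard form of the incompressible Euler equations (velocity u, pressure p, density normalized to one); Hou and Chen’s computer-assisted proof concerns a specific axisymmetric scenario with a boundary:

```latex
\[
\frac{\partial u}{\partial t} + (u \cdot \nabla)\,u = -\nabla p,
\qquad \nabla \cdot u = 0
\]
```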
Now consider some competitors. Imagine an agent with AI-driven capabilities for which we have no “requirement.” Our government requirements process is tedious enough to cede such influential initiative to a competitor. Granted, the Strategic Capabilities Office is a possible exception, rushing the development of innovative weapons systems in limited numbers. Still, someone must anticipate requirements, and the enemy is doing that, too. So, what would be the tactical and operational implications for competitive initiative?
Suppose a quantum computerized device based on an AI-generated synthesis of fluid, aero, and orbital behavior can (a) outmaneuver an adversary’s platform (submarine, subterranean, surface, air, or space vessel, crewed or not), and (b) anticipate an adversary’s location with more accuracy than they can.
During deployment and employment ops, knowing our capabilities and anticipating competitors’ should inform how we gain initiative that matters—influential initiative. This preparation requires responsibly sharing data-to-information-to-intel-to-knowledge, not hoarding it in silos of hidden insights. There are risks: the transparent battlespace is real, but transparency and insight are not evenly distributed. Zero-trust architectures can help build competitive systems, but they tend to be vulnerable and expensive.
Given our discussion of characteristics, uncertainties, tactics, and operations in the information environment, we need to expand our focus on how we operate during deployment, employment, and redeployment. Gaining, sustaining, and creating more advantages over competitors require broad awareness and deep focus across the interagency and selected private and allied partnerships.
That might sound impossible until a competitor does it with superior civil-military fusion.
Authoritarian threats conduct operations and campaigns for influential advantage across all domains with all effects, not just military.
We must make realistic assumptions within our policy, legal, and ethical restrictions and proactively plan to win, and win over, competitors in the AI Age. We need these qualities at every level to manage the first human creation with the potential to out-think us.
If we fail to compete with superior intelligence, we will end up competing against it. Education and training provide the essential, dynamic core requirement to succeed in this vital contest.