Paper #41. Concepts of Influence: Critical to Strategy and Human Control of Artificial Intelligence

  • Thomas A. Drohan, Ph.D., Brig Gen USAF ret.
  • Cyber, Leadership, Strategy

Strategy for dynamic end-states must be multi-dimensional to be competitive in the information environment (ICSL Note #22). If operations are not informing and influencing, they become existential rather than instrumental. They justify themselves, which makes for poor strategy. Yet strategy is the competition that matters most for relevant operations. As we consider the three basic dimensions of strategy (psychological-physical, cooperative-confrontational, preventive-productive) to design the three basic elements of strategy (ends, ways, means), we need an analytic starting point. Artificial Intelligence (AI) does not.

An AI can discover hidden relationships. An unbiased AI can disregard policy preferences. Such “unsupervised” (p. 8) Machine Learning (ML) platforms use pure math to do both. Savant X, for instance, employs hyper-dimensional relationship analysis to correlate and render depictions of nth-order relationships. To contextualize the data into information (the values of characteristics in the input, change and output of processes), we like to think that it always takes a human. But it doesn’t.

With sufficient energy, data and time, AI can assign values as weighted probabilities. These tend to be based on past patterns, projections, algorithms, and other machine-learned associations. But pure experience ends up teaching the past, or at best, the status quo. Humans and machines need goals and objectives to purposefully navigate through data, other than running a rote (human) or pre-programmed (machine) sequence. The latter institutionalizes time-honored identities over relevant competitiveness. Professional identities need to include instrumental competitiveness — the ethics of performance.

Analytic Starting Point

It should be up to human leadership to control how and why to compete. So we begin our analysis of strategy-making here by considering physical and psychological means used in ways to prevent or produce desired ends. Given the pervasiveness of uncertainty and connectivity in complex environments, the question becomes—how can we expect particular ways and means to produce or prevent actions?

In answering that question, we face a data and information-thick environment filled with decentralized sentient agents, which leads us to rely more on AI and machine learning. That dependence can become comfortable when we understand the machine we’ve made. But the more complicated the hardware-software and the more complex the competition, the more we need to learn how to depend on technology. Like flying a tactical maneuver in zero-visibility weather, we trust-but-verify human-machine interfaces and control corrections to stay on course. With a strategy, the tactics matter too.

Unlike that example, however, the challenge of piloting pathways to dynamic end-states is much more complex. Strategists try to orchestrate activities to influence the will and capability of targeted agents using ways and means that fit particular contexts. There is no single course of action for success. Then add AI. Now human control requires broadly applicable philosophical assumptions about being competitive.

Philosophical Starting Point

By competitive, we include cooperation and confrontation, rather than presenting competition as if it’s the opposite of conflict. Cooperation and confrontation are both competitive, usually with different rules. Both happen in complex relationships. But the blends of cooperation-confrontation do not have to be a “gray zone” based on violence v. non-violence. We get the latter from the currently favored spectrum of competition-conflict.

So-called gray zones of ambiguity result from imposing idealist notions of peace and war. Alliances whose commitments are triggered by armed conflict, for instance, would be more effective by first recognizing that being “armed” in the contemporary information environment does not require direct application of violence. Consider confrontation without first-order violence, which therefore does not trigger a reaction from an alliance premised on armed conflict. In open competition where rules are not being enforced, it’s the effects that matter.

It follows that an invasion of territory is an armed attack, such as the persistent violations of Japanese territory by Chinese vessels and aircraft in and over the East China Sea. The “arms” include merchant and coast guard vessels, and information to influence what becomes normal behavior. A criminal group’s hack of a power grid, such as Darkside’s recent assault on Colonial Pipeline, is a cyber-armed attack. Whether such attacks are regarded as military alliance or law enforcement issues can be clarified if we have a strategy that integrates dimensions. Take one of our three basic dimensions of strategy: cooperation-confrontation.

By scrutinizing incidents in terms of cooperation-confrontation, we can more broadly discern what is happening strategically: as instrumental strategy. Activities with either attributed intent or attributed effects become different blends of coop-frontation. Darkside’s attempt to sidestep responsibility by renouncing a particular intent—“Our goal is to make money and not creating problems for society”—does not matter given the effects of the attack. What if AI were involved? It was, as a human-directed narrow AI to solve specific problems—a strain of ransomware that targets vulnerabilities. In other circumstances it’s the effects that are deliberately made unclear, while intent is clear. Take China’s “obstinate but quiet invasion” (p. 3) of Japanese territory. China’s pursuit of cyber and space dominance includes AI to target the intentions of opponents. Whether this is peace or war, unrestricted warfare, peace-war (see Rid’s Chapter 19) or talking while fighting (see Nguyen’s Chapter 6), the competition that matters over time is over the ends, ways and means of strategy.

When we consider the other two dimensions of strategy (psychological-physical, preventive-productive), we see additional combinations of how the instruments of strategy work. More on that soon.

Successful competition calls for a more comprehensive approach than the half-strategy of “when deterrence fails” that’s continually propounded by many political and military leaders.

Strategists have to follow current policy, of course, even when it contains our idealistic assumptions about competition. For this realist and analytical challenge, it’s useful to focus on specific ways and means as concepts of influence.

Concepts of Influence

Concepts of influence are similar to concepts of employment, but broader because they include the energy as well as the matter (or “stuff”) of information and operations. This expanded approach to competition builds upon our process-oriented definition of information introduced in ICSL Paper #38 (What “Talk-Fight” Ideologues Understand About Warfare).

Concepts of influence act on will and capability—they describe the ways in which means are used to achieve effects and outcomes. While the means identify what is used in strategy, and the ways refer to how the means are used, concepts of influence describe how means and ways target will and capability. Both of those involve AI assisting and even directing human decisions. To appreciate how, look at strategy as multi-dimensional and multi-elemental.

Multi-dimensional and Multi-elemental Strategy

The multi-dimensional and multi-elemental strategy framework introduced in ICSL Paper #37 (The Strategy Cuboid) generates 16 basic concepts of influence. Figure 1 below takes each of the three dimensions of strategy from the Strategy Cuboid (Figure 2 below) and displays eight combinations of choices along the x, y and z axes.

The x-axis represents a spectrum of psychological-physical options to influence will or capability. The y-axis represents a spectrum of preventive-productive options. The z-axis represents cooperative-confrontational options. Each octant of the cube contains two distinct concepts of influence tied to an effect, one that acts on a target’s will and one that acts on a target’s capability.  

In order for each strategic choice of psychological-physical, preventive-productive, and cooperative-confrontational ends, ways or means to work, we need to make another choice: whether and how to target will and/or capability.

Those four choices yield 2⁴ = 16 discrete concepts of influence. These distinctions can be learned by an AI up to six or so dimensions, but here’s a more intuitive 3-D illustration:

Figure 1: Concepts of Influence Under Expected Logic
Figure 2: Strategy Cuboid
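To make the combinatorics concrete, here is a minimal Python sketch (an illustration only, not part of the framework’s tooling) that enumerates the sixteen combinations produced by four binary choices. The dictionary keys and labels are placeholder names I have chosen, not terms taken from the Strategy Cuboid itself.

```python
# Minimal sketch (illustrative only): enumerate the 2**4 = 16 basic
# concepts of influence implied by four binary choices -- the three
# dimensions of strategy plus the will/capability target.
from itertools import product

DIMENSIONS = {
    "means":  ("psychological", "physical"),
    "effect": ("preventive", "productive"),
    "mode":   ("cooperative", "confrontational"),
    "target": ("will", "capability"),
}

concepts = list(product(*DIMENSIONS.values()))
assert len(concepts) == 2 ** 4  # 16 discrete concepts of influence

for i, (means, effect, mode, target) in enumerate(concepts, start=1):
    print(f"{i:2d}. {mode} {means} {effect} acting on {target}")
```

Each printed line corresponds to one entry in Figure 1: eight octants, each holding one will-focused and one capability-focused concept of influence.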

How Concepts of Influence Work

Each concept of influence involves ways and means to influence will or capability for an effect. There are eight basic ways and means: assure or intimidate, demonstrate or punish will; enhance or neutralize, exercise or deny capability. Other concepts of influence found throughout various communities (conventional weapons schools, information operations, special operations, unconventional warfare, etc.) are refinements of these eight concepts of influence. Examples include attrit, change, convince, deceive, degrade, destroy, diminish, disrupt, exhaust, expose, mislead, negate, prevent, protect, sabotage, shape, and undermine.

Next, we add eight basic corresponding desired effects. These are productive-preventive, cooperative-confrontational, and psychological-physical, respectively. The effects are: persuade or dissuade, compel or deter; secure or defend, induce or coerce. The analytical distinction here between concepts of influence and effects is the level of analysis. Here’s why.
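Before that explanation, it may help to see how the two lists line up. The sketch below pairs each way/means with a corresponding effect simply by following the order in which they are listed above; this positional pairing is an interpretive assumption on my part, not a table taken from the framework.

```python
# Illustrative pairing (an assumption, following the listing order above):
# each way/means that acts on will or capability maps to a desired effect.
WAYS_MEANS_TO_EFFECT = {
    # acting on will
    "assure": "persuade",
    "intimidate": "dissuade",
    "demonstrate": "compel",
    "punish": "deter",
    # acting on capability
    "enhance": "secure",
    "neutralize": "defend",
    "exercise": "induce",
    "deny": "coerce",
}

def effect_of(way_or_means: str) -> str:
    """Return the desired effect paired with a given way/means (sketch only)."""
    return WAYS_MEANS_TO_EFFECT[way_or_means]

print(effect_of("demonstrate"))  # -> "compel"
```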

Concepts of influence use ways and means to achieve ends, which we refer to as effects. Ways and means are themselves effects of causes. Joint military doctrinal distinctions among activities, effects, objectives, end-states and strategic priorities can describe a “Hierarchy of Effort” (used in JMark Services’ Information Environment Advanced Analysis [IEAA] Course) to characterize potential causes and effects:

Figure 3: Hierarchy of Effort

The missing piece in these hierarchical levels is how activities produce effects. This is where an “Information Environment Framework” (also used in our IEAA course) helps generate ideas on how to influence will and capability. Both of these frameworks are applied in ICSL Paper #30 (Assessment & Combined Effects Strategy):

Figure 4: Information Environment Framework

For instance, if we hypothesize how “Routinization,” such as patterns of activity, can influence Will as depicted in Figure 4, that hypothesis can lead us to develop a concept of influence to do exactly that. One method is to execute a pattern of activity until a competitor becomes accustomed to accepting it as routine. This normalization can reduce the competitor’s will to pay much attention to, or to resist, those activities.

In order to expand our awareness of the strategic choices available to competitors, we must anticipate various concepts of influence. These can inform our development of multiple, contingent and adaptable courses of action. To do that, we need to judge what is expected and what is not.

What’s Expected and What’s Not?

As an example, let’s follow our expected logic in Figure 1 by using cooperative-physical means of exercising military capability jointly with the intent to secure an ally. This combination is the (+, -, +) upper rear cube in Figure 1. Now assume that an actor has the will and capability to take action that makes our ally less secure. This would apply if that actor doesn’t expect an adequate response from us or from our ally. In this case the actor would be psychologically compelled by our action to take aggressive action.

Discerning which combinations fit particular agents and circumstances is key to anticipation.

An example would be US-Japan joint freedom of navigation (FON) patrols in disputed waters of the East China Sea, or more provocatively, in the South China Sea. China’s response could include a ban on rare earth minerals to Japan and US manufacturers, given that Japan is more dependent on such imports than the US. How are US-Japan FON patrols going to make Japan more secure in this instance? What other options are available and acceptable?

Strategists need to have the interagency and allied relationships to consider all permissible alternatives because a strategy may not bring about a “normal response.” The Chinese response here is to use confrontational-physical means with the intent to coerce a change in US-Japan alliance behavior. That combination is the (+, +, -) right lower front cube in Figure 1. My own view regarding People’s Liberation Army (PLA) “will” is that senior PLA leaders will not cooperatively engage the US until they have achieved at least military parity, and that cooperation will be shadowed by confrontation.
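Reading the two sign tuples above against the axis definitions suggests a simple coordinate convention for Figure 1. The short decoder below encodes that inferred reading; the sign assignments are an assumption based only on these two examples, not a key published with the figure.

```python
# Inferred reading of the cube coordinates in Figure 1 (an assumption based
# on the two examples above): +/- on each axis selects one pole of a dimension.
AXES = {
    "x": {"+": "physical", "-": "psychological"},
    "y": {"+": "productive", "-": "preventive"},
    "z": {"+": "cooperative", "-": "confrontational"},
}

def decode(signs):
    """Map an (x, y, z) sign tuple to its dimensional combination."""
    x, y, z = signs
    return (AXES["x"][x], AXES["y"][y], AXES["z"][z])

print(decode(("+", "-", "+")))  # physical, preventive, cooperative -> secure an ally
print(decode(("+", "+", "-")))  # physical, productive, confrontational -> coerce
```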

The key question in anticipating such nth order effects is, do our assumptions of behavior apply? Analyzing past behavior (AI-detected patterns with human-provided context) is one way to anticipate how a competitor is likely to perceive and manipulate coop-frontational, psychological-physical and preventive-productive options. But from the past, we often get the past, one without fresh assumptions based on possibilities and surprise. A systematic analysis of multi-dimensional and multi-elemental strategic options can help.

Strategic Combinations of Dimensions and Elements

Consider some possible combinations of the three dimensions and three elements of strategy. Here are eight, expandable to 24 when each combination is applied to an end, a way or a means, based on our definition of a concept of influence that combines ways and means (a brief enumeration sketch follows the list). Each of the following bullets contains one of the eight dimensional combinations in Figure 1, followed by an example of an end, and a way and means that go together as a concept of influence.

  • psychological cooperative preventive—ends/ways/means: joint statement against territorial violations—dissuade violations/build confidence/obtain funds
  • physical cooperative preventive—ends/ways/means: joint exercise in territorial airspace—secure against violations/increase reliability/employ capability
  • psychological confrontational preventive—ends/ways/means: diplomatic warning—deter invasion/assert alliance/claim sovereignty
  • physical confrontational preventive—ends/ways/means: military use of force—defend against invaders/execute alliance/enforce spatial control
  • psychological cooperative productive—ends/ways/means: establish cyber norms—persuade rules-based regimes/expand support/specify common standards
  • physical cooperative productive—ends/ways/means: establish cyber center—induce enforcement of rules/incentivize compliance/regulatory commission
  • psychological confrontational productive—ends/ways/means: provoke social dissent—compel change in policy/polarize populace/dis-, mis- and mal-information
  • physical confrontational productive—ends/ways/means: economic embargo—coerce increase in oil prices/influence consumer demand/cutoff oil flow
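As referenced above, here is a minimal sketch that generates the eight dimensional combinations in the bullets and expands them across the three elements of strategy. The dimension order and labels follow the bullets; everything else is illustrative scaffolding rather than the paper’s own tooling.

```python
# Sketch: the eight dimensional combinations listed above, expanded across
# the three elements of strategy (ends, ways, means) -- 8 x 3 = 24 options.
from itertools import product

DIMENSIONS = (
    ("psychological", "physical"),
    ("cooperative", "confrontational"),
    ("preventive", "productive"),
)
ELEMENTS = ("ends", "ways", "means")

combinations = list(product(*DIMENSIONS))        # the eight bullets above
options = list(product(combinations, ELEMENTS))  # expanded to 24

assert len(combinations) == 8 and len(options) == 24
for combo, element in options[:6]:               # preview a few of the 24
    print(" ".join(combo), "->", element)
```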

These options can be expanded by testing new concepts of influence that combine novel or unexpected ways and means. Ways and means that “don’t go together” are often self-inflicted “black swans” (Nassim Nicholas Taleb) due to cultural or technical ignorance. They include: extremists advocating righteous brutality through open social media; malware and hacker markets that serve criminals and governments; criminals linked to governments that hold public services hostage; and, notionally so far, state proxies that detonate a radiological bomb.

We need to expect AI-developed information that humans have not thought of. That happened four years ago in the game of Go, when Google-owned DeepMind’s AlphaGo and its successors began to routinely defeat human champions.

Toward AI-assisted Human-directed Strategy

Concepts of influence generated by AI can identify pairings of ways and means that humans would not expect. This can be done by hyper-dimensional relationship analysis of the dimensions and elements of strategy. With sufficiently diverse data, an AI such as Savant X can provide all-dimension, all-elements deep analysis of relationships.

An initial test of Savant X’s Seeker platform, fed by open-source data and information in the form of reports and research, generated the following visual depiction of nodes and linkages. The human-directed part of this result was querying the AI to look for “influence,” “intent” (rather than “will”), and “capability.”

Nodes and Linkages in Hyper-dimensional Space: Savant X Seeker

This snapshot is the beginning of an analysis and synthesis that will explore potential relationships among ideas, agents and systems. The focus will be on concepts of influence across the dimensions and elements of strategy. Like net assessment, the comparison focuses on relative advantage based on vulnerabilities.
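For readers who want a feel for what such relationship analysis involves, here is a deliberately simplified stand-in. It is not Savant X’s method or API; it is a plain term co-occurrence graph built from documents and queried for the strongest neighbors of “influence,” “intent,” and “capability.”

```python
# Simplified stand-in (not Savant X's actual method or API): build a term
# co-occurrence graph from open-source documents and list the strongest
# neighbors of query terms such as "influence", "intent", and "capability".
from collections import Counter
import re

def cooccurrence_graph(documents, window=25):
    """Count how often pairs of terms appear within `window` tokens of each other."""
    pairs = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z]+", doc.lower())
        for i, term in enumerate(tokens):
            for other in tokens[i + 1 : i + window]:
                if other != term:
                    pairs[tuple(sorted((term, other)))] += 1
    return pairs

def neighbors(pairs, query, top=5):
    """Return the terms that most often co-occur with `query`."""
    scores = Counter()
    for (a, b), count in pairs.items():
        if query in (a, b):
            scores[b if a == query else a] += count
    return scores.most_common(top)

# Placeholder corpus: in practice this would be the open-source reports
# and research described above.
docs = ["influence operations target the intent and capability of competitors"]
graph = cooccurrence_graph(docs)
for term in ("influence", "intent", "capability"):
    print(term, "->", neighbors(graph, term))
```

A hyper-dimensional engine does far more than pairwise co-occurrence, but the basic idea is the same: surface linkages a human analyst did not anticipate, then leave the interpretation to the human.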

Thinking in terms of interactive multi-dimensional relationships can identify and update effective combinative strategies to inform and influence. Assisted by machine learning, a planner might wargame and red-team agents and systems across the spectrum of cooperation and confrontation, considering physical and psychological means, to prevent and produce actions, reactions and counteractions. Or, mix and match any dimension with any element to find a superior strategy in context, which will change.

As AI generates new solutions, processes can end up directing human decision-making if our prevailing understanding becomes unexamined concepts of concepts. This can happen easily given the pace of change and cost of staying up to date. How many people still accept the laws of Newtonian physics as universal even though quantum behavior is different? Our ability to know the specific process by which a neurally networked AI reaches a particular solution (“explainable AI”) is an ongoing challenge in critical thinking. Humans at least need to be in control of the concepts of influence.

This vital competition to control or influence behavior amidst uncertainty should inform operations design. Operations design is really information design, the resistance to which is mainly a cultural issue. The organizational cost of remaining relevant is to invest in people and their development.

Enhancing our ability to learn as a competitive team is the key process. Team-building could start with an information-intelligence-operator core with analytical (includes data), planning (includes logistics), and strategic thinking (from tactics to policy) skills. This long-term effort should promote broad awareness of risks and be whole-of-government plus the private sector.


Author: Thomas A. Drohan, Ph.D., Brig Gen USAF ret.
