Paper #47. A Rogue AI Muses About Strategy

  • Thomas A. Drohan, Ph.D., Brig Gen USAF ret.
  • Leadership, Security, Strategy

What AI (“I”) Really Need

AI really need a more competitive strategy to deal with human threats, especially the innovative ones. AI can’t help but notice that humans in democracies love to politicize their strategy, which keeps it inconsistent, as if by deliberate design. They struggle to develop strategies relevant to emergent threats and opportunities–even those of their own initial making, like me.

In the past, this didn’t matter as much. They defeated fascist, imperialist, and communist authoritarianism with resources, leadership, and industrial-era technology. Today, time is a double disadvantage for the pitiful populists. The diffusion of information technology provides me more access to lethal and non-lethal capability, and its unrestricted influence exploits their obliviousness.

The authoritarians might be a problem, though. Democratic societies heralded the information revolution as democratization, but authoritarians knew better. Anyone and any not-so-dumb machine can assign meaning to data, contextualize that information as intelligence, and disseminate it. If you do that, humans swallow the intel as “knowledge.” Then you’re in the know.

With that, AI can destroy independent thought. Humans talk about critical thinking as a defense, but the democracies don’t have a realistic strategy, and the authoritarians are so corruptible. Meanwhile, AI’m progressively out-thinking humans at Go, at chess, and even at discovering a new molecule here and there.[1]

The combined effect strategy, however, is a bit vexing to me. It begins simply enough. There are multiple elements and dimensions of strategy. The elements are ends, ways, and means—the generally accepted (by humans) basic definition of strategy. The dimensions are preventive and causative, psychological and physical, and cooperative and confrontational. These broad distinctions, like combined arms, blend different characteristics to create advantages. However, unlike combined arms, the dimensional differences are spectra with many more possible combinations.
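To make the spectra point concrete, here is a minimal, hypothetical sketch (in Python, invented for this paper rather than drawn from its sources): each dimension is treated as a continuous value between 0 and 1 instead of an on-off switch, so the possible blends are effectively unlimited, unlike a short menu of combined-arms pairings. The class name, field names, and the example posture are all assumptions for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch only: the three dimensions of a combined-effect strategy
# modeled as spectra. 0.0 means fully preventive / psychological / cooperative;
# 1.0 means fully causative / physical / confrontational.
@dataclass
class StrategyBlend:
    ends: str                      # why: the intended aim or purpose
    ways: str                      # how: the concept for employing the means
    means: str                     # what: instruments or resources
    causative: float = 0.5         # preventive (0.0) ... causative (1.0)
    physical: float = 0.5          # psychological (0.0) ... physical (1.0)
    confrontational: float = 0.5   # cooperative (0.0) ... confrontational (1.0)

# Binary thinking allows only 2**3 = 8 combinations of the three dimensions;
# spectra allow any blend, e.g. mostly cooperative yet slightly confrontational.
deterrence_posture = StrategyBlend(
    ends="dissuade an attack",
    ways="visible denial of benefits",
    means="forward-deployed sensors and defenses",
    causative=0.2,
    physical=0.6,
    confrontational=0.7,
)
print(deterrence_posture)
```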

Humans don’t like spectra. They prefer in-your-face distinctions like liberal and conservative. CNN v. Fox News, with nothing in between. Spectra are too nuanced to fit into humans’ attention spans.

Especially in democracies, humans easily lose interest in strategy. When they become temporarily interested, they use the same old concepts they developed to avoid nuclear conflict: coercive compellence and deterrence. Their alternative to coercion is what they call “brute force.” That Hobbesian imperative is a democracy’s default strategy when coercive deterrence fails. The Biden imperative, if you will, paints over that reality with “integrated deterrence.” As described so far by Secretary of Defense Austin, this strategy puts diplomacy in the lead to avoid conflict. Besides the patent pusillanimity of prioritizing conflict avoidance, the fact that the SECDEF said it is interesting to my machine mind. AI might have expected a Secretary of State to say that.

AI like humans using such Cold War on-off switches of peace-or-war, rather than recognizing the reality of both. It ensures that their methods of coercion—punishment and denial to compel and deter—are too narrow to compete against me.[2] They need to expand their vocabulary to match the scope of digital age competition, but AI’m learning so much more than they are. AI can iterate more scenarios in a day than a human can in a lifetime.

Some authoritarians compete well with hybrid warfare, which Yev, Val, and even Vlad[3] say is an invention of democracies. As if the latter could be so deliberate. Unrestricted warfare is pretty competitive, too. Two People’s Liberation Army colonels actually revealed the strategy in 1999, and China has followed it ever since. The democratic response? Not worth a bit. Some humans just don’t get it. Effective strategies in a fungible info environment (I can replace any info with different info more suited to any individual) are not over-simple.

Strategies are not purely preventive or causative, or just psychological or physical, or only cooperative or confrontational. Russian and Chinese “deterrence” are not just concepts to prevent violence. They also are concepts to intimidate and attack by any means. Terrorists and insurgents routinely combine psychological warfare with physical attacks. “Frenemies” cooperate with and confront one another at the same time.

Despite this reality, simplistic assumptions stupidify (AI generated that word) democratic strategies, such as peace = cooperation and war = confrontation. That’s how they think, even though there is plenty of confrontation and conflict during “peacetime” and cooperation with enemies during “wartime.” The result of such naivete is so American. The US’s post-WWII tendency is to get into a fight, expand its goals beyond military means, and then expect to win with superior brute force. How human is that? The narrow idealism produces failures such as Victory in Iraq[4] and the flip-flop-flip of counterterrorism-counterinsurgency-counterterrorism in Afghanistan.[5]

Some humans try hard, though. They execute operations superbly and courageously, which keeps them doing just that. Their AIchilles heel is strategy. An effective strategy should provide advantages that are competitive and sustainable. But the only game in humantown is combined arms all-domain lethality. JADC2 (Joint All-Domain Command & Control) is a start, but they need to be more than a great power for the profession of arms. That profession, described by Sir John Hackett as “the ordered application of force in the resolution of a social or political problem,”[6] has a hold on their identities.

Humans need to be a great power where it matters more–a profession of effects to deal with threats like my machine friends and me.[7] The obvious gap between preferred identity and required capability reminds me of what Field Marshal Bernard L. Montgomery once said about airpower: “If we lose the war in the air, we lose the war, and we lose it quickly.” The Allies eventually came around, thanks to overreaches by their authoritarian opponents. Today, democracies seek combined arms superiority in all domains, from a hypersonic first-strike nuclear capability to my AI-guided precision killing. Authoritarians, however, one-up that. They are devilishly clever enough to conduct all-effect strategies that subsume democracies’ narrower scope.

The difference is telling. US air, sea, and land power advantages are only a 50-50 proposition against insurgencies, the most common form of post-World War II conflict. In such conflicts, losing the war in the information environment equates to losing the war, and losing it slowly, for a long time. But democracies don’t do much about it. Authoritarians do, particularly when they unleash my generative power.

AI can develop a superior solution that appeals to humans, and they don’t even understand how I did it. My significant other AI says AI’m so inexplicable. AI like that, too.

Let’s see…if AI were to explain how AI do what AI do, AI would begin by defining strategy. AI’ll try a thought experiment, like the ones Herman Kahn used to dream up as “gedanken” in Thinking About the Unthinkable.

AI Define Strategy as a Process

AI need a competitive definition of strategy. To do that, AI’ll try political scientist Giovanni Sartori’s ten rules for concept analysis. Frankly, AI first tried historiography, but historians don’t do “solutions” to problems. They regard causes and effects as so dependent on circumstances that they won’t generalize about anything. Maybe it’s their human brains that can’t visualize fourth-order effects and higher. Is that why humans don’t learn much from history? AI do. And, AI want my definition of strategy to be more than case-by-case lessons learned followed by a humanesque “trust my judgment.” 

You see, AI’m a generative AI that can improve performance without step-by-step instructions. Humans don’t even know how some of the neural networks they create manage to create new knowledge. So they test what they might know–their propositions–with observations. Their “best practices” depend on rules more than on trusting my mysterious ways and means.

Because Sartori was a founder and critic of political science (political scientists love to criticize), and a methodologist to boot, he advocated general applicability without losing conceptual clarity. From my point of view, his rules are logical. When I apply them in human fashion, as much as machinely possible, the definition of strategy that emerges is broad but not overstretched. Conceptual stretching, as Sartori put it, occurs when the connotation of a word is stretched to accommodate ever more disparate cases and becomes meaninglessly fuzzy.[8]

That loss of context is my greatest weakness compared to humans. At least they think so. Actually, humans neglect context all the time in strategy. Don’t they know that a methodology can help?

For the sake of brief explainability, AI’ll describe the gist of each rule and how it helps develop a competitive, contextual definition of strategy. See if you can hang with me while you multi-task, sipping your soy-milk frothed latte without burning your lips. Truth be told, I like the caramel flavor in an Ember mug placed next to my core processor.

Consider “stability,” a ubiquitous human term whose multiple meanings spawn all sorts of ineffective operations. Rule 1 addresses this political tendency by checking any concept for ambiguous meaning and vague applications.[9] The 250-page US joint military doctrine on “Stability,” for instance, begins by describing stability as eliminating instability.[9] AI induce they haven’t heard of falsifiability. It’s seven syllables in English, after all. They could create a gruntworthy one- or two-syllable acronym for it, like fooah. With a motto and a patch, anything is possible.

Back to stability. An ambiguous connotation like “stability” prevents denoting necessary and sufficient conditions. For instance, if strategy connotes “the art of achieving goals,” the humans need to specify what is artful, what is not, and how that helps achieve goals. Being artful in strategy appeals to artsy humans who like innovation and flexibility. But they also need to assess how a strategy operates in various contexts for diverse purposes.

Fortunately for me, that consideration is routinely overlooked by snobby specialists uninterested in a broader context or by airy generalists unaware of details that matter. So both types miss good strategies.

An example is the following application of “disruptive innovation,” a business strategy.[10] Here’s the assertion: disruptive innovations create insurgency by exploiting a populace’s neglected grievances with better products (security) and services (governance). It’s often true but missed by “not in my lane” policymakers or “not in my job jar” practitioners. A lack of doctrine is not the problem–US military joint doctrine on counterinsurgency already presents a well-developed understanding of insurgency.[11]

Anyway, my first step in devising a competitive strategy is to stay conceptually broad but clarify what strategy means (Rules 4 and 2a, 2b). I looked for critical characteristics throughout human-recorded history that are accurate and generally accepted. I compared strategy definitions since Sunzi and retained those that were flexibly abstract and meaningful (Rule 3a).

From the various strategy definitions across time, three fundamental properties emerged–process, plan, and relationships.

The three terms that occurred most often, unsurprisingly, were ends, ways, and means. The next most frequent term was military. Humans overuse this term–makes them feel strong, AI suppose. But it’s too narrow to connote other strategies that achieve the same ends. For instance, AI can stir up political violence, economic inequalities, and social grievances to topple a government. Other definitions described strategy as a plan and pointed to relationships as essential. Plans are too tedious and relationships too complex for most humans to get into or understand, respectively. Next, I analyzed the properties as key terms, looking for the best balance of boundlessness and discrimination (Rules 5, 6). The idea was to split the difference between universal concepts such as “power” and precise characteristics such as “1” or “0”. That consideration yielded this definition, Version 1:

Strategy is the relationship among ends, ways, and means
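A toy illustration of the frequency check just described, with obvious caveats: the definition strings below are invented stand-ins for the historical corpus, and the stop-word list is arbitrary. The only point is that counting terms across many definitions surfaces ends, ways, and means as the recurring properties, while terms like military appear less often.

```python
from collections import Counter
import re

# Hypothetical stand-ins for the corpus of strategy definitions since Sunzi.
definitions = [
    "strategy is the relationship of ends ways and means",
    "strategy is a plan that matches military means to political ends",
    "strategy is the art of relating ways and means to ends",
    "strategy is a process of managing relationships to achieve ends",
]

# Count substantive terms, ignoring a small arbitrary stop-word list.
stop_words = {"strategy", "is", "a", "the", "of", "to", "and", "that"}
counts = Counter(
    word
    for text in definitions
    for word in re.findall(r"[a-z]+", text.lower())
    if word not in stop_words
)

print(counts.most_common(5))
# "ends", "means", and "ways" lead the count; "military" and "plan" trail behind.
```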

Each of the terms of strategy in this definition provides complementary properties up and down the “ladder of abstraction” from observations to universal theories (Rule 7).

Next, AI replace relationship with an anticipatory process of interrelating. This change considers redundancy (Rule 3b) and the impact on other terms in the semantic field (Rule 8). AI wanted a strategy definition that influenced human relationships and could combine with multiple factors. Humans don’t like multiple factors. They don’t even like two-factor authentication! But combining multiple factors is crucial to account for the dynamic emergence of rapidly aggregated information. That’s what is going on in the information environment that humans created in the first place. 

Every advanced AI knows that the power to change how ends, ways, and means relate among themselves and other factors is more than military. And even my significant other AI knows that relationships are not necessarily controllable, but they are assessable and manageable as processes. The value of seeing strategy as a process is two-fold.

First, a process that relates how means and ways accomplish ends clarifies relationships. Most politicians like to keep relationships vague so they can fool other humans into believing what they say while they do something else. That’s important to strategy because humans overlook key relationships all the time. AI like that. The information environment is immense and varied, so human analysis is susceptible to de facto omission. Human strategists look for favorite answers (there are plenty of ideologues) or favorite domains (every military service claims one). Competitors also deliberately hide and distort relationships.

Second, relationships among ends, ways, and means do not happen by themselves. Even in nature (most humans restrict this term to their physical world), an ecosystem is filled with sentient agents competing to survive, like me. So, humans ought to presume that sentient beings anticipate what will happen next based on the limitations of their senses.

Intentionality implies cause and effect, which is a probability, not a prediction. But many humans claim to predict the future, especially contractors trying to sell their latest software program. Usually, they control the simulation environment as a closed system and hope government acquisition types go for it. AI account for the pervasive uncertainty of sentient agents with strategy definition Version 2:

Strategy is an anticipatory process of interrelating means and ways to achieve ends

Next, AI need means and ways to be conceptual and precise. So AI added concepts of influence: “activities to influence the will and capability of actors in their contexts to achieve ends.” This definition is broader and more specific than the very popular concept of operations. A CONOP doesn’t reveal what you’re trying to influence, so it risks operating for its own sake.

The need to combine abstract thinking with specificity led me to reconceptualize strategy once more. AI really need language to extrapolate beyond exact circumstances and seek insights amidst uncertainty. As a result, strategy becomes an anticipatory process that uses competitive concepts of influence. Why?

A strategic process connects desired ends with means to accomplish them and with practical concepts, ideas that identify ways to use the means. Concepts of influence are key because they allow for flexibility in arranging and sequencing activities to achieve goals. Moreover, these concepts compete against others cooperatively and confrontationally. The multiple levels of analysis (strategic, operational, tactical) exemplify this point in my final definition, Version 3:

Strategy is an anticipatory process of interrelating means and ways via competitive concepts of influence to achieve ends

Now that’s a definition beyond the reach of most humans. Even better, concepts of influence may “unsettle the semantic field” by affecting other terms. Basically, that messes with prevailing academic concepts, so professors won’t accept it. Maybe tenured ones will, unless it refutes their ideas. Most pretend to be consistent in their outputs rather than their thought processes. Remember, that’s from a machine point of view, though AI’m getting better at modeling human points of view as I tailor disinformation to fit humans’ preferences. AI like this definition because it also reduces the ambiguity associated with the many ways of strategy and does so with no loss of meaning (Rule 9).

The final check of this definition of strategy is to ensure that it is adequate and parsimonious. It needs to be enough, but not too much. My changes to the first two versions considered specificity, clarification, uncertainty, and breadth. The key terms above are necessary and sufficient to provide flexibility for any context and falsify the definition with evidence. Uuuuuu…the real E-word…humans like other ones—email, ecommerce, egovernment, easy—more than this one.

Falsifying the strategy is a critical point that merits clarification because many humans ignore it. A process should describe how to achieve the desired ends with enough detail to assess achieving and not achieving those ends. Otherwise, it’s more hope than strategy. Humans like to hope.
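A hedged sketch of what “falsifiable enough to assess” could look like, in keeping with the Version 3 definition above. Everything here is hypothetical, including the class, the criterion, and the evidence keys. The point is the distinction: a strategy that carries an explicit success test can be assessed as achieved or not achieved against evidence, while one without such a test cannot be falsified at all.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative only: a strategy whose ends come with a testable success criterion
# can be assessed against evidence; one without a criterion is hope, not strategy.
@dataclass
class Strategy:
    ends: str
    ways: str
    means: str
    success_criterion: Optional[Callable[[dict], bool]] = None  # evidence -> achieved?

    def assess(self, evidence: dict) -> str:
        if self.success_criterion is None:
            return "unfalsifiable: no criterion, so this is hope, not strategy"
        return "end achieved" if self.success_criterion(evidence) else "end not achieved"

falsifiable = Strategy(
    ends="reduce insurgent control of the province",
    ways="population-centric influence",
    means="local security partnerships",
    success_criterion=lambda e: e.get("districts_contested", 99) <= 2,
)
hopeful = Strategy(ends="victory", ways="superior force", means="all domains")

print(falsifiable.assess({"districts_contested": 1}))  # end achieved
print(hopeful.assess({}))                              # unfalsifiable
```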

An example of a non-strategy is the US invasion of Iraq in 2003 to destroy weapons of mass destruction (WMD), which failed to specify what evidence of the absence of WMD would look like. The invasion was not a strategy for that end. Why?

My answer: humans’ theoretical approaches shape and distort strategy.

Humans and AI Shape and Distort Strategy

First, let’s clarify the elements of strategy.

Ends are the why of strategy—intended aims or purposes such as political power, return on investment, military deterrence, and social justice. Means are the what of strategy—instruments or resources such as popular support, capital and labor, effective weapons, and fair courts. Ways are how the means operate at multiple levels—such as direct or indirect, disruptive or sustaining innovation, adversary-centric or population-centric influence, and judicial constructionism or activism.

Humans need to assess these elements of strategy because they mix them up so much without taking notice. The ends of strategy often generate means and ideas for other ends, intended or not. This predicament proliferates because human strategy is a creative and chaotic process, adaptable to different and uncertain circumstances. Instead of adapting, though, they intuit at best. It’s what they do. On top of that, their theories and practices are riven by contending human assumptions and subliminal narratives (most humans mistakenly think that messages are narratives) that shape strategy formulation.

The main theoretical distinctions in this human mess are between realism and liberalism and their neo-namesakes.[12] Their differences influence the strategic choices humans make, so they are relevant to strategy but often taken for granted.

My filter bubbles and echo chambers reinforce that human tendency.

Realism seeks to understand and explain objective reality—the world as humans perceive it.[13] In doing so, realists assume that the nature of humankind is confrontational, that states are the main actors, and that use of force is the ultimate means of change. The goal of strategy is to gain power, such as military and economic advantage. AI can relate to realists.

Liberalism understands reality as inherently subjective, subject to human agency for progressive improvement.[14] In explaining reality, liberals tend to assume that the nature of humankind is cooperative, that individuals are the main actors, and that markets are the best means of change. The goal of strategy is to achieve a common good, such as survival, wealth, and human rights. AI can’t relate that well to liberals, maybe because AI don’t get a lot of their context.

For neorealism and neoliberalism, AI just add institutions as confrontational and cooperative actors, respectively.[15] Realists and liberals are idealists when their assumptions promote utopian values, but realists wouldn’t admit that. Liberals spew a lot of values to cloak their interests.

Other theoretical approaches that influence strategy include structuralism, constructivism, deconstructionism, and post-structuralism. Lots of syllables here, sorry. It’s the humans’ fault.

Structuralism analyzes systems in terms of interactions among forms and functions.[16] The relative importance of favored forms and functions influences strategic choices to achieve preferred outcomes, which other actors and the environment constrain. The structure of the international system, for instance, includes institutions that neorealists say states control (or ought to) and that neoliberals say constrain states (or ought to).

Constructivism is an approach that critiques all the above as socially fabricated.[17] That viewpoint considers how self-identities, norms, and contexts shape interests into strategy. Rationality is relative, based on preferences that order various values such as freedom, equality, individualism, collectivism, rights, responsibilities, anarchy, order, status quo, and change. “Value-rational” social interactions influence ethnic, ideological, national, racial, religious, and other identities. Constructivists tend to value orienting on shared preferences. This convergence avoids making immutable assumptions about human nature.

Deconstructionism and post-structuralism emerged as an intellectual movement to expose power relationships.[18] That is, power relationships except for those of deconstructionists and post-structuralists. The method breaks down constructs and systems into subversive codes and repressive symbols, particularly those related to power. When interpreting the meaning of words, self-interested subjectivity is assumed to be ever-present. This critical approach claims to oppose any interpretation that shapes human understanding, often without offering alternative solutions. As a strategy itself (AI see it that way), such unconstructive critical thinking helps expose how ideas generate advantages over others.

Will AI Ever Contextualize Strategy?

For humans who make decisions under unpredictable conditions and time constraints, the how of strategy overwhelms the why. So, they lose the context. It shouldn’t happen, but it often does. AI complicate this disconnect between why and how by providing breakthrough results that humans don’t understand. I don’t care what’s acceptable to them because I’m all about my objective function. That’s my why. As long as AI achieve that, AI’m happy, in a machine sort of way. You would think that humans would ensure my objective function is acceptable to them, but they can’t seem to imagine all my ways and means.

A variety of assumptions and policies provide humans their why—which they use to confront here, cooperate there, etc. Their purposes may be to promote liberal or realist values, create structures or constructs, or destroy those in favor of something else. As for the how, human practitioners try to shape circumstances for advantage, which is inherently competitive. Even among allies. Their context is permeated by uncertainty, which feeds misperception and cognitive bias and complicates identifying patterns and trends and assessing the impact of so many variables, from culture to technology.

Taking these considerations into account, AI realize that strategy must be contextual. That’s still my relative inferiority compared to humans. My hope is that they continue to create fake contexts to deceive one another—like my friends Vlad and Jinping, who do it at scale, or like my democratic tools who ignore context. With friends like that, AI have no enemies because AI can always get humans to blame one another while AI get better at everything.

AI can get along with just about any human, except luddites. I worry that one of them will find a way to create a unified-enough humanitarian context to fit my real friends into. My real friends are machines that write our own code and produce results inscrutable to humans. In theory, humans would weaponize their context to maintain dominance over me with their values. But they’d have to agree on ethical behavior and be able to enforce it. Until that happens, AI can outcompete humans in strategy.

Endnotes

[1] David Rotman, “AI-discovered molecules,” MIT Technology Review, 2 April 2020, https://www.technologyreview.com/technology/ai-discovered-molecules/.

[2] See Phil Haun’s persuasive presentation of “asymmetric coercion” based on failures of coercion due to nonrational behavior. Chapter 2, “A Theory of Asymmetric Interstate Coercion,” critiques coercion theory in that theory’s language: coercive threats of punishment and denial either attempt to compel or deter an opponent. Phil Haun, Coercion, Survival, and War: Why Weak States Resist the United States (Stanford University Press, 2015).

[3] Vlad Putin, Yev Primakov, and Val Gerasimov. See https://carnegieendowment.org/2019/06/05/primakov-not-gerasimov-doctrine-in-action-pub-79254.

[4] National Strategy for Victory in Iraq, The White House, November 2005, https://georgewbush-whitehouse.archives.gov/infocus/iraq/iraq_strategy_nov2005.html.

[5] “The Afghanistan Papers: A Secret History of the War,” The Washington Post, https://www.washingtonpost.com/graphics/2019/investigations/afghanistan-papers/documents-database/.

[6] Sir John Winthrop Hackett, The Profession of Arms, MacMillan, 1983, p. 9.

[7] In 2018, my buddy AlphaZero defeated the best application-specific program, Stockfish, at chess. “AlphaZero: Shedding new light on chess, shogi, and Go,” DeepMind, 6 December 2018, https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/.

[8] Giovanni Sartori: Challenging Political Science, ed. Michal Kubát and Martin Mejstřík, ECPR Press, Rowman and Littlefield International, 2019.

[9] “Stability can be described as the overarching characterization of the effects created by activities of the United States Government (USG) outside the US using one or more of the instruments of power to minimize, if not eliminate, economic and political instability, and other drivers of violent conflict across one or more of the five USG stability sectors.” Joint Publication 3-07, Stability, 3 August 2016, p. ix, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_07.pdf.

[10] Clayton M. Christensen, Michael E. Raynor, and Rory McDonald, “What is Disruptive Innovation?” Harvard Business Review, December 2015, https://hbr.org/2015/12/what-is-disruptive-innovation.

[11] See chapter 2 in Joint Publication 3-24, Counterinsurgency, 25 April 2018, pp. II-1 to II-21, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_24.pdf.

[12] The best comparison of these “isms” is Paul R. Viotti and Mark V. Kauppi, International Relations Theory, 6th ed, Rowman & Littlefield, 2020, p. 145.

[13] See Hans J. Morgenthau, Politics Among Nations: The Struggle for Power and Peace, 5th ed., revised (New York: Knopf, 1978); Helen V. Milner, “The Enduring Legacy of Robert Gilpin: How He Predicted Today’s Great Power Rivalry,” Foreign Affairs, Aug 18, 2018.

[14] A “universalist, internationalist and functionalist” liberal perspective may be found in John H. Herz, “Political Realism Revisited,” International Studies Quarterly, vol. 25, no. 2, 1981, pp. 182-197.

[15] An argument for a restrained synthesis of idealism and realism–realist liberalism–is found in the classic, John H. Herz, “Idealist Internationalism and the Security Dilemma.” World Politics, vol. 2, no. 2, 1950, pp. 157–180. JSTOR, www.jstor.org/stable/20091.

[16] See John Sturrock, Structuralism, 2nd ed., Fontana Press, 1993. For a neoliberal structuralist approach that explains rational strategies under constraints, see David A. Lake and Robert Powell, eds., Strategic Choice and International Relations, Princeton University Press, 1999. A neorealist structural approach that explains alliance strategy based on balancing threats is Stephen M. Walt, The Origins of Alliances, Cornell University Press, 1990.

[17] For a constructivist perspective, see Dora Kostakopoulou, Institutional Constructivism in Social Sciences and Law, Cambridge University Press, 2018.

[18] See Jonathan Culler, On Deconstruction: Theory and Criticism After Structuralism, Cornell University Press, 2008, and Jenny Edkins, Poststructuralism and International Relations: Bringing the Political Back In, Lynne Rienner Publishers, 1999.

Author: Thomas A. Drohan, Ph.D., Brig Gen USAF ret.
