Democracies do not recognize the all-domain, all-effects warfare that authoritarians wage unless there is direct state-sponsored violence. Instead, we wage “when-deterrence-fails” lethal warfare. Authoritarians exploit this blind spot with informatized operations that seize territory, disrupt adversaries’ control while centralizing their own, keep opponents divided, and set the terms for peace. The strategy is competitive, effective warfare. Artificial Intelligence will not close this strategy gap unless democracies embrace combined effects more than combined arms.
Combined effects strategy competes with all-effects warfare in kind. The fundamental challenge is to create an effective strategy within legitimate confrontation and cooperation. When-deterrence-fails warfare is an unnecessary restraint. Geneva Law and International Humanitarian Law recognize some forms of non-violent “attack.” The Law of Armed Conflict (LOAC) defines “armed attack” as violent but interprets “use of force” more broadly to include cyber damage or destruction. As information technology expands what actors can do to each other, international law is adapting to include more acts and consequences.[1]
An effective strategy is holistic, agile, and asymmetric.
Holistic strategy expands competition to areas where opponents are vulnerable. Those vulnerabilities shift when an opponent adapts to the environment (hybrid warfare) or fields better technology (quantum-encrypted communication). For any actor, the parts of a whole may combine in unanticipated ways, outstripping abilities to manage the interactions. Threats with coercive synergies and cohesive narratives can overwhelm isolated or fragmented societies. The creation and collapse of wholes explain the rise and fall of empires, from the first “irrevocable expansion” of China to the breakup of the Soviet Union. The information environment is the most encompassing whole, subsuming all types of operations and effects in all domains.
The agility to create holistic syntheses can be decisive, as historical examples attest.
Strategic agility matters more than operational agility because it adjusts ends, means, and ways. Altering the desired end is the most difficult adjustment, but it is critical to ensuring that goals do not expand beyond committed ways and means. Operation Desert Storm stands as an exemplar: President George H.W. Bush resisted pressure to expand the goal beyond ejecting Iraqi forces from Kuwait. Yet too much agility can wreck desired outcomes that require time, not just exquisite timing, to achieve. The flip-flop-flip of counterterrorism-counterinsurgency-counterterrorism in Afghanistan should be instructive.
Asymmetry in strategy is an interactive relationship among strengths and weaknesses. Asymmetric engagements refer to ways and means, such as heavy armor and artillery pitted against more maneuverable light infantry. The purpose, of course, is to create asymmetric effects, so the principle should apply to ends, ways, and means, not just force structures. A net assessment of asymmetries encompasses the resources, concepts, and constraints of the various threats vying for every type of advantage.
Often overlooked, differences among goals provide a basis to exchange interests. That is diplomatic strategy in the lead of military strategy, and it is more complex than a strategy built on a common goal or common threat. Differences among allies’ goals, such as those of the US and Turkey vis-à-vis Russia or the various Kurdish forces in Syria, require close diplomatic-military coordination just to be relatively effective.
Artificial Intelligence (AI) can help identify holistic, agile, and asymmetric attributes to integrate effective strategy in legitimate competition and warfare.
As training data expands, providing broader and deeper contexts, narrow AI outpaces human learning in a growing number of ways. Because AI can integrate many data streams at once, it can recognize keywords as attributes of strategy. Why not try contending combinations of ends, ways, and means, constrained by legality and ethical human values?
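The question can be made concrete with a few lines of Python. This is a minimal sketch: the category lists and the permissibility filter are hypothetical placeholders standing in for real doctrinal categories and legal-ethical review, not part of any fielded system.

```python
from itertools import product

# Hypothetical, illustrative categories of ends, ways, and means.
ends = ["deter", "compel", "induce", "secure"]
ways = ["persuade", "coerce", "exchange"]
means = ["diplomatic", "informational", "military", "economic"]

# Placeholder constraint standing in for legal and ethical review:
# here it simply screens out coercive military combinations.
def is_permissible(end, way, mean):
    return not (way == "coerce" and mean == "military")

# Cross-reference every end with every way and mean, keeping only
# the combinations that pass the constraint.
options = [combo for combo in product(ends, ways, means)
           if is_permissible(*combo)]

print(len(options))  # 44 of the 48 raw combinations survive the filter
```

Even this toy version shows the shape of the task: generating combinations is cheap; the hard part is the filter, which is where human judgment about legality and ethics enters.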
Currently, the human brain is better at learning many different tasks, while AI is better at the specialized functions that it’s trained to do. However, with machine learning and big data, AI can produce holistic syntheses that humans cannot. Human-developed AI invented a molecule to treat chronic diseases by filtering billions of choices from a database of potential compounds.[2] The majority of AI researchers anticipate Artificial General Intelligence in decades, not centuries.
A future AI could process the countless choices in strategy, providing options for human decision-making. In combined effects strategy, the cooperative-confrontational, psychological-physical, and preventive dimensions are choices similar to Sunzi’s “inexhaustible combinations” of sounds, colors, tastes, and surprise and straightforward operations for strategic advantage. If we cross-reference the many types of ends, ways, and means from the various communities of practice, the possible choices instantly proliferate. Specifying the circumstances in which one strategy has a comparative advantage over another gets complex fast, as shown by these contending ways, means, and ends in two combined effects strategies:
Iranian Persuasive Compellence and Deterrence versus Saudi Arabian Secured Inducement.
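The proliferation is easy to quantify. A short Python calculation, using hypothetical counts for each dimension (the particular numbers are placeholders; the multiplicative growth is the point), shows how quickly the option space outgrows unaided analysis:

```python
from math import prod

# Hypothetical counts of distinct types in each dimension of a
# combined effects strategy; the numbers are illustrative only.
dimension_sizes = {
    "ends": 8,
    "ways": 6,
    "means": 10,
    "cooperative vs. confrontational": 2,
    "psychological vs. physical": 2,
}

single_strategy = prod(dimension_sizes.values())
print(single_strategy)       # 1920 choices for one actor's strategy

# Pitting two contending strategies against each other squares it.
print(single_strategy ** 2)  # 3,686,400 pairings to assess
```

The design point is simply that each added dimension multiplies, rather than adds to, the choice space, which is why the text treats machine assistance as a filtering aid for human decision-makers rather than a luxury.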
An artificial neural network that writes its own computer code could conceivably generate many such options for humans to put into context. AI trained to recognize natural language, such as OpenAI’s Codex, generates code in a dozen programming languages, constrained by their syntax and rules of logic. Like the minimal language of the combined effects framework, however, its ability to place generated code into layers of context is limited. Even with 175 billion parameters, Codex still cannot match the human brain’s ability to encode and cue relevant neural connections while preventing them from being overwritten by irrelevant ones.[3]
AI is likely to improve itself recursively as massive amounts of cross-referenced data provide richer integrated development environments that model the global information environment.
We humans get to make the judgment calls on what constitutes competition and what crosses over into warfare. Democracies’ red line is violent ways and means, which permits authoritarians to achieve the ends of war by other ways and means with impunity. AI-assisted combined effects strategy can help us to recognize this gap, attribute intent, and make responsible and more effective decisions.
[1] Michael N. Schmitt, “‘Attack’ as a Term of Art in International Law: The Cyber Operations Context,” 2012 4th International Conference on Cyber Conflict, https://ccdcoe.org/uploads/2012/01/5_2_Schmitt_AttackAsATermOfArt.pdf; Beth D. Graboritz, James W. Morford, and Kelley M. Truax, “Why the Law of Armed Conflict (LOAC) Must Be Expanded to Cover Vital Civilian Data,” The Cyber Defense Review, vol. 5, no. 3 (Fall 2020), pp. 121-132, https://www.jstor.org/stable/pdf/26954876.pdf.
[2] “Artificial intelligence-created medicine to be used on humans for first time,” BBC News, https://www.bbc.com/news/technology-51315462.
[3] Anne Trafton, “How the brain switches between different sets of rules,” MIT News Service, https://news.mit.edu/2018/cognitive-flexibility-thalamus-1119.