Our previous paper offered an assessable definition of “information” to address two persistent problems in US security strategy: (1) the mismatch between narrow military doctrine and its broad effects; and (2) a “competition continuum” below armed conflict. Why does this matter? The Information Environment is expansive, accessible and dynamic, characteristics that enable competitors to exploit US gaps with broad all-effects warfare. To compete in this arena, information intelligence is crucial.
Recall that we presented a broad, process-based and falsifiable definition of Information in ICSL Paper #38: the values of characteristics in the input, change, and output of processes.
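For readers who think in code, that process-based definition can be sketched as a record of the values observed at each stage of a process. The sketch below is purely illustrative; the class name, the `observe` helper, and the toy doubling process are our own constructs, not part of the paper's formal apparatus:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ProcessInformation:
    """Information, per the ICSL #38 definition: the values of
    characteristics in the input, change, and output of a process."""
    inputs: Dict[str, Any]    # values characterizing what the process receives
    change: str               # characterization of the transformation applied
    outputs: Dict[str, Any]   # values characterizing what the process emits

def observe(name: str, process: Callable[[int], int], x: int) -> ProcessInformation:
    """Run a process and capture the information it generates (hypothetical helper)."""
    return ProcessInformation(inputs={"x": x}, change=name, outputs={"y": process(x)})

record = observe("double", lambda v: 2 * v, 21)
print(record.outputs["y"])  # 42
```

The point of the sketch is that information is falsifiable under this definition: any claimed input, change, or output value can be checked against the observed record.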
Intelligence is a process and product that develops the contextual meaning of information. Different tradecraft among communities with different responsibilities and identities (analysts, planners and operators—strategists all) produces various meanings. That diversity is critical to quality intelligence.
Unlike information, the term intelligence does have a falsifiable DoD definition (115), and it consists of three parts:
Intelligence — the product resulting from the collection, processing, integration, evaluation, analysis, and interpretation concerning foreign nations, hostile or potentially hostile forces or elements, or areas of actual or potential operations; activities that result in the product; and organizations involved in such activities.
How does intelligence compare to information? First, consider the similarities. The intelligence product is an information output, which in turn serves as an input to further intelligence activities. The intelligence organizations involved in those activities are processing inputs into outputted information:
The difference intelligence makes with respect to information is context and meaning. That is, intelligence is more than human or artificial software that collects, analyzes and synthesizes information. Intelligence includes the ability to judge contextual and meaningful relevance to problems. Artificial intelligence can provide correlations, but interprets context and meaning based on its database, processing and experience. AI requires a huge training database and can process massive amounts of data, but its neural networks are few compared to organic brains. Advances in biotechnology and software are producing artificial emergent behavior that adjusts to contexts by strengthening nodes or creating new connections.
The contribution of intelligence to strategy is influenced by the quality of a commander or decision maker’s intent. The intent prescribes what is deemed to be relevant, whether the information is human or machine learned. Intelligence is a contest of relevant information. The crux of the contest is determining what information means in the battle-space. To meet this need, we introduce the concept of information intelligence (I2).
The need to scrutinize processed information becomes acute as automation outstrips timely human understanding. That has already happened in areas such as pattern recognition, particularly in large data sets. If machine-learnt becomes machine-taught, humans abdicate power to make responsible decisions. Yet, humans learn by interacting with artificial intelligence that learns via experience. As in education, a singular focus on student or teacher limits the interaction that drives mutual learning. Humans need to be in control of the benefits and risks of technology even as the pursuit of advantage increases our dependence on machines. Examples include the collaborative sensing grid and a host of technologies that pose existential risks.
This paradox of increased scrutiny and dependence is intensified by uncertainty. Interactions among humans and self-aware chatbots, for instance, produce unpredictable effects that some Russian scientists describe as “a fundamentally different reality.” Clearly, there is a need for strategists to judge what and whom to trust. This integrity is information intelligence.
Integrity is the foundation for cooperative and confrontational relationships as it confers a degree of predictability, as in keeping one’s word or securing internet protocols. All relationships are subject to change, but without integrity, expectations and data can be manipulated to exploit implicit trust. Moreover, the explosion of available venues provides inexhaustible possibilities for deception that multiplies uncertainty. The pursuit of technologically verified “zero trust” security to replace a perimeter-centric approach reinforces integrity as accountable identity.
Consequently, information intelligence is vital as competitors seek more data to place into diverse contexts, and assess more information to influence interdependent outcomes. How can machine-processed information and people be trusted in open exchanges of ideas, markets, information, and intelligence?
Information intelligence (I2) is about what and whom to trust. We need to trust information as we process it into intelligence. We define information intelligence as:
Information intelligence is the integrity of contextualized data and processed information
Our context of I2 is a dynamic competitive environment of alternative data, information, and intelligence. To represent this contested context, we will focus on strategic and anticipatory intelligence. The US National Intelligence Strategy defines those types rather generally as process and product, and collection and analysis, respectively:
The stakes and uncertainty of strategic anticipatory intelligence are high, so the integrity of data and information is critical to contesting the information environment. The intelligence competition may be analyzed as four fights over contested purposes—collection, tradecraft, design, and leadership.
These four fights interact with one another. As we collect, conduct tradecraft, design operations and develop leaders, the integration of those processes is a competition to out-think and out-execute rivals. The integrity of contextualized data and processed information can enhance or diminish these processes. Without information intelligence, operations are more likely to produce inconsequential effects, generate unanticipated consequences, and fail to imagine surprises. Information intelligence can strengthen accountability and control of operations, but we need to mind three political, military and cultural gaps.
The first gap is the political tendency to ignore, shift the blame, or anonymize information effects. It’s a persistent practice, above and beyond four-star military commanders’ authority. In 2020, a memo signed by nine US regional military commanders requested declassification of adversary disinformation to assist “waging the truth in the public domain against America’s 21st century challengers.” While information operations doctrine and capabilities have been expanding, over-centralized authorities and permissions prevent execution. At the same time, joint IO doctrine describes information effects as influence (a broad term) in a very narrow set of effects—disrupt, corrupt or usurp (adversaries) and protect (our own). Similarly, the Joint Weapons School teaches effects in terms of fires that are direct, indirect, cumulative, cascading, unintended, lethal and non-lethal.
What’s missing is the full range of dimensions, elements and targets of strategy detailed in ICSL Paper #37 (The Strategy Cuboid). These shortfalls are not adequately resolved in inter-agency and allied processes that fail to own information effects. As a result, adversaries’ hybrid and coop-frontational strategies exploit the gaps, enabled by our prevailing competition continuum that views war as conflict “when deterrence fails.” As argued in Paper #37 and elsewhere, the competition continuum of cooperation and conflict needs to be replaced by one of cooperation and confrontation. Why? Competition often consists of cooperation and confrontation (coop-frontation), which may be conflictual (armed conflict) or not. This blending of peace-war contrasts with the US approach to war, an off-on switch of “when deterrence fails.”
The second gap is the related military focus on capabilities rather than effects. Indeed, if war necessarily involves violence (a Clausewitzian nature of war), lethal capabilities are the way to victory. This thinking has been applied to information warfare, where information-related capabilities (IRC) have been expanding. IRCs now include “Tools, techniques, or activities using data, information, or knowledge to create effects and operationally desirable conditions within the physical, informational, and cognitive dimensions of the information environment.”  The gap in the effectiveness of capabilities is masked by the previously noted circular doctrinal definitions of information. Those undermine testable assessment even as information warfare capabilities increase.
Assessment in complex environments is not easy in any case, but we at least need to define information in a falsifiably broad way to enable commanders to specify desired physical and psychological conditions. Given the ubiquity of information, the term “information-related effects” is a better way to focus capabilities and related processes on desired outcomes. The outcomes that matter most are informational. Executing such a conceptual approach to using and creating information is limited by attitudes more than by technology.
This is the third gap, a well-entrenched cultural mindset that resists change:
information is subordinate to operations
The problem with this mindset about “operations” is that it fails to focus on the full range of effects. Considering why can inform solutions to this problem. Operational job jars, lanes and identities circumscribe capabilities. The boundaries are reinforced by legacy budget buckets that authorize and appropriate funds for training over education, hardware over software, and military bands over foreign service officers, to sample just a few examples. To obtain funding independence and training continuity, the standard stratagem is to form a new organization, such as US Special Operations Command (1987) following the failure of Operation Eagle Claw (1980). By creating a highly classified and organic entity, the larger challenge of integrating whole-of-government capabilities and effects for strategic advantage was largely bypassed. So, has the exceptional success of special operations produced disproportionate, or even proportionate, victories in the information environment?
The focus on particular operations is fatal to competitive strategy in an information environment that is so dynamic and non-linear. That is, nuances in initial conditions and operations can create considerable, wide-ranging differences in information effects. This places a premium on flexible command and control of effects. When one zero-day data breach in a defense contractor can disrupt networked supply chains, steal data, insert ransomware, set up future exploitation and influence domestic politics, situational awareness and C2 need to be broad.
Consider “advise-and-train” missions that teach how to secure and conduct campaign-level command and control, compared to those that instruct maneuvers and marksmanship. Both sets of skills need to be learned for survival on the field of battle, but C2 is needed to create broad advantages out of engagements. Even at a “tactical” level of force-on-force engagement, information controls effective operations. Such engagements should contribute to operationally and strategically significant effects. Decisions about how to adjust operations for desired effects depend on information. In this vital sense, information can, and ought to, be supported by operations.
Assessment is an Information Contest
Asking the foremost question of any strategy (what do we want to cause and prevent?) leads to the kind of information needed to establish desired effects. Forces need to survive and be able to inflict kinetic destruction, but what are the next-order and enduring effects—what behaviors do we need to prevent and/or cause? That is the information needed to design combinations of consequential operations, operations that influence.
Routinized organizations are slow to recognize the need to change operations to be more effective. Indeed, it has been nearly 20 years since then-Brig Gen Dave Deptula argued that the Information Age precipitated no less than a change in the nature of warfare. There have been phenomenal improvements in rapid precision fires, but the more challenging task remains, even though it is the primary consideration in strategy: specifying feasible strategic effects. Effects are where the main contest of ideas occurs to shape decisions.
Quantum and Information Age technologies punish strategic indecision while expanding opportunities to control an adversary’s ability to act. Indeed, one of the best uses of an OODA Loop is to get an adversary to orient on the “wrong” problem. Waging information war while provoking the enemy to invest in and apply irrelevant strengths is one such asymmetric strategy. Unless human nature changes, threats will adopt any available means to achieve desired effects.
The US National Security Strategy (NSS) calls for information effects. All of the objectives under NSS Goal 4—Advance American influence—seek favorable information conditions. The other three national goals are also informational effects: protecting the homeland, promoting prosperity, and establishing peace through integrated power. More leaders are conceding that “information is operations” as it becomes obvious that information creates effects. Leadership should advocate, more than follow, innovation. There are plenty of commercial examples.
In 1948, Bell Labs announced that its new piece of hardware—a tiny electronic semi-conductor marketed as a “transistor” to replace vacuum tubes—“may have far reaching significance in electronics and electrical communication.” That same year, Claude Shannon invented a theory of communication which created the bit (binary digit, 0 or 1). This concept provided a quantifiable measurement of information which, combined with the transistor, revolutionized electronics and expanded human awareness with reams of information.
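Shannon’s bit is worth pausing on, because it is what made information quantifiable. A minimal sketch of his entropy measure follows; the function name and sample strings are ours, chosen for illustration:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information content per symbol, in bits (Shannon, 1948).
    Computed as -sum(p * log2(p)) over the symbol frequencies p."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fair coin flip carries exactly one bit of information per symbol,
# while a constant message carries none.
print(shannon_entropy("01"))    # 1.0
print(shannon_entropy("0000"))  # 0.0
```

The measure is agnostic to meaning: it quantifies surprise, not relevance, which is precisely the gap that intelligence, and information intelligence, must fill.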
As we see, hear, taste, touch, intuit, and otherwise sense more, we are susceptible to more avenues of influence. The internet began with plenty of optimism about its potential to advance egalitarian values. Aside from the monopolistic marketing of goods and services, democratic governments were slower to recognize the value of the internet for propaganda, radicalization and divisiveness. It follows that we need broad operational concepts to create and counter information that targets assorted audiences.
Consistent with domestic laws, operational concepts need to anticipate a variety of processes considered outside one’s writ of responsibility. These include how local events are interpreted globally, how military operations affect public diplomacy and business, the impact of operations on cultures, and how narratives will form. The scope of joint military doctrine addresses all of those activities either directly as a type of warfare or operation (information, counterinsurgency, stability, e.g.) or indirectly by coordinating with the responsible agency (the State Department’s public diplomacy and strategic communication, e.g.). At best, the information effects of operations are assessed to the extent that Government departments derive their own strategies that fit into the language (typically the verbs) of the national security strategy. Did the operation protect, promote, advance, secure, disrupt, deny, and so forth?
In contrast, adversary operations consider feasible information effects, then generate operations to achieve them. One way is to exploit different political authorities, such as Title 10 and Title 50. While no statute prohibits mutual support among agencies with different sources of authority, mutual support requires coordination, often under different oversight. For instance, the legal requirement for “intelligence” operations under the CIA to include Congressional notification, where “military” operations under the DoD do not, provides a venue for foreign influence on Congress. This vulnerability increases when the Presidency and Congress are controlled by different political parties. Given those circumstances, the selection of a feasible effect such as “sow discord” is easy. To compete with that, we need to understand how information is created, believed, and applied.
Our proposed definition of information as inputs, changes and outputs requires a broader scope of context, or intelligence. Joint intelligence doctrine defines intelligence as new understanding of information, the purpose of which is to provide assessments to commanders. This imprecise definition does fit the previously noted DoD dictionary definition of intelligence as product, activities and organization.
Joint intelligence doctrine then goes on to apply the organizational definition of intelligence to describe the “nature of intelligence” at strategic, operational and tactical “levels of war.” That is, strategic intelligence is what is provided to senior leaders on developments that could impact national security interests; operational intelligence is what is provided to combatant and joint force commanders; and tactical intelligence is what is provided to commanders, planners and operators for battles, engagements and missions. This level-of-organization definition requires holistic judgment about information effects because any person, operation or engagement can create significant impact.
To assess impact, we distinguish between measures of performance (MOP) and measures of effectiveness (MOE).
A measure of performance (MOP) assesses the degree to which an activity or task is being conducted to standards. The standards need to be relevant to the context. In the IE, a MOP can be for any actor, not just “friendly” forces. A quantitative example given in a US Joint Chiefs of Staff best practices paper is “number of IEDs discovered.” A qualitative example is “integration with supporting commanders.”
A measure of effectiveness (MOE) assesses desired changes in behavior, capability or environment based on mission objectives. The best practice quantitative example related to the preceding MOP is, “number of IED discovered vs number of IED effective attacks,” while the qualitative example is, “sentiments of host nation leaders and populace on the security situation.” In both sets of examples, the MOE relies on the MOP as a baseline, then adds information that relates to desired changes.
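The relationship between the two best-practice examples can be made concrete in a few lines. The function and figures below are hypothetical illustrations of the MOP-to-MOE step, not drawn from the Joint Staff paper:

```python
def ied_interdiction_moe(discovered: int, effective_attacks: int) -> float:
    """MOE sketch: the share of emplaced IEDs found before they were used.
    Builds on the MOP baseline (number of IEDs discovered) by relating it
    to the behavior we want to change (effective IED attacks)."""
    total = discovered + effective_attacks
    if total == 0:
        raise ValueError("no observations in the reporting period")
    return discovered / total

# Hypothetical reporting period: 45 IEDs discovered, 15 effective attacks.
print(f"{ied_interdiction_moe(45, 15):.0%}")  # 75%
```

The design point is that the MOP (a count) only becomes an MOE when it is placed against the desired change; a rising discovery count means little if effective attacks are rising faster.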
Adaptive technology and leadership can enhance assessment. The reverse is also true. Interconnectivity in the IE means that the MOE approach to assessing progress or regress of desired change permits any organizational level to be strategic with respect to goals. Technological advances in distributed capabilities enable more actors to generate strategic effects. Many of the same technologies also enable commanders to micro-direct rather than distribute control. When this happens, who is thinking about third and fourth order effects? Effective leadership must ensure that the answer is not “no one.”
At the same time joint intelligence doctrine equates organizational levels of intelligence with levels of war, it prescribes a general relationship among data, information and intelligence:
Sensor technology has compressed this relationship. The ability to do on-board collection and processing, as well as networked intelligence functions, means that some sensors can provide data in context. Moreover, the functions of collecting, processing and exploiting, and analyzing and producing, can happen more quickly and systematically. As multi-domain operations create networks of these processes, there is a pressing need for leaders to ensure human responsibility in determining the adequacy of the information.
This task is more than machines filtering and fusing intelligence. The process requires trusting machine data and human-machine interfaces—what some refer to as explainable intelligence. In this effort, the human-machine interface is crucial to monitoring indicators of machine-processed information. As AI circuitry becomes more complex, humans will add context and judgment to processes whose contours we understand only as concepts. This pattern of abstraction is similar to monitoring an aircraft’s indicators of performance while flying an instrument approach to land in zero-visibility weather, without detailed knowledge of the aircraft’s operating systems. However, the mechanical interaction of engine, flight control, navigation, pitot-static, communication, hydraulic, electrical, fuel, pressurization, and brake systems is complicated rather than complex. For trust to pertain, AI indicators of performance and effects must suit the complex context.
There are stark differences among competitors regarding the trust and context of information. The contest is to contextualize data and process information for strategic effects. Russian reflexive control and Chinese informationized warfare create malign influence that democracies struggle to counter. The race for AI and quantum applications is on as well. We can assess tangible destruction and construction, but how do we assess the information intelligence of hybrid operations? Russian warfare in Ukraine and Chinese warfare in the South China Sea blend both.
Strategists need to understand how actors manipulate trust and shape context so we can contest and preempt effects. Problematically, many of our competitors in the information arena do not respect the same red line of violence between “peace” and “war” that our political-legal system perpetuates. Operations in the information environment include conflict short of, and combined with, armed force. The previous two chapters provided examples from Chinese and Iranian strategy. An expanding literature on narratology and intellectual v. emotional processing provides insights into how dis-, mis- and mal-information work insidiously to shape behavior. As the transmission-centric US joint doctrine transitions to counter such techniques, the central issue is trust.
How should we assess broad information intelligence in a contested environment of distributed operations struggling for control? We start by considering how to assess trust with respect to information. Consistent with our definition of information as the values of characteristics, we propose three basic elements of trust—motive, validity and context.
Motive differs from intention in that motives relate to deeply held values more than to contingent interests. Scholars and practitioners distinguish values from interests in terms of “as the world ought to be” idealism v. “as the world is” realism, and long-term v. short-term, but consistently mix the two concepts. The great historian Thucydides wrote of realist fear, honor and interests driving human behavior during the Peloponnesian War, but also advocated ideals of justice and moderation. Former Prime Minister and Foreign Secretary Lord Palmerston is quoted as saying that Britain has neither permanent allies nor enemies, only permanent interests. But he also advocated selective liberalism in Europe.
The utility in distinguishing between what is more or less changeable is based on an assumption about trust. That is, shared consistency is more trustworthy than shared circumstances. Common motives or principles are a more durable basis for trust than shared intentions or plans. For machines, given transparent rules there is no distinction between programmed “values” and process-dependent “interests.” For humans, however, it’s generally harder to exchange different values than different interests.
There are many measures of statistical validity, which for our purposes may be thought of as reliability and relevance. Is the information an indicator (reliable) of what it represents (relevant), or is it being selectively misused? Depending on the availability of data, information is prone to production from unrepresentative samples or from machine-processed selections that must be put into context. The need to ascertain reliability and relevance closely conforms to a concept known as construct validity—the ability to generalize:
When we infer from a specific program or activity to a more general result or effect, we have constructed a cause and effect argument. The key question, “Can we generalize to the constructs?”, relates to the Hierarchy of Effort in Figure II 1-2. Namely, do our activities, which are designed to influence will and capability, cause the desired effects?
There are many contexts into which we can place data, to construct information. Contextualizing data requires expertise. Placing electronic emissions into the context of known signatures takes a different expertise than interpreting source code or recognizing cultural nuances in spoken language. Technical knowledge and red-teaming can help assign significance to data from the perspective of users and competitors linked to the data in some way. Generally the context of interest to a commander includes the impact of the data, its cost, and associated risk.
We relate the information to our strategy that links activities to effects to objectives to goals. That is, how important an impact will the information likely have?
We estimate what is relevant to the desired effect and the environment. Costs are DIMES-wide and vary by relative importance to impact, time frame, and actions or forgone opportunities.
The following depiction is one way to integrate information intelligence into assessment. In this 3-D view, we represent trust and break context down into impact and cost, then consider trust, impact and cost together along a low-high range of quantitative or qualitative values. The three factors can be weighted for their relative importance and a composite score assigned to help judge what the relevant information intelligence is for the situation at hand.
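One hedged way to operationalize such a composite score is sketched below, assuming each factor has already been normalized to a 0-to-1 scale. The weights are placeholders for a commander's judgment, and the cost inversion (lower cost scores higher) is our own design choice:

```python
def composite_i2_score(trust: float, impact: float, cost: float,
                       weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Weighted composite of the three assessment factors, each scored
    on a low-high 0..1 scale. Cost is inverted so that lower cost
    contributes a higher score. Weights are illustrative only and
    should reflect the decision maker's intent for the situation."""
    w_trust, w_impact, w_cost = weights
    return w_trust * trust + w_impact * impact + w_cost * (1 - cost)

# Hypothetical item: high trust, moderate impact, moderate cost.
print(round(composite_i2_score(0.8, 0.6, 0.4), 2))  # 0.7
```

A single scalar is convenient for triage, but as the quadrant view below suggests, it can hide the very trade-offs (such as reliability versus relevance) that matter most, so the components should remain visible to the assessor.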
For a 4-D view, we added a second diagram to illustrate how selected components of trust—validity’s reliability and relevance in this case—can be comparatively displayed. Instead of low-to-high trust, trust may be depicted in quadrants of low-to-high relevance and low-to-high reliability. This can be quite useful to inform decisions in contexts where reliability and relevance do not march in lock step. They often vary, such as when more reliable information or people are less relevant, and vice versa.
An AI that uses latent semantic analysis and singular value decomposition could find relationships among our components of trust to include motive (programmed values), validity (reliability and relevance) and context (impact and cost). Intention could be analyzed via algorithms or hyper-dimensional relationship analyses of interests. The interactive 3-D display would show nodes and linkages among ideas and people that, once selected, show 4th and subsequent dimensions of relationships.
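A toy sketch of that idea follows, using numpy’s singular value decomposition on a made-up term-by-report matrix. The terms, scores, and chosen rank are all illustrative; production latent semantic analysis would use a large corpus and tuned dimensionality:

```python
import numpy as np

# Toy term-document matrix: rows are trust-related terms, columns are
# four hypothetical intelligence reports scored for each term's salience.
terms = ["motive", "reliability", "relevance", "impact", "cost"]
X = np.array([
    [2, 0, 1, 0],   # motive
    [1, 3, 0, 1],   # reliability
    [0, 2, 1, 1],   # relevance
    [1, 0, 3, 2],   # impact
    [0, 1, 2, 3],   # cost
], dtype=float)

# Latent semantic analysis: truncated SVD projects terms into a
# low-rank concept space where co-occurring components cluster.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vectors = U[:, :k] * s[:k]   # each term embedded in 2-D concept space

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two term vectors in the latent space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Terms that co-occur across reports end up close in the latent space.
print(cosine(term_vectors[3], term_vectors[4]))  # impact vs cost
```

The low-rank projection is what would feed the interactive display described above: nodes positioned by their latent-space coordinates, with links weighted by cosine similarity.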
The age-old question of how we can trust information and people becomes more challenging as automated technologies write code, synthesize new solutions, and exceed the capability of independent human judgment. How do we trust machine-processed information? Perhaps warning lights on the parameters of ideas, similar to ops limits on mechanical systems? We answered this question by defining information intelligence in contested contexts, advocating information-controlled operations with broad intelligence, and discussing multi-dimensional assessment.
Beginning with the integrity of contextualized data and processed information, we described the competition over four purposes of intelligence. These purposes are to be better than competitors in collection, tradecraft, design and leadership. The purposes in turn should become ways and means to control operations for enduring information effects.
Beyond the entrenched mindset of combined arms, it’s the combined effects in the IE that outlast battlefield victories. Effects interact in many ways, some of which are synergistic. Several ICSL papers systematically describe causative and preventive, cooperative and confrontational, physical and psychological effects as combinations of persuasion, dissuasion, inducement, security, compellence, deterrence, coercion, and defense. While the language of deterrence and defense dominates US security strategy, adversaries combine all of the effects in peace-war.
These combined effects out-maneuver the narrow doctrinal confines of armed conflict as well as the hopeful “when deterrence fails” assumption of war versus peace. Strategists need to be aware of all effects as the explosion of information broadens the need for intelligence and as machine processing capabilities proliferate. At the same time, the integrity of information is a basic source of influence and security.
Information intelligence requires updating current doctrine in light of new technologies because doctrine and operational concepts influence assessment. Where are we winning and losing? The organizational “nature of war” definition, for instance, does not capture what’s going on in the IE. Certainly doctrine lags new contexts during times of rapid change. And doctrine is supposed to be authoritative, not prescriptive. However, many analysts, planners, operators and commanders consider doctrinal frameworks as places to begin thinking about problems.
To ensure that doctrine does not end thinking, strategists (analysts, planners, operators) should consider information as a process of change. Leaders need to understand how authorizations and authorities affect the development of intelligence for multi-domain, all-effects strategies. Continuing our present course of narrow doctrine and “when deterrence fails” warfare leads to integrating separate efforts that, at best, converge. Convergence without synergy is not likely to produce superior information effects against adaptive competitors or, increasingly, AI agents. On top of that, if control is over-centralized rather than distributed, information flow will become constricted. This predicament leads to less self-awareness and is a strategic vulnerability.
By combining the dual meanings of information and intelligence into a concept of I2, strategists can overcome two popular premises that destroy initiative. The first premise is that information is relevant when it supports or conducts operations. The second is that intelligence is about collecting data, then analyzing it as a product for operations. Both assumptions limit how warfare is waged to win information effects.
Instead of mustering specialized information-related capabilities for desired effects, strategists should be designing interactions of information-related effects that provide asymmetric advantages. Information-related effects focus on superior purposes of strategy— effects, objectives, priorities, goals. This broadens the options compared to information-related capabilities. Kinetic capabilities should support information effects, not just the other way around. Courses of action should integrate combinations of information effects, not just combinations of information-related capabilities.
The integrity of contextualized data and processed information is an operational advantage for achieving the information effects of operations. Without winning those, operations become pyrrhic and pointless. Because information is a process, its arena is a continuous competition. Clearly, better capabilities must be accompanied by better strategy, which relies on I2.
The next article will discuss how to implement a superior strategy of combined effects in the information environment by integrating analysis, planning and operations.
Joint Publication 2-0: Joint Intelligence, Chairman of the Joint Chiefs of Staff, 22 October 2013, p. ix, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp2_0.pdf.
 The human brain has approximately 86 billion neurons, compared to 10,000 neurons in an artificial neural network. https://www.verzeo.com/blog-artificial-neural-network-vs-human-brain.
 Published research on neural networks may be found online in the journal, Neural Networks. See Article Collections at https://www.sciencedirect.com/journal/neural-networks.
 See discussion of connectionism in M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos, p. 289.
Brian W. Everstine, “USAF Developing ‘Cyber Flight Plan’ to Determine Intel’s Future,” Air Force Magazine, 4 September 2019, https://www.airforcemag.com/USAF-Developing-Cyber-Flight-Plan-to-Determine-Intels-Future/.
 For instance, The Future of Life Institute’s focus areas of existential risk include AI safety, nuclear weapons, biotechnology and climate change. See https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/.
 Samuel Bendett, “What Russian Chatbots Think of Us,” Defense One, 2 September 2019, https://www.defenseone.com/technology/2019/09/what-chatbots-think-about-us/159583/.
 Joint Publication 3-13: Information Operations, Joint Chiefs of Staff, 20 November 2014, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_13.pdf.
Joint Publication 2-0: Joint Intelligence, Joint Chiefs of Staff, 22 October 2013, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp2_0.pdf.
 The National Intelligence Strategy of the United States of America, Office of the Director of National Intelligence, 2019, pp. 8-9, https://www.dni.gov/files/ODNI/documents/National_Intelligence_Strategy_2019.pdf.
 An exceptionally broad, overlapping approach to collection may be found in Wayne Michael Hall and Gary Citrenbaum, Intelligence Collection: How to Plan and Execute Intelligence Collection in Complex Environments, Praeger Security International, 2012.
 An example is structured analysis. See Richards J. Heuer, Jr. and Randolph H. Pherson, Structured Analytic Techniques for Intelligence Analysis, CQ Press, 2010.
 For a business presentation on designing how a system works to clarify, interact, explore and envision, see Dave Malouf, Design Operations, YouTube presentation, March 2017, https://youtu.be/GzsUk2k24hk.
 A Marine Corps example that focuses on vision, communication and energy is the Commander’s Leadership Handbook, January 2016, https://www.usmcu.edu/Portals/218/LLI/CCSPW/Commanders%20Leadership%20Handbook.pdf?ver=2019-01-31-120930-877.
 Joint Concept for Operations in the Information Environment (JCOIE), Chairman of the Joint Chiefs of Staff, 25 July 2018, p. I-4, https://www.jcs.mil/Portals/36/Documents/Doctrine/concepts/joint_concepts_jcoie.pdf?ver=2018-08-01-142119-830.
The Center for Internet Security provides ongoing analysis and recommendations regarding the cyber-intrusion that exploited a SolarWinds software application in December 2020: The SolarWinds Cyber-Attack: What You Need to Know, https://www.cisecurity.org/solarwinds/.
 David A. Deptula, Effects-Based Operations: Change in the Nature of War, Aerospace Education Foundation, 2001, https://secure.afa.org/Mitchell/reports/0901ebo.pdf.
1947: Invention of the Point-Contact Transistor, Computer History Museum, https://www.computerhistory.org/siliconengine/invention-of-the-point-contact-transistor/.
Joint Publication 2-0: Joint Intelligence, Chairman of the Joint Chiefs of Staff, 22 October 2013, p. ix, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp2_0.pdf.
Ibid., p. I-24.
 “Insights and Best Practices Focus Paper: Assessments and Risk,” Third Ed., Joint Staff J7, March 2020, p. 4, https://www.jcs.mil/Portals/36/Documents/Doctrine/fp/assessment_risk2020.pdf?ver=2020-03-31-150705-920.
Joint Publication 2-0: Joint Intelligence, Chairman of the Joint Chiefs of Staff, 22 October 2013, p. I-2, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp2_0.pdf.
 Jan Christoph Meister, The Living Handbook of Narratology, https://www.lhn.uni-hamburg.de/node/48.html.
 A usefully map-rich version of Thucydides’ tome is The Landmark Thucydides, Robert B. Strassler, ed. and Richard B. Crawley, trans., Free Press, 2008.
 For a contrast in Palmerston’s liberal principles and realist practices, see David Brown, Palmerston: A Biography, Yale University Press, 2012.
 William M.K. Trochim, The Idea of Construct Validity, https://conjointly.com/kb/construct-validity-idea/.
 Savant X is such an AI and may be found here: https://savantx.com/.