Our need to scrutinize machine-processed information becomes acute as automation outstrips human understanding.
If machine-learnt becomes machine-taught, we abdicate power to make responsible decisions.
At the same time, our imperative for technological advantage increases dependence on complex processes such as the “collaborative sensing grid.”
Uncertainty persists. Interactions among humans and self-aware chatbots already produce unpredictable effects that Russian scientists describe as “a fundamentally different reality.” Clearly, we need to judge what and whom to trust. We need a new kind of integrity: information intelligence.
Integrity is the foundation for cooperative and confrontational relationships because it confers a degree of predictability, whether in keeping one’s word or in securing internet protocols. All relationships are subject to change, but without integrity, expectations and data can be manipulated to exploit implicit trust. Moreover, the explosion of available venues provides inexhaustible possibilities for deception that multiply uncertainty.
Consequently, information intelligence is vital as competitors seek more data to place into diverse contexts, and assess more information with potential to influence interdependent outcomes. How can we trust information and people as we thrive on open exchanges — ideas, markets, information, and intelligence?
This paper develops a four-fold answer by: defining information intelligence; placing information intelligence into its operationally contested context; critiquing shortfalls in joint doctrine for information (Joint Publication 3-13) and intelligence (Joint Publication 2-0); and discussing assessment.
Information intelligence (I2) is about what and whom to trust. Let’s break down the definitions of information and intelligence separately, then try combining them into a more holistic definition.
If we try to think of something that is not information, what would we think about? Meaningless data is not yet information. The key is context. Data has potential to mean something if used in a context. This acquired meaning includes false data, such as the flood of disinformation splattered out from Russian bots and trolls.
Information is also ubiquitous (see Gleick’s The Information), with innumerable contexts and definitions. Maybe that is why the DoD Dictionary of Military and Associated Terms offers circular definitions of information (definitions that use “information” to define “information”). The reference lists types of information and concepts that relate to it: activities; environment; info power; info-related capabilities; info warfare; etc.
One broad yet non-circular “academic” definition of information is:
That is, processes produce outputs with characteristics on which we can place a quantitative and/or qualitative value.
So, let’s use the following definition of information because it is brief, but also accommodates technical-social processes and meanings:
characteristics of data put into context
The values of characteristics apply when we assess information in terms of qualities or quantities.
Unlike information, the term intelligence does have a disprovable (we can test what it is not) DoD dictionary definition, and it includes products, activities, and organizations.
Intelligence is: the product resulting from the collection, processing, integration, evaluation, analysis, and interpretation of available information concerning foreign nations, hostile or potentially hostile forces or elements, or areas of actual or potential operations; activities that result in the product; and organizations engaged in such activities (115).
Combining the preceding two definitions yields the following distinction: information is data in context, while intelligence is collecting and thoughtfully processing information. Why care?
Because we need to judge how we can trust information before we process it into intelligence. So, let’s create a definition of information intelligence:
Information intelligence is the integrity of contextualized data and processed information
Next, we place this I2 definition into a relevant context — the highly interactive and contested environment of alternative data, information, and intelligence.
I2 is applicable to all types of intelligence, but is particularly meaningful for strategic intelligence and anticipatory intelligence. The US National Intelligence Strategy (8, 9) defines those types as:
The question of what and whom to trust applies to all situations because uncertainty is pervasive. In the information environment (IE), the overriding context of trust is that it’s contested. Actors fight for the kind of information and people they need to compete and prevail.
Four types of competition become apparent when we consider four contested purposes of strategic and anticipatory analysis.
These four fights interact with one another, regardless of whether we execute them that way (as a joint force) or not. As we collect, conduct tradecraft, design operations and develop leaders, the integration of those processes is a competition to out-think and out-execute rivals.
I2 is crucial because contextualized data and processed information enable intelligence to achieve goals. Without information intelligence, operations are prone to produce inconsequential effects, generate unanticipated consequences, and fail to anticipate strategic surprise. Let’s see what information and operations doctrine says about information and intelligence.
Information operations doctrine has been expanding to include more variety of information-related capabilities so that the joint force commander can have all appropriate tools available. Joint IO doctrine describes information effects as influence (a broad term) and as a narrower set of effects—disrupt, corrupt or usurp (adversaries) and protect (our own). The Joint Weapons School teaches effects in terms of fires that are direct, indirect, cumulative, cascading, unintended, lethal and non-lethal.
Informed by the Joint Concept for Operations in the Information Environment, information-related capabilities have been updated to include “Tools, techniques, or activities using data, information, or knowledge to create effects and operationally desirable conditions within the physical, informational, and cognitive dimensions of the information environment” (I-4). How do we do this?
Executing this conceptual approach to using and creating information is limited more by attitudes than by technology. This self-imposed limit confers advantages on those who are willing to adopt the approach.
Unfortunately, a well-entrenched mindset bars the way:
information is subordinate to operations
The problem with this mindset about “operations” is that it fails to realize a full range of effects. The mindset is fatal to competitive strategy because nuances in operations can create considerable differences in information effects.
Consider advisor missions that teach campaign-level command and control of capabilities, compared to missions that instruct maneuvers and marksmanship. Both sets of skills need to be learned for survival on the field of battle, but C2 is needed to create strategic advantages. Why? Information should control operations. Decisions about how to adjust operations for desired effects depend on information. In this important sense, information can, and ought to, be supported by operations. How?
Asking the question “what do we, and relevant competitors, want to cause and prevent?” leads to the kind of information we want as desired effects. Forces need to survive and be able to inflict kinetic destruction, but what are the next-order and enduring effects—what behaviors do we need to prevent and/or cause? That is the information needed to fashion more combinations of consequential operations.
Routinized organizations are slow to recognize the need to change operations to be more effective. Indeed, it has been nearly 20 years since now-Lt Gen (ret) Dave Deptula argued that the Information Age precipitated no less than a change in the nature of warfare. We have made phenomenal improvements in rapid precision fires, but more work remains in gaining acceptance of the primary consideration in strategy: specifying feasible strategic effects.
Quantum and Information Age technologies will increasingly punish such strategic indecision while opportunities to control an adversary’s ability to act broaden. Unless human nature changes, threats will adopt any available means to achieve desired effects.
The US National Security Strategy (NSS) calls for information effects. All of the objectives under NSS Goal 4—Advance American influence—seek favorable information conditions. The other three national goals are also informational effects to be operationalized: protecting the homeland, promoting prosperity, and establishing peace through integrated power.
More leaders are conceding that “information is operations” as it becomes obvious that information creates effects. Leadership should advocate, more than follow, innovation. There are plenty of commercial examples.
In 1948, for instance, Bell Labs announced that its new piece of hardware—a tiny electronic semiconductor marketed as a “transistor” to replace vacuum tubes—“may have far reaching significance in electronics and electrical communication.” That same year, Claude Shannon published a theory of communication that created the bit (binary digit, 0 or 1). This concept provided a quantifiable measurement of information which, combined with the transistor, revolutionized electronics and expanded human awareness with reams of information.
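Shannon’s quantification of information can be illustrated with a short calculation of entropy, the average number of bits per symbol in a message; the example messages below are invented for illustration.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information content per symbol, in bits (Shannon, 1948)."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Four equally likely symbols carry the maximum 2 bits per symbol...
print(shannon_entropy("abcd"))       # 2.0
# ...while a highly repetitive message carries far less.
print(shannon_entropy("aaaaaaab"))   # ~0.54
```

The point mirrors the paper’s argument: the same stream of symbols can carry much or little information, and that difference is measurable.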
As we see, hear, taste, touch, intuit, and otherwise sense more, we seek and are vulnerable to more avenues of influence. It follows that we need broad operational constructs to create information appropriate to relevant audiences. The concept should include a wide variety of operations such as: conveying local events on a global scale; broadening the scope of diplomatic issues; increasing public awareness of economic competition; exposing groups to different cultural practices; exploiting the band-width of electromagnetic waves; countering and spreading internet memes; and incentivizing humanitarian norms in social behavior.
Whether the above influence operations are conducted by humans and/or machines, the information generated provides new opportunities, poses new threats, and changes how we think about operations. Meta-cognition, big data, uncertainty, and the hopes and fears of artificial intelligence, for instance, involve arranging, processing and assigning meaning to data. To compete well, we need to understand the many ways information is created, believed, and applied.
Joint intelligence doctrine defines intelligence as new understanding of information, the purpose of which is to provide assessments to commanders (JP 2-0, ix). This rather narrow definition fits within the DoD dictionary definition. Granted, doctrine is supposed to be authoritative rather than prescriptive, so why not have a more focused definition? The problem is that in practice, many misconstrue doctrine as a place to end rather than begin thinking. Narrow doctrine gets institutionalized and constrains thinking unless we invite critique.
For instance, JP 2-0 suffers from an organizational definition of the strategic, operational, and tactical levels of war (I-24). That is, strategic intelligence is what is provided to senior leaders and could impact national security interests; operational intelligence is what is provided to combatant and joint force commanders; and tactical intelligence is what is provided to commanders, planners, and operators for battles, engagements, and missions. How is this a constraint?
In the Information Age, any organizational level can create information and operations that impact national security. The validity of this assertion becomes clear when we take a perspective of measuring effects and not just measuring performance.
A measure of performance (MoP) assesses the degree to which an activity or task is being conducted to relevant standards. In the IE, this measurement can be for any actor, not limited to “friendly” forces. A quantitative example given in a Joint Chiefs of Staff Insights and Best Practices Paper (4) is “Number of IEDs discovered,” while a qualitative example is “Integration with supporting commanders.”
A measure of effectiveness (MoE) assesses desired changes in behavior, capability or environment based on mission objectives. The best practice quantitative example related to the preceding MOP is, “Number of IED discovered vs number of IED effective attacks,” while the qualitative example is, “Sentiments of Host Nation leaders and populace on the security situation.” In both sets of examples, the MoE adds more information that relates to desired changes in support of goals.
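The difference between an MoP and an MoE can be sketched numerically. Following the JCS best-practices IED example cited above, the weekly counts below are hypothetical, invented only to show how the MoE adds information the MoP alone conceals.

```python
# Hypothetical weekly counts (illustrative only, not real data).
ieds_discovered = [12, 15, 18, 20]   # MoP: is the discovery task being performed?
effective_attacks = [10, 9, 9, 8]    # additional data needed for the MoE

# MoP alone: discoveries are rising, which looks like unqualified success.
print(sum(ieds_discovered))  # 65

# MoE: discoveries relative to effective attacks, tracking the desired
# change in the environment (fewer attacks succeeding over time).
for found, attacks in zip(ieds_discovered, effective_attacks):
    print(f"{found} discovered / {attacks} effective = {found / attacks:.2f}")
```

The rising ratio, not the raw count, is what speaks to mission objectives, which is the paper’s point about measuring effects rather than just performance.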
The MoE perspective on assessing desired change permits any organizational level to be strategic with respect to goals. Technological advances in distributed capabilities enable more actors to generate strategic effects. Many of the same technologies also enable commanders to micro-direct rather than distribute control. When this happens, who is thinking about third and fourth order effects? Our answer must not be “no one.”
At the same time that doctrine describes an over-rigid “nature of intelligence” (JP 2-0 Chapter 1, which includes “levels of war”), it prescribes a general relationship among data, information, and intelligence (I-2) that modern sensors have compressed.
The ability to do on-board collection and processing, as well as networked intelligence functions, means that some sensors can provide data in context. Moreover, the functions of collecting, processing and exploiting, and analyzing and producing can happen more quickly and systematically. As multi-domain operations create networks of these processes, we need to be responsible for determining the adequacy of the information.
This task is more than machines filtering and fusing intelligence. The process requires trusting machine data and human-machine interfaces—what some refer to as “explainable intelligence.” In this important effort, the human-machine interface is crucial to understanding machine-processed information. As in human intelligence, and particularly as AI becomes more complex, humans add context and judgment.
Competitors are contextualizing data and processing information for strategic effects. Russian reflexive control and Chinese informationized warfare are creating malign influence we have not adequately countered. The race for AI and quantum applications is on as well. We can assess tangible destruction (Russia in Ukraine) and construction (China in disputed territory of the South China Sea), but how do we assess the information intelligence of advanced hybrid operations?
If we can understand how actors contextualize data (which can be false data, too) and process that into information (narratives), we can develop strategies to contest and preempt effects. This may entail conflict short of, or combined with, armed force. There is a vast and expanding literature on narratology (section 3.5) and narrative warfare about the many contexts, intellectual and emotional processing, and strategies of narratives. Our transmission-centric joint doctrine may be in a supertanker turn to embrace such techniques.
How should we approach assessing broad information intelligence in a contested environment of distributed operations struggling for control? Let’s start by considering how to assess trust with respect to information in terms of validity and context.
Validity. There are many measures of statistical validity, which most people translate as reliability. For our purposes, validity may be thought of as reliability and relevance: is the information an indicator (reliable) of what it represents (relevant), or is it being selectively misused? There is so much data that information is based on a limited selection of data put into context (or no data at all).
This judgment of reliability and relevance closely conforms to a broad interpretation of “construct validity” — the ability to generalize. A good way to visualize this, because it’s easy to relate to lines of effort, is the following depiction. When we infer from a specific program or activity to a more general result or effect, we have constructed a cause and effect argument. Note the key question: “can we generalize to the constructs?”
Context. There are many contexts into which we can selectively place data, to construct information. Contextualizing data often requires expertise. Placing electronic emissions into the context of known signatures takes a different expertise than interpreting source code or recognizing cultural nuances in spoken language. To illustrate a context, let’s assume that the following is important to a commander: the impact of the data, its cost, and its trustworthiness as information. To become information, the data may be machine-processed or human expertise-derived.
Next, let’s consider impact and cost, matters of interest in most assessments.
Impact. We relate the information to our strategy that links activities to effects to objectives to goals. That is, how important an impact will the information likely have?
Cost. We estimate what is relevant to the desired effect and the environment. Costs are DIMES-wide and vary by relative importance to impact, time frame, and actions or forgone opportunities.
The following depiction illustrates that impact, cost, and trust (validity and context) can be judged together along a low-high range of quantitative or qualitative values. The three factors can be weighted for their relative importance and a composite score computed to help judge what the relevant information intelligence is for the situation at hand. The same method could apply to individuals and groups. Key questions: how trustworthy, and what is the impact of, their information or intent; what are the costs involved?
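The composite judgment described above can be sketched as a simple weighted score. The 0-to-1 scales, the particular weights, and the inversion of cost (so that lower cost raises the score) are all hypothetical modeling choices, not doctrinal figures.

```python
def composite_score(impact: float, cost: float, trust: float,
                    weights: tuple) -> float:
    """Weighted composite of impact, cost, and trust (validity and context),
    each scored on a low-high 0..1 scale. Cost is inverted so a lower
    cost produces a higher score. Weights reflect relative importance."""
    w_impact, w_cost, w_trust = weights
    total = w_impact + w_cost + w_trust
    return (w_impact * impact + w_cost * (1 - cost) + w_trust * trust) / total

# Hypothetical case: high-impact, moderately costly, fairly trustworthy
# information, with trust weighted twice as heavily as impact or cost.
score = composite_score(impact=0.9, cost=0.5, trust=0.7,
                        weights=(1.0, 1.0, 2.0))
print(f"{score:.2f}")  # 0.70
```

Changing the weights changes the verdict, which is the method’s value: it forces the assessor to state explicitly how much impact, cost, and trust matter in the situation at hand.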
The age-old question of how we can trust information and people will become more problematic as automated technologies write code, synthesize new solutions, and challenge human judgment. We answered this question by providing a definition for information intelligence in contested contexts, advocating information-controlled operations with broad intelligence, and discussing potential ways of assessment.
Beginning with the integrity of contextualized data and processed information, we described the competitive context of collection, tradecraft, design and leadership. Information control over operations effectively weaponizes information to persuade, dissuade, deceive, induce, deter, and achieve other effects, both directly and indirectly. Operations should embrace such expansive effects as information broadens intelligence and machine processing capabilities proliferate. At the same time, the integrity of information is a basic source of influence and relative security.
Information intelligence requires re-interpreting current doctrine in light of new technologies. This need to update experience-based guidance is expected. Doctrine lags new contexts during times of rapid change. While doctrine is supposed to be authoritative and not prescriptive, most analysts, planners, operators and commanders embrace its models to begin thinking about problems.
To ensure that doctrine does not end thinking, we need to expand how we understand information and how we permit ourselves to develop intelligence for multi-domain, all-effects strategies. Merely continuing to integrate separate efforts will produce less proactive effects and less anticipatory intelligence. On top of that, if control is over-centralized rather than distributed, information flow will become constricted. This predicament leads to less self-awareness and is a strategic vulnerability.
By pulling the dual meanings of information and intelligence into a concept of I2, we question two popular premises that destroy initiative. The first premise is that information is relevant only when it supports or conducts operations. The second is that intelligence is about collecting data, then analyzing it as a product for operations. Both assumptions limit how we can wage and win complex warfare.
Instead of mustering specialized information-related capabilities for desired effects, we should be aiming for holistic interactions of information-related effects. The value of an Information-Related Effects concept is to focus on the superior purposes of strategy (effects, objectives, priorities, goals), thereby broadening our options: options such as kinetic capabilities supporting information effects, rather than presuming it’s the other way around. Courses of action should integrate combinations of information effects, in addition to combinations of information-related capabilities.
Meeting the information intelligence requirements of new technology creates operational advantages. Better capabilities must be accompanied by better strategy.