Predictive Intelligence: Do We Really Need It?

by Major Forrest Lamar Davis

In the extremely volatile "futures" market, Wall Street's mainframes, analysts, and extensive collection network struggle to predict the growth of a single soybean plant, achieving a degree of inexactness which leads many observers to call futures the most speculative of all investments. With tremendous data, files, and experience, many sportscasters attempt the same, establishing a similar record of mixed success. Many French generals predicted "Germany will attack through Belgium as they did during the Great War." We all know what happened when the Panzers breached the Ardennes.
Predicting the future is difficult; some would argue impossible. U.S. Army intelligence doctrine currently embraces a "capabilities-intentions" school of thought which demands not only an assessment of enemy capabilities, but an estimate of the enemy's "most likely" or "most probable" course of action (COA). The S2 should tell us what the enemy can do and what he will do. The S2 can and should predict the future.
Yet, by our own doctrinal definitions, the requirements for predictive accuracy are so stringent as to render prospects for success virtually nil. We cannot predict the future, but we can guess. And we do every time we brief the enemy's most likely or probable COA. The purpose of this essay is to argue that we can no more predict future enemy actions than Wall Street can precisely plot the demand for the soybean. "Predictive" in our context should mean anticipatory, future-oriented, and proactive, though it often purports to offer a definitive description of what will be. The difference is subtle, yet very significant. One definition feeds initiative while the other fuels reaction. In this essay, I will briefly outline three intelligence schools of thought, argue that we are straddling the "capabilities-intentions" fence, and then highlight why our current position promises significant risks with virtually no potential benefits. My conclusion is simple: we should cease the consideration of the "most likely" enemy COAs during our deliberate decisionmaking processes.

Three Intelligence Schools of Thought

Commanders and intelligence officers tend to embrace one of three intelligence schools of thought in terms of analysis. They are the descriptive-, capabilities-, and intentions-based approaches. The descriptive school states that S2s should describe the battlefield in factual terms (weather, terrain, and enemy order of battle) and let the commander form his own estimate. This school asserts that only the commander can appropriately assess the battlefield in its entirety, and intelligence is just one of the many sources of factual information. Critics of this system argue that this mindset makes the intelligence officer nothing more than a historian; they are probably right.
The capabilities school encourages S2s to extend their analysis to enemy capabilities relative to the friendly mission. Like descriptive adherents, these intelligence officers study the variables of weather, terrain, and enemy order of battle. They extend their analysis to answer a specific capabilities question: "What could the enemy do to keep us from accomplishing our mission?"
The intentions school suggests that S2s should go further still, not only ascertaining enemy capabilities relative to the friendly mission, but enemy intentions as well. S2s who follow this school of thought strive to answer the difficult "where, when, and in what strength" caveats which often garnish our priority intelligence requirements. They should design not simply enemy COAs, but "most likely" or "most probable" enemy COAs. S2s in this category "predict."
The three schools seldom exist in isolation and, due to their complementary nature, some overlap is common. S2s most comfortable with the descriptive school may at times venture into capabilities analysis, following essentially a "descriptive-capabilities" type of approach. S2s who are primarily capabilities adherents sometimes build products more akin to the intentions school, following a "capabilities-intentions" methodology. Current Army intelligence doctrine embraces the school of capabilities-intentions, with particular emphasis on intentions.

Straddling the Fence

Any commander given the choice between what the enemy "can do" or "will do" will choose the latter. "Can do" (capabilities) tells the commander what possibilities the enemy enjoys, each offering varying degrees of risk. "Will do" (intentions) gives the commander a single COA to counter, allowing him to reasonably concentrate his forces and avoid or mitigate risk. Perhaps this explains our current doctrinal position. We conduct capabilities analysis because it is in the realm of the possible. We emphasize intentions because our consumers demand more. We straddle the fence: one leg in the possible, the other in the unlikely. Our doctrine is capabilities-based, but intentions-oriented. Our capstone manual, FM 34-1, Intelligence and Electronic Warfare Operations, plainly states that
predictive intelligence should tell the commander what the enemy is doing, can do (capabilities), and his most likely course of action (intentions). 2
FM 34-130, Intelligence Preparation of the Battlefield, follows this lead and charges S2s with "identifying and developing threat courses of action in order of probability of adoption." 3 Command and General Staff College Student Text 101-5, Command and Staff Decision Processes, institutionalizes the concept further by requiring "COAs open to the enemy and the probable order of their adoption." 4 In short, intelligence doctrine requires the S2 to consider all possible enemy COAs and, next, leap from this analysis to provide most likely COAs, probability assessments, or, rather, predictions. I say "leap" from analysis to predictions as this is almost certainly what the intelligence officer will be forced to do.
According to FM 34-1, the analyst can be assured of effective predictions only when a number of overlapping conditions are met.
While our intelligence collection and analysis systems are the finest in history, we still lack the capability to satisfy these three requirements of predictive accuracy. Our machines do not give us a holistic view of the moral and physical domains of war; rather, they give us some snapshots of the enemy in various stages of deployment or redeployment. By analogy, our machines replicate the effects of a strobe light on a dance floor, leaving the analyst to determine if the awkward, sometimes contradictory images of the enemy represent attack, defend, reinforce, or withdraw where, when, and in what strength.
The variables defining how two opponents will act and react in a battle to the death are, I believe, too varied for human comprehension. Predictive accuracy requires perfect intelligence. We are smart and our intelligence machines are the finest in the world, but we have not arrived at perfection yet.

Risks of Predictive Intelligence

Given the likelihood that our predictive estimates will not prove accurate, what risks are we taking by continuing to straddle the capabilities-intentions fence? I see two. And given our prospects for predictive "inaccuracy," where is the value added in demanding that our S2s provide most likely or most probable COAs? I see none. Ironically, the risks predictive intelligence purports to mitigate are in fact increased by its use.
First, to the degree that commanders accept their S2s' predictions as reasonably accurate and plan accordingly ("the S2 said the enemy would come down avenue of approach A, so our main effort is here"), the command assumes mission risk. Through infiltration or combined arms maneuver, the enemy can attack from the north, south, east, west, and vertically in virtually any scenario and, unless the S2 can read the enemy's mind, we had best prepare for each. Weighing the main effort should stem from our assessment of the terrain, our capabilities, the enemy's capabilities, and our mission. We should weigh our main effort based upon facts, not on what we think the enemy might do.
Second, by linking friendly actions to supposed enemy actions, we risk passing the tactical initiative to the enemy. Agility and initiative demand that we move quickly inside the enemy's decision cycle and stay there. We do not react to him; he reacts to us. We seize and retain the initiative. Linking friendly actions to supposed enemy actions contradicts these most basic warfighting tenets. 9
So where is the value-added in demanding most likely enemy COAs? Do they help us weigh the main effort? Not without substantial risk. Do they enhance our abilities to execute doctrine? No. In fact, most likely COAs are no more probable than any other COA open to our thinking opponent. Most likely COAs are simply COAs, and should be considered equally with every other enemy COA capable of denying friendly accomplishment of the mission. The only value-added in pursuing this product of predictive intelligence is the illusion of certainty, which promises to evaporate the moment the first foot crosses the start point.

Conclusion

Our concern is not so much "What will the enemy do?" but "What can the enemy do to keep us from killing his soldiers, breaking his equipment, or seizing important pieces of terrain?" The difference is subtle, yet significant. This difference stems from two schools of thought; one seeks intentions, while the other looks for capabilities.
I argue that we need to plant our feet squarely in the realm of capabilities, leaving the pursuit of enemy intentions to those graced with holistic collection machines, integrated battlefield synchronization skills, and the rare ability to see the world through another's eyes. Predictive intelligence enhances risk and detracts from our execution of doctrine. There is no value-added in continuing to straddle the capabilities-intentions fence.
Commanders need to know. Demand that the S2s produce a most likely enemy COA, and you will receive their best estimates. Make that estimate the basis of plans, and you have leveraged your command toward risk. As an Intelligence Corps, we "see through a glass darkly" and cannot predict the future. But we can guess, and we do every time we brief an enemy most likely course of action.
We would do well to eliminate most likely or probable assessments from our deliberate decisionmaking process.

Endnotes

1. I have borrowed generously from Colonel Elias Townsend's RISKS: The Key to Combat Intelligence (Harrisburg, PA: The Military Service Publishing Company, 1955), especially with respect to "capabilities and intentions schools." My use of the term "descriptive school" stems from 1993 personal conversations with Colonel Richard D. Quirk, then Commander, 525th MI Bde, Fort Bragg, North Carolina.
2. FM 34-1, Intelligence and Electronic Warfare Operations, 27 September 1994, 2-7.
3. FM 34-130, Intelligence Preparation of the Battlefield, 8 July 1994, 1-7.
4. CGSC Student Text 101-5, Command and Staff Decision Processes, 1-4.
5. FM 34-130, 1-3.
6. Colonel Trevor N. Dupuy, Numbers, Predictions, and War (Fairfax, VA: Hero Books, 1985), 33.
7. FM 34-130, 1-3.
8. Ibid.
9. FM 100-5, Operations, June 1993, 2-6 through 2-9.
Major Davis is currently a project officer in Battle Command Battle Lab Huachuca. He has also served as S2, 2d Battalion (Airborne), 504th Infantry, 82d Airborne Division. He has a Master of Science and Administration degree from Central Michigan University and a bachelor of arts degree in Political Science from the University of Texas, El Paso. Major Davis can be reached by phone at (520) 533-4668 and DSN 821-4668, and via E-mail at davisf@huachuca-emh30.army.mil.