USAF INTELLIGENCE TARGETING GUIDE
AIR FORCE PAMPHLET 14-210 Intelligence
1 FEBRUARY 1998

Chapter 9
COMBAT ASSESSMENT



9.1. Overview. Effective campaign planning and execution require a continuing evaluation of the impact of combat operations on the overall campaign. Combat assessment (CA) evaluates combat operations effectiveness in achieving command objectives and recommends changes to tactics, strategies, objectives, and guidance. It has several sub-assessments, including mission assessment (MA), battle damage assessment (BDA), and munitions effectiveness assessment (MEA). The military end state, as written in the campaign estimate and modified during an operation, is directly linked with CA. CA compares the results of the operation to the objectives to determine mission success or failure within the guidance parameters. More important than a review, it looks forward to determine if additional missions are needed and/or if modifications to the objectives are necessary. The two pillars of combat assessment preparation are audience awareness and prior planning. The desired outcome is a satisfied commander, staff, and operators. Combat assessment is one concept with many implementations.

9.1.1. Combat assessment examines lethal and non-lethal strikes on the enemy (targets and target systems) to determine the effectiveness of operations. It answers the question: "How good a job are we doing, and what is next?" Combat assessment provides information to commanders, battle staffs, planners, and other decision makers. This wide audience complicates definitions and functions as applied across all components and joint staffs. Concept robustness is, therefore, all the more important for effective application. CA is focused on effectiveness, not efficiency (except to follow the military tenet of economy of force). To be an accurate measure of effectiveness, great care and significant effort should be placed on developing useful measures of merit to gauge the effectiveness of the air campaign.

9.2. Who Is It For. Combat assessment tailors products for a particular C2 environment. CA belongs to the warfighter. As such, it is the intent, or audience, of the assessment that molds the product. BDA has many uses and is the most visible product of combat assessment. It has the broadest audience. This overshadowing has unfortunately relegated the other assessments into obscurity. Munitions effectiveness assessment's audience includes the operators (e.g., aircrew, artillery crews), force packaging planners, and weapon logisticians and producers. Mission assessment's audience includes operational and strategic planners and commanders, both supported and supporting.

9.3. Where Is It Done. Combat assessment is done at all levels of the joint force (Figure 9.1.). The Joint Force Commander (JFC) establishes a dynamic system to support CA for all components. Normally, the joint task force J-3 will be responsible for coordinating CA, assisted by the joint task force J-2. The JFC and component staffs continuously evaluate the results of operations; these evaluations feed the overall evaluation of the current campaign. They must take into consideration the capabilities/forces employed, munitions, and attack timing in assessing specific mission and operations' success and effects against the specific targets attacked, target systems/objectives, and remaining enemy warfighting capabilities, relative to the objectives and strategy. Future enemy courses of action and remaining enemy combat capabilities should be weighed against established JFC and component targeting priorities to determine future targeting objectives and recommend changes in courses of action. Although CA is listed as the end of the joint targeting process, it also provides the inputs for process reinitiation and subsequent target development, weaponeering, force application, force execution, and combat assessment. CA must be done jointly by targeteers, operators, engineers, and intelligence analysts. It is an ongoing, dynamic process that drives current and future targeting decisions. It is done to various levels of detail, including near-term (e.g., diverts, alerts, targets of opportunity), mid-term (e.g., ATO cycle), and long-term (e.g., campaign phasing and analysis). It should come from all sources and be integrated into the battle management processes.

Figure 9.1. Levels of Combat Assessment.

9.3.1. Combat assessment is the purview of the warfighters. CONUS organizations have the best resources to exploit peacetime intelligence. However, the very large volume of theater intelligence and operational data developed during wartime is unavailable to CONUS organizations in its original form and timing. The majority of the data is in the theater after operations commence: specifically, aircraft weapon video, mission reports, operational reports, situation reports, tactical reconnaissance, UAVs, JSTARS, and some U-2 data. These are the important sources of combat assessment. Moreover, the commanders' responsibility and intent reside only with them in theater.

9.3.2. Components provide their own CA up the chain of command, laterally to supporting commands, and to those they are supporting. Tactical units do not often produce their own complete CA but provide input through their mission (air) and operations reports for BDA, MA, and MEA. As an example, Special Operations participates in combat assessment through debriefs of SOF after missions are completed. Mission success is gauged by whether the targeted enemy facilities, forces, actions, and/or capabilities were affected as desired. A review of battle damage assessment (for DA missions) is critical to the JFC and other components who can change the course of action or order restrikes in response to the current operational situation.

9.3.3. Combat assessment includes three principal sub-assessments: Battle Damage Assessment (BDA), Munitions Effectiveness Assessment (MEA), and Mission Assessment (MA). Battle Damage Assessment is the timely and accurate estimate of damage resulting from the application of military force, either lethal or non-lethal, against a predetermined objective. Munitions Effectiveness Assessment details the effectiveness of the munitions damage mechanisms and the delivery parameters. Mission Assessment addresses the effectiveness of operations for tasked or apportioned missions.

9.4. Battle Damage Assessment (BDA). BDA is the timely and accurate estimate of damage resulting from the application of military force, either lethal or non-lethal, against a predetermined objective. Battle damage assessment can be applied to the employment of all types of weapon systems (air, ground, naval, space, IW, and special forces) throughout the range of military operations. BDA is primarily an Intelligence responsibility with required inputs and coordination from Operations. Battle damage assessment is composed of physical damage assessment, functional damage assessment, and target system assessment (Figure 9.2.). BDA is the study of damage on a single target or set of targets. It is used for target study and target system analyses, reconstitution estimates, weaponeering, database updates, and for deciding restrikes. BDA was previously known in the air-to-surface arena as "bomb damage assessment," which still retains its own definition in JCS Pub 1-02. The BDA process answers the following questions (a notional way to capture the answers is sketched after the list):

  • Did the weapons impact the target as planned?
  • Did the weapons achieve the desired results and fulfill the objectives, and therefore purpose, of the attack?

  • How long will it take enemy forces to repair damage and regain functionality?
  • Can and will the enemy compensate for the actual damage through substitution?
  • Are restrikes necessary to inflict additional damage, to delay recovery efforts, or to attack targets not successfully struck?

  • What are the collateral effects on the target system as a whole, or on other target systems?
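
The Python sketch below is one hypothetical way to capture the answers to these questions for a single target and turn them into a restrike recommendation. The record fields, the acceptable_downtime_days parameter, and the decision rule are illustrative assumptions, not a prescribed BDA format.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class BdaReport:
    """Illustrative record of one BDA judgment on a single target."""
    target_id: str
    weapons_hit_as_planned: bool        # Did the weapons impact the target as planned?
    objective_met: bool                 # Did the attack fulfill its stated objective?
    est_repair_days: Optional[int]      # Estimated recuperation time, if known
    enemy_can_substitute: bool          # Can the enemy compensate through substitution?


def recommend_restrike(report: BdaReport, acceptable_downtime_days: int) -> bool:
    """Return True when the checklist points toward a reattack nomination.

    A restrike is suggested when the objective was not met, when substitution
    would negate the attack's purpose, or when the estimated recuperation time
    is shorter than the downtime the plan requires.
    """
    if not report.objective_met:
        return True
    if report.enemy_can_substitute:
        return True
    if report.est_repair_days is not None and report.est_repair_days < acceptable_downtime_days:
        return True
    return False


# Example: the objective was met, but the target can be repaired in 3 days while
# the plan needs it down for 14, so a restrike (or re-look) would be nominated.
print(recommend_restrike(
    BdaReport("TGT-0001", True, True, est_repair_days=3, enemy_can_substitute=False),
    acceptable_downtime_days=14))
```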

9.4.1. Physical Damage Assessment. A physical damage assessment is an estimate of the extent of physical damage to a target based upon observed or interpreted damage. This post-attack target analysis should be a coordinated effort among combat units, component commands, the JTF, the combatant command, and national agencies. Some representative sources for data needed to make a physical damage assessment may include the following: mission reports, imagery, weapon system video, visual reports from ground spotters or combat troops, controllers and observers, artillery target surveillance reports, SIGINT, HUMINT, IMINT, MASINT, and open sources. The unit that engaged the target initially assesses physical damage and may recommend an immediate reattack before sending the report to the appropriate BDA cell for further analysis. Tactical objectives can be compared to the levels of physical damage achieved to identify any immediate problems with force employment (e.g., MEA) or to identify any requirements for reattack.

9.4.2. Functional Damage Assessment. The functional damage assessment estimates the remaining functional or operational capability of a targeted facility or object. Functional assessments are inferred from the assessed physical damage and include estimates of the recuperation or replacement time required for the target to resume normal operations. This all-source analysis is typically conducted by the combatant command, in conjunction with support from national-level assets. The combatant command BDA cell integrates the initial target analyses with other related sources including HUMINT, SIGINT, and IMINT. BDA analysts then compare the original objective for the attack with the current status of the target to determine if the objective has been met.




9.4.3. Target System Assessment. Target system assessment is an estimate of the overall impact of force employment against an adversary target system. These assessments are also normally conducted by the combatant command, supported by national-level assets, and possibly other commands which provide additional target system analysis. The combatant command fuses all component BDA reporting on functional damage to targets within a target system and assesses the overall impact on that system's capabilities. This lays the groundwork for future recommendations for military operations in support of operational objectives.


    Figure 9.2. Battle Damage Assessment Example.

The following example of an attack on a headquarters building illustrates the processes described above. Initial physical damage assessment indicates 40 percent of the west wing of the building sustained structural damage. Functional assessment subsequently reveals sensitive communications equipment located in this wing was damaged to the extent that command and control activities have ceased. Assessing the effects of the attack on the target system, analysts determined the loss of command and control from this headquarters severely degraded coordination in the forward area, requiring subordinate commanders to rely on slower and more vulnerable means of communication.
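
The headquarters example can be expressed as a simple roll-up from physical damage, to functional capability, to target system impact. The Python sketch below is purely illustrative: the facility names, the rule that loss of critical equipment ends the C2 function, and the unweighted average across the system are assumptions for demonstration, not doctrine.

```python
def functional_capability(physical_damage_pct: float, critical_equipment_lost: bool) -> float:
    """Estimate remaining functional capability of one facility (0.0 to 1.0).

    Structural damage alone degrades capability roughly in proportion; loss of
    critical equipment (e.g., the communications suite) is treated here as
    ending the facility's C2 function until it is replaced.
    """
    if critical_equipment_lost:
        return 0.0
    return max(0.0, 1.0 - physical_damage_pct / 100.0)


def system_capability(facility_capabilities: dict[str, float]) -> float:
    """Simple unweighted average across facilities in the target system."""
    return sum(facility_capabilities.values()) / len(facility_capabilities)


# West wing: 40% structural damage, comms equipment destroyed -> C2 has ceased.
hq = functional_capability(40.0, critical_equipment_lost=True)
print(system_capability({"forward_hq": hq, "alternate_hq": 1.0, "rear_hq": 1.0}))
```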


9.4.4. BDA must be tailored to the decision maker and phased into the planning and execution cycles. Inputs into assessments must be planned and scheduled. Theater tactics, techniques, and procedures manuals must include assessment timelines. Instantaneous ground truth is impossible; the availability of collection assets, and the time needed for collection and assessment, must be anticipated in planning. Comprehensive BDA requires too much time even in a perfect world. This fact drives the phasing of BDA. The time phases should, therefore, correspond to the planning cycle. Table 9.1. illustrates BDA in air-to-surface operations.

    Table 9.1. Air- Surface Damage Assessment Matrix.

NOTE: The time phases listed are intended as a conceptual guide; i.e., generally phase 1 reporting is used to guide combat operations and phase 2 for ATO generation timelines. The matrix shows the time frames that most impact the ATO cycle, not necessarily the times used in intelligence reporting. JP 2-01.1 does not use these time frames for BDA purposes.




9.4.5. BDA strike histories are important and must be kept for air-delivered munitions. The fact that several strikes may hit a target before a report is produced, or that several reports from different sources may be produced after one strike, complicates the analysis process. Report source(s) also impact assessments over time (Figure 9.3.).

    Figure 9.3. Target and Time Arrow.

9.5. Munitions Effectiveness Assessment (MEA). MEA is the function that weaponeers, engineers, and operators use to analyze the effectiveness of the munitions damage mechanisms and the delivery parameters. Essentially, there are two types of MEA: short-term feedback for the operators, and long-term analysis for the weapons development and acquisition communities. MEA includes:

  • Recommending changes in methodology, techniques and tactics, fuzing, or weapon selection to increase effectiveness.

  • Guiding imagery interpreters and intelligence analysts in conducting their analysis.
  • Recommending development of new weapon capabilities and techniques.
  • Determining problem areas in the employment of weapons.

9.5.1. MEA includes both the munitions data and the weapon platform delivery conditions. It is performance-based; therefore, comparisons between expected performance and actual performance are appropriate. Two central questions to ask, illustrated in the sketch after this list, are:

  • Were actual delivery parameters similar to those expected?
  • Were there any unanticipated operational limitations?
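
The Python sketch below shows one minimal way to answer those two questions from mission debrief data by flagging delivery parameters that fall outside planned tolerances. The parameter names, tolerance values, and debrief numbers are illustrative assumptions, not values drawn from any JMEM or flight manual.

```python
# Planned delivery conditions and allowable deviations (all values hypothetical).
PLANNED = {"release_altitude_ft": 15000, "release_airspeed_kts": 480, "dive_angle_deg": 30}
TOLERANCE = {"release_altitude_ft": 1000, "release_airspeed_kts": 25, "dive_angle_deg": 5}


def delivery_deviations(actual: dict) -> dict:
    """Return the parameters whose actual values fall outside planned tolerances."""
    out_of_tolerance = {}
    for name, planned_value in PLANNED.items():
        delta = abs(actual[name] - planned_value)
        if delta > TOLERANCE[name]:
            out_of_tolerance[name] = delta
    return out_of_tolerance


# Mission debrief data (hypothetical): the release altitude was 2,500 ft low.
print(delivery_deviations(
    {"release_altitude_ft": 12500, "release_airspeed_kts": 470, "dive_angle_deg": 32}))
# -> {'release_altitude_ft': 2500}
```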

9.5.2. In wartime, MEA is the most often overlooked portion of combat assessment, but it has the highest payoff for weapons and tactics development. Collection of delivery parameters must not be overlooked in contingency operations and takes few resources besides forethought. In peacetime this data is collected and incorporated into the Joint Munitions Effectiveness Manuals (JMEM) and other service- and platform-specific products. These manuals include methodologies from target acquisition and delivery parameters to weapons effects. The Joint Technical Coordinating Group for Munitions Effectiveness (JTCG/ME) manuals were developed to provide tri-service-approved and accepted data and methodologies to permit standardized comparison of weapon effectiveness. This means, in a practical sense, a Department of Defense planner in the Pentagon, a Navy weapons officer on the USS Carl Vinson, and an Air Force targeteer in Korea should be able to draw essentially the same conclusion about Mk 82 general-purpose bomb effectiveness against Fan Song radars and have standardized data available for use in weapons procurement, stockpile decisions, or development of force employment options. On 14 December 1963, the Joint Chiefs of Staff asked the military services to develop a joint manual that would provide effectiveness information on air-to-surface nonnuclear munitions, and they named the Army as executive agent. The resulting coordination drafts of the Joint Munitions Effectiveness Manual for Air-to-Surface Nonnuclear Munitions (JMEM A/S), as prepared by the JTCG/ME, were reviewed by the Office of the Secretary of Defense, the military services, and the Defense Intelligence Agency. These manuals have been continually updated and are now being converted to hypertext documents to speed weaponeering.
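
As a simple illustration of the standardized comparisons the JMEM methodology enables, the Python sketch below computes the cumulative probability of damage when several weapons, each with a given single-weapon probability of damage, are delivered independently. The 0.3 value is a placeholder for demonstration, not a JMEM figure, and real weaponeering accounts for dependencies and delivery conditions that this simple model ignores.

```python
def cumulative_pd(single_weapon_pd: float, weapons: int) -> float:
    """P(at least one weapon achieves the desired damage), assuming independent deliveries."""
    return 1.0 - (1.0 - single_weapon_pd) ** weapons


for n in (1, 2, 4, 8):
    print(f"{n} weapons at Pd=0.3 each -> cumulative Pd = {cumulative_pd(0.3, n):.2f}")
# 1 -> 0.30, 2 -> 0.51, 4 -> 0.76, 8 -> 0.94
```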



Foot Stomper Box
There is a new term, and a very useful one, that may be included in the future as a subset of MEA: Bomb Impact Assessment. The explosion in the number of drop-and-forget munitions such as the Joint Direct Attack Munition (JDAM) has expanded the need for a new mechanism or technological technique(s) to answer the most basic of questions. Bomb (munition) Impact Assessment answers three simple questions: 1) Did the bomb hit its desired impact point? 2) Did the bomb detonate high order? 3) Did the bomb fuze function as intended? These few questions, if answered, take much uncertainty out of operational assessments given the known JMEM probabilities of effects.
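
A minimal sketch of how those three answers might be recorded and used is shown below in Python; the class name, fields, and the all-yes decision rule are illustrative assumptions rather than an established BIA format.

```python
from dataclasses import dataclass


@dataclass
class BombImpactAssessment:
    hit_desired_impact_point: bool      # 1) Did the bomb hit its desired impact point?
    detonated_high_order: bool          # 2) Did the bomb detonate high order?
    fuze_functioned_as_intended: bool   # 3) Did the fuze function as intended?

    def supports_planned_effects(self) -> bool:
        """If all three answers are yes, the JMEM-predicted probability of damage
        can be applied with much less uncertainty; any 'no' flags the strike for
        closer BDA/MEA scrutiny."""
        return (self.hit_desired_impact_point
                and self.detonated_high_order
                and self.fuze_functioned_as_intended)


print(BombImpactAssessment(True, True, False).supports_planned_effects())  # False
```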




9.6. Mission Assessment (MA). MA addresses the effectiveness of operations for tasked or apportioned missions. While battle damage and munitions effectiveness assessments address force employment against individual targets and systems, MA evaluates a tasked mission such as interdiction, counterair, or maritime support. It directly impacts the JFACC's apportionment nominations and the JFC's resulting decision. Mission assessments are made by the supported commander.

9.6.1. The cumulative damage to targets does not represent the total effectiveness of the operation because it does not account for the intangible effects on enemy activities, for the effectiveness of non-lethal force employment, or for enemy alternative courses of action. There are also many other factors to consider: the enemy's rate of supply and resource consumption, enemy mobility, use of reserves, cushion, availability of repair materials, reconstitution or recuperation time and costs, plus the status of defenses. Additionally, mission assessment examines the effectiveness of tactical considerations such as tactics, penetration aids, and enemy and friendly countermeasures.

9.6.2. MA answers the questions outlined below. Answers to these questions help determine the effectiveness of operations in meeting mission objectives:

  • Are the assigned mission's operations achieving command objectives and intent?
  • Do the objectives require modification?
  • Does the mission's level of effort require modification for that phase of the operation?
  • How effective were operations in terms of impacting the enemy's actions or capability?
  • What specific changes in combat operations would improve friendly efforts to degrade the enemy's will and capability to wage war?

  • Does a particular enemy target system require more, or less, emphasis in future combat operations?

  • Were there any unanticipated operational limitations?
  • Were there any unintended consequences of the operation; that is, did strikes achieve some bonus damage or inflict undesirable collateral damage?

9.6.3. The mission is successful if the enemy is reacting as intended. MA's inputs come from internal and external sources. The measures of merit (effectiveness) are not common for all missions. Different missions require organizations to tailor responses for the process level in which they are involved. Some examples follow.

9.6.4. Major inputs for Close Air Support (CAS) mission assessment come from the Air Support Operations Center "bean counts." These are passed to the AOC by the CAS Summary Message (CASSUM). This is not a true assessment. The ground component commander is the primary decision maker for CAS mission assessment since he is the supported commander. His assessment in turn is used to identify possible target systems for air support.

9.6.5. Defensive Counter Air (DCA) mission assessment inputs flow throughout the Theater Air Control System. DCA mission assessment is primarily an Operations decision of the Area Air Defense Commander, if appointed. Intelligence inputs come from mission debriefs, the threat, and a solid knowledge of enemy tactics and capabilities. When blended with the JFACC's goals and strategies, these provide the basis for a DCA apportionment recommendation.

9.6.6. Strategic Attack mission assessment is easy in a macro sense. Total enemy capitulation equates to 100 percent effectiveness. However, it is difficult, at best, to determine how an enemy perceives any threat or risk to himself. Does he calculate the threat using statistics and probabilities based upon actual observed threat? Does he perceive a threat in vague general terms without detailed analysis or statistical evaluation of its actual magnitude? Does he care about destruction of his economic infrastructure compared with the bigger issues of power or ideology? If an enemy can be made to perceive a particular course of action is hopeless, his behavior can be channeled into another direction. Enemy perception of a great threat or high risk is as important as the actual threat.

9.6.7. Interdiction mission assessment is based on its impact on the enemy force's capabilities, not just how many trucks or bridges are destroyed. For example, if the enemy believes it will suffer unacceptable losses resupplying its forces through route A, it may begin to move supplies through route B, which is longer and requires more transit time. In effect, the enemy will be delayed and its timetable disrupted, which can be a measure of success for the interdiction operation. The quantity of supplies reaching enemy forces at the forward edge of the battle area (FEBA) has historic significance in the outcome of battles and should be considered one of the main measures of effectiveness. Unfortunately, this quantity is extremely difficult to measure. A reduced level of fighting (changing from offensive to defensive operations) is another measure of effectiveness.

9.6.7.1. Duration of individual choke point closure caused by interdiction appears to offer one reasonable surrogate measure of enemy resupply capability. Another may be counting trucks passing clandestine sensors along roads (the Vietnam-era IGLOO WHITE program did this). This is costly, but it can be coordinated with strikes to destroy vehicles. Counting trucks "reported" destroyed through MISREPs is probably the poorest method used in analysis. (There are varying degrees of destruction or damage and uncertain excess transport capability.)
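
The choke-point-closure surrogate can be reduced to a simple fraction of time closed per reporting period, as in the Python sketch below. The route names, the 24-hour period, and the hour figures are illustrative assumptions only.

```python
def closure_fraction(closed_hours: float, period_hours: float = 24.0) -> float:
    """Fraction of the reporting period a choke point was closed to enemy resupply."""
    return min(closed_hours, period_hours) / period_hours


# Hypothetical daily closure reports for two interdicted choke points.
daily_closures = {"route_A_bridge": 18.0, "route_B_pass": 6.0}
for choke_point, hours in daily_closures.items():
    print(f"{choke_point}: closed {closure_fraction(hours):.0%} of the day")
```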

9.6.7.2. Another way to assess interdiction results is to measure post-offensive buildup in the first few days, weeks, or months after an operation. Rate and type of resupply and reconstitution are indicators of what the enemy would like to do and of the damage sustained.

9.6.7.3. There are at least three measures of very little use in analyzing interdiction effectiveness; namely, sorties flown, bombs dropped, and days in combat. Unfortunately, General Schwarzkopf said this is what he had to do during the Gulf War.

9.6.7.4. Two factors stand out in all interdiction analysis. First, large ground force campaigns without deep interdiction are less effective than a coordinated operation including both. Conversely, a deep interdiction campaign without coordinated ground operations to increase the enemy's supply consumption would be less effective than a coordinated campaign. Second, it is the enemy's behavior or activity which must be modified. Destruction of individual targets, and the resulting "body" count, as an effectiveness measure contributes little to the overall objective. It is the activity that must be attacked or struck. It is the effectiveness of the attack on the activity that must be measured.

9.7. How It Is Done. All objectives should have measures of merit developed during the planning phase and refined in the target development process in an iterative fashion. Intelligence assists the commander in determining when objectives have been attained so joint forces may be reoriented or operations terminated. Intelligence evaluates military operations by assessing their effect on the adversary situation with respect to the commander's intent and objectives, and those of the adversary. The intent is to analyze with sound military judgment what is known about the damage inflicted on the enemy to try to determine: what attrition the adversary has suffered; what effect the efforts have had on the adversary's plans or capabilities; and what, if any, changes or additional efforts need to take place to meet the objectives. CA requires constant information flows from all sources. However, the same basic information is generally collected for all assessments. The information (data) gathered for mission assessment is similar to, and in many instances the same as, that collected for battle damage assessment and munitions effectiveness assessment. A collection plan, tailored EEIs, and the objectives' measures of merit are required to do the assessment. The JFC apportions joint force reconnaissance assets to support CA intelligence efforts that exceed the organic capabilities of the component forces. The component commanders identify their requirements and coordinate them with the joint force J-3 or designated representative. CA differs between wartime and contingency operations. The closely held nature of some contingency operations has led to badly coordinated collection plans. Very specific EEIs should be developed. Intelligence analysts and collectors must be knowledgeable of the targeting objectives, weapons, timing, and targets.

9.7.1. There are many different measures of merit; one example follows, stepping from a general component objective down to specific measures of merit (a notional representation of such a hierarchy is sketched after the list):

  • Achieve Air Superiority
  • Deny Enemy Air EW and GCI support
  • Degrade Air Defense Network by 75%
  • Destroy SOCs I, II, III
  • Enemy Air Intercepts non-existent/non-effective
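
The decomposition above could be represented as structured data so that observed results can be compared against each measure, as in the hypothetical Python sketch below. The dictionary layout, the 75 percent threshold expressed as 0.75, and the comparison rule are illustrative only.

```python
# Notional measures-of-merit hierarchy for the air superiority objective above.
AIR_SUPERIORITY_MOM = {
    "objective": "Achieve air superiority",
    "sub_objective": "Deny enemy air EW and GCI support",
    "measures_of_merit": [
        {"measure": "Air defense network degradation", "target": 0.75},
        {"measure": "Sector operations centers I, II, III destroyed", "target": 3},
        {"measure": "Effective enemy air intercepts per day", "target": 0},
    ],
}


def objective_met(observed: list[float]) -> bool:
    """Compare observed values against each target (>= for the first two
    measures, <= for the intercept count)."""
    targets = [m["target"] for m in AIR_SUPERIORITY_MOM["measures_of_merit"]]
    return observed[0] >= targets[0] and observed[1] >= targets[1] and observed[2] <= targets[2]


print(objective_met([0.80, 3, 0]))  # True: 80% degraded, 3 SOCs destroyed, no intercepts
```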

9.7.2. Information must get to the right offices. The ATO must be disseminated to the imagery interpreters to provide them numbers and types of weapons along with the desired point of impact. The ATO must also be distributed to other intelligence analysts (e.g., SIGINT, MASINT, HUMINT) for time-over-target and aimpoints. A good example is the SEAD mission, especially anti-radiation missile operations that must be coordinated with the SIGINT collectors to the extent possible.

9.7.3. Intelligence contributes to the mission/strike cycle function of assessment. Assessment can either be inductive (using sensors or aircrews to directly observe damage inflicted) or deductive (using indirect means to ascertain results). Examples of inductive observation could involve secondary explosions seen by the aircrew or movement stopping after attacks. Assessment can be deduced if damage is unobserved but verified by third party sources. Indirect bomb damage assessment can also be inferred from the miss distance (the distance between weapon detonation and the target). A measure of success of the attack is the impact on the activity the enemy is performing through the respective target system. Qualitative assessment should be used in addition to quantitative analysis. Single methods of measurement should be avoided, because they may lead to unsound or distorted results.
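
One simple way to infer damage from miss distance is sketched below in Python using a basic "cookie-cutter" model: full damage inside an effective radius and none outside. The 20-meter radius, the coordinate inputs, and the model itself are illustrative assumptions; actual weaponeering uses refined damage functions and JMEM data.

```python
import math


def miss_distance(impact_xy: tuple[float, float], dpi_xy: tuple[float, float]) -> float:
    """Distance in meters between the weapon impact point and the desired point of impact."""
    return math.dist(impact_xy, dpi_xy)


def inferred_damage(miss_distance_m: float, effective_radius_m: float = 20.0) -> float:
    """Return an inferred probability of damage from the observed miss distance."""
    return 1.0 if miss_distance_m <= effective_radius_m else 0.0


print(inferred_damage(miss_distance((12.0, 9.0), (0.0, 0.0))))  # 15 m miss -> 1.0
```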

9.7.4. Time impacts CA and the functions an organization performs. Component force headquarters are concerned with near-term and mid-term combat assessment for the majority of their work. This of course leaves out the long-range (strategic) planners at the components. The current operations targeteer and intelligence analyst maintain a continuous awareness of the battle situation, the targeting objectives, and near-term combat assessment in order to provide the best recommendation to the battle managers. The targets branch usually contains the Combat Assessment Cell. This cell of targeteers and other disciplines provides the main fusion center for near-term and mid-term (component) combat assessment. It also provides the inputs for collection requests across the spectrum of combat assessment. It is the focus of all intelligence inputs to combat assessment. It is responsible for managing inputs to, coordinating, and analyzing combat assessment information and its distribution to decision makers and other users. Requirements for most long-term assessment studies will be generated at the theater. Theater-wide intelligence producers such as Joint Intelligence Centers provide their studies and analysis to the joint force commander and to lower and higher echelons. JCS/J2T and DIA will normally be the national-level focal point for input into long-term combat assessment.


