The long-term goal of the program in Image Understanding (IU) is to develop computational theories and techniques for use in artificial vision systems whose performance matches or exceeds that of humans, exploiting sensing throughout the breadth of the electromagnetic spectrum to characterize the world in both space and time. The shorter-term 5-year goals of the program are to carry out applications-directed research on machine vision, provide a suitable IU software environment, and further develop IU capabilities for specific applications.
Imaging sensors provide two-dimensional arrays of values that measure properties of a scene (such as intensity, range, or phase). Image understanding (IU) algorithms create a "description" of the world from sensor images, suitable for particular purposes. Thus, for autonomous vehicle navigation, a description may be an indication of road edges or of obstacles in the path. For an intelligence application, the description might be a synopsis of changes of military significance to a site. This translation from an array of numbers to constructs meaningful in the world must be carried out despite object occlusion, shadows, reflections, and other disturbances. Contextual information such as knowledge of the domain being sensed must often be used to accomplish this translation.
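The translation from a sensor array to a task-relevant description can be illustrated with a deliberately minimal sketch: detecting candidate road edges in an intensity image by thresholding the horizontal gradient. This is an illustrative toy, not any particular program's algorithm; the function name and threshold are assumptions.

```python
import numpy as np

def detect_edges(image, threshold=0.5):
    """Mark pixels where horizontal intensity change exceeds a threshold.

    A toy stand-in for the image-to-description step: the input is a 2-D
    array of sensor values, the output a boolean map of candidate edges.
    """
    gx = np.abs(np.diff(image.astype(float), axis=1))  # horizontal gradient
    return gx > threshold

# A synthetic "road" scene: dark road (0.1) against bright terrain (0.9).
scene = np.full((4, 8), 0.9)
scene[:, 3:6] = 0.1                                  # road occupies columns 3-5
edges = detect_edges(scene)
cols = sorted(set(np.nonzero(edges)[1].tolist()))    # columns holding edges
```

In a real system this pixel-level evidence would then be grouped into the symbolic constructs (road boundaries, obstacles) the paragraph describes, using contextual knowledge to reject shadows and other disturbances.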
Applications-directed research is meant to solve problems that are roadblocks to implementing certain applications. Such research may address problems in phenomenology of the sensing process, fusion of diverse sensor data, use of learning processes in IU, use of speech and natural language in aiding the interactive image understanding process, and the incorporation of contextual knowledge and reasoning into the IU process. Some of the topic areas being addressed in image understanding/exploitation include:
Target Detection/Recognition: Given autonomous air vehicles such as the TIER series that provide a huge volume of tactical SAR imagery, there is a strong need for the ability to carry out ground-based analysis in a timely manner. IU/ATR systems will play a strong role in solving this problem -- the IU algorithms cue the ATR systems to suspected targets and provide context useful for recognition.
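The cueing step described above can be sketched, under strong simplifying assumptions, as a prescreener that flags unusually bright SAR returns and hands their coordinates to a downstream ATR stage. Real SAR prescreeners use local (CFAR-style) statistics; the global-mean test and all names here are illustrative only.

```python
import numpy as np

def cue_targets(sar, factor=3.0):
    """Toy prescreener: flag pixels whose return exceeds `factor` times the
    scene mean, reporting their coordinates as cues for an ATR stage."""
    mean = sar.mean()
    rows, cols = np.nonzero(sar > factor * mean)
    return list(zip(rows.tolist(), cols.tolist()))

sar = np.ones((6, 6)) * 0.1   # diffuse clutter background
sar[1, 4] = 5.0               # bright, target-like returns
sar[4, 2] = 4.0
cues = cue_targets(sar)
```

Each cue would then be passed, with surrounding context, to a recognition module, which is the division of labor between IU and ATR that the paragraph describes.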
Reconnaissance, Surveillance, and Target Acquisition (RSTA): RSTA techniques for an unmanned ground vehicle are based on advanced filtering for target detection and on model-based analysis for target identification. Fusion of information from sensors of different types (e.g., EO, FLIR, and LADAR), as well as terrain information, is often required in the analysis.
Image Exploitation: The RADIUS program, cosponsored by ORD/CIA, develops IU techniques related to model-supported exploitation (MSE), allowing an image analyst to obtain a correspondence between an image and a stored three-dimensional model of a site. Using the MSE approach, automatic change detection and trend analysis can be performed based on image analyst needs. Three-dimensional site models provide a highly constrained context for applying IU techniques reliably.
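The core of model-supported change detection can be sketched as comparing an observed image against a reference predicted from the site model, flagging pixels that depart significantly. This is a minimal illustration of the concept, not the RADIUS implementation; the threshold and function names are assumptions.

```python
import numpy as np

def detect_changes(reference, observed, threshold=0.2):
    """Flag pixels whose observed value departs from the model-predicted
    reference by more than `threshold`; also return the changed fraction."""
    diff = np.abs(observed.astype(float) - reference.astype(float))
    mask = diff > threshold
    return mask, mask.mean()

reference = np.zeros((10, 10))      # image predicted from the site model
observed = reference.copy()
observed[2:4, 2:4] = 1.0            # a hypothetical new structure at the site
mask, frac = detect_changes(reference, observed)
```

The site model is what makes such a simple comparison feasible: registration to the model removes viewpoint and geometry differences that would otherwise dominate the pixel differences.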
Surveillance and Monitoring (SAM): This research area seeks to develop automated methods for exploiting video data. Tools are being developed for video photogrammetry and mosaicking, as well as for tracking, modeling and recognition of moving objects and events. Collectively, these developments are enabling greater automation of tactical reconnaissance, physical security, and surveillance in urban environments.
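A first step in tracking moving objects in video is temporal differencing of consecutive frames; the sketch below shows this under the assumption of a static camera and is purely illustrative of the idea, not the SAM tools themselves.

```python
import numpy as np

def moving_pixels(prev_frame, frame, threshold=0.3):
    """Temporal differencing: pixels that changed between consecutive
    frames are candidates for moving objects."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)) > threshold

# A short synthetic sequence: one bright point moving left to right.
frames = [np.zeros((5, 5)) for _ in range(3)]
for t, f in enumerate(frames):
    f[2, t + 1] = 1.0

motion = moving_pixels(frames[0], frames[1])
```

Differencing yields candidate motion regions at both the object's old and new positions; grouping these regions over time into coherent tracks is what the modeling and recognition components build on.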
Construction of Simulation Databases: As visualization becomes increasingly important in simulation systems for training and mission rehearsal, the need for rapid construction of accurate, up-to-date spatial databases of battlefields becomes critical. The manual database construction process in use today is the bottleneck preventing more widespread adoption of simulation throughout the military. Image understanding techniques are shortening the timeline required to build simulation databases and improving the accuracy with which they are constructed. Techniques of vision-based cartographic model construction are required to update and add detail to map data, based on satellite and aerial photography, multispectral data, and interferometric SAR.
Image Understanding Environment (IUE): The IUE is a software environment supporting research and development in image understanding. It provides a platform for making IU algorithm design more effective and for the exchange of algorithms and data among researchers. The IUE will support various application scenarios, including photointerpretation, smart weapons, navigation, and industrial vision.
The need for automated or semi-automated vision arises in many defense and civilian domains, and substantial progress has been made over the past decade. Because of advances in IU algorithms and computer speed, an increasing number of real-world vision problems are now solvable in automatic target recognition, analysis of intelligence imagery, autonomous navigation, medicine, and manufacturing. Some particularly noteworthy achievements include stereo terrain modeling, unmanned vehicle road following, ATR for unobscured vehicles, industrial part recognition, visual inspection systems, and indoor robot navigation.
IU has important DoD applications: automatic and interactive target recognition, navigation of autonomous vehicles, reconnaissance, cartography, and simulation. Attaining more robust solutions to these problems requires both a concerted effort to advance our fundamental understanding of vision processes and the creation of applied demonstrations that stress the application of IU technologies.
RADIUS is a joint ARPA/ORD program to improve the effectiveness of the image analyst by providing semi-automated and automated exploitation tools. It is based on the concept of a two- or three-dimensional "site model" that is used by image understanding algorithms for change detection, counting, and visualization. The goal of the project is to develop a system that can be transitioned to the image analyst within 5 years. The project focuses on two aspects of Model-Supported Exploitation (MSE), (1) semi-automated and automated site model construction, and (2) exploitation based on site models. A Phase I study (Hughes, 1993-1994) developed the concept of operations and other requirements. Using the results of Phase I, a two-year Phase II implementation project was initiated in late 1994 (Martin Marietta). An initial implementation was delivered to the NEL in late 1994, and updates will be delivered over the next two years.
Semi-Automatic IMINT Processing System (SAIP): The SAIP system will provide an enhanced capability to conduct wide-area search and reconnaissance. Rapid processing will make the system particularly valuable in the wide-area search for time-critical targets, such as ballistic-missile-related vehicles, or in the search over wide areas for deployed military forces. The primary sensor data source will be imagery produced by Synthetic Aperture Radar (SAR) sensors flown on the U-2R aircraft and the High Altitude Endurance Unmanned Aerial Vehicles (HAE UAV); however, tools are also provided to assist image analysts in exploiting electro-optical imagery of small scenes. Growth potential exists to accommodate national sources of imagery.