LIST OF FIGURES

Figure 1 McKenna Site and Surrounding Study Area (Ft Benning, GA)

Figure 2 McKenna MOUT Site (From DMA Geodetic Survey, May 1996)

Figure 3 Aerial View of McKenna MOUT Site from Northwest

Figure 4 Virtual View of McKenna MOUT Site from Northwest

Figure 5 Generic M&S Database Construction

Figure 6 Comparison Between The Four Data Generation Cases

Figure 7 Relative Positions of Control

LIST OF TABLES

Table 1 Differences Between DMA Coordinates and Measurements on the Coordinate Display Form

Table 2 Differences Between Checkpoints Used for Control Between the PSI and DMA Coordinates

Table 3 Variation Between PSI and DMA Coordinates

Table 4 Comparison Between PSI and TEC Coordinates

Table 5 Azimuth Angle Differences

FINAL REPORT OF THE MILITARY OPERATIONS IN BUILT-UP AREAS (MOBA) - TERRAIN DATABASE (TDB) PROJECT
EXECUTIVE SUMMARY

1. Introduction

The 1994 Defense Science Board Study on Military Operations in Built-up Areas (MOBA) recommended to the Secretary of Defense that initiatives be undertaken to improve the ability of US forces to operate in urban areas and to perform dismounted combat, peacekeeping, and other activities associated with operations other than war. A priority initiative included the development of urban databases and simulations to assist the analysis of operational needs, and to provide urban training, crisis management, mission planning, rehearsal, intelligence, and command and control capabilities. A future goal is the production of information systems to improve situational awareness during actual urban operations. Improved awareness requires that real-time digital information regarding the dispositions and actions of forces be displayed within a geographic context that permits commanders and subordinate units to understand the environment, perform command and control, and conduct successful operations.

As a step toward this goal, the Defense Modeling and Simulation Office (DMSO) and the Defense Mapping Agency (DMA) - now part of the National Imagery and Mapping Agency (NIMA) - sponsored a project to produce and evaluate digital terrain data for the McKenna MOUT (military operations in urban terrain) training site at Fort Benning, Georgia. NIMA's Terrain Modeling Project Office (TMPO) is the Department of Defense modeling and simulation executive agent for the authoritative representation of terrain. During May 1995 through August 1996, TMPO, the DMA St. Louis production facilities, the Topographic Engineering Center (TEC), the Institute for Defense Analyses (IDA), TASC, Inc., GDE, Inc., LNK, Inc., LADS-Belleview, the TRADOC Dismounted Battlespace Battle Lab (DBBL), the Marine Corps Modeling and Simulation Management Office (MCMSMO), and several other organizations planned, constructed, and evaluated both digital and paper high-resolution terrain products of the MOUT site. These products included: stereo overhead imagery; detailed geodetic and Global Positioning System (GPS) site surveys; mapping, charting, and geodesy (MC&G) elevation and feature files; Geographic Information System (GIS) databases; micro-terrain profiles; orthophotos with feature overlays; 3-D anaglyphs; maps; radar, video, and other site imagery; 3-D computer-aided design (CAD) site models of McKenna building exteriors and interiors; texture libraries; and modeling and simulation (M&S) run-time databases for virtual reality (VR) simulators. Data were collected for both McKenna and the Griswold live fire range. High-resolution data were inserted into a larger runtime database employing USGS elevation data and feature information for a 24 Km x 24 Km region surrounding the Griswold and McKenna sites. The three regions are depicted in Figure 1.

The McKenna training facility was chosen as the focus for this project due to the strong interest of the Army Training and Doctrine Command (TRADOC) and the Ft Benning Battle Lab. McKenna is used for the training of US and allied infantry, marines, and law enforcement personnel. It consists of a mock European village containing 15 primary buildings, three support buildings, and an underground sewer system with five manhole cover (MHC) pop-up points. Figure 2 provides the village layout. Letters inside building outlines match the DBBL reference system (used in evaluation questionnaires), while outside letters and MHC codes correspond to the DMA site survey codes. The village is approximately 90 x 150 meters in area, and can be approached through woods. The terrain immediately surrounding the village is composed of grassy and brushy open areas, thick forests, swamps, streams, ponds, some sharp erosional features, and many light/loose surfaced roads and trails. The 5 Km x 5 Km region is approximately 80 percent wooded and exhibits variations in relief of about 70 meters. It contains a 1,125-meter dirt airstrip capable of landing C-130 aircraft, and a 300 x 300 meter helipad. Figures 3 and 4 provide live and virtual aerial comparisons of the village.

Figure 1 McKenna Site and Surrounding Study Area (Ft Benning, GA)
Figure 2 McKenna MOUT Site (From DMA Geodetic Survey, May 1996)
Figure 3 Aerial View of McKenna MOUT Site from Northwest
Figure 4 Virtual View of McKenna MOUT Site from Northwest

2. Data Generation

A generic production flow for M&S databases is shown in Figure 5. Four different MC&G databases were created over the McKenna MOUT Site. The different cases were chosen to provide baseline data on several possible production and operational scenarios. Each case required an imagery source and control to perform a triangulation, and every case involved generation of a digital elevation model (DEM), a feature database, and 3D building models. Some cases used additional imagery or other sources to generate MC&G databases, or generated value-added databases by combining additional imagery with an MC&G database. In all cases, an M&S exchange format database was to be compiled from the MC&G database and any value-added data available, and runtime databases for specific simulators were to be created by transformation from the M&S exchange format database. Due to resource constraints, only the last (Case IV) database was converted to an M&S database.
Figure 5 Generic M&S Database Construction
Figure 6 Comparison Between The Four Data Generation Cases
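As a compact restatement of the flow in Figure 5, the sketch below lists the stages in dependency order. It is illustrative only; the stage labels are ours, not the names of actual production tools.

```python
# Illustrative restatement of the generic M&S database construction
# flow of Figure 5. The stage labels are hypothetical, not tool names.

GENERIC_FLOW = [
    "triangulation",              # imagery source + control -> adjusted stereo models
    "DEM extraction",             # elevation posts for the terrain skin
    "feature extraction",         # ITD-style feature/attribute files
    "3D building modeling",       # exterior (and, in Case IV, interior) geometry
    "M&S exchange compilation",   # MC&G plus value-added data combined
    "runtime transformation",     # simulator-specific runtime databases
]

def downstream_of(stage):
    """Stages that cannot start until `stage` is complete (linearized view)."""
    return GENERIC_FLOW[GENERIC_FLOW.index(stage) + 1:]

print(downstream_of("M&S exchange compilation"))  # only the runtime step remains
```

The four cases differ in the sources feeding the first stage and in which later stages were actually executed.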

3. Product Descriptions

Digital MC&G products are of two basic types: 1) Digital Elevation Models (DEM), including a specific DMA standard product termed Digital Terrain Elevation Data (DTED); and 2) digital feature files known as Interim Terrain Data (ITD) or enhanced ITD (sometimes termed ITD++). Since several of the McKenna elevation files are non-standard products, the generic term "DEM" is used here for either type of elevation product. Elevation files provide the x,y reference and z posts used to construct the terrain surface, or skin.
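As a minimal illustration of how elevation posts define a continuous skin, the sketch below bilinearly interpolates an elevation between four neighboring posts. The grid and elevation values are hypothetical; real DEM handling also deals with datums, voids, and edge posts.

```python
# Minimal sketch: deriving a terrain "skin" elevation from DEM posts
# by bilinear interpolation. Grid values are made up for illustration.

def elevation_at(dem, x, y, spacing=1.0):
    """Bilinearly interpolate z at ground point (x, y) from a grid of posts.

    dem[row][col] holds post elevations; (x, y) are in the same ground
    units as `spacing`, with the origin at dem[0][0]. (x, y) must fall
    strictly inside the grid for this sketch.
    """
    col, row = x / spacing, y / spacing
    c0, r0 = int(col), int(row)
    dc, dr = col - c0, row - r0
    z00, z01 = dem[r0][c0], dem[r0][c0 + 1]
    z10, z11 = dem[r0 + 1][c0], dem[r0 + 1][c0 + 1]
    top = z00 * (1 - dc) + z01 * dc
    bot = z10 * (1 - dc) + z11 * dc
    return top * (1 - dr) + bot * dr

dem = [[100.0, 101.0],
       [102.0, 103.0]]
print(elevation_at(dem, 0.5, 0.5))  # midpoint of the four posts -> 101.5
```

At the 1 meter McKenna post spacing, ground coordinates map almost directly to grid indices; coarser products simply use a larger `spacing`.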

DMA provided the extraction rules used for generating all the high resolution feature databases. The extraction rules were based on the existing Interim Terrain Data (ITD) specification, enhanced with features and attributes agreed between DMA and the DBBL (Annex B). The enhanced ITD added to the feature description by supplying further information about buildings, point drains, roads, transportation, soils, lakes, rivers, vegetation, sewers, linear features, wrinkles, gullies, and obstacles.

Three-dimensional (3D) models were created to provide the exterior and interior building details.

These three file types (DEM, ITD, and 3D models), when combined, form the complete McKenna MOBA terrain database (MOBA TDB). One method of combining them is the S1000 format database, which provides the user with the tools to manipulate and use the MOBA TDB.

4. MC&G Data Base Generation

This section covers the generation of the four different cases. For each, the original production plan is defined, and any deviations from the plan, as well as the reasons for them, are recorded. In addition, the resources expended to complete each case study are enumerated. Finally, each case description concludes with an assessment of the production process, including an evaluation of difficulties encountered, both expected and unanticipated, any unusual or exceptional situations, and a review of lessons learned.

4.1 Case Study I

The first case (Data Generation by Commercial Processes) was the generation of a high resolution TDB produced from high resolution conventional photography supported by a ground survey. Generation of the elevation data, features, and exterior building geometry was done by a DoD contractor using a Digital Stereo Photogrammetric Workstation (DSPW). Elevation data consisted of a DEM with a resolution of 1 meter in a 1 Km x 2 Km patch covering the McKenna MOUT facility and the airfield. Feature data were extracted to the MOBA feature specification. TEC was to compile the DEM, feature data, and three dimensional (3D) building models into an S1000 format database. This case was to demonstrate the ability of high resolution conventional photography to support the detailed feature and attribute requirements of a MOBA TDB. It is understood that this imagery may not be available for all operational areas.

4.1.1 Source of Data

1:5,000 scale frame aerial photography (45 images) was used for this case study. Camera positions were photogrammetrically adjusted by aerial triangulation to ground control and in-flight GPS-collected camera stations.

4.1.2 Resources Expended

A total of 330 hours of direct man-hour costs were expended. This includes 12 hours for scanning, 24 hours for triangulation, 160 hours for terrain extraction, 60 hours for feature extraction, and 74 hours for preparation of the original report. Of the 60 hours used for feature extraction, approximately 12 hours were used to create the specification used by the DSPW; this time would be reduced for follow-on projects. There was no breakdown of resources into direct and indirect costs.

4.1.3 Production Process Assessment

During the scanning procedure, the Orientation/Automatic process was less accurate than the Semi-Automatic process. For the triangulation procedure, not enough ground control was supplied (only six points), the GPS values for camera position did not meet the accuracy specification, and the dense tree cover caused problems measuring tie points (APM measured 57%). Blunder detection also needs to be more robust to handle user- or terrain-induced anomalies. In the terrain extraction procedure, the tool to eliminate tree and building areas from the DEM was not very effective, and the process would greatly benefit from a TIN (triangulated irregular network) capability.

4.1.4 Lessons Learned

The DEM and Feature Database Generation study for the McKenna MOUT site provided valuable, real-world insight into the kinds of benefits and problems experienced with the use of high resolution data. It established a timeline benchmark for future similar studies. It generated a very large data set which can be used for developing and testing more productive semi-automatic feature extraction tools. It also demonstrated several deficiencies in the feature extraction process that require improvements to the existing tool. The attribution process for high resolution databases needs to be simplified (i.e., fewer attribution parameters in the specification file) to improve overall productivity while still capturing significant detailed features.

4.2 Case Study II

The second case (Data Generation by Current Procedures) was the generation of a high resolution MOBA TDB using current DMA production procedures. This case was intended to represent existing, widely practiced procedures for the construction of M&S TDBs. Data sources that are not likely to be available over operational areas (large scale conventional photographs, ground surveys, and engineering drawings of buildings) were not used. The source for this process was MC&G imagery. DMA processed MC&G imagery, using MC&G triangulation procedures without ground control, and produced two standard products (ITD and DTED Level II). Then high resolution MC&G imagery and the DMA products were to be provided to a contractor to add additional details and features to the DMA standard products. Models of the exterior geometry of the buildings were to be extracted from high resolution MC&G imagery.

4.2.1 Source of Data

Operational MC&G imagery was used to produce all of the standard feature and elevation products for Case Study II. The most recent imagery was used for the ITD data, but due to the presence of clouds over the project area, imagery from an earlier date was used to produce the DTED Level II data.

4.2.2 Resources Expended

A total of 466 hours of direct man-hour costs were expended in the production of Case Study II. This includes 31 man-hours for geopositioning, 35 man-hours for extraction of the elevation data, 189 hours for extraction of the feature data, and 211 man-hours for preparation of this document. An estimated total of 88 man-hours of indirect labor costs was incurred: 8 hours for geopositioning, and 40 hours apiece for elevation and feature data set management.

4.2.3 Production Process Assessment

The purpose of Case Study II was to document existing and widely practiced procedures typically used for the construction of M&S terrain databases. Although the resolution of operational imagery and the data content of DMA's standard products do not meet the strict requirements of this project, this data is intended primarily for use as a standard of comparison to the more robust data sets. Since this case study involves the utilization of standard procedures and production processes for geopositioning, data extraction, and quality review, problems were neither expected nor encountered. The only problems occurred where deviations from standard practices were required, for example, when the data formats and output tape types and capacities varied from the norm.

4.2.4 Lessons Learned

The lessons to be learned from this are twofold. First, there must be a clear understanding at the outset of a project of all hardware and software inventories and capabilities for both data generators and users. This can easily be accomplished by listing all hardware items, including their capacities, and all software items, including the versions used; if the data generators know which systems are being utilized, they can tailor the output to match. Second, there should be a standardized tape and file naming system. All output tapes should clearly identify the data producer, the case study number(s), the data set(s) contained on the tape, and all data and file formats, and should include a list and short description of the content of all data files. Furthermore, there should be a standardized means of listing and describing all data files, with this information appearing not only on the exterior label of the tape but also internally in soft copy. With this method, even if the tape label were lost, not copied, or incorrectly copied, a complete list of all necessary items could still be obtained by opening and reading the tape.

4.3 Case Study III

The third case (Data Generation by M&S Tailored DMA Procedures) was the generation of a MOBA TDB using a DMA production process tailored to meet the high resolution MOBA M&S requirements. This case study consisted of the production of enhanced ITD and DTED Levels III and IV, using both operational and high resolution MC&G imagery triangulated without ground control. Extraction of both ITD and one of the two DTED data sets occurred only within the 3 Km x 3 Km area inside the 5 Km x 5 Km project area; the other DTED data set covered the entire 5 Km area. This case was intended to demonstrate DMA's potential data generation capability as opposed to established DMA production processes. Data sources that are not likely to be available over operational areas (large scale conventional photographs, ground surveys, and engineering drawings of buildings) were not used.

4.3.1 Source of Data

Both operational and high resolution MC&G imagery were used to produce the feature and elevation products for Case Study III. High resolution MC&G imagery was used for the extraction of enhanced ITD and DTED Level IV, while operational MC&G imagery was used for the DTED Level III data set.

4.3.2 Resources Expended

A total of 758 hours of direct man-hour costs were expended in the production of Case Study III. This includes 31 man-hours for geopositioning, 125 man-hours for the extraction of DTED Level III, 151 hours for DTED Level IV, 240 hours for the extraction of the feature data, and 211 man-hours for the preparation of this document. An estimated total of 88 man-hours of indirect labor costs was incurred: 8 hours for geopositioning, and 40 hours apiece for elevation and feature data set management.

4.3.3 Production Process Assessment

Some concerns developed over imagery issues as well as from extraction artifacts in the elevation data sets. Operational MC&G imagery is routinely utilized as a source for the production of DTED Levels I and II. Due to the wide spacing of grid points on these products, the quality, scale, and resolution of this imagery do not present any serious problems. It cannot, however, support products with 1 or 3 meter grid spacing requirements. Although it was used to produce DTED Level III, this extended data extraction beyond the capabilities of the imagery.

The most serious concerns were due to the presence of extraction artifacts known as cornrows, which are a product of manually produced DTED. Historically, cornrows have not presented significant data quality problems, since the elevation post spacing on standard DTED products is quite large. With close grid spacing, however, cornrows are quite numerous, especially in heavily forested areas, where the "ground" location is uncertain. Although the effect of cornrows can be greatly reduced through processing techniques, there is a concern that such processing may adversely affect data quality.
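The striping and its suppression can be illustrated with a toy grid. The filter below is a plain three-post average across rows, not the actual production technique, and it shows the trade-off noted above: the stripe amplitude drops, but real one-post-scale relief would be softened the same way.

```python
# Illustrative only: damping "cornrow" striping in a dense elevation
# grid with a three-post moving average across the compilation rows.
# Real DTED finishing uses production-specific filters; this sketch
# just shows why such processing can also degrade genuine detail.

def smooth_across_rows(grid):
    """Average each interior post with its north/south neighbors."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]  # edge rows are left untouched
    for r in range(1, rows - 1):
        for c in range(cols):
            out[r][c] = (grid[r - 1][c] + grid[r][c] + grid[r + 1][c]) / 3.0
    return out

# A flat 100 m surface with a one-meter "cornrow" on every other row.
striped = [[100.0 + (r % 2)] * 4 for r in range(5)]
print(smooth_across_rows(striped)[1][0])  # stripe amplitude reduced
```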

4.3.4 Lessons Learned

The results of Case Study III confirm the belief that standard production processes can, in most cases, be successfully utilized and adapted where needed to produce customized digital terrain data sets. The primary problems associated with this case study, however, had to do with post-production activities, preproduction confusion over project requirements, and image-related issues.

The general lessons mirror those of Case Study II: there must be a clear understanding at the outset of a project of all hardware and software inventories and capabilities for both data generators and users, and there should be a standardized tape and file naming system, with a complete listing and description of all data files appearing both on the exterior label of the tape and internally in soft copy. With this method, even if the tape label were lost, not copied, or incorrectly copied, a complete list of all necessary items could still be obtained by opening and reading the tape.

There were two image-related issues that need adjustment for future work. Although high resolution MC&G imagery was a requirement for this project, it took six months to obtain. Missions requiring such imagery must plan sufficient lead time for imagery acquisition or arrange for higher priority for imagery procurement. The other image-related issue concerns the season in which the imagery was taken: use of winter imagery would have resulted in much more accurate elevation data sets.

4.4 Case Study IV

The final case (Data Generation from Unconstrained DMA Procedures) was the generation of a high resolution MOBA-TDB using unconstrained resources. The TDB was generated by DMA from high resolution conventional photographs, MC&G imagery, high resolution MC&G imagery, and a site survey, supported by a geodetic survey. The intent was to produce the best possible database, using any available data and data support sources; it is understood that this approach may not be possible over operational areas. The case was similar to Case III except that additional data sources (large scale conventional photographs, ground surveys, and engineering drawings of buildings) were used. DMA used conventional photographs and both MC&G and high resolution MC&G imagery, triangulated to ground control collected by DMA. The triangulated high resolution conventional photographs were the primary source for elevations and feature locations. The MC&G and high resolution MC&G imagery were used to assist in identification of features and collection of feature attributes. The 1 Km x 2 Km patch of 1 meter resolution DEM, generated for Case I, was also used in this case. A DEM of 3 meter resolution and high resolution feature data were collected over the 3 Km x 3 Km area. These data sets were generated directly, without intermediate production of a medium resolution standard product. Complete 3D building computer aided design (CAD) models were generated from engineering drawings; these models include interior and exterior building geometry. TEC compiled the 3 meter and 1 meter DEMs, feature data, and 3D building models into an S1000 database.

4.4.1 Source of Data

Ninety frames of conventional, commercially derived aerial photography at a scale of 1:5,000 and seven photographs at a scale of 1:20,000 were used.

4.4.2 Resources Expended

A total of 2,202 hours of direct man-hour costs were expended in the production of Case Study IV. This includes 449 man-hours for geopositioning, 1,083 man-hours for conducting the geodetic survey and follow-up reporting, 250 hours for the extraction of the feature data, and 300 man-hours for the preparation of this document. An estimated total of 158 man-hours of indirect labor costs was incurred: 8 hours for geopositioning, and 150 hours for feature data set management.

4.4.3 Production Process Assessment

While there were no insurmountable problems in Case Study IV, a number of time consuming inconveniences were encountered in the feature extraction process. The areas of difficulty were related to the type of imagery used, and the hardware and software of the system used to exploit the imagery.

Image related difficulties originated from the necessity of creating small, numerous models. The small size of each aerial photograph, along with the overlap and sidelap requirements for the stereo use of such imagery dictated the creation of small stereo models. Since data extraction models can be no larger than the stereo models in which they are contained, many small data extraction models were also necessary. Model creation and closing, as well as the required internal quality assurance checks, are time consuming processes, and the greater the number of models to process, the greater the associated processing time. The MC&G imagery (used in Case Studies II and III) required the construction of only 1 stereo model each, and from 1 to 10 associated data extraction models (depending on the case study), while the aerial photography used in this case study required 28 stereo models and 31 data extraction models.

Time consuming difficulties related to the software had to do with the processing of individual photo strips, the need for a software fix to allow image exploitation, and time-saving automated processes that did not work well. Software restrictions allowed only one photo strip to be processed at a time. As a result, the photo analyst could not quickly move from one end of the project to the other, since doing so involved the cumbersome and time intensive processes of activating the proper photo strip, orienting the correct stereo model, and finally opening the appropriate extraction model, each a 20 to 30 minute process.

The FE/S was designed to exploit hard copy imagery sources. Since DMA uses hard copy MC&G imagery on the FE/S almost exclusively, the orientation processes and procedures for conventional photography were not completely developed, were somewhat cumbersome, and were more time consuming than the orientation process for MC&G imagery. Before extraction could take place, Fortran code had to be written to convert the ellipsoid heights in the control data to the mean sea level (MSL) values required by the FE/S.
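The conversion itself rests on the standard relation between ellipsoidal and orthometric (MSL) heights, H = h - N, where N is the geoid undulation. The original fix was written in Fortran; the sketch below is an illustrative Python equivalent, and the sample undulation value is only approximate.

```python
# Sketch of the ellipsoid-to-MSL height conversion the FE/S required.
# Standard relation: H (orthometric/MSL) = h (ellipsoidal) - N (geoid
# undulation). Illustrative only; the original fix was Fortran, and
# the undulation value below is a rough regional figure, not surveyed.

def ellipsoid_to_msl(h_ellipsoid, geoid_undulation):
    """Convert an ellipsoidal height to an approximate MSL height."""
    return h_ellipsoid - geoid_undulation

# In the southeastern US the geoid sits roughly 30 m below the WGS 84
# ellipsoid (N is about -30 m), so MSL heights exceed ellipsoidal ones.
print(ellipsoid_to_msl(100.0, -30.0))  # -> 130.0
```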

Several processes and procedures typically used as shortcuts in the data extraction process (the use of the auto node and auto tag processes, and the use of maintenance data) could not be utilized on this project. These processes normally save much time and effort, since manual placement of nodes and tags is quite time consuming. On this project, however, both processes generated numerous errors, requiring much repair time. Use of auto node and auto tag was discontinued, necessitating the labor- and time-intensive manual approach.

There was also a hardware related problem. The FE/S hardware has a built-in restriction in its magnification (zoom) capabilities. While this does not adversely affect extraction from MC&G imagery, it proved to be very restrictive with conventional aerial photography. The photo analyst could not "zoom up" to get an overall view of an area in order to get a better idea of how features should be split apart or grouped together. Instead, mapping had to take place at a high magnification level, which does not allow the analyst a view of the "big picture", and as a result, interpretations of feature outlines required much editing.

The combination of small model sizes, restricted "zoom" capability, lengthy orientation and photo strip processing times, and inability to utilize such time saving processes as auto node, auto tag, and maintenance data, made it difficult to map, to get an overall view of the mapping area, and to easily change previously compiled linework from another photo model. More processing time directly correlates to less data extraction time, especially when a limited time frame for project completion is considered.

4.4.4 Lessons Learned

The results of Case Study IV confirm the belief that standard production processes can be successfully utilized and adapted to produce customized digital terrain data sets. The geopositioning activities followed well established procedures, and did not encounter any significant problems. There was some room for improvement in a few areas related to the geodetic survey. In the feature extraction arena, with a number of improvements in processes and procedures, production times could be shortened, and certain problems could be avoided.

Geodetic crews should be involved in coordinating the preflight plan with the aerial photography contractor. This would allow the geodetic team to locate additional identifiable, suitable GPS control points on the ground.

When the geodetic crews were in the field, the aerial photography, photocopies of the imagery, charts and point sketches were used as guides to identify and locate the preselected control points. Since there was only one set of paper prints of the imagery available for use, individual crew members had to use photocopies of the exposures to locate the control points. In most cases, these photocopies did not show enough detail to be of much use. If duplicate sets of paper prints had been available, multiple field crews could have accessed the imagery at the same time, thus ensuring confidence in point site recognition, as well as speeding up the process.

The general lessons again mirror those of Case Study II: exchange, at the outset of a project, complete lists of all hardware items (including capacities) and software items (including versions) for both data generators and users, and adopt a standardized tape and file naming system, with a complete listing and description of all data files appearing both on the exterior label of the tape and internally in soft copy. With this method, even if the tape label were lost, not copied, or incorrectly copied, a complete list of all necessary items could still be obtained by opening and reading the tape.

There was also an image-related issue concerning the season in which the imagery was taken. Both the MC&G imagery and the conventional aerial photography were "summer scenes" with full foliage on the deciduous trees. Imagery taken in the middle of winter, when the deciduous trees have lost their leaves, would have allowed a better view of the ground. Use of winter imagery would have resulted in much more accurate extraction over forested areas, and possibly would have made the extraction of an elevation data set by DMA for this case study feasible.

Finally, the conventional aerial photography was originally believed to be the type of imagery that would generate the best data set possible. While image interpretations made in open areas were not a problem, the overall dark tones and general lack of variability in tonal patterns made differentiation of vegetation types in heavily forested areas difficult at best and impossible at worst. The MC&G imagery was a better image source, at least in heavily vegetated areas, than the conventional aerial imagery.

5. CAD Generation

This section documents the generation of Computer Aided Design (CAD) models of the McKenna MOUT site. The intent was to generate a high resolution database from detailed engineering drawings, aerial photographs, and texture maps extracted from still photographs of the site buildings. These models, along with the elevation and feature data generated in Case Study IV, were used in the creation of the S1000 database.

5.1 Source Data

The CAD models were generated from the drawing package and building details for the McKenna MOUT site, Ft. Benning, GA (Drawing Nos. FE 21725 - FE 21889, 164 drawings); 144 8x10 glossy photographs of the site buildings; and sewer layout details.

5.2 Resources Expended

A total of 275 hours were expended in the production of the CAD models. There was no breakout by direct and indirect costs.

5.3 Production Process Assessment

In general, textures extracted from still photos taken orthogonal to the building face produce reasonable alignment for doorways and windows. As the photograph becomes increasingly oblique, which is especially common for multi-story buildings photographed from ground level, misalignment becomes more apparent. Some tools can manipulate an image to remove the perspective introduced by the camera geometry; this is not true ortho-rectification of the image but simply a stretching of the digital image's pixels, and it does not guarantee correct alignment.
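The "pixel stretching" described above can be sketched as a simple quadrilateral-to-rectangle resampling: each output pixel pulls its value from a source location found by bilinearly blending the quad's corners. This is illustrative only - real tools interpolate intensities and may use a projective rather than bilinear mapping - and, as noted, it is not true ortho-rectification.

```python
# Illustrative "pixel stretching": flatten an obliquely photographed
# building face (a quadrilateral in the photo) into a rectangle by
# bilinear corner blending and nearest-neighbor sampling.

def stretch_quad(image, quad, out_w, out_h):
    """Resample `quad` (corners TL, TR, BR, BL as (x, y) pixel coords)
    into an out_w x out_h rectangle."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    out = []
    for j in range(out_h):
        v = j / (out_h - 1) if out_h > 1 else 0.0
        row = []
        for i in range(out_w):
            u = i / (out_w - 1) if out_w > 1 else 0.0
            # Bilinear blend of the four corner positions.
            x = (1-u)*(1-v)*x0 + u*(1-v)*x1 + u*v*x2 + (1-u)*v*x3
            y = (1-u)*(1-v)*y0 + u*(1-v)*y1 + u*v*y2 + (1-u)*v*y3
            row.append(image[int(round(y))][int(round(x))])
        out.append(row)
    return out

# 4x4 toy "photo"; the face occupies a slanted quad inside it.
img = [[10 * r + c for c in range(4)] for r in range(4)]
face = stretch_quad(img, [(1, 0), (3, 0), (3, 3), (0, 3)], 3, 3)
print(face[0])  # top edge of the face, sampled from source row 0
```

Because the blend only stretches pixel positions, straight edges inside the face can still bow or misalign, which is exactly the limitation observed with the McKenna textures.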

Another observation concerns the shadowing introduced by the photographs. Since the photos were not taken at the same instant in time, shadowing on the buildings and ground provides conflicting cues that the observer's brain will try to reconcile.

The still photos of building faces were shot in black and white, although texture map file formats (RGB, JPEG, TIFF) can handle color. We were able to introduce color synthetically into the MOUT by assigning color attributes to the buildings, walls, and ground.

The process of digitization of the drawing coordinates is highly labor intensive and requires high productivity tools. Mixing detailed engineering drawings with photographs of the actual buildings was hampered by problems of misalignment due to both camera positional distortions and builder modifications not captured by the original drawings.

5.4 Lessons Learned

One unexpected observation made during production was that the builders often deviated from the original design drawings with respect to window and door size and placement. The precision of the drawings was held fixed, and an attempt was made to fit the textures to the wire-frame CAD model. Had the texture map images been produced under more controlled conditions with respect to scale and obliquity, it would have been better to adjust the face polygons to fit the "real world" rather than vice versa.

6. M&S Data Base Generation

The MOBA-TDB project represents a significant milestone in M&S terrain database production. Every effort was made to create a terrain database limited by the availability of the source data, rather than the existing limitations of current real-time graphics and computer generated forces systems. In this sense, the resulting MOBA database represents a forward solution that will only run optimally on future hardware and software systems. Nonetheless, the use of level of detail representations for terrain, features, and 3D models is a key aspect of the simulation database production process, which cannot be ignored.

Case IV represented the most challenging and interesting case for M&S database production. The use of unconstrained sources accentuated the importance of selecting the "best available" source data from alternative sources. However, this, together with the requirement to also process a 24 Km x 24 Km background maneuver area at SIMNET density, required additional time and labor for front-end GIS processing of the available feature data. Additional data had to be acquired and processed from available CONUS sources (e.g., the US Army Waterways Experiment Station, the US Geological Survey, and the US Census Bureau) to achieve comprehensive source coverage for the entire database footprint.

6.1 Source Data

This project involved the production of a M&S TDB from products generated in Case Study IV. This meant that all available sources, including high resolution aerial photographs, MC&G imagery, ground surveys, and engineering drawings, were to be incorporated directly in the M&S database production process in order to produce the best possible M&S database. For the purposes of this evaluation, only source data provided by DMA and GDE (through TEC) was to be considered for use in the high resolution McKenna MOUT site area. Imagery was used by TEC to upgrade existing and available digital topographic data received from these sources. In the course of the project, significant shortfalls were identified in the quality, quantity, and adequacy of the source data provided. Additional site-specific images were collected at Ft Benning to support geospecific building textures for the McKenna buildings. In addition, digital source products had to be procured for the surrounding 24 Km x 24 Km low resolution area in order to satisfy the M&S requirements for this database. These additional products included digital topographic data available from the US Geological Survey, the US Census Bureau, and the US Army Waterways Experiment Station.

6.2 Resources Expended

A total of 1,856 hours were expended in the production of the M&S Database. This includes 503 hours of general MOBA tasks, 718 hours for the 24 Km x 24 Km maneuver box tasks, and 636 hours for the 4 Km x 4 Km tasks. It should be noted that approximately 198 man-hours were expended as subcontract labor, and that additional consulting costs were incurred in the process of hiring a photographer to gather site-specific photographs used as texture maps. These expenditures were not broken down into direct and indirect costs.

6.3 Production Process Assessment

Case IV represented the most challenging scenario for source data evaluation and fusion. 115 man-hours were spent on these tasks, 30 hours more than initially planned.

TIN processing also took longer than initially estimated. This was the result of a combination of factors, including the high density of the elevation data being processed, the interaction of high-density road data with the elevation data in integrated TIN simplification, and the inexperience of personnel who were using the TIN generation tool for the first time.

Manual editing and texture application proved to be time consuming, but the end result was well worth the effort in the visual effect achieved by high resolution, photospecific texturing.

Production of the 24 Km x 24 Km database, into which the McKenna MOUT site was inset, proved to be slightly more time consuming than the population of the Case IV inset. GIS processing, S1000 population, and associated tasks (less the editing and texturing of the McKenna MOUT site models) took a total of 718 man-hours for the 24 Km x 24 Km maneuver box, versus 636 man-hours for the 4 Km x 4 Km high density McKenna MOUT site area. These levels of effort generally correspond to the labor resource levels required for previous SIMNET database projects.

M&S database design was complicated by uncertainty as to the specific requirements and performance specifications of target simulators, including real time graphics, simulation host, and computer generated forces applications that will use this database now and in the future. We anticipate that additional database enhancements and modifications will be necessary to support currently fielded, and potential future simulation platforms. Experience gathered in testing and evaluating this database to date indicates that significant increases in the quality and efficiency of simulation platform performance will be needed to enable this database to be run at full efficiency in a virtual environment. Heightened interaction between simulation systems developers and M&S TDB producers will assist in identifying specific shortfalls and required database enhancements.

More detailed revisions to the 3D CAD models of the McKenna MOUT site were found to be necessary, in order to remove extraneous and inefficient vertices and segments from the 3D models, and to create two levels of detail for each McKenna building model. These measures were taken in order to optimize real time updates in the 3D visual systems, which would have been degraded by retaining the model geometry as digitized from engineering blueprints. Photospecific texture application was done concurrently with modifications to 3D model geometry.

TIN processing methods also diverged from the initial project plan/process model. In the initial plan, only one or two TIN iterations were anticipated, with no subsequent modification of the 2D features input to the CMU iTIN tool. A significant change to this process was the repeated, iterative processing of the terrain skin and associated input feature data in the CMU iTIN tool and ARC/INFO. Reprocessing of the TIN terrain surface was necessitated by excessive smoothing of the terrain surface polygons in the CMU iTIN tool output, which required multiple passes to create adequately detailed terrain surfaces. Evaluation of the TIN processing output was conducted both before and after the TIN data was exported into S1000. Rather than attempting to modify the TIN surface and associated 2D transportation features in S1000, the necessary 2D feature edits were made in ARC/INFO, followed by reprocessing of the TIN terrain surface in the CMU iTIN tool.

6.4 Lessons Learned

TIN processing represented a significant project milestone. Initial TIN generation revealed a significant loss of elevation accuracy due to road density vis-a-vis the specified polygon budgets. It is virtually impossible to process an acceptable TIN database into S1000 in a single pass using state-of-the-art tools. This problem was eliminated by an iterative process that both increased the polygon budget allocated to the terrain surface and further filtered and edited the 2D road geometry used in integrated TIN processing.
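
The budget-versus-accuracy tradeoff behind these iterations can be illustrated with a toy one-dimensional analogue of TIN simplification (this is not the CMU iTIN algorithm, and all names and numbers are invented): greedily insert vertices where the vertical error is worst until the vertex budget is spent or the worst error drops below a tolerance.

```python
# Toy illustration of the polygon-budget vs. elevation-accuracy tradeoff:
# greedy vertex insertion into a simplified terrain profile.

def interp(p1, p2, x):
    """Linear interpolation of elevation between two kept vertices."""
    (x1, z1), (x2, z2) = p1, p2
    return z1 + (z2 - z1) * (x - x1) / (x2 - x1)

def simplify_profile(points, budget, tol):
    """points: (x, z) pairs sorted by x. Returns (kept_points, max_error)."""
    kept = [points[0], points[-1]]
    while len(kept) < budget:
        worst, worst_err = None, 0.0
        for x, z in points:
            for a, b in zip(kept, kept[1:]):      # find bracketing vertices
                if a[0] <= x <= b[0]:
                    err = abs(z - interp(a, b, x))
                    if err > worst_err:
                        worst, worst_err = (x, z), err
                    break
        if worst is None or worst_err <= tol:     # budget or tolerance met
            break
        kept.append(worst)
        kept.sort()
    max_err = max(abs(z - interp(a, b, x))
                  for x, z in points
                  for a, b in zip(kept, kept[1:]) if a[0] <= x <= b[0])
    return kept, max_err
```

With a small budget the residual error stays large; raising the budget (as was done for the terrain-surface polygon allocation) drives the error down, which is the essence of the multi-pass processing described above.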

Explicit tailoring of source data products to fit modeling and simulation database design specifications is a necessary and unavoidable evil. Experience during this project was that the database producer must be able to "pick and choose" what data can and cannot be directly imported for use in the terrain database, what data will be retained for analytical purposes only, and what data will not be used at all.

Generation of multiple levels of detail building models was essential to make this database usable on even the highest performance real time graphics platforms. Real time performance would also be improved if S1000 were capable of processing hierarchical TINs at multiple levels of detail, and morphing forest canopies into individual tree models and stamps.

A very large number of site-specific photographs were made available to the M&S database production team. These sources proved essential for communicating visual database content, and were all the more vital since none of the M&S database production team had participated in the site surveys at Ft Benning. Had S1000 modelers been available to provide guidance to the original on-site photographers, and had the initial photographs been taken in color, redundant photography of the McKenna MOUT site could have been avoided.

More software tools are needed to optimize TINs, develop textures, build adjustable phototexture libraries, accurately position building models, provide quality control, and permit value adding.

7. TEC Digital Elevation Models

7.1 Source of Data

1:5,000 scale frame aerial photography (45 images) was used for this case study. Camera positions were photogrammetrically adjusted by aerial triangulation to ground control and to in-flight GPS-collected camera stations.

7.2 Original Production Plan

Two databases were created. The first, at a scale of 1:5,000, used the same imagery as GDE and DMA to produce a DEM and orthophoto, but over a smaller area than the GDE database. The second, at a scale of 1:20,000, was created to produce a DEM and an orthophoto over a 5 Km x 5 Km area. One of the DEMs was used within TEC to create a TIN for the simulation display. In addition, a "Standard" DEM was created for comparison against other DEMs.

7.3 DMA Data Evaluation

A set of DMA ground control points, obtained from a differential GPS ground survey (but not tied to the state first-order triangulation network), was measured on the DSPW using the Coordinate Display tool. The identification of the points was provided on sketches, and the data was provided in geographic coordinates, which were converted to UTM. Table 1 lists the differences between the DMA UTM coordinates and the measurements made in the models and displayed on the Coordinate Display form. The differences are DMA - TEC, in meters.

Table 1 Differences Between DMA Coordinates and Measurements on the Coordinate Display Form
 

Point Id.  North  East  Elevation  Point Id  North  East  Elevation 
19008  +0.71  -0.71  -0.53  49012  +4.02  +2.40  +0.17 
19010  -0.28  -0.08  -1.20  49013  +1.16  -1.15  -0.19 
19018  +1.56  +0.30  -0.72  49016  +1.41  -1.32  -0.09 
19020  -1.19  49017  +1.43  -1.44  +0.19 
19021  -0.60  49021  +2.95  -1.24  -1.00 
19027  -0.89  59003  +1.00  +0.37  -0.50 
19028  -1.15  59004  +0.56  -0.37  +0.01 
29006  +3.85  +0.48  -0.04  59005  -0.19 
29007  +1.23  -0.25  -0.30  59006  +0.40 
39007  +1.30  -0.49  -0.85  59007  -4.88  +2.26  -1.30 
39009  +0.90  +0.42  -0.14  59008  -0.19  -0.66  +0.09 
39010  +1.00  -0.84  -0.42  59009  -1.87  +1.31  -0.29 
39011  +1.21  -0.53  +0.14  69018  -0.36 
39013  +0.19  69019  +0.14 
39014  +1.54  0.00  -0.22  69020  +0.41  -0.47  -0.13 
39015  +1.47  -0.60  -0.68  69021  +0.31  -0.87  +0.10 
39016  +0.98  +0.65  -0.45  69022  +0.59  -0.52  +0.63 
39018  +0.02  +2.52  +0.13  69023  -0.44  -0.38  -0.63 
49010  +1.02  +4.31  -0.04  69025  +0.01  -0.44  +0.19 
79005  +0.47  -0.78  -0.77 
Average:  +0.78  +0.06  -0.32 
The above data shows that the DMA control could be recovered to an accuracy of one meter or better for the majority of points, but not all. The exceptions result from an absence of image detail that can be recovered accurately by image measurement. The proper approach is to place panels on the ground before flying the photography. A dozen or fewer well-selected points would have been sufficient to verify the accuracy of the PSI data.

Table 2 shows the differences at the six checkpoints used for control, for both the PSI and DMA coordinates, as measured in this data set. These were measured the same way as the above data. The data shows that there is very little difference between the PSI and DMA control comparisons.

Table 2 Differences Between Checkpoints Used for Control Between the PSI and DMA Coordinates
 

PSI  DMA  DMA - TEC  PSI - TEC 
Pt. Id.  Pt. Id.  North  East  Elev.  North  East  Elev. 
ch1  49001  +0.60  -1.19  -0.22  +0.12  -0.47  -0.27 
ch2  69026  -0.91  -0.87  +0.20  -1.49  -0.09  0.0 
ch3  39028  +0.40  -0.86  +0.40  -0.11  -0.30  +0.09 
ch4  29005  +0.31  -0.27  -0.65  -0.37  +0.90  +0.19 
ch5  69005  +0.41  -0.67  -0.08  -0.15  +0.11  0.0 
ch6  49009  -0.24  -0.27 
Average:  +0.16  -0.77  -0.10  -0.40  +0.03  -0.04 

7.4 Standard DEM

TEC, at the request of DMA/TMPO, created the best possible DEM over most of the "bare ground" area surrounding the MOUT site. This product is intended to serve as a standard against which other DEMs can be compared.

7.5 DEMs From Other Sources

DEMs from DMA, ESRI, and USGS were imported and viewed to obtain an idea of their accuracy.

7.5.1 DMA

DMA provided DTED level 3 and level 4 data over the MOUT site. These databases showed a definite elevation bias: the level 3 data was approximately 3 meters below the ground, and the level 4 data about 1.5 meters below the ground.

7.5.2 IFSAR

The IFSAR data from ESRI was corrected by 27 meters from ellipsoid to mean sea level height. The posts were for the most part quite close to the ground in the "bare ground" areas, but were on the canopy or slightly lower in the forested areas.

7.5.3 USGS

DEMs were obtained from USGS and processed before being imported into SOCET SET. The 7.5-minute quadrangles Cusseta and Ochillee covered the 4 Km x 4 Km area and were merged for this area. This data was most likely produced by the Gestalt system at USGS and is presented at 30 meter post spacing. The quality varies from fair to poor. It appears that the correlator sometimes failed, as in certain areas the posts were all at a single elevation.

7.6 Resources

The 1:5,000 scale database took 150 hours to complete. The 1:20,000 scale database took 68 hours to complete. The "Standard" DEM took 42 hours to complete.

7.7 Lessons Learned

Photo Acquisition: Make sure that the contracted company will provide differential GPS control for the camera stations and ground control points, and has experience with this technology.

Photo Scanning: Select a pixel size between 20 and 30 microns for the resolution of the digital data. Make sure that the full 8-bit dynamic range is captured as closely as possible. Scan the full image, including the photo identification data; this will make it easier to ensure that the intended photo is viewed on the photogrammetric workstation. If the camera position needs to be entered in the header file, the floppy disk of input data mentioned above should be loaded on this computer and the data pasted in.
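
As a rule of thumb, the ground footprint of a scanned pixel is the pixel size times the photo scale denominator. A minimal sketch (the helper name is ours, not from the report):

```python
# Ground sample distance implied by a scan resolution and photo scale:
# GSD (meters) = pixel size (meters) * scale denominator.
def ground_sample_distance_m(pixel_microns, scale_denominator):
    return pixel_microns * 1e-6 * scale_denominator

# A 25-micron scan of the 1:5,000 photography yields 12.5 cm ground pixels;
# the same scan of the 1:20,000 photography yields 50 cm ground pixels.
```

So the recommended 20 to 30 micron range corresponds to roughly 10 to 15 cm ground pixels for the 1:5,000 imagery used here.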

Block Triangulation: The selection of weights for the camera and ground control points is important. If differential GPS has been used, set the weight of each ground control point to the expected accuracy of that point. If it is a panel point, a weight of 0.05 meters is recommended for x, y, and z. For the camera control, which should be adjusted as discussed above, a weight of 0.05 meters is recommended for X and Y, but a looser weight should be used for Z, such as 0.5 meters. This accommodates the weakness in determining the z offset between antenna and camera and the general weakness in Z determination of the camera position. Another method is to first fly the camera with differential GPS over a test area with many control points to self-calibrate the Z offset.
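
The weighting recommendations above can be summarized as a priori standard deviations in meters (the function and its labels are illustrative; the values are those quoted in the text):

```python
# A priori standard deviations (meters) for the bundle adjustment, per the
# recommendations above. Function name and keys are illustrative only.
def control_sigmas(kind):
    if kind == "panel_point":        # paneled ground control, differential GPS
        return {"x": 0.05, "y": 0.05, "z": 0.05}
    if kind == "camera_station":     # GPS camera position; Z is the weak axis
        return {"x": 0.05, "y": 0.05, "z": 0.5}
    raise ValueError("unknown control type: " + kind)
```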

DEM Creation: Create DEMs with a small overlap between adjacent models. Run some sample tests first (if not familiar with the strategy files) to determine the best strategy file to use with the imagery. Use the contour display and primarily the geomorphic editor to edit the data. If there are large bodies of water, use the area editor (area fill) to edit these polygons before correlation to speed up the correlation process.

8. Warfighter Operational Evaluation Report

This section discusses the production and evaluation of environmental databases for an urban training area at Ft. Benning, Georgia (the McKenna MOUT Site). The purposes of the project were to: 1) develop high resolution mapping, charting, and geodesy (MC&G) urban test data of use to the DoD modeling and simulation community; 2) perform technical and user evaluations of the databases with regard to accuracy, completeness, and utility; and 3) provide results to decision makers faced with determining cost-effective solutions to high-resolution ("one-meter") data requirements. The task provided an opportunity to examine alternate production strategies for the generation of high resolution terrain databases. The study team completed both technical and subjective evaluations of the original databases and derived products.

8.1 Evaluation Results

  1. Useful high-resolution digital MC&G elevation and feature data were provided to describe the McKenna MOUT site at Fort Benning, GA.
  2. High-resolution requirements exist; however, one-meter data are not needed uniformly throughout a region, or for every circumstance.
  3. Data capture under forest canopy is a continuing problem requiring further research and the aid of advanced sensor technologies.
  4. Less than one percent of the MC&G high-resolution data can now be directly accommodated in SIMNET runtime load modules.
  5. The majority of warfighters queried stated that database features should be positioned within five meters of their true location; a near majority wanted positional accuracy within one meter for dismounted operations.
  6. More capable computer image generators are needed to support high-resolution, urban infantry operations.

8.2 Recommendations

  1. Cognizant organizations should produce technical additions and improvements to MOBA environmental databases and dismounted warrior simulation capabilities. These would include:
  2. The DMSO should furnish an improved capability to simulate individual combatant behaviors. Actions would include:
  3. The Army Deputy Chief of Staff for Intelligence should endorse requirements for better data collection and production, as well as improved mission simulation systems. Based on the results of this evaluation, we recommend the earmarking of resources for improvements in databases and tools critical to the responsive production of topographic products for the warfighter.

9. DEM Evaluations

This section documents the evaluations performed on the DEMs generated for the Ft. Benning MOUT project. The evaluations assess how well each DEM matches "ground truth" data (a control DEM and GPS survey points). The results of the DEM evaluations will be used in conjunction with other evaluations to help determine each subject data set's ability to satisfy the modeling and simulation requirements established for this project.

The following evaluations were performed on each test DEM:

  1. Absolute Vertical Accuracy: established by comparing each DEM to the GPS survey points and to the base (control) DEM. Range, mean, and standard deviation of differences in Z (vertical plane only) were calculated.
  2. "Cornrow"/Artifacts: the DTED level 4 data set extracted on the BC1-s stereoplotter was evaluated in a "raw" state and after post-processing to minimize the manual profile (cornrow) artifact.
  3. Vegetated Areas: each test data set was compared to an accurate (sub-meter relative accuracy) field terrain profile generated by acquiring a dense profile of GPS survey points in heavily vegetated areas. The vegetation in this area was primarily a large pine tree canopy.
  4. Micro Relief: each data set's ability to depict micro relief was evaluated by comparison to accurate (sub-meter accuracy) field data (GPS survey points) collected over a terrain profile.
  5. Relative Vertical Accuracy: evaluated by comparing pairs of points from the control micro relief profile data to each test data set.
  6. Visual Displays: various visual displays, including shaded relief, wireframe, and contour portrayals, were generated to evaluate the characteristics of the data sets.
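
A minimal sketch of the absolute vertical accuracy computation described above: given a set of elevation differences (control minus test), compute the range, mean, standard deviation, and 90% linear error. The LE90 factor of 1.6449 assumes normally distributed errors (the one-sided 90% quantile of a zero-mean normal), which is consistent with the statistics tabulated in Section 9.3; the helper name is ours.

```python
import math

# Absolute vertical accuracy statistics for elevation differences
# dz = control - test, in meters. LE90 = 1.6449 * sigma assumes the
# differences are normally distributed.
def vertical_accuracy(dz):
    n = len(dz)
    mean = sum(dz) / n
    sigma = math.sqrt(sum((d - mean) ** 2 for d in dz) / n)
    return {"min": min(dz), "max": max(dz), "mean": mean,
            "std": sigma, "le90": 1.6449 * sigma}
```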

In addition to the evaluations listed above, further evaluations were planned for this project. Two evaluations were not completed, as the necessary software development could not be finished in time: 1) a DX, DY, DZ shift for each DEM, relative to the control DEM, was to be calculated and applied to remove any horizontal or vertical shifts detected in the test DEMs; and 2) a more rigorous relative vertical accuracy was to be computed by comparing pairs of points in the control DEM with each test DEM over varying distances.

9.1 DEMs Evaluated

  1. 1 meter post spacing DEM (equivalent to DTED level 5) produced by TEC/GDE (control DEM).
  2. DTED level 2 (30 meter post spacing) produced by NIMA.
  3. DTED level 3 (10 meter post spacing) produced by NIMA.
  4. DTED level 4 (3 meter post spacing) produced by NIMA (without ground control).
  5. IFSARE DEM (2.5 meter resolution) produced by ERIM.

9.2 Procedures

The DEMs were evaluated for absolute and relative vertical accuracy via DEM-to-DEM and DEM-to-survey comparisons, and for accurate relief depiction in non-vegetated and vegetated areas via DEM-to-survey comparisons.

DEMs were evaluated on the Interactive Quality Review System (SUN/UNIX environment) utilizing ARC/INFO GRID functionality with NIMA enhancements (DTED TOOLS). The TEC-produced LV5 DEM was used as the control DEM data set. GPS survey data produced by NIMA was used as survey control. The data was loaded to the IQRS, and DEM-to-DEM statistics were generated via ARC/INFO STAT functionality. DEM-to-survey statistics were generated via ARC/INFO SEARCH functionality.

9.3 Results Summaries

9.3.1 Matrix to Matrix (post to post) Vertical Difference Statistical Comparison

Software used for this evaluation converts differences to integer values before computing statistics. Differences were calculated as control lv 5 matrix - test matrix; difference results are in integer meters.
 
DEM  MIN DIF  MAX DIF  MEAN DIF  STD DEV  90% LE 
GDE LV 5  -5  47  0.768  5.197  8.548 
NIMA LV 4  -4  2.284  1.106  1.819 
NIMA LV 3 (DPS)  -17  12  1.794  1.746  2.872 
IFSARE (ERIM)  -29  -0.73  2.543  4.183 
NIMA LV4 SPRINT  -4  2.283  1.093  1.799 
NIMA LV2  -1  16  5.559  2.311  3.801 
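
The 90% LE column above is consistent with the normal-distribution relation LE90 = 1.6449 * sigma (the one-sided 90% quantile of a zero-mean normal), which appears to be how the column was derived. A quick check against several complete rows of the table:

```python
# Checking the reported 90% LE values against LE90 = 1.6449 * std dev,
# using (std dev, reported 90% LE) pairs taken from the table above.
rows = {
    "GDE LV 5":      (5.197, 8.548),
    "NIMA LV 3":     (1.746, 2.872),
    "IFSARE (ERIM)": (2.543, 4.183),
    "NIMA LV 2":     (2.311, 3.801),
}
for name, (std, le90) in rows.items():
    assert abs(1.6449 * std - le90) < 0.005, name
```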

9.3.2 Matrix to GPS Survey Control Comparison

Values are in meters. Software used for this evaluation converts matrix elevations to integer values. Data sets which covered more area (sq. Km) than the control set were not clipped to the control set size; thus larger data sets also had more GPS control points within the SEARCH area.
 
DEM  # OF POINTS  MEAN  STD DEV  90%LE 
GDE LV 5  22  -0.010  0.558  0.917 
NIMA LV 4  40  -1.974  1.483  2.439 
NIMA LV 3 (DPS)  97  -1.748  1.655  2.723 
IFSARE (ERIM)  40  -0.498  1.605  2.639 
NIMA LV 4 (SPRINT)  40  -1.965  1.475  2.426 
NIMA LV 2  87  -4.286  4.743  7.802 
CONTROL LV 5  22  -0.056  0.449  0.739 

9.3.3 Vegetated and Micro-Relief Evaluation

Results of comparisons to profiles collected in tree coverage areas (two separate profiles) were as follows:

Profile 1:
 

DEM  # OF POINTS  MEAN  STD DEV  90%LE 
CONTROL LV 5  203  1.096  2.656  4.370 
NIMA LV 4  203  1.960  4.069  6.695 
NIMA LV 3 (DPS)  203  9.696  5.177  8.515 
IFSARE (ERIM)  203  12.201  3.659  6.019 
NIMA LV 4 (SPRINT)  203  1.748  3.623  5.960 
NIMA LV 2  203  0.058  3.022  4.971 
Profile 2:
 
DEM  # OF POINTS  MEAN  STD DEV  90%LE 
CONTROL LV 5  201  -0.939  0.761  1.252 
NIMA LV 4  201  -2.189  0.867  1.426 
NIMA LV 3 (DPS)  201  -0.145  1.496  2.462 
IFSARE (ERIM)  201  7.43  2.293  3.772 
NIMA LV 4 (SPRINT)  201  2.331  0.715  1.177 
NIMA LV 2  201  -4.978  1.003  1.651 
Results of comparisons to the profile collected in an open area (no trees) depicting micro relief characteristics are as follows:
 
DEM  # OF POINTS  MEAN  STD DEV  90%LE 
CONTROL LV 5  118  -0.061  0.711  1.170 
NIMA LV 4  118  -3.147  0.501  0.824 
NIMA LV 3 (DPS)  118  -2.606  0.545  0.895 
IFSARE (ERIM)  118  0.817  0.643  1.058 
NIMA LV 4 (SPRINT)  118  -3.034  0.529  0.871 
NIMA LV 2  118  -11.794  0.557  0.917 

9.3.4 Relative Vertical Accuracy Analysis

Relative vertical accuracy was calculated from the comparisons of the matrices to the micro-relief GPS ground survey; every tenth survey point was used in the analysis. Results of the vertical accuracy analysis, expressed as 90% confidence linear error, are:
 
DEM  Relative Vertical 90% LE 
CONTROL LV 5  1.46 
NIMA LV4  1.00 
NIMA LV3 DPS  1.22 
IFSARE ERIM  1.21 
NIMA LV2  1.10 

9.4 Analysis of Results

9.4.1 DEM to DEM evaluation

  1. The NIMA data sets appear to be lower (mean) than the GDE/TEC data sets (levels 3 and 4 by about 2 or 3 meters, level 2 by about 5 meters). The NIMA data sets were generated from standard MC&G triangulation solutions without incorporating the GPS survey results, so this difference is not surprising. The GDE (and TEC) data sets were generated from imagery that used the GPS survey as triangulation control.
  2. The GDE level 5 data set to TEC control data set comparison yielded larger than expected 90% LE (8.5 meters) and min/max range values; this may indicate that some "intermediate" data set generated by GDE was shipped to NIMA. All of the level 5 data sets were supplied to NIMA via TEC.
  3. The large negative difference (min) reported for the IFSARE comparison occurred around the lake area, and appears possibly to be caused by layover (trees?) close to the boundary of the lake.

9.4.2 DEM to GPS survey

  1. NIMA data is confirmed to be low relative to the GPS survey (levels 3 and 4 by about 2 meters, level 2 by about 4 meters).
  2. The GDE/TEC data sets yield extremely low 90% LE values from this comparison. These results can be attributed to two factors: the increased resolution of the data sets (less interpolation error in the SEARCH directive) and the fact that the imagery was triangulated using this very data set as control.

9.4.3 DEM to GPS Ground Profile (Tree coverage areas)

  1. None of the data sets did a particularly good job of portraying the ground surface under the tree canopy. This conclusion is based on analysis of the graphical profile data generated via SPYGLASS Transform software.
  2. The IFSARE and DPS generated data sets portrayed elevations more closely representative of the top (or near the top) of the tree canopy (the reflective surface).

9.4.4 DEM to GPS Ground Profile (Open micro relief area)

All of the data sets exhibit extremely low 90% LE values. Given the nature of our evaluation software (rounding to integers) they are virtually identical.

9.4.5 DEM relative vertical accuracy

Again, given the limitations of the software used for this statistical evaluation, all of the data sets are virtually identical.

9.4.6 Visual display analysis

  1. The TEC control level 5 data set and the GDE level 5 data set exhibit extremely high resolution detail on shaded relief displays in the open (dirt) areas, and are for the most part devoid of extraction/edit artifacts. It is obvious that the tree covered areas were interpolated to this resolution from a less dense post spacing.
  2. The NIMA data exhibits the manual profile artifact, a.k.a. "corn rows". NIMA cartographers attempted to profile the ground surface through the tree cover. Generally the severity (magnitude) of the artifact varies with the obscuration factor (in this case tree cover); open (dirt) areas are less affected than tree covered areas. Attempts to edit out this artifact using production software are generally only semi-successful, as evidenced by the "SPRINT" data displays.
  3. The DPS produced level 3 data exhibits very good resolution in the open (dirt) areas, with few extraction or edit artifacts evident. The tree areas were intentionally left as extracted by the correlator; only large positive/negative spikes were removed.
  4. The IFSARE data also does a good job of portraying the ground surface in the open (dirt) areas. This data set also portrays tree canopy (reflective surface) heights over dense tree coverage areas, similar to the DPS correlator solution.

10. Feature Data Accuracy Evaluation

This section documents the accuracy evaluation of all the feature data sets produced to evaluate terrain data generated for the McKenna MOUT Site. The data sets include one generated by GDE Systems, Inc., three constructed by DMA, and one S1000 database. This was primarily an analysis of feature horizontal position accuracy, although considerable insight into data completeness and representational accuracy was gleaned from the effort to assess position accuracy. A photogrammetric data set, prepared by TEC, was used as the standard for the evaluation. Coordinates of well-defined points on the features were read from the TEC photogrammetric models. Coordinates of the same points were determined from each feature data set using ARC/INFO. Position differences between the points in the photogrammetric model and the data sets were calculated, and graphics were prepared to provide a quantitative and visual estimate of the absolute and relative horizontal position accuracy of the terrain feature data. The evaluation concluded that the GDE data set was the most accurate; the one prepared by DMA from the same imagery was the next most accurate; the one prepared by DMA using the sources and techniques used to produce standard ITD was third; and the DMA data set made by supplementing the ITD from larger scale imagery was least accurate. A further conclusion was that errors in plotting were more significant than systematic errors. The accuracy was further degraded in constructing an S1000 database for SIMNET simulation.

An absolute and relative horizontal positional accuracy evaluation of the feature data sets prepared by DMA and GDE for the MOUT Project was conducted by TEC. This evaluation, which was performed in the Topographic Technology Laboratory (TTL) of TEC, is closely related to the evaluation of other quality aspects of the feature data sets performed by the Military Operations in Built-up Areas Terrain Database Technical Evaluation Working Group. Some of the insights that resulted from the metric accuracy assessment are included in this report and were provided to TEC's Digital Concepts and Analysis Center (DCAC) for use in the working group's assessment. The source data (photogrammetric data base) used in the TTL evaluations was also used to provide products for use in the Operational Warfighter Evaluations.
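
The horizontal-accuracy bookkeeping described above can be sketched as follows, with invented coordinates: per-point offsets from the photogrammetric standard give absolute radial errors, and removing the mean offset (the "alignment" used for the relative-accuracy figures) gives relative errors. Function and variable names are illustrative, not from the TTL evaluation software.

```python
import math

# Per-point horizontal offsets (test - standard), absolute radial errors,
# and relative errors after removing the mean (systematic) offset.
def horizontal_errors(standard, test):
    de = [t[0] - s[0] for s, t in zip(standard, test)]   # easting offsets
    dn = [t[1] - s[1] for s, t in zip(standard, test)]   # northing offsets
    n = len(de)
    me, mn = sum(de) / n, sum(dn) / n                    # mean offset
    absolute = [math.hypot(e, d) for e, d in zip(de, dn)]
    relative = [math.hypot(e - me, d - mn) for e, d in zip(de, dn)]
    return (me, mn), absolute, relative

# Invented UTM-like coordinates for three well-defined points.
std_pts  = [(100.0, 200.0), (150.0, 260.0), (130.0, 240.0)]
test_pts = [(101.0, 199.0), (151.2, 259.1), (130.8, 239.2)]
offset, abs_err, rel_err = horizontal_errors(std_pts, test_pts)
```

A large mean offset with small relative errors indicates a systematic (datum or control) shift; the report's finding was the opposite, with plotting errors dominating the systematic component.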

10.1 Evaluation Plan

Originally TEC planned to use the DMA GPS ground control points to evaluate feature accuracy. However, none of them were located on well-defined feature points. Therefore, the procedure was modified to compare well-defined feature point coordinates from the various data sets to coordinates of those same points read from the TEC "Coordinate Standard" Photogrammetric Data Set. Absolute accuracies were determined from a direct comparison of these coordinate values. Differences in control surveys upon which each set was based were to be taken into account in making the comparisons. Operator interpretation of the points was recognized as a part of the errors detected. The points were chosen carefully in an attempt to minimize this factor. Relative accuracies were determined by aligning all data sets to the photogrammetrically determined coordinates.

10.2 Evaluation Procedures

Fifty-two well-defined feature points which appeared in both GDE's high resolution feature data and DMA's Enhanced ITD data sets were chosen for use in the assessment. A fifty-third point, which appeared only in the GDE Feature Data, was also used. Four of these points were later discarded. UTM coordinates of each point were read from TEC's Photogrammetric Data Set. UTM coordinates of the same points were determined from the GDE and DMA digital feature data sets using ARC/INFO software. These readings were then compared to the TEC "standard" coordinates.

During the early part of the evaluation, a variation between the PSI and DMA GPS Control Surveys was discovered. Table 3 shows the differences at the six paneled ground points established by PSI. The DMA Triangulation Report shows the same approximate differences. DMA applied these offsets to its photo stations before doing triangulation. There was considerable discussion among DMA, TASC, NOAA, and TEC about these differences, but the consensus seemed to be that the results seen are at about the limits of GPS positioning accuracy. TEC received informal information that a DMA resurvey of the area did not resolve the differences.

Table 3 Variation Between PSI and DMA Coordinates
 
          DMA Coordinates               PSI Coordinates               Difference
Point     Easting   Northing  Elev      Easting   Northing  Elev      Easting  Northing  Elev
39028     5387.52   5700.77   137.745   5388.08   5700.28   137.44    -0.56    0.49       0.31
49009     6574.84   3552.94   129.427   6575.60   3552.38   129.40    -0.76    0.56       0.03
49011     6518.13   3658.20   130.197   6518.85   3657.72   130.15    -0.72    0.48       0.05
69026     8145.38   4967.59   138.987   8146.16   4967.02   138.97    -0.78    0.57       0.02
29005     4906.98   2481.93   111.833   4907.80   2481.36   111.94    -0.82    0.57      -0.11
69005     8285.68   2009.83   123.885   8286.46   2009.27   123.97    -0.78    0.56      -0.08
AVG                                                                   -0.74    0.54       0.03
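The offsets in Table 3 are straightforward per-point coordinate differences (DMA minus PSI), averaged to estimate the bias between the two control surveys. A minimal sketch of that computation, using only the values from the table:

```python
# Recomputing the survey offsets in Table 3.  All values are from the table;
# each entry maps a point ID to (easting, northing, elevation) in meters.
dma = {
    "39028": (5387.52, 5700.77, 137.745),
    "49009": (6574.84, 3552.94, 129.427),
    "49011": (6518.13, 3658.20, 130.197),
    "69026": (8145.38, 4967.59, 138.987),
    "29005": (4906.98, 2481.93, 111.833),
    "69005": (8285.68, 2009.83, 123.885),
}
psi = {
    "39028": (5388.08, 5700.28, 137.44),
    "49009": (6575.60, 3552.38, 129.40),
    "49011": (6518.85, 3657.72, 130.15),
    "69026": (8146.16, 4967.02, 138.97),
    "29005": (4907.80, 2481.36, 111.94),
    "69005": (8286.46, 2009.27, 123.97),
}

# Per-point differences (DMA minus PSI) and the average offset between surveys.
diffs = {pt: tuple(d - p for d, p in zip(dma[pt], psi[pt])) for pt in dma}
avg = tuple(round(sum(d[i] for d in diffs.values()) / len(diffs), 2) for i in range(3))
# avg reproduces the AVG row of Table 3: (-0.74, 0.54, 0.03)
```

The averaged horizontal offset (about -0.74 m easting, 0.54 m northing) is the shift that was applied to align coordinates between the two control solutions.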
Because the GDE photogrammetric data sets and the TEC photogrammetric models were both based on PSI GPS Control, the comparison between the TEC Photogrammetric Standard and the GDE Feature Data was made directly. The DMA data sets were believed to be based on DMA GPS control; consequently, the offset between DMA and PSI Control was applied before the comparisons were made for those data sets. Later information about the DMA Data Sets and other subsequent findings led to a change in this approach.

The TEC Photogrammetric Block was initially triangulated using only the PSI supplied GPS Control for the camera stations. Later two of the paneled ground points established by PSI were incorporated into the solution in an attempt to improve the vertical accuracy of the block. In addition to the internal checks and evaluations made by TEC in triangulating the photogrammetric block, some additional accuracy assurances were obtained by comparing coordinates of ground points established by GPS with those read from the block.

Table 4 compares coordinates measured photogrammetrically at TEC with the PSI GPS coordinates for the three paneled points which fall within the block. Except for one of the elevations, the two sets of coordinates agree to within less than 0.1 meter at these three points. This indicates a high degree of consistency between the block and points used to control it.

Table 4 Comparison Between PSI and TEC Coordinates
 
          PSI Coordinates               TEC Coordinates               Difference
Point     Easting   Northing  Elev      Easting   Northing  Elev      Easting  Northing  Elev
49009     6575.60   3552.38   129.40    6575.54   3552.40   129.34     0.07    -0.01      0.06
49011     6518.85   3657.72   130.15    6518.90   3657.75   130.52    -0.05    -0.03     -0.37
69026     8146.16   4967.02   138.97    8146.23   4966.95   138.96    -0.07     0.08      0.01
AVG                                                                   -0.02     0.01     -0.10
Coordinates of the DMA-established ground control points were also compared to readings taken from the TEC Photogrammetric Block. Thirty-four of the DMA points fell within the approximately 2.5 km by 2.5 km area covered by the TEC Block. Two of these could not be identified by TEC from the descriptions provided by DMA. The coordinates for a third had clearly not been read at the point described. Two others were later discarded because they were poorly defined and compared poorly with the DMA Coordinates. The first of these was one of a number of points positioned at the point of vegetation at a road fork; in this case there were several locations which could have been rationalized to be the point, none of which matched the DMA coordinates well. The other was a tip of land extending into a lake which could not be selected reliably on the 1:5,000 imagery, although it appeared quite clear on the 1:20,000 photos flown a few months later. It is no accident that these points were also eliminated from the comparisons made in the DMA Triangulation Report.

The horizontal bias (most probable error) detected closely approximates the differences between the DMA and PSI Surveys. Consequently, the TEC Coordinates were shifted by the amount of the difference between the DMA and PSI Coordinates. The comparison of these shifted coordinates to the DMA Coordinates shows only a slight bias (a circular error of .08 meter) and standard deviations of .36 meter in easting and .23 meter in northing. From these values a CPE of .35 meter was calculated. Thus, 50 percent of the well defined points within the block will be within .35 meter of their true locations relative to other points in the block and virtually all points (99.78%) will be within 1.04 meters. With respect to the DMA Control, 50 percent will be within .43 meter and virtually all within 1.12 meter. Given the nature of the test points this is a very good match. At the three paneled ground points established by PSI, the match is .16 meter (circular) or less. Two of the points with larger differences, 69017 and 69022, were omitted in the DMA Triangulation Report indicating that DMA also had trouble with them. For three others, DMA also showed large differences, though not necessarily in the same coordinate.

The vertical bias has not been explained. TTL members used equations provided by TEC's Geodetic Applications Division to check some of the camera station elevations used in the triangulation. No significant differences were found. The source of error may be in converting the elevation from aircraft antenna to camera lens, although the amount of error seems rather large for this explanation. Note that the vertical bias decreased to less than one-third meter after the ground control was incorporated into the triangulation. The DMA Triangulation Report indicated vertical biases of approximately the same magnitude.

10.3 Results Summary

Tables F-3 through F-7, Annex F, created by TTL, show the point-by-point comparisons for Cases I through IV and SIMNET S1000, respectively. Note that, except for Case I, not all the test points were found in each data set. The most probable error, maximum error, minimum error, and standard deviations are computed for each case, and the values are consistent with the plots of the data in Figures F-1 through F-15, Annex F. Based on analysis of the statistics in the tables, the GDE Feature Data is the most accurate, Case IV is the next most accurate, Cases II and III are roughly equivalent, and S1000 is lowest in accuracy.

10.3.1 Control Bias

The summary tables show a bias (the most probable error) in every data set. Originally, the comparisons for the DMA Data were made against TEC coordinates offset to fit the DMA GPS Control. TTL later learned that Cases II and III were not made from that control. Then a plot of the relative positions of the control solutions (Figure 7) showed that Case IV also fit the TEC (PSI) Control better than the DMA Control. This is despite statements in the triangulation report that the camera stations had been adjusted to fit the DMA Control. One possible explanation for this anomaly is that the longitude offset may have been applied with the wrong sign. After this situation came to light, all comparisons were made to raw TEC Coordinates. Converted to a circular error, the biases are .57 meter for Case I, .86 meter for Case II, 1.04 meters for Case III, .63 meter for Case IV, and .86 meter for S1000. These can be considered as differences in the control solutions for the photogrammetric data sets from which the data were extracted. The remaining errors are random errors in compiling the features. Once the biases were removed, relative accuracies could be determined.
Figure 7 Relative Positions of Control

10.3.2 Error Estimation

For each case, an approximate Circular Probable Error (CPE) was computed from the standard deviations in easting and northing. The relative error of 50 percent of the well-defined points within the data set should not exceed this value. From this value, the 3.5-sigma circular error, which includes 99.78 percent of all points, was computed. To determine absolute error, the most probable error was added to the relative values. The figures found for the cases are shown below.
 
           Relative                   Absolute
           CPE (50%)   99.78%        CPE (50%)   99.78%
Case I     1.82 m      5.41 m        2.39 m      5.98 m
Case II    4.40 m      13.09 m       5.26 m      13.95 m
Case III   4.53 m      13.47 m       5.57 m      14.52 m
Case IV    3.11 m      9.24 m        3.74 m      9.87 m
S1000      3.46 m      10.27 m       4.32 m      11.13 m
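The tabulated values follow a standard circular-error model: CPE is approximated from the component standard deviations as 0.5887(σE + σN), the 99.78 percent value is the 3.5-sigma circular error (since CPE corresponds to about 1.1774 sigma, this is roughly 2.97 times CPE), and the absolute figures add the most probable error (the control bias) to the relative ones. A sketch of these relationships, checked against the Case I row above; the formulas are standard circular-error approximations rather than the report's own worked computation:

```python
def cpe(sigma_e: float, sigma_n: float) -> float:
    """Approximate circular probable error from component standard deviations."""
    return 0.5887 * (sigma_e + sigma_n)

def circular_99_78(cpe_value: float) -> float:
    """3.5-sigma circular error; CPE corresponds to 1.1774 sigma."""
    return cpe_value * 3.5 / 1.1774

# Photogrammetric block check (Section 10.2): sigmas of .36 m and .23 m give ~.35 m CPE.
block_cpe = cpe(0.36, 0.23)

# Case I figures: relative CPE 1.82 m, control bias .57 m.
rel_cpe = 1.82
bias = 0.57
rel_99 = circular_99_78(rel_cpe)   # ~5.41 m, matching the table
abs_cpe = rel_cpe + bias           # ~2.39 m
abs_99 = rel_99 + bias             # ~5.98 m
```

For a circular normal distribution, P(r < 3.5 sigma) = 1 - exp(-3.5**2 / 2) ≈ 0.9978, which is where the 99.78 percent figure comes from.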

10.3.3 Azimuth Checks

Azimuth checks were made on the six longest buildings in the MOUT Area. Table 5 shows azimuths computed from the GDE, DMA, and S1000 feature data for the long side of the building extending from the designated test point compared to azimuths of the same building sides measured in the TEC Photogrammetric Models. This is not a statistically significant sample and one must bear in mind that the baselines are so short that small errors in positioning have a significant effect on the accuracy of the computed or measured azimuths.

The largest errors are in Case II, which is not surprising given that the imagery was smaller in scale and the buildings were much smaller. The most probable errors indicate a counterclockwise rotation of between .23 and .44 degree between Cases I, III, and IV and the TEC Photogrammetric Block. The Case II data seems to be rotated by nearly 2 degrees in the other direction. The SIMNET S1000 azimuths show little overall bias from TEC. Azimuths over a longer baseline, from Point 10 to Point 42, were compared to check the validity of these conclusions. For this baseline, the TEC, Case I, and Case IV azimuths were essentially identical. The Case III azimuth still showed a counterclockwise rotation, but by about half as much. The Case II azimuth now showed a counterclockwise rotation of about one degree.

Table 5 Azimuth Angle Differences
 
Point  Description        TEC    Case I  Diff.  Case II  Diff.  Case III  Diff.  Case IV  Diff.  SIMNET  Diff.
1      NW Bld H Corner    74.35  73.72   -0.63  76.68    2.33   72.59     -1.76  75.06    0.71   73.66   -0.69
2      NW Bld L Corner    97.87  98.04    0.17  100.92   3.05   96.60     -1.27  96.98   -0.89   97.21   -0.66
3      SE Bld A Corner     8.16   7.30   -0.86    8.78   0.62    8.19      0.03   7.92   -0.24    7.54   -0.62
44     SW Bld G Corner    78.21  77.60   -0.61   74.61  -3.60   77.11     -1.10  76.73   -1.48   77.62   -0.59
46     NW Bld I Corner    71.05  71.50    0.45   79.64   8.59   72.86      1.81  71.74    0.69   73.21    2.16
47     NW Bld E Corner    71.75  71.68   -0.07   71.97   0.22   71.38     -0.37  71.60   -0.15   71.79    0.04
Prob. Value                      -0.26           1.87           -0.44            -0.23            -0.06
MIN ERR                          -0.07           0.22            0.03            -0.15             0.04
MAX ERR                          -0.86           8.59            1.81            -1.48             2.16
STD DEV                           0.47           3.67            1.17             0.79             1.02
Overall, the Case I azimuths have the best relative consistency (about two-thirds are within .47 degree) and Case IV is the second best (about two-thirds within .79 degree). Redoing the buildings appears to have improved the Case III azimuths significantly over Case II.

The S1000 Azimuths, individually, do not track their source (Case IV) too well, but have an internal consistency that is not much worse.
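Azimuths of this kind are grid bearings computed from the coordinate differences between a building corner and the far end of its long side. A minimal sketch of the computation; the coordinates are hypothetical, and note (as the text warns) that on a short baseline a small position error produces a large azimuth error:

```python
import math

def grid_azimuth(e1: float, n1: float, e2: float, n2: float) -> float:
    """Grid azimuth in degrees, clockwise from grid north, from point 1 to point 2."""
    return math.degrees(math.atan2(e2 - e1, n2 - n1)) % 360.0

# Hypothetical building side: NW corner at (5000, 4000), far corner 40 m east
# and 12 m north of it.
az = grid_azimuth(5000.0, 4000.0, 5040.0, 4012.0)  # roughly 73.3 degrees
```

Differencing the azimuth computed from a feature data set against the one measured photogrammetrically for the same side yields the Diff. columns of Table 5; a consistent nonzero mean difference across buildings indicates a rotation between the two data sets.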

10.4 Analysis of Results

The results of this test show that the photogrammetric method used has the inherent accuracy to provide accurate feature data. The most probable errors, attributed to differences in the fit of the photogrammetric data sets to control, are all rather small. Most of the error resulted from "random" errors in delineating the features. The accuracy of the data sets could have been improved by throwing out some of the points with the largest errors, but this could be seen as rewarding sloppy work. In fact, four of the original 53 test points were omitted in the evaluation. It may be instructive to review these points and some of the largest errors in all the data sets. The comments below can be understood better by referring to the evaluation plots of the individual points, which are included in Annex H. Note that these plots are over an orthophoto constructed at TEC based on PSI Control. For many of the buildings, these plots include DMA GPS positions provided by DCAC. The locations of these points with respect to the TEC points are generally consistent with the original control bias.

10.4.1 Discarded Points

Point 27, the corner of a clearing beside the road, was discarded; some of the compilers included the road as part of the clearing and some did not. The eastern edge of the clearing is approximately perpendicular to the road but was not delineated consistently enough to serve as a test point either. Point 32, the point of a nearly right-angle change in direction of the shore of a small lake and its bounding trees, falls off the orthophoto. It was not delineated by any of the compilers; clearly each generalized the boundary in different ways. Point 33, a road fork, was delineated very differently by GDE and DMA. Because this point, too, falls off the orthophoto, the stereo model was rechecked to determine where the side trail runs; no indication of a trail on the alignment shown by DMA could be found. Point 49, a lone tree, is actually two trees together. DMA chose to delineate only the easternmost tree, while GDE placed the lone-tree symbol between the two.

10.4.2 Case I Errors

GDE provided the most accurate, though not necessarily the most complete, feature set, but it could have been better had some of the largest errors been avoided.
  1. The maximum easting error in this data set occurred on Point 37, a lone tree. TEC found this to be one of the more well-defined trees and the Case IV compilers positioned it much more accurately. The maximum northing error was on Point 9, a road fork. From the diagram of the point, it appears that the roads had to be intersected by TEC. The roads meet at a shallow angle making it difficult to position accurately. Nearby, there is a fenced installation with a small building and a tower of some sort which would have provided excellent test points, but none of the compilers included it.
  2. Some of the other largest errors in this case were in compiling hard stand corners (Points 13, 14, and 15) which were obscured by trees and shadows. Enough sections of the edges were visible to have allowed determining their alignments well enough to position the corners much better.
  3. Point 30 might have proved better if GDE had connected the trails at the intersection. TEC had to connect the end nodes of the two cross trails with a straight line. The large error on Point 26 could have been avoided by taking more care in delineating the timber boundaries. Several other of the larger errors were on lone trees which were definitely difficult to see in the stereo models.

10.4.3 Case II Errors

That Case II was among the least accurate of the feature sets is not surprising, because it was compiled from lower resolution imagery. Even so, the data is not very much less accurate than some of the data compiled from the large scale imagery. However, many of the smaller features were not included, and the buildings were not delineated very accurately. Again, the accuracy results were degraded by very large errors on a small number of points.
  1. The largest northing error in this set is on Point 5, a trail fork or trail end. On the large scale imagery, there does not seem to be any indication of a trail extending to the point included in Case II. The largest easting error was on Point 14, a hard stand corner. Some of the other larger errors were on similar corners. While TEC has not seen the imagery used, the comments concerning these points in the GDE analysis are believed to be applicable here, too.
  2. Large errors were also made in delineating Point 26, a vegetation/lake or wetlands boundary. This, as in the previous case, seems to reflect a lack of care in delineating the boundary.
  3. Other significant errors were on buildings, which were probably quite small on the imagery, and road intersections (Points 7, 8, 10, and 34). The latter may have been caused by trees obscuring the side roads, although the road alignments appear different from the other cases.

10.4.4 Case III Errors

Case III is an augmentation of Case II (standard ITD) using large scale imagery. Points 6-8, 11, 13-15, 22, 30, and 34 were retained from Case II. Features which created points 9, 12, 16-18, 20, 21, 23, 24, 26, 28, 29, 31, 45, 50, and 51 were added. Features containing many of the remaining points were redone. That this feature set had a lower positional accuracy than Case II is puzzling. The systematic error (control bias) in this set is also larger than and on a different azimuth from Case II, raising questions about the techniques used to register the ITD Features and the higher resolution imagery.
  1. The maximum easting error in this set was on a point (16, a clearing corner) on one of the added features. This particular clearing was determined to be wetlands by ground truth verification. The boundaries shown do not seem close to the edge of the clearing except in the northernmost section of the plate; the delineation was distorted by being connected to an edge from Case II. The largest northing error was also on an added feature (26, a vegetation corner). Though not readily discernible on the ortho, there is a fairly clear right angle in the tree boundary at this point which none of the compilers delineated properly.
  2. Except for Points 9 and 10, road intersections from Case II, which were not very accurate, were retained. The intersection at Point 10 was redone much more accurately, and the intersection at Point 9 was added fairly well. All the buildings (Points 1-4, 36, 39-44, and 46-48) were redone without, in general, improving the accuracy of their delineation. Also retained were some of the least accurately positioned hard stand corners from Case II (11 and 13-15).
  3. Of the other features added, the one that created the road fork designated Point 28 was the least accurate, because the trail added was joined to a misaligned trail from Case II. A fairly large error at the drain fork (Point 24) is due to poor alignment of the added drains; a similar misalignment produces a significant error at Point 31. Some of the other larger errors were caused by tying to Case II features which were misaligned. This compiler, like the others, had understandable difficulty with the lone trees.

10.4.5 Case IV Errors

Case IV was compiled from the same large scale imagery used in Case I. Again, it is instructive to look at some of the larger errors in this data set.
  1. The largest error is a northing error for Point 5, a trail end. This trail was dropped about 16 meters short of the trail junction delineated in Case I. The largest easting error was on a hard stand corner, Point 14; like the others, this compiler did not use the edges of the hard stand to position the corner correctly.
  2. The next most significant error was on Point 9, a road fork. The comments in paragraph 10.4.2, item 1, apply here as well; this is a point that perhaps should have been omitted. Another large error is seen at Point 34, a T intersection. The side road in this data set is drawn as if it curved into the other road.
  3. Other relatively large errors were experienced in delineating hard stand corners. The size of the error at Point 6, the road and railroad crossing, is surprising. A similar error at Point 26, a vegetation corner, is the lowest for this point in any of the cases; even so, the delineation of the timber edge is no better. Most of the other large errors were in delineating lone trees, which was a problem for all compilers.

10.4.6 S1000 Errors

The errors in this data set are not much larger than those of Case IV, from which the S1000 data was derived. The individual points are fairly consistent between the two data sets. Some of the largest discrepancies occurred at Points 6, 7, 9, 12, 13, 23, 40, and 48. These discrepancies are thought to have occurred when the features were generalized to fit the TINed terrain skin.

10.5 Summary

This evaluation demonstrates that the methods used are capable of providing feature data with absolute accuracies of about 6 to 15 meters. The accuracies will be degraded somewhat in converting to a SIMNET S1000 database. Clearly, the major portion of these errors is in delineating the features. Although TEC believes that results could have been better in this particular case, it should be borne in mind that these accuracies are extremely good for terrain data in general.

Compilers using care and perhaps improved techniques could lower the range of error considerably. Evidence that this is possible was provided by recalculating the statistics based on the most accurate 30 points in each case, except Case II, which included only 28 of the test points. The accuracies estimated from these limited points are shown below. The control biases also changed in both magnitude and direction, decreasing to .17 meter for Case I and increasing to 1.29 meters and .84 meter for Cases III and IV, respectively. This provides further indication that plotting accuracy is the major source of error.
 

           Relative                   Absolute
           CPE (50%)   99.78%        CPE (50%)   99.78%
Case I     .62 m       1.86 m        .79 m       2.02 m
Case III   1.58 m      4.69 m        2.87 m      5.98 m
Case IV    1.08 m      3.20 m        1.92 m      4.04 m
One can see from these results that accuracies in the 2 to 3 meter range may be possible. Again remember that these data sets were compiled from high resolution imagery which meant trading additional production time for higher resolution and accuracy.

10.6 Lessons Learned

The accuracy of the photogrammetric data sets could have been evaluated much more rigorously had GPS test points been established and paneled prior to flying the imagery; there would then have been little doubt about where they were located. At least some feature test points could also have been positioned by GPS at the same time. GPS surveys are also subject to error, but the existence of such ground coordinates would have added to the reliability of the evaluation.

Imagery to be used for generation of data to be evaluated should be flown at the ideal time. The Fort Benning area, with its dark coniferous tree cover and light sandy soil, is definitely a difficult area to image. The situation was made worse when the imagery was flown early in the morning, adding many large shadows to the already high-contrast imagery. Larger scale imagery flown later in the year and nearer the middle of the day is much clearer and allows much more accurate positioning, even of lone trees. Incidentally, use of a digital photogrammetric system for data compilation in difficult areas such as this offers the advantage of being able to adjust the image photometry in any particular situation to improve positioning accuracy.

Standard coverage naming and feature classification conventions should be established in advance and given to all compilers. Also, the projection and datum should be standardized in advance to avoid errors that might occur in converting from one to another.

Some changes in compilation techniques, and more care in compiling the features, should provide improved accuracies. Compilers could also be given some idea in advance of how the comparisons will be done.

The technique of overlaying feature data on orthophotos used for this project was very effective in comparing the data sets. This technique may also have potential for quality control.