Warfighting and conflict management in the 21st century will require improved concepts and applications of technology in the areas of surveillance and reconnaissance. As defined by the JCS, surveillance is the "systematic observation of aerospace, surface, or subsurface areas, places, persons or things by visual, electronic, photographic or other means."1 Similarly, reconnaissance refers to "a mission undertaken to obtain, by visual observation or other detection methods, information about the activities and resources of an enemy or potential enemy."2 Both surveillance and reconnaissance are critical to the US security objectives of maintaining national and regional stability and preventing unwanted aggression around the world. As the US moves into the 21st century in a world of diverse dangers and threats marked by the proliferation of weapons of mass destruction, unconventional warfare, and sophisticated enemy countermeasures, surveillance and reconnaissance are not merely important but essential for achieving the "high ground" in information dominance, conflict management, and warfighting.
Key to achieving information dominance will be the gradual evolution of technology, i.e., sensor development, computation power, and miniaturization, to provide a continuous, real-time picture of the battle space to warfighters and commanders at all levels. Advances in surveillance and reconnaissance, particularly real-time "sensor to shooter" links supporting "one shot, one kill" engagements, will be a necessity if future conflicts are to be supported by a society conditioned to "quick wars" with high operational tempos, minimal casualties, and low collateral damage.
To meet the rigorous information demands of the warfighter, commander and National Command Authority (NCA) in 2020, a system and architecture must exist to provide a high resolution "picture" of objects in space, in the air, on the surface, and below the surface--be they concealed, mobile or stationary, animate or inanimate. The true challenge is not only to collect information on objects with much greater fidelity than is possible today, but also to process the information orders of magnitude faster and disseminate it instantly in the desired format.
The Key to the Concept: Structural Sensory Signatures
The critical concept of this paper is to develop an "omni-sensorial" capability that includes all forms of inputs from the sensory continuum (see Figure 1). This new term seeks to expand our present exploration of the electromagnetic spectrum to encompass the "exotic" sensing technologies proposed in this paper. This system will collect and fuse data from all sensory inputs--optical, olfactory, gustatory, infrared, multispectral, tactile, acoustical, laser radar, millimeter wave radar, x-ray, DNA patterns, HUMINT, etc.--to identify objects (buildings, aircraft in flight, people, etc.) by comparing their structural sensory signatures (SSSs) against a pre-loaded data base to find matches or detect changes in the structure. The identification aspect has obvious military advantages in the indications and warning, target identification and classification, and combat assessment processes.
An example of how this technique might actually develop involves establishing a sensory baseline for certain specific objects and structures. Using a known source, such as an aircraft or a building full of nuclear or C4I equipment, the system would optically scan the object from all angles; smell it; listen to it; feel it; measure its density, infrared emissions, light emissions, heat emissions, sound emissions, propulsion emissions, air displacement patterns in the atmosphere, etc.; and synthesize that information into a sensory signature of that structure. This signature would then be compared against the sensory signature patterns of target subjects such as Scud launchers or even individual people. A simple but effective example of a sensory signature was discovered by the Soviets at the height of the cold war: the neutrons given off by nuclear warheads in our weapons storage areas interacted with the sodium arc lights surrounding the area, creating a detectable effect. This simple discovery allowed them to determine whether a storage area contained a nuclear warhead.4
This "imaging" could be carried one step further by techniques such as non-invasive magnetic source imaging and magnetic resonance imaging (MRI), which are now used in neurosurgical applications for creating an image of the actual internal construction of the subject.5 In fact, the numerous non-intrusive medical procedures now used on the human body might be extrapolated to extend to "long-range" sensing. Procedures similar to MRI and the use of nuclear medicine to look inside the body for anomalies could be used on targets at a distance. The nuclear materials for these "structural MRIs" could be delivered by PGMs or drones and introduced into the ventilation system of a target building. The material would circulate throughout the structure and eventually be "sensed" remotely to display the internal workings of the structure.
Another extension of the concept of distance sensing would be the tracking of mitochondrial DNA found in human bones. DNA technology is currently used by the US Army's Central Identification Laboratory to identify war remains.6 If this technique could be applied at a distance, tracking individual human beings becomes conceivable. When extrapolating these techniques from medicine, the possibilities are endless. Detection of vapors and effluent liquids associated with many manufacturing processes could be accomplished by a mass spectrometer that ionizes samples at ambient pressure using an efficient corona discharge.7 These techniques are currently found in state-of-the-art environmental monitoring systems. There are also spectrometers that can analyze chemical samples through glass vials.8 Applying this technology from a distance and collating all the data will be the follow-on third- and fourth-order applications of this concept.
Another technology that would aid the identification of airborne subjects is NASA's new Airborne In Situ Wind Shear Detection Algorithm.9 Although designed to detect turbulence, wind shear, and microburst conditions, this technology could be extrapolated to detect aircraft flights through a given area (perhaps using some sort of detection net for national or point defense). This, coupled with disturbances in the earth's magnetic field, vortex detection, tracking of CO2 vapor trails, and identification of vibration and noise signatures, would create a sensory signature that could be compared against a data base for classification (see Figure 2).
The overall system would accumulate sensing data from a variety of sources such as drone- or cruise missile-delivered sensor darts embedded in structures, structural listening devices, space-based multispectral sensing, weather balloons, probes, airborne sound buoys, unmanned aerial vehicles (UAVs), platforms such as AWACS and JSTARS, land radar, ground sensors, ships, submarines, surface and subsurface sound surveillance systems, human sources, chemical and biological information, etc. The variety of sensing sources would serve several functions. First, with many sources of information coming in on a particular target, spurious inputs could be "kicked out" of the system, or given a lesser reliability value, much as an aircraft equipped with a triple Inertial Navigation System resolves a discrepancy among its separate inputs. Second, a system that handles a variety of inputs and does not rely on a few key inputs is far harder to defeat. Finally, inputs from other nations and the commercial sector could be used as additional elements of data. Just as the current Civil Reserve Air Fleet (CRAF) system requires certain modifications for commercial aircraft to be used for military purposes in times of national emergency, commercial satellites might contain subsystems designed to support the system envisioned above. In such a redundant system, if some data were not received, the loss would not have a significant debilitating impact on the system as a whole.
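The "kick out" voting described above can be sketched in a few lines. This is a minimal illustration, assuming redundant scalar readings of the same quantity; the sensor values and the tolerance threshold are invented for the example, and a real system would use per-sensor error models rather than a fixed cutoff.

```python
from statistics import median

def fuse_readings(readings, tolerance):
    """Fuse redundant readings of the same quantity, discarding any input
    that disagrees with the consensus -- analogous to a triple-INS
    installation voting out a discrepant unit."""
    consensus = median(readings)
    trusted = [r for r in readings if abs(r - consensus) <= tolerance]
    return sum(trusted) / len(trusted)

# Three sensors agree; the fourth is spurious and gets "kicked out":
print(fuse_readings([10.1, 9.9, 10.0, 42.0], tolerance=1.0))  # ~10.0
```

Assigning a lesser reliability value, rather than rejecting outright, would simply replace the hard threshold with a weight that decays with distance from the consensus.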
To fuse and compare the data, the processors could take advantage of common neural training regimens and pattern recognition tools to sort data received from each of the sensor platforms. Some of the data fusion techniques we envision would require continued advancement in the world of data processing. Data processing capability is growing rapidly, as noted by Dr. Gregory H. Canavan, Chief of Future Technology at Los Alamos National Laboratory:
Frequent overflights by numerous satellites adds the possibility of integrating the results of many observations to aid detection. That is computationally prohibitive today, requiring about 100 billion operations per second, which is a factor 10,000 greater than the compute rate of the Brilliant Pebble and about a factor of 1,000 greater than that of current computers. However, for the last three decades, computer speeds have doubled about every two years. At that rate, a factor of 1,000 increase in rate would only take about 20 years, so that a capability to detect and track trucks, tanks, and planes from space could become available as early as 2015.10
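Canavan's arithmetic is easy to verify: capability that doubles every two years compounds to a factor of 2^(t/2) after t years, so twenty years yields 2^10 = 1,024, roughly the factor of 1,000 he cites.

```python
def growth_factor(years, doubling_period=2.0):
    """Compound growth when capability doubles every doubling_period years."""
    return 2.0 ** (years / doubling_period)

# Twenty years of doubling every two years: 2**10 = 1024, i.e. ~1,000x.
print(growth_factor(20))
# Closing the full factor-of-10,000 gap would take roughly 26-27 years.
print(growth_factor(26.6))
```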
Dr. Canavan also suggested that development time could be reduced even further by using techniques such as parallel computing and using external inputs to reduce required computation rates. The point is that conservatively forecast advancements in computer technology will permit the gathering and synthesis of vast amounts of data, enabling significant enhancements in remote sensing and data fusion.

Using Space
As envisioned, this concept would be supported by systems in all operational media--sea, ground, subterranean, air and space. However, space will play the critical role in this conceptual architecture. Although the system would rely on data from many sources other than space, there are some definite advantages in using space as a primary source of data for sensing and fusing. Space allows prompt wide-area coverage without the constraints imposed by terrain, weather, or political boundaries. It can provide worldwide or localized support to military operations by providing timely information for such functions as target development, mission planning, combat assessment, search and rescue (SAR), and special forces operations.
The overall concept can be divided into three parts: the sensing phase, using the ground, sea, air, and space based sensors; the data fusion phase, which takes the raw data and produces information; and the dissemination phase, which delivers the information to the user. The dissemination portion is discussed in the SPACECAST 2020 white paper entitled "Global View."

Sensing
In approaching the whole concept of sensing, this paper uses the five human senses as a metaphor. Although this is not a precise representation, it at least provides a convenient beginning point for our investigation. For instance, in the living world, human sensing capabilities are often inferior to the sensing capabilities of other forms of life, e.g., the dog's sense of smell or the eagle's formidable eyesight. This concept revolves around exploring the technological limits of sensing.
There have been tremendous strides made in the sensing arena. However, some areas are more fully developed than others. There have, for example, been more advances in the optical or visual areas than in the olfactory (smell) area. This paper not only examines the more traditional areas of reconnaissance, such as multi-spectral technology, but also discusses interesting developments in some rather unique areas. An exciting element of this paper is the discovery of research being conducted in the commercial realm, where very specific technologies for specific tasks are needed, and where these techniques may not yet have been fully investigated for military uses.
The sensing areas examined here are: "visual" (to include all forms of imaging such as infrared, radar, hyperspectral, etc.), acoustic, olfactory (smell), gustatory (taste) and finally tactile (touch). There are two keys to this metaphoric approach to sensing. First, it unbinds the traditional electromagnetic spectrum orientation to sensing. Second, it provides a way of showing how all these sensors will be fused to allow fast, accurate decision making such as that provided by the human brain.

Visual Sensing and Beyond
As mentioned above, remote image sensing has received a tremendous amount of attention in both the military and civilian communities. The intention of this paper is not to reproduce the vast amount of information on this subject, but to briefly describe the current state of the art and highlight some of the more innovative concepts from which we can step forward into the future. We will not discuss US national imaging capabilities other than to emphasize that they will need to be replaced or upgraded to meet the needs of the nation in 2020. The technologies and applications discussed below pave the way for these improvements.

Multispectral Imaging
Multispectral imaging (MSI) provides spatial and spectral information. MSI is currently the most widely used method of imaging spectrometry. The US-developed LANDSAT, the French SPOT, and the Russian Almaz are all examples of civil/commercial multispectral satellite systems. These systems operate in multiple bands, can provide ground resolution on the order of ten meters, and support multiple applications. Military applications of multispectral imaging abound. The US Army is busily incorporating MSI into its geographic information systems for intelligence preparation of the battlefield or "terrain categorization" (TERCATS). The Navy and Marines use MSI for near-shore bathymetry (determining water depths of uncharted waterways) to support amphibious landings and ship navigation. MSI data can be used to help determine "go, no go and slow go" areas for enemy and friendly ground movements. This information can be especially useful in tracking relocatable targets such as mobile short range and intermediate range ballistic missile launchers by eliminating untrafficable areas. Using MSI data in the radar, Infra-Red (IR) and optical bands, identification of environmental damage caused by combat (or natural disasters) can be more quickly discerned. For example, LANDSAT imagery helped determine the extent of damage caused by Iraqi-set oil fires in Kuwait during the Gulf War.11
Although MSI has a variety of uses and many advantages, this sensing technique results in a decrease of both bandwidth and resolution relative to conventional spectrometry. Additionally, multispectral systems cannot produce contiguous spectral and spatial information. These disadvantages must be overcome to meet the surveillance and reconnaissance needs of the warfighter and commander of 2020.

Hyperspectral Sensing
One promising technology for overcoming these shortfalls is hyperspectral sensing. Hyperspectral devices can produce thousands of contiguous spectral and spatial elements of information simultaneously. This allows a greater number of vector elements to be used for such things as space object identification, resulting in higher certainty of object identification. Although hyperspectral models do exist, none have been optimized for missions from space, nor integrated with the current electro-optical, infra-red, and radar imaging technologies.
This same technology can be equally effective for ground target identification. Hyperspectral sensing can use all portions of the spectrum to scan a ground target or object, collect bits of information from each band, and fuse the information to develop a signature of the target or object. Since only a small amount of information may be available in various bands of the spectrum (some bands may not produce any information), the process of fusing the information and comparing it to other intelligence and information sources becomes crucial.
There are several warfighting needs for a sensor providing higher fidelity and increased resolution to support, for example, USSPACECOM and its components' missions of space control, space support, and force enhancement. In addition to the aforementioned examples of deep space object identification (either from ground or a space platform), identification of trace atmospheric elements, and certain target identification applications, there are also requirements in the following areas: debris fingerprints, damage assessment, space object anomaly identification (ascertaining the health of deep space satellites), spacecraft interaction with ambient environment, terrestrial topography and condition, and environmental treaty verification.
There are several enabling technologies involved in surface, air, and space object identification. These include, but are not limited to: remote calibration (ground-to-space or ground-to-ground), extreme-sensitivity detectors, algorithms for very low signal-to-noise ratio conditions, and multiple-frequency laser imaging and ranging devices (LIDARs) enabling precise frequency control and stability.
Several technologies currently being developed can be integrated into hyperspectral sensing to further exploit ground and space object identification. Two promising technologies are remote ultra low light level imaging (RULLI) and fractal image processing. RULLI is a Department of Energy initiative to develop an advanced technology for remote imaging using illumination as faint as starlight.12 It encompasses leading-edge technology that combines high spatial resolution with high timing resolution. Long exposures from moving platforms become possible because high-speed image processing techniques can be used to de-blur the image in software. RULLI systems can be fielded on surface-based, airborne, or space platforms, and when combined with hyperspectral sensing, can support continuous processing of contiguous spatial images using only the light from stars. This technology can be applied to tactical and strategic reconnaissance, imaging of biological specimens, detection of low-level radiation sources via atmospheric fluorescence, astronomical photography in the x-ray, UV, and optical bands, and detection of space debris. RULLI depends on a new detector--the crossed delay-line photon counter--which provides time and spatial information for each detected photon. However, by the end of FY-96, all of these technologies should be sufficiently developed to facilitate designing an operational system.
The task of finding mobile surface vehicles requires rapid image processing. Automated pre-processing of images to identify potential target areas can drastically reduce the scope of human processing and provide the warfighter with more timely target information. Hyperspectral sensing can aid in quickly processing a large number of these images on board the sensing satellite, identifying those few regions with a high probability of containing targets, and downlinking data subsets to analysts for visual processing. The topological features of natural terrain (sand dunes, ocean surface, forests) are characterized by irregular shapes, whereas man-made objects (missiles, vessels, vehicles) contain regular features with sharp edges and straight lines. A numerical quantity called the fractal dimension, "D," can be computed from an image of natural terrain. If a man-made object is superimposed onto the natural terrain background, the value of "D" changes noticeably. Therefore, an image could be characterized as completely natural or as containing a man-made object by obtaining the value of "D." By placing this processing capability on board a satellite, the pre-processed imagery could be fused with other sensory information or simply downlinked to national and theater-level analysts. Although fractal-based detection can be defeated by cloud or smoke cover or by camouflage, when fused with information from other sensory sources it can help the analyst or the processing software identify ground-based signatures.13
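A minimal sketch of the fractal dimension test follows, using box counting on a set of image points. All values here are illustrative; an operational system would compute "D" over full imagery, but the principle -- a regular, man-made edge yields a markedly different "D" than irregular terrain -- is the same.

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the fractal dimension D of a set of 2-D points by box
    counting: count occupied boxes N(s) at several box sizes s, then fit
    log N(s) ~ -D log s by least squares."""
    logs, logn = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(sizes)
    mean_x, mean_y = sum(logs) / n, sum(logn) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(logs, logn))
             / sum((x - mean_x) ** 2 for x in logs))
    return -slope  # slope of log N vs. log s is -D

# A straight "man-made" edge should come out near D = 1:
line = [(i * 0.01, i * 0.01) for i in range(1000)]
print(box_count_dimension(line, [0.05, 0.1, 0.2, 0.4]))
```

Rough natural textures measured the same way trend toward higher, non-integer values of D, which is what makes the superimposed man-made object stand out.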
Hyperspectral sensing offers a plethora of opportunities for deep space and ground object identification and characterization to support the warfighter's space control and surveillance mission, remote sensing of atmospheric constituents and trace chemicals, and enhanced target identification. Collecting and fusing pieces of information from each band within the spectrum can provide high-fidelity images of ground or space based signatures. Moreover, when combined with fused data from other sensory and non-sensory sources, it can provide target identification that no single surveillance system could ever provide. The result: the warfighter has a much improved picture of the battle space--anywhere, anytime.

Acoustic Sensing
When matter within the atmosphere moves, it displaces molecules and sends out vibrations or waves of air pressure that are often too weak for our skin to feel. Waves of air pressure detected by the ears are called sound waves. The brain can tell what kind of sound has been heard from the way the hairs in the inner ear vibrate. Ears convert pressure waves passing through the air into electrochemical signals which the brain registers as a sound. This process is called acoustic sensing.
Electronically-based acoustic sensing is not very old. Beginning with the development of radar prior to W.W.II, applications for acoustic sensing have continued to grow, and now include underwater acoustic sensing known as sonar, ground and subterranean-based seismic sensing, and the monitoring of communications and electronic signals from aerospace. Electromagnetic sensing operates in the lower end of the electromagnetic spectrum, covering a range from 30 hertz to 300 gigahertz. Acoustic sensors have been fielded in various media, including surface, subsurface, air, and space. Since the advent of radar, most acoustic sensing applications have been pioneered in the defense sector. Space-based acoustic sensing developments in the Russian defense sector have recently become public.
According to The Soviet Year in Space 1990:
Whereas photographic reconnaissance satellites collect strategic and tactical data in the visible portion of the electromagnetic spectrum, ELINT (Russian defense electronic intelligence) satellites concentrate on the longer wavelengths in the radio and radar regions... Most Soviet ELINT satellites orbit the earth at altitudes of 400 to 850 kilometers, patiently listening to the tell-tale electromagnetic emanations of ground-based radars and communications traffic.14
It is believed the Russians use this space-based capability to monitor tactical order of battle changes and strategic defense posture, as well as treaty compliance.
On the ground, the United States used different kinds of acoustic sensors during the Vietnam War. The first was an acoustic sensor derived from the sonobuoy developed by the US Navy to detect submarines. The USAF version used a battery-operated microphone instead of a hydrophone to detect trucks or even eavesdrop on conversations between enemy troops. The air-delivered seismic detection (ADSID) device was the most widely used sensor. It detected ground vibrations by trucks, bulldozers, and the occasional tank, though it could not differentiate with much accuracy between vibrations made by a bulldozer and a tank.15
In the civil sector, there are numerous examples of applications of acoustic sensing. In the United States, acoustic sensors that operate in the 800-900 Hz range are now being developed to help detect insects. It is conceivable that these low-volume acoustic sensors could be further refined, either to work hand-in-hand with other spectral sensors or by themselves, to classify insects and other animals based on noise characteristics.16
Sandia National Laboratory in New Mexico has made progress in using acoustic sensors to detect the presence of chemicals in liquids and solids. In the non-laboratory world, these acoustic sensing devices could be used as real-time environmental monitors to detect contamination either in ground water or soil, and have both civil (e.g., natural disaster assessment) and military (e.g., combat assessment) applications.17
An additional development in the area of acoustic sensing revolves around seismic tomography to "image" surface and sub-surface features. Seismic energy travels as an elastic wave that is both reflected from and penetrates through the sea floor and structure beneath--as if we could see the skin covering our face and the skeletal structure beneath at the same time. Energy transmitted through the crust can also be used to construct an image.18
In summary, acoustic sensing offers great potential for helping the warfighter, commander, and war planner of the 21st century solve the problems of target identification and classification, combat assessment, target development, and mapping. For acoustic sensing from aerospace, a primary challenge appears to be boosting noise signals through various media. Today, this is accomplished using bistatic and multistatic pulse systems. In the year 2020, assuming continued advances in interferometry, the attenuation of electromagnetic "sound" through space should be a challenge already overcome, thus permitting very robust integration of acoustic sensing with other remote sensing capabilities from aerospace.
A more serious 2020 challenge in defense-related acoustic sensing may come from enemy countermeasures. As operations and communications security improve, space-based acoustic sensing will become increasingly difficult. Containing emissions within a shielded cable, or better yet a fiber optic cable, makes passive listening virtually impossible. The challenge for countries with space-based acoustic programs is to develop collection techniques that overcome these technological advancements. In the year 2020, remote acoustic sensing from space and elsewhere will be a critical element for developing accurate structural signatures as well as for assessing activity levels within a target. New methodologies for passive and active sensing need to be developed and should be coupled with other types of remote sensing.

Olfactory Sensing
Although this sense is somewhat "exotic" today, since the mid-1980s there has been a resurgence of research into the sense of smell. Both military and civilian scientists have aimed their efforts at first identifying how the brain determines smell and then at how the process could be synthetically replicated. The results of these efforts are impressive. An electronic "sniffer" that can analyze odors needs two things: 1) the equivalent of a nose to do the smelling, and 2) the equivalent of a brain to interpret what the nose smelled. A British team employed arrays of gas sensors made of conductive polymers working at room temperature. An electrical current passes through each sensor. When odor emissions collide with the sensors, the current changes, responding uniquely to different gases. The next step was to synthesize the various currents into a meaningful pattern. Using a neural net (a group of interconnected microprocessors that simulate some basic functions of the brain), the patterns were identified. The neural net was able to learn from experience and did not need to know the exact chemistry of what it was smelling. It could recognize when patterns changed, giving it a unique ability to detect either new or removed substances.19
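The pattern matching step can be illustrated with a toy version of the sensor-array library. The odor names and current-change fingerprints below are invented for the example, and where the British team trained a neural net, this sketch substitutes simple nearest-neighbor matching, the most basic pattern recognizer.

```python
import math

# Hypothetical reference library: each odor's "fingerprint" is the vector
# of current changes across a four-sensor polymer array (made-up values).
LIBRARY = {
    "fuel vapor": (0.9, 0.1, 0.4, 0.2),
    "explosive":  (0.2, 0.8, 0.7, 0.1),
    "clean air":  (0.05, 0.05, 0.05, 0.05),
}

def classify_odor(reading):
    """Match a sensor-array reading to the nearest stored fingerprint
    by Euclidean distance."""
    return min(LIBRARY, key=lambda name: math.dist(reading, LIBRARY[name]))

print(classify_odor((0.85, 0.15, 0.35, 0.25)))  # fuel vapor
```

A trained neural net replaces the fixed library with learned weights, which is what lets the real system recognize pattern changes without knowing the underlying chemistry.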
Swedish scientists took this a major step further. Their development of a light-scanned semiconductor sensor shows great promise in the area of long range sensing. This sensor is coated with three different metals: platinum, palladium, and iridium. These elements are heated at one end to create a temperature gradient, which allows the sensor to respond differently to gases at every point along its surface. The sensor is read with a beam of light that generates an electrical current across the surface. The current, when fed into a computer, results in a unique image of each smell, which is compared to a data base to determine its origin.20
Despite these impressive findings, present technology requires the gases to come physically into contact with the sensor. The next step is to fuse the sensory capabilities into a sort of particle beam which, on coming into contact with the odors, would react in a measurable way. Similar to radar, beam segments would return to the processing source and the object from which the odors emanated would be identified. This process could be initiated from space, air, or land and would be fused with other remote sensing capabilities to build a more complete picture. Studies on laser reflection have demonstrated the ability to correct for errors induced by moving from the atmosphere to space. There is every reason to believe that the next couple of decades will produce similar capabilities for particle beams. The ability to fuse odor sensors within these beams and receive the reactions for processing may also be feasible in the prescribed time frame.

Gustatory Sensing
Another area that has not received a tremendous degree of attention is the sense of taste. In many ways, ideas concerning the sense of taste may sound more like the sense of smell. The distinction is that the sample tasted is part of (or attached to) a surface of some sort, whereas the sense of smell relies on airborne particles finding their way to receptors in the nose. The study of taste makes frequent reference to smell--this is probably due to similar mechanisms, in which the molecules in question come in contact with the receptor (be it a smell or a taste receptor).
Taste in and of itself will probably not be a prime means of identification. It can, however, be one of the discriminating bits of information that can aid in identifying ambiguous targets identified by other systems. It also provides another characteristic that must be masked or spoofed to truly camouflage a target. Taste could be used to detect silver paint that appears to be aluminum aircraft skin on a decoy. It could be used to "lick" the surface of the ocean to track small polluting craft. It could even be used to taste vehicles for radioactive fallout or chemical/biological surface agents. We could detect contamination before sending ground troops into an area. By putting a particular flavor on our vehicles, a taste version of IFF may be possible.
The sense of taste provides the human brain with information on characteristics of sweetness, bitterness, saltiness, and sourness. The exact physiological mechanism for determining these characteristics is not yet completely understood. It is theorized that sweet and bitter are determined when molecules of the substance present on the tongue become attached to "matching" receptors. The manner in which the molecules match the receptors is believed to be a physical interlocking of similar shapes, much as pieces of a jigsaw puzzle fit together. Once the interlocking takes place, an electrical impulse is sent to the taste center in the brain. It is not known whether there are thousands of unique taste receptors (each sending a unique signal), or only a few types of receptors (resulting in many unique combinations of signals). The experts think saltiness and sourness are determined in a different manner: rather than attaching physically to the receptors, these tastes "flow" by the tips of the taste buds, exciting them directly through the open ion channels in the tips.21
To make a true bitter/sweet taste sensor in space would require technology permitting the transmission of an actual particle of the object in question. This appears to be outside the realm of possibility in the year 2020. An alternative would be to scan the object in question with sufficient "granularity" to determine the shape of the individual molecules, and then compare this scanned shape with a catalog of known shapes and their associated sweet or bitter tastes. Such technology is currently available in the form of various types of scanning/tunneling electron microscopes. The shortcoming of these systems is that they require highly controlled atmospheres and enclosed environments to permit accurate beam steering and data collection. The jump to a "remote electron microscope" may likewise be out of reach by 2020.
Alternate means of determining surface structure remotely could be to increase the distance from which Computerized Axial Tomography (CAT) scans or nuclear magnetic resonance (NMR)22 imaging are conducted. While current technology requires rather close examination (on the order of several inches), at least a portion of the "beam" transmission takes place in the normal atmosphere. Extrapolating this capability to scan from greater distances does seem possible.
To perform something such as a taste scan from space to determine the sweet/bitter taste of an object will require continued research and dramatic advances in technology. First, taste research must continue until the mechanics of taste are fully understood. From this research, the characteristics of molecules related to taste would need to be cataloged in a database. Without an understanding of how taste works, a scanner could not be designed properly.
The problem of remote scanning is the second great challenge and it comes in two parts: getting the beam to the targeted object; and capturing the reflected beam pattern to determine the surface shape at the molecular level. Getting the beam to the target has three prerequisites: beam generation, beam aiming, and beam power.
The scanning beam (of whatever type provides the desired granularity) needs to be generated with sufficient power to reach the target with enough energy to reflect a detectable and measurable pattern for collection and subsequent analysis. Beam generators and collectors are expected to both be located (not necessarily co-located) in space (most likely a low earth orbit [LEO], at least for the generators). Maximum distance from generator to target is probably on the order of 1,000 to 1,500 miles to allow for off-track targeting; simple trigonometry shows that a 300-400 mile LEO yields a slant range to the line-of-sight horizon of roughly this magnitude. Target to collector distances would be at a minimum the same as generator to target (if collection is accomplished in LEO) to a maximum of 25,000 miles (if collection is accomplished in geosynchronous orbit).
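The slant-range figure above can be checked with basic geometry. The sketch below (illustrative only; the mean Earth radius is the sole assumed constant) computes the distance from a satellite at a given altitude to its geometric line-of-sight horizon. The usable scanning range would be somewhat shorter because of the long atmospheric path near the horizon.

```python
import math

EARTH_RADIUS_MI = 3959.0  # mean Earth radius, statute miles

def horizon_slant_range(altitude_mi):
    """Distance from a satellite at `altitude_mi` to the point where its
    line of sight is tangent to the Earth's surface (Pythagorean theorem
    on the triangle formed by Earth center, satellite, and horizon)."""
    r = EARTH_RADIUS_MI
    return math.sqrt((r + altitude_mi) ** 2 - r ** 2)

for h in (300, 400):
    print(f"{h}-mile LEO -> slant range to horizon ~{horizon_slant_range(h):,.0f} miles")
```

For a 300-400 mile LEO this works out to roughly 1,600-1,800 miles of geometric slant range, consistent with the order of magnitude cited above.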
To ensure the data being gathered is what is desired, the beam must be aimed and focused exactly at the desired target. Aiming will require compensation for atmospheric inconsistencies. Work has already been done in this area, where a laser is fired into the atmosphere to detect anomalies along the general path of the actual beam. This information is then used to refine the final aiming so the beam is properly compensated. Refined focusing on the targeted areas should be on the order of no more than 1 or 2 square feet. This sample size should hold the number of different tastes sensed to a reasonable number while still being large enough to keep the required number of samples relatively low.
Associated with beam power is the consideration of what happens when the beam (of whatever type) hits the target area. Will the power required be so great as to burn or damage the target? Will the scanning be detectable in the target area? These challenges must be overcome in order to bring the taste sensor to reality.
Capturing the reflected beam is also a significant challenge. The general technique used to analyze objects with scanning methods calls for a beam from a known location and of known power to "illuminate" the targeted object. Since the surface of the object is irregular, the beam is reflected in various directions. For this reason, the object needs to be virtually surrounded by collectors to ensure all reflected energy is collected. By noting which collector collects which portion of the beam, the surface reflecting the beam can be reconstructed.
In much the same way that beam aiming is a challenge due to the inconsistencies of the transmission medium between the beam generator and the target, collection of the reflected beam is also challenging.
Since it is impossible to completely surround the earth with a single collecting surface, a large number of platforms must serve as collectors to provide a reasonable capability for collecting the reflected energy for analysis. All platforms would need to focus their collectors on the targeted area by compensating in a manner similar to the beam generator. Platforms available for collection would be any with line of sight directly to the target, as well as any "below" the physical horizon that may be able to capture reflected energy in a manner similar to over-the-horizon backscatter (OTH-B) radar systems. With appropriate algorithms and beam selection, it is conceivable that the entire sensor constellation could be available for collection all the time.
Fusing of the reflected data from a single "taste" would take place on a central platform, probably in geosynchronous orbit. Information about the taste measurement would include scanning beam composition, pulse coding data, firing time, location of beam generator, aiming compensation data, focusing data, targeted area location, collector position, collector compensation data, and actual collected data including time of collection and pulse coded data. All this data is needed to accurately assemble the data collected in many locations at slightly different times. Basically we are collecting only a fraction of the "reflected energy" from scanning beams, and all this information is needed to know which part of the "taste signature" we have put together.

Tactile Sensing
The final sense examined is the sense of touch, or tactile sensing. A potential exists for the development of an earth surveillance system using a tactile sensor for mapping and object determination. Rather than viewing and tracking items of interest optically, objects could be identified, classified and tracked via tactile stimulation and response analysis. This method of surveillance has advantages over optical viewing in that it is unaffected by foul weather, camouflaging or other obscuration techniques.
Tactile sense provides humans with a notice of contact with an object. Through this sense, we learn the shape and hardness of objects, and using our cutaneous sensors we receive indications of pressure, warmth, cold and pain. A man-made tactile sensor emulates this human characteristic using densely arrayed elementary force sensors (or taxels) which are capable of image sensing through the simultaneous determination of a contacting object's force distribution and position measurements.23
Recent advances in tactile sensor applications have appeared in the areas of robotics, cybernetics and virtual reality. These simple applications attempted to replicate the tactile characteristics of the human hand. One project, the Rutgers Dexterous Hand Master, combines a mechanical glove with a virtual reality scenario to allow an operator to 'feel' virtual reality images. This research has advanced the studies of remote controlled robots that could be used in such ventures as construction of a space station or cleaning up a waste site.24
The challenge is to develop tactile sensors which are capable of remotely "touching" an object to determine object characteristics. This challenge elicits visions of a large gloved hand reaching out from space to squeeze an object to determine if it is alive. This science fiction analogy can be developed by expanding the practical concept of radar.
Radar is a radio system used to transmit, receive and analyze energy waves to detect objects of interest or "targets". In addition, target range, speed, heading and relative size can be determined. One possible way to identify tactile characteristics of an interrogated target is to analyze the radar returns and compare data reception to known values. Any radio wave striking an object will have a certain amount of its energy reflected back toward the transmitter. The intensity of the returned energy depends upon the distance to the target, the transmission medium, and the composition of the target. For example, energy reflected off a tree exhibits characteristics different from those of energy reflected off a building (a tree absorbs more energy). By analyzing the energy returns, it is conceivable that target characteristics of shape, temperature, and hardness could be determined through a comparative analysis against known values. The tactile characteristics of the various objects interrogated in an area surveillance could then be transformed into a 3-dimensional graphical representation using virtual reality.
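A comparative analysis of this kind can be sketched as a nearest-match lookup against a catalog of known return values. The reflectance and hardness figures below are invented placeholders, not measured material properties; the point is only the structure of the comparison.

```python
# Hypothetical catalog: material -> (reflected-energy fraction, hardness index).
# All values are notional, chosen only to illustrate the matching step.
MATERIAL_CATALOG = {
    "tree":     (0.20, 0.3),
    "building": (0.75, 0.9),
    "vehicle":  (0.90, 0.8),
    "water":    (0.10, 0.1),
}

def classify_return(reflected_fraction, hardness):
    """Return the cataloged material whose known signature is closest
    (in squared distance) to the measured return signature."""
    def distance(signature):
        ref, hard = signature
        return (ref - reflected_fraction) ** 2 + (hard - hardness) ** 2
    return min(MATERIAL_CATALOG, key=lambda m: distance(MATERIAL_CATALOG[m]))

print(classify_return(0.22, 0.25))  # nearest cataloged signature is "tree"
```

A fielded system would match far richer signatures (frequency response, polarization, temporal behavior), but the comparison-against-known-values logic is the same.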
The significant value in evolving tactile sensor technology lies not in the development of a replacement for current surveillance sensors, but in the unique additional information gained. A typical surveillance radar provides the "when, where, and how" for a particular target, while a tactile sensor adds the "what" and potentially the "who".

Countermeasures to Sensing
Once a potential adversary perceives a threat to its structure and systems, it usually develops and employs countermeasures. The concepts for sensing in this paper, albeit rooted in leading edge technology, are not exempt from enemy countermeasures. Potential enemy countermeasures in 2020 include killer ASATs, jamming, and ground station attacks. Target protection countermeasures include concealment, camouflage, and deception (CC&D) and OPSEC. Technical experts must address these threats and countermeasures early in the design phase of this sensing system.
Active and passive systems can overcome jamming, ground station attack, and enemy OPSEC. In the case of jamming, frequency hopping and "hardening" of space links are both effective countering techniques. Hopping rates currently exceed 3,000 hops per second. These rates will most likely continue to increase exponentially in the future, which could make many forms of jamming a minor irritant. Overcoming ground station attack can be accomplished through improved physical security and redundancy of critical nodes. Redundancy can be expensive, but if incorporated early in the design phase, it can be efficient and cost effective. Finally, the best way to counter enemy OPSEC is through passive measures such as better security training and reducing the number of people that "need to know," and active measures such as HUMINT and exploiting the "omni-sensorial" capability of this system.
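Frequency hopping defeats jamming because the transmitter and receiver derive the same pseudorandom channel sequence from a shared secret, while a jammer without that secret cannot predict the next channel. A toy sketch (channel count and key are invented for illustration):

```python
import random

CHANNELS = 1000  # notional number of available channels

def hop_sequence(shared_key, hops):
    """Derive a pseudorandom channel sequence from a shared key; both
    ends of the link generate the identical sequence and stay in step."""
    rng = random.Random(shared_key)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

tx = hop_sequence("shared-secret", 8)
rx = hop_sequence("shared-secret", 8)
assert tx == rx  # transmitter and receiver remain synchronized
print(tx)
```

At 3,000 hops per second, a jammer that cannot predict the sequence must spread its power across all channels, diluting its effect on any single hop.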
Killer ASAT and CC&D capabilities are much more difficult and costly to counter. Decoy satellites and redundancy in space-based systems can be effective. However, some cost-effective means of hardening must be pursued to ensure the survivability of our space systems. In the case of CC&D, the diversity of sensors employed, combined with other intelligence sources, should provide a counter to this threat. Techniques for employing multi-source sensing must keep pace with such emerging technologies as holographic imaging to ensure a counter to the spoofing threat is maintained.
Developing effective means to offset enemy countermeasures is a never-ending challenge. To avoid it, however, is to run the risk of developing expensive technology that can be rendered useless or ineffective by enemy countermeasures. An advantage to studying "measures" and countermeasures simultaneously is awareness of how friendly systems can be better protected or desensitized to potential countermeasures.

Data Fusion
Fusion of all the information collected from the various sensors mentioned above is the key to taking the massive amount of data and turning it into useful information for the warfighter (See Figure 3). Without the appropriate fusion process, the warfighter will be the victim of information overload, a condition which is not much better, and sometimes worse, than no information at all. The ability to fuse vast amounts of multisource data in real time, and have it available to the warfighter on demand, is the goal of this initiative.
Today, we are able to collect data from a variety of sensor platforms, e.g., satellites, air-breathing platforms, HUMINT sources, etc. What we are not able to do, however, is fuse large amounts of multisource data in a near real time fashion. Today, we have what amounts to "stovepipe" data, that is, data streams being processed independently. As we discovered in Desert Storm, there were deficiencies in sharing and relating intelligence from different sources. The warfighter was not able to see the whole picture, just bits and pieces.
In today's environment, sensor data is capable of drowning us. The sheer volume of this data can cripple an intelligence system.
Over 500,000 photographs were processed during Operation Desert Storm. Over its 14 year lifetime, the Pioneer Venus orbiter sent back 10 terabits (10 trillion bits) of data. Had it performed as designed, the Hubble Space Telescope was expected to produce a continuous data flow of 86 billion bits a day or more than 30 terabits a year. By the year 2000, satellites will be sending 8 terabits of data to earth each day.25 As staggering as this is, the computing power on the horizon may be able to digest this data. The Advanced Research Projects Agency (ARPA) is sponsoring the development of a massive parallel computer capable of operating at a rate of one trillion floating point operations per second (1 teraFLOPS). Parallel processing is the employment of multiple processors used to execute several instruction streams concurrently. Using parallel processing, the time required to process information is much shorter than if only one processor were doing the work.
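Parallel processing has a well-known limit worth noting here: overall speedup is capped by whatever fraction of the work must remain serial (Amdahl's law, a standard result rather than a figure from the sources above). A quick sketch:

```python
def amdahl_speedup(parallel_fraction, processors):
    """Ideal overall speedup when only `parallel_fraction` of a workload
    can be divided among `processors`; the remainder runs serially."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# A workload that is 95% parallelizable can never exceed a 20x speedup,
# no matter how many processors a massively parallel machine supplies.
for n in (10, 100, 10_000):
    print(f"{n:>6} processors -> {amdahl_speedup(0.95, n):5.1f}x speedup")
```

This is why raw teraFLOPS figures alone do not guarantee that multisource fusion keeps pace with the data volumes quoted above; the fusion algorithms themselves must be highly parallelizable.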
Once the data is processed into usable information or intelligence, a means of storing and retrieving a huge database or library is needed. "Advances in storage technology in such media as holography and optical storage will undoubtedly expand these capacities."26 An optical tape recorder capable of recording and storing more than a terabyte of data on a single reel is being explored.
Vertical block line (VBL) storage technology offers the possibility of storing data in non-volatile, high density, solid-state chips. It is a magnetic technology which offers inherent radiation hardness, data erasability and security, and cost effectiveness. VBL technology is intended to provide non-volatility, high density, and solid-state performance simultaneously. When compared to magnetic bubble devices, VBL offers higher storage density. It also offers higher data rates at reduced power when compared to bubble devices. VBL chips could achieve (volumetric) storage densities ranging from one gigabit to one terabit per cubic centimeter. Chip data rates, a function of chip architecture, can range from one megabit/second to 100 megabits/second. Chip costs, in volume production, are estimated to be less than one dollar per megabyte.
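The density and data-rate figures above permit some useful back-of-envelope sizing. Taking the projected 8 terabits per day of satellite downlink quoted earlier, the sketch below (simple arithmetic on the quoted numbers, nothing more) shows both the storage volume required and why many chips must be accessed in parallel:

```python
DAILY_DOWNLINK_TBITS = 8.0  # projected satellite downlink per day (terabits)

def storage_volume_cc(terabits, density_tbits_per_cc):
    """Cubic centimeters of VBL storage needed at a volumetric density."""
    return terabits / density_tbits_per_cc

def chip_readout_days(terabits, chip_rate_mbits_per_s):
    """Days a single chip would need to stream `terabits` at its peak rate."""
    seconds = terabits * 1e12 / (chip_rate_mbits_per_s * 1e6)
    return seconds / 86_400

# One day of downlink: 8 cm^3 at 1 Tbit/cc, 8,000 cm^3 at 1 Gbit/cc ...
print(storage_volume_cc(DAILY_DOWNLINK_TBITS, 1.0),
      storage_volume_cc(DAILY_DOWNLINK_TBITS, 0.001))
# ... but a single 100 Mbit/s chip would need nearly a full day just to
# stream it, so high aggregate rates demand massively parallel access.
print(round(chip_readout_days(DAILY_DOWNLINK_TBITS, 100), 2))
```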
In order to provide the user real time, multisource data in a usable format, leaps in data fusion technologies must occur. A new technology which could exponentially increase computational speeds is the photonic processor. The processing capabilities and power requirements of current fielded and planned electronic processors are determined almost solely by the low-speed and energy-inefficient electrical interconnects used to link electronic boards, modules, or processing systems. Processing speeds of electronic chips and modules can exceed hundreds of megahertz, whereas electrical interconnects run at tens of megahertz due to standard transmission line limitations. More significantly, the majority of power consumed by the processor system is used by the interconnect itself. Optical interconnects, whether in the form of free-space board-to-board busses or computer-to-computer fiber optic networks, consume significantly less electric power, are inherently robust with regard to electromagnetic interference (EMI) and electromagnetic pulse (EMP), and can provide large numbers of interconnect channels in a small, low weight, rugged subsystem. These characteristics are critically important in space-based applications.27
This technology of integrating electronics and optics reduces power requirements, builds in EMI/EMP immunity, and increases processing speeds. The technology is very immature but has great potential. If it were possible to incorporate photonic processing technologies into a parallel computing environment, increases of several orders of magnitude in processing speeds might occur.
The fusing of omni-sensorial data will require processing speeds equal to or greater than those mentioned above. On-board computer (OBC) architectures will use at least three computers, performing parallel processing and using a voting process to ensure that at least two of the three OBCs agree. The integration of neural networks in OBC systems will provide higher reliability and enhance process control techniques.
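The two-of-three voting step can be made concrete in a few lines. This is a generic triple-modular-redundancy sketch, not a description of any specific on-board computer design:

```python
def majority_vote(results):
    """Return the result at least two of three on-board computers agree
    on; return None when all three differ, flagging a fault condition."""
    a, b, c = results
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None  # no majority -> hand off to fault management

print(majority_vote([42, 42, 17]))  # the dissenting computer is outvoted
```

In practice the voter itself must also be protected against failure, which is one motivation for the distributed fusion processing discussed elsewhere in this section.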
Change detection/pattern recognition and chaos modeling techniques will increase processing speeds along with reducing the amount of data to be fused. Multiple sensors, processing their own data, can increase processing speeds and share data between platforms through cross-cueing techniques. Optical data transmission techniques should permit high data throughputs to the fusion centers in space, on the ground, and/or in the air.
The National Information Display Laboratory is investigating technologies which would aid in the registration and deconfliction of received omni-sensorial data, data fusion, and image mosaicing. "Information-rich" environments made accessible by the projected sensing capabilities of 2020 will drive the increasing need for geo-referenced autoregistration of multisource data prior to automated fusion, target recognition/identification, and situation assessment. Image mosaicing, or the ability to consolidate many different images into one, will enhance the usability of wide area imagery-based products. Signals, multiresolution imagery, acoustical data, analyzed sample data (from tactile/gustatory sensing), atmospheric/exoatmospheric weather data, voice, video, text, and graphics can be fused in an "infobase" which provides content- and context-based access, selective visualization of information, local image extraction, and playback of historical activity.28
An omni-sensorial distributed fusion processing capability, either on-board the sensor platform, on the ground, or in the air, will act as a strong defensive countermeasure against possible enemy threats (jamming, ASAT, lasing, etc.). The future environment will no longer permit centralized fusion centers because of their vulnerability to single-event failures either through enemy attack or natural disasters. Neural network technology offers opportunities for size, weight, and power reductions in addition to opportunities for distributed networking. Internal logical and physical arrangements of neural systems (their software and hardware) can be modeled on the massively parallel, highly distributed, and self-organizing arrangements found in the brain--a silicon-based model of a carbon-based information processing and decision mechanism refined over millions of years of life on Earth. Neural systems emulate the coordinated interactions of a living organism's neurons, synapses, and nerve pathways, using many hierarchically-related computers (data processing elements--some space-based, some land-based) and self-modifying decision software in each processing element, linked by reconfiguring networks of high speed communications channels. At any point in time in the system's operational life, the "strengths," structure, and interconnections of a neural system's components will determine its inferential abilities at that moment. The strength of each system connection--the degree to which each processor element can affect the inferential findings of the other processor elements in the system--is called the "weight" of that connection. At any point in time, each connection's weight is determined by "who" the processor can currently connect to, the capacity of those channels and the type of data it can pass, and the current hierarchic position of the processor.
This hierarchic connection configuration making up the system will constantly rearrange itself as new situational data of interest to the neural system arrives. In this manner, the neural system is always optimally self-organized to assimilate the new input.
The architecture for neural data fusion can be described in a model consisting of two to three major sub-processor levels. The first level takes observation data from multiple sensors and associates it with a hypothesized object and background. The extent of a potential "match" to a known object or background is given a value. Optimization of the value leads to an assignment of the observation to a labeled set. The set is a group of point objects in three dimensional space without reference to any context. The second level takes the labeled set and places it within a contextually-oriented framework through a process of situation refinement (conflicting data interpretations are resolved), situation abstraction (relationships between observation features and actual database elements are developed), situation assessment (composite interpretations of these relationships, combined with an analysis of activities and events, are made), and situation prediction (the analyses are extrapolated to a future point in time). The end result is a series of conditional relationships. Using table structures, object-oriented techniques, and similar recognition schemes, a solution to the observation is made by "expectation templating".
From this point the second level fusion process "results from the flow of multi-sensor data and inferences into this template structure" to confirm the match of sensory data to the template. A third level can be envisioned where strictly military data is coupled to the results from the first two levels to produce an assessment of the object of interest's ability to inflict damage, i.e. a threat assessment.29 For example, sensor systems detect an object and determine it to be a car-like object traveling in Montana (first level process). Next, data conflicts between sensors are resolved and the type of car and details of the environment are set (situation refinement). As the car rounds a corner, the system expects the "picture" to change (situation abstraction). The expectation is confirmed against sensor data as time progresses and it is determined that the car is moving on a winding mountain road (situation assessment). Speeds, terrain, geographic location, etc., are combined to predict the car's behavior as time progresses (situation prediction). All this taken at once is second level data fusion providing a solution to the observation. A third level would be the addition of behavioral traits (threat envelopes) to determine the object's intent (See Figure 4).
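The first-level association step can be illustrated with a toy scoring routine. The candidate objects and feature values below are invented solely to show the "optimization of the value leads to an assignment" idea:

```python
# Hypothetical candidate objects, each with a notional feature signature
# (speed, size, heat) normalized to the range 0..1.
CANDIDATES = {
    "car":   (0.9, 0.2, 0.1),
    "truck": (0.7, 0.8, 0.3),
    "tank":  (0.4, 0.9, 0.6),
}

def match_score(observation, signature):
    """Higher is better: negative squared distance between feature vectors."""
    return -sum((o - s) ** 2 for o, s in zip(observation, signature))

def associate(observation):
    """Assign the observation to the hypothesis that maximizes the match
    value, returning the label and its score (level-one fusion)."""
    label = max(CANDIDATES, key=lambda name: match_score(observation, CANDIDATES[name]))
    return label, match_score(observation, CANDIDATES[label])

print(associate((0.85, 0.25, 0.15))[0])  # nearest hypothesis is "car"
```

Levels two and three would then wrap labeled sets like this one in context (terrain, behavior over time) and finally in threat assessment, as in the Montana car example above.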
Crucial to both level one and two is the ability to have a processor recognize and understand patterns (cars, faces, armored vehicles, buildings, landscapes, etc.) as an animal brain does. The animal brain relies on neurons highly interconnected in three dimensions to recognize and interpret patterns, rather than bit streams as computers do. Animal brains also process information on several different levels simultaneously. Neural network technology was inspired by these biological processors. Neural networks can perform a variety of pattern mapping functions or processes. They can reconstruct a stored pattern when the input is only a partial match for that pattern, retrieve a second pattern associated with a given input pattern, generate a new pattern based on a combination of other patterns, or group similar patterns into clusters and provide new patterns representative of the clusters.30 This last function gives neural networks the ability to "learn," which in turn could reduce our postulated system's reliance on conventionally stored databases such as digital terrain data, weather data, sensor system features, nuclear data, biological data, chemical data, building structure data, commercial systems characteristics data, weapon systems characteristics, and order of battle data.
Neural networks can discern patterns and trends too subtle or too complex for humans, let alone current conventional computers. They can perceive relationships among thousands of variables, whereas a human can only deduce the relationships among two or three variables simultaneously. Hence, these networks have the ability to identify emerging trends and draw conclusions better than humans. Neural network technology is already in use in spotting credit card fraud by identifying changes in spending patterns. Mellon Bank of Delaware discovered that credit card thieves were charging $1 amounts to see if the stolen cards they were trying to use had been discovered. It could have taken weeks for human investigators using older conventional processing techniques to discover the same trends. In this case, their "just installed neural network detected this new pattern on its own, without us having told it anything about a scam." Imagine this capability in a combat environment. A network "trained" in an enemy's order of battle could not only predict enemy courses of action based on force movements detected by various sensor systems but also detect deviations from doctrine, thereby alerting commanders of possible surprise attacks or deceptions. It is not difficult to imagine an operational level commander in a virtual reality environment with the battle space three dimensionally displayed all around him (or her), where friendly and hostile forces are not only depicted as traditional blue and red objects but the anticipated moves of the red objects are also indicated by a series of red arrows displayed on the battlespace. The predictive capabilities of these networks are already making a debut in the commercial world with airline companies that use them to forecast passenger loads and revenues. Networks forecast demand based on the time of day, day of the week, and season of the year.
The networks have proven to be 20 percent more accurate than traditional computer based statistical predictions.31
IBM France built a neural system that warns of industrial robotics equipment failure and alerts maintenance technicians before a failure occurs. The sounds, vibration and other sensory data of a normally working motor and those of a malfunctioning motor are "shown" to the system. The system then monitors the motor(s) and, based on the earlier examples, detects changes and predicts problems. Furthermore, each instance becomes another example. The system learns as it works.32 Current neural network technology is limited by the fact that the systems using it are essentially simulations using traditional software, hardware, and metallic interconnections.
Multilayer back-propagation network (MBPN) is the most common and capable neural system approach under research today. This approach shows great promise for further development and maturation. Human programmers do not specify in advance the internal rules and procedures for a MBPN-based system. Instead, the neural system's expertise is represented by the changing patterns of activation and current connection weights exhibited by the current connection configuration in the system--analogous to the brain's constant synapse firings. In order to determine a response, MBPN systems require only that the inferential problem be represented in terms of an input data vector representing the current situation for each connection and an associated output data vector for expressing the finding or result to another connection or (finally) the warrior.
To achieve a "proper" set of internal connection weights and activations, MBPN-based neural systems must be initially "trained" through exposure to historical examples of situation observations and correct performance outcomes (using associated data inputs and data outputs). During the system training, the historical example sets are used to expose the system to the values of the input and output vectors, which are then "clamped" to specific processing elements of the neural system as their activation or reaction levels. Through a repeated series of associated input and output examples (training sessions), the clamped activation levels methodically influence and adjust the remainder of the system's connections until a generalized solution is achieved (a mapping between the inputs and outputs is found)--the system has "learned" to recognize, analyze, and respond to similar (but not identical) situations. In this manner, MBPN-based neural systems are trained to associate collections of input data and desired output result(s) by being taught the differences between the actual result produced in each training session and the result desired by the system's "teachers." To the teachers, system training means repeatedly presenting the system with correct examples of associated input and output data vectors and allowing the system to internally adjust itself whenever a mapping error occurs. This is remarkably analogous to what we understand about the human learning experience--learning by mistakes.
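The training loop described above can be sketched in a few dozen lines. This toy network (XOR as the example mapping; layer sizes, seed, and iteration count are chosen arbitrarily) is a stand-in for an MBPN, not any fielded design: associated input and output examples are repeatedly presented, and every connection weight is adjusted against the mapping error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input vectors
Y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))  # input-to-hidden weights
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))  # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward():
    hidden = sigmoid(X @ W1 + b1)
    return hidden, sigmoid(hidden @ W2 + b2)

initial_mse = float(np.mean((Y - forward()[1]) ** 2))

for _ in range(5000):                      # repeated training sessions
    H, out = forward()
    err = Y - out                          # actual vs. desired result
    d_out = err * out * (1 - out)          # propagate the error backwards,
    d_hid = (d_out @ W2.T) * H * (1 - H)   # layer by layer...
    W2 += H.T @ d_out; b2 += d_out.sum(axis=0)   # ...adjusting every
    W1 += X.T @ d_hid; b1 += d_hid.sum(axis=0)   # connection weight

final_mse = float(np.mean((Y - forward()[1]) ** 2))
print(f"mapping error: {initial_mse:.3f} -> {final_mse:.3f}")
```

After enough sessions the mapping error falls, and the system's "expertise" lives entirely in the adjusted weights, just as the text describes.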
The fully-trained neural system or fully-developed AI system is limited to producing a response based on situational data similar to that presented in its training/experience or its "if-then" logic tables--this is an essential drawback of these systems. The systems can relate only to what they have been taught or programmed to recognize, and then they can only respond in a predetermined manner. For any MBPN-based neural system, the human-selected input and output data vectors selected for its training embody the same risks and rewards as the "knowledge engineering" required for an AI system. The ultimate success of a neural system (making roughly the same decision a human expert would make under similar conditions) depends on the set(s) of data used to initially train the system and the system's experience in real world situations up to that point. High capacity data transfer schemes can speed the training to a point that the system is highly effective once deployed. Its ability to continue to learn ensures its effectiveness with each new experience after deployment. The limits to the depth of learning that can conceivably take place are incalculable and will be bounded only by the system's capacity to store, network, and process information. All of these are limited today by the use of inanimate components composed of silicon and metal.
Exponential advancement will come from replacing metallic connections with chemical ones, embedding or growing actual neurons or their operative chemical parts on an artificial substrate, and connecting them to thousands of sensory inputs and virtual reality presentation systems. Researchers at the Naval Research Laboratory (NRL), Washington DC; Science Applications International Corporation (SAIC), McLean VA; the National Institutes of Health (NIH), Bethesda MD; and the University of California, Irvine CA, are joining in cooperative research that will "introduce power into organically grown neurons on artificial substrates, signal input and output, measure differences in potential, and determine ion concentrations". These are the first steps toward interfacing biological units with solid-state devices to produce working systems. SAIC's corporate vice president for advanced technology programs, Clinton W. Kelly III, predicts that "if we understand the chemistry, we can get the large molecules to perform computation and, in principle, develop devices that are lighter, more complex and that will not use nearly the power of silicon based machines".33
Key to producing such systems is the ability to use microelectronics fabrication techniques with advanced surface chemistry processes to lay out molecular patterns that can self orient, organize, survive, and flourish. Cells are currently being grown on various materials, and continually improving techniques are able to control both cell patterns and the growth of the neurons' communicating cell appendages, neurites. Bioelectronic circuit design is now a potential reality. Dr James J. Hickman (SAIC), a surface chemist, and Dr David A. Stenger (NRL), a biophysicist, predict that in 20 years the bioelectronic approach could lead toward an extremely fast machine that might match or correspond with the human operator's intellect. These devices will easily learn without conventional training algorithms (needed for simulated neural systems) and require minute amounts of energy.34 Super computers are expected to have only the cognitive abilities of a chicken by the end of the century.35 It is easy to see how basketball-sized bioelectronic neural systems with near-human intellect and fully interconnected to a suite of sensors could provide phenomenal surveillance, reconnaissance, and intelligence analysis by the year 2020. Such systems would not only provide information but would determine what data was needed from what sources in what sequence in order to provide the clearest "picture" possible to the war-fighting customer.
Pursuit of non-military omni-sensorial applications in the early stages of development could provide a host of interested partners, significantly reduce our costs, and increase the likelihood of Congressional acceptance. These applications can be divided into three sub-areas: government use, consumer use, and general commercial use.
Government uses of this capability could include law enforcement, environmental monitoring, precise mapping of remote areas, drug interdiction, and assistance to friendly nations. The capability to see inside a structure could prevent incidents like the one that occurred at the Branch Davidian compound in Waco, Texas in 1993. Drug smuggling could be identified by clandestinely subjecting everyone to remote sensing. Friendly governments could be provided with real-time detailed intelligence on all insurgency/terrorist operations within their countries. Finally, the spread of disease might even be tracked to allow early identification of infected areas, similar to the way we track bird migratory patterns today using LANDSAT multi-spectral imagery.
Consumer uses would range from home security to monitoring food and air quality, as well as entertainment spin-offs. Home detection systems would be cheaper and more capable, able to sense not only smoke and physical break-ins but also gas leaks and seismic tremors. They could also provide advance warning of flash floods and other imminent natural disasters. Sensors could identify spoiled food items. Even the air we breathe could be constantly sensed to provide health benefits. The spin-offs in the field of entertainment are limited only by the imagination.
Commercial uses could include a follow-on to the air traffic control system, mineral exploration improvements, airport security, and major medical advances. Everyone from farmers to miners would benefit from remote sensors minimizing the trial-and-error approach that often occurs today. Aircraft would be scanned before leaving their gates and before take-off to provide new levels of safety. Finally, in the medical field, patients would be scanned and the data fed into a computer. Exploratory surgery would cease to exist, as doctors would see any problem on screen. They could then simulate treatments, including surgery and drugs, on the computer image to determine the best course of action, and then treat the patient knowing how the patient should respond.
A good example of a commercial application is a new air and space traffic control system: an aerospace traffic location and sensing (ATLAS) system analogous to the current air traffic control system. The environment of space is becoming more and more crowded (accumulation of satellites, debris, etc.). It is hazardous today, and will be even more hazardous as the year 2020 approaches, to fly in space without an "approved" flight plan, particularly in LEO. The space shuttle, for instance, occasionally makes unplanned course corrections in order to avoid debris damage. Similarly, as the boundaries between space and atmospheric travel become less distinct (e.g., trans-atmospheric vehicles), this system could conceivably integrate all airborne and space-transiting assets into a seamless, global, integrated system.
This system envisions that some of the same satellites used as part of the integrated structural sensory signature system would also be used for ATLAS. It would require only a small (<20) constellation of space surveillance satellites orbiting the globe. ATLAS satellites would carry the same omni-sensorial packages, capable of tracking any object in space larger than 2 centimeters. All satellites deployed in the future would be required to participate in the ATLAS infrastructure. These satellites would carry internal navigation and housekeeping packages, perform routine station-keeping maneuvers on their own, and constantly report their position to ATLAS satellites. Only anomalous conditions (e.g., health and status problems, collision threats, etc.) would be reported to small, satellite-specific ground crews. ATLAS ground stations (primary and backup) would be responsible for handling anomalous situations, coordinating collision avoidance maneuvers with satellite owners, authorizing corrective maneuvers, and coordinating space object identification (particularly threat identification). The satellite constellation would be integrated via crosslinks allowing all ATLAS-capable satellites to share information. The aerospace traffic control system of the future would eliminate or downsize most of the current satellite control ground stations as well as the current ground-based space surveillance system. Elements of the ATLAS system would include improved sensors for space objects (including debris and maneuvering target tracking), software to automatically generate and deconflict tracks and update catalogs, and an analysis and reporting "back end" that would provide surveillance and intelligence functions as needed. Air and space operators would have a system where they could enter a flight plan and automatically receive preliminary deconflicted clearance.
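The track-deconfliction function ATLAS would perform can be illustrated with a minimal conjunction screen: given two objects' positions and velocities at a common epoch, predict their closest approach and flag any pass inside a keep-out radius. Everything below (the straight-line propagation, the 2 km keep-out sphere, the function names) is an illustrative assumption for the sketch, not part of any actual ATLAS design, which would require full orbital dynamics.

```python
# Illustrative conjunction screen for an ATLAS-style deconfliction pass.
# The linear state propagation and the 2 km keep-out radius are
# assumptions of this sketch; a real system would propagate full orbits.
from dataclasses import dataclass
import math

@dataclass
class StateVector:
    """Position (km) and velocity (km/s) at a common epoch."""
    pos: tuple
    vel: tuple

def closest_approach(a: StateVector, b: StateVector) -> tuple:
    """Return (time_s, miss_distance_km) of closest approach,
    assuming straight-line motion over the screening window."""
    dp = [pa - pb for pa, pb in zip(a.pos, b.pos)]   # relative position
    dv = [va - vb for va, vb in zip(a.vel, b.vel)]   # relative velocity
    dv2 = sum(v * v for v in dv)
    if dv2 == 0.0:                                   # same velocity: range is constant
        return 0.0, math.dist(a.pos, b.pos)
    t = max(0.0, -sum(p * v for p, v in zip(dp, dv)) / dv2)
    miss = math.dist([p + v * t for p, v in zip(dp, dv)], (0.0, 0.0, 0.0))
    return t, miss

def needs_maneuver(a: StateVector, b: StateVector, keep_out_km: float = 2.0) -> bool:
    """Flag a conjunction when the predicted miss distance
    falls inside the keep-out sphere."""
    _, miss = closest_approach(a, b)
    return miss < keep_out_km
```

For example, a satellite and a head-on piece of debris whose predicted miss distance is half a kilometer would be flagged, and the ground station would then coordinate an avoidance maneuver with the satellite's owner, as the passage above describes.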
In addition, ongoing in-flight deconfliction and object avoidance would be available without operator manipulation. The system could integrate information from even more sophisticated sensors of the future, such as electro-magnetic, chemical, visible, and omni-spectral sensors. "Hand-offs" from one sector to another would occur, but only in the on-board ATLAS brain, and would be transparent to the operator. ATLAS provides a vision of a future-generation smart system that integrates volumes of sensory information and fuses it into a format that gives operators just what they need to know on a timely basis. (For more information on ATLAS, see the Space Traffic Control white paper.)
The ATLAS system is just one small commercial application of the comprehensive structural sensory signature concept that fuses data from a variety of sophisticated sensors of all types to provide the warfighter of tomorrow with the right tools to get the job done.
(3 December, 2020--YOU ARE THERE) It had been five minutes since the tingling sensation in her arm had summoned her from her office. Now she was standing alone in the darkened battle assessment room, wondering how she would do in her first actual conflict as CINC. "Computer on, terrestrial view," she snapped. Silently, a huge three-dimensional globe floated in front of her. "Target Western Pacific, display friendly and enemy orders of battle, unit status and activity level," was the next command. The globe turned into a flat battle map showing corps, division, and battalion dispositions. Lifelike images appeared before her marking the aircraft bases, with smaller figures showing airborne formations. Beside each symbol were the unit's designator, its manning level, and the plain-text interpretation of its current activity. The friendly forces were shown in blue and the enemy in red. All the friendlies were in the midst of a recall. The map showed two squadrons of air domination drones, a wing of troop support drones, and an airborne command module (ACM) heading toward the formations of enemy forces. Shaded kill zones encircled each formation. Enemy forces floated before her, also displaying textual information. The image displayed enemy units on the move from their garrisons. Speed, strength, and combat radii were marked for each unit. Some enemy units were shown still in garrison but with engines running, discovered by sensitive seismic, tactile, and fume-detecting sensors. "Manchuria," came the next command. The map changed. The CINC was now in the middle of a holographic display. Ground Superiority Vehicles (GSVs), identified by the reliable Structural Sensory Signature System (S4), moved below her and drones flew around her. She could see her forces responding to the enemy sneak attack and monitored their progress. The engagement clock showed ten minutes to go before the first blue and red squadrons joined in battle.
Aboard the ACM, the aerospace operations director observed the same battle map the CINC had just switched off. By touching the flat screen in front of him, his dozen controllers received their target formations. Each controller wore a helmet and face screen that "virtually" put him just above the drone flight he maneuvered. The sight, feel, and touch of the terrain profile, including trees, buildings, clouds, and rain, were all there as each pressed to attack the approaching foe. On the ground, a platoon sergeant nervously watched his face-shield visual display. From his position he could see in three-dimensional color the hill in front of him and the enemy infantry approaching from the opposite side. Had the Agency had enough time before the conflict, it could have loaded DNA data on the opposing commander into the Data Fusion Control Bank (DFCB) so he could positively identify him now, but such was the fog of war. The driving rain kept him from seeing ten feet in front of him, but his monitor clearly showed the enemy force splitting and coming around both sides of the hill. The enemy's doctrinal patterns indicated that his most likely attack corridor would be on the eastern side of the hill. Now the enemy was splitting his force in hopes of surprising our forces. The platoon sergeant's troop commander saw the same screen as her troops did, with the added feature of having her opponent's "predicted" moves overlaid on his actual movements. From her virtual command post, she arrayed her forces to flank the foe. She had to be careful not to be fooled by the holographic deception images put in place by the enemy, an all too frequent and disastrous occurrence in the last conflict. If she was lucky, surprise would be on her side today. A scant five minutes had passed since the Global Surveillance, Reconnaissance, and Targeting (GSRT) system alerted the CINC of unusual activity on the other side of the border.
Multiple sensors, some of which had been dormant for years and some recently put in place by special precision guided munitions (PGM) delivery vehicles, had picked up increased signal activity and detected an unusual amount of motion, scent, heat, noise, and motor exhaust in and around enemy bases. Now GSRT activated two additional CINCSAT low earth orbit multi-sensor platforms, launched four air-breathing sensor drones, and fired two "light-sat" intersystem omni-sensorial communications satellites into orbit to bolster the surveillance grid that watched the globe and the space beyond twenty-four hours a day. As the CINC, airborne controller, and ground troops activated their situation assessment system (SAS), GSRT identified them, confirmed their locations, and passed the information required to get them on line. As each warrior requested target data, GSRT fused sensor data, tapped databases, activated resources, and passed templated, neurally collated information to each person in exactly the format they needed to get a clear picture of their enemy and the unfolding situation. This was the same GSRT that was also aiding San Francisco in responding to yesterday's massive earthquake. From the President to the city mayor to the fireman trying to find the best route through the cluttered and congested streets, each got the real-time information they asked for in seconds, just as our troops in the Western Pacific did. The CINC paused for several moments, wondering how battles were ever fought without the information systems she now used with practiced ease, glad they were fighting an enemy still mired in the visual/ELINT-oriented maneuver force of the last war.
Return to SPACECAST 2020 Technical Report Volume I Table of Contents