Persistent surveillance requires highly compute-intensive imaging capabilities because the aerial vehicle must remain (on demand) in an area to detect, locate, characterize, identify, track, target and re-target threats. These imaging tasks must be performed in real time, across multiple sensor data streams, so that actionable information is available to the warfighter.
Producing high-quality imagery on a mobile platform like a UAV poses some interesting challenges, including the motion of the platform and the resulting image perspectives. Onboard exploitation techniques and algorithms are employed to correct these effects: ortho-rectification, for example, uses geometric projections to remove the distortion present in the original imagery and produce geometrically correct images.
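As a sketch of the geometric-projection step, the snippet below maps raw pixel coordinates through a 3x3 projective (homography) transform. The helper name `apply_homography` is hypothetical, and in practice the transform `H` would be derived from the sensor and platform geometry rather than given directly:

```python
import numpy as np

def apply_homography(H, points):
    """Map 2-D pixel coordinates through a 3x3 projective transform.

    Ortho-rectification can be modeled as warping each raw pixel
    through such a transform (here, H is assumed to come from the
    sensor/platform geometry).
    """
    pts = np.asarray(points, dtype=float)             # shape (N, 2)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T                              # apply the projection
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# Identity transform leaves coordinates unchanged.
print(apply_homography(np.eye(3), [[10.0, 20.0]]))  # → [[10. 20.]]
```

A full rectifier would warp every pixel (with interpolation) rather than a point list, but the coordinate mapping is the core of the operation.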
Geo-registration (the process of adjusting an image based on the relative similarity between the raw sensor data and a reference image) can be performed to stabilize the image, along with geo-location (the process of comparing the geo-registered image data to known geographical coordinates) to yield the most accurate image. Finally, this processed data, which still represents individual sensor streams, must be "stitched" or "mosaicked" together to create a fused image. Together, the real-time processing requirement, intense algorithmic computation, and parallel processing aspect of onboard exploitation present a huge computational load.
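A minimal illustration of the registration step, using phase correlation as a stand-in for comparing sensor data against a reference image. The function name `estimate_shift` and the synthetic data are assumptions for this sketch, not the platform's actual algorithm, and real geo-registration must also handle rotation, scale, and perspective:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (row, col) translation aligning img to ref
    via phase correlation, a simple proxy for registering sensor data
    against a reference image."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of each axis into negative shifts.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulated platform offset
print(estimate_shift(ref, img))  # → (-3, 5): roll img by this to match ref
```

Once per-stream shifts are known, the aligned tiles can be composited ("mosaicked") into the fused image; doing this per frame, per stream, is what drives the computational load described above.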
The long mission durations of persistent surveillance (frequently multiple days) demand minimal power consumption. Often, imaging subsystems are the last to be added onto the airframe, and are subsequently allotted the smallest portion of the power budget.
Clearly, imaging subsystems based on processors that feature the lowest power consumption and the highest performance (the GigaFLOPS/watt metric) are the best choice for persistent surveillance applications. Many CPU-based boards don't meet these stringent GFLOPS/watt requirements. Graphics Processing Units (GPUs), first introduced by NVIDIA in 1999, have always had very high GFLOPS/watt metrics. Given their parallel performance potential and low power consumption, why haven't GPUs been utilized much for compute-intensive embedded computing? Programmability is one reason, and upgradeability is another.
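The GFLOPS/watt figure of merit itself is simple to compute; the numbers in this sketch are purely illustrative, not vendor measurements:

```python
def gflops_per_watt(peak_gflops, power_watts):
    """Performance-per-power figure of merit used to size imaging subsystems."""
    return peak_gflops / power_watts

# Illustrative (not measured) numbers: a GPU-class part delivering
# 500 GFLOPS at 70 W versus a CPU board delivering 40 GFLOPS at 60 W.
print(gflops_per_watt(500, 70))  # ≈ 7.1 GFLOPS/W
print(gflops_per_watt(40, 60))   # ≈ 0.67 GFLOPS/W
```

Under a fixed power budget, the ratio directly bounds how much onboard exploitation the subsystem can sustain.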
The software environment for GPUs has been notoriously non-intuitive even to proficient embedded programmers. The environment is based on graphics primitives, not high-level language constructs or even CPU assembly variants. And the basic structure of programming tools for GPUs has not offered the optimizations that programming languages for CPUs do. GPGPUs (General Purpose GPUs, from ATI, NVIDIA and others) offer familiar constructs such as well-defined APIs and indexed matrix operations.
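To illustrate the kind of indexed, per-element construct GPGPU APIs expose, here is a CPU-side Python/NumPy emulation. The kernel name and the explicit "launch" loop are illustrative; on an actual GPGPU, each index pair would map to a hardware thread executed in parallel:

```python
import numpy as np

def saxpy_kernel(i, j, a, x, y):
    """Compute one output element: out[i, j] = a * x[i, j] + y[i, j].
    This per-element, index-addressed style mirrors how GPGPU kernels
    are written against a well-defined API."""
    return a * x[i, j] + y[i, j]

a = 2.0
x = np.arange(6.0).reshape(2, 3)
y = np.ones((2, 3))

# "Launch": iterate the index space serially here; a GPGPU runtime
# instead assigns each (i, j) to its own thread.
out = np.empty_like(x)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        out[i, j] = saxpy_kernel(i, j, a, x, y)

print(out)  # same result as the vectorized form 2*x + 1
```

Contrast this with pre-GPGPU practice, where the same computation had to be phrased in terms of textures and shading operations rather than array indices.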
Historically, GPUs have not been easily upgradeable; they have been discrete components soldered directly onto the printed circuit boards. Upgrading the chip as new versions become available would require a complete board respin. Many of today's GPGPUs, however, are available in a mobile PCI Express module (MXM), an easy-to-insert format that supports straightforward upgrades when new, faster GPGPUs become available. An example system that employs GPU-based processing is Mercury's Sensor Stream Computing Platform (Figure 1), a GPU-based development platform for embedded, sensor stream computing and exploitation. It is an environment that enables users to design, simulate and implement data exploitation algorithms in a low-power, high-performance 6U VXS form factor.
Figure 1: The Sensor Stream Computing Platform is a GPU-based development platform that enables users to design, simulate and implement data exploitation algorithms in a 6U VXS form factor.