Although previously used in conventional warfare applications, persistent imaging is a major advancement in the war against terror. Radar, electro-optical (EO), infrared (IR), signals intelligence (SIGINT) and hyperspectral imaging are among the sensor modalities that can be used for this purpose. The ability to collect, process and compare outputs from these sensor systems, captured from aboard a moving platform such as an Unmanned Aerial Vehicle (UAV) or from a stationary airborne platform such as an aerostat, can provide the warfighter with invaluable situational awareness.
Because modern sensors gather so much data during long collection missions, extracting real-time actionable information from the data and storing it for later forensic analysis is a challenging task (Figure 1). There are key technologies worth exploring to address this challenge, using high-performance embedded computing to process and reduce the data from current and next-generation sensors.
Figure 1: A Global Hawk UAV receives its pre-flight checks from maintenance technicians before a mission. Modern sensors can gather so much data during long collection missions that extracting real-time actionable information from the data, and storing it for later forensic analysis, is a challenge.
Persistent Imaging Processing
The derivation of real-time actionable information from sensor data is a compute-intensive, real-time operation that requires SWaP (Size, Weight and Power)-optimized, ultra-high-speed processing power. General-purpose graphics processing units (GPGPUs), built on GPU silicon traditionally used in gaming, are powerhouses in terms of both raw performance and SWaP optimization, and in many instances can outperform CPUs. GPGPUs excel at mathematical operations such as image processing on parallel data streams because they are highly parallel, multicore mathematical processors with high-speed on- and off-chip data access, ideal for high-speed reads and writes of mathematical operands.
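As a sketch of why per-pixel image operations map so well onto parallel hardware, the contrast stretch below touches each pixel independently. NumPy stands in here for a GPU kernel; the frame size, bit depth and function name are illustrative assumptions, not part of any particular product.

```python
import numpy as np

# Hypothetical 12-bit sensor frame; dimensions and values are illustrative.
frame = np.random.randint(0, 4096, size=(1080, 1920), dtype=np.uint16)

def normalize_frame(frame, bits=12):
    """Per-pixel contrast stretch. Each output pixel depends only on its
    own input pixel (plus two global min/max reductions), so on a GPGPU
    the per-pixel work maps directly onto thousands of parallel threads."""
    lo, hi = int(frame.min()), int(frame.max())
    scale = ((1 << bits) - 1) / max(hi - lo, 1)
    return (frame.astype(np.float32) - lo) * scale

stretched = normalize_frame(frame)
```

On a GPGPU the same operation would be expressed as one thread per pixel, which is exactly the parallel-data-stream pattern described above.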
In terms of SWaP, GPGPUs minimize board real estate because of their high performance per processor (Figure 2). In general, for applications that require many parallel mathematical operations, GPGPUs outperform CPUs, so fewer processors are needed to perform the same task. GPGPUs also compare favorably with many CPUs in Gflops per watt, offering more performance per unit of energy consumed, a clear benefit for power-limited airborne platforms. At both the component and system level, all airborne applications require high-performance components and systems for reliable operation under harsh environmental conditions. Airborne imaging applications, however, demand additional features at the component and subsystem level.
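A back-of-envelope calculation illustrates the SWaP argument. Every figure below is an assumption for illustration only, not a vendor specification; real Gflops and wattage vary widely by device and workload.

```python
import math

# Illustrative, assumed figures -- not measured vendor specifications.
gpu = {"gflops": 1000.0, "watts": 75.0}   # hypothetical GPGPU module
cpu = {"gflops": 100.0,  "watts": 50.0}   # hypothetical CPU module

required_gflops = 4000.0  # notional sensor-processing load

def boards_needed(module, load_gflops):
    """Boards required to meet the load, assuming linear scaling."""
    return math.ceil(load_gflops / module["gflops"])

gpu_boards = boards_needed(gpu, required_gflops)   # 4 boards
cpu_boards = boards_needed(cpu, required_gflops)   # 40 boards
gpu_watts = gpu_boards * gpu["watts"]              # 300 W total
cpu_watts = cpu_boards * cpu["watts"]              # 2000 W total
```

Under these assumed numbers the GPGPU solution needs a tenth of the boards and roughly a seventh of the power, which is the "fewer processors, more Gflops per watt" point in concrete terms.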
Figure 2: The Ensemble 6000 series 6U OpenVPX (VITA 65) GSC6200 GPU processing module harnesses the tremendous compute power of graphics processing units (GPUs) for rugged, high-performance, embedded signal and image processing.
Frequent GPGPU Tech Upgrades
As imaging subsystem performance requirements inevitably increase and sensor payloads grow, a reliable way to increase performance will be required. Fortunately, GPGPU manufacturers such as NVIDIA and AMD release new, higher-performing GPGPUs roughly twice per year. However, a method for rapidly and seamlessly upgrading components to the latest GPGPUs, with minimal system downtime, should be available. One way to achieve this is to implement GPGPUs via the Mobile PCI Express Module (MXM), an industry-standard form factor. Both AMD/ATI and NVIDIA ship their GPGPUs on these surface-mount boards with defined connector specifications. Upgrading a carrier card with MXMs featuring the latest GPGPU technology is a much quicker way to field the latest technology than a complete re-spin of a GPGPU board containing soldered-down components.
At the subsystem level, openness and interoperability not only align with open-standards initiatives, they make it easy to incorporate key persistent imaging functionality such as camera interfaces and compression technology. The OpenVPX specification is a defined set of system architectures describing an open, interoperable embedded subsystem interface definition. High-data-rate switch fabrics, the ability to manage heterogeneous processing, and high-speed I/O are all critical requirements of high-performance airborne imaging applications, and all are aspects of OpenVPX-based subsystems.
Producing high-quality imagery on board a mobile platform poses some interesting algorithmic and SWaP challenges because mitigating platform- and sensor-induced distortions can degrade image quality. Also, meeting the simultaneous requirements for extracting real-time, actionable data and storing high-fidelity forensic data is a massive computational challenge.
Single sensor images (such as a warfighter would request of a particular sensor) must be stored for analysis. Forensic analysts can use this imagery to determine recurring activity of interest and view adversarial movement patterns.
Solving these tough challenges for a successful implementation takes a combination of specialized products. Advanced image processing functions that are optimized for execution on GPGPUs would certainly save countless hours of algorithm research, development and testing. Several types of algorithms are needed, including memory optimization algorithms for management of digitized sensor data in memory, image tagging functions to indicate areas for further analysis, and geometric correction functions to correct for image distortion due to factors such as camera tilt angle and curvature of the earth. Additionally, creating a composite image from individually processed sensor streams requires stitching or mosaicking operations.
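As an illustration of the geometric-correction step, the sketch below warps a grayscale frame through an inverse homography, a 3x3 matrix that can model perspective distortion from camera tilt. Nearest-neighbour sampling keeps the sketch short; a production implementation would use bilinear or bicubic interpolation and run on the GPU. The function name and matrix are assumptions for illustration.

```python
import numpy as np

def warp_homography(img, H_inv):
    """Correct perspective distortion by inverse-mapping every output
    pixel through H_inv, a 3x3 homography (grayscale input,
    nearest-neighbour sampling). Each output pixel is computed
    independently, so the mapping parallelizes naturally on a GPGPU."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H_inv @ pts.astype(np.float64)
    sx = np.round(src[0] / src[2]).astype(int)   # source column per pixel
    sy = np.round(src[1] / src[2]).astype(int)   # source row per pixel
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out
```

The same inverse-mapping pattern, with per-tile homographies, also underlies the stitching and mosaicking operations mentioned above.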
Beyond these imaging functions, additional software features are beneficial. Scalability is crucial to support multiple cameras. Video feed orientation, coverage area and resolution must be mutually independent variables. And a mature, fielded solution is clearly desirable for robustness.
Highly dense storage is the final technology needed to address the challenge of extracting actionable information from stored data. Sensor images need to be stored for both post-mission analysis and for serving the warfighter in a digital video recorder-type capability (Figure 3). Also, imaging techniques such as histogram analysis on stored data can be used to determine troop movement by identifying disturbed versus undisturbed ground. However, persistent imaging timeframes, particularly on unmanned flights, can be weeks or months long—calling for ultra-dense, scalable rugged storage.
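The histogram technique mentioned above can be sketched as follows: normalized intensity histograms of the same ground patch from two passes are compared with a chi-square-style distance, and a large distance flags possibly disturbed earth for an analyst. The function name, bin count and threshold are illustrative assumptions.

```python
import numpy as np

def histogram_change(before, after, bins=64):
    """Chi-square-style distance between normalized intensity histograms
    of the same ground patch imaged on two passes (8-bit data assumed).
    Returns a value in [0, 1]; larger means a bigger intensity change."""
    h1, _ = np.histogram(before, bins=bins, range=(0, 256))
    h2, _ = np.histogram(after, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    denom = h1 + h2
    denom[denom == 0] = 1.0          # avoid 0/0 for bins empty in both
    return 0.5 * np.sum((h1 - h2) ** 2 / denom)

# Illustrative threshold; in practice it would be tuned per sensor.
DISTURBED_THRESHOLD = 0.25
```

Because the comparison operates on stored imagery from different passes, it is exactly the kind of forensic workload that a dense on-board recorder enables.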
Figure 3: This rugged, customizable Data Storage Unit is optimized for SWaP, environmental, temperature and interface requirements. Storage capacity can be extended up to 96 Terabytes.
There are many elements of storage solutions to be considered, including environmental requirements such as vibration tolerance; data interface choices such as SATA, 10GbE, and Fibre Channel; degree of redundancy; availability of upgrade path; and security needs. But perhaps the most crucial, least negotiable factor is storage density; lack of sufficient storage density could hamper access to actionable information.
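A back-of-envelope sizing exercise shows why storage density is the least negotiable factor. All sensor figures below are assumptions chosen for illustration, not requirements from any actual program.

```python
# Rough mission-storage sizing; every figure here is an assumption.
frame_bytes = 1920 * 1080 * 2      # one 16-bit HD frame
fps = 30                           # frames per second, per sensor
sensors = 4                        # simultaneous sensor streams
days = 30                          # persistent-surveillance mission length

raw_rate = frame_bytes * fps * sensors          # ~0.5 GB/s raw
total_tb = raw_rate * 86_400 * days / 1e12      # ~1,290 TB raw

# Even with an assumed 10:1 on-board compression/reduction ratio, the
# mission still exceeds a 96-Terabyte unit, which is why on-board data
# reduction and storage density must be considered together.
stored_tb = total_tb / 10.0                     # ~129 TB
```

The point of the exercise is not the specific numbers but the scaling: mission length, sensor count and frame rate multiply together, so week- or month-long persistent missions quickly demand ultra-dense, scalable storage.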
Although key technology and components for extracting actionable data from overwhelming streams of sensor data are highlighted here, additional hardware components, software, system interconnect and management, plus integration time and expertise are required to create a total persistent imaging subsystem.
Mercury Computer Systems