Military application developers continue to hunger for ways to take advantage of improved communications and faster processing. They are also trying to coordinate information to create more automated, cohesive and mission-aware links between the various groups and devices. Today’s battle communications gear is certainly a far cry from the days when surveillance consisted mainly of wireless voice relay of human visual observations. But today’s system architects want to go to the next level, seeking to fuse data from high-performance sensors, COMINT, ELINT, situational awareness and other C4ISR applications: a collection of technologies that shares a common need for “fat data pipes.”
Interconnecting high-bandwidth sensors or large numbers of lower-bandwidth sensors on airborne, ground vehicle, or stationary platforms represents a data plane domain where earlier generations of Ethernet were unable to compete against technologies such as Serial FPDP and Fibre Channel. But thanks to the arrival of the speedy 10 Gbit Ethernet standard, the universal nature of Ethernet promises defense programs new levels of portability and easier software maintenance throughout the application signal path.
Because of the heavy processing load of running Ethernet protocol stacks at 10 Gbit rates, most 10 GigE implementations use some form of a protocol offload coprocessor to achieve high-data-transfer-rate performance and minimize processor utilization. The offload coprocessor, typically an ASIC in the commercial world, performs the heavy lifting of the protocol, reducing the burden on the processor.
Figure 1 depicts a COMINT or ELINT wideband analysis application with real-time record and playback capability. Real-time C4ISR applications such as these must send raw sensor data in parallel to storage and to multiple processing streams, a challenge that 10 GigE addresses handily. However, these applications also impose unique functional and performance requirements on the 10 GigE interface that differ significantly, at both the application and hardware level, from the 10 GigE interfaces used in the large server-based markets. It’s important to understand those unique requirements, the challenges they impose on the 10 GigE interface, and the architectures that solve them, and to recognize where typical commercial solutions fall short.
A critical element in ISR (Intelligence, Surveillance and Reconnaissance) systems, especially those performing multi-channel direction finding and sensor fusion, is the ability to time-tag the data with sufficient accuracy. The tags are used to align data arriving from the multiple sensors. They are also used when recorded data is played back later, re-injecting it into the system with the same timing fidelity with which it was originally captured.
Performing the time tagging in the CPU cannot meet the accuracy and precision requirements: by the time the packets reach that point, they have passed through several non-deterministic interfaces. Instead, tags should be applied as close to the 10 GigE physical interface as possible. The ASICs designed for commodity 10 GigE NIC cards generally do not provide hardware interfaces for packet time-stamping or playback staging. This leaves only the socket layer accessible for tagging, which is insufficiently accurate.
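The hardware approach typically latches a free-running tick counter at the MAC when a packet arrives, and software later converts that count to absolute time. The sketch below illustrates the arithmetic, assuming a hypothetical 156.25 MHz timestamp clock disciplined by a 1 PPS reference; the clock rate, function names and PPS scheme are illustrative assumptions, not the interface of any particular product.

```python
# Illustrative only: converting a hardware tick count latched at packet
# arrival into absolute UTC time, given a 1 PPS-disciplined counter.
# The 156.25 MHz rate is an assumed example figure.

TICK_HZ = 156_250_000  # assumed timestamp-counter frequency, ticks per second

def ticks_to_utc(tick_count: int, pps_epoch_s: int, pps_tick: int) -> float:
    """Map a latched tick count to UTC seconds.

    pps_epoch_s -- UTC second of the most recent 1 PPS pulse
    pps_tick    -- counter value latched at that PPS pulse
    """
    delta_ticks = tick_count - pps_tick
    return pps_epoch_s + delta_ticks / TICK_HZ

# A packet latched 78,125,000 ticks after the PPS pulse arrived exactly
# 0.5 s into that UTC second (78,125,000 / 156,250,000 = 0.5).
t = ticks_to_utc(78_125_000 + 1234, 1_000_000, 1234)
```

Because the counter is latched in hardware at the MAC, none of the non-deterministic latency in the DMA, bus or protocol stack pollutes the timestamp; the host only performs this deterministic conversion.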
Keeping “Corrupt” Packets
Since ISR systems take in sampled and digitized representations of real-world signals, they flow the data through signal processing algorithms such as filtering, FFT, decoding, or other processing for detailed analysis. Some of these algorithms can correct or tolerate a number of scattered errors, but they cannot cope with a consecutive swath of missing data. A good analogy is a few letters missing from a sentence versus an entire paragraph going missing.
Ethernet protocol stacks are designed to discard a packet if a checksum error is detected. This means discarding up to 1500 or even 9000 consecutive bytes (depending on the network’s MTU), based on the type of error and the layer at which it occurs. Ironically, the source of the error could be a small corruption in a packet’s header, which does not affect the integrity of the payload data at all. While this behavior makes good sense in most network applications of Ethernet, where higher protocol or application layers deal with it, it can severely and unnecessarily hamper the performance of real-time sensor processing applications.
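The discard behavior follows from how the checksum works. The sketch below implements the standard 16-bit ones’-complement Internet checksum (RFC 1071) used in IPv4, UDP and TCP headers, and shows that a single flipped bit anywhere in the covered bytes invalidates the checksum, so an enforcing stack drops the whole packet even when the sensor payload is intact. The 20-byte test packet is a stand-in, not real traffic.

```python
# The RFC 1071 Internet checksum: 16-bit ones'-complement sum of 16-bit
# words, with carries folded back in, then complemented.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

packet = bytes(range(20))                         # stand-in header + payload
ok = internet_checksum(packet)

# Flip one bit in the first (header) byte: checksum no longer matches,
# so a conventional stack would discard all 20 payload-carrying bytes.
corrupt = bytes([packet[0] ^ 0x01]) + packet[1:]
bad = internet_checksum(corrupt)
```

A receiver verifies by summing the data together with the transmitted checksum; an undamaged packet folds to zero. A sensor-oriented interface can instead verify the checksum, flag the packet as suspect, and still deliver the payload.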
As a result, when it comes to using Ethernet in C4ISR systems, a solution with the capability to avoid dropping packets with errors is essential. Again, the requirement for this capability is unique to systems performing signal processing functions on digitized sensor data and does not apply to the vast majority of Ethernet networking applications in the commercial space.
Extended Duration Data Bursts
Capturing hard real-time sensor data differs fundamentally from typical commercial applications. Consider an airborne surveillance radar platform or a ground-based EW platform. During the application’s “listening phase” the sensor is rapidly digitizing a stream of incoming signals. This is followed by a longer rest period until the next outgoing pulse is transmitted. When Ethernet is used as the incoming sensor fabric, the listening phase brings the prospect of receiving multiple consecutive packet bursts at full line rate, without the option of controlling the flow by asking a sensor to pause its transmission.
Typically, a 10 GigE Ethernet interface card buffers incoming data in host system memory, located external to the card and accessed over PCI-X or PCIe. However, there is often at least momentary contention for access to system memory, which, in the case of consecutive line-rate bursts, can result in dropped data. Therefore, a 10 GigE interface used for these applications must be designed with sufficient local data buffering.
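How much local buffering is enough can be estimated from the gap between the inbound line rate and the sustained drain rate to host memory. The sketch below walks through that sizing arithmetic; the 8 Gbit/s drain figure and the function name are illustrative assumptions, not measured values for any particular bus or board.

```python
# Sizing on-card buffer memory for a full line-rate burst when the drain
# path (e.g. PCIe to host memory) is momentarily slower than the link.

LINE_RATE_BPS  = 10e9 / 8   # 10 Gbit/s inbound, in bytes per second
DRAIN_RATE_BPS = 8e9 / 8    # assumed sustained drain to host, bytes per second

def required_buffer_bytes(burst_s: float) -> float:
    """Bytes that accumulate on-card during a burst lasting burst_s seconds."""
    backlog_per_s = LINE_RATE_BPS - DRAIN_RATE_BPS
    return burst_s * backlog_per_s

# Under these assumptions, a 100 ms listening-phase burst leaves a 25 MB
# backlog (250 MB/s shortfall x 0.1 s) that must fit in local memory to
# avoid drops.
need = required_buffer_bytes(0.100)
```

The same calculation run in reverse bounds the longest line-rate burst a given amount of on-card memory can absorb without loss.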
High-performance embedded real-time applications such as reconnaissance aircraft, FCS ISR drones or UAVs with SIGINT payloads, place unique real-time performance demands on the datapipe they use. To effectively address these demands, 10 GigE technology must implement features including interfaces for precision time-stamping, local memory to accommodate large full-rate inbound bursts and outbound data staging, and the ability to customize stack behavior for receiving real-time sensor data.
Generally, these performance requirements exceed what commodity NIC cards and the ASICs they are based on can do, since those products are optimized for a different set of application requirements and lack some or all of the required features. However, carefully designed 10 GigE products, such as AdvancedIO’s V1021 board, facilitate the use of the latest incarnation of widespread Ethernet in high-performance real-time applications (Figure 2). The board has already been deployed for that purpose in such C4ISR systems.
Vancouver, British Columbia