Long gone are the days when data acquisition involved a person manually monitoring and writing down the status of some process to be analyzed later. Today, complex processes require data collection and data analysis to be done immediately and accurately. Whether the process is monitoring the position of ailerons, the pressure of a ground vehicle’s oil fluid, or the temperature of the space shuttles’ rockets, today’s data acquisition boards are vital for collecting, digitizing and storing/transferring the right data at the right time.
Getting the right data requires an understanding of two types of errors that influence the overall conversion bit resolution of any data acquisition board: random errors and bias errors. For a deeper look into that area, see the sidebar "For Effective Data Collection: Know Thy Error Types." Random errors affect the precision of the data collected and can typically be reduced by acquiring a better data acquisition board and installing it in a properly shielded environment. Bias errors, by contrast, affect the accuracy of the data and are reduced through proper calibration of the board. Those bias errors can be corrected through two calibration techniques: manual and autocalibration. Manually calibrated analog I/O boards are widely used in industry and in many situations offer rugged, reliable, consistent performance without software or hardware overhead. When manual calibration is not enough, two mainstream auto-calibration methods exist: software calibration (SoftCal) and hardware calibration (HardCal). RTD Embedded Technologies has introduced a third auto-calibration method in its SDM (Smart dataModules) product line: SmartCal (Smart Calibration), which combines the capabilities of both methods and adds features of its own.
SoftCal adds the ability to recalibrate without having to dismantle the data acquisition system, but at the cost of burdening the CPU and some, if not all, of the surrounding computer buses. HardCal eliminates the SoftCal deficiencies by having the calibration execution resident on the data acquisition board. SmartCal advances HardCal by increasing the execution speed of the onboard calibration and any user-defined data processing programs through the use of a DSP. The increased execution speed compounded with operating system independent calibrations expands the versatility and usability of the data acquisition board in a wider array of user applications.
Choice of Calibration Techniques
Both manual and auto-calibration follow the same general process: collect a large number of samples of a precisely known voltage, perform some filtering or averaging on the data, and determine whether the measured voltage matches the actual voltage. If it does not, adjustments must be made to the circuitry and the process repeated. Though initial calibration of data acquisition boards occurs at the factory, regular adjustments may be required to maintain precision and accuracy.
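The general process above can be sketched as a small convergence loop. This is a minimal illustration, not any vendor's algorithm: the constant-offset ADC model, the 256-sample average and the simple trim step are all assumptions made for the sake of the example.

```python
def calibrate(read_adc, reference_v, tolerance_v, max_passes=50):
    """Iteratively adjust an offset trim until the averaged reading of a
    precisely known reference voltage falls within tolerance.

    read_adc    -- callable returning one raw reading in volts (illustrative)
    reference_v -- the known reference voltage being sampled
    tolerance_v -- acceptable residual error
    """
    offset_trim = 0.0
    for _ in range(max_passes):
        # Collect many samples and average them to suppress random noise.
        samples = [read_adc() + offset_trim for _ in range(256)]
        measured = sum(samples) / len(samples)
        error = measured - reference_v
        if abs(error) <= tolerance_v:
            return offset_trim          # converged: measured matches actual
        offset_trim -= error            # adjust the "circuitry" and repeat
    raise RuntimeError("calibration did not converge")
```

For example, an ADC path with a fixed +30 mV offset would converge to a trim of -30 mV in a couple of passes.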
Less expensive, but highly effective, data acquisition boards require manual calibration of their mechanical potentiometers to adjust offset and gain errors. Manual calibration tends to be lengthy and requires external voltage references, but depending on the type of potentiometer used and the data collection environment, this type of calibration procedure will be suitable for many applications.
The manual calibration process involves powering up the data acquisition board for some period of warm-up time in a controlled thermal environment, connecting the analog inputs to a precision voltage source, reading the digitized voltage, and manually adjusting potentiometers until the converted voltage equals the precision source. These alignments require precision test equipment and step-by-step calibration procedures to ensure uniformity across the product line being manufactured. This process must be repeated for the various gains and offsets. If a data acquisition board has analog output channels, these too must be calibrated using a similar method.
By setting each analog output to some value and measuring it with a precision voltmeter, the D/A (digital-to-analog converter) path offsets and gains can be adjusted until the output on the voltmeter matches the theoretical output of the converter. The resulting A/D and D/A calibrations may be valid only at the environmental temperature at which they were performed. Manual calibration can be relied on at other temperatures only if the board’s characteristic thermal drift error is less than the required resolution of the data being collected.
Users of manually calibrated data acquisition boards will sometimes profile a board over the environmental temperature range it will be exposed to, so that software corrections can be made for offset and gain drifts in post-data-collection analysis. Auto-calibration is required when the user does not have physical access to the data acquisition board, when the user does not want to provide data correction software, or when the collected data is being processed in real time for immediate use and there is no time for corrective algorithms.
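The post-collection software correction described above amounts to looking up the profiled offset for the recorded temperature and subtracting it. A minimal sketch, assuming a linear interpolation between profiled calibration points (the profile format and the pure-offset drift model are illustrative assumptions):

```python
def drift_correct(raw_v, temp_c, profile):
    """Remove profiled thermal offset drift from one sample.

    raw_v   -- sampled voltage
    temp_c  -- environmental temperature recorded with the sample
    profile -- list of (temperature_C, offset_V) pairs measured when
               the board was profiled over its temperature range
    """
    pts = sorted(profile)
    if temp_c <= pts[0][0]:            # below the profiled range: clamp
        offset = pts[0][1]
    elif temp_c >= pts[-1][0]:         # above the profiled range: clamp
        offset = pts[-1][1]
    else:
        # Linear interpolation between the two surrounding profile points.
        for (t0, o0), (t1, o1) in zip(pts, pts[1:]):
            if t0 <= temp_c <= t1:
                offset = o0 + (o1 - o0) * (temp_c - t0) / (t1 - t0)
                break
    return raw_v - offset
```

Note that this requires the environmental temperature to be recorded alongside every sample, which is exactly the extra burden auto-calibration removes.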
The SoftCal Approach
SoftCal data acquisition boards can be calibrated by executing a software routine on the host CPU. This is possible because the data acquisition board has onboard programmable digital-to-analog converters or digital potentiometers—instead of manual potentiometers—and onboard high-precision references. The board is initialized by loading predefined factory settings, which are typically stored onboard. The user can then recalibrate the board as necessary through a host-side routine.
The process involves digitizing a number of samples of a high-precision voltage reference found on the data acquisition board, sending them to the CPU, calculating any deviations from the known value, and sending corrective data back to the data acquisition board so that digital potentiometers or D/A converters can be adjusted to correct for offset and/or gain errors. This process is repeated by the CPU until the proper voltage reading is achieved. SoftCal data acquisition boards can be readily calibrated in the field with no external equipment required except the CPU.
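The corrective data the CPU computes typically amounts to a gain and an offset for each signal path. As an illustration of the arithmetic only (two-point calibration against a low and a high reference; the function names and reference values are assumptions, not any vendor's routine):

```python
def solve_gain_offset(ref_lo, meas_lo, ref_hi, meas_hi):
    """Two-point calibration: fit measured = gain * actual + offset
    from readings of two known references, e.g. 0 V and full scale."""
    gain = (meas_hi - meas_lo) / (ref_hi - ref_lo)
    offset = meas_lo - gain * ref_lo
    return gain, offset

def correct(measured_v, gain, offset):
    """Invert the fitted error model to recover the actual voltage."""
    return (measured_v - offset) / gain
```

On a SoftCal board the host would translate the fitted gain and offset into settings for the onboard digital potentiometers or trim DACs and then re-verify, repeating until the reading is within tolerance.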
One drawback to SoftCal is that a complex external routine must be integrated into the host-side application to perform the calibration when necessary. This may prevent the CPU from performing other tasks needed in full system operation. The CPU time dedicated to calibration depends on the data acquisition hardware, the calibration software routine and the CPU processing speed. The SoftCal software may also have compatibility issues with user-specific applications.
If a manufacturer’s SoftCal data acquisition board is to be sold as a generic product to various users, it must support all possible operating systems (Linux, Windows, DOS and various RTOSes), compilers (for example, Borland Turbo C++ and OpenWatcom) and compiler versions. Binary modules are not a good solution because they are usually not portable across different compilers or between compiler versions. Though language-neutral software frameworks such as COM or CORBA may reduce many of these problems, these systems impose limitations of their own. Binary modules can also be a problem for customers using non-x86 CPUs. All of these drawbacks are solvable, but they require more effort by, and dependence on, the supplier of the SoftCal data acquisition board.
The HardCal Approach
HardCal data acquisition boards are SoftCal data acquisition boards with a processing unit residing on the data acquisition board. The calibration process is self-contained on the data acquisition board without the need of any special host-side software algorithm. The data acquisition boards can be recalibrated at will by the user or be set to recalibrate based on information received by a sensor such as a digital temperature sensor.
With HardCal, a one-line command on the host side can initiate a calibration of the data acquisition board by the onboard processor chip. Depending on the complexity of the data acquisition board, the SoftCal method can consist of hundreds or thousands of commands and may call several operating system-specific functions. The HardCal implementation is clearly easier to port across different platforms and requires less CPU overhead. HardCal offloads the difficult details of auto-calibration to the onboard processor, allowing software developers to focus on their applications. The user does not have to know anything about the data acquisition board’s processor chip to perform this function.
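The "one-line command" contrast can be made concrete with a toy model. Everything here is hypothetical: the register names, the bit positions and the instantaneous completion stand in for real memory-mapped I/O and an onboard processor that actually runs the calibration.

```python
CAL_START_BIT = 0x01   # hypothetical bit in a board control register
CAL_DONE_BIT = 0x01    # hypothetical bit in a board status register

class FakeBoard:
    """Toy stand-in for a HardCal board's register interface. On real
    hardware the write lands in a control register and the onboard
    processor performs the whole calibration sequence itself."""
    def __init__(self):
        self.control = 0
        self.status = 0

    def write_control(self, value):
        self.control = value
        if value & CAL_START_BIT:
            # The onboard processor would calibrate here; we model the
            # entire procedure as completing instantly.
            self.status |= CAL_DONE_BIT

def hardcal_calibrate(board):
    """The entire host-side 'routine' under HardCal: one register write."""
    board.write_control(CAL_START_BIT)
```

Under SoftCal, the body of `hardcal_calibrate` would instead be the full sample/average/adjust loop, running on the host and tied to its operating system.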
In general, HardCal data acquisition boards have successfully used microcontrollers for the onboard processing chip. Unfortunately today’s microcontrollers have limited processing speed and functionality and are not designed for math-intensive computations. This limits calibration speed and user application options.
“Smart” DSP-based Calibration
RTD has developed a product line known as SmartCal, which eliminates the drawbacks of SoftCal and HardCal by incorporating a Digital Signal Processor (DSP) as the onboard processor for fast calibration of their A/D and D/A data paths, achieving calibration in under 300 ms (Figure 1). Since DSPs are optimized for mathematical computations, calibration can be performed much faster than with a microcontroller. And, since they are based on the HardCal principle, no user knowledge of DSPs is required—the process is transparent.
The advantage of the SmartCal approach is that if needed, the user can unlock the full potential of the resident DSP by storing their own custom data processing application in the onboard flash. This feature means that the user now has the capability of higher-speed data processing and control compared to a microcontroller while reducing or eliminating CPU intervention.
A calibration can be initiated in three ways: the host CPU can command the data acquisition board to calibrate itself by setting a register bit; the onboard DSP can autonomously initiate a calibration when it deems one necessary; or the data acquisition board can send a Calibrate Interrupt to the host CPU, whose interrupt service routine then determines when a calibration should start. An example of the second case is a user-stored DSP program that sets upper and lower limits for the onboard temperature sensor, so that when the temperature goes out of bounds the board auto-calibrates.
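The temperature-window policy in that second case might look like the following sketch. The class name, the default +/- 5 degree window and the re-arming behavior are illustrative choices, not features of any particular board's firmware.

```python
class AutoCalPolicy:
    """Recalibration policy for an autonomous DSP: arm a temperature
    window around the temperature at the last calibration and request a
    new calibration whenever the onboard sensor leaves that window."""

    def __init__(self, window_c=5.0):
        self.window_c = window_c
        self.cal_temp_c = None          # no calibration recorded yet

    def on_calibrated(self, temp_c):
        """Record the temperature at which calibration just completed."""
        self.cal_temp_c = temp_c

    def should_calibrate(self, temp_c):
        """True when the sensor reading is outside the armed window
        (or no calibration has been recorded yet)."""
        if self.cal_temp_c is None:
            return True
        return abs(temp_c - self.cal_temp_c) > self.window_c
```

The DSP would poll the temperature sensor, call `should_calibrate`, run the calibration when it returns true, and then call `on_calibrated` to re-center the window.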
SmartCal boards use DSPs over microcontrollers for good reason. The Harvard architecture of the DSP provides the multiple buses needed to simultaneously read an instruction, read a data value, process the instruction and write a data value. Because of their bus structure and internal multiply/accumulate hardware, DSPs handle multiply/add cycles more efficiently than their microcontroller counterparts.
With multiple program/data buses and internal streamlining for mathematical operations, the data gets processed and moved around much faster than in a microcontroller. The Texas Instruments TMS320F2812 flash-based DSP has a 32 x 32-bit hardware MAC (Multiply/Accumulate) and 64-bit processing capabilities, making it extremely efficient in math-intensive applications.
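The workload a hardware MAC accelerates is the inner loop below: one multiply and one add per coefficient tap, which a DSP like the TMS320F2812 executes in a single cycle per iteration, while a general-purpose microcontroller typically needs several. The function itself is just an ordinary dot product written out to show the pattern.

```python
def mac_dot(samples, coeffs):
    """Multiply/accumulate loop, the core of FIR filtering and weighted
    averaging used during calibration data processing."""
    acc = 0.0
    for x, c in zip(samples, coeffs):
        acc += x * c        # one multiply, one accumulate per tap
    return acc
```

A 4-tap moving average is just this loop with equal coefficients: `mac_dot(samples, [0.25] * 4)`.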
Calibration by the DSP is performed by a sophisticated convergent algorithm that reads multiple onboard precision references and appropriately adjusts analog offsets to obtain readings accurate to within +/- ½ LSB. The calibration is done in the analog signal path in order to avoid creating missing or duplicate digital codes, which can lead to non-linear resolution problems. Calibrating elsewhere can reduce the accuracy of a system beyond what the data acquisition board’s specifications suggest. When calibration finishes, roughly 300 ms after it begins, the DSP notifies the host by setting a status bit and/or generating an interrupt. Since the DSP performs the calibration, the process is operating-system-independent.
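To put the +/- ½ LSB figure in perspective, the size of one LSB follows directly from the converter's input span and bit count. The span and resolution used here are illustrative, chosen only to show the arithmetic:

```python
def lsb_volts(full_scale_span_v, bits):
    """Voltage represented by one code step of an ideal converter:
    span / 2^bits."""
    return full_scale_span_v / (2 ** bits)
```

For a 12-bit converter over a 20 V span (for example, +/-10 V inputs), one LSB is about 4.88 mV, so calibrating to within +/- ½ LSB means holding residual errors under roughly 2.44 mV.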
SmartCal boards are calibrated at the factory after a warm-up period, with the resulting set up values stored in an onboard EEPROM. As long as the board has not been recalibrated, these values will continue to be loaded as default values. Should the user decide to recalibrate, these new settings will be stored into a second location within the EEPROM. Restoration of the last known calibrated values occurs upon reboot. The user does have the ability to recall the initial factory defaults, if so desired.
Figure 2 shows an example of RTD’s SmartCal boards, the PC/104-Plus SDM7540. This board offers SmartCal features by utilizing the Texas Instruments TMS320F2812 DSP. There are sixteen single-ended analog inputs with a scan rate as high as 1.25 Msamples/s, 12-bit resolution, up to x64 gain and a FIFO. Complex scanning algorithms, including single conversion, multiple conversions, channel scanning, bursting and multi-bursting, allow the SDM7540 to collect comprehensive information, not just pieces of data. This is accomplished through the use of a CGT (Channel/Gain Table), which stores channel, gain, polarity, range, single-ended/differential and skip bits for each data collection point taken. There are two digital-to-analog outputs that settle fast enough to support update rates of up to 200 kHz. Each output has its own FIFO for data buffering, allowing it to be used as a signal generator without repetitive software overhead. During recalibration, the outputs are grounded so that the user does not have to disconnect analog outputs from servos, pumps or other devices. Once calibration is complete, the outputs are returned to a user-specified voltage.
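A Channel/Gain Table entry is essentially a bit-packed per-sample configuration word. The sketch below shows the idea only; the field widths and bit positions are invented for illustration and are not the SDM7540's actual register format.

```python
def pack_cgt_entry(channel, gain_code, bipolar, differential, skip):
    """Pack one hypothetical Channel/Gain Table entry into a word.

    channel      -- input channel number, 0-15
    gain_code    -- encoded gain setting (e.g. 0 = x1 ... up to x64)
    bipolar      -- 1 for bipolar range, 0 for unipolar
    differential -- 1 for differential input, 0 for single-ended
    skip         -- 1 to skip this entry during a scan
    """
    assert 0 <= channel < 16 and 0 <= gain_code < 8
    word = channel                 # bits 0-3: channel number
    word |= gain_code << 4         # bits 4-6: gain code
    word |= bipolar << 7           # bit 7:    polarity
    word |= differential << 8      # bit 8:    single-ended/differential
    word |= skip << 9              # bit 9:    skip flag
    return word
```

The board's sequencer walks such a table entry by entry, so each conversion in a scan or burst can use a different channel, gain and range without host intervention.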
For Effective Data Collection: Know Thy Error Types
In order for a user to ensure that the data collected for an application has the appropriate accuracy and precision, two types of errors that influence the overall conversion bit resolution of any data acquisition board must be understood. These are random errors and bias errors.
Random error is random noise generated by each component on a board and by the design of the board itself. The magnitude of this error depends on the quality of the components chosen and the quality of the schematic design and PCB layout. Once the board is designed and manufactured, this error is fixed and can only be reduced through data averaging. The resident A/D and D/A semiconductor chips set the best-case conversion bit resolution for any data acquisition board. Manufacturers typically publish two numbers to describe this chip resolution: the marketing resolution and the theoretical resolution obtained under ideal conditions.
Depending on the quality of the part, and the environment it is operating in, the actual value will approach, but not achieve, the theoretical value at all input data frequencies. For example, one manufacturer’s 8-bit converter may have a high noise floor at a particular input frequency that gives it an actual conversion capability of 7 bits. This is true whether you are talking about a 4, 12, 16, 24, or any other bit-level converter.
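This "actual conversion capability" is commonly quantified as the effective number of bits (ENOB), computed from the measured signal-to-noise ratio with the standard formula ENOB = (SNR - 1.76 dB) / 6.02 dB:

```python
def effective_bits(snr_db):
    """Effective number of bits from a measured SNR in dB, using the
    standard relation SNR = 6.02 * N + 1.76 for an ideal N-bit converter."""
    return (snr_db - 1.76) / 6.02
```

An ideal 8-bit converter has an SNR of 6.02 * 8 + 1.76 = 49.92 dB; a part whose noise floor holds the measured SNR down to 43.9 dB at some input frequency is effectively a 7-bit converter there, matching the example above.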
Bias errors are offsets caused by thermal drift, temporal drift and/or other sources. Typically these errors are calibrated out by the manufacturer before shipping to customers. As time progresses, and as the environmental temperature departs from that at which the board was calibrated, bias errors may become noticeable in sampled data. This drift can be minimized by using high-end components with high stability coefficients over the expected environmental conditions under which data is to be monitored.
If this is not possible, then bias errors can be removed through recalibration, assuming this type of circuitry is available on the data acquisition board and is user-accessible. If the bias error is due solely to thermal drift, the user also has the option of profiling the thermal drift of the board and using data analysis software to remove any errors. This method requires recording the environmental temperature as well as the sampled data. This topic is covered in the discussion of manual calibration above.
How users choose to deal with random and bias errors is dependent on their performance requirements and budget constraints, which may or may not be related. Performance is relative and dependent on the type of data the user is obtaining. If the data collection is for a short time period, the data acquisition board is in a stable environment and the resolution requirement is not high, then an inexpensive data acquisition board can be used to obtain reliable results. Using a more elaborate, expensive board would be overkill and could ultimately affect the board’s Mean Time Between Failures (MTBF) or Mean Time Between Critical Failures (MTBCF), for better or worse.
A more expensive board may indicate the use of more reliable components, but it could also mean more complex circuits, which can be possible points of failure. If the data collection is over a long period of time, at high speeds, in a wide, fast moving thermal environment, with each input channel having differing set-up parameters, then the data acquisition board may require high-end components and complex circuitry to meet these requirements. Sidebar Figure A summarizes the cost tradeoff for the type of data acquisition board used based on its susceptibility to thermal and temporal drifts.
If manual or auto-calibration is the desired means of minimizing bias errors so that the overall effective bit resolution of the data acquisition board can be realized, then finely tunable components and circuits must be incorporated on the data acquisition board. By averaging sampled data, random noise can be suppressed, revealing the bias errors. These errors can then be reduced below the theoretical bit resolution of the converters.
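The averaging claim can be demonstrated numerically: the spread of an N-sample average of pure random noise shrinks roughly as 1/sqrt(N), which is what lets a small bias emerge from beneath the noise. The simulation below is purely illustrative; the noise level, trial count and seed are arbitrary choices.

```python
import random

def averaged_std(n_avg, n_trials=2000, noise_sd=1.0, seed=1):
    """Empirical standard deviation of an n_avg-sample mean of Gaussian
    noise, estimated over many trials. Expected to scale ~ 1/sqrt(n_avg)."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, noise_sd) for _ in range(n_avg)) / n_avg
             for _ in range(n_trials)]
    m = sum(means) / n_trials
    return (sum((x - m) ** 2 for x in means) / n_trials) ** 0.5
```

Averaging 16 samples, for instance, cuts the random spread by roughly a factor of four, so a bias error of a fraction of the raw noise level becomes clearly measurable and correctable.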
RTD Embedded Technologies
State College, PA.