Sensor Fusion: The Basics

By Carolyn Mathas

Contributed By: Electronic Products


Many of today’s complex applications require computers to draw information from a noisy world where input may be limited in accuracy and is often incomplete. To meet the challenge, multi-sensor fusion is evolving rapidly as the basis of robust systems that can make sense of imperfect input, whatever the environment in which they operate. Drawing on techniques such as artificial intelligence (AI), pattern recognition, digital signal processing, control theory, and statistical estimation, data from multiple micro-electromechanical systems (MEMS) sensors are fused to improve responsiveness and accuracy, delivering applications that until recently could only be theorized.

Combining multiple “senses” for an improved output is not a new concept; we humans do it without thinking. Taste is a good illustration of sensor fusion. Our brain combines data from the more than 10,000 taste buds on our tongue (which detect sweet, salty, sour, and bitter), factors in smell from our nose and food texture from our mouth, and determines (1) the flavors we are tasting and (2) whether or not we like what we taste.

Advances in sensor technology and processing techniques, combined with improved hardware, make real-time fusion of data possible. By combining MEMS accelerometers, gyroscopes, pressure sensors, magnetic sensors, and microphones into multi-sensor modules with on-board processing and wireless connectivity, designers can now emulate in hardware and software the natural data fusion capabilities described above. These characteristics should enable sensor fusion applications to chalk up impressive growth for the foreseeable future. For example, IHS iSuppli estimates that the total available market for 9-axis motion sensor fusion could top $850 million in 2012 and rise rapidly to $1.3 billion in 2015.

Part of the reason for this expected growth is that MEMS devices and advanced sensor technologies are no longer prohibitively expensive. Calibration, for instance, is now accomplished at the factory or automatically. Miniaturization is also playing a big role. Most importantly, in sensor fusion applications the combined result outperforms what the individual sensors can deliver on their own. As new fusion algorithms are created to provide ever-greater accuracy, further improvements will follow.

From the military to consumer use

At one time, applications were concentrated in the military realm, with data fusion based on multiple sensors used in target tracking, automated target identification, surveillance, and autonomous vehicle control. Specific benefits for military use included increased accuracy and an enhanced ability to characterize, identify, and locate an object relative to something else. For example, position, direction, motion, size, and velocity are more useful when combined than when considered separately. With its success in military use, sensor fusion technology has now moved on to a variety of applications including automotive, medical, complex industrial machine control, robotics, consumer electronics, and more.

Depending on the application at hand, data fusion can take place in different ways, which differ mainly in the level at which the data are combined and in the number of elements being measured. In feature-level fusion, features extracted from different sensor observations or measurements are combined into a single concatenated feature vector. In decision-level fusion, each sensor first measures or evaluates the target individually, and the resulting per-sensor decisions are then combined.
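
As a minimal illustration of the difference, the Python sketch below contrasts the two approaches; the sensors, features, and weights are hypothetical and not drawn from any particular product.

```python
# Illustrative sketch only: feature-level vs. decision-level fusion.

def feature_level_fusion(accel_features, gyro_features):
    """Concatenate per-sensor feature vectors into one vector that a single
    downstream classifier or estimator then processes."""
    return accel_features + gyro_features  # list concatenation

def decision_level_fusion(decision_a, decision_b, weight_a=0.5, weight_b=0.5):
    """Each sensor has already produced its own decision (here, a confidence
    score between 0 and 1); fusion combines those decisions by weighted voting."""
    return weight_a * decision_a + weight_b * decision_b

# Feature level: a 3-element accelerometer feature vector and a 2-element
# gyroscope feature vector become a single 5-element vector.
fused_features = feature_level_fusion([0.02, 0.95, 0.31], [0.10, 0.05])

# Decision level: two independent detections (80% and 60% confidence) are
# merged into a single 70% confidence decision.
fused_decision = decision_level_fusion(0.8, 0.6)
```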

Sensor fusion can also be centralized or decentralized, depending on where the fusion of the data occurs. In a centralized architecture, data are forwarded to a central location to be correlated and fused. In a decentralized architecture, each sensor or platform retains a degree of autonomy in the decision making that results from fusion.

Designers are well on their way to realizing that sensor fusion has arrived and are planning for it in their application programming interfaces (APIs). This preparation results in software that complements and overcomes the limitations of, for example, motion sensors, leading to increasingly immersive games and highly accurate augmented reality.

A sensor fusion development kit

Let’s now look at sensor fusion targeting smart consumer devices. STMicroelectronics offers a 9-axis sensor fusion engine for OEMs called iNEMO (Figure 1). Available on a development board, iNEMO is an advanced software engine that fuses accelerometer, gyroscope, and magnetometer data to deliver accurate and reliable motion-sensing information that is easy to integrate. ST’s iNEMO family comprises the company’s first IMU devices with 10 degrees of freedom (DOF), offering 3-axis sensing of linear, angular, and magnetic motion along with temperature and barometric pressure/altitude readings, combined with a 32-bit processing unit and dedicated software. The modules contain a memory-card socket for data logging and dedicated connectors for wired/wireless connectivity including USB, ZigBee, or GPS.

Figure 1: The iNEMO Engine is immune to magnetic interference for high performance in real-world conditions.

The iNEMO Engine PW8 software pack is included with the iNEMO Engine as a precompiled library for STM32F103 MCUs, together with the device firmware upgrade tool. The Engine Pro sensor fusion algorithm provides absolute point-tracking and motion-tracking accuracy, immunity to magnetic interference, few user-calibration interruptions, reliable compass headings for accurate navigation, and accurate direction for true augmented reality applications, and it is supported by the Windows 8 sensor class.

Once the STEVAL-MKI119V1 evaluation board has been upgraded with the iNEMO Engine PW8 software package, it can be connected through its USB interface to any PC running the Windows 8 operating system. Once connected, the kit is recognized as an HID sensor cluster and exposes gyroscope, accelerometer, heading, inclinometer, and quaternion data for use in user applications.

The iNEMO engine fuses data from the integrated 9-axis sensor suite (Figure 2) with algorithms that use true high-number-of-states adaptive Kalman filtering. The adaptive filters converge so that correct heading data override magnetic distortions and anomalies, yielding accurate and reliable output.

Figure 2: Nine-axis Sensor Fusion (Courtesy of STMicroelectronics).
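
ST does not publish the internals of this filter, so the sketch below is not the iNEMO algorithm. It is a deliberately simplified, one-state Kalman filter that fuses gyro-integrated yaw (the prediction) with magnetometer heading (the measurement), showing only the predict/update structure that a high-number-of-states adaptive filter builds on; the noise values q and r are assumed for the example.

```python
# Minimal one-state Kalman filter sketch (illustrative, not ST's iNEMO code).
class HeadingKalman:
    def __init__(self, q=0.01, r=4.0):
        self.heading = 0.0   # estimated yaw, degrees
        self.p = 1.0         # estimate variance
        self.q = q           # process noise per step (gyro drift), deg^2
        self.r = r           # measurement noise (magnetometer), deg^2

    def predict(self, gyro_rate_dps, dt):
        # Propagate the heading with the gyro rate; uncertainty grows.
        self.heading = (self.heading + gyro_rate_dps * dt) % 360.0
        self.p += self.q

    def update(self, mag_heading_deg):
        # Blend in the magnetometer heading; the Kalman gain weights the
        # measurement according to how uncertain the prediction has become.
        innovation = ((mag_heading_deg - self.heading + 180.0) % 360.0) - 180.0
        k = self.p / (self.p + self.r)
        self.heading = (self.heading + k * innovation) % 360.0
        self.p *= (1.0 - k)

kf = HeadingKalman()
kf.predict(gyro_rate_dps=1.5, dt=0.01)   # 100 Hz gyro sample
kf.update(mag_heading_deg=92.0)          # occasional compass fix
```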

As shown in Figure 3, the required inputs include:

  • Accelerometer output data: x, y, z
  • Gyroscope output data: yaw, pitch, roll
  • Magnetic sensor output data: x, y, z
  • Library configuration parameters

The software library outputs consist of the following (a derivation sketch appears after Figure 3):

  • Quaternions: the four-element quaternion representing the device attitude (heading, pitch, roll)
  • Rotation: heading, pitch, and roll
  • Linear acceleration: device-frame linear accelerations
  • Gravity: device-frame gravity acceleration

Figure 3: Sensor fusion input and output example. (Courtesy of STMicroelectronics.)
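
The listed outputs are closely related; the Python sketch below (my own illustration, not the library’s code) shows one common way to derive heading/pitch/roll, device-frame gravity, and linear acceleration from a unit attitude quaternion (w, x, y, z) and a raw accelerometer sample, assuming the usual aerospace sign conventions.

```python
# Illustrative derivation of the library's listed outputs from a unit
# quaternion (w, x, y, z); conventions are assumed, not ST's documented ones.
import math

def quaternion_to_hpr(w, x, y, z):
    """Convert an attitude quaternion to heading (yaw), pitch, and roll in degrees."""
    heading = math.degrees(math.atan2(2.0 * (w * z + x * y),
                                      1.0 - 2.0 * (y * y + z * z)))
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x)))))
    roll = math.degrees(math.atan2(2.0 * (w * x + y * z),
                                   1.0 - 2.0 * (x * x + y * y)))
    return heading, pitch, roll

def gravity_in_device_frame(w, x, y, z, g=1.0):
    """Rotate the world-frame gravity vector (0, 0, g) into the device frame."""
    return (2.0 * (x * z - w * y) * g,
            2.0 * (w * x + y * z) * g,
            (w * w - x * x - y * y + z * z) * g)

def linear_acceleration(accel_xyz, gravity_xyz):
    """Linear acceleration = measured acceleration minus gravity (device frame)."""
    return tuple(a - g for a, g in zip(accel_xyz, gravity_xyz))

# Example: identity quaternion (device level), accelerometer reading 1 g on z
# (sign convention assumed), so the linear acceleration is zero.
h, p, r = quaternion_to_hpr(1.0, 0.0, 0.0, 0.0)        # (0.0, 0.0, 0.0)
g_dev = gravity_in_device_frame(1.0, 0.0, 0.0, 0.0)    # (0.0, 0.0, 1.0)
lin = linear_acceleration((0.0, 0.0, 1.0), g_dev)      # (0.0, 0.0, 0.0)
```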

As noted above, the engine’s adaptive filters converge so that correct heading data override magnetic distortions and anomalies, resulting in more accurate and reliable data. iNEMO allows the correction of:

  • Magnetic distortions registered by the magnetometers (a simple calibration sketch follows below)
  • Dynamic distortion measured by the accelerometers
  • Inherent drift of the gyroscope over time

iNEMO integrates all nine axes (accelerometer, gyroscope, and compass) with complex fusion algorithms, so the output of the sensor cluster is optimized for fast, easy integration into smart consumer devices. The library can be configured to achieve the best trade-off between performance and power savings.
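
As one concrete illustration of the first correction in the list above, the Python sketch below shows a very simple hard-iron magnetometer calibration: the offset is taken as the center of the min/max envelope of readings gathered while the device is rotated through all orientations. This is an assumed, simplified example, not ST’s correction algorithm, which also has to handle dynamic distortion and gyroscope drift.

```python
# Illustrative hard-iron calibration sketch (not ST's algorithm). Nearby
# ferrous material shifts every magnetometer reading by a constant offset;
# estimating and subtracting that offset recenters the readings around zero.

def hard_iron_offset(samples):
    """samples: list of (mx, my, mz) raw readings taken while the device is
    rotated through as many orientations as possible."""
    xs, ys, zs = zip(*samples)
    return ((max(xs) + min(xs)) / 2.0,
            (max(ys) + min(ys)) / 2.0,
            (max(zs) + min(zs)) / 2.0)

def correct_reading(raw, offset):
    """Subtract the estimated hard-iron offset from a raw reading."""
    return tuple(r - o for r, o in zip(raw, offset))

# Example with made-up readings biased by roughly (+30, -12, +5) counts:
readings = [(80, -52, 45), (-20, 28, -35), (30, -12, 55), (30, -12, -45)]
offset = hard_iron_offset(readings)              # (30.0, -12.0, 5.0)
clean = correct_reading((80, -52, 45), offset)   # (50.0, -40.0, 40.0)
```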

Summary

In this article we have looked at what sensor fusion involves, presented its advantages, reviewed an example of a 9-axis sensor fusion system, and discussed a useful development kit now available from DigiKey. A companion article, also scheduled for publication in DigiKey’s Sensor TechZone (“Combining the input of multiple sensors to produce better overall results”), goes into greater depth on recently announced devices and provides examples of other data fusion opportunities, especially in tablet-based and automotive applications.

Disclaimer: The opinions, beliefs, and viewpoints expressed by the various authors and/or forum participants on this website do not necessarily reflect the opinions, beliefs, and viewpoints of DigiKey or official policies of DigiKey.

About this author

Carolyn Mathas

Carolyn Mathas has worn editor/writer hats at such publications as EDN, EE Times Designlines, Light Reading, Lightwave, and Electronic Products for more than 20 years. She also delivers custom content and marketing services to a variety of companies.

About this publisher

Electronic Products

Electronic Products magazine and ElectronicProducts.com serve engineers and engineering managers responsible for designing electronic equipment and systems.