Heterogeneous Multicore Processors Provide the Perfect Platform for Autonomous Devices

By: European Editors

Contributed By: DigiKey's European Editors

The emergence of autonomous devices is perhaps one of the most eagerly anticipated developments of this decade. It is largely being explored in the form of vehicles that can exhibit some level of autonomous driving; cars that can parallel-park with limited driver assistance are already on the market, while Google’s self-driving car project has accrued over 1.5 million autonomous miles across the USA.

However, autonomous devices come in many shapes. Heathrow’s Terminal 5 car parking ‘pods’ are autonomous vehicles that transport passengers from their cars to the departure lounge, albeit running on a defined path and without the hindrance of other road users. There are now many examples of driverless light railways around the world. Mass transport aside, drones are becoming more autonomous, requiring less direct input from an operator to hover, change height or direction; the underlying trend here is for drones to become fully autonomous, transporting goods from point A to point B without any direct human control.

Much of the technology that enables this level of autonomy is still in development, as is the legislation that will allow its wide scale rollout.

Nothing new?

In many ways, embedded devices have always operated autonomously; the difference is that they have also been largely stationary. Nevertheless, the practices used to develop embedded applications in general, and embedded software in particular, reflect the nature of embedded devices. They (typically) exist to perform a specific function and have predictable conditions under which they must operate, with clearly identifiable stimuli. The ‘art’ of developing high quality embedded applications is the way in which the device handles non-identifiable stimuli, or unpredictable conditions.

In most cases, the software includes a robust and logical way to effectively ignore all conditions that do not directly impact its primary function(s). That's fine if the device is a printer; it's not so good if the device can move under its own volition. This key difference defines the main challenge developers face when designing an autonomous device. It is no longer acceptable to ignore conditions or stimuli that have no relation to the primary function, because a device cannot navigate reliably and safely in a world where unpredictability is the main input.

In the US, the National Highway Traffic Safety Administration (NHTSA) makes a clear distinction between driver assistance systems and automated driving systems, using five levels of automation. Systems at Level 0 exhibit no automation of a car's primary safety functions (steering, braking, throttle) and as such include the majority of today's driver assistance systems, such as lane departure warning or forward collision warning. The definitions culminate at Level 4: Full Self-Driving Automation, which covers driverless vehicles.

Of course, the same definitions aren't necessarily applicable to autonomous devices that aren't intended to transport people, but the same challenge exists: developing systems that can react appropriately to conditions that were never explicitly described to them.

Processing paradigm

Governments and industry are fully aware of the potential that autonomous devices hold. In the UK, the Robotics and Autonomous Systems (RAS) Special Interest Group (RAS-SIG) was formed in 2013, with the objective of understanding the landscape and opportunity of RAS in the UK. It has identified many such opportunities, including RAS Tools: tools that can move themselves and other objects while interacting with their environment and people, planning their own motion and actions.

Such tools will operate in somewhat more 'predictable' environments than autonomous vehicles intended to move goods and passengers around urban and rural areas. Vehicles in the latter scenario will require more than forward-looking radar to avoid collisions; they will rely on a systemic approach that connects autonomous road vehicles so they can share information and even learn from each other. The burgeoning vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication systems are intended to provide the connectivity that will be necessary to keep all road users safe under any driving conditions.

The intelligence needed to gather and process this level of information is already available, and can be found in multicore devices and systems-on-chip (SoCs). Some of these devices specifically target the automotive market and ADAS (advanced driver assistance systems), but many more are used in a wider range of application areas.

These devices combine high-throughput data processing capability with real-time control functionality in a single device. The close integration of multiple processing cores allows for faster data exchange between the subsystems, while working independently to preserve security and safety requirements. Often these devices will also integrate peripherals dedicated to specific applications, such as sensor interfaces. Sensors will, of course, play a crucial role in autonomous devices and will include cameras, as well as other types of sensor technologies, such as infrared and various forms of light sensors for detecting pathways, obstacles and proximity. Ultrasound and radar are also being developed with autonomous devices in mind.

Multicore makes sense

Heterogeneous multicore devices like the VF6xx (part of NXP's Vybrid Series) combine different but complementary processing subsystems, along with analog and digital peripherals that make them suitable for a number of applications. In the case of the VF6xx, there are two cores: an ARM® Cortex®-M4 with DSP functionality, and an ARM Cortex-A5. Although able to exchange data using the Network Interconnect (NIC) system (which forms the 'backbone' of devices in the Vybrid family), the two cores are intended to operate independently and concurrently, running different operating systems if necessary. Figure 1 shows a block diagram of the VF6xx.

Figure 1: NXP’s Vybrid VF6xx features two ARM cores to provide real-time control and high levels of data processing

For example, the Cortex-A5 core could be executing a Linux distribution and application code, while the Cortex-M4 core could be running a real-time operating system (RTOS) for control functions. Although not specifically developed for autonomous devices, it is this level of separation in a single device that could be used to enable low cost, highly efficient control systems in devices that must process large amounts of data coming from several sources, while maintaining real-time control over motors and actuators.
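In such an asymmetric setup, the two cores typically exchange data through a region of shared memory. The sketch below models one common pattern, a single-producer/single-consumer ring buffer; the names and layout are illustrative assumptions, not the Vybrid's actual driver API, and on real hardware memory barriers would also be needed between the payload write and the index update.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical shared-memory mailbox between the Cortex-A5 and
 * Cortex-M4. A single-producer/single-consumer ring buffer needs
 * no locks, provided head and tail are each written by one side. */
#define MBOX_SLOTS 8u  /* power of two so the index wraps cheaply */

typedef struct {
    volatile uint32_t head;            /* written by producer only */
    volatile uint32_t tail;            /* written by consumer only */
    uint32_t msg[MBOX_SLOTS];
} mbox_t;

static bool mbox_send(mbox_t *m, uint32_t value)
{
    uint32_t next = (m->head + 1u) & (MBOX_SLOTS - 1u);
    if (next == m->tail)
        return false;                  /* full: consumer is behind */
    m->msg[m->head] = value;
    m->head = next;                    /* publish after the payload */
    return true;
}

static bool mbox_recv(mbox_t *m, uint32_t *value)
{
    if (m->tail == m->head)
        return false;                  /* empty */
    *value = m->msg[m->tail];
    m->tail = (m->tail + 1u) & (MBOX_SLOTS - 1u);
    return true;
}
```

One such buffer per direction gives full-duplex messaging; the Linux side and the RTOS side each only ever write their own index, which is what makes the scheme safe without locks.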

Latency is a major factor in real-time systems, hence the need for an RTOS. Because all data transfers in the VF6xx must pass through the NIC, the interconnect has been designed to minimize latency. It also maintains the relationship between bus masters (cores and DMA controllers) and slaves (peripherals and memories). Any transfer between a master and a slave incurs latency, so system-level architecture is an important aspect of designing systems based on multicore SoCs.

While the VF6xx features several memories that can be accessed by both cores over the NIC, the Cortex-M4 also has its own tightly coupled memory (TCM), a standard SRAM connected directly to the core using a local memory controller. As such, the core can access the TCM in a single cycle, making it ideal for real-time control.
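Taking advantage of the TCM is usually a matter of linker placement: time-critical code and data are pinned to the single-cycle SRAM with section attributes. The section names below are hypothetical and would be defined by the project's linker script, not by the VF6xx itself; the PI control step is simply a stand-in for any tight real-time loop.

```c
#include <stdint.h>

/* Hypothetical section names -- the real names come from the
 * project's linker script, which maps them onto the TCM SRAM. */
#define TCM_CODE __attribute__((section(".tcm_text")))
#define TCM_DATA __attribute__((section(".tcm_data")))

/* Control-loop state kept in single-cycle TCM. */
TCM_DATA static int32_t integrator;

/* The control step runs from TCM so that neither instruction
 * fetches nor data accesses have to cross the interconnect. */
TCM_CODE int32_t pi_step(int32_t error, int32_t kp, int32_t ki)
{
    integrator += error;
    return kp * error + ki * integrator;
}
```

Keeping both the loop's code and its state in TCM makes its execution time deterministic, which is what matters most to an RTOS scheduling hard deadlines.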

High-resolution control

Multicore devices are now used in a range of applications, often where connectivity needs to be combined with precise and adaptive control. This is where the Concerto family from Texas Instruments is currently positioned, although it could conceivably be used in autonomous devices.

It features an ARM Cortex-M3 core integrated alongside TI's own C28x floating-point core, the same core used in TI's Piccolo™ and Delfino™ families. The ARM core is intended to provide the communication subsystem, while the C28x core covers the real-time control.

The architecture features a number of analog peripherals that are accessible by either core. These include two 12-bit ADCs and six comparator modules, each with an integrated 10-bit DAC reference.
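Whichever core reads the ADC, the raw result still has to be scaled into engineering units. The helper below is a minimal sketch for a 12-bit converter, assuming only that the reference voltage is known; the names are illustrative, not part of the Concerto driver library.

```c
#include <stdint.h>

/* Convert a raw 12-bit ADC result (0..4095) to millivolts,
 * given the reference voltage in millivolts.  Integer math
 * only, so it is safe in an interrupt handler. */
static uint32_t adc_to_mv(uint16_t code, uint32_t vref_mv)
{
    return ((uint32_t)code * vref_mv) / 4095u;
}
```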

An interprocessor communications (IPC) peripheral provides the infrastructure for the two cores to exchange data and synchronize program execution. Exchanges are controlled via registers and simple handshaking in software. An external peripheral interface (EPI), also accessible by both cores, provides a high-speed parallel bus for interfacing to external peripherals and memory. The flexibility of the EPI means it can interface to most types of peripherals using standard control protocols, as well as to FPGAs and CPLDs.
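A register-plus-handshake exchange of the kind the IPC peripheral supports can be modeled in software as a request/acknowledge protocol. The structure and function names below are illustrative assumptions, not TI's IPC driver API; they simply show the three phases each transaction passes through.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of a register-based IPC handshake: one core
 * raises a request flag alongside a command word, the other
 * consumes the command and raises an acknowledge flag, and the
 * sender finally retires the transaction. */
typedef struct {
    volatile uint32_t cmd;
    volatile bool     req;   /* raised by sender                 */
    volatile bool     ack;   /* raised by receiver               */
} ipc_reg_t;

static bool ipc_post(ipc_reg_t *r, uint32_t cmd)
{
    if (r->req)
        return false;        /* previous command still pending */
    r->cmd = cmd;
    r->req = true;           /* write command before raising req */
    return true;
}

static bool ipc_service(ipc_reg_t *r, uint32_t *cmd)
{
    if (!r->req || r->ack)
        return false;        /* nothing new to service */
    *cmd = r->cmd;
    r->ack = true;           /* tell the sender we took it */
    return true;
}

static bool ipc_complete(ipc_reg_t *r)
{
    if (!r->ack)
        return false;        /* receiver has not responded yet */
    r->req = false;          /* sender retires the transaction */
    r->ack = false;
    return true;
}
```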

A major feature of autonomous devices will be their ability to move, and this will most likely be accomplished using brushless DC motors (BLDCs) for motion and 'heavy lifting', and perhaps stepper motors for the fine control of 'arms' and 'fingers' when manipulating smaller items. The default method for controlling BLDCs is now through pulse-width modulation (PWM). The Concerto F28M35x features nine PWM modules, eight of which are high-resolution.
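Driving a BLDC with PWM ultimately comes down to loading a compare register that sets the duty cycle as a fraction of the counter period. The scaling arithmetic is sketched below with illustrative names; it is not the actual ePWM register programming model.

```c
#include <stdint.h>

/* Map a duty cycle in tenths of a percent (0..1000) onto a
 * compare value for a PWM counter that rolls over at 'period'.
 * Clamping keeps a bad command from producing >100% duty. */
static uint32_t pwm_compare(uint32_t duty_tenths, uint32_t period)
{
    if (duty_tenths > 1000u)
        duty_tenths = 1000u;          /* clamp to 100.0% */
    return (duty_tenths * period) / 1000u;
}
```

On a high-resolution PWM module the same calculation gains extra fractional bits, which is what allows smooth torque control at high switching frequencies.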

Feedback is a key factor in control systems, and it will be equally important in autonomous devices. One of the ways the Concerto addresses this is through the Enhanced Quadrature Encoder Pulse (eQEP) module. This is intended to connect directly to either linear or rotary encoders to gather position, direction and speed data. Figure 2 shows a typical implementation of a rotary encoder and the resulting waveforms detected by the eQEP module.

Figure 2: Texas Instruments' Concerto family brings together the industry-standard ARM Cortex-M3 with TI's own C28x core to deliver a multicore processor suitable for a wide range of control applications.
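Once the eQEP hardware has decoded the quadrature signals into a position counter, speed falls out of the difference between successive readings. The helper below is an illustrative calculation, not the eQEP register interface; note how signed 16-bit subtraction handles counter wrap-around.

```c
#include <stdint.h>

/* Estimate shaft speed in RPM from two successive readings of a
 * 16-bit quadrature position counter.  'cpr' is counts per
 * revolution (encoder lines x 4 after quadrature decoding);
 * 'dt_us' is the sampling interval in microseconds.  Names are
 * illustrative, not the eQEP register map. */
static int32_t encoder_rpm(uint16_t prev, uint16_t now,
                           uint32_t cpr, uint32_t dt_us)
{
    /* Casting the 16-bit difference to signed handles counter
     * wrap-around and gives direction as the sign of delta. */
    int16_t delta = (int16_t)(now - prev);
    /* counts / (counts/rev) / us, rescaled to rev/min */
    return (int32_t)((int64_t)delta * 60000000 /
                     ((int64_t)cpr * dt_us));
}
```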

Conclusion

Autonomous devices, including unmanned aerial vehicles, next-generation 'smart tools' and, of course, vehicles, are expected to become more prevalent in the next several years, to the point where we will all be familiar with their presence in most walks of life. Safety is a major consideration when developing an autonomous device of any kind, and national and international standards will continue to develop to address this. Many highly integrated processors targeting industrial applications are already designed to meet today's safety requirements, which puts them in an excellent position to be at the vanguard of the autonomous age.

Despite the hype, the market for autonomous devices is still in its infancy, so it will take some time for semiconductor manufacturers to gain the confidence and market insight needed to develop application-specific devices for it. In the meantime, developers can rely on heterogeneous multicore devices like those described in this article.
