
Another Ambassador Moment: Simple Head Related Transfer Function

2021-12-29 | By Robby Huang

License: Attribution Non-Commercial No Derivatives

Figure 1: An example representation of the variables in the Head-Related Transfer Function (HRTF).

 

What is a Head-Related Transfer Function?

A head-related transfer function (HRTF) describes the spectral characteristics of sound measured at the tympanic membrane (eardrum) when the source of the sound is in three-dimensional space. It is used to simulate externally presented sounds when the sounds are delivered through headphones. The HRTF is a function of frequency, azimuth, and elevation, and it is determined primarily by the acoustical properties of the external ear, the head, and the torso. HRTFs can differ substantially across individuals, but because measuring an individual HRTF is expensive, an averaged HRTF is often used.

Application

Spatial audio has become a basic requirement for many gaming and entertainment applications. In AR/VR devices especially, it is a necessary element for an immersive experience. Though it sounds complicated, we can actually implement it on something as simple as a microcontroller.

Our project is unique in that we perform all the computations that generate the output spatial audio on a microcontroller, as opposed to in the cloud or on a more powerful but power-hungry processor. Low-power spatial audio generation has a promising future in the consumer electronics industry. Battery capacity is a bottleneck that keeps tech companies from putting more advanced features into wearable devices such as AR glasses, and generating spatial audio with a microcontroller directly reduces the device's power consumption.

Our Simpler Version

Instead of implementing the complete HRTF, we implemented a simplified version that considers only the delay between the left and right channels, or interaural time difference (ITD), and the amplitude difference between them, or interaural level difference (ILD). Two main vectors drive our math: one from the user's body center to the sound source and one from the user's body center to their anterior (the direction they are facing). Following a 2015 project called Sound Navigation, we roughly model the user's head as a sphere with two ears diametrically opposite each other, which lets us use the angle between these two vectors to calculate each interaural difference.


Figure 2: By roughly modeling the user's head as a sphere with two ears diametrically opposite each other, the angle between these two vectors can be used to calculate each characteristic interaural difference.
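To make that geometry concrete, here is a small C++ sketch of how the angle between the facing vector and the listener-to-source vector can be computed in 2D. The function and variable names are ours, not from the project, and the sign convention (positive to the listener's right) is just one reasonable choice.

#include <cmath>
#include <cstdio>

// Signed angle (radians) from the listener's facing direction to the
// direction of the sound source, positive when the source is to the right.
// All positions and directions are 2D (a top-down view of the head model).
float angleToSource(float facingX, float facingY,
                    float listenerX, float listenerY,
                    float sourceX, float sourceY) {
    float toSrcX = sourceX - listenerX;
    float toSrcY = sourceY - listenerY;
    // 2D "cross product" with the sign flipped so that clockwise rotation
    // (source to the listener's right) comes out positive.
    float cross = facingY * toSrcX - facingX * toSrcY;
    float dot   = facingX * toSrcX + facingY * toSrcY;
    return std::atan2(cross, dot);
}

int main() {
    // Listener at the origin facing +Y, source 1 m ahead and 1 m to the right.
    float theta = angleToSource(0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f);
    std::printf("angle = %.1f degrees\n", theta * 180.0f / 3.14159265f);
    return 0;
}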

Functions:

Formula 1: The two equations used to compute the interaural time difference (ITD) and the interaural level difference (ILD) from the angle between the two vectors.

In the program's main loop, we first calculate the user's relative angle and distance to each sound source. We then plug the angle into the two equations above to obtain the amplitude ratio and the delay between the two ears. We also use the equation |Intensity@locA - Intensity@locB| = 20·log10(dA/dB) to model how sound intensity decays over distance. We assume the synthesized audio's original amplitude is what one would hear one meter from the sound source, and we use that as the reference for calculating the intensity at any other spot on the map. To validate the algorithms, we started by implementing them in Python in a Jupyter notebook. We generated sound mimicking a point source at different angles, and, with slight tuning of the head-dimension parameters for different users, all test users were able to distinguish the sample sound directions.
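As a rough illustration of that per-update math, here is a minimal C++ sketch. It does not reproduce the exact equations of Formula 1: it assumes the common Woodworth spherical-head approximation for the ITD and a simple angle-dependent gain ratio for the ILD, plus the 20·log10(dA/dB) distance model from the paragraph above, so treat the constants and names as placeholders rather than the project's exact formulas.

#include <cmath>
#include <cstdio>

struct SpatialParams {
    float itdSeconds;   // inter-ear delay; positive means delay the left channel
    float leftGain;     // amplitude scale for the left channel
    float rightGain;    // amplitude scale for the right channel
};

// theta:     angle between the facing vector and the listener-to-source vector,
//            in radians, positive when the source is to the listener's right.
// distanceM: listener-to-source distance in meters.
SpatialParams computeSpatialParams(float theta, float distanceM) {
    const float headRadiusM  = 0.0875f;  // assumed head radius (~8.75 cm)
    const float speedOfSound = 343.0f;   // m/s

    SpatialParams p;

    // ITD: Woodworth-style spherical-head approximation (an assumption here,
    // not necessarily the article's Formula 1): (r / c) * (theta + sin(theta)).
    p.itdSeconds = (headRadiusM / speedOfSound) * (theta + std::sin(theta));

    // ILD: a simple angle-dependent level difference in dB (also an assumption),
    // converted to a right/left amplitude ratio.
    const float maxIldDb = 6.0f;                    // placeholder maximum ILD
    float ildDb = maxIldDb * std::sin(theta);
    float ratio = std::pow(10.0f, ildDb / 20.0f);   // right/left amplitude ratio

    // Distance roll-off: the amplitude at 1 m is the reference, and
    // |I_A - I_B| = 20*log10(dA/dB) means amplitude falls off as 1/d.
    float distanceGain = 1.0f / std::fmax(distanceM, 1.0f);

    p.rightGain = distanceGain * std::sqrt(ratio);
    p.leftGain  = distanceGain / std::sqrt(ratio);
    return p;
}

int main() {
    // Example: source 2 m away, 45 degrees to the listener's right.
    const float kPi = 3.14159265f;
    SpatialParams p = computeSpatialParams(45.0f * kPi / 180.0f, 2.0f);
    std::printf("ITD = %.6f s, left gain = %.3f, right gain = %.3f\n",
                p.itdSeconds, p.leftGain, p.rightGain);
    return 0;
}

The resulting delay and gains would then be applied to the source's audio samples before writing them to the left and right DAC channels.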

In terms of hardware, all you need is an MCU with a DAC output. For example, many PIC32 and STM32 MCUs have DAC output pins. With a two-channel DAC output, we can drive one channel per ear. Even with an MCU that does not have a DAC output, such as an Arduino Uno, we can use an I2C DAC breakout board like the MCP4725.
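If you go the Arduino Uno plus MCP4725 route, a minimal sketch might look like the following. It assumes two MCP4725 breakout boards on the same I2C bus (one per ear) and the Adafruit_MCP4725 library; the addresses 0x62 and 0x63 are only examples, since the actual address depends on the part's address variant and its A0 pin, and the samples here are placeholders for the output of the ITD/ILD math above.

#include <Wire.h>
#include <Adafruit_MCP4725.h>

// One MCP4725 per ear, on the same I2C bus at different addresses.
Adafruit_MCP4725 dacLeft;
Adafruit_MCP4725 dacRight;

void setup() {
  dacLeft.begin(0x62);   // address depends on the part variant and the A0 pin
  dacRight.begin(0x63);
}

void loop() {
  // Placeholder mid-scale 12-bit samples; in the real program these would be
  // the per-ear samples produced by the ITD/ILD and distance calculations.
  uint16_t leftSample  = 2048;
  uint16_t rightSample = 2048;

  dacLeft.setVoltage(leftSample, false);    // false = don't persist to EEPROM
  dacRight.setVoltage(rightSample, false);

  delayMicroseconds(125);                   // pacing; the real rate also depends on I2C speed
}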

References

https://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/f2015/ath68_sjy33_vwh7/ath68_sjy33_vwh7/ath68_sjy33_vwh7/SoundNavigation/SoundNavigation.htm

https://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/f2021/az292_lh479_kw456/az292_lh479_kw456/index.html

 

Manufacturer Part Number MCP4725A3T-E/CH
IC DAC 12BIT V-OUT SOT23-6
Microchip Technology
Manufacturer Part Number DM320001
PIC32 STARTER KIT PIC32 EVAL BRD
Microchip Technology
Manufacturer Part Number NUCLEO-H723ZG
NUCLEO-144 STM32H723ZG EVAL BRD
STMicroelectronics
TechForum

Have questions or comments? Continue the conversation on TechForum, DigiKey's online community and technical resource.

Visit TechForum