This technology is an auditory attention decoding approach that uses a deep neural network to selectively amplify sounds of interest for hearing aid applications.
Unmet Need: Background noise filtering for hearing aids in noisy environments
Millions of Americans suffer from hearing loss, and many rely on hearing aids in day-to-day life. Unfortunately, current hearing aids perform poorly in noisy environments with many competing sounds, because they cannot separate the acoustic source of interest from background noise. A method of filtering out extraneous sounds is therefore needed to improve hearing aid performance, which in turn would improve users' communication and mental health.
The Technology: Neural networking approach for improved auditory attention decoding
This technology uses machine learning to identify and selectively amplify sounds of interest for the user.
Incoming sound in a noisy environment is first separated into individual source signals. Real-time brain signals are then coupled with a deep neural network to identify the "desired" sound, without forward or backward prediction. The similarity between each separated source and this "desired" sound is computed, and the identified sound of interest is amplified.
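A minimal, illustrative sketch of this pipeline (assuming a Python/NumPy setting) is shown below; it is not the inventors' implementation. The helper names are hypothetical: `separate_sources` stands in for a trained speech-separation network, and `decode_attended_embedding` stands in for the deep neural network that decodes the attended sound from real-time brain signals. The cosine-similarity selection and gain stage mirror the similarity-and-amplify step described above.

```python
# Toy sketch of attention-guided amplification. All helpers are stand-ins,
# not the patented method's actual models.
import numpy as np

def separate_sources(mixture: np.ndarray, n_sources: int = 2) -> list:
    """Stand-in for a speech-separation network: split the mixture into
    candidate source signals. Scaled copies keep the example runnable."""
    rng = np.random.default_rng(0)
    return [mixture * rng.uniform(0.5, 1.0) for _ in range(n_sources)]

def embed(signal: np.ndarray, dim: int = 16) -> np.ndarray:
    """Toy embedding: energy in `dim` equal time bins, L2-normalized."""
    bins = np.array_split(signal ** 2, dim)
    v = np.array([b.mean() for b in bins])
    return v / (np.linalg.norm(v) + 1e-12)

def decode_attended_embedding(eeg: np.ndarray, dim: int = 16) -> np.ndarray:
    """Stand-in for the brain-signal decoder: the real system uses a deep
    neural network to represent the attended sound without forward or
    backward prediction. Here we simply embed the EEG trace itself."""
    return embed(eeg, dim)

def amplify_attended(mixture: np.ndarray, eeg: np.ndarray, gain: float = 4.0):
    """Separate sources, pick the one most similar to the decoded attended
    sound (cosine similarity on unit vectors), and boost it in the remix."""
    sources = separate_sources(mixture)
    target = decode_attended_embedding(eeg)
    sims = [float(embed(s) @ target) for s in sources]
    k = int(np.argmax(sims))
    remix = sum(gain * s if i == k else s for i, s in enumerate(sources))
    return remix, k

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 8000)
    mixture = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
    eeg = np.sin(2 * np.pi * 10 * t)  # hypothetical EEG trace
    remix, attended = amplify_attended(mixture, eeg)
    print("attended source index:", attended)
```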
Applications:
- Improved hearing aid functionality in noisy environments
- Higher-fidelity voice interfaces for robotics and smart systems
- Noise-canceling headphones and headsets
- Improved cell phone speaker output quality
Advantages:
- Enables hearing aid use in noisy environments
- Automated algorithm is customizable with user input
- Selectively amplifies sounds of interest
Lead Inventor:
Nima Mesgarani, Ph.D.
Patent Information:
Patent Issued
Related Publications:
Ciccarelli G, Nolan M, Perricone J, Calamia PT, Haro S, O'Sullivan J, Mesgarani N, Quatieri TF, Smalt CJ. Comparison of Two-Talker Attention Decoding from EEG with Nonlinear Neural Networks and Linear Methods. Sci Rep. 2019;9(1):11538.
Li C, Xu J, Mesgarani N, Xu B. Speaker and direction inferred dual-channel speech separation. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2021.
Luo Y, Han C, Mesgarani N. Distortion-controlled training for end-to-end reverberant speech separation with auxiliary autoencoding loss. IEEE Spoken Language Technology Workshop (SLT). 2021.
Tech Ventures Reference: