
Improving speech understanding in hearing aids using artificial intelligence

Updated: Apr 27, 2022

Improving Speech Understanding and Monitoring Health with Hearing Aids Using Artificial Intelligence


Published Online: 2021-09-24, Thieme Open Access


ABSTRACT


This article details ways that machine learning and artificial intelligence technologies are being integrated in modern hearing aids to improve speech understanding in background noise and provide a gateway to overall health and wellness. Discussion focuses on how Starkey incorporates automatic and user-driven optimization of speech intelligibility with onboard hearing aid signal processing and machine learning algorithms, smartphone-based deep neural network processing, and wireless hearing aid accessories. The article will conclude with a review of health and wellness tracking capabilities that are enabled by embedded sensors and artificial intelligence.


In recent years, hearing aids have rapidly evolved from dedicated, single-purpose devices into multipurpose, multifunction devices. By combining acoustic and biometric sensors with signal processing, hearing aids today can monitor physical activity and social engagement, automatically detect falls, and serve as an intelligent virtual assistant, in addition to improving speech intelligibility in quiet and noisy listening environments.1


Fundamentally, the most essential function of any hearing aid is to optimize speech intelligibility, so that hearing aid users can communicate with comfort and clarity in challenging listening situations. In addition to the significant progress made toward this primary goal, embedded sensors and artificial intelligence (AI) algorithms now also endow modern, advanced hearing aids with important health and wellness tracking capabilities.


Since 2018, Starkey has incorporated acoustic, inertial, and biometric sensors directly into the hearing aids. Onboard signal processing algorithms based on machine learning and AI technologies use the inputs from these sensors to provide hearing aid users with optimal speech intelligibility in noise,2 physical activity tracking,3 fall detection,4 and social engagement assessment.1


According to the latest MarkeTrak X findings,5 natural sound quality, performance in background noise, comfort for loud sounds, and spatial awareness are the top overall contributors to hearing aid satisfaction and benefit. A benchmark study on a cohort of 20 hard-of-hearing participants listening to four noisy acoustic scenes (conducted by FORCE Technology SenseLab, an independent perceptual assessment laboratory) measured speech sound quality and preference for the noise management systems (directional microphone and noise reduction algorithm) in the Starkey Livio AI and Muse iQ hearing aids along with premium hearing aids from other manufacturers. In all four noisy acoustic scenes, listeners judged the overall loudness of background noise to be lower for both Starkey hearing aids when compared with other manufacturers' premium hearing aids.2 In addition, Livio AI and Muse iQ hearing aids were judged to have the lowest sound distortion, in terms of reverberation, across all four acoustic scenes. Starkey has continued to focus on improving performance in noise by using even more sophisticated machine learning and AI strategies to mimic, or even exceed, human performance. To begin, however, the different aspects of human intelligence and AI are defined, as the latter has rapidly been approaching "buzzword" status in recent years.



DEFINITIONS


Intelligence

One may have a basic understanding of the meaning of the word intelligence, but many theories and approaches can describe what the word means. Robert Sternberg (2020), IBM Professor of Psychology and Dean of the School of Arts and Sciences at Tufts University, describes intelligence as the "…mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment."6 Fig. 1 is a schematic of how the human perception and intelligence system uses biological sensors to collect information from the environment, processes this information in the brain to understand the world, takes actions accordingly, and learns based on experience. This concept makes sense for human perception and intelligence, but what about the ways that devices and machines process and learn from information?



Figure 1 Schematic diagram depicting key processes in human perception and intelligence, characterized by sensing inputs, processing information, developing actions based on these processes, and learning based on experience.


Artificial Intelligence

This term has been used for decades and has advanced over time with technological innovations. Today, AI is designed to enable machines to simulate human intelligence and human behavior, albeit for applications in narrow domains. AI systems do not require devices to be pre-programmed; instead, they use algorithms that may use data or sensory inputs to process, act, and learn using their own "intelligence," often acquired through training on relevant datasets. With unprecedented advances in algorithms, computing technologies, and digital data in recent years, AI has been rapidly adopted in a wide range of devices and systems, enabling a burgeoning array of new applications.7 The broader category of AI includes machine learning, edge computing, and deep neural networks (DNNs), as defined below. See the article by Balling et al in this issue for additional details about the use of AI in hearing aids.


Machine Learning

Machine learning is a subfield of AI concerned with building algorithms that rely on a collection of examples of some phenomenon. These examples can exist in nature, be produced by humans, or be generated by another algorithm. Machine learning may also be defined as the process of solving a problem by gathering a dataset and algorithmically building a statistical model of that dataset that may, in turn, be used to “solve” the practical problem. As a branch within AI, machine learning systems use inputs to process, act, and improve performance based on these pretrained models. Simply put, machine learning uses algorithms to parse data, learn from that data, and make informed decisions or predictions based on what it has learned. The power behind machine learning is the size and diversity of the dataset used to train the models and the number of parameters or features used to characterize the models.
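To make the idea concrete, the minimal sketch below "trains" a toy nearest-centroid model on a synthetic dataset of two-dimensional acoustic feature vectors and then classifies a new example. The feature values, class names, and choice of model are illustrative assumptions, not any product's algorithm.

```python
# Minimal sketch (not Starkey's implementation): supervised learning on a
# labeled dataset of acoustic feature vectors, e.g., [RMS level, spectral
# centroid], using a nearest-centroid model built with NumPy only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training examples: feature vectors with class labels.
features = np.vstack([
    rng.normal([0.2, 1500], [0.05, 200], size=(50, 2)),  # "speech in quiet"
    rng.normal([0.6, 2500], [0.05, 200], size=(50, 2)),  # "speech in noise"
])
labels = np.array([0] * 50 + [1] * 50)
class_names = {0: "speech in quiet", 1: "speech in noise"}

# "Training": the statistical model here is simply one centroid per class.
centroids = np.stack([features[labels == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Assign a new feature vector to the nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(class_names[predict(np.array([0.55, 2400]))])  # -> "speech in noise"
```

In practice, the larger and more diverse the training dataset and the richer the feature set, the more capable the resulting model, which is exactly the point made above.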


Edge Computing

Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where it is needed to decrease latency and save communication bandwidth. For hearing aid applications, edge computing moves computation closer to the edge of the network, relying on ear-level processing, without requiring the hearing aids to be connected to a smartphone or cloud-based data centers.


Deep Neural Networks

As a special subcategory within the field of machine learning, DNN systems use multiple layers of interconnected computational nodes, referred to as "neurons." Each layer is composed of a large number of neurons representing the "width" of the network. The number of layers defines the "depth" of the neural network. The human cerebral cortex consists of a large ensemble of interconnected biological neurons, which allows it to process a multitude of sensory information in a hierarchy of increasing sophistication. In so doing, it teases out complex patterns or correlations in that information to help people understand and navigate in the real world. Inspired by the structure and function of the human cerebral cortex, DNN-based AI systems are increasingly solving problems that were previously considered tractable only through human intelligence.7 See the article by Andersen et al in this issue for additional details about the use of DNNs in hearing aids.
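The following illustrative sketch expresses the "width" and "depth" vocabulary as a tiny fully connected forward pass in NumPy; the layer sizes and random weights are placeholders, not the architecture of any hearing aid DNN.

```python
# Illustrative sketch only: a tiny fully connected DNN forward pass in NumPy,
# showing how "depth" (number of layers) and "width" (neurons per layer)
# are expressed. Layer sizes are arbitrary, not those of any hearing aid DNN.
import numpy as np

rng = np.random.default_rng(1)
layer_widths = [64, 32, 32, 16, 2]   # depth = 4 weight layers, widths as listed

# Random weights stand in for parameters that would be learned from data.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_widths[:-1], layer_widths[1:])]
biases = [np.zeros(n) for n in layer_widths[1:]]

def forward(x):
    """Propagate an input vector through the network, ReLU between layers."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ w + b)          # hidden layers
    return x @ weights[-1] + biases[-1]          # linear output layer

print(forward(rng.standard_normal(64)).shape)    # -> (2,)
```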


APPLICATIONS FOR IMPROVING SPEECH INTELLIGIBILITY USING AI


Acoustic Environmental Classification

Derived from auditory scene analysis,8 acoustic environmental classification (AEC) is the computational process by which signal processing is used to mimic the auditory system's ability to separate individual sounds in real-world listening environments, thereby classifying them into discrete "scenes" or environments based on temporal and spectral features.9 Modern hearing aids have used AEC to classify listening environments (e.g., quiet, speech, noise, and music) and automatically enable sound management features (e.g., directional microphones, noise reduction, and feedback control) appropriate for that environment10 (see the article by Hayes in this issue for more information about environmental classifiers). Most AEC systems combine two processing stages: feature extraction and feature/pattern classification, followed by postprocessing and environmental sound classification (Fig. 2). The accuracy of any AEC system depends on the number of feature parameters, sound classes, and the type of statistical model used. Supervised machine learning models that have been trained on large, known datasets have been used to improve the classification accuracy of AEC systems. Starkey's Hearing Reality Sound AEC system features eight automated sound classes: music, speech in quiet, speech in loud noise, speech in noise, machine, wind, noise, and quiet. It prioritizes speech intelligibility in noise by making discrete adjustments in gain, compression, directionality, noise management, and other parameters appropriate for each specific class. Classification accuracy for many hearing aid systems peaks at approximately 80 to 90%; problems are most likely to arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.11 For this reason, AEC (even with machine learning training on large amounts of data) is not always sufficient, especially in challenging listening environments.12 These situations are better served by user-prompted, on-demand analysis and automatic adjustments for enhanced speech clarity, as described later.



Figure 2 Block diagram of an acoustic environmental classification (AEC) system incorporating feature extraction, feature/pattern classification, post-processing, and environmental sound classification.
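As a rough illustration of the two-stage pipeline in Fig. 2, the sketch below extracts two simple features from an audio frame, maps them to one of the eight sound classes named above with a placeholder nearest-centroid classifier, and looks up a hypothetical parameter set for that class. The features, centroids, and parameter values are invented for illustration and are not Starkey's proprietary implementation.

```python
# A hedged sketch of a two-stage AEC pipeline (feature extraction, then
# classification), followed by a class-dependent parameter lookup.
import numpy as np

CLASSES = ["music", "speech in quiet", "speech in loud noise", "speech in noise",
           "machine", "wind", "noise", "quiet"]

def extract_features(frame, fs=16000):
    """Stage 1: temporal/spectral features from one audio frame."""
    rms = np.sqrt(np.mean(frame ** 2))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array([rms, centroid])

def classify(features, model):
    """Stage 2: map the feature vector to a class index (nearest centroid)."""
    return int(np.argmin(np.linalg.norm(model - features, axis=1)))

# Post-processing: each class selects its own (hypothetical) parameter set.
PARAMS = {"speech in noise": {"directional_mic": "adaptive", "noise_reduction_dB": 8},
          "quiet":           {"directional_mic": "omni",     "noise_reduction_dB": 0}}

rng = np.random.default_rng(2)
model = rng.uniform([0, 0], [1, 4000], size=(len(CLASSES), 2))  # placeholder centroids
frame = rng.standard_normal(512)
label = CLASSES[classify(extract_features(frame), model)]
print(label, PARAMS.get(label, {}))
```

A production classifier would of course be trained on a large labeled corpus and use many more features, as noted above.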



Figure 3 Workflow for on-demand adaptive tuning (ODAT), known as “Edge Mode.”


Edge Mode

In January 2020, Starkey introduced Edge Mode, an advanced edge AI computing solution designed to overcome some of the limitations of AEC by putting the power of AI under the hearing aid user's control. Edge Mode is designed as a simple interface where the hearing aid user initiates assistance using a control such as a double-tap or push-button when confronted with a challenging listening environment (Fig. 3). The recognition of a double-tap gesture is accomplished with the micro-electro-mechanical systems–based motion sensors integrated within the hearing aids.1 The hearing aid captures an "acoustic snapshot" of the listening environment and optimizes speech intelligibility by adjusting the parameters of eight proprietary classifications comprising challenging quiet and noisy listening situations. These AI-based, on-demand adaptive tuning adjustments to the prescribed settings include gain offsets, noise management settings, directional-microphone settings, and wind noise management settings, to name a few. No smartphone or cloud connectivity is needed; all computational power is achieved through "on the ear" processing when activated by the user via a tap or button press on the onboard controls. Earlier investigations13 have shown that most users found Edge Mode easy to operate and preferred it over audiogram-based prescribed hearing aid settings when communicating in restaurant noise, automobiles, and reverberant listening environments (Fig. 4).
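The hedged sketch below traces the on-demand flow described above (user gesture, acoustic snapshot, parameter offsets applied on top of the prescribed settings). The function names, threshold, and offset values are hypothetical, not the Edge Mode implementation.

```python
# Conceptual sketch of an on-demand adjustment flow like the one described
# for Edge Mode (user gesture -> acoustic snapshot -> parameter offsets).
# All function names, thresholds, and offsets are hypothetical.
import numpy as np

def acoustic_snapshot(mic_frames):
    """Summarize the current scene from a short buffer of mic frames."""
    level_dB = 20 * np.log10(np.sqrt(np.mean(mic_frames ** 2)) + 1e-12) + 94
    return {"level_dB": level_dB}

def on_demand_tuning(snapshot, prescribed):
    """Return offsets applied on top of the prescribed settings."""
    offsets = {"gain_dB": 0, "noise_reduction_dB": 0, "mic_mode": "omni"}
    if snapshot["level_dB"] > 65:          # noisy scene: favor clarity
        offsets.update(gain_dB=-2, noise_reduction_dB=6, mic_mode="directional")
    return {**prescribed, **offsets}

# Triggered by a double-tap or push-button, all on the device itself.
frames = np.random.default_rng(3).standard_normal(8000) * 0.1
print(on_demand_tuning(acoustic_snapshot(frames), {"program": "prescribed"}))
```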



Figure 4 Preference count of Edge Mode versus prescribed settings from 15 hearing-impaired participants. Legend shows the acoustic scene.


During the COVID-19 pandemic, health and government officials encouraged or mandated community-wide face mask wearing to reduce potential presymptomatic or asymptomatic transmission of the virus to others. This practice, in combination with social distancing (i.e., keeping >6 feet apart), helped decrease the spread of the virus, but it also posed a barrier to clear, empathetic communication, particularly for those with hearing loss.14


Fabry and colleagues assessed differences in sound attenuation across face masks via acoustic measurements made on many of the latest commercially available styles.15 Fig. 5 illustrates the differences for a range of mask types. Data were normalized to the condition when no mask was worn (the "zero" line on the x-axis). Findings suggested that while all face masks reduced important high-frequency information, there was significant variation across fabric, medical, and paper masks, especially those equipped with a plastic window. One unexpected finding was that face masks and face shields equipped with transparent plastic panels had an enhancement of several decibels (dB) in the low/mid frequencies, as well as a reduction in the high frequencies.16,17 These data illustrate the challenge of using a predetermined compensation scheme with fixed high-frequency gain adjustment to account for the impact of social distancing and face mask use.
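A small worked sketch of the normalization used in Fig. 5 follows: band levels measured with each mask are expressed relative to the no-mask condition, so negative values indicate attenuation and positive values indicate enhancement. The band levels below are made-up numbers, not the study's measurements.

```python
# Sketch of normalizing per-band levels to the no-mask condition.
import numpy as np

bands_hz = np.array([500, 1000, 2000, 4000, 8000])
no_mask_dB = np.array([60.0, 62.0, 58.0, 55.0, 50.0])
with_mask_dB = {
    "surgical":       np.array([59.5, 61.0, 55.0, 50.0, 44.0]),
    "plastic window": np.array([63.0, 64.0, 54.0, 47.0, 40.0]),
}

for mask, levels in with_mask_dB.items():
    relative_dB = levels - no_mask_dB          # normalize to the no-mask line
    print(mask, dict(zip(bands_hz.tolist(), relative_dB.round(1).tolist())))
```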


These findings kindled the development of the user-activated Edge Mode for Masks in Livio Edge AI hearing aids. As noted, Edge Mode uses an onboard AI model trained with machine-learning technology to optimize speech intelligibility and sound quality in all listening environments by assessing the levels of speech and noise present. Edge Mode for Masks dynamically adjusts multiple feature parameters, including gain, output, noise management, and directional microphones. Therefore, unlike simple gain offsets used in other "Mask Mode" programs, Edge Mode for Masks is "agnostic" to which mask is worn, the distance between conversation partners, and the presence of background noise. Again, all required signal processing for Edge Mode for Masks is performed using ear-level hearing aid processing, with no connection to a smartphone or the cloud. In laboratory testing, both Edge Mode for Masks and a "manual" Mask Mode offset program were significantly preferred by hearing aid users over the "Normal" prescription targets when the talker was using a medical-grade N95 face mask. Ongoing research is evaluating whether Edge Mode for Masks will be preferred over the Mask Mode offset program when a broader array of face masks is used, similar to those depicted in Fig. 5.



Figure 5 The acoustic impact of different face masks compared with when no face mask is worn. (Note: measurements were made using a head and torso simulator manikin.)


In summary, although machine learning–based AEC systems are effective for up to 90% of "real-world" listening environments, on-demand edge AI computing that the user controls via a simple, easy-to-use interface may provide superior control and accuracy for the remaining challenging listening environments encountered by hearing aid users.


IntelliVoice Deep Neural Networks

If Edge Mode can be likened to a user-initiated "acoustic snapshot" for AEC optimization and speech enhancement, DNN processing can be likened to a multilayered approach for improving speech intelligibility in noisy and reverberant listening environments. Prior research at Starkey has demonstrated the use of DNNs for improving speech intelligibility across a wide range of signal-to-noise ratios and noise types while maintaining speech quality.18,19


In 2020, Starkey introduced IntelliVoice,20 a DNN-based speech enhancement strategy that combines the increased computational processing power available on a smartphone with the benefits of using the smartphone microphone as an input source that is closer to the target sounds (similar to the Apple iPhone “Live Listen” feature). Fig. 6 depicts a high-level schematic of IntelliVoice DNN.


Figure 6 High-level schematic of the smartphone-based IntelliVoice deep neural network implementation.


The spectrogram shown in Fig. 7 illustrates how IntelliVoice preprocesses spectrotemporal segments for the presence of speech and/or noise, rejecting noise or speech at low signal-to-noise ratios (SNRs) while passing speech at higher SNRs through for amplification. Fig. 8 illustrates field test results with IntelliVoice DNN versus hearing aid-only processing for overall preference and speech understanding in noisy listening environments based on 12 hearing aid users with hearing losses ranging from mild to profound in degree. Additional analysis revealed a positive correlation between the degree of hearing loss and IntelliVoice algorithm preference. This was most likely due to the system delays introduced by "off-boarding" processing to the smartphone for the IntelliVoice algorithms. Our findings suggest that hearing aid users with greater degrees of hearing loss tolerate increases in signal processing complexity that contribute to system delays if they improve SNRs, while those with better hearing are less likely to tolerate the additional delays. As such, IntelliVoice DNN is recommended only for users with severe-to-profound hearing loss.
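The sketch below illustrates the gating idea behind Fig. 7 with a simple SNR criterion applied per time-frequency bin; in IntelliVoice the mask would come from the trained DNN, whereas here a fixed noise-floor estimate stands in purely for illustration.

```python
# Hedged sketch of SNR-based gating of spectrotemporal segments: estimate a
# local SNR for each bin and attenuate bins below a criterion. A real DNN
# would predict the mask; a fixed noise-floor estimate stands in here.
import numpy as np

def snr_gate(mixture_spec, noise_floor, criterion_dB=0.0, floor_gain=0.1):
    """mixture_spec, noise_floor: magnitude spectrograms (freq x time)."""
    snr_dB = 20 * np.log10(mixture_spec / (noise_floor + 1e-12) + 1e-12)
    gain = np.where(snr_dB >= criterion_dB, 1.0, floor_gain)
    return mixture_spec * gain                 # speech-dominant bins pass through

rng = np.random.default_rng(4)
mixture = np.abs(rng.standard_normal((128, 50)))     # fake spectrogram
noise = np.full_like(mixture, 0.8)                   # assumed stationary noise
enhanced = snr_gate(mixture, noise)
print(enhanced.shape)
```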


Figure 7 Spectrographic representation of a multilayered deep neural network approach that analyzes spectrotemporal segments for the presence of speech or noise and passes speech through at a criterion speech-to-noise ratio while rejecting segments that are determined to be noise.


Table Microphone Accessory

Another way that Starkey incorporates machine learning and edge computing in hearing aids to improve speech intelligibility in noise is a new multipurpose wireless accessory designed in collaboration with Nuance Hearing.21 It uses eight spatially separated microphones and sophisticated directional beamforming technology to divide the listening environment into eight 45-degree segments. In "Automatic" mode, the Table Microphone dynamically switches the direction of the beam to focus on the active speaker in a group while simultaneously reducing competing background speech or noise from other directions.


Figure 8 Field test preference results for overall preference and speech understanding between IntelliVoice deep neural network (DNN) and hearing aid (HA)-only processing for 12 HA users with mild-to-moderate (4 participants), moderate-to-severe (3 participants), and severe-to-profound (5 participants) hearing loss. The number on the y-axis corresponds to the number of users who preferred each option.


In "Manual" mode, the user can select either one or two speakers to focus on in a group and can change the direction of the beam or beams by simply touching the top of the device. In "Surround" mode, all microphones are active so that sound is amplified from all directions around the user. Automatic and Manual modes are optimized for listening to speech in noise, and Surround mode is optimized for listening to speech in quiet. The Table Microphone provides the best listening benefit when placed at the center of a group or close to a single conversation partner. In the laboratory, 18 participants with hearing loss (10 females, 8 males; mean age: 66.9 years [range: 50–80 years]) completed a speech intelligibility test in three conditions: unaided, aided with Livio Edge AI custom rechargeable hearing aids alone, and aided with the Table Microphone accessory. As shown in Fig. 9, the Table Microphone provided a median SNR improvement on the hearing in noise test of 7.2 dB compared with hearing aids alone and 15.0 dB compared with the unaided condition. The Table Microphone accessory is paired directly with the hearing aids and does not require the use of a smartphone or cloud-based computing. It may also function as a remote microphone and a multimedia streamer.
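For readers curious how a beam can be steered toward one of eight 45-degree segments, the sketch below implements a generic delay-and-sum beamformer for a circular eight-microphone array and picks the most energetic segment, loosely mirroring "Automatic" mode. The array geometry, radius, and selection rule are assumptions, not the Nuance Hearing/Starkey design.

```python
# Illustrative delay-and-sum beamformer for a circular eight-microphone
# array, steering toward the center of one of eight 45-degree segments.
import numpy as np

FS = 16000
C = 343.0            # speed of sound, m/s
RADIUS = 0.05        # assumed 5 cm array radius
MIC_ANGLES = np.deg2rad(np.arange(8) * 45.0)

def steering_delays(look_angle_rad):
    """Per-microphone delays (s) that align a plane wave from look_angle."""
    return RADIUS * np.cos(MIC_ANGLES - look_angle_rad) / C

def delay_and_sum(mic_signals, look_angle_rad):
    """mic_signals: array (8, n_samples). Integer-sample delays for brevity."""
    delays = steering_delays(look_angle_rad)
    shifts = np.round(delays * FS).astype(int)
    aligned = [np.roll(sig, -s) for sig, s in zip(mic_signals, shifts)]
    return np.mean(aligned, axis=0)

# "Automatic" mode could scan all eight segments and keep the most energetic.
signals = np.random.default_rng(5).standard_normal((8, FS))
angles = np.deg2rad(np.arange(8) * 45.0 + 22.5)        # segment centers
best = max(angles, key=lambda a: np.sum(delay_and_sum(signals, a) ** 2))
print(f"selected beam direction: {np.rad2deg(best):.1f} degrees")
```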


Figure 9 Speech reception thresholds (SRT) in diffuse noise for 18 hearing aid users in unaided and hearing aid–only conditions (Livio Edge AI) and when the Table Mic beamforming microphone array (left) is used.


APPLICATIONS FOR MONITORING HEALTH AND WELLNESS

In addition to improvements in speech intelligibility, embedded sensors and AI are transforming hearing aids into multifunctional health and communication devices that continuously monitor and track physical activities and social engagement, and detect if the user experiences a fall so that they can automatically send alert messages to designated contacts. Since 2018, Starkey has incorporated inertial measurement unit (IMU) sensors into hearing aids to monitor the user's movement and position. In combination with the classification of the listening environment via the AEC system, these data are used to monitor physical activity and social engagement while hearing aids are worn, which are then displayed on the mobile application (Fig. 10).


Figure 10 Body (steps, exercise, stand) and brain (use, engagement, environment) scores reported within the Thrive user control application.


Social Engagement

Hearing loss is correlated with many chronic health conditions. In recent years, significant attention has been focused on the link between hearing loss and cognitive decline. Compared with individuals with normal hearing, persons with a mild, moderate, and severe hearing impairment, respectively, had a 2-, 3-, and 5-fold increased risk of incident all-cause dementia over more than a decade of follow-up.22,23


The Lancet Commission24 reported that treating hearing loss is the largest modifiable risk factor for the prevention of dementia. Furthermore, they reported that hearing loss is a risk factor that should be addressed in midlife, not toward the end of life, for optimal benefit.


A study published in the Journal of the American Medical Association25 indicated a significant degree of memory deficit in persons with age-related hearing loss who did not use hearing aids compared with those without hearing loss. However, memory function was significantly better and much closer to the performance of those with normal hearing in a similar group of individuals matched for hearing loss who did use hearing aids. An open issue is how much hearing aid use is necessary to achieve any potential cognitive benefits. While research has demonstrated that people who use their hearing aids more than 8 hours/day are more satisfied than those who use their hearing aids less often,26 there is little evidence as to whether the type of listening environment is important (and predictive) to success. Many persons with hearing loss report difficulty understanding speech in the presence of background noise.27 While communication in noisy listening environments is a top driver of success with hearing aids,28 the majority of new hearing aid users wear them in generally favorable listening environments.29 Hearing aid "data logging" has been recommended to identify those who are not using, or only minimally using, their aids, so that clinicians can provide appropriate rehabilitation and support, particularly for new hearing aid users.30 Although data logging provides an objective measure that is a more accurate representation of hearing aid use than "self-report" measures, which are often over-reported,31 it also requires clinical intervention via face-to-face or telehealth visits. In a new approach, Starkey has incorporated measures of "social engagement" into the user-controlled "Thrive" app that automatically monitor and "gamify" (1) hours of daily hearing aid use; (2) time spent in listening environments where speech is present, either in quiet or noisy backgrounds; and (3) the diversity of listening environments encountered during each 24-hour period, as expressed by the inferred AEC classes.32 By displaying a daily social engagement score directly in the app, this simple tool empowers hearing aid users to challenge themselves to use their hearing aids and communicate with others in a wide variety of quiet and noisy listening environments. Users can even designate family members or professional caregivers to monitor daily progress in real time via a companion application.33
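As a hedged illustration of how the three logged quantities might roll up into a single daily score, the sketch below combines hours of use, time with speech present, and the diversity of AEC classes encountered. The weights and targets are invented; the Thrive app's actual scoring is proprietary.

```python
# Illustrative daily "engagement" score from three logged quantities:
# hours of wear, hours with speech present, and diversity of environments.
from collections import Counter

def engagement_score(hours_worn, speech_hours, env_log,
                     wear_target=12.0, speech_target=4.0):
    wear_pts = min(hours_worn / wear_target, 1.0) * 40
    speech_pts = min(speech_hours / speech_target, 1.0) * 40
    diversity_pts = min(len(Counter(env_log)) / 8.0, 1.0) * 20   # of 8 AEC classes
    return round(wear_pts + speech_pts + diversity_pts)

day_log = ["quiet", "speech in quiet", "speech in noise", "music", "noise"]
print(engagement_score(hours_worn=10.5, speech_hours=3.0, env_log=day_log))
```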


These patient-centered tools may encourage people to use their hearing aids in difficult listening environments more often. They also can provide clinicians with the information they need to better optimize the hearing aid for a wider range of situations.


Physical Activity

Previous research suggests that modifiable risk factors for cardiovascular disease (CVD) may play a role in developing age-related hearing loss.34 Daily physical activity tracking has been promoted as a means to reduce cardiovascular risk, and studies have shown that achieving 10,000 steps per day reduces body mass index in aging individuals.35 A recent study evaluated the efficacy and effectiveness of Starkey Livio AI hearing aids in tracking step count in real-world conditions and reported that the hearing aids were more accurate than two wrist-worn activity tracking devices.3 The hearing aids were found to be feasible, consistent, and sensitive in detecting daily step counts.


In addition to physical steps, the American Heart Association, the American College of Cardiology, and the American College of Sports Medicine, among other organizations, have emphasized that sedentary behavior and physical inactivity are major modifiable CVD risk factors, especially in the aging population. A major emphasis has been directed at reducing CVD risk by promoting 30 minutes of daily exercise and reducing sedentary behavior.36 Additionally, the American College of Sports Medicine has recommended that daily flexibility exercises be completed to maintain joint range of movement and musculoskeletal strength.37 To that end, the Thrive application automatically tracks and displays daily steps, exercise, and standing (for at least 1 minute in each 1-hour period) to encourage hearing aid users to be more physically active and to mitigate the impact of CVD and potential comorbidity with hearing loss.32
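A minimal sketch of tracking the three activity measures against daily goals is shown below; the goals and equal weighting are illustrative assumptions rather than the Thrive application's formula.

```python
# Minimal sketch: steps, exercise minutes, and stand events scored against
# daily goals. Goals and weights are illustrative only.
def body_score(steps, exercise_minutes, stand_hours,
               step_goal=10_000, exercise_goal=30, stand_goal=12):
    parts = (min(steps / step_goal, 1.0),
             min(exercise_minutes / exercise_goal, 1.0),
             min(stand_hours / stand_goal, 1.0))
    return round(100 * sum(parts) / 3)

# e.g., 7,500 steps, 20 minutes of brisk activity, stood in 9 of 12 hours
print(body_score(7500, 20, 9))
```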


Fall Detection

Approximately 40% of adults aged 65 years and older fall once or more per year, resulting in serious morbidities, mortality, and healthcare costs.38 In addition, studies have reported a significant positive association between the severity of hearing loss and reports of falls, even when adjusting for demographic, cardiovascular, and vestibular balance function.39 Forward falls, backward falls, trips, slips, and falls to the side have all been frequently observed in aging adults.40 Starkey developed an ear-level fall detection algorithm, using IMU sensors embedded into custom or standard hearing aids, which is designed to be highly sensitive to these types of fall events. Once the hearing aids detect the occurrence of a fall, an alert message is automatically sent to previously designated contacts. If the wearer has recovered from a fall and does not need help, the alert can be cancelled within 60 seconds of the detection of the fall event. A recent study evaluated the sensitivity and specificity of the fall detection algorithm, based on acceleration rate, estimated falling distance, and impact magnitude, for bilateral hearing aids compared with a commercially available, neck-worn personal emergency response system.4 On average, the ear-worn fall detection system had comparable or higher sensitivity and specificity rates for fall detection than the neck-worn pendant for laboratory conditions simulating forward and backward falls and near falls (Fig. 11). These data suggest that the ear-worn system may provide a suitable alternative to more traditional neck-worn devices for detecting falls.
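To illustrate how the three cues mentioned above could be combined, the sketch below flags a fall when a brief near-free-fall phase in the accelerometer magnitude is followed by a sharp impact, and derives a rough falling-distance estimate. The sample rate and thresholds are assumptions, not the validated algorithm evaluated in the cited study.

```python
# Conceptual fall-detection sketch from accelerometer magnitude samples,
# using free-fall duration, estimated drop distance, and impact magnitude.
import numpy as np

FS = 100  # assumed IMU sample rate, Hz
G = 9.81

def detect_fall(accel_mag, free_fall_g=0.5, impact_g=2.5, min_fall_s=0.2):
    """accel_mag: acceleration magnitude in g, one value per sample."""
    below = accel_mag < free_fall_g                  # near free-fall phase
    fall_samples = int(min_fall_s * FS)
    for i in range(len(accel_mag) - fall_samples):
        window = below[i:i + fall_samples]
        impact = accel_mag[i + fall_samples:i + fall_samples + FS // 2]
        if window.all() and impact.size and impact.max() > impact_g:
            drop_m = 0.5 * G * (fall_samples / FS) ** 2   # rough distance estimate
            return {"fall": True, "est_drop_m": round(drop_m, 2)}
    return {"fall": False}

# Synthetic trace: 1 s standing, 0.3 s free fall, sharp impact, then rest.
trace = np.concatenate([np.ones(100), np.full(30, 0.1), [3.2], np.ones(100)])
print(detect_fall(trace))   # an alert could then be sent, cancellable for 60 s
```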


Figure 11 Measured fall detection and alert accuracy {[(true positives + true negatives)/total trials] x 100}, sensitivity {[true positives/(true positives + false negatives)] x 100}, and specificity {[true negatives/(true negatives + false positives)] x 100} for a popular neck-worn pendant (AutoAlert) versus Livio Edge AI with normal and high sensitivity. Accuracy, sensitivity, and specificity were compared across the three fall detection systems with McNemar's test for paired nominal data. Livio AI (normal sensitivity) was more accurate than AutoAlert [χ2(1) = 9.13, p = 0.002] and Livio AI (high sensitivity) [χ2(1) = 27.03, p < 0.001]; the difference in accuracy between Livio AI (high sensitivity) and AutoAlert was not significant [χ2(1) = 0.36, p = 0.550]. Livio AI (normal sensitivity) was significantly more sensitive than AutoAlert [χ2(1) = 9.98, p = 0.002] and Livio AI (high sensitivity) [χ2(1) = 29.00, p < 0.001]; the difference in sensitivity between Livio AI (high sensitivity) and AutoAlert was not significant [χ2(1) = 0.51, p = 0.47]. Livio AI (high sensitivity) was significantly more specific than Livio AI (normal sensitivity) [χ2(1) = 4.00, p = 0.045]. However, specificity differences were not statistically significant between Livio AI (normal sensitivity) and AutoAlert [χ2(1) = 3.57, p = 0.059] or between Livio AI (high sensitivity) and AutoAlert [χ2(1) = 1.00, p = 0.317].4
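For reference, the caption's three metrics can be computed directly from confusion-matrix counts, as in the short sketch below; the counts shown are placeholders rather than the trial numbers from the cited study.

```python
# Worked sketch of accuracy, sensitivity, and specificity from confusion counts.
def detection_metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    return {
        "accuracy_pct":    100 * (tp + tn) / total,
        "sensitivity_pct": 100 * tp / (tp + fn),
        "specificity_pct": 100 * tn / (tn + fp),
    }

print(detection_metrics(tp=46, tn=48, fp=2, fn=4))  # placeholder counts
```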


SUMMARY

This article provided an overview of Starkey's approach for incorporating AI, machine learning, edge computing, DNNs, and embedded sensors into modern state-of-the-art hearing aids and accessories. By focusing fundamentally on improving sound quality and speech intelligibility in quiet and noisy listening environments, while also connecting hearing aid use to overall health and wellness, today's hearing aids not only help hearing-impaired individuals hear, understand speech, and communicate better but also enable them to live healthier lives by actively tracking both physical and cognitive activities.


REFERENCES


1. Hsu J. Starkey's AI transforms hearing aids into smart wearables. IEEE Spectrum, 27 August 2018. Accessed May 12, 2021 at: https://spectrum.ieee.org/the-human-os/biomedical/devices/starkeys-ai-transforms-hearing-aid-into-smart-wearables

2. Fabry D, Rodemark K, Vase Legarth S, Crukley J, Pociecha A, Seitz-Paquette K. Evidence of Noise Management Preference for Starkey Hearing Aids. White Paper 2019. Accessed May 21, 2021 at: https://starkeypro.com/pdfs/white-papers/Evidence_of_Noise_Management_Preference.pdf

3. Rahme M, Folkeard P, Scollie S. Evaluating the accuracy of step tracking and fall detection in the Starkey Livio artificial intelligence hearing aids: a pilot study. Am J Audiol 2021;30(01):182–189

4. Burwinkel JR, Xu B, Crukley J. Preliminary examination of the accuracy of a fall detection device embedded into hearing instruments. J Am Acad Audiol 2020;31(06):393–403

5. Picou EM. MarkeTrak 10 (MT10) survey results demonstrate high satisfaction with and benefits from hearing aids. Semin Hear 2020;41(01):21–36

6. Sternberg RJ. Human Intelligence. Encyclopedia Britannica, November 6, 2020

7. Bhowmik A. Artificial intelligence: from pixels and phonemes to semantic understanding and interactions. Proc Int Display Workshops 2019(26):9–12

8. Bregman AS. Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press; 1990

9. Zhang T, Kindred JS. System for evaluating hearing assistance device settings using detected sound environment. 2011; U.S. Patent no. 7,986,790

10. Fabry D, Tchorz J. Results from a new hearing aid using “acoustic scene analysis”. Hearing J 2005;58 (04):30–36

11. Buchler M, Allegro S, Launer S, Dillier N. Sound classification in hearing aids inspired by auditory scene analysis. EURASIP J Appl Signal Process 2005;(18):2991–3002

12. Xiang JJ, McKinney MF, Fitz K, Zhang T. Evaluation of sound classification algorithms for hearing aid applications. In: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, 2010:185–188

13. Harianawala J, McKinney M, Fabry D. Intelligence at the Edge. White paper; 2020. Accessed May 12, 2021 at: https://starkeypro.com/pdfs/technicalpapers/Intelligence_at_the_Edge_White_Paper.pdf

14. Ten Hulzen RD, Fabry DA. Impact of hearing loss and universal masking in the COVID-19 era. Mayo Clin Proc 2020;95(10):2069–2072

15. Fabry D, Burns T, McKinney M, Bhowmik A. "Unmasking" benefits for hearing aid users in challenging listening environments. Hearing Review 2020;27(11):18–20

16. Corey RM, Jones U, Singer AC. Acoustic effects of medical, cloth, and transparent face masks on speech signals. 2020 arXiv:2008.04521. Accessed May 12, 2021 at: https://publish.illinois.edu/augmentedlistening/face-masks/

17. Goldin A, Weinstein B, Shiman N. How do medical masks degrade speech reception? Hearing Review 2020;27(05):8–9

18. Zhao Y, Wang D, Merks I, Zhang T. DNN-based enhancement of noisy and reverberant speech. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2016:6525–6529

19. Zhao Y, Xu B, Giri R, Zhang T. Perceptually guided speech enhancement using deep neural networks. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2018:5074–5078

20. Cook D. AI can now help you hear speech better. Hearing Loss Journal 2020. Accessed May 12, 2021 at: https://www.hearinglossjournal.com/ai-can-now-help-you-hear-speech/

21. Walsh K, Zakharenko V. 2.4 GHz Table Microphone. Starkey White paper 2020. Accessed May 12, 2021 at: https://home.starkeypro.com/pdfs/WTPR/SG/WTPR2787-00-EE SG/Table_Microphone_White_Paper.pdf

22. Lin FR, Metter EJ, O'Brien RJ, Resnick SM, Zonderman AB, Ferrucci L. Hearing loss and incident dementia. Arch Neurol 2011;68(02):214–220

23. Lin FR, Albert M. Hearing loss and dementia - Who is listening? Aging Ment Health 2014;18 (06):671–673

24. Livingston G, Huntley J, Sommerlad A, et al. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. Lancet 2020;396(10248):413–446

25. Ray J, Popli G, Fell G. Association of cognition and age-related hearing impairment in the English longitudinal study of ageing. JAMA Otolaryngol Head Neck Surg 2018;144(10):876–882

26. Takahashi G, Martinez CD, Beamer S, et al. Subjective measures of hearing aid benefit and satisfaction in the NIDCD/VA follow-up study. J Am Acad Audiol 2007;18(04):323–349

27. Jorgensen L, Novak M. Factors influencing hearing aid adoption. Semin Hear 2020;41(01):6–20

28. Mueller G, Carr K. 20Q: Consumer Insights on Hearing Aids, PSAPs, OTC Devices, and More from MarkeTrak 10. Audiology Online, March 16, 2020. Accessed May 15, 2021 at: https://www.audiologyonline.com/articles/20q-understanding-today-s-consumers-26648

29. Humes LE, Rogers SE, Main AK, Kinney DL. The acoustic environments in which older adults wear their hearing aids: insights from datalogging sound environment classification. Am J Audiol 2018;27(04):594–603

30. Solheim J, Hickson L. Hearing aid use in the elderly as measured by datalogging and self-report. Int J Audiol 2017;56(07):472–479

31. Laplante-Lévesque A, Nielsen C, Dons Jensen L, Naylor G. Patterns of hearing aid usage predict hearing aid. J Am Acad Audiol 2014;25:187–198

32. Howes C. Thrive Hearing Control: An App for a Hearing Revolution. Starkey White paper 2019. Accessed May 17, 2021 at: https://starkeypro.com/ pdfs/white-papers/Thrive_Hearing_Control.pdf

33. Starkey Thrive Care application. Accessed May 17, 2021 at: https://starkeypro.com/pdfs/quicktips/ Thrive_Care_App.pdf

34. Helzner EP, Patel AS, Pratt S, et al. Hearing sensitivity in older adults: associations with cardiovascular risk factors in the health, aging and body composition study. J Am Geriatr Soc 2011;59(06):972–979

35. McCormack G, Giles-Corti B, Milligan R. Demographic and individual correlates of achieving 10,000 steps/day: use of pedometers in a population-based study. Health Promot J Austr 2006;17(01):43–47

36. Lavie CJ, Ozemek C, Carbone S, Katzmarzyk PT, Blair SN. Sedentary behavior, exercise, and cardiovascular health. Circ Res 2019;124(05):799–815

37. Garber CE, Blissmer B, Deschenes MR, et al; American College of Sports Medicine. American College of Sports Medicine position stand. Quantity and quality of exercise for developing and maintaining cardiorespiratory, musculoskeletal, and neuromotor fitness in apparently healthy adults: guidance for prescribing exercise. Med Sci Sports Exerc 2011;43(07):1334–1359

38. Rubenstein LZ. Falls in older people: epidemiology, risk factors and strategies for prevention. Age Ageing 2006;35(02, Suppl 2):ii37–ii41

39. Lin FR, Ferrucci L. Hearing loss and falls among older adults in the United States. Arch Intern Med 2012;172(04):369–371

40. Crenshaw JR, Bernhardt KA, Achenbach SJ, et al. The circumstances, orientations, and impact locations of falls in community-dwelling older women. Arch Gerontol Geriatr 2017;73:240–247








