Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment.[1]
Overview
In contrast to traditional methods in audiology and hearing science research, computational audiology emphasizes predictive modeling and large-scale analytics ("big data") over inferential statistics and small-cohort hypothesis testing. Its aim is to translate advances in hearing science, data science, information technology, and machine learning into clinical audiological care. Research into hearing function and auditory processing in humans and relevant animal species represents the translatable science that supports this aim; research and development of more effective diagnostics and treatments represents the translational work.[2]
For people with hearing difficulties, tinnitus, hyperacusis, or balance problems, these advances might lead to more precise diagnoses, novel therapies, and advanced rehabilitation options including smart prostheses and e-Health/mHealth apps. For care providers, they can offer actionable knowledge and tools for automating parts of the clinical pathway.[3]
In computational audiology, models and algorithms are used to understand the principles that govern the auditory system, to screen for hearing loss, to diagnose hearing disorders, to provide rehabilitation, and to generate simulations for patient education, among other applications.
Computational models of hearing, speech and auditory perception
For decades, phenomenological and biophysical (computational) models have been developed to simulate characteristics of the human auditory system. Examples include models of the mechanical properties of the basilar membrane,[4] the electrically stimulated cochlea,[5][6] middle ear mechanics,[7] bone conduction,[8] and the central auditory pathway.[9] Saremi et al. (2016) compared seven contemporary models, including parallel filterbanks, cascaded filterbanks, transmission lines, and biophysical models.[10] More recently, convolutional neural networks (CNNs) have been trained to replicate human auditory function[11] or complex cochlear mechanics with high accuracy.[12] Although inspired by the interconnectivity of biological neural networks, the architecture of CNNs is distinct from the organization of the natural auditory system.
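As an illustration of the parallel-filterbank class of models, the following sketch implements a basic gammatone filterbank in Python. The filter order, bandwidth parameter, and ERB formula follow common textbook formulations; the channel count, centre frequencies, and normalization are arbitrary choices for demonstration and do not reproduce any of the specific models cited above.

```python
# Minimal sketch of a parallel gammatone filterbank, a common phenomenological
# model of basilar-membrane filtering. Parameters (order 4, b = 1.019, ERB scale)
# follow standard formulations; channel count and normalization are illustrative.
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth (Hz) at centre frequency fc (Hz)."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4, b=1.019):
    """Impulse response of one gammatone filter centred at fc (crudely peak-normalised)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    env = t ** (order - 1) * np.exp(-2.0 * np.pi * b * erb(fc) * t)
    ir = env * np.cos(2.0 * np.pi * fc * t)
    return ir / np.max(np.abs(ir))

def gammatone_filterbank(signal, fs, centre_freqs):
    """Filter a signal through a parallel bank of gammatone filters."""
    return np.stack([np.convolve(signal, gammatone_ir(fc, fs), mode="same")
                     for fc in centre_freqs])

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0.0, 0.2, 1.0 / fs)
    tone = np.sin(2 * np.pi * 1000 * t)            # 1 kHz probe tone
    cfs = np.geomspace(100, 8000, num=16)          # 16 log-spaced channels
    outputs = gammatone_filterbank(tone, fs, cfs)
    rms = np.sqrt(np.mean(outputs ** 2, axis=1))
    print(f"Most responsive channel: {cfs[np.argmax(rms)]:.0f} Hz")
```

As expected of a frequency-selective model, the channel whose centre frequency lies closest to the probe tone responds most strongly.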
e-Health / mHealth (connected hearing healthcare, wireless- and internet-based services)
Online pure-tone threshold audiometry (or screening) tests, electrophysiological measures such as distortion-product otoacoustic emissions (DPOAEs), and speech-in-noise screening tests are becoming increasingly available as tools to promote awareness, enable accurate early identification of hearing loss across ages, monitor the effects of ototoxicity and/or noise, guide ear and hearing care decisions, and support clinicians.[13][14] Smartphone-based tests have been proposed to detect middle ear fluid using acoustic reflectometry and machine learning.[15] Smartphone attachments have also been designed to perform tympanometry, an acoustic evaluation of the eardrum and middle ear.[16][17] Low-cost earphones attached to smartphones have also been prototyped to detect the faint otoacoustic emissions produced by the cochlea and perform neonatal hearing screening.[18][19]
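As a simplified illustration of the adaptive procedures behind such automated tests, the sketch below simulates a basic down-10/up-5 staircase for estimating a pure-tone threshold. The simulated listener, step sizes, and stopping rule are assumptions chosen for demonstration; a real online or app-based test would present calibrated tones and record actual responses.

```python
# Illustrative sketch of an adaptive pure-tone screening procedure. The "listener"
# is simulated with a hypothetical true threshold plus response noise.
import random

def simulated_response(level_db, true_threshold_db):
    """Listener 'hears' the tone if it is at or above their (noisy) threshold."""
    return level_db + random.gauss(0, 2) >= true_threshold_db

def staircase_threshold(true_threshold_db, start_db=40, step_down=10, step_up=5,
                        reversals_needed=6):
    """Down-10/up-5 staircase; threshold estimate = mean level at the reversals."""
    level, reversals, last_heard = start_db, [], None
    while len(reversals) < reversals_needed:
        heard = simulated_response(level, true_threshold_db)
        if last_heard is not None and heard != last_heard:
            reversals.append(level)                 # direction changed: a reversal
        last_heard = heard
        level += -step_down if heard else step_up   # quieter after "yes", louder after "no"
    return sum(reversals) / len(reversals)

if __name__ == "__main__":
    random.seed(0)
    for freq, true_thr in [(500, 15), (1000, 20), (4000, 45)]:
        est = staircase_threshold(true_thr)
        print(f"{freq} Hz: true {true_thr} dB HL, estimated {est:.1f} dB HL")
```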
Big data and AI in audiology and hearing healthcare
Collecting large numbers of audiograms (e.g. from databases held by the National Institute for Occupational Safety and Health (NIOSH)[20] or the National Health and Nutrition Examination Survey (NHANES)) gives researchers opportunities to find patterns of hearing status in the population[21][22] or to train AI systems that can classify audiograms.[23] Machine learning can also be used to model relationships between multiple factors, for example to predict depression from self-reported hearing loss[24] or to relate genetic profiles to self-reported hearing loss.[25] Hearing aids and wearables make it possible to monitor the user's soundscape or log usage patterns, data that can be used to automatically recommend settings expected to benefit the user.[26]
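The sketch below illustrates audiogram classification of this kind on synthetic data. The three categories (normal, noise notch, sloping high-frequency loss) and the rules used to generate them are illustrative assumptions, not clinical definitions, and stand in for the real databases named above.

```python
# Minimal sketch of training a classifier on audiograms, using synthetic data in
# place of a real database such as NIOSH or NHANES. Categories and generating
# rules are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FREQS = [500, 1000, 2000, 3000, 4000, 6000, 8000]   # audiogram test frequencies (Hz)

def synthetic_audiogram(kind, rng):
    base = rng.normal(10, 5, len(FREQS))                     # near-normal baseline (dB HL)
    if kind == "noise_notch":
        base[FREQS.index(4000)] += rng.normal(35, 5)         # notch around 4 kHz
        base[FREQS.index(3000)] += rng.normal(20, 5)
    elif kind == "sloping":
        base += np.linspace(0, 50, len(FREQS)) + rng.normal(0, 5, len(FREQS))
    return np.clip(base, -10, 110)

rng = np.random.default_rng(0)
kinds = ["normal", "noise_notch", "sloping"]
X = np.array([synthetic_audiogram(k, rng) for k in kinds for _ in range(300)])
y = np.array([k for k in kinds for _ in range(300)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy on synthetic audiograms: {clf.score(X_te, y_te):.2f}")
```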
Computational approaches to improving hearing devices and auditory implants
Computational methods to improve rehabilitation with auditory implants include approaches for improving music perception,[27] models of the electrode–neuron interface,[28] and an AI-based cochlear implant fitting assistant.[29]
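A related computational tool is the vocoder simulation of implant processing, used in research and for patient education to convey the coarse spectral coding delivered by a cochlear implant. The following is a minimal noise-vocoder sketch; the channel count, band edges, and filter design are illustrative assumptions and do not reproduce any clinical processing strategy.

```python
# Hedged sketch of a noise vocoder, often used to simulate cochlear implant
# processing. Band layout and filter choices are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(signal, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))                 # amplitude envelope of the band
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))  # band-limited noise
        out += envelope * carrier                        # envelope modulates the noise carrier
    return out / np.max(np.abs(out))

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0.0, 0.5, 1.0 / fs)
    speechlike = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 4 * t))
    sim = vocode(speechlike, fs)
    print("Vocoded signal samples:", sim.shape[0])
```

Real vocoder studies typically also low-pass filter the envelopes and match loudness across bands; those refinements are omitted here for brevity.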
Data-based investigations into hearing loss and tinnitus
Online surveys processed with machine learning-based classification have been used to diagnose somatosensory tinnitus.[30] Automated natural language processing (NLP) techniques, including unsupervised and supervised machine learning, have been used to analyze social media posts about tinnitus and to characterize the heterogeneity of its symptoms.[31][32]
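The sketch below illustrates the unsupervised side of such analyses: TF-IDF features with k-means clustering to group posts by symptom theme. The example "posts" are invented placeholders, and the choice of two clusters is arbitrary; this does not reproduce the pipelines of the cited studies.

```python
# Illustrative sketch of unsupervised NLP on tinnitus-related posts:
# TF-IDF features + k-means clustering. The posts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "high pitched ringing keeps me awake at night",
    "ringing gets louder when I clench my jaw",
    "constant hissing in both ears after the concert",
    "tinnitus changes when I turn my neck or press on my jaw",
    "sleep is impossible with this ringing and buzzing",
    "whooshing sound in one ear that pulses with my heartbeat",
]

X = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for label, post in sorted(zip(labels, posts)):
    print(label, "-", post)
```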
Diagnostics for hearing problems, acoustics to facilitate hearing
Machine learning has been applied to audiometry to create flexible, efficient estimation tools that determine an individual's auditory profile without excessive testing time.[33][34] Machine learning-based versions of other auditory tests, including detection of dead regions in the cochlea and estimation of equal-loudness contours,[35] have also been created.
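A hedged sketch of one such approach is shown below: a Gaussian-process classifier models the probability of detecting a tone as a function of frequency and level, and each new probe is placed where that probability is closest to 50%, i.e. near the estimated threshold contour. The simulated listener, kernel, and probe grid are illustrative assumptions rather than a reproduction of the published methods.

```python
# Sketch of machine-learning audiometry via active sampling with a
# Gaussian-process classifier. Listener model and settings are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def listener_hears(freq_log2, level_db):
    """Simulated listener whose threshold rises above 2 kHz."""
    true_threshold = 10 + 30 * max(freq_log2 - np.log2(2000), 0.0)
    return int(level_db + rng.normal(0, 3) > true_threshold)

# Candidate probe grid: 250 Hz to 8 kHz (log2 axis), -10 to 80 dB HL.
freqs = np.log2(np.geomspace(250, 8000, 25))
levels = np.linspace(-10, 80, 46)
grid = np.array([[f, l] for f in freqs for l in levels])

# Seed with clearly sub- and supra-threshold probes, then sample actively.
X, y = [], []
for f in (np.log2(500), np.log2(4000)):
    for l in (-10.0, 80.0):
        X.append([f, l]); y.append(listener_hears(f, l))

kernel = 1.0 * RBF(length_scale=[1.0, 10.0])
for _ in range(30):
    gp = GaussianProcessClassifier(kernel=kernel).fit(np.array(X), np.array(y))
    p_heard = gp.predict_proba(grid)[:, 1]
    f, l = grid[np.argmin(np.abs(p_heard - 0.5))]   # probe nearest the 50% contour
    X.append([f, l]); y.append(listener_hears(f, l))

print(f"Collected {len(X)} trials; last probe at "
      f"{2 ** X[-1][0]:.0f} Hz, {X[-1][1]:.0f} dB HL")
```

The design choice here is that informative probes are concentrated around the threshold contour, which is why such methods can estimate an auditory profile with far fewer trials than exhaustive grid testing.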
e-Research (remote testing, online experiments, new tools and frameworks)
Examples of e-Research tools include the Remote Testing Wiki,[36] Portable Automated Rapid Testing (PART), Ecological Momentary Assessment (EMA), and the NIOSH sound level meter. A number of additional tools can be found online.[37]
Software and tools
Software and large datasets are important for the development and adoption of computational audiology. As in many fields of scientific computing, much of computational audiology depends critically on open-source software and its continual maintenance, development, and advancement.[38]