Many people with hearing loss are unaware of it and do not benefit from available interventions such as hearing aids. This is due in part to limited access to qualified hearing healthcare providers in developing and developed countries alike. Automated audiometry, which has gained in popularity amidst rapid advances in telemedicine and mobile health, makes it possible to deliver basic hearing tests to remote or otherwise underserved communities at low cost. While this technology makes it possible to perform hearing assessments outside of a sound booth, many individuals administering the test are non-specialists and thus have a limited ability to interpret audiometric measurements and to make tailored recommendations. In this paper, we present the first steps towards the development of a flexible, supervised learning approach for the classification of audiograms in terms of their shape, severity, and symmetry. More specifically, we outline our approach to building a set of non-redundant, annotation-ready audiograms from a much larger dataset. In addition, we present a Rapid Audiogram Annotation Environment (RAAE) designed specifically for the collection of audiogram annotations from a large community of expert audiologists. Preliminary results indicate that annotations provided through our environment are consistent, leading to low intra-coder variability. Data gathered through the RAAE will form the basis of learning algorithms to help non-experts make better decisions from audiometric data.

13th IEEE International Symposium on Medical Measurements and Applications, MeMeA 2018
Department of Systems and Computer Engineering

Charih, F. (François), Bromwich, M. (Matthew), Lefrançois, R. (Renée), Mark, A. E. (Amy E.), & Green, J. (2018). Mining audiograms to improve the interpretability of automated audiometry measurements. In MeMeA 2018 - 2018 IEEE International Symposium on Medical Measurements and Applications, Proceedings. doi:10.1109/MeMeA.2018.8438746