Real-life performances


Development of new laboratory tests that better predict real-life performances

Working group established at the 2011 ICRA meeting in Toronto. Co-ordinated by Gitte Keidser.

Background:

Laboratory tests are frequently used to quantify the benefit that hearing devices, or specific features of devices, can provide for speech intelligibility. However, several studies have demonstrated that these tests often do not predict how beneficial the same devices or features will be to users in their everyday listening situations (e.g. Bentler et al., 1993; Walden et al., 2000; Cord et al., 2004; Cord et al., 2007). The discrepancy between outcomes measured in the laboratory and in the field is likely due to current laboratory tests lacking several ecologically relevant variables (Jerger, 2009), including dynamic variations in the spatial and level characteristics of the acoustic environment, the presence of reverberation, and co-ordinated visual information. In addition, conventional speech tests mainly require verbatim repetition of what was heard (auditory processing), while real communication also requires that meaning is extracted from the words, a skill that further depends on cognitive processing, attention, memory, and language (Pichora-Fuller and Singh, 2006).

There is thus a need for new kinds of laboratory tests that better represent real-world listening situations and more accurately predict a person's ability to engage in real-life conversations. Such tests would enable developers and researchers to obtain a more realistic view of the potential real-life benefit of a new invention and to optimise the algorithm much earlier than is currently possible. Better prediction of outcomes with a hearing device feature before its commercial release would also help clinicians and hearing aid users develop more realistic expectations of improved communication ability, ultimately resulting in increased success and satisfaction with amplification.

Objectives:

The long-term objectives are to:

  1. Create a database of critical real-world acoustic (and visual) environments that can be played back via different loudspeaker systems and that can be reliably used for hearing instrument development and evaluation (a minimal playback sketch follows this list), and 
  2. Develop new tests that better engage real-world mental and cognitive processes.
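
As a loose illustration of the playback side of objective 1 (and not the working group's actual toolchain), the sketch below decodes a first-order Ambisonics (B-format) scene recording to a horizontal loudspeaker ring with a basic projection decoder. The channel ordering, normalisation convention, and 8-loudspeaker geometry are all assumptions made for the example.

    # Illustrative sketch only: decode a first-order Ambisonics (B-format)
    # scene recording to a horizontal loudspeaker ring with a basic
    # projection ("sampling") decoder.  Channel order and normalisation
    # conventions vary; an ACN-style [W, X, Y, Z] ordering is assumed here.
    import numpy as np

    def projection_decoder(azimuths_deg):
        """Return an (n_speakers x 4) decoding matrix for [W, X, Y, Z]."""
        az = np.radians(np.asarray(azimuths_deg, dtype=float))
        n = len(az)
        D = np.column_stack([
            np.ones(n),    # W: omnidirectional component
            np.cos(az),    # X: front-back component
            np.sin(az),    # Y: left-right component
            np.zeros(n),   # Z: height, unused for a horizontal ring
        ])
        return D / n       # crude normalisation across loudspeakers

    # Hypothetical 8-loudspeaker ring at 45-degree spacing.
    speaker_az = np.arange(0, 360, 45)
    D = projection_decoder(speaker_az)

    # b_format: (n_samples x 4) array of W, X, Y, Z samples from a recording;
    # a placeholder signal stands in for a real recorded scene.
    b_format = 0.01 * np.random.randn(48000, 4)
    speaker_feeds = b_format @ D.T   # (n_samples x n_speakers) loudspeaker signals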

The short-term goals are to:

  1. Identify and describe a number of complex environments that are critical and challenging to hearing aid users and that can form the background for the design of future listening tests, and
  2. Obtain a better understanding of the effects of introducing ecologically relevant variations to the speech and noise variables (an illustrative threshold-tracking sketch follows this list).
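
Work towards short-term goal 2 typically involves measuring speech reception thresholds (SRTs) under varied speech and noise conditions. The sketch below shows a generic 1-down/1-up adaptive staircase of the kind commonly used to estimate a 50%-correct SRT; the step size, trial count, and scoring rule are placeholders and are not taken from the publications listed further down.

    # Illustrative sketch only: a generic 1-down/1-up adaptive staircase
    # commonly used to estimate the 50%-correct speech reception threshold
    # (SRT) in noise.  Step size, trial count and scoring rule are
    # placeholders, not the procedures used in the papers listed below.
    import random

    def run_srt_track(score_sentence, start_snr_db=0.0, step_db=2.0, n_trials=20):
        """score_sentence(snr_db) -> True if the sentence was repeated correctly."""
        snr = start_snr_db
        track = []
        for _ in range(n_trials):
            correct = score_sentence(snr)
            track.append(snr)
            snr += -step_db if correct else step_db   # 1-down/1-up rule
        half = track[n_trials // 2:]                  # discard the early trials
        return sum(half) / len(half)                  # SRT estimate in dB SNR

    # Toy listener whose probability of a correct response rises with SNR.
    def toy_listener(snr_db, true_srt_db=-4.0, slope=0.3):
        p_correct = 1.0 / (1.0 + 10 ** (-slope * (snr_db - true_srt_db)))
        return random.random() < p_correct

    print(round(run_srt_track(toy_listener), 1))   # converges roughly near -4 dB SNR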

Publications to date:

Best V, Keidser G, Buchholz JM, Freeston K. (2015). An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment. International Journal of Audiology 54(10):682-690. doi:10.3109/14992027.2015.1028656

Best V, Keidser G, Buchholz JM, Freeston K. (2016). Development and evaluation of a new ongoing speech comprehension test. International Journal of Audiology 55(1): 45-52. doi:10.3109/14992027.2015.1055835

Smeds K, Wolters F, Rung M. (2015). Estimates of signal-to-noise ratios in realistic sound scenarios. Journal of the American Academy of Audiology 26: 183-196.

The following papers are published in a 2016 special issue of the Journal of the American Academy of Audiology, 27(7).

Keidser G. Introduction to special issue: Towards ecologically valid protocols for the assessment of hearing and hearing devices. Pp. 502-503.

Naylor G. Theoretical issues of validity in the measurement of aided Speech Reception Threshold in noise for comparing nonlinear hearing aid systems. Pp. 504-514.

Best V, Keidser G, Freeston K, Buchholz J. A dynamic speech comprehension test for assessing real-world listening ability. Pp. 515-526.

Wolters F, Smeds K, Schmidt E, Christensen EK, Norup C. Common Sound Scenarios: A context-driven categorization of everyday sound environments for application in hearing-device research. Pp. 527-540.

Oreinos C, Buchholz JM. Evaluation of loudspeaker-based virtual sound environments for testing directional hearing aids. Pp. 541-556.

Grimm G, Kollmeier B, Hohmann V. Spatial acoustic scenarios in multichannel loudspeaker systems for hearing aid evaluation. Pp. 557-566.

Lau ST, Pichora-Fuller KM, Li K, Singh G, Campos J. Effects of hearing loss on dual-task performance in an audiovisual virtual reality simulation of listening while walking. Pp. 567-587.

Brimijoin O, Akeroyd M. The effects of hearing impairment, age, and hearing aids on the use of self-motion for determining front/back location. Pp. 588-600.

Weller T, Best V, Buchholz J, Young T. A method for assessing auditory spatial analysis in reverberant multitalker environments. Pp. 601-611.

Publications grouped by theme:

Sound environments and their virtual reality implementation:

Boyd AW, Whitmer WM, Akeroyd MA. (2013). Recording and analysis of head movements, interaural level and time differences in rooms and real-world listening scenarios. ISRA.
Wolters F, Smeds K, Schmidt E, Christensen EK, Norup C. (2016). Common Sound Scenarios: A context-driven categorization of everyday sound environments for application in hearing-device research. JAAA 27(7): 527-540.
Oreinos C, Buchholz JM. (2016). Evaluation of loudspeaker-based virtual sound environments for testing directional hearing aids. JAAA 27(7): 541-556.
Grimm G, Kollmeier B, Hohmann V. (2016). Spatial acoustic scenarios in multichannel loudspeaker systems for hearing aid evaluation. JAAA 27(7): 557-566.
Weisser A, Buchholz J, Oreinos C, Davila J, Galloway J, Beechey T, Freeston K, Keidser G. (in review). The Ambisonics Recordings of Typical Environments (ARTE) database. Acta Acustica united with Acustica.
Weisser A, Buchholz J, Keidser G. (in review). Complex Acoustic Environments: Review, Framework and Subjective Model. Trends in Hearing.

Speech-in-noise tasks and the SNR:

Aspeslagh S, Clark DF, Akeroyd MA, Brimijoin WO. (2014). Speech intelligibility can improve rapidly during exposure to a novel acoustic environment. JASA 135(4): 2227.
Aspeslagh S, Clark F, Akeroyd MA, Brimijoin WO. (2015). Measuring rapid adaptation to complex acoustic environments in normal and hearing-impaired listeners. JASA 137(4): 2229.
Best V, Keidser G, Buchholz JM, Freeston K. (2015). An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment. IJA 54(10): 682-690.
Smeds K, Wolters F, Rung M. (2015). Estimates of signal-to-noise ratios in realistic sound scenarios. JAAA 26: 183-196.
Naylor G. (2016). Theoretical issues of validity in the measurement of aided Speech Reception Threshold in noise for comparing nonlinear hearing aid systems. JAAA 27(7): 504-514.
Best V, Keidser G, Buchholz JM, Freeston K. (2016). Development and evaluation of a new ongoing speech comprehension test. IJA 55(1): 45-52.
Best V, Keidser G, Freeston K, Buchholz J. (2016). A dynamic speech comprehension test for assessing real-world listening ability. JAAA 27(7): 515-526.
Whitmer WM, McShefferty D, Akeroyd MA. (2016). On Detectable and Meaningful Speech-Intelligibility Benefits. In: van Dijk P et al. (eds), Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing, Advances in Experimental Medicine and Biology 894, pp. 447-455.
Best V, Keidser G, Freeston K, Buchholz J. (2017). Evaluation of a dynamic speech comprehension test in older listeners with hearing loss. IJA 57(3): 221-229.
Weisser A, Buchholz JM. (2019). Conversational speech levels and signal-to-noise ratios in realistic acoustic conditions. JASA 145(1): 349-360.

Localisation/spatial awareness:

Brimijoin WO, McShefferty D, Akeroyd MA. (2012). Undirected head movements of listeners with asymmetrical hearing impairment during a speech-in-noise task. Hearing Research 283(1-2): 162-168.
Brimijoin WO, Boyd AW, Akeroyd MA. (2013). The contribution of head movement to the externalization and internalization of sounds. PLoS ONE 8(12): e83068.
Brimijoin WO, Whitmer WM, McShefferty D, Akeroyd MA. (2014). The effect of hearing aid microphone mode on performance in an auditory orienting task. Ear and Hearing 35(5): e204-212.
Akeroyd MA, Whitmer WM. (2016). Spatial hearing and hearing aids. In: Hearing Aids, pp. 181-215. Springer, Cham.
Brimijoin O, Akeroyd M. (2016). The effects of hearing impairment, age, and hearing aids on the use of self-motion for determining front/back location. JAAA 27(7): 588-600.
Weller T, Best V, Buchholz J, Young T. (2016). A method for assessing auditory spatial analysis in reverberant multitalker environments. JAAA 27(7): 601-611.
Freeman TC, Culling JF, Akeroyd MA, Brimijoin WO. (2017). Auditory compensation for head rotation is incomplete. Journal of Experimental Psychology: Human Perception and Performance 43(2): 371.

Communication and associated (social) behaviours:

Vas V, Akeroyd MA, Hall DA. (2017). Data-Driven Synthesis of Research Evidence for Domains of Hearing Loss, as Reported by Adults With Hearing Loss and Their Communication Partners. Trends in Hearing 21: 2331216517734088.
Meis M, Krueger M, Gablenz P, et al. (2018). Development and Application of an Annotation Procedure to Assess the Impact of Hearing Aid Amplification on Interpersonal Communication Behavior. Trends in Hearing 22.
Beechey T, Buchholz J, Keidser G. (2018). Measuring communication difficulty through effortful speech production during conversation. Speech Communication 100: 18-29.
Hadley LV, Pickering MJ. (2018). A neurocognitive framework for comparing linguistic and musical interactions. Language, Cognition and Neuroscience. doi: 10.1080/23273798.20.
Beechey T, Buchholz J, Keidser G. (2019). Eliciting naturalistic conversations: a method for assessing communication ability, subjective experience and the impacts of noise and hearing impairment. Journal of Speech Language and Hearing Research 62(2): 470-484.
Hadley LV, Brimijoin WO, Whitmer WM, Naylor G. (in revision). Speech, movement, and gaze behaviours of dyads conversing in different levels of noise. Scientific Reports.
Beechey T, Buchholz J, Keidser G. (in review). Communication effort, hearing impairment and noise in conversation. Journal of Speech Language and Hearing Research.

Multi-sensory and cognitive interactions:

Neher T, Lunner T, Hopkins K, Moore BC. (2012). Binaural temporal fine structure sensitivity, cognitive function, and spatial speech recognition of hearing-impaired listeners. JASA 131(4): 2561-2564.
Lau ST, Pichora-Fuller KM, Li K, Singh G, Campos J. (2016). Effects of hearing loss on dual-task performance in an audiovisual virtual reality simulation of listening while walking. JAAA 27(7): 567-587.
Akeroyd MA, Whitmer WM, McShefferty D, Naylor G. (2016). Sound-source enumeration by hearing-impaired adults. JASA 139(4): 2210.
Devesse A, Dudek A, van Wieringen A, Wouters J. (2018). Speech intelligibility of virtual humans. IJA 57(12): 908-916. doi: 10.1080/14992027.2018.1511922
Devesse A, van Wieringen A, Wouters J. (2019). AVATAR Assesses Speech Understanding and Multitask Costs in Ecologically Relevant Listening Situations. Ear and Hearing, 2019 Jul 25. doi: 10.1097/AUD.0000000000000778
Devesse A, Wouters J, van Wieringen A. (2020). Age affects speech understanding and multitask costs. Ear and Hearing. doi: 10.1097/AUD.0000000000000848