Development and validation of the AzBio sentence lists

Anthony J. Spahr, Michael Dorman, Leonid M. Litvak, Susan Van Wie, Rene H. Gifford, Philipos C. Loizou, Louise M. Loiselle, Tyler Oakes, Sarah Cook

Research output: Contribution to journal › Article › peer-review

Abstract

Objectives: The goal of this study was to create and validate a new set of sentence lists for evaluating the speech perception abilities of hearing-impaired listeners and cochlear implant (CI) users. Our intention was to generate a large number of sentence lists with an equivalent level of difficulty for the evaluation of performance over time and across conditions.

Design: The AzBio sentence corpus includes 1000 sentences recorded from two female and two male talkers. The mean intelligibility of each sentence was estimated by processing it through a five-channel CI simulation and calculating the mean percent correct score achieved by 15 normal-hearing listeners. Sentences from each talker were sorted by percent correct score, and 165 sentences were selected from each talker and sequentially assigned to 33 lists, each containing 20 sentences (5 from each talker). List equivalency was validated by presenting all lists, in random order, to 15 CI users.

Results: Using sentence scores from the CI simulation study produced 33 lists of sentences with a mean score of 85% correct. The validation study with CI users revealed no significant differences in percent correct scores for 29 of the 33 sentence lists. However, individual listeners demonstrated considerable variability in performance on the 29 lists. The binomial distribution model was used to account for this inherent variability and to generate 95% confidence intervals for one- and two-list comparisons. A retrospective analysis of 172 instances in which research subjects had been tested on two lists within a single condition showed that 94% of results fell within these confidence intervals.

Conclusions: Using a five-channel CI simulation to estimate the intelligibility of individual sentences allowed the creation of a large number of sentence lists with an equivalent level of difficulty. The validation procedure with CI users found that 29 of 33 lists produced scores that were not statistically different. However, individual listeners demonstrated considerable variability in performance across lists. This variability was well described by the binomial distribution model, which was used to estimate the magnitude of change required to reach statistical significance when comparing scores from one and two lists per condition. Fifteen sentence lists have been included in the AzBio Sentence Test for use in the clinical evaluation of hearing-impaired listeners and CI users. An additional eight sentence lists have been included in the Minimum Speech Test Battery, to be distributed by the CI manufacturers for the evaluation of CI candidates.
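
The list-equating and significance-testing steps summarized in the abstract lend themselves to a short illustration. The sketch below is not the authors' code; it assumes Python with SciPy, a round-robin assignment of intelligibility-sorted sentences to lists, and 20 scored items per list (AzBio lists are typically scored by words, which would give a larger item count and narrower intervals). It shows how a binomial model yields 95% confidence intervals for one-list and two-list comparisons.

from scipy.stats import binom

# Hedged sketch, not the authors' released code. Assumed for illustration:
# Python + SciPy, round-robin list assignment, and 20 scored items per list.

def deal_into_lists(sentence_scores, n_lists=33):
    """Sort sentences by estimated intelligibility and deal them round-robin
    into lists, so each list gets a comparable spread of easy and hard items."""
    ordered = sorted(range(len(sentence_scores)), key=lambda i: sentence_scores[i])
    lists = [[] for _ in range(n_lists)]
    for rank, sentence_index in enumerate(ordered):
        lists[rank % n_lists].append(sentence_index)
    return lists

def binomial_ci_pct(score_pct, n_items, confidence=0.95):
    """95% interval (in percent correct) for a score observed on n_items
    independently scored items, under a binomial model."""
    p = score_pct / 100.0
    lo, hi = binom.interval(confidence, n_items, p)  # endpoints in item counts
    return 100.0 * lo / n_items, 100.0 * hi / n_items

if __name__ == "__main__":
    import random
    # Per the abstract: 165 sentences per talker are dealt into 33 lists.
    simulated_scores = [random.uniform(60, 100) for _ in range(165)]
    lists = deal_into_lists(simulated_scores)
    print(len(lists[0]))            # -> 5 sentences from this talker per list

    # A single-list score has a wider interval than a two-list score,
    # so a larger change is needed to reach significance with one list.
    print(binomial_ci_pct(70, 20))  # one list, 20 assumed items
    print(binomial_ci_pct(70, 40))  # two lists, 40 assumed items

Doubling the number of scored items narrows the interval, which is why the paper reports separate confidence limits for one-list and two-list comparisons.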

Original language: English (US)
Pages (from-to): 112-117
Number of pages: 6
Journal: Ear and Hearing
Volume: 33
Issue number: 1
DOIs
State: Published - Jan 2012

ASJC Scopus subject areas

  • Otorhinolaryngology
  • Speech and Hearing
