The ability to obtain reliable phonetic information from a talker’s face during speech perception is an important skill. However, lip-reading abilities vary considerably across individuals, and there is currently a lack of normative data on lip-reading abilities in young normal-hearing listeners. This letter describes results obtained from a visual-only sentence recognition experiment using CUNY sentences and provides the mean number of words correct and the standard deviation for different sentence lengths. Additionally, the method for calculating T-scores is provided to facilitate the conversion between raw and standardized scores. This metric can be used by clinicians and researchers in lip-reading studies as a benchmark for determining whether an individual’s lip-reading score falls within, above, or below the normal range.
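The T-score conversion referred to above follows the conventional definition of a standardized score rescaled to a mean of 50 and a standard deviation of 10. A minimal sketch in Python, assuming the normative mean and standard deviation for the relevant sentence length are taken from the article's tables (the names and example values below are illustrative placeholders, not the published norms):

# Standard T-score conversion (mean 50, SD 10): T = 50 + 10 * (x - M) / SD.
# norm_mean and norm_sd must come from the normative tables in the full article;
# the example values below are placeholders, not the published norms.
def t_score(raw_words_correct, norm_mean, norm_sd):
    z = (raw_words_correct - norm_mean) / norm_sd   # standardize the raw score
    return 50.0 + 10.0 * z                          # rescale to T-score units

# Hypothetical example: 32 words correct against placeholder norms of M = 28, SD = 6
# t_score(32, 28.0, 6.0)  -> approximately 56.7, i.e., above the normative mean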
July 2011
July 19 2011
Some normative data on lip-reading skills (L)
Nicholas A. Altieri;
Department of Psychology, The University of Oklahoma, 3100 Monitor Avenue, 2 Partners Place, Suite 280, Norman, Oklahoma 73072
David B. Pisoni;
Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th Street, Bloomington, Indiana 47405
James T. Townsend
Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th Street, Bloomington, Indiana 47405
a) Author to whom correspondence should be addressed. Electronic mail: nick.altieri@ou.edu
J. Acoust. Soc. Am. 130, 1–4 (2011)
Article history
Received: July 16, 2010
Accepted: May 04, 2011
Citation
Nicholas A. Altieri, David B. Pisoni, James T. Townsend; Some normative data on lip-reading skills (L). J. Acoust. Soc. Am. 1 July 2011; 130 (1): 1–4. https://doi.org/10.1121/1.3593376