Speech perception is a multimodal phenomenon, with what we see impacting what we hear. In this study, we examine how visual information impacts English listeners' segmentation of words from an artificial language containing no cues to word boundaries other than the transitional probabilities (TPs) between syllables. Participants (N=60) were assigned to one of three conditions: Still (still image), Trochaic (image loomed toward the listener at syllable onsets), or Iambic (image loomed toward the listener at syllable offsets). Participants also heard either an easy or difficult variant of the language. Importantly, both languages lacked auditory prosody. Overall performance in a two-alternative forced-choice (2AFC) test was better in the easy (67%) than in the difficult language (57%). In addition, across languages, listeners performed best in the Trochaic Condition (67%) and worst in the Iambic Condition (56%). Performance in the Still Condition fell in between (61%). We know English listeners perceive strong syllables as word onsets. Thus, participants likely found the Trochaic Condition easiest because the moving image led them to perceive temporally co-occurring syllables as strong. We are currently testing 6-year-olds (N=25) with these materials. Thus far, children's performance collapsed across conditions is similar to adults' (60%). However, visual information may impact children's performance less.
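To make the segmentation cue concrete: in artificial-language studies of this kind, the forward transitional probability TP(B | A) = count(A followed by B) / count(A) is high for syllable pairs inside a word and drops at word boundaries. The following is a minimal sketch of that computation; the syllable stream and word forms are invented for illustration and are not the languages used in this study.

```python
from collections import Counter

# Hypothetical syllable stream built from invented trisyllabic "words"
# (pa-bi-ku, ti-bu-do, go-la-tu); NOT the actual stimuli from the study.
stream = ["pa", "bi", "ku", "ti", "bu", "do", "pa", "bi", "ku",
          "go", "la", "tu", "ti", "bu", "do", "pa", "bi", "ku"]

# Count adjacent syllable pairs and how often each syllable precedes another.
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(a, b):
    """Forward transitional probability TP(b | a) = count(a, b) / count(a)."""
    return pair_counts[(a, b)] / syll_counts[a]

print(tp("pa", "bi"))  # within-word pair: 1.0
print(tp("ku", "ti"))  # across a word boundary: 0.5
```

Within-word pairs reach a TP of 1.0 here because each word's syllables always co-occur, while boundary-spanning pairs are diluted by the other words that can follow, which is the statistical contrast listeners are assumed to exploit.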
ICA 2013 Montreal, 2–7 June 2013, Montreal, Canada
Speech Communication: Session 2aSC: Linking Perception and Production (Poster Session), 2 June 2013
May 14 2013
What you see is what you hear: How visual prosody affects artificial language learning in adults and children
Jaspal Brar
Psychology, University of Toronto, 3359 Mississauga Road N., Mississauga, Ontario L5L 1C6, Canada
Michael D. Tyler
Psychology, MARCS Institute, University of Western Sydney, Sydney, New South Wales, Australia
Elizabeth K. Johnson
Psychology, University of Toronto, 3359 Mississauga Road N., Mississauga, Ontario L5L 1C6, Canada
Proc. Mtgs. Acoust. 19, 060068 (2013)
Article history
Received: January 25 2013
Accepted: February 01 2013
Citation
Jaspal Brar, Michael D. Tyler, Elizabeth K. Johnson; What you see is what you hear: How visual prosody affects artificial language learning in adults and children. Proc. Mtgs. Acoust. 2 June 2013; 19 (1): 060068. https://doi.org/10.1121/1.4800523