Does Vocabulary Knowledge Affect Lexical Segmentation in Adverse Conditions?
Bishell, Michelle (January 2015)
There is significant variability in listeners' ability to perceive degraded speech. Existing research suggests that vocabulary knowledge is one factor that differentiates better listeners from poorer ones, though the reason for this relationship is unclear. This study investigated whether a relationship exists between vocabulary knowledge and the type of lexical segmentation strategy listeners use in adverse conditions.

Error pattern analysis was conducted on an existing dataset of 34 normal-hearing listeners (11 males, 23 females, aged 18 to 35) who completed a speech recognition in noise task. Listeners were divided into a higher vocabulary (HV) and a lower vocabulary (LV) group based on their receptive vocabulary scores on the Peabody Picture Vocabulary Test (PPVT). Lexical boundary errors (LBEs) were analysed to examine whether the groups differed in their use of syllabic strength cues for lexical segmentation, and word substitution errors (WSEs) were analysed to examine patterns in phoneme identification. The type and number of errors were compared between the HV and LV groups.

Simple linear regression showed a significant relationship between vocabulary and performance on the speech recognition task. Independent samples t-tests showed no significant differences between the HV and LV groups in Metrical Segmentation Strategy (MSS) ratio or number of LBEs. Further independent samples t-tests showed no significant differences between the WSEs produced by the HV and LV groups in the degree of phonemic resemblance to the target, and there was no significant difference in the proportion of target phrases to which HV and LV listeners responded.

These results suggest that vocabulary knowledge does not affect lexical segmentation strategy in adverse conditions. Further research is required to investigate why higher vocabulary listeners appear to perform better on speech recognition tasks.
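The following is a minimal sketch, not the author's analysis code, of how the statistical comparisons described above could be run. It assumes a hypothetical per-listener table with invented column names: "ppvt" (receptive vocabulary score), "accuracy" (speech-in-noise recognition score), "mss_ratio" (MSS ratio), and "lbe_count" (number of LBEs). The median split into HV and LV groups is also an assumption; the abstract does not state the exact cut-off.

```python
# Sketch of the analyses in the abstract: linear regression of recognition
# performance on vocabulary, and independent-samples t-tests comparing
# hypothetical HV and LV groups. Column names and file name are assumptions.
import pandas as pd
from scipy import stats

listeners = pd.read_csv("listener_errors.csv")  # hypothetical file name

# Simple linear regression: vocabulary score vs. recognition performance.
regression = stats.linregress(listeners["ppvt"], listeners["accuracy"])
print(f"slope={regression.slope:.3f}, r={regression.rvalue:.3f}, "
      f"p={regression.pvalue:.4f}")

# Median split into higher- and lower-vocabulary groups (assumed cut-off).
median_ppvt = listeners["ppvt"].median()
hv = listeners[listeners["ppvt"] >= median_ppvt]
lv = listeners[listeners["ppvt"] < median_ppvt]

# Independent-samples t-tests comparing the groups on MSS ratio and LBEs.
for measure in ["mss_ratio", "lbe_count"]:
    t, p = stats.ttest_ind(hv[measure], lv[measure])
    print(f"{measure}: t={t:.2f}, p={p:.4f}")
```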