
The effects of talker familiarity on talker normalization

Nastaskin, Isabelle Rose 19 June 2019 (has links)
Despite the tremendous amount of phonetic variability in speech across talkers, listeners seem to effortlessly process acoustic signals while attending to both the linguistic content and talker-specific information. Previous studies have explained this phenomenon by providing evidence for talker normalization, a process in which our perceptual system strips away information about a talker so that the abstract, canonical linguistic units are all that remain for further linguistic analysis. However, it is currently unknown whether or how talker normalization is facilitated by familiar talkers. In this study, we investigated whether talker familiarity had an impact on the speed with which listeners perceived highly confusable words under varying contexts. Over the course of three days, listeners were explicitly trained on the voices of four talkers. Baseline and post-test measures were administered to determine the effect of talker training and to see whether this effect was impacted by the presence of a carrier phrase as well as the variability of talker presentation. The results demonstrated that listeners adapted to the talker regardless of familiarity. Having immediate information about a talker from preceding speech appeared to play a larger role in managing talker variability than long-term familiarity with the talker's voice. Our findings suggest that talker normalization is a feedforward process that does not rely on prior memory traces.

The Interaction of Language Proficiency and Talker Variability in Learning

Davis, Andrea Katharine January 2015 (has links)
Previous studies have shown that multiple talkers help learners build more robust word representations when the learner is not very experienced with the language (Richtsmeier et al., 2009; Rost & McMurray, 2009, 2010). This is likely because exposure to variation allows the learner to observe which acoustic dimensions vary unpredictably across talkers and which vary predictably. However, this account predicts that only learners who are less experienced with a language will benefit from multiple talkers, as more experienced learners should be able to use their prior knowledge about the language's speech sounds. Three word-learning experiments, with participants expected to have different levels of experience with the language, were performed to test this prediction. In the first experiment, English-acquiring children did benefit from multiple talkers in the production, but not the perception, of newly learned words. In the second experiment, native English-speaking adults did not benefit from learning from multiple talkers in either the perception or production of new words. Finally, second language-learning adults benefited from multiple talkers if they were less proficient speakers, but not if they were more proficient. Collectively, these results suggest that learning from multiple talkers is only beneficial for less experienced language learners.

From Perceptual Learning to Speech Production: Generalizing Phonotactic Probabilities in Language Acquisition

Richtsmeier, Peter Thomas January 2008 (has links)
Phonotactics are the restrictions on sound sequences within a word or syllable. They are an important cue for speech segmentation and a guiding force in the creation of new words. By studying phonotactics, we stand to gain a better understanding of why languages and speakers have phonologies. Through a series of four experiments, I will present data that sharpen our theoretical and empirical perspectives on what phonotactics are and how they are acquired. The methodology is similar to that used in studies of infant perception: children are familiarized with a set of words that contain either a few or many examples of a phonotactic sequence. The participants here are four-year-olds, and the test involves producing a target phonotactic sequence in a new word. Because the test words have not been encountered before, children must generalize what they learned in the familiarization phase and apply it to their own speech. By manipulating the phonetic and phonological characteristics of the familiarization items, we can determine which factors are relevant to phonotactic learning. In these experiments, the phonetic manipulation was the number of talkers who children heard produce a familiarization word. The phonological manipulation was the number of familiarization words that shared a phonotactic pattern. The findings include instances where learning occurs and instances where it does not. First, the data show that the well-studied correlation between phonotactic probability and production accuracy in child speech can be attributed, at least partly, to perceptual learning rather than to a practice effect of repeated articulation. Second, the data show that perceptual learning is a process of abstraction and of learning about those abstractions. It is not a matter of making connections between stored, unelaborated exemplars, because learning from the phonetic manipulation alone was insufficient for a phonotactic pattern to generalize. Furthermore, perceptual learning is not a matter of reorganizing pre-existing symbolic knowledge, because learning from words alone is insufficient. I argue that a model which learns abstract word-forms from direct phonetic experience, then learns phonotactics from the abstract word-forms, is the most parsimonious explanation of phonotactic learning.
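As a concrete illustration of the "phonotactic probability" construct this abstract relies on, here is a minimal sketch of how biphone probabilities are typically estimated from a lexicon and used to score a novel word. The lexicon and transcription scheme below are invented for illustration and are not taken from the thesis:

```python
# Positional biphone probability estimated from a toy lexicon -- a standard
# way of quantifying phonotactic probability in this literature.
from collections import Counter

def biphone_probabilities(lexicon):
    """Estimate the probability of each adjacent sound pair across a lexicon."""
    counts = Counter()
    total = 0
    for word in lexicon:  # each word is a tuple of phone symbols
        for pair in zip(word, word[1:]):
            counts[pair] += 1
            total += 1
    return {pair: n / total for pair, n in counts.items()}

def word_phonotactic_score(word, probs):
    """Average biphone probability; higher = more frequent sound sequences."""
    pairs = list(zip(word, word[1:]))
    return sum(probs.get(p, 0.0) for p in pairs) / len(pairs)

# Hypothetical toy lexicon in a made-up phone transcription
lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ae", "p"), ("t", "ae", "k")]
probs = biphone_probabilities(lexicon)
```

On this toy lexicon, a test word such as ("k", "ae", "t") scores higher than ("t", "ae", "k") because its biphones occur more often, which is the sense in which some familiarization patterns are "high probability" and others "low probability."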

Återgivning av ordlistor presenterade med alternerande röster: En jämförelse mellan två återgivningsinstruktioner (Recall of word lists presented in alternating voices: a comparison of two recall instructions)

Enhorn, Cina January 2008 (has links)
In many earlier investigations, a recall advantage has been found for auditory lists spoken in a single voice over lists spoken in two alternating voices. One proposed explanation is an organization strategy that makes recall of alternating-voice lists difficult: same-voice words are sorted into same-voice groups at encoding. Based on this proposition, it was assumed that voice-by-voice recall would be better than recall in order of presentation, as the recall instruction would then be in concordance with the organization of items in memory. The present experiment tested and was unable to support this hypothesis. However, an intriguing interaction between recall instruction and the sex of the participants was found, indicating that males perform worse under the voice-by-voice recall instruction than in serial recall, while females' performance did not differ between the two recall instructions. Implications of these results are discussed.

Psychometric functions of clear and conversational speech for young normal hearing listeners in noise

Smart, Jane 01 June 2007 (has links)
Clear speech is a form of communication that talkers naturally use when speaking in difficult listening conditions or with a person who has a hearing loss. On average, clear speech provides listeners with hearing impairments an intelligibility benefit of 17 percentage points over conversational speech (Picheny, Durlach, & Braida, 1985). In addition, it provides increased intelligibility in various listening conditions (Krause & Braida, 2003, among others), with different stimuli (Bradlow & Bent, 2002; Gagne, Rochette, & Charest, 2002; Helfer, 1997, among others), and across listener populations (Bradlow, Kraus, & Hayes, 2003, among others). Recently, researchers have attempted to compare their findings on clear and conversational speech, at slow and normal rates, with results from other investigators' studies in an effort to determine the relative benefits of clear speech across populations and environments. However, relative intelligibility benefits are difficult to determine unless baseline performance levels can be equated, which means listener psychometric functions for clear speech are needed. The purpose of this study was to determine how speech intelligibility, measured as the percentage of key words correct in nonsense sentences by young adults, varies with changes in speaking condition, talker, and signal-to-noise ratio (SNR). Forty young, normal-hearing adults were presented with grammatically correct nonsense sentences at five SNRs. Each listener heard a total of 800 sentences in four speaking conditions: clear and conversational styles, at slow and normal rates (i.e., clear/slow, clear/normal, conversational/slow, and conversational/normal). Overall results indicate that clear/slow and conversational/slow were the most intelligible conditions, followed by clear/normal and then conversational/normal. Moreover, the average intelligibility benefit for the clear/slow, clear/normal, and conversational/slow conditions (relative to conversational/normal) was maintained across an SNR range of -4 to 0 dB in the middle, or linear, portion of the psychometric function. However, when results are examined by talker, differences are observed in the benefit provided by each condition and in how that benefit varies across noise levels. To counteract talker variability, research with a larger number of talkers is recommended for future studies.
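The abstract's point about equating baseline performance can be sketched with a logistic psychometric function, a standard model of intelligibility as a function of SNR. The parameter values below are invented for illustration and are not the thesis's fitted values:

```python
import math

def psychometric(snr_db, midpoint_db, slope):
    """Logistic psychometric function: proportion of key words correct vs. SNR."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - midpoint_db)))

# Hypothetical parameters: clear speech shifts the function toward poorer SNRs
clear = lambda snr: psychometric(snr, midpoint_db=-6.0, slope=0.8)
conversational = lambda snr: psychometric(snr, midpoint_db=-4.0, slope=0.8)

# Benefit at one fixed SNR (a percentage-point difference)...
benefit_at_minus4 = clear(-4.0) - conversational(-4.0)

# ...versus an equated-performance comparison: for this logistic, 50% correct
# occurs exactly at the midpoint, so clear speech reaches the same score at a
# 2 dB poorer SNR. The two ways of expressing "benefit" are not interchangeable.
```

This is why comparing benefits across studies is hard without psychometric functions: a fixed-SNR difference in percentage points depends on where each listener sits on the curve, while the horizontal (dB) shift does not.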

Processing Speaker Variability in Spoken Word Recognition: Evidence from Mandarin Chinese

Zhang, Yu 20 September 2017 (has links)
No description available.
