21

Invariant patterns in articulatory movements

Bonaventura, Patrizia 22 December 2003 (has links)
No description available.
22

Prosodic constituent structure and anticipatory pharyngealisation in Libyan Arabic

Maiteq, Tareq Bashir January 2013 (has links)
This study examines anticipatory pharyngealisation (i.e., emphasis) in Libyan Arabic across a hierarchy of prosodic boundary levels (syllable vs. word vs. phonological phrase vs. intonation phrase ‘IP’) in order to quantify the magnitude of anticipatory pharyngealisation and to identify its planned domain. The acoustic manifestation of pharyngealisation is a lowering of the second formant (F2) in pharyngealised contexts compared to their plain cognates. To test speech production models of how pharyngealisation is anticipated, F2 measurements were taken at the onset, mid and offset points of both vowels (V) in a word-final VCV sequence, in the context [VbV # Emphatic trigger]. The strength of [#], a prosodic boundary, was varied syntactically to manipulate the presumed hierarchical strength of that boundary from zero (where the VbV and the trigger are in the same word) up to an intonational phrase boundary. We expect that the stronger the boundary, the greater the resistance to the spread of pharyngealisation. The duration of the final vowel (i.e., the pre-trigger vowel) was also measured to assess whether pharyngealisation magnitude on it and on the first vowel is influenced by temporal proximity to the emphatic trigger. Results show (1) that within word boundaries pharyngealisation effects are present on both vowels, (2) that there are effects of pharyngealisation on the final (pre-trigger) vowel across word and phrase boundaries, and (3) that there is no evidence of pharyngealisation across an IP boundary. An examination of the pre-trigger vowel + pause duration suggests that the lack of coarticulatory effects on the pre-trigger vowel across an IP boundary may be due to the temporal distance from the trigger: all tokens in this condition had a pre-trigger pause. For the word and phrase boundary conditions, F2 was higher the greater the temporal distance from the pharyngealised trigger. These results suggest that anticipatory pharyngealisation is qualitatively different within the word as compared to across word boundaries: the magnitude of pharyngealisation is categorical within word boundaries, and gradient across prosodic boundaries higher than the word. These findings suggest that pharyngealisation within the word is phonological, whereas across word boundaries it is primarily a phonetic process, conditioned by temporal proximity to the pharyngealised trigger. Results also show that the planned domain of [pharyngealisation] is the word. However, additional phonetic pharyngealisation effects can extend across word boundaries as a result of coarticulation.
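As a rough illustration of how F2-based pharyngealisation measures of this kind can be computed, the sketch below takes F2 tracks assumed to have been extracted already (for example, in Praat) and returns F2 at vowel onset, midpoint and offset, together with the plain-minus-pharyngealised difference at each point. It is a minimal sketch under those assumptions, not the author's analysis pipeline; the function names, data layout, and example numbers are hypothetical.

```python
# Minimal sketch (not the author's pipeline): quantify F2 lowering at
# vowel onset, midpoint and offset from pre-extracted formant tracks.
# Each track is assumed to be a list of (time_s, f2_hz) samples for one
# vowel, e.g. exported from Praat; names are hypothetical.
from bisect import bisect_left
from typing import List, Tuple

Track = List[Tuple[float, float]]  # (time in s, F2 in Hz)

def f2_at(track: Track, t: float) -> float:
    """Return the F2 value at the sample closest to time t."""
    times = [pt[0] for pt in track]
    i = min(bisect_left(times, t), len(track) - 1)
    if i > 0 and abs(times[i - 1] - t) < abs(times[i] - t):
        i -= 1
    return track[i][1]

def onset_mid_offset_f2(track: Track) -> Tuple[float, float, float]:
    """F2 at the onset, temporal midpoint, and offset of the vowel."""
    t_on, t_off = track[0][0], track[-1][0]
    t_mid = (t_on + t_off) / 2.0
    return f2_at(track, t_on), f2_at(track, t_mid), f2_at(track, t_off)

def f2_lowering(plain: Track, pharyngealised: Track):
    """Plain-minus-pharyngealised F2 difference at each measurement point.

    Positive values indicate F2 lowering in the pharyngealised context,
    the acoustic hallmark of emphasis described in the abstract."""
    p = onset_mid_offset_f2(plain)
    e = onset_mid_offset_f2(pharyngealised)
    return tuple(pv - ev for pv, ev in zip(p, e))

# Example with synthetic numbers, for illustration only:
plain_track = [(0.00, 1800.0), (0.03, 1750.0), (0.06, 1700.0)]
emph_track = [(0.00, 1400.0), (0.03, 1300.0), (0.06, 1250.0)]
print(f2_lowering(plain_track, emph_track))  # (400.0, 450.0, 450.0)
```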
23

Speech motor control variables in the production of voicing contrasts and emphatic accent

Mills, Timothy Ian Pandachuck January 2009 (has links)
This dissertation looks at motor control in speech production. Two specific questions emerging from the speech motor control literature are studied: the question of articulatory versus acoustic motor control targets, and the question of whether prosodic linguistic variables are controlled in the same way as segmental linguistic variables. In the first study, I test the utility of whispered speech as a tool for addressing the question of articulatory or acoustic motor control targets. Research has been done probing both sides of this question. The case for articulatory specifications is developed in depth in the Articulatory Phonology framework of Haskins researchers (e.g. Browman & Goldstein 2000), based on the task-dynamic model of control presented by Saltzman & Kelso (1987). The case for acoustic specifications is developed in the work of Perkell and others (e.g. Perkell, Matthies, Svirsky & Jordan 1993, Guenther, Espy-Wilson, Boyce, Matthies, Zandipour & Perkell 1999, Perkell, Guenther, Lane, Matthies, Perrier, Vick, Wilhelms-Tricarico & Zandipour 2000). It has also been suggested that some productions are governed by articulatory targets while others are governed by acoustic targets (Ladefoged 2005). This study involves two experiments. In the first, I make endoscopic video recordings of the larynx during the production of phonological voicing contrasts in normal and whispered speech. I found that the glottal aperture difference between voiced obstruents (i.e., /d/) and voiceless obstruents (i.e., /t/) in normal speech was preserved in whispered speech. Of particular interest was the observation that phonologically voiced obstruents tended to exhibit a narrower glottal aperture in whisper than vowels, which are also phonologically voiced. This suggests that the motor control target for voicing is different for vowels than for voiced obstruents. A perceptual experiment using the speech material from the endoscopic recordings elicited judgements on whether listeners could discriminate phonological voicing in whisper, in the absence of non-laryngeal cues such as duration. I found that perceptual discrimination in whisper, while lower than that for normal speech, was significantly above chance. Together, the perceptual and the production data suggest that whispered speech removes neither the acoustic nor the articulatory distinction between phonologically voiced and voiceless segments. Whisper is therefore not a useful tool for probing the question of articulatory versus acoustic motor control targets. In the second study, I look at the multiple parameters contributing to relative prominence, to see whether they are controlled in a qualitatively similar way to the parameters that bite-block studies have shown to contribute to labial closure or vowel height. I vary prominence by eliciting nuclear accents with a contrastive and a non-contrastive reading. Prominence in this manipulation is found to be signalled by f0 peak, accented syllable duration, and peak amplitude, but not by vowel de-centralization or spectral tilt. I manipulate the contribution of f0 in two ways. The first is by eliciting the contrastive and non-contrastive readings in questions rather than statements. This reduces the f0 difference between the two readings. The second is by eliciting the contrastive and non-contrastive readings in whispered speech, thus removing the acoustic f0 information entirely.
In the first manipulation, I find that the contributions of both duration and amplitude to signalling contrast are reduced in parallel with the f0 contribution. This behaviour differs qualitatively from that reported in other motor control studies; generally, when one variable is manipulated, others either act to compensate or do not react at all. It would seem, then, that this prosodic variable is controlled in a different manner from other speech motor targets that have been examined. In the whisper manipulation, I find no response in duration or amplitude to the manipulation of f0. This result suggests that, as in the endoscopy study, whisper may not be an effective means of perturbing laryngeal articulations.
24

Cognates, competition and control in bilingual speech production

Bond, Rachel Jacqueline, Psychology, Faculty of Science, UNSW January 2005 (has links)
If an individual speaks more than one language, there are always at least two ways of verbalising any thought to be expressed. The bilingual speaker must then have a means of ensuring that their utterances are produced in the desired language. However, prominent models of speech production are based almost exclusively on monolingual considerations and require substantial modification to account for bilingual production. A particularly important feature to be explained is the way bilinguals control the language of speech production: for instance, preventing interference from the unintended language, and switching from one language to another. One recent model draws a parallel between bilinguals’ control of their linguistic system and the control of cognitive tasks more generally. The first two experiments reported in this thesis explore the validity of this model by comparing bilingual language switching with a monolingual switching task, and with the broader task-switching literature. Switch costs did not conform to the predictions of the task-set inhibition hypothesis in either experiment, as the ‘paradoxical’ asymmetry of switch costs was not replicated and some conditions showed benefits, rather than costs, for switching between languages or tasks. Further experiments combined picture naming with negative priming and semantic competitor priming paradigms to examine the role of inhibitory and competitive processes in bilingual lexical selection. Each experiment was also conducted in a parallel monolingual version. Very little negative priming was evident when speaking the second language, but the effects of interlingual cognate status were pronounced. There were some indications of cross-language competition at the level of lexical selection: participants appeared unable to suppress the irrelevant language, even when doing so would make the task easier. Across all the experiments, there was no evidence for global inhibition of the language-not-in-use during speech production. Overall results were characterised by a remarkable flexibility in the mechanisms of bilingual control. A striking dissociation emerged between the patterns of results for cognate and non-cognate items, which was reflected throughout the series of experiments and implicates qualitative differences in the way these lexical items are represented and interconnected.
25

Electropalatographic investigation of normal Cantonese speech: a qualitative and quantitative analysis

Kwok, Chui-ling, Irene. January 1992 (has links)
Thesis (M.Ed.)--University of Hong Kong, 1992. Includes bibliographical references (leaves 63-70). Also available in print.
26

Speech Production in Deaf Children Receiving Cochlear Implants: Does Maternal Sensitivity Play a Role?

Grimley, Mary Elizabeth 01 January 2008 (has links)
The current study sought to examine predictors of language acquisition for deaf children who received cochlear implants in a large, multi-center trial. General maternal sensitivity as well as two specific types of maternal sensitivity, cognitive and linguistic stimulation, were all evaluated in relation to speech production. Characteristics of the family and child (e.g. maternal education, family income, age at implantation, etc.) were also evaluated. The hypotheses tested were: 1) child age at implantation and gender, maternal education, and family income were expected to predict speech production across 6 and 12 months post-implantation, 2) both Cognitive and Linguistic Stimulation were expected to predict the growth of speech production at 6 and 12 months post-implantation, and 3) Cognitive and Linguistic Stimulation were expected to predict speech production above and beyond that predicted by general Maternal Sensitivity. Results indicated that, of the demographic variables, only child age at implantation was a significant predictor of speech production. Cognitive and linguistic stimulation were significantly associated with the development of speech production in the first year following activation of the implant. Furthermore, these important maternal behaviors accounted for gains in speech production beyond that accounted for by general maternal sensitivity. These findings have several clinical implications, including the development of formalized training for parents of children who receive cochlear implants.
27

Production and Perception of the Epenthetic Vowel in Obstruent + Liquid Clusters in Spanish: an Analysis of the Prosodic and Phonetic Cues Used by L1 and L2 Speakers

Ramírez Vera, Carlos Julio 31 August 2012 (has links)
This study hypothesizes that the Epenthetic Vowel (EV) that occurs in Spanish consonant clusters, although produced unconsciously, is part of the articulatory plan of the speaker. As part of the plan, the epenthetic vowel occurs more often in the least perceptually recoverable contexts in order to enhance them. To achieve a better understanding of the role of the epenthetic vowel, this study shows that the linguistic and phonotactic contexts condition the occurrence of these vowels. Specifically, it argues that linguistic and phonotactic contexts that are perceptually weak compel a significantly higher occurrence of EVs. The EV was analyzed from both production and perceptual standpoints. The results show that from the production standpoint, the occurrence of the EV is affected by the type of liquid that forms the clusters: in clusters with /r/ the variables that made a statistical contribution were post-tonic position (odds ratio, 4.46), and voiceless consonants (odds ratio, 1.42). In the case of clusters with /l/ an EV has a higher probability of occurring in the context of bilabial consonants (odds ratio, 4.19), and voiceless consonants (odds ratio, 1.3). As for the effects of speech rate on the duration of EVs, the results show that speech rate accounts for 14% of the variation in an EV’s length. From the standpoint of perception, listening was divided into the tasks of perceptual identification and perceptual discrimination. The results show that the strongest predictor is the interaction voiceless x post-tonic position (odds ratio, 4.8). For the identification of the Cr clusters, the strongest predictor is the context of voiceless consonants (odds ratio, 4.42). Regarding identification of the Cl clusters, the strongest predictors are the tonic position (odds ratio, 1.54) and the labial place of articulation (odds ratio, 1.39). With regard to the discrimination of the Cr clusters, the strongest predictors for perceptual recoverability are the interaction voiceless x post-tonic position (odds ratio, 2.22), and the labial place of articulation (odds ratio, 1.37), while for the Cl cluster, the strongest predictors are the tonic position (odds ratio, 5.83) and voiceless consonants (odds ratio, 3).
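For orientation, odds ratios of the kind reported above are typically obtained by exponentiating the coefficients of a binary logistic regression. The sketch below fits such a model on synthetic data with statsmodels; it is a generic illustration, not the author's model or dataset, and the variable names, formula, and effect sizes are assumptions.

```python
# Generic sketch: logistic regression of epenthetic-vowel occurrence on
# (synthetic) voicing and stress-position predictors, with coefficients
# exponentiated into odds ratios. Not the author's model or data; all
# names and numbers are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Synthetic illustrative predictors: 1 = voiceless C1, 1 = post-tonic cluster.
voiceless = rng.integers(0, 2, n)
post_tonic = rng.integers(0, 2, n)
# Generate EV presence from a known logistic model so the fit converges.
logit_p = -0.5 + 0.4 * voiceless + 1.2 * post_tonic
ev_present = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"ev_present": ev_present,
                   "voiceless": voiceless,
                   "post_tonic": post_tonic})

# Main effects plus the voiceless x post-tonic interaction, mirroring the
# kinds of predictors discussed in the abstract.
model = smf.logit("ev_present ~ voiceless * post_tonic", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)
print(odds_ratios)  # values > 1 mean the context favours an EV
```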
29

All cumulative semantic interference is not equal: A test of the Dark Side Model of lexical access

Walker Hughes, Julie 16 September 2013 (has links)
Language production depends upon the context in which words are named. Renaming previous items results in facilitation while naming pictures semantically related to previous items causes interference. A computational model (Oppenheim, Dell, & Schwartz, 2010) proposes that both facilitation and interference are the result of using naming events as “learning experiences” to ensure future accuracy. The model successfully simulates naming data from different semantic interference paradigms by implementing a learning mechanism that creates interference and a boosting mechanism that resolves interference. This study tested this model’s assumptions that semantic interference effects in naming are created by learning and resolved by boosting. Findings revealed no relationship between individual performance across semantic interference tasks, and measured learning and boosting abilities did not predict performance. These results suggest that learning and boosting mechanisms do not fully characterize the processes underlying semantic interference when naming.
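To make the learning-plus-boosting idea concrete, the toy sketch below implements the kind of mechanism the abstract describes: naming a picture strengthens feature-to-word connections for the named word and weakens them for semantically related competitors, and a booster amplifies activations until one word wins, with the number of boosting cycles standing in for response time. This is an illustrative reconstruction, not the Oppenheim, Dell & Schwartz (2010) implementation; the vocabulary, features, parameter values, and selection rule are all assumptions.

```python
# Toy sketch of an error-driven learning account of semantic interference,
# in the spirit of the model described above (not the published
# implementation; parameters and names are illustrative).
import numpy as np

WORDS = ["cat", "dog", "horse", "chair"]
# One shared category feature plus one unique feature per word; "chair"
# is the semantically unrelated control item.
FEATURES = ["animal", "is_cat", "is_dog", "is_horse", "is_chair"]
SEMANTICS = np.array([[1, 1, 0, 0, 0],   # cat
                      [1, 0, 1, 0, 0],   # dog
                      [1, 0, 0, 1, 0],   # horse
                      [0, 0, 0, 0, 1]],  # chair
                     dtype=float)

W = 0.3 * SEMANTICS.T       # feature -> word connection weights
LEARNING_RATE = 0.2
BOOST_GAIN = 1.1            # multiplicative boost applied each cycle
CRITERION = 0.5             # activation lead required for selection

def name_picture(target: str) -> int:
    """Name one picture; return the number of boosting cycles (RT proxy)."""
    t = WORDS.index(target)
    features = SEMANTICS[t]
    raw = features @ W                      # activation spreads to all words

    # Booster: amplify activations until the leader's lead reaches CRITERION.
    activation, cycles = raw.copy(), 0
    while np.sort(activation)[-1] - np.sort(activation)[-2] < CRITERION:
        activation *= BOOST_GAIN
        cycles += 1

    # Error-driven learning on the raw activations: push the target towards
    # 1 and its co-activated competitors towards 0. Weakening the shared
    # feature-to-competitor connections is what makes later, semantically
    # related trials harder to resolve.
    error = -raw
    error[t] = 1.0 - raw[t]
    W[:] += LEARNING_RATE * np.outer(features, error)
    return cycles

for word in ["cat", "dog", "horse", "chair"]:
    print(f"{word}: {name_picture(word)} boosting cycles")
```

Run as written, the two items related to a previously named picture ("dog", "horse") need more boosting cycles than the first animal or the unrelated control, which is the interference pattern this class of model is meant to produce.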
30

Measuring phonetic convergence: segmental and suprasegmental speech adaptations during native and non-native talker interactions

Rao, Gayatree Nandan 10 February 2014 (has links)
Phonetic convergence (PC) is speech-specific accommodation characterized by an increase in similarity in a dyad’s speech patterns due to an interaction. Previous research has demonstrated that PC occurs in dyads during various interactive tasks (e.g. map completion and picture matching) and in cross-linguistic conditions (e.g. dyads who speak the same or different native language) (Pardo, 2006; Kim et al., 2011). Studies suggest that speakers who are closer in linguistic distance (i.e. share the same native language) are more likely to converge than speakers who are far apart (i.e. speak different native languages) (Kim et al., 2011). However, interdialectal conditions, where speakers use different national dialects of the same language, have been studied to a far lesser extent (Babel, 2010). Similarly, studies have examined both segmental and suprasegmental features that are susceptible to PC, but rhythm has not been studied extensively (Krivokapic, 2013; Rao et al., 2011). Though initial studies postulated that PC is the result of either automatic or social processes, more current research suggests that a combination of both kinds of processes may be better able to account for PC (Goldinger, 1997; Shepard et al., 2001; Babel, 2009a). The current dissertation uses novel measures such as Interlocutor Similarity and EMS + centroid to implicate global properties of vowels and rhythm, respectively, as acoustic correlates of PC. Moreover, it finds that speakers showed both convergence and divergence in vowels and rhythm as moderated by their language background. Close interactions between native speakers of American English (AE) resulted in convergence, whereas interdialectal interactions (between AE and Indian English speakers) and mixed language interactions (between native and non-native speakers of AE who are native speakers of SP) resulted in both convergence and divergence. The results from this study may shed light on how speakers attenuate the highly variable nature of speech by adapting speech patterns to aid intelligibility and information sharing (Shepard et al., 2001), and on how this attenuation is moderated by social demands such as identity and cultural distinctiveness.
