1

Speech-enabled application development for young children

Nicol, Anthony January 2005
There are several activities in the educational development of young children which require them to speak aloud to a parent or teacher. High pupil-teacher ratios and modern lifestyles limit the time available for one-to-one interaction, so the benefits of enabling a computer to assist in this area are significant. Several large international research projects are attempting to implement customised systems with the aim of producing automated reading tutors within the next few years. This thesis considers a different approach: it tests the feasibility of using commercial speech recognition technology with young children. Commercial technology has the advantage of being available now, and it has matured to the point where standards exist that allow a speech application to switch easily between alternative recognition engines if required. Recognition accuracy needs to be measurable; to simplify and disambiguate its measurement, a new metric has been developed. Improvements in recognition accuracy have been found through experimentation. The experiments require a large amount of speech data from children, so a set of tools has been developed to collect a speech corpus from three different regions of the country and then automatically measure recognition effectiveness under different test conditions. Speech recognition is one of several input modes which support a multimodal interface; for speech input to be effective, the interface with which it is integrated also needs to be effective, so this thesis additionally studies the area of Child-Computer Interaction. One outcome of this study is a set of interface design guidelines developed through the implementation and evaluation of several multimedia applications. Several user evaluation methods have been used to test the applications with young children in the classroom, and their effectiveness is discussed. The thesis integrates the speech recognition and Child-Computer Interaction studies to produce a speech-enabled application built on the developed interface design guidelines and the proposed speech interface design guidelines. The application was evaluated in the classroom with very encouraging results. The thesis concludes that commercial speech recognition can be used effectively with young children if the limitations, optimisations and guidelines developed during this project are considered.
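The abstract does not define the thesis's new recognition-accuracy metric; for orientation, the conventional baseline such a metric would be compared against is word error rate (WER). A minimal Python sketch of standard WER, illustrative only and not the thesis's metric:

    def word_error_rate(reference: list[str], hypothesis: list[str]) -> float:
        """Standard WER: (substitutions + deletions + insertions) / reference
        length, computed as word-level Levenshtein distance."""
        m, n = len(reference), len(hypothesis)
        # d[i][j] = edit distance between the first i reference words
        # and the first j hypothesis words
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[m][n] / m

    # One substitution in a three-word utterance -> WER = 1/3
    print(word_error_rate("the cat sat".split(), "the hat sat".split()))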
2

Acquisition of Word Prosody by Second Language Learners

Tsurutani, C. Unknown Date
No description available.
3

Prosodic transfer: the tonal constraints on Vietnamese acquisition of English stress and rhythm

Nguyen, Thi Anh Thu Unknown Date
No description available.
4

The Effects of Speech Tasks on the Prosody of People with Parkinson Disease

Andrew Herbert Exner 17 October 2019
One of the key features of the hypokinetic dysarthria associated with Parkinson disease is dysprosody. While there has been ample research into the global characterization of speech in Parkinson disease, little is known about how people with Parkinson disease mark lexical stress. This study aimed to determine how people with Parkinson disease modulate pitch, intensity, duration, and vowel space to differentiate between two common lexical stress patterns in two-syllable English words: trochees (strong-weak pattern) and iambs (weak-strong pattern). Twelve participants with mild to moderate idiopathic Parkinson disease and twelve age- and sex-matched controls completed a series of prosodically relevant speech tasks designed to elicit token words of interest: picture identification (in isolation and in lists) and giving directions (spontaneous speech). Results revealed that people with Parkinson disease produced a higher overall pitch and a smaller vowel space compared to controls, though most lexical stress marking features were not significantly different. Importantly, the elicitation task had a significant effect on most dependent measures. Although lexical stress is not significantly impacted by Parkinson disease, we recommend that future research and clinical practice rely more on spontaneous speech tasks than on isolated words or lists of words, because lexical stress is marked differently in the latter tasks, making them less useful as ecologically valid assessments of prosody in everyday communication.
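One of the four acoustic markers above, vowel space, is commonly quantified as the area of the polygon spanned by a speaker's mean corner-vowel formants. A minimal Python sketch using the shoelace formula; the formant values and vowel set are hypothetical, and the study's exact vowel-space measure is not specified in this abstract:

    import numpy as np

    def vowel_space_area(formants: dict[str, tuple[float, float]]) -> float:
        """Area (Hz^2) of the polygon spanned by per-vowel mean (F1, F2)
        points, via the shoelace formula. Vowels must be in polygon order."""
        pts = np.array(list(formants.values()))
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    # Hypothetical mean corner-vowel formants (F1, F2) in Hz for one speaker
    corners = {"i": (300.0, 2300.0), "ae": (700.0, 1800.0),
               "a": (750.0, 1100.0), "u": (320.0, 900.0)}
    print(vowel_space_area(corners))  # smaller areas suggest a reduced vowel space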
5

Tactile Speech Communication: Design and Evaluation of Haptic Codes for Phonemes with Game-based Learning

Juan S Martinez 14 May 2019
This thesis research was motivated by the need to increase speech transmission rates through a phonemic-based tactile speech communication device named TAPS (TActile Phonemic Sleeve). The device consists of a 4-by-6 tactor array worn on the forearm that delivers vibrotactile patterns corresponding to English phonemes. Three studies that preceded this thesis evaluated a coding strategy that mapped 39 English phonemes into vibrotactile patterns. This thesis continues the project with improvements in two parts. First, a training framework based on theories of second language acquisition and game-based learning was designed and implemented. A role-playing game named Haptos was designed to implement this framework. A pilot study using the first version of the game showed that two participants were able to master a list of 52 words within 45 minutes of game play. Second, an improved set of haptic codes was designed, based on the statistics of spoken English and including an additional set of codes that abbreviate the most frequently co-occurring phonemes to reduce their duration. The new set included 39 English phonemes and 10 additional abbreviated symbols, representing a 24 to 46% increase in word presentation rates. A second version of the Haptos game was implemented to test the new 49 codes in a learning curriculum distributed over multiple days. Eight participants learned the new codes within 6 hours of training and obtained an average score of 84.44% in symbol identification tests, with error rates per haptic symbol below 18%. The results demonstrate the feasibility of employing the new codes for future work in which the ability to receive longer sequences of phonemes, corresponding to phrases and sentences, will be trained and tested.
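As an illustration of how a phoneme-to-pattern code table for a 4-by-6 tactor array might be represented, here is a minimal Python sketch; the TactorPulse structure, the example patterns, and the back-to-back timing model are assumptions for illustration, not the thesis's actual 49-symbol design:

    from dataclasses import dataclass

    @dataclass
    class TactorPulse:
        row: int          # 0-3 on the 4-by-6 forearm array
        col: int          # 0-5
        onset_ms: int
        duration_ms: int
        frequency_hz: int

    # Hypothetical code table: symbol -> vibrotactile pattern
    HAPTIC_CODES: dict[str, list[TactorPulse]] = {
        "AE": [TactorPulse(0, 0, 0, 120, 250)],
        "S":  [TactorPulse(1, 2, 0, 60, 300), TactorPulse(1, 3, 60, 60, 300)],
    }

    def word_duration_ms(symbols: list[str]) -> int:
        """Presentation time of a word: symbols play back to back, and each
        symbol lasts until its final pulse ends. Shorter codes for frequent
        phoneme pairs raise the word presentation rate."""
        total = 0
        for s in symbols:
            total += max(p.onset_ms + p.duration_ms for p in HAPTIC_CODES[s])
        return total

    print(word_duration_ms(["S", "AE"]))  # -> 240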
6

The impact of voice on trust attributions

Torre, Ilaria January 2017
Trust and speech are both essential aspects of human interaction. On the one hand, trust is necessary for vocal communication to be meaningful. On the other hand, humans have developed ways to infer someone's trustworthiness from their voice, as well as to signal their own. Yet research on trustworthiness attributions to speakers is scarce and contradictory, and very often relies on explicit data, which do not predict actual trusting behaviour. Measuring behaviour, however, is essential for an accurate representation of trust. This thesis contains five experiments examining the influence of various voice characteristics (accent, prosody, emotional expression and naturalness) on trusting behaviours towards virtual players and robots. The experiments use the "investment game", a method derived from game theory that makes it possible to measure implicit trustworthiness attributions over time, as their main methodology. Results show that standard accents, high pitch, slow articulation rate and smiling voice generally increase trusting behaviours towards a virtual agent, and that a synthetic voice generally elicits higher trustworthiness judgments towards a robot. The findings also suggest that different voice characteristics influence trusting behaviours with different temporal dynamics. Furthermore, the actual behaviour of the various speaking agents was manipulated to be more or less trustworthy, and results show that people's trusting behaviours develop over time accordingly. People also reinforce their trust towards speakers they deem particularly trustworthy when those speakers are indeed trustworthy, but punish them when they are not. This suggests that people's trusting behaviours might also be influenced by the congruency of their first impressions with the actual experience of the speaker's trustworthiness (a "congruency effect"). This has important implications for Human-Machine Interaction, for example in assessing users' reactions to speaking machines that might not always function properly. Taken together, the results suggest that voice influences trusting behaviour, that first impressions of a speaker's trustworthiness based on vocal cues might not be indicative of future trusting behaviours, and that trust should therefore be measured dynamically.
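The "investment game" mentioned above has a standard payoff structure in game theory: the investor sends some fraction of an endowment, the amount is multiplied (typically tripled) in transit, and the trustee chooses how much to return; the amount sent is the behavioural measure of trust. A minimal Python simulation sketch; the trust-update rule is a toy assumption for illustration, not the thesis's model:

    import random

    def play_round(endowment: float, trust_fraction: float,
                   return_fraction: float, multiplier: float = 3.0):
        """One round of the investment game. Returns (amount sent,
        investor payoff); the amount sent operationalises implicit trust."""
        sent = endowment * trust_fraction
        returned = sent * multiplier * return_fraction
        return sent, endowment - sent + returned

    trust = 0.5
    for round_no in range(1, 6):
        generosity = random.uniform(0.2, 0.6)  # trustee's return fraction
        sent, payoff = play_round(10.0, trust, generosity)
        # Toy congruency-style update: trust drifts toward observed generosity
        trust = 0.8 * trust + 0.2 * min(1.0, generosity * 1.5)
        print(f"round {round_no}: sent={sent:.2f}, payoff={payoff:.2f}, trust={trust:.2f}")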
7

The Association Between Articulator Movement and Formant Trajectories in Diphthongs

McKell, Katherine Morris 01 June 2016
The current study examined the association between formant trajectories and tongue and lip movements in the American English diphthongs /aɪ/, /aʊ/, and /ɔɪ/. Seventeen native speakers of American English had electromagnetic sensors placed on their tongues and lips to record movement data along with corresponding acoustic data during productions of the diphthongs in isolation. F1 and F2 trajectories were extracted from the middle 50% of the diphthongs and compared with time-aligned kinematic data from tongue and lip movements. The movement and formant tracks were converted to z-scores and plotted together on a common time scale. Absolute difference scores between kinematic and acoustic variables were summed along each track to reflect the association between the movement and acoustic records. Results show that tongue movement has the closest association with changes in F1 and F2 for the diphthong /aɪ/, and lip movement has the closest association with changes in F1 and F2 for the diphthong /aʊ/. Results for the diphthong /ɔɪ/ suggest that tongue advancement has the closest association with changes in F2, while neither lip movement nor tongue movement has a clearly defined association with changes in F1. These results suggest that for diphthongs with the lip rounding feature, lip movement may have a greater influence on F1 and F2 than previously considered. Researchers who use formant data to make inferences about tongue movement and vowel space may benefit from considering the possible influence of lip movements on vocal tract resonance.
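The track-comparison procedure in this abstract (convert both time-aligned tracks to z-scores, then sum absolute differences) translates directly into code. A minimal Python sketch with hypothetical signals:

    import numpy as np

    def track_association(kinematic: np.ndarray, acoustic: np.ndarray) -> float:
        """Summed absolute difference between z-scored, time-aligned tracks;
        lower scores reflect a closer movement-formant association."""
        z_kin = (kinematic - kinematic.mean()) / kinematic.std()
        z_aco = (acoustic - acoustic.mean()) / acoustic.std()
        return float(np.sum(np.abs(z_kin - z_aco)))

    # Hypothetical samples from the middle 50% of a diphthong
    t = np.linspace(0, 1, 100)
    tongue_x = np.sin(t * np.pi / 2)  # tongue advancement (arbitrary units)
    f2 = (1200 + 800 * np.sin(t * np.pi / 2)
          + np.random.default_rng(0).normal(0, 20, t.size))  # F2 in Hz

    print(track_association(tongue_x, f2))  # small: the tracks rise together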
8

Auditory-motor control and longitudinal speech measures in hyperfunctional and hypokinetic speech disorders

Abur, Defne 24 August 2022
The overarching goal of this dissertation was to assess a set of sensorimotor, acoustic, and functional speech measures to inform the understanding of the mechanisms underlying two common speech disorders with evidence of disrupted sensory function: hyperfunctional voice disorders (HVDs) and hypokinetic dysarthria resulting from Parkinson's disease (PD). The purpose of the first and second studies was to elucidate the mechanisms underlying speech symptoms in HVDs. The first study examined whether auditory discrimination and vocal responses to altered auditory feedback (indicative of vocal motor control) differed in a large set of speakers with HVDs (N = 62) compared to controls (N = 62). The results directly implicate disrupted auditory processing in impairments to vocal motor control in HVDs. Building on this finding, the second study compared the same auditory and vocal motor control measures in speakers with HVDs pre- and post-therapy (N = 11) to assess whether successful therapy (i.e., voice symptom improvement) resulted in improvements to auditory-motor function. On average, vocal motor control improved after therapy, but there was little change in measures of auditory processing, suggesting that therapeutic improvements in HVDs may be compensatory rather than a result of resolving the underlying auditory processing deficits. The collective findings from the first and second studies improve the understanding of the development of HVDs and highlight the need to consider auditory processing in the assessment and treatment of HVDs. The objectives of the third and fourth studies were to characterize auditory-motor control and longitudinal changes to speech acoustics in PD. In the third study, auditory-motor control of both voice and articulatory parameters of speech was assessed in speakers with PD on medication (N = 28) compared to controls (N = 28), and compared to measures of speech function (intelligibility and naturalness ratings). No group differences were found in auditory-motor measures, regardless of speech domain. These results, which describe findings from the largest sample of PD patients completing auditory-motor tasks to date (N = 28), suggest that auditory-motor control is intact in individuals with PD on their typical medication cycle. This work also provided the first evidence that auditory-motor measures reflect measures of speech function (speech intelligibility and naturalness). The fourth and final study examined whether longitudinal changes to speech acoustics in PD were associated with the specific time (in months) between speech samples. Although prior work has examined speech decline in PD, no study to date has assessed whether speech acoustics are sensitive to disease progression within an individual with PD. The study examined acoustic speech samples collected from speakers with PD (N = 30) at two separate time points. Longitudinal changes to speech acoustics were examined by time between speech samples, motor phenotype, and sex assigned at birth, to shed light on the relationships between acoustic measures of speech, disease progression, motor symptoms, and sex. The study revealed that longitudinal declines in second formant slope, articulation rate (syllables per second) across the Rainbow Passage, and relative fundamental frequency offset values were all associated with increased time between sessions within a speaker.
In addition, longitudinal increases in percent pause time in conversational speech were more likely in the postural instability/gait difficulty (PIGD) motor phenotype, and longitudinal increases in mean fo across conversational speech were more likely in males compared to females with PD. This work provides the first report of acoustic measures of speech that reflect the specific time, in months, of PD progression, as well as acoustic measures that appear to be differentially impacted over time by motor phenotype and by sex. The findings provide evidence that three acoustic measures of speech show promise as markers of PD progression in months, and support the notion that speech symptom decline differs by motor phenotype and by sex assigned at birth, which should be considered when planning therapeutic interventions.
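Two of the acoustic measures named above have simple operational definitions: second formant slope is the linear fit of an F2 track over time, and articulation rate is syllables per second of speech time. A minimal Python sketch with hypothetical values; the dissertation's exact extraction pipeline is not described in this abstract:

    import numpy as np

    def second_formant_slope(times_s: np.ndarray, f2_hz: np.ndarray) -> float:
        """F2 slope in Hz/s from a least-squares linear fit of the track."""
        slope, _intercept = np.polyfit(times_s, f2_hz, 1)
        return float(slope)

    def articulation_rate(n_syllables: int, speech_time_s: float) -> float:
        """Syllables per second; pause time is excluded from the denominator."""
        return n_syllables / speech_time_s

    # Hypothetical F2 track for one vowel transition
    t = np.linspace(0.0, 0.15, 30)
    f2 = 1200 + 2000 * t + np.random.default_rng(1).normal(0, 15, t.size)
    print(second_formant_slope(t, f2))  # approximately 2000 Hz/s
    print(articulation_rate(98, 25.0))  # hypothetical counts -> 3.92 syll/s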
9

Speech Errors Produced By Bilingual Spanish-English Speaking Children and Monolingual English-Speaking Children With and Without Speech Sound Disorder

Itzel Citalli Matamoros Santos 26 July 2021
Purpose: Previous studies have shown that children with speech sound disorder (SSD) who speak a language other than English produce different types of speech errors, although there is a paucity of research investigating these differences in speech sound production (e.g., Core & Scarpelli, 2015; Fabiano-Smith & Goldstein, 2010b; Fabiano-Smith & Hoffman, 2018). This study investigates the types of speech errors produced by bilingual Spanish-English and monolingual English-speaking children matched on age, receptive vocabulary, and articulation accuracy in single words.
Methods: Twelve bilingual Spanish-English speaking children, ages 4;0 to 6;11, were matched to twelve monolingual English-speaking children. Participants completed standardized and non-standardized tests of speech and language, and performance between groups and across assessment measures was compared. Consonant productions were categorized as correct, substitution errors, omission errors, or distortion errors.
Results: Bilingual Spanish-English children were significantly more likely than monolingual English children to produce omission errors, while monolingual English children were more likely to produce distortion errors. Both groups produced similar proportions of substitution errors. Bilingual children produced similar proportions of each error type in both of their languages.
Conclusion: SLPs should not rely on English normative data to diagnose SSDs in monolingual and bilingual Spanish-speaking children, as they demonstrate different error patterns from monolingual English speakers.
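Of the error categories above, substitution and omission counts can be derived by aligning target and produced phoneme sequences, while distortions require perceptual (narrow-transcription) judgment. A minimal Python sketch; the alignment method is an assumption for illustration, not the study's scoring procedure:

    from difflib import SequenceMatcher
    from collections import Counter

    def classify_errors(target: list[str], produced: list[str]) -> Counter:
        """Tally correct, substitution, and omission productions by aligning
        the target and produced phoneme sequences. Distortions are not
        derivable from broad transcription and are excluded here."""
        counts = Counter()
        for tag, i1, i2, j1, j2 in SequenceMatcher(
                a=target, b=produced, autojunk=False).get_opcodes():
            if tag == "equal":
                counts["correct"] += i2 - i1
            elif tag == "replace":
                counts["substitution"] += i2 - i1
            elif tag == "delete":
                counts["omission"] += i2 - i1
            elif tag == "insert":
                counts["addition"] += j2 - j1
        return counts

    # Target "spun" produced as "pun": /s/ omitted, the rest correct
    print(classify_errors(list("spun"), list("pun")))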
10

Processing Speaker Variability in Spoken Word Recognition: Evidence from Mandarin Chinese

Zhang, Yu 20 September 2017
No description available.
