51

Intelligibility of synthetic speech in noise and reverberation

Isaac, Karl Bruce January 2015 (has links)
Synthetic speech is a valuable means of output, in a range of application contexts, for people with visual, cognitive, or other impairments, or for situations where other means are not practicable. Noise and reverberation occur in many of these application contexts and are known to have devastating effects on the intelligibility of natural speech, yet very little was known about the effects on synthetic speech based on unit selection or hidden Markov models. In this thesis, we put forward an approach for assessing the intelligibility of synthetic and natural speech in noise, reverberation, or a combination of the two. The approach uses an experimental methodology consisting of Amazon Mechanical Turk, Matrix sentences, and noises that approximate the real world, evaluated with generalized linear mixed models. The experimental methodologies were assessed against their traditional counterparts and were found to provide a number of additional benefits, whilst maintaining equivalent measures of relative performance. Subsequent experiments were carried out to establish the efficacy of the approach in measuring intelligibility in noise and then reverberation. Finally, the approach was applied to natural speech and the two synthetic speech systems in combinations of noise and reverberation. We examine and report on the intelligibility of current synthesis systems in real-life noises and reverberation using techniques that bridge the gap between the audiology and speech synthesis communities and using Amazon Mechanical Turk. In the process, we establish Amazon Mechanical Turk and Matrix sentences as valuable tools in the assessment of synthetic speech intelligibility.
52

Multi-modal imaging of brain networks subserving speech comprehension

Halai, Ajay Devshi January 2013 (has links)
Neurocognitive models of speech comprehension generally outline either the spatial or temporal organisation of speech processing and rarely consider combining the two to provide a more complete model. Simultaneous EEG-fMRI recordings have the potential to link these domains, due to the complementary high spatial (fMRI) and temporal (EEG) sensitivities. Although the neural basis of speech comprehension has been investigated intensively during the past few decades, there are still some important outstanding questions. For instance, there is considerable evidence from neuropsychology and other convergent sources that the anterior temporal lobe (ATL) should play an important role in accessing meaning. However, fMRI studies do not usually highlight this area, possibly because magnetic susceptibility artefacts cause severe signal loss within the ventral ATL (vATL). In this thesis, EEG and fMRI were used to refine the spatial and temporal components of neurocognitive models of speech comprehension, and to attempt to provide a combined spatial and temporal model. Chapter 2 describes an EEG study that was conducted while participants listened to intelligible and unintelligible single words. A two-pass processing framework best explained the results, which showed comprehension to proceed in a somewhat hierarchical manner; however, top-down processes were involved during the early stages. These early processes were found to originate from the mid-superior temporal gyrus (STG) and inferior frontal gyrus (IFG), while the late processes were found within ATL and IFG regions. Chapter 3 compared two novel fMRI methods known to overcome signal loss within the vATL: dual-echo and spin-echo fMRI. The results showed that dual-echo fMRI outperformed spin-echo fMRI in vATL regions, as well as in extratemporal regions. Chapter 4 harnessed the dual-echo method to investigate a speech comprehension task (sentences). Intelligibility-related activation was found in bilateral STG, left vATL, and left IFG. 
This is consistent with converging evidence implicating the vATL in semantic processing. Chapter 5 describes how simultaneous EEG-fMRI was used to investigate word comprehension. The results showed activity in the superior temporal sulcus (STS), vATL, and IFG. The temporal profile showed that these nodes were most active around 400 ms (specifically the anterior STS and vATL), while the vATL was consistently active across the whole epoch. Overall, these studies suggest that models of speech comprehension need to be updated to include the vATL region as a way of accessing semantic meaning. Furthermore, the temporal evolution is best explained within a two-pass framework. Early top-down influences from vATL regions attempt to map speech-like sounds onto semantic representations. Successful mapping, and therefore comprehension, is achieved around 400 ms in the vATL and anterior STS.
53

Phonological process use in adolescents/adults with Down syndrome

Middleton, Drew Evan 01 June 2021 (has links)
The purpose of the current study was to analyze an existing data set featured in Osborne (2020). More specifically, the current study aimed to identify phonological processes occurring in the speech of adolescents and adults with Down syndrome and explore subsequent impacts on speech intelligibility. Phonology coding forms from the Arizona Articulation and Phonology Scale, Fourth Revision were completed by analyzing phonetic transcriptions and audio recordings generated during the completion of the Word Articulation subtest by participants featured in Osborne (2020). Seventeen distinct phonological processes occurred across all participant responses. Phonological process occurrence and speech intelligibility values showed a strong negative correlation that approached significance (r(4) = -.7883, p = .063).
54

Articulatory errors leading to unintelligibility in the speech of eighty-seven deaf children

Numbers, Fred Cheffins 01 January 1936 (has links) (PDF)
No description available.
55

A speech intelligibility test for young deaf children.

Blevins, Bill G. 01 January 1960 (has links) (PDF)
No description available.
56

Four Eighteenth and Nineteenth Century Thinkers on the Truthfulness of Architecture

Popescu, Florentina C. 11 October 2012 (has links)
No description available.
57

Looking awry: a genealogical study of pre-service teacher encounters with popular media and multicultural education

McCoy, Katherine E. January 1995 (has links)
No description available.
58

Psychometric functions of clear and conversational speech for young normal hearing listeners in noise

Smart, Jane 01 June 2007 (has links)
Clear speech is a form of communication that talkers naturally use when speaking in difficult listening conditions or with a person who has a hearing loss. Clear speech, on average, provides listeners with hearing impairments an intelligibility benefit of 17 percentage points (Picheny, Durlach, & Braida, 1985) over conversational speech. In addition, it provides increased intelligibility in various listening conditions (Krause & Braida, 2003, among others), with different stimuli (Bradlow & Bent, 2002; Gagne, Rochette, & Charest, 2002; Helfer, 1997, among others) and across listener populations (Bradlow, Kraus, & Hayes, 2003, among others). Recently, researchers have attempted to compare their findings with clear and conversational speech, at slow and normal rates, with results from other investigators' studies in an effort to determine the relative benefits of clear speech across populations and environments. However, relative intelligibility benefits are difficult to determine unless baseline performance levels can be equated, suggesting that listener psychometric functions with clear speech are needed. The purpose of this study was to determine how speech intelligibility, as measured by percentage key words correct in nonsense sentences by young adults, varies with changes in speaking condition, talker and signal-to-noise ratio (SNR). Forty young, normal hearing adults were presented with grammatically correct nonsense sentences at five SNRs. Each listener heard a total of 800 sentences in four speaking conditions: clear and conversational styles, at slow and normal rates (i.e., clear/slow, clear/normal, conversational/slow, and conversational/normal). Overall results indicate clear/slow and conversational/slow were the most intelligible conditions, followed by clear/normal and then conversational/normal conditions. 
Moreover, the average intelligibility benefit for clear/slow, clear/normal and conversational/slow conditions (relative to conversational/normal) was maintained across an SNR range of -4 to 0 dB in the middle, or linear, portion of the psychometric function. However, when results are examined by talker, differences are observed in the benefit provided by each condition and in how the benefit varies across noise levels. In order to counteract talker variability, research with a larger number of talkers is recommended for future studies.
59

Between dialect and language: aspects of intelligibility and identity in Sinitic and Romance

司徒諾宜, Szeto, Lok-yee. January 2001 (has links)
Thesis (Master of Philosophy), Linguistics.
60

Improving speech intelligibility with a constant-beamwidth, wide-bandwidth loudspeaker array

Winker, Douglas Frank, January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
