81

The association between supraglottic activity and glottal stops at the sentence level

Kim, Se In 01 May 2015 (has links)
Contrary to the previous belief that any presence of supraglottic activity indicates hyperfunctional vocal pathology, Stager et al. (2000, 2002) found that supraglottic compressions do occur in normal subjects. In fact, dynamic false vocal fold (FVF) compressions were noted during production of phrases containing a large number of glottal stops. The present study hypothesized that a similar pattern would be observed at the sentence level, with an incidence of dynamic FVF compressions of at least 50% at aurally perceived glottal stops and at other linguistic markers where glottal stops are likely to occur, such as vowel-initial words, /t/-final words, punctuation, and phrase boundaries. Nasendoscopic recordings were obtained from 8 healthy subjects (2M; 6F) during production of selected sentence stimuli. Their audio recordings were rated by two judges to locate glottal stops. The video images were then analyzed to categorize the presence or absence of dynamic and static FVF or anterior-posterior (AP) compressions. Results indicated that the overall incidence of dynamic FVF compressions was 30%. Nevertheless, the average incidence was elevated at aurally perceived glottal stops and in the linguistic contexts known to be associated with glottal stops, compared to other contexts.
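The incidence analysis described above reduces to a per-context proportion: the share of rated contexts in which a dynamic FVF compression was observed, grouped by linguistic context. A minimal sketch of that computation follows; the field names (`context`, `dynamic_fvf`) are invented for illustration and are not the study's actual coding scheme.

```python
from collections import defaultdict

def incidence_by_context(ratings):
    """ratings: iterable of dicts such as
    {"context": "glottal_stop", "dynamic_fvf": True} (field names invented)."""
    counts = defaultdict(lambda: [0, 0])            # context -> [hits, total]
    for r in ratings:
        counts[r["context"]][0] += int(r["dynamic_fvf"])
        counts[r["context"]][1] += 1
    return {ctx: hits / total for ctx, (hits, total) in counts.items()}
```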
82

Talker-identification training using simulations of hybrid CI hearing: generalization to speech recognition and music perception

Flaherty, Ruth 01 January 2014 (has links)
The speech signal carries two types of information: linguistic information (the message content) and indexical information (acoustic cues about the talker). In the traditional view of speech perception, the acoustic differences among talkers were considered "noise". In this view, the listeners' task was to strip away unwanted variability to uncover the idealized phonetic representation of the spoken message. A more recent view suggests that both talker information and linguistic information are stored in memory. Rather than being unwanted "noise", talker information aids speech recognition, especially under difficult listening conditions. For example, it has been shown that normal-hearing listeners who completed voice recognition training were subsequently better at recognizing speech from familiar versus unfamiliar voices. For individuals with hearing loss, access to both types of information may be compromised. Some studies have shown that cochlear implant (CI) recipients are relatively poor at using indexical speech information because low-frequency speech cues are poorly conveyed in standard CIs. However, some CI users with preserved residual hearing can now combine acoustic amplification of low-frequency information (via a hearing aid) with electrical stimulation in the high frequencies (via the CI). Using a CI in one ear and a hearing aid in the opposite ear is referred to as bimodal hearing. A second way electrical and acoustic stimulation is achieved is through a new CI system, the hybrid CI. This device combines electrical stimulation with acoustic hearing in the same ear, via a shortened electrode array that is intended to preserve residual low-frequency hearing in the apical portion of the cochlea. It may be that hybrid CI users can learn to use voice information to enhance speech understanding. This study will assess voice learning and its relationship to talker discrimination, music perception, and spoken word recognition in simulations of hybrid CI or bimodal hearing. Specifically, our research questions are as follows: (1) Does training increase talker identification? (2) Does familiarity with the talker or linguistic message enhance spoken word recognition? (3) Does enhanced spectral processing (as demonstrated by improved talker recognition) generalize to non-linguistic tasks such as talker discrimination and music perception tasks? To address our research questions, we will recruit normal-hearing adults to participate in eight talker identification training sessions. Prior to training, subjects will be administered the forward and backward digit span tasks to assess short-term memory and working memory abilities. We hypothesize that there will be a correlation between the ability to learn voices and memory. Subjects will also complete a talker-discrimination test and a music perception test that require the use of spectral cues. We predict that training will generalize to performance on these tasks. Lastly, a spoken word recognition (SWR) test will be administered before and after talker identification training. The subjects will listen to sentences produced by eight talkers (four male, four female) and repeat aloud what they heard. Half of the sentences will contain keywords repeated in training and half will contain keywords not repeated in training. Additionally, subjects will have heard sentences from only half of the talkers during training.
We hypothesize that subjects will show an advantage for trained keywords over non-trained keywords and will perform better with familiar talkers than with unfamiliar talkers.
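The SWR test described above crosses talker familiarity (trained vs. untrained talker) with keyword familiarity (trained vs. novel keyword). An illustrative sketch of that 2 x 2 trial assignment follows; the talker labels, the cycling counterbalancing scheme, and the helper names are assumptions for illustration, not details from the study.

```python
import itertools
import random

# Eight talkers (4 male, 4 female), per the abstract; labels are placeholders.
talkers = ["M1", "M2", "M3", "M4", "F1", "F2", "F3", "F4"]
random.shuffle(talkers)
trained_talkers = set(talkers[:4])    # half the talkers are heard in training

def swr_trials(sentences):
    """Cycle test sentences through the four cells of the 2 x 2 design:
    (talker trained?, keyword trained?)."""
    cells = itertools.cycle(itertools.product([True, False], repeat=2))
    trials = []
    for sentence, (talker_trained, keyword_trained) in zip(sentences, cells):
        pool = [t for t in talkers if (t in trained_talkers) == talker_trained]
        trials.append({
            "sentence": sentence,
            "talker": random.choice(pool),
            "talker_trained": talker_trained,
            "keyword_trained": keyword_trained,
        })
    return trials
```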
83

A system to enhance patient-provider communication in hospitalized patients who use American Sign Language

Czerniejewski, Emily Michelle 01 May 2012 (has links)
Augmentative and Alternative Communication (AAC) devices have been created, and are currently used, in hospital settings to improve communication with patients who require adaptive assistance for speaking and writing. AAC devices are typically used by non-oral patients. While interpreters are required to be available for non-English-speaking patients within the hospital, they cannot be at the patient's bedside at all hours of the day for routine care. One population that has particular difficulty communicating without interpreters is patients who are deaf and use American Sign Language (ASL) as a primary means of communication. How are these patients supposed to communicate with medical staff when interpreters are not available? This question was the basis for the current project. Previously developed AAC devices for non-oral patients were adapted to create a translation device to improve bedside communication between hospital staff and patients who are deaf. The limited ability to communicate effectively with these patients underscores the critical need for a translation device for Deaf patients in the hospital setting.
84

Patterns of respiratory coordination in children who stutter during conversation

Werle, Danielle Rae 01 May 2014 (has links)
No description available.
85

Examiner and child contributions to therapy

Lyrenmann, Rebecca 01 May 2016 (has links)
The purpose of this research was to analyze child and clinician factors affecting language therapy outcomes and to examine the potential bi-directional relationship between those factors. Transcripts of intervention sessions with one child and one trained examiner were coded for factors relating to children's language ability, examiners' strategies for reaching session targets, and differences in examiners' interactional styles. It was found that differences in children's language ability and examiners' interactional styles did not have a strong relationship with therapy outcomes. Differences were observed in the overall frequency of examiners' strategy use across children; however, examiners were not sensitive to individual children's responsiveness to particular strategies. This is a secondary data analysis of an intervention study, which affects interpretation of the results: variability in examiner and child behaviors was reduced by adherence to the intervention protocol. However, the mismatch between examiner strategies and child responses is of interest. Making clinicians explicitly aware of the many types of elicitation and response strategies available may increase examiners' effectiveness, efficiency, or responsiveness.
86

VIBROTACTILE RECEPTION AND DISCRIMINATION OF SPEECH SIGNALS: A COMPARISON AMONG BODY LOCI

Unknown Date (has links)
Source: Dissertation Abstracts International, Volume: 36-08, Section: B, page: 3855. / Thesis (Ph.D.)--The Florida State University, 1975.
87

THE EFFECTS OF DIFFERENT SPEAKERS ON THE WORD DISCRIMINATION SCORES OF PERSONS WITH SENSORI-NEURAL HEARING IMPAIRMENT

Unknown Date (has links)
Source: Dissertation Abstracts International, Volume: 38-04, Section: B, page: 1640. / Thesis (Ph.D.)--The Florida State University, 1976.
88

ENHANCEMENT OF THE AUDITORY EVOKED RESPONSE BY CONDITIONING

Unknown Date (has links)
Source: Dissertation Abstracts International, Volume: 34-02, Section: B, page: 0508. / Thesis (Ph.D.)--The Florida State University, 1973.
89

AUDITORY EVOKED POTENTIALS FROM PREADOLESCENT RHESUS MONKEYS

Unknown Date (has links)
Source: Dissertation Abstracts International, Volume: 34-04, Section: B, page: 1360. / Thesis (Ph.D.)--The Florida State University, 1973.
90

ESTIMATING HEARING THRESHOLD WITH AUDITORY EVOKED RESPONSES: THE FEASIBILITY OF USING CHAINED-STIMULI

Unknown Date (has links)
Middle-latency responses were obtained using chained stimuli as a means of estimating audiometric threshold in a more time-efficient manner. Each stimulus chain consisted of a quiet 33 ms interval followed by 5 tone pips, each 15 dB more intense than the preceding pip, with pip onsets separated by 33 ms. Each 200 ms computer averaging sweep recorded the neural activity evoked by these 5 stimuli, and the resulting trace reflected the first portion of five sequential middle-latency responses. Thresholds obtained using chained stimuli were compared to those obtained using conventional middle-latency procedures and to voluntary thresholds. Both 500 Hz and 2000 Hz thresholds were assessed. The mean chained-stimuli thresholds were 6 to 8 dB higher than conventional middle-latency thresholds and 25 to 30 dB higher than behavioral thresholds. Testing with chained stimuli was completed in slightly over one-third the time required for conventional middle-latency procedures, documenting the time efficiency of this method. The possibility of obtaining responses at lower sensation levels by filtering the physiologic responses at 100-3000 Hz while observing earlier-latency potentials, or by slowing the repetition rate of the stimulus chain, was discussed. / Source: Dissertation Abstracts International, Volume: 47-12, Section: B, page: 4816. / Thesis (Ph.D.)--The Florida State University, 1986.
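The chain described above fits within a single 200 ms sweep: a 33 ms quiet interval, then five pips whose onsets fall every 33 ms. A minimal sketch of that stimulus construction follows; the 33 ms spacing, five-pip chain, and 15 dB steps come from the abstract, while the sampling rate, pip duration, gating window, and base level are assumptions.

```python
import numpy as np

FS = 44100             # sampling rate (Hz); assumed, not from the abstract
PIP_MS = 10            # tone-pip duration (ms); assumed
ONSET_GAP_MS = 33      # onset-to-onset spacing (ms), per the abstract
N_PIPS = 5             # five pips per chain, per the abstract
STEP_DB = 15           # each pip 15 dB more intense than the preceding one

def tone_pip(freq_hz, dur_ms=PIP_MS, fs=FS):
    """Hann-gated sinusoidal tone pip (the gating window is an assumption)."""
    t = np.arange(int(fs * dur_ms / 1000)) / fs
    return np.sin(2 * np.pi * freq_hz * t) * np.hanning(t.size)

def stimulus_chain(freq_hz, base_amp=0.0005, fs=FS):
    """One 200 ms sweep: quiet 33 ms interval, then five pips, each +15 dB."""
    sweep = np.zeros(int(fs * 0.200))
    pip = tone_pip(freq_hz, fs=fs)
    for i in range(N_PIPS):
        onset = int(fs * ONSET_GAP_MS / 1000 * (i + 1))   # first onset at 33 ms
        gain = base_amp * 10 ** (STEP_DB * i / 20)        # dB step -> linear gain
        sweep[onset:onset + pip.size] += gain * pip
    return sweep

chain_500 = stimulus_chain(500)      # 500 Hz condition
chain_2000 = stimulus_chain(2000)    # 2000 Hz condition
```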
