61

Articulatory Patterns in Children who use Cochlear Implants: An Ultrasound Measure of Velar Stop Production in Bilingual Speakers

Javier, Katherine 28 June 2018 (has links)
Coarticulation occurs in running speech when one speech sound or phoneme overlaps with another. It can be considered a result of the way we sequence and organize our articulators to efficiently produce consecutive consonants and vowels in fluent speech. Previous research has suggested that measures of coarticulation can provide insight into the maturity of the motor speech planning system (Barbier, Perrier, Ménard, Payan, Tiede, & Perkell, 2013; Zharkova & Hewlett, 2009; Zharkova, Hewlett, & Hardcastle, 2011). Speech stability has also been suggested as an indicator of motor speech maturity in previous research using ultrasound imaging of velar-vowel targets (Frisch, Maxfield, & Belmont, 2016). This study extends the research of Frisch, Maxfield, and Belmont (2016) to investigate patterns of velar-vowel coarticulation and speech stability in bilingual children who wear cochlear implants. Ultrasound and acoustic data were recorded from one English-Spanish bilingual participant (P1) who wears bilateral cochlear implants, one English-Spanish bilingual control child (P2) with no hearing impairment, and one English-Spanish bilingual adult speaker (P3). Measures of velar-vowel coarticulation and speech stability across three productions of English and Spanish words were recorded and analyzed following the procedures of Wodzinski and Frisch (2006). The participants were asked to produce three repetitions of fifteen English and fifteen Spanish target words starting with a /k/ + vowel sequence. Ultrasound imaging was used to record and trace tongue movement at the point of maximum velar closure. Data were compared between English and Spanish words, across participants, and between repetitions of the same word. In comparing English and Spanish words, the child participants (P1 and P2) demonstrated increased coarticulation during Spanish productions. All participants showed decreased stability in Spanish productions compared to English.
The adult participant (P3) showed greater overall stability in productions and consistent coarticulation across both languages. Measures of coarticulation and overall stability were relatively equal across P1 and P2, while P3 showed greater and more stable coarticulation across both languages. Preliminary results support findings in previous research suggesting that anticipatory coarticulation and speech stability could be used as an index for assessing speech motor planning in bilingual and clinical populations (Barbier, Perrier, Ménard, Payan, Tiede, & Perkell, 2013; Frisch, Allen, Betancourt, & Maxfield, 2016; Frisch, Maxfield, & Belmont, 2016; Frisch & Wodzinski, 2014; Zharkova & Hewlett, 2009; Zharkova, Hewlett, & Hardcastle, 2011). Results additionally indicate that a young cochlear implant user who receives early intervention and is learning two languages can develop a motor speech planning system commensurate with that of a typical bilingual peer, and that patterns of coarticulation and stability may differ between English and Spanish contexts.
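The tongue-contour comparisons underlying these stability and coarticulation measures can be sketched as follows. This is a minimal illustration, assuming a symmetrized nearest-neighbor distance between traced contours in the spirit of Zharkova and Hewlett (2009); the function name and toy contours are illustrative, not the study's actual analysis code.

```python
import numpy as np

def mean_nearest_neighbor_distance(contour_a, contour_b):
    """Symmetrized mean nearest-neighbor distance between two tongue
    contours, each an (N, 2) array of (x, y) points traced from an
    ultrasound frame. Smaller values indicate more similar tongue shapes."""
    def one_way(p, q):
        # distance from every point in p to its closest point in q
        d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return (one_way(contour_a, contour_b) + one_way(contour_b, contour_a)) / 2

# toy contours: the same curve, and a copy shifted up by one unit
x = np.linspace(0.0, 10.0, 50)
a = np.column_stack([x, np.sin(x)])
b = np.column_stack([x, np.sin(x) + 1.0])
print(mean_nearest_neighbor_distance(a, a))  # identical contours: 0.0
d = mean_nearest_neighbor_distance(a, b)     # shifted copy: small positive value
```

Comparing repetitions of the same word with such a distance indexes stability; comparing /k/-closure contours across different vowel contexts indexes the degree of coarticulation.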
62

Top-Down Processes in Simulated Combined Electric-Acoustic Hearing: The Effect of Context and the Role of Low-Frequency Cues in the Perception of Temporally Interrupted Speech

Oh, Soo Hee 12 November 2014 (has links)
In recent years, the number of unilateral cochlear implant (CI) users with functional residual hearing has increased and bimodal hearing has become more prevalent. According to the multi-source speech perception model, both bottom-up and top-down processes are important components of speech perception in bimodal hearing. Additionally, these two components are thought to interact with each other to different degrees depending on the nature of the speech materials and the quality of the bottom-up cues. Previous studies have documented the benefits of bimodal hearing as compared with a CI alone, but most of them have focused on the importance of bottom-up, low-frequency cues. Because only a few studies have investigated top-down processing in bimodal hearing, relatively little is known about the top-down mechanisms that contribute to bimodal benefit, or the interactions that may occur between bottom-up and top-down processes during bimodal speech perception. The research described in this dissertation investigated top-down processes of bimodal hearing, and potential interactions between top-down and bottom-up processes, in the perception of temporally interrupted speech. Temporally interrupted sentences were used to assess listeners' ability to fill in missing segments of speech by using top-down processing. Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise-band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Sentences were square-wave gated at a rate of 5 Hz with a 50 percent duty cycle. Two factors that were expected to influence bimodal benefit were examined: the amount of linguistic context available in the speech stimuli, and the continuity of low-frequency cues.
Experiment 1 evaluated the effect of sentence context on bimodal benefit for temporally interrupted sentences from the City University of New York (CUNY) and Institute of Electrical and Electronics Engineers (IEEE) sentence corpora. It was hypothesized that acoustic low-frequency information would facilitate linguistic top-down processing such that the higher context CUNY sentences would produce more bimodal benefit than the lower context IEEE sentences. Three vocoder channel conditions were tested for each type of sentence (8-, 12-, and 16-channels for CUNY; 12-, 16-, and 32-channels for IEEE), in combination with either LP speech or LPHCs. Bimodal benefit was compared for similar amounts of spectral degradation (matched-channels) and similar ranges of baseline performance. Two gain measures, percentage point gain and normalized gain, were examined. Experiment 1 revealed clear effects of context on bimodal benefit for temporally interrupted speech when LP speech was presented to the residual-hearing ear, thereby providing additional support for the notion that low-frequency cues can enhance listeners' use of top-down processing. However, the bimodal benefits observed for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. In addition, unlike previous findings for continuous speech, no bimodal benefits were observed when LPHCs were presented to the LP ear. Experiments 2 and 3 further investigated the effects of low-frequency cues on bimodal benefit by systematically restoring continuity to temporally interrupted signals in the vocoder and/or LP ears. Stimuli were 12-channel CUNY sentences presented to the vocoder ear, and LPHCs presented to the LP ear. Signal continuity was restored to the vocoder ear by filling silent gaps in sentences with envelope-modulated, speech-shaped noise.
Continuity was restored to signals in the LP ear by filling gaps with envelope-modulated LP noise or by using continuous LPHCs. It was hypothesized that the restoration of continuity in one or both ears would improve bimodal benefit relative to the condition in which both ears received temporally interrupted stimuli. The results from Experiments 2 and 3 showed that restoring continuity to the simulated residual-hearing or CI ear improved bimodal benefits, but that the greatest improvement was observed when continuity was restored to both ears. These findings support the conclusion that temporal interruption disrupts top-down enhancement effects in bimodal hearing. Lexical segmentation and perceptual continuity were identified as factors that could potentially explain the increased bimodal benefit for continuous, as compared to temporally interrupted, speech. Taken together, the findings from Experiments 1-3 provide additional evidence that low-frequency sensory information can provide bimodal benefit for speech that is spectrally and/or temporally degraded by improving listeners' ability to make use of top-down processing. Findings further suggest that temporal degradation reduces top-down enhancement effects in bimodal hearing, thereby reducing bimodal benefit for temporally interrupted speech as compared to continuous speech.
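The temporal interruption used throughout these experiments can be sketched directly. This is a minimal illustration of square-wave gating at 5 Hz with a 50 percent duty cycle; the function and parameter names are chosen here for illustration, not taken from the dissertation's materials.

```python
import numpy as np

def square_wave_gate(signal, fs, rate_hz=5.0, duty=0.5):
    """Periodically gate a signal on and off: at rate_hz cycles per
    second, keep the first `duty` fraction of each cycle and zero out
    the remainder."""
    t = np.arange(len(signal)) / fs
    phase = (t * rate_hz) % 1.0               # position within each cycle, 0..1
    mask = (phase < duty).astype(signal.dtype)
    return signal * mask

fs = 16000
x = np.ones(fs)                    # one second of a placeholder signal
y = square_wave_gate(x, fs)        # 5 Hz gating, 50 percent duty cycle
print(y.sum() / len(y))            # fraction of samples kept: 0.5
```

The silent gaps produced by this kind of gating are what listeners must perceptually fill in, and, in Experiments 2 and 3, what the envelope-modulated noise fillers restore.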
63

Accuracy of /t/ Productions in Children with Cochlear Implants as Compared to Normal-Hearing, Articulation Age-Matched Peers

Gier, Terry 21 July 2014 (has links)
Children who receive cochlear implants (CIs) demonstrate considerable variability in speech sound production. Investigations of speech sound development in children with CIs have shown initial accelerated growth, followed by a plateau in which the order of consonant acquisition generally mirrors that of NH children, but is slower (Blamey, Barry, & Pascale, 2001; Serry & Blamey, 1999; Spencer & Guo, 2013). A notable exception to this pattern, /t/, has been shown to be acquired later than normal in several investigations (Blamey et al., 2001; Chin, 2003; Ertmer, True Kloiber, Jongmin, Connell Kirleis, & Bradford, 2012). The primary purposes of this investigation were to 1) examine the accuracy of /t/ productions in children with CIs and 2) quantify subtle phonetic differences between correctly produced consonants and substituted consonants (i.e., covert contrast). Two groups of children who had participated in a larger study examining the influence of speech production abilities on the speech perception scores of children with CIs (Gonzalez, 2013) provided the speech stimuli for this investigation. The experimental group included nine congenitally deafened children with CIs, ranging in age from 2;11 to 6;4 years (M = 4;9), who were implanted by 3 years of age, had at least 12 months of device experience, and used only an oral mode of communication. These children were matched to typically developing children by articulation ability and gender. Recordings of the verbal responses on the OlimSpac were obtained from the Gonzalez (2013) study. Thirty-three graduate students in speech-language pathology rated the phonetic accuracy of /t/ and of the phonemes most often substituted for it, /d/ and /ʧ/, on a 7-point equal-appearing interval scale. A three-way ANOVA was performed to determine differences in perceived consonant accuracy across group, transcription category, and phoneme substitution.
The significant interaction between group and transcription category was of particular interest. Results indicated that children with CIs did not show an unusually delayed development of /t/. When a confusion matrix was generated to depict overall OlimSpac performance, the NH group was noted to outperform the CI group across all phonemes. This would suggest that /t/ was not uniquely poorer in the CI group, but instead these children evidenced poorer phoneme accuracy in general. Finally, group differences also were apparent in substitutions of [t] for target /d/ and /ʧ/ productions (i.e., covert contrast). The clinical applications are described.
64

Cochlear implant sound coding with across-frequency delays

Taft, Daniel Adam January 2009 (has links)
The experiments described in this thesis investigate the temporal relationship between frequency bands in a cochlear implant sound processor. Initial studies examined cochlea-based traveling wave delays for cochlear implant sound processing strategies; these were later broadened into studies of an ensemble of across-frequency delays.

Before incorporating cochlear delays into a cochlear implant processor, a set of suitable delays was determined with a psychoacoustic calibration to pitch perception, since normal cochlear delays are a function of frequency. The first experiment assessed the perception of pitch evoked by electrical stimuli from cochlear implant electrodes. Six cochlear implant users with acoustic hearing in their non-implanted ears were recruited, since they were able to compare electric stimuli to acoustic tones. Traveling wave delays were then computed for each subject using the frequencies matched to their electrodes. These were similar across subjects, ranging over 0-6 milliseconds along the electrode array.

The next experiment applied the calibrated delays to the ACE strategy filter outputs before maxima selection. The effects on speech perception in noise were assessed with cochlear implant users, and a small but significant improvement was observed. A subsequent sensitivity analysis indicated that accurate calibration of the delays might not be necessary after all; instead, a range of across-frequency delays might be similarly beneficial.

A computational investigation was performed next, in which a corpus of recorded speech was passed through the ACE cochlear implant sound processing strategy to determine how across-frequency delays altered the patterns of stimulation. A range of delay vectors was used in combination with a number of processing parameter sets and noise levels. The results showed that additional stimuli from broadband sounds (such as the glottal pulses of vowels) are selected when frequency bands are desynchronized with across-frequency delays. Background noise contains fewer dominant impulses than a single talker and so is not enhanced in this way.

In the following experiment, speech perception with an ensemble of across-frequency delays was assessed with eight cochlear implant users. Reverse cochlear delays (high-frequency delays) were equivalent to conventional cochlear delays. Benefit was diminished for larger delays. Speech recognition scores were at baseline with random delay assignments. An information transmission analysis of speech in quiet indicated that the discrimination of voiced cues was most improved with across-frequency delays. For some subjects, this was seen as improved vowel discrimination based on formant locations and improved transmission of the place of articulation of consonants.

A final study indicated that benefits to speech perception with across-frequency delays are diminished when the number of maxima selected per frame is increased above 8 out of 22 frequency bands.
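The interaction between across-frequency delays and n-of-m maxima selection can be sketched as follows. This is a simplified illustration of the idea, not the ACE implementation; the array shapes and names are assumptions made for the example.

```python
import numpy as np

def delayed_maxima_selection(envelopes, delays_frames, n_maxima=8):
    """Delay each channel's envelope by a per-channel number of frames,
    then keep only the n_maxima largest channels in every frame
    (n-of-m selection). Desynchronizing channels this way spreads a
    broadband impulse, such as a glottal pulse, across several frames,
    so more of its channels survive selection.
    envelopes: (frames, channels); delays_frames: (channels,) ints."""
    frames, channels = envelopes.shape
    pad = int(max(delays_frames))
    delayed = np.zeros((frames + pad, channels))
    for ch in range(channels):
        d = int(delays_frames[ch])
        delayed[d:d + frames, ch] = envelopes[:, ch]
    out = np.zeros_like(delayed)
    for f in range(delayed.shape[0]):
        top = np.argsort(delayed[f])[-n_maxima:]   # indices of the largest channels
        out[f, top] = delayed[f, top]
    return out

# one frame, four channels, keep the 2 largest
env = np.array([[1.0, 2.0, 3.0, 4.0]])
out = delayed_maxima_selection(env, np.array([0, 0, 0, 0]), n_maxima=2)
```

With zero delays this reduces to ordinary n-of-m selection; with nonzero delays, a single broadband frame is spread over several output frames before the maxima are picked.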
65

Children's Perception of Speaker Identity from Spectrally Degraded Input

Vongpaisal, Tara 23 February 2010 (has links)
Speaker identification is a challenge for cochlear implant users because their prosthesis restricts access to the cues that underlie natural voice quality. The present thesis examined speaker recognition in the context of spectrally degraded sentences. The listeners of interest were child implant users who were prelingually deaf as well as hearing children and adults who listened to speech via vocoder simulations of implant processing. Study 1 focused on child implant users' identification of a highly salient speaker—the mother (identified as mother)—and unfamiliar speakers varying in age and gender (identified as man, woman, or girl). In a further experiment, children were required to differentiate their mother's voice from the voices of unfamiliar women. Young hearing children were tested on the same tasks and stimuli. Although child implant users performed more poorly than hearing children overall, they successfully differentiated their mother's voice from other voices. In fact, their performance surpassed expectations based on previous studies of child and adult implant users. Even when natural variations in speaking style were reduced, child implant users successfully identified the speakers. The findings imply that person-specific differences in articulatory style contributed to implanted children's successful performance. Study 2 used vocoder simulations of cochlear implant processing to vary the spectral content of sentences produced by the man, woman, and girl from Study 1. The ability of children (5-7 years and 10-12 years) and adults with normal hearing to identify the speakers was affected by the level of spectral degradation and by the gender of the speaker. Female voices were more difficult to identify than was the man's voice, especially for the younger children. In some respects, hearing individuals' identification of degraded voices was poorer than that of child implant users in Study 1. 
In a further experiment, hearing children and adults were required to provide verbatim repetitions of spectrally degraded sentences. Their performance on this task greatly exceeded their performance on speaker identification at comparable levels of spectral degradation. The present findings underline the importance of ecologically valid materials and methods when assessing speaker identification, especially in children. Moreover, they raise questions about the efficacy of vocoder models for the study of speaker identification in cochlear implant users.
67

Effects of Unilateral and Bilateral Cochlear Implantation on Cortical Activity Measured by an EEG Neuroimaging Method in Children

Wong, Daniel 08 January 2013 (has links)
Receiving a second cochlear implant (CI) after a period of more than two years of unilateral hearing has been shown to result in altered latencies of brainstem responses in children with congenital deafness. In this thesis, a neural source localization method was developed to investigate the effects of unilateral CI use on cortical development after implantation of a second CI. The electroencephalography (EEG) source localization method is based on the linearly constrained minimum variance (LCMV) vector beamformer and uses null constraints to minimize the electrical artifact produced by the CI. The accuracy of the method was assessed and optimized through simulations and comparisons with beamforming of magnetoencephalography (MEG) data. After cluster analyses were used to ensure that sources compared across subjects originated from the same neural generators, a study examined the effects of unilateral CI hearing on hemispheric lateralization of monaural responses. A period of more than two years of unilateral hearing was found to result in expanded projections from the first implanted ear to the contralateral auditory area that are not reversed by implantation of a second CI. A subsequent study examined the contributions of the first and second implanted ears to the binaural response. In children with more than two years of unilateral hearing, the binaural response was dominated by the first implanted ear. Together, these results suggest that the delay between the first and second CIs should be minimized in bilateral implantation to avoid dominance of auditory pathways from the first implanted ear. This dominance limits developmental competition from the second CI and potentially contributes to poorer performance on speech detection in noise tasks.
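The constrained beamformer at the heart of this method can be sketched compactly. This is a minimal illustration of LCMV weights with null constraints, assuming a known data covariance, a target lead field, and artifact topographies to suppress; all names here are illustrative, and this is not the thesis's actual implementation.

```python
import numpy as np

def lcmv_weights(cov, leadfield, null_leadfields=None, reg=1e-6):
    """LCMV beamformer weights for one source with optional null
    constraints: unit gain on the target lead field, zero gain on each
    artifact topography (e.g. the CI electrical artifact).
    cov: (sensors, sensors) data covariance
    leadfield: (sensors,) forward field of the target source
    null_leadfields: (sensors, k) topographies to be nulled."""
    n = cov.shape[0]
    C = cov + reg * np.trace(cov) / n * np.eye(n)     # regularize the covariance
    if null_leadfields is None:
        L = leadfield[:, None]
        g = np.array([1.0])
    else:
        L = np.column_stack([leadfield, null_leadfields])
        g = np.zeros(L.shape[1])
        g[0] = 1.0            # unit gain on target, zero gain on the nulls
    Ci = np.linalg.inv(C)
    # w = C^-1 L (L^T C^-1 L)^-1 g
    return Ci @ L @ np.linalg.inv(L.T @ Ci @ L) @ g

# toy example: identity covariance, target on sensor 0, artifact on sensor 1
cov = np.eye(8)
target = np.eye(8)[0]
artifact = np.eye(8)[:, [1]]
w = lcmv_weights(cov, target, artifact)
```

By construction the weights pass the target topography with unit gain while projecting the artifact topography to zero, which is what makes source estimates usable in the presence of the implant's stimulation artifact.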
69

Toward Better Representations of Sound with Cochlear Implants

Wilson, Blake Shaw January 2015 (has links)
This dissertation is about the first substantial restoration of a human sense using a medical intervention. In particular, the development of the modern cochlear implant (CI) is described, with a focus on sound processors for CIs. As of October 2015, more than 460 thousand persons had received either a single CI on one side or bilateral CIs for both sides. More than 75 percent of users of present-day devices use the telephone routinely, including conversations with previously unknown persons on varying and unpredictable topics. That ability is a long trip indeed from severe or worse losses in hearing. The sound processors, in conjunction with multiple sites of highly controlled electrical stimulation in the cochlea, made the trip possible.

Many methods and techniques were used in the described research, including but not limited to those of signal processing, electrical engineering, neuroscience, speech science, and hearing science. In addition, the results were the products of collaborative efforts beginning in the late 1970s. For example, our teams at the Duke University Medical Center and the Research Triangle Institute worked closely with investigators at 27 other universities worldwide.

The most important outcome of the research was unprecedented levels of speech reception for users of CIs, which moved a previously experimental treatment into the mainstream of clinical practice.
70

Speech recognition with and without competing noise in children with cochlear implants using two different speech processors

Fabiana Danieli 10 June 2010 (has links)
Children with multichannel cochlear implants (CIs) have shown steadily improving speech perception results, largely due to advances in the sound-processing techniques of these devices. The objective of this study was to compare speech recognition in quiet and in competing noise in children with cochlear implants using two different speech processors. Twenty-six children implanted with the Cochlear Corporation Nucleus 24M or Nucleus 24K device were evaluated, divided into two groups according to the speech processor used. Group 1 comprised 16 children who used the Sprint speech processor, with a mean age of 8 years and 2 months, a mean duration of auditory deprivation of 2 years and 1 month, and a mean duration of cochlear implant use of 6 years. Group 2 comprised 10 children who used the Freedom speech processor, with a mean age of 9 years and 9 months, a mean duration of auditory deprivation of 2 years and 3 months, and a mean duration of cochlear implant use of 7 years and 3 months. The HINT/Brazil test (Hearing in Noise Test, Brazilian Portuguese version) was administered in the sound field in quiet (speech at 0° azimuth) and in competing noise (speech at 0° azimuth, noise at 180° azimuth). The groups' performance was compared across the two test conditions, as was their distribution with respect to age, duration of auditory deprivation, and duration of cochlear implant use. Group 2 (Freedom) outperformed Group 1 (Sprint) in all conditions, with a significant difference between groups only in the quiet condition. No significant between-group differences were found for age, duration of deprivation, or duration of CI use.

The sound-processing features of the Freedom speech processor appear to have contributed to Group 2's better performance on the speech perception tests. Further studies are needed to complement these findings.
