791 |
Stimulability Approach for Speech Disorders in Young Children: Systematic Review. Rymer, A., Boyd, W., Carpenter, H., Williams, A. Lynn. 01 January 2009 (has links)
No description available.
|
792 |
Making Mealtime More than a Mess. Boggs, Theresa, Greer, Lindsay P., Johnson, Marie A. 01 January 2017 (has links)
No description available.
|
793 |
Automatic speech recognition system for people with speech disorders. Ramaboka, Manthiba Elizabeth. January 2018 (has links)
Thesis (M.Sc. (Computer Science)) -- University of Limpopo, 2018 / The conversion of speech to text is essential for communication between speech- and visually impaired people. The focus of this study was to develop and evaluate an ASR baseline system trained on normal speech and used to correct disordered speech. Normal and disordered speech data were sourced from the Lwazi project and UCLASS, respectively. The normal speech data were used to train the ASR system, and the disordered speech was used to evaluate its performance. Features were extracted using the Mel-frequency cepstral coefficients (MFCC) method in the preprocessing stage, and cepstral mean and variance normalization (CMVN) was applied to normalise the features. A third-order language model was trained using the SRI Language Modelling (SRILM) toolkit. A recognition accuracy of 65.58% was obtained. A refinement approach was then applied to the recognised utterances to remove repetitions from stuttered speech. This approach showed that 86% of repeated words in stuttered speech could be removed, yielding an improved hypothesised text output. Further refinement of the ASR post-processing module is likely to achieve near-100% correction of stuttered speech.
Keywords: Automatic speech recognition (ASR), speech disorder, stuttering
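The repetition-removal refinement described in this abstract can be sketched as a simple post-processing pass over the recognised word sequence. The following is an illustrative reconstruction, not the thesis's actual implementation; the function name and the n-gram comparison strategy are assumptions:

```python
def remove_repetitions(words, max_ngram=3):
    """Collapse immediately repeated word n-grams, a pattern typical of
    stuttered speech (e.g. "i i want to to to go" -> "i want to go")."""
    out = []
    i = 0
    n = len(words)
    while i < n:
        collapsed = False
        # Try longer repeats first so "to the to the" collapses as a bigram.
        for size in range(max_ngram, 0, -1):
            if words[i:i + size] and words[i:i + size] == words[i + size:i + 2 * size]:
                # Skip the first copy; the loop re-checks the survivor,
                # so triple repetitions ("to to to") collapse fully.
                i += size
                collapsed = True
                break
        if not collapsed:
            out.append(words[i])
            i += 1
    return out

hypothesis = "i i want to to to go go home".split()
print(" ".join(remove_repetitions(hypothesis)))
```

A pass like this operates purely on the recogniser's text output, which is why the abstract can report it as a post-processing module independent of the acoustic and language models.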
|
794 |
Investigation into automatic speech recognition of different dialects of Northern Sotho. Mapeka, Madimetja Asaph. January 2005 (has links)
Thesis (MSc. (Computer Science)) -- University of Limpopo, 2005 / Refer to the document / Telkom (SA), HP (SA) and National Research Fund
|
795 |
The relationship between parental language input and language outcomes in children with cochlear implants. Grieb, Melinda Jean. 01 May 2010 (has links)
This study used the LENA Digital Language Processor (DLP) to examine parental input as a possible factor in the variability of language performance among children with cochlear implants. Eight children with cochlear implants, between the ages of 2 and 6, wore the LENA DLP for one full day while engaging in typical family activities. Adult word counts, child word counts, and number of conversational turns were compared with each child's Preschool Language Scales, 3rd Edition (PLS-3) scores and with LENA data from normal-hearing children. Parents of children with cochlear implants were found to talk in a similar fashion to parents of normal-hearing children with respect to amount of speech. The children, however, were significantly above average on word counts while being significantly below average on PLS-3 scores. Possible reasons for this discrepancy are discussed.
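The three measures named above (adult word count, child word count, conversational turns) can be illustrated with a toy tally over a sequence of timed utterances. LENA's actual algorithms are proprietary; this is a simplified stand-in in which a turn is counted whenever the speaker changes within a short gap, and all names and the 5-second window are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str   # "adult" or "child"
    words: int     # word count in this utterance
    onset: float   # start time in seconds

def summarize(utterances, max_gap=5.0):
    """Tally adult/child word counts and count a conversational turn each
    time the speaker changes within max_gap seconds of the prior utterance."""
    counts = {"adult": 0, "child": 0}
    turns = 0
    prev = None
    for u in utterances:
        counts[u.speaker] += u.words
        if prev and u.speaker != prev.speaker and (u.onset - prev.onset) <= max_gap:
            turns += 1
        prev = u
    return counts["adult"], counts["child"], turns

session = [
    Utterance("adult", 6, 0.0),
    Utterance("child", 2, 2.5),
    Utterance("adult", 5, 4.0),
    Utterance("adult", 7, 20.0),   # long gap, same speaker: no new turn
]
print(summarize(session))
```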
|
796 |
Impairments in the acquisition of new object-name associations after unilateral temporal lobectomy despite fast-mapping encoding. Schmitt, Kendra Marie. 01 May 2013 (has links)
Learning new object-name associations (i.e., word learning) is an ability crucial to normal development, beginning in early childhood and continuing through the lifespan. To learn a new word, an object must be associated with an arbitrary phonological (or orthographic) string representing the word. The declarative memory system formulates and encodes associations between two arbitrary stimuli and has been well established as playing a critical role in adult word learning. Research investigating the neural substrates of the declarative memory system and word learning has implicated the hippocampus and the surrounding medial temporal lobe (MTL) as crucial structures. A substantial literature on populations with damage to these structures (e.g., hippocampal amnesia, temporal lobectomy) supports the view that without them, declarative learning, and word learning by extension, is grossly impaired. However, a recent study suggested that non-MTL structures may be sufficient to support word learning under special study conditions ("fast mapping") (Sharon, Moscovitch, & Gilboa, 2011). Fast mapping is a word-learning phenomenon described as the ability to acquire the name for a new object from a single exposure to the unknown word and its unfamiliar referent, presented alongside a known word with its referent (e.g., Carey & Bartlett, 1978; Carey, 2010).
This study evaluated the ability of patients with unilateral temporal lobectomy (TL) following early-onset temporal lobe epilepsy to learn new object-name associations in two different word-learning conditions: fast mapping (FM) and explicit encoding (EE). Word-learning performance was evaluated relative to a group of healthy normal comparison (NC) participants. The goal of this study was to examine the role of the hippocampus in word learning and to answer the question: does an FM condition promote word learning in participants with temporal lobe epilepsy who have undergone a left temporal lobectomy?
NC participants were able to acquire a rich representation of novel items (as evidenced by improved familiarity ratings and generalization of items), whereas TL participants showed severely impaired performance on free recall, recognition testing, and generalization tasks. TL participants did not learn novel object-name associations despite the FM paradigm, while the NC group performed significantly above chance on recognition testing. These findings, in conjunction with broadly similar results obtained from hippocampal amnesic patients tested using the same paradigm (Warren & Duff, 2012), support the necessity of the hippocampus for forming rapid and flexible associations via the declarative memory system.
|
797 |
The association between supraglottic activity and glottal stops at the sentence level. Kim, Se In. 01 May 2015 (has links)
Contrary to the previous belief that any presence of supraglottic activity indicates hyperfunctional vocal pathology, Stager et al. (2000, 2002) found that supraglottic compressions do occur in normal subjects. In fact, dynamic false vocal fold (FVF) compressions were noted during production of phrases containing a large number of glottal stops. The present study hypothesized that a similar pattern would be observed at the sentence level, with at least a 50% incidence of dynamic FVF compressions at aurally perceived glottal stops and at other linguistic markers where glottal stops were likely to occur, such as vowel-initial words, /t/-final words, punctuation, and phrase boundaries.
Nasendoscopic recordings were obtained from 8 healthy subjects (2 male, 6 female) during production of selected sentence stimuli. Their audio recordings were rated by two judges to identify the locations of glottal stops. The video images were then analyzed to categorize the presence or absence of dynamic and static false vocal fold (FVF) or anterior-posterior (AP) compressions. Results indicated that the overall incidence of dynamic FVF compressions was 30%. Nevertheless, the average incidence was elevated at aurally perceived glottal stops and in the linguistic contexts known to be associated with glottal stops, compared with other contexts.
|
798 |
Talker-identification training using simulations of hybrid CI hearing: generalization to speech recognition and music perception. Flaherty, Ruth. 01 January 2014 (has links)
The speech signal carries two types of information: linguistic information (the message content) and indexical information (acoustic cues about the talker). In the traditional view of speech perception, the acoustic differences among talkers were considered "noise". In this view, the listeners' task was to strip away unwanted variability to uncover the idealized phonetic representation of the spoken message. A more recent view suggests that both talker information and linguistic information are stored in memory. Rather than being unwanted "noise", talker information aids in speech recognition especially under difficult listening conditions. For example, it has been shown that normal hearing listeners who completed voice recognition training were subsequently better at recognizing speech from familiar versus unfamiliar voices.
For individuals with hearing loss, access to both types of information may be compromised. Some studies have shown that cochlear implant (CI) recipients are relatively poor at using indexical speech information because low-frequency speech cues are poorly conveyed in standard CIs. However, some CI users with preserved residual hearing can now combine acoustic amplification of low frequency information (via a hearing aid) with electrical stimulation in the high frequencies (via the CI). It is referred to as bimodal hearing when a listener uses a CI in one ear and a hearing aid in the opposite ear. A second way electrical and acoustic stimulation is achieved is through a new CI system, the hybrid CI. This device combines electrical stimulation with acoustic hearing in the same ear, via a shortened electrode array that is intended to preserve residual low frequency hearing in the apical portion of the cochlea. It may be that hybrid CI users can learn to use voice information to enhance speech understanding.
This study will assess voice learning and its relationship to talker-discrimination, music perception, and spoken word recognition in simulations of Hybrid CI or bimodal hearing. Specifically, our research questions are as follows: (1) Does training increase talker identification? (2) Does familiarity with the talker or linguistic message enhance spoken word recognition? (3) Does enhanced spectral processing (as demonstrated by improved talker recognition) generalize to non-linguistic tasks such as talker discrimination and music perception tasks?
To address our research questions, we will recruit normal-hearing adults to participate in eight talker-identification training sessions. Prior to training, subjects will be administered the forward and backward digit span task to assess short-term memory and working memory abilities. We hypothesize that there will be a correlation between the ability to learn voices and memory. Subjects will also complete a talker-discrimination test and a music perception test that require the use of spectral cues. We predict that training will generalize to performance on these tasks. Lastly, a spoken word recognition (SWR) test will be administered before and after talker-identification training. The subjects will listen to sentences produced by eight talkers (four male, four female) and verbally repeat what they hear. Half of the sentences will contain keywords repeated in training and half will contain keywords not repeated in training. Additionally, subjects will have heard sentences from only half of the talkers during training. We hypothesize that subjects will show an advantage for trained keywords over non-trained keywords and will perform better with familiar talkers than with unfamiliar talkers.
|
799 |
Listening Rate Preferences of Language Disordered Children as a Function of Grammatical Complexity. Orloff, Wendy Lee. 01 January 1977 (has links)
The purpose of this investigation was to determine if performance on a language comprehension task, varying in number of syntactical units (i.e., grammatical complexity), was affected by altered rates of speech. A total of twenty-four language disordered children, aged 7 years, 8 months, through 9 years, 8 months, who were enrolled in language/learning disorders classrooms in the Portland Public Schools served as subjects. The Assessment of Children's Language Comprehension (Foster et al., 1972) test was administered to each subject via audio-tape at one expanded (100 wpm), one normal (150 wpm), and two compressed rates (200, 250 wpm) of speech.
The results of this investigation showed significant differences between performances at varying rates of speech. The normal speaking rate produced significantly better comprehension scores than the other rates. The fast speaking rate (200 wpm) produced the next best scores, while the slow speaking rate (100 wpm) produced significantly lower scores.
The results also indicated a normal speaking rate appears to be the best overall rate to use among language disordered subjects, regardless of grammatical complexity.
|
800 |
One-third octave band augmented speech discrimination testing for cochlear impaired listeners. Heath, Dianne. 01 January 1983 (has links)
The purpose of this study was to investigate the effects of a 500 Hz and 3,150 Hz one-third octave band augmentation on the speech discrimination ability of listeners with cochlear hearing impairments. The results were analyzed both within the experimental group of subjects included in the present study and in comparison with data collected on a control group of normal hearing subjects reported earlier.
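For context, the edge frequencies of the one-third octave bands centered at 500 Hz and 3,150 Hz lie at the center frequency scaled by 2^(±1/6). A minimal sketch, assuming the base-2 band definition (standards such as ANSI S1.11 also define a base-10 variant, which differs slightly):

```python
import math

def third_octave_edges(fc):
    """Lower and upper edge frequencies of a one-third octave band
    centered at fc, using the base-2 definition: fc * 2**(+/- 1/6)."""
    factor = 2 ** (1 / 6)
    return fc / factor, fc * factor

for fc in (500.0, 3150.0):
    lo, hi = third_octave_edges(fc)
    print(f"{fc:.0f} Hz band: {lo:.1f}-{hi:.1f} Hz")
```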
|