931 |
Computational Multimedia for Video Self Modeling
Shen, Ju, 01 January 2014 (has links)
Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of himself or herself performing it. This is the idea behind the psychological theory of self-efficacy: seeing yourself perform a task helps you learn to perform it, providing an ideal form of behavior modeling. The effectiveness of VSM has been demonstrated for many types of disabilities and behavioral problems, ranging from stuttering, inappropriate social behaviors, autism, and selective mutism to sports training. However, there is an inherent difficulty in producing VSM material: prolonged and persistent video recording is required to capture the rare, if not nonexistent, snippets that can be strung together to form novel video sequences of the target skill. To solve this problem, this dissertation uses computational multimedia techniques to facilitate the creation of synthetic visual content for self-modeling that a learner and his or her therapist can use with a minimum amount of training data. The research makes three major technical contributions. First, I developed an Adaptive Video Re-sampling algorithm to synthesize realistic lip-synchronized video with minimal motion jitter. Second, to denoise and complete the depth maps captured by structured-light sensing systems, I introduced a layer-based probabilistic model that accounts for various types of uncertainty in the depth measurement. Third, I developed a simple and robust bundle-adjustment-based framework for calibrating a network of multiple wide-baseline RGB and depth cameras.
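The abstract gives no implementation details, but the third contribution rests on minimizing reprojection error, a standard formulation in camera calibration. The following minimal Python sketch illustrates the kind of residual a bundle-adjustment-based calibration might minimize for a single camera; the pinhole model, axis-angle parameterization, solver choice (scipy.optimize.least_squares), and all numeric values are assumptions made for illustration, not the author's actual framework.

# Hedged sketch: reprojection residual for estimating one camera's pose against
# known 3D points, as a simplified stand-in for the multi-camera bundle
# adjustment described above. Everything here is illustrative, not from the thesis.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, K, points_3d, points_2d):
    # params = [rx, ry, rz, tx, ty, tz]: axis-angle rotation plus translation.
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:6]
    cam = (R @ points_3d.T).T + t          # world -> camera coordinates
    proj = (K @ cam.T).T                   # pinhole projection
    uv = proj[:, :2] / proj[:, 2:3]        # perspective divide
    return (uv - points_2d).ravel()        # residuals minimized by the solver

# Toy usage with synthetic data (purely illustrative).
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
points_3d = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])
true_pose = np.array([0.02, -0.01, 0.03, 0.1, -0.05, 0.2])
points_2d = reprojection_residuals(true_pose, K, points_3d, np.zeros((20, 2))).reshape(-1, 2)
result = least_squares(reprojection_residuals, x0=np.zeros(6), args=(K, points_3d, points_2d))
print("Recovered pose:", np.round(result.x, 3))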
|
932 |
Social-Emotional Development in Children with Hearing Loss
Harris, Lori Gayle, 01 January 2014 (has links)
Many positive outcomes have been documented for children with hearing loss who receive current treatment approaches such as early identification and intervention, including appropriately fit sensory devices and communication modes that focus on listening and spoken language. However, challenges related to social-emotional development have been widely observed. The development of communication skills in children with hearing loss is affected by many factors, including the degree of hearing loss, the child's age at onset and identification, the presence of other disabilities, and the timing of intervention. While there are a variety of therapeutic options available for children with hearing loss to develop communication skills, listening and spoken language is of particular interest to parents with normal hearing. In addition to affecting social competence and participation, problems with social-emotional development are linked to poor academic performance. This study examined the social-emotional development of a small group of young children who communicated using listening and spoken language, as measured by parent and caregiver report. Three psychosocial scales were used to evaluate the children's social-emotional development in comparison to peers. These results were analyzed within the context of other demographic variables. One of the five children was identified as facing problems with social-emotional development.
|
933 |
Static and Dynamic Spectral Acuity in Cochlear Implant Listeners for Simple and Speech-like Stimuli
Russell, Benjamin Anderson, 30 June 2016 (has links)
For cochlear implant (CI) listeners, poorer than normal speech recognition abilities are typically attributed to degraded spectral acuity. However, estimates of spectral acuity have most often been obtained using simple (tonal) stimuli, presented directly to the implanted electrodes, rather than through the speech processor as occurs in everyday listening. Further, little is known about spectral acuity for dynamic stimuli, as compared to static stimuli, even though the perception of dynamic spectral cues is important for speech perception.
The primary goal of the current study was to examine spectral acuity in CI listeners, and a comparison group of normal hearing (NH) listeners, for both static and dynamic stimuli presented through the speech processor. In addition to measuring static and dynamic spectral acuity for simple stimuli (pure tones) in Experiment 1, spectral acuity was measured for complex stimuli (synthetic vowels) in Experiment 2, because measures obtained with speech-like stimuli are more likely to reflect listeners’ ability to make use of spectral cues in naturally-produced speech. Sixteen postlingually-deaf, adult CI users and sixteen NH listeners served as subjects in both experiments.
In Experiment 1, frequency discrimination limens (FDLs) were obtained for 1.5 kHz reference tones, and frequency glide discrimination limens (FGDLs) were obtained for pure-tone frequency glides centered on 1.5 kHz. Glide direction identification thresholds (GDITs) were also measured, in order to determine the amount of frequency change required to identify glide direction. All three measures were obtained for stimuli having both longer (150 ms) and shorter (50 ms) durations.
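The abstract does not state which psychophysical procedure was used to estimate these limens. As a hedged illustration only, the following Python sketch tracks a frequency discrimination threshold with a generic 2-down/1-up adaptive staircase; the step rule, the simulated listener, and all parameter values are assumptions rather than details of the study.

# Hedged sketch: a generic 2-down/1-up adaptive staircase for tracking a
# frequency discrimination limen (FDL). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
TRUE_FDL_HZ = 8.0  # hypothetical "true" limen of the simulated listener

def listener_correct(delta_f_hz):
    # Simulated listener: more likely to respond correctly as the step grows.
    p_correct = 0.5 + 0.5 / (1.0 + np.exp(-(delta_f_hz - TRUE_FDL_HZ)))
    return rng.random() < p_correct

def run_staircase(start_hz=50.0, step_factor=1.5, n_reversals=10):
    delta, correct_in_row, direction = start_hz, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if listener_correct(delta):
            correct_in_row += 1
            if correct_in_row == 2:            # 2-down: shrink after two correct
                correct_in_row = 0
                if direction == +1:
                    reversals.append(delta)    # turnaround from rising to falling
                direction = -1
                delta /= step_factor
        else:                                   # 1-up: grow after any error
            correct_in_row = 0
            if direction == -1:
                reversals.append(delta)        # turnaround from falling to rising
            direction = +1
            delta *= step_factor
    return np.mean(reversals[-6:])              # threshold = mean of last reversals

print(f"Estimated FDL: {run_staircase():.1f} Hz")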
Spectral acuity for dynamic stimuli (FGDLs, GDITs) was poorer than spectral acuity for static stimuli (FDLs) for both listener groups at both stimulus durations. Stimulus duration had a significant effect on thresholds in NH listeners, for all three measures, but had no significant effect on thresholds in CI listeners for any measure. Regression analyses revealed no systematic relationship between FDLs and FGDLs in NH listeners at either stimulus duration. For CI listeners, the relationship between FDLs and FGDLs was significant at both stimulus durations, suggesting that, for tonal signals, the factors that determine spectral acuity for static stimuli also largely determine spectral acuity for dynamic stimuli.
In Experiment 2, estimates of static and dynamic spectral acuity were obtained using three-formant synthetic vowels, modeled after the vowel /ʌ/. Formant discrimination thresholds (FDTs) were measured for changes in static F2 frequency, whereas formant transition discrimination thresholds (FTDTs) were measured for stimuli that varied in the extent of F2 frequency change. FDTs were measured with 150-ms stimuli, and FTDTs were measured with both 150-ms and 50-ms stimuli. For both listener groups, FTDTs were similar for the longer and shorter stimulus durations, and FTDTs were larger than FDTs at the common duration of 150 ms. Measures from Experiment 2 were compared to analogous measures from Experiment 1 in order to examine the effect of stimulus context (simple versus complex) on estimates of spectral acuity. For NH listeners, measures obtained with complex stimuli (FDTs, FTDTs) were consistently larger than the corresponding measures obtained with simple stimuli (FDLs, FGDLs). For CI listeners, the relationship between simple and complex measures differed across two subgroups of subjects. For one subgroup, thresholds obtained with complex stimuli were smaller than those obtained with simple stimuli; for the other subgroup, the pattern was reversed. On the basis of these findings, it was concluded that estimates of spectral acuity obtained with simple stimuli cannot accurately predict estimates of spectral acuity obtained with complex (speech-like) stimuli in CI listeners. However, a significant relationship was observed between FDTs and FTDTs. Thus, similar to the measures obtained with pure-tone stimuli in Experiment 1 (FDLs and FGDLs), estimates of static spectral acuity (FDTs) appear to predict estimates of dynamic spectral acuity (FTDTs) when both measures are obtained with stimuli of similar complexity in CI listeners.
Taken together, findings from Experiments 1 and 2 support the following conclusions: (1) Dynamic spectral acuity is poorer than static spectral acuity for both simple and complex stimuli. This outcome was true for both NH and CI listeners, despite the fact that absolute thresholds were substantially larger, on average, for the CI group. (2) For stimuli having the same level of complexity (i.e., tonal or speech-like), dynamic spectral acuity in CI listeners appears to be determined by the same factors that determine spectral acuity for static stimuli. (3) For CI listeners, no systematic relationship was observed between analogous measures of spectral acuity obtained with simple, as compared to complex, stimuli. (4) It is expected that measures of spectral acuity based on complex stimuli would provide a better indication of CI users’ ability to make use of spectral cues in speech; therefore, it may be advisable for studies attempting to examine the relationship between spectral acuity and speech perception in this population to measure spectral acuity using complex, rather than simple, stimuli. (5) Findings from the current study are consistent with recent vowel identification studies suggesting that some poorer-performing CI users have little or no access to dynamic spectral cues, while access to such cues may be relatively good in some better-performing CI users. However, additional research is needed to examine the relationship between estimates of spectral acuity obtained here for speech-like stimuli (FDTs, FTDTs) and individual CI users’ perception of static and dynamic spectral cues in naturally-produced speech.
|
934 |
Hormone Replacement Therapy (HRT) Modulates Peripheral and Central Auditory System Processing With Aging
Williamson, Tanika, 08 November 2016 (has links)
After the findings of the Women’s Health Initiative (WHI) study were reported in the past decade, there has been a significant decline in the overall use of hormone replacement therapy (HRT) among women. However, there are still millions of middle-aged, menopausal women in the U.S. who are currently undergoing hormone therapy. Their reasons for continuing treatment include relief of severe menopausal symptoms, aid in the management of osteoporosis, and reduction in the risk of colon cancer (Ness et al., 2005). The purpose of the following investigation was to evaluate the impact of HRT on the central and peripheral auditory systems both during and after treatment. Over the course of the study, hormone treatments were administered to aging female CBA/CaJ mice to observe the effects of estrogen (E) and progestin (P) on the peripheral and central auditory systems. Middle-aged female CBA/CaJ mice were ovariectomized and placed into 4 HRT groups (E, P, E+P, and placebo [Pb]). Hormone treatment lasted 6 months, followed by a recovery/washout period of 1 month. During this time, electrophysiological tests, including auditory brainstem responses (ABR) and ABR gap-in-noise (GIN) measures, were used to assess neural activity in the auditory nerve and brainstem. Distortion product otoacoustic emission (DPOAE) testing was also implemented to assess the functional status of the outer hair cells (OHC) and their ability to amplify sound in the cochlea. After 6 months of treatment, animals treated with E exhibited smaller changes in ABR thresholds and ABR GIN amplitudes than any other subject group. Interestingly, P animals exhibited an abrupt increase in ABR thresholds only 3 months into treatment, whereas their ABR GIN amplitudes showed a progressive reduction throughout the study. E+P and Pb animals showed signs of accelerated age-related hearing loss (ARHL), with significantly elevated ABR thresholds and dwindling ABR GIN amplitude levels. No significant signs of recovery were observed for any of the hormone groups. Therefore, in the present murine investigation, the effects of HRT were long-lasting.
To further expand on the results of the electrophysiological tests, molecular biology experiments were performed to evaluate the expression of IGF-1R and FoxO3 in the cochlea during hormone therapy, from both in vitro and in vivo perspectives. Both genes play significant roles in the PI3K/AKT pathway and were specifically chosen because of their roles in anti-apoptotic responses and cell survival. It was hypothesized that E attenuates the effects of ARHL via the PI3K/AKT pathway by up-regulating IGF-1R and FoxO3 to counteract the effects of oxidative stress in the aging mammalian cochlea. qPCR experiments were performed on stria vascularis (SV) lateral wall cells extracted from the cochlea of each animal in the hormone groups post-treatment (in vivo) and on SVK-1 cells treated with HRT over various lengths of time (in vitro) to evaluate the expression levels of IGF-1R and FoxO3. In vivo experiments showed that the E-treated animals had significantly higher IGF-1R levels than the other subject groups after treatment was discontinued. Similarly, IGF-1R levels steadily increased in E-treated SVK-1 cells over the course of hormone treatment, compared to P- and E+P-treated cells. FoxO3 expression, on the other hand, declined for all of the hormone-treated cell groups relative to control SVK-1 cells (in vitro), and no statistical differences were detected in FoxO3 levels among the post-treatment animals (in vivo). These findings indicate that there is cross-talk between E and IGF-1R involving the PI3K/AKT pathway, which contributes to the delayed onset of ARHL observed during HRT with E. Meanwhile, FoxO3 may not play the neuroprotective role in the cochlea during HRT that was initially hypothesized.
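The abstract reports qPCR comparisons of IGF-1R and FoxO3 expression but does not state how relative expression was quantified. The following Python sketch shows the widely used 2^(-ΔΔCt) method as one plausible analysis; the reference gene (GAPDH), the cycle-threshold values, and the group sizes are invented for illustration and are not data from the study.

# Hedged sketch: relative gene expression by the 2^(-ΔΔCt) method, as one
# plausible way to compare IGF-1R expression between an estrogen-treated group
# and a placebo group. Ct values and the reference gene are hypothetical.
import numpy as np

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # Fold change of the target gene in treated vs. control samples.
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical cycle-threshold (Ct) values for IGF-1R and a GAPDH reference.
igf1r_e  = [24.1, 24.4, 23.9]   # estrogen-treated animals (illustrative)
gapdh_e  = [18.2, 18.0, 18.3]
igf1r_pb = [26.0, 25.7, 26.2]   # placebo animals (illustrative)
gapdh_pb = [18.1, 18.2, 18.0]

print(f"IGF-1R fold change (E vs. Pb): {fold_change(igf1r_e, gapdh_e, igf1r_pb, gapdh_pb):.2f}")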
|
935 |
Investigation of Bilingualism Knowledge of Speech-Language Pathologists and Speech-Language Pathology Students
Leon, Michelle, 01 July 2015 (has links)
The purpose of this thesis was to administer a survey to obtain information on practicing Speech-Language Pathologists’ (SLPs) knowledge of bilingual issues, while also considering whether academic background in bilingualism guides SLPs’ diagnostic and treatment decisions. This was done by comparing the survey results of practicing SLPs who had different academic backgrounds in bilingualism with those of current students in the Master’s program in Communication Sciences and Disorders at Florida International University (FIU). The survey consisted of 26 questions examining participants’ history and bilingual knowledge.
Data were collected from 89 surveys. Data analyses showed that students and SLPs with a strong educational background in bilingualism were more likely than those with little or no educational background in bilingualism to prefer answers corresponding to recent research findings on bilingualism. These results suggest that academic background in bilingualism guides assessment interpretation and treatment decisions for bilingual clients.
|
936 |
Help Me Chat: Eliciting Communicative Acts from Young Children Using Speech-Generating Devices
Hernandez-Cartaya, Rebecca A., 08 July 2016 (has links)
Augmentative and alternative communication (AAC) is an evidence-based practice targeting the communication deficits of children with complex communication needs (CCN). While young children with communication disorders are attending preschool and using AAC, and specifically speech-generating devices (SGDs), with increasing frequency, best practices for implementation with this population are largely unexplored. In an effort to contribute to the knowledge base for teachers, the essential communication partners for children in the classroom setting, this research explored the interactions of four teacher-child dyads and analyzed the prompts and cues used to elicit communicative acts from the children.
Results of statistical and descriptive analyses revealed that, while teachers overwhelmingly favored and used verbal prompts over other stimuli, these prompts were no more effective than other prompt types in eliciting communicative acts. These results indicate that teachers would benefit from instruction in a variety of techniques for enhancing communication via AAC; future research directions toward this purpose are detailed.
|
937 |
The Efficacy of Training Parents to Deliver Multiple Oppositions Intervention to Children with Speech Sound Disorders
Sugden, Eleanor, Baker, Elise, Munro, Natalie, Williams, A. Lynn, Trivette, Carol M., 28 May 2018 (has links)
No description available.
|
938 |
School-Based Speech-Language Pathologist's Perceptions of Sensory Food Aversions in Children
Monroe, Ellen, 01 May 2020 (has links)
Sensory Food Aversions occur frequently in children who are likely to appear on the caseloads of Speech-Language Pathologists (SLPs). The lack of research on intervention for Sensory Food Aversions in schools, together with assertions of a gap in school-based services for children with feeding disorders, indicated a clear need for this study. A quantitative, descriptive, exploratory research design using a self-developed questionnaire was selected in order to explore school-based SLPs’ perceptions of their knowledge and skills related to Sensory Food Aversions, as well as to determine the resources available for working with this population. Findings from the study suggest a need for educational training, emphasize the advocacy role of the SLP, and shed light on the challenges and barriers SLPs face when treating Sensory Food Aversions in schools. This study may be useful for SLPs seeking to meet the needs of children with Sensory Food Aversions.
|
939 |
Vascular and White-Matter Alterations in Blast and Trauma-Induced Balance and Gait Problems Revealed by Susceptibility-Weighted and Diffusion-Tensor Imaging
Gattu, R., Akin, Faith W., Cacace, A. T., Murnane, Owen D., Haacke, E. M., 01 August 2014 (has links)
No description available.
|
940 |
The Relationship Between Distortion Product Otoacoustic Emissions and Extended High-Frequency Audiometry in Tinnitus Patients
Fabijańska, Anna, Smurzyński, Jacek, Hatzopoulos, Stavros, Kochanek, Krzysztof, Bartnik, Grażyna, Raj-Koziak, Danuta, Mazzoli, Manuela, Skarżyński, Piotr H., Jędrzejczak, Wieslaw W., Szkiełkowska, Agata, Skarżyński, Henryk, 01 December 2012 (has links)
BACKGROUND: The aim of this study was to evaluate distortion product otoacoustic emissions (DPOAEs) and extended high-frequency (EHF) thresholds in a control group and in patients who had normal hearing sensitivity in the conventional frequency range and reported unilateral tinnitus.
MATERIAL/METHODS: Seventy patients were enrolled in the study: 47 patients with tinnitus in the left ear (Group 1) and 23 patients with tinnitus in the right ear (Group 2). The control group included 60 otologically normal subjects with no history of pathological tinnitus. Pure-tone thresholds were measured at all standard frequencies from 0.25 to 8 kHz, and at 10, 12.5, 14, and 16 kHz. The DPOAEs were measured in the frequency range from approximately 0.5 to 9 kHz using the primary tones presented at 65/55 dB SPL.
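The methods report the primary-tone levels (65/55 dB SPL) but not the primary-frequency ratio. As a hedged illustration, the Python snippet below computes the cubic distortion product frequency 2f1 - f2 for a few f2 values, assuming the commonly used ratio f2/f1 of approximately 1.22; the ratio and the listed f2 values are assumptions, not stimulus parameters reported in the paper.

# Hedged sketch: primary-tone and distortion-product frequencies for a DPOAE
# sweep. The f2/f1 ratio of 1.22 and the f2 values are typical choices used
# only for illustration; the study's actual parameters beyond the 65/55 dB SPL
# primary levels are not given in the abstract.
F2_F1_RATIO = 1.22
L1_DB_SPL, L2_DB_SPL = 65, 55  # primary levels reported in the abstract

f2_values_hz = [1000, 2000, 4000, 8000]  # illustrative f2 frequencies

for f2 in f2_values_hz:
    f1 = f2 / F2_F1_RATIO
    dp = 2 * f1 - f2                      # cubic distortion product, 2f1 - f2
    print(f"f2 = {f2:5d} Hz  ->  f1 = {f1:6.0f} Hz, 2f1-f2 = {dp:6.0f} Hz")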
RESULTS: The left ears of patients in Group 1 had higher median hearing thresholds than those in the control subjects at all 4 EHFs, and lower mean DPOAE levels than those in the controls for almost all primary frequencies, but significantly lower only in the 2-kHz region. Median hearing thresholds in the right ears of patients in Group 2 were higher than those in the right ears of the control subjects in the EHF range at 12.5, 14, and 16 kHz. The mean DPOAE levels in the right ears were lower in patients from Group 2 than those in the controls for the majority of primary frequencies, but only reached statistical significance in the 8-kHz region.
CONCLUSIONS: Hearing thresholds in tinnitus ears with normal hearing sensitivity in the conventional range were higher in the EHF region than those in non-tinnitus control subjects, implying that cochlear damage in the basal region may result in the perception of tinnitus. In general, DPOAE levels in tinnitus ears were lower than those in ears of non-tinnitus subjects, suggesting that subclinical cochlear impairment in limited areas, which can be revealed by DPOAEs but not by conventional audiometry, may exist in tinnitus ears. For patients with tinnitus, DPOAE measures combined with behavioral EHF hearing thresholds may provide additional clinical information about the status of peripheral hearing.
|