21. Bronchoscopy and Airway Disorders in Children. Masters, Ian Brent (date unknown).
Tracheobronchial structural lesions present a considerable diagnostic challenge and workload to tertiary paediatric services. Bronchoscopy is the definitive way of confirming these diagnoses. Quantifying lesion size is important to management decision-making, yet this aspect of assessment has been left to subjective visual estimates because no method has been available for quantitative measurement. The clinical profiles of children with these disorders have long been suspected to be worse than those of respiratory illnesses in otherwise healthy children, but this has never been studied using objective criteria. The major hypothesis of this thesis is that structural lesions of the tracheobronchial tree, such as malacia disorders, result in significant respiratory morbidity arising from dose-dependent cross-sectional area losses, and that these lesions improve with increasing age and with management strategies. The aims of this thesis were (i) to develop a methodology for objectively quantifying airway lesions using a paediatric bronchoscope, and (ii) to establish a cohort of children with airway lesions, quantitatively define those lesions, and then study the lesions and respiratory illness profiles longitudinally over a two-year period using validated illness scales.
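The abstract does not spell out how lesion size is quantified, but the quantity it targets, a percentage loss of cross-sectional area, is simple arithmetic once airway dimensions are measured. The sketch below is a minimal illustration only; the elliptical-lumen approximation and the example diameters are assumptions, not the thesis method.

```python
import math

def percent_area_loss(normal_diam_mm: float,
                      lesion_major_mm: float,
                      lesion_minor_mm: float) -> float:
    """Percentage cross-sectional area loss at a lesion, approximating the
    normal lumen as a circle and the narrowed lumen as an ellipse."""
    normal_area = math.pi * (normal_diam_mm / 2) ** 2
    lesion_area = math.pi * (lesion_major_mm / 2) * (lesion_minor_mm / 2)
    return 100.0 * (1.0 - lesion_area / normal_area)

# Hypothetical example: an 8 mm airway narrowed to a 7 x 3 mm elliptical lumen.
print(f"{percent_area_loss(8.0, 7.0, 3.0):.1f}% cross-sectional area loss")  # ~67.2%
```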

22. Can the auditory late response indicate audibility of speech sounds from hearing aids with different digital processing strategies? Ireland, Katie Helen (January 2014).
Auditory late responses (ALRs) have been proposed as a hearing aid (HA) evaluation tool, but there are limited data on how digital HAs alter waveform morphology. The research had two phases: an adult normal-hearing phase and an infant hearing-impaired clinical feasibility phase. The adult normal-hearing study investigated how different HA strategies and stimuli may influence the ALR. ALRs were recorded from 20 normally hearing young adults. Test sounds /m/, /g/ and /t/, processed in four HA conditions (unaided, linear, wide dynamic range compression (WDRC) and non-linear frequency compression (NLFC)), were presented at 65 dB nHL. Stimuli were 100 ms in duration with a 3-second inter-stimulus interval. An Fsp measure of ALR quality was calculated, and its significance was determined using bootstrap analysis to objectively distinguish a response from background noise. Data from 16 subjects were included in the statistical analysis. ALRs were present in 96% of conditions, and repeatability between unaided ALRs was good. Unaided amplitudes were significantly larger than all aided amplitudes, and unaided latencies were significantly earlier than aided latencies in most conditions. There was no significant effect of NLFC on the ALR waveforms. Stimulus type had a significant effect on amplitude but not latency. The results showed that ALRs can be recorded reliably through a digital HA. There was an overall effect of aiding on the response, most likely due to the delay, compression characteristics and frequency shaping introduced by the HA. The type of HA strategy did not significantly alter the ALR waveform. The differences in ALR amplitude due to stimulus type may reflect the tonotopic organisation of the auditory cortex. The infant hearing-impaired study explored the feasibility of using ALRs to indicate the audibility of sound from HAs in a clinical population. ALRs were recorded from five infants aged 5-6 months with bilateral sensorineural hearing loss, wearing their customised HAs. The speech sounds /m/ and /t/ from the adult study were presented at an RMS level of 65 dB SPL in three conditions: unaided, WDRC and NLFC. Bootstrap analysis of Fsp was again used to determine response presence, and probe-microphone measures were recorded in the aided conditions to confirm audibility of the test sounds. ALRs were recordable in young infants wearing HAs: 85% of aided responses were present, compared with only 10% of unaided responses. Active NLFC improved aided response presence to the high-frequency speech sound /t/ for one infant. There were no clear differences in the aided waveforms between the speech sounds. The results showed that it is feasible to record ALRs in an infant clinical population, and the response appeared more sensitive to improved audibility than to frequency alterations.
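The Fsp statistic and the bootstrap presence test mentioned above are not detailed in the abstract. The Python sketch below shows one common form of Fsp (variance of the averaged waveform divided by an estimate of residual noise variance) together with a simple randomisation-based presence test; the function names, the circular-shift null and the choice of a single fixed time point are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def fsp(sweeps: np.ndarray, single_point: int) -> float:
    """One common form of the Fsp statistic: variance of the averaged waveform
    over time, divided by an estimate of residual noise variance taken from the
    across-sweep variance at a single fixed time point."""
    mean_waveform = sweeps.mean(axis=0)
    signal_var = mean_waveform.var()
    noise_var = sweeps[:, single_point].var() / sweeps.shape[0]
    return signal_var / noise_var

def bootstrap_presence_p(sweeps: np.ndarray, single_point: int,
                         n_boot: int = 1000, seed: int = 0) -> float:
    """Proportion of surrogate data sets whose Fsp is at least as large as the
    observed Fsp. Surrogates are built by circularly shifting each sweep by a
    random offset, which destroys stimulus-locked activity but keeps the noise."""
    rng = np.random.default_rng(seed)
    observed = fsp(sweeps, single_point)
    n_sweeps, n_samples = sweeps.shape
    exceed = 0
    for _ in range(n_boot):
        shifts = rng.integers(0, n_samples, size=n_sweeps)
        surrogate = np.stack([np.roll(s, k) for s, k in zip(sweeps, shifts)])
        if fsp(surrogate, single_point) >= observed:
            exceed += 1
    return exceed / n_boot
```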

23. Effects of reverberation and amplification on sound localisation. Al Saleh, Hadeel (January 2011).
Communication often takes place in reverberant spaces, making it harder for listeners to understand speech. In such difficult environments, listeners benefit from being able to locate the sound source. In noisy or reverberant environments, hearing-aid wearers often complain that their aids do not sufficiently help them to understand speech or to localise a sound source. Simple amplification does not fully resolve the problem and sometimes makes it worse. Recent hearing aid features, such as compression and filtering, can significantly alter the interaural time difference (ITD) and interaural level difference (ILD) cues. Digital signal processing also tends to restrict the availability of fine-structure cues, forcing the listener to rely on envelope and level cues. The effect of digital signal processing on localisation, as experienced by hearing aid wearers in different listening environments, has not been well investigated. In this thesis, we aimed to investigate the effect of reverberation on the localisation performance of normal-hearing and hearing-impaired listeners, and to determine the effects that hearing aids have on localisation cues. Three sets of experiments were conducted. In the first set (n=22 normal-hearing listeners), results showed that participants' sound localisation ability in simulated reverberant environments was not significantly different from their performance in a real reverberation chamber. In the second set of four experiments (n=16 normal-hearing listeners), sound localisation ability was tested by introducing simulated reverberation and varying the signal onset/offset times of different stimuli: speech, high-pass speech, low-pass speech, pink noise, a 4 kHz pure tone and a 500 Hz pure tone. In the third set of experiments (n=28 bilateral Siemens Prisma 2 Pro hearing aid users), we investigated the aided and unaided localisation ability of hearing-impaired listeners in anechoic and simulated reverberant environments. Participants were seated in the middle of 21 loudspeakers arranged in a frontal horizontal arc (180°) in an anechoic chamber, and simulated reverberation was presented from four corner loudspeakers. We also performed physical measurements of ITDs and ILDs using a KEMAR manikin. Normal-hearing listeners' ability to localise speech and pink noise stimuli was not significantly affected by reverberation; however, reverberation did have a significant effect on localising a 500 Hz pure tone. Hearing-impaired listeners performed consistently worse in all simulated reverberant conditions, although performance for speech stimuli was significantly worse only in the aided conditions. Unaided hearing-impaired listeners showed decreased performance in simulated reverberation, specifically when sounds came from lateral directions. Moreover, low-pass pink noise was most affected by simulated reverberation in both aided and unaided conditions, indicating that reverberation mainly affects ITD cues. Hearing-impaired listeners performed significantly worse in all conditions when using their hearing aids. Physical measurements and psychoacoustic experiments consistently indicated that amplification mainly affected the ILD cues. We concluded that reverberation destroys the fine-structure ITD cues in sound signals to some extent, thereby reducing the localisation performance of hearing-impaired listeners for low-frequency stimuli. Furthermore, we found that hearing aid compression affects ILD cues, which impairs the ability of hearing-impaired listeners to localise a sound source. Aided sound localisation could be improved for bilateral hearing aid users if the aids synchronised compression between the two sides.
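The ITD and ILD cues central to this work can be estimated directly from a pair of ear signals. The following sketch is not taken from the thesis; the broadband cross-correlation and RMS-ratio approach are assumptions for illustration, checked here on a synthetic delayed and attenuated copy.

```python
import numpy as np

def _rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

def itd_ild(left: np.ndarray, right: np.ndarray, fs: float):
    """Estimate ITD (seconds, positive when the left-ear signal leads) from the
    peak of the broadband cross-correlation, and ILD (dB, positive when the
    left-ear signal is more intense) from the RMS ratio of the two signals."""
    n = len(left)
    xcorr = np.correlate(right, left, mode="full")   # lags -(n-1) .. (n-1)
    lag = int(np.argmax(xcorr)) - (n - 1)            # samples by which left leads
    itd = lag / fs
    ild = 20.0 * np.log10(_rms(left) / _rms(right))
    return itd, ild

# Synthetic check: right-ear signal delayed by 0.5 ms and attenuated by 6 dB.
fs = 48000
rng = np.random.default_rng(1)
src = rng.normal(size=fs // 10)
delay = int(0.0005 * fs)                             # 24 samples at 48 kHz
left = np.concatenate([src, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), src]) * 10 ** (-6 / 20)
print(itd_ild(left, right, fs))                      # roughly (0.0005, 6.0)
```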

24. A feasibility study of visual feedback speech therapy for nasal speech associated with velopharyngeal dysfunction. Phippen, Ginette (January 2013).
Nasal speech associated with velopharyngeal dysfunction (VPD) is seen in children and adults with cleft palate and other conditions that affect soft palate function, and has negative effects on quality of life. Treatment options include surgery and prosthetics, depending on the nature of the problem. Speech therapy is rarely offered as an alternative treatment because evidence from previous studies is weak. However, there is evidence that visual biofeedback approaches are beneficial in other speech disorders and that such an approach could benefit individuals with nasal speech who show potential for improved speech. Theories of learning and feedback also support the view that a combined feedback approach would be most suitable. This feasibility study therefore aimed to develop and evaluate Visual Feedback Therapy (VFTh), a new behavioural speech therapy intervention incorporating speech activities supported by visual biofeedback and performance feedback, for individuals with mild to moderate nasal speech. Evaluation included perceptual, instrumental and quality-of-life measures. Eighteen individuals with nasal speech were recruited from a regional cleft palate centre and twelve completed the study: six female and six male, eleven children (7 to 13 years) and one adult (43 years). Six participants had repaired cleft palate and six had VPD without cleft. Participants received eight sessions of VFTh from one therapist. The findings suggest that the intervention is feasible but that some changes are required, including screening participants for adverse responses and minimising disruptions to intervention scheduling. In blinded evaluation there was considerable variation in individual results, but positive changes occurred in at least one speech symptom between pre- and post-intervention assessment for eight participants. Seven participants also showed improved nasalance scores and seven had improved quality-of-life scores. This small study has provided important information about the feasibility of delivering and evaluating VFTh. It suggests that VFTh shows promise as an alternative treatment option for nasal speech, but that further preliminary development and evaluation is required before larger-scale research is indicated.
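The abstract cites nasalance scores as an instrumental outcome measure. Nasalance is conventionally computed from simultaneously recorded nasal and oral microphone channels; the minimal sketch below shows that ratio as an illustration only, not the study's instrument-specific processing.

```python
import numpy as np

def nasalance_percent(nasal: np.ndarray, oral: np.ndarray) -> float:
    """Nasalance as conventionally defined in nasometry: nasal acoustic energy
    divided by the sum of nasal and oral energy, expressed as a percentage.
    `nasal` and `oral` are simultaneously recorded microphone signals."""
    nasal_rms = np.sqrt(np.mean(nasal.astype(float) ** 2))
    oral_rms = np.sqrt(np.mean(oral.astype(float) ** 2))
    return 100.0 * nasal_rms / (nasal_rms + oral_rms)
```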

25. Factors affecting speech recognition in noise and hearing loss in adults with a wide variety of auditory capabilities. Athalye, Sheetal Purushottam (January 2010).
Studies of speech recognition in noise span a very broad range of work, including the cocktail party effect, performance in different types of speech signal or noise, and benefit and improvement with hearing aids. Another area that has received much attention is the inter-relations among the various auditory and non-auditory capabilities affecting speech intelligibility. Those studies have focused on the relationship between auditory threshold (hearing sensitivity) and a number of suprathreshold abilities such as speech recognition in quiet and in noise, frequency resolution, temporal resolution and the non-auditory ability of cognition. There is considerable discrepancy regarding the relationship between speech recognition in noise and hearing threshold level. Some studies conclude that speech recognition performance in noise can be predicted solely from an individual's hearing threshold level, while others conclude that other suprathreshold factors such as frequency and/or temporal resolution must also play a role. Hearing loss involves more than deficits in recognising speech in noise, raising the question of whether hearing impairment is a uni- or multi-dimensional construct. Moreover, different degrees of hearing loss may display different relationships among measures of hearing ability, or different dimensionality. The present thesis addresses these three issues by examining a wide range of hearing abilities in large samples of participants whose hearing ranged from normal to moderate-severe impairment. The research extends previous work by including larger samples of participants and a wider range of measures of hearing ability, and by differentiating among levels of hearing impairment.

Method: Two large multi-centre studies were conducted, involving 103 and 128 participants respectively. A large battery of tests was devised and refined prior to the main studies and implemented on a common PC-based platform. The test domains included hearing sensitivity, speech recognition in quiet and noise, loudness perception, frequency resolution, temporal resolution, binaural hearing and localisation, cognition, and subjective measures such as listening effort and self-reported hearing disability. Performance tests presented sounds via circum-aural earphones to one or both ears, as required, at intensities matched to individual hearing impairments to ensure audibility. Most tests involved measurements centred on a low frequency (500 Hz), a high frequency (3000 Hz) and broadband. The second study included refinements based on analysis of the first. Analyses included multiple regression for the prediction of speech recognition in stationary or fluctuating noise, and factor analysis to explore the dimensionality of the data. Speech recognition performance was also compared with that predicted by the Speech Intelligibility Index (SII).

Findings: Regression analysis pooled across the two studies showed that speech recognition in noise can be predicted from a combination of hearing threshold at higher frequencies (3000/4000 Hz) and frequency resolution at low frequency (500 Hz). This supports previous studies concluding that resolution is important in addition to hearing sensitivity. It was also confirmed by the fact that the SII (which represents sensitivity rather than resolution) under-predicted the difficulties observed in hearing-impaired ears for speech recognition in noise. Speech recognition in stationary noise was predicted mainly by auditory threshold, while speech recognition in fluctuating noise was predicted by a combination with a larger contribution from frequency resolution. In mild hearing losses (below 40 dB), speech recognition in noise was predicted mainly by hearing threshold; in moderate hearing losses (above 40 dB), it was predicted mainly by frequency resolution when the two studies were combined. Thus the importance of auditory resolution (in this case frequency resolution) increases, and the importance of the audiogram decreases, as the degree of hearing loss increases, provided speech is presented at audible levels. However, for all degrees of hearing impairment included in the study, prediction based solely on hearing thresholds was not much worse than prediction based on a combination of thresholds and frequency resolution. Lastly, hearing impairment was shown to be multi-dimensional; the main factors included hearing threshold, speech recognition in stationary and fluctuating noise, frequency and temporal resolution, binaural processing, loudness perception, cognition and self-reported hearing difficulties. A clinical test protocol for defining an individual auditory profile is suggested on the basis of these findings.

Conclusions: Speech recognition in noise depends on a combination of the audibility of the speech components (hearing threshold) and frequency resolution. Models such as the SII that do not include resolution tend to somewhat over-predict speech recognition performance in noise, especially for more severe hearing impairments; however, the over-prediction is not great. It follows that, for clinical purposes, there is not much to be gained from more complex psychoacoustic characterisation of sensorineural hearing impairment when the purpose is to predict or explain difficulty understanding speech in noise: a conventional audiogram, and possibly a measurement of frequency resolution at 500 Hz, is sufficient. However, if the purpose is to acquire a detailed individual auditory profile, the multidimensional nature of hearing loss should not be ignored. The present findings show that, along with loss of sensitivity and reduced frequency resolution, binaural processing, loudness perception, cognition and self-report measures help to characterise this multi-dimensionality. Detailed studies should therefore address these multiple dimensions of hearing loss and measure a wide variety of auditory capabilities, rather than just a few, in order to gain a complete picture of auditory functioning. Frequency resolution at low frequency (500 Hz) as a predictive factor for speech recognition in noise is a new finding. Few previous studies have included low-frequency measures of hearing, which may explain why it has not emerged previously; yet the finding appears to be robust, as it was consistent across both of the present studies. It may relate to the differentiation of vowel components of speech. The present work was unable to confirm the suggestion from previous studies that measures of temporal resolution help to predict speech recognition in fluctuating noise, possibly because few participants had extremely poor temporal resolution.
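The kind of two-predictor regression reported above, speech recognition in noise predicted from high-frequency thresholds plus frequency resolution at 500 Hz, can be illustrated with an ordinary least-squares sketch. All variable names and generated values below are hypothetical and are not the study's data.

```python
import numpy as np

def fit_srt_model(hf_threshold_db: np.ndarray,
                  freq_resolution_500: np.ndarray,
                  srt_noise_db: np.ndarray) -> np.ndarray:
    """Ordinary least-squares fit of a two-predictor model of the kind described
    above: SRT in noise ~ b0 + b1 * high-frequency threshold
                             + b2 * frequency resolution at 500 Hz.
    Returns the coefficients [b0, b1, b2]."""
    X = np.column_stack([np.ones_like(hf_threshold_db),
                         hf_threshold_db,
                         freq_resolution_500])
    coeffs, *_ = np.linalg.lstsq(X, srt_noise_db, rcond=None)
    return coeffs

# Hypothetical illustration with generated values (not study data).
rng = np.random.default_rng(0)
hf = rng.uniform(10, 70, 100)          # dB HL around 3000-4000 Hz
fres = rng.uniform(1, 6, 100)          # e.g. relative auditory-filter width at 500 Hz
srt = -2 + 0.08 * hf + 0.9 * fres + rng.normal(0, 1, 100)
print(fit_srt_model(hf, fres, srt))    # recovers roughly [-2, 0.08, 0.9]
```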

26. Hearing loss and suicidal behaviour: a study of the relationship between self-rated hearing loss and two aspects of suicidal behaviour. Lundgren, Tove; Järlesäter, Sofie (January 2009).
No description available.

27. Psychological well-being, social support and trust in people with Usher syndrome type II and type III: a questionnaire study. Schröder, Stina; Svensson, Linda (January 2009).
No description available.

28. Is there a relationship between psychological symptoms and sleep difficulties in people with hearing loss? Carlsson, Ellinor; Norén, Linda (January 2009).
No description available.

29. King-Kopetzky syndrome: a compilation and comparison of the scientific literature. Kihlsten, Jessica; Strömblad, Jenny (January 2007).
King-Kopetzky syndrome (KKS) describes a person who experiences difficulty perceiving speech, especially in noisy environments, despite having normal pure-tone thresholds. The cause of the syndrome is still unknown, which leaves those working in audiological care with a choice when they meet these patients: either the patient is dismissed as having normal hearing, or the problem is taken seriously and further investigation is carried out. This systematic literature review highlights and explains the syndrome; the purpose was to compile and provide an overview of KKS based on the scientific articles that address the topic. The goal of the work is to increase understanding of KKS. The results indicate that more women than men are affected and that the mean age of those with the syndrome is rather low, around 32 years. There are many different theories about its cause, and at present none is more supported than any other. The same applies to the measurements used to investigate the syndrome, where many different approaches to diagnosis have been proposed. The conclusion drawn is that more research on the topic is needed to provide an explanation and a procedure for diagnosing the syndrome.

30. Experimental studies on the function of the stapedius muscle in man. Zakrisson, John-Erik (January 1974).