  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The influence of breathing disorders on face shape : a three-dimensional study

Alali, Ala January 2013 (has links)
Breathing disorders can potentially influence craniofacial development through interactions between the respiratory flow and genetic and environmental factors. It has been suggested that certain medical conditions such as persistent rhinitis and renal insufficiency may have an influence on face shape. The effects of these conditions are likely to be subtle; otherwise they would appear as an obvious visible facial feature. The use of three-dimensional imaging provides the opportunity to acquire accurate and high-resolution facial data to explore the influence of medical conditions on facial morphology. Therefore, the aim of the present study was to investigate the influence of breathing disorders (asthma, atopy, allergic rhinitis and sleep-disordered breathing) on face shape in children. The study sample, comprising 4784 British Caucasian children of whom 2922 (61.1%) were diagnosed with a breathing disorder, was selected from the Avon Longitudinal Study of Parents and Children (ALSPAC), which had been conducted to investigate the genetic and environmental determinants of development, health and disease. Three-dimensional surface laser scans were conducted on the children when they were 15 years old. A total of 21 reproducible facial landmarks (x, y, z co-ordinates) were identified. Average facial shells were constructed for each of the different disease groups and compared to the facial shells of healthy asymptomatic children. Face-shape variables (angular and linear measurements) were analysed with respect to the different breathing disorders by employing a variety of statistical methods, including t-tests, chi-square tests, principal component analysis, binary logistic regression and analysis of variance (ANOVA). The results reveal that individual breathing disorders have varying influences on facial features, including increased anterior lower face height, a more retrognathic mandible and reduced nose width and prominence. 
The study also shows that the early removal of adenoids and tonsils can have a significant effect on obstructive breathing, resulting in the restoration of the facial morphology to its normal shape. This was particularly evident in children with normal BMIs. Surprisingly, no significant differences in face shape were detected in children with multiple diseases (combinations of asthma, allergic rhinitis, atopy and sleep-disordered breathing) when compared to healthy children. This may indicate the multifactorial, complex character of this spectrum of diseases. The findings provide evidence of small but potentially real associations between breathing disorders and face shape. This was largely attributable to the use of high-resolution and reproducible three-dimensional facial imaging alongside a large study sample. They also provide the scientific community with a detailed and effective methodology for static facial modelling that could have clinical relevance for early diagnosis of breathing disorders. Furthermore, this research has demonstrated that the ALSPAC patient archive offers a valuable resource to clinicians and the scientific community for investigating associations between various breathing disorders and face shape.
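The shape analysis summarised above centres on principal component analysis of flattened (x, y, z) landmark co-ordinates. A minimal sketch of extracting a first principal component of shape variation is given below; the four-"face", two-landmark data set is invented for illustration, and a real analysis would use all 21 landmarks after registration of the scans.

```python
import math

def principal_component(X, iters=200):
    """First principal component (unit vector) of mean-centred rows of X,
    found by power iteration on the covariance matrix."""
    n, d = len(X), len(X[0])
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# invented data: 4 'faces', each with 2 landmarks flattened to (x1, y1, x2, y2);
# all shape variation here is a joint shift of the two x co-ordinates
faces = [[0.0, 0.0, 1.0, 0.0],
         [0.1, 0.0, 1.1, 0.0],
         [0.2, 0.0, 1.2, 0.0],
         [0.3, 0.0, 1.3, 0.0]]
means = [sum(f[j] for f in faces) / len(faces) for j in range(4)]
centred = [[f[j] - means[j] for j in range(4)] for f in faces]
pc1 = principal_component(centred)
# pc1 loads equally on the two x co-ordinates and not at all on the y's
```

Scores of individual faces on such components can then feed the group comparisons (t-tests, logistic regression, ANOVA) listed in the abstract.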
2

Anatomy of the transmastoid endolymphatic sac decompression in the management of Ménière’s disease

Locke, Richard R. January 2008 (has links)
Ménière’s disease affects 1 in 1000 people and produces vertigo and hearing loss (Morrison, 1981). Endolymphatic sac decompression has been advocated on the basis that endolymphatic hydrops is the underlying pathology. The endolymphatic sac is said to be the terminal dilatation of the membranous labyrinth, and it has been proposed that endolymph flows from the semicircular canals and cochlea to the endolymphatic sac. Portman (1927) devised a procedure for ‘decompressing’ the endolymphatic sac by removing bone from the posterior cranial fossa to relieve the symptoms of Ménière’s disease. Surgery on the endolymphatic sac remains controversial. Shea (1979) and Bagger-Sjöbäck et al. (1990, 1993) studied the endolymphatic sac using different techniques, and there are discrepancies between the results of the two studies. The hypothesis that the endolymphatic sac can be safely approached and decompressed by a transmastoid route was tested. A total of thirteen cadaver heads and ten isolated temporal bones were used. A series of dissections was performed to examine the endolymphatic sac, take measurements and analyse surgical approaches to the sac. Histological and electron microscopic studies were performed. The lumen of the endolymphatic sac was not always identifiable in the dura of the posterior cranial fossa, and it frequently lay over the sigmoid sinus. A thickening was present in the dura of the posterior cranial fossa where the endolymphatic sac is located; this thickening was present even in the absence of the endolymphatic sac. The endolymphatic sac can be safely approached by a transmastoid approach if there is an extraosseous component to the sac. The proximal endolymphatic sac can be approached by a posterior cranial fossa route.
3

Can the auditory late response indicate audibility of speech sounds from hearing aids with different digital processing strategies

Ireland, Katie Helen January 2014 (has links)
Auditory late responses (ALRs) have been proposed as a hearing aid (HA) evaluation tool, but there are limited data exploring alterations to the waveform morphology caused by digital HAs. The research had two phases: an adult normal-hearing phase and an infant hearing-impaired clinical feasibility phase. The adult normal-hearing study investigated how different HA strategies and stimuli may influence the ALR. ALRs were recorded from 20 normally hearing young adults. Test sounds, /m/, /g/, /t/, processed in four HA conditions (unaided, linear, wide dynamic range compression (WDRC), non-linear frequency compression (NLFC)) were presented at 65 dB nHL. Stimuli were 100 ms in duration with a 3 second inter-stimulus interval. An Fsp measure of ALR quality was calculated and its significance determined using bootstrap analysis to objectively indicate response presence from background noise. Data from 16 subjects were included in the statistical analysis. ALRs were present in 96% of conditions and there was good repeatability between unaided ALRs. Unaided amplitude was significantly larger than all aided amplitudes, and unaided latencies were significantly earlier than aided latencies in most conditions. There was no significant effect of NLFC on the ALR waveforms. Stimulus type had a significant effect on amplitude but not latency. The results showed that ALRs can be recorded reliably through a digital HA. There was an overall effect of aiding on the response, likely due to the delay, compression characteristics and frequency shaping introduced by the HA. Type of HA strategy did not significantly alter the ALR waveform. The differences found in ALR amplitude due to stimulus type may be due to tonotopic organisation of the auditory cortex. The infant hearing-impaired study was conducted to explore the feasibility of using ALRs as a means of indicating audibility of sound from HAs in a clinical population. 
ALRs were recorded from 5 infants aged 5-6 months with bilateral sensorineural hearing loss and wearing their customised HAs. The speech sounds /m/ and /t/ from the adult study were presented at an rms level of 65 dB SPL in 3 conditions: unaided, WDRC and NLFC. Bootstrap analysis of Fsp was again used to determine response presence, and probe microphone measures were recorded in the aided conditions to confirm audibility of the test sounds. ALRs were recordable in young infants wearing HAs: 85% of aided responses were present, whereas only 10% of unaided responses were present. Active NLFC improved aided response presence to the high-frequency speech sound /t/ for 1 infant. There were no clear differences in the aided waveforms between the speech sounds. The results showed that it is feasible to record ALRs in an infant clinical population. The response appeared more sensitive to improved audibility than to frequency alterations.
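The Fsp-with-bootstrap logic used in both phases above can be sketched as follows. The exact resampling scheme of the thesis is not specified here, so this illustration builds a null distribution from polarity-inverted surrogate trials (which destroys any phase-locked component); the simulated data are invented.

```python
import math
import random
import statistics

def fsp(trials, sp_index):
    """Fsp quality estimate: variance of the averaged waveform over time,
    divided by the across-trial variance at one fixed 'single point',
    scaled by the number of trials."""
    n = len(trials)
    avg = [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]
    signal_var = statistics.pvariance(avg)
    noise_var = statistics.pvariance([t[sp_index] for t in trials]) / n
    return signal_var / noise_var

def bootstrap_p(trials, sp_index, n_boot=200, seed=0):
    """Significance of the observed Fsp versus a surrogate null built by
    randomly inverting whole-trial polarity (a simplification of a full
    bootstrap, but it removes any stimulus-locked response)."""
    rng = random.Random(seed)
    observed = fsp(trials, sp_index)
    null = [fsp([[x * rng.choice((-1.0, 1.0)) for x in t] for t in trials],
                sp_index)
            for _ in range(n_boot)]
    return sum(f >= observed for f in null) / n_boot

# invented data: 30 trials of a sinusoidal 'evoked response' in noise
rng = random.Random(1)
trials = [[math.sin(2 * math.pi * j / 20) + rng.gauss(0, 0.3)
           for j in range(20)] for _ in range(30)]
p = bootstrap_p(trials, sp_index=5)
```

A small p indicates a response is present above the background noise, which is the decision the thesis automates for aided and unaided recordings.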
4

Effects of reverberation and amplification on sound localisation

Al Saleh, Hadeel January 2011 (has links)
Communication often takes place in reverberant spaces, making it harder for listeners to understand speech. In such difficult environments, listeners would benefit from being able to locate the sound source. In noisy or reverberant environments, hearing-aid wearers often complain that their aids do not sufficiently help them to understand speech or to localise a sound source. Simple amplification does not fully resolve the problem and sometimes makes it worse. Recent improvements in hearing aids, such as compression and filtering, can significantly alter the interaural time difference (ITD) and interaural level difference (ILD) cues. Digital signal processing also tends to restrict the availability of fine structure cues, thereby forcing the listener to rely on envelope and level cues. The effect of digital signal processing on localisation, as experienced by hearing aid wearers in different listening environments, is not well investigated. In this thesis, we aimed to investigate the effect of reverberation on the localisation performance of normal hearing and hearing impaired listeners, and to determine the effects that hearing aids have on localisation cues. Three sets of experiments were conducted: in the first set (n=22 normal hearing listeners), results showed that the participants’ sound localisation ability in simulated reverberant environments is not significantly different from performance in a real reverberation chamber. In the second set of four experiments (n=16 normal hearing listeners), sound localisation ability was tested by introducing simulated reverberation and varying signal onset/offset times of different stimuli – i.e. speech, high-pass speech, low-pass speech, pink noise, a 4 kHz pure tone and a 500 Hz pure tone. In the third set of experiments (n=28 bilateral Siemens Prisma 2 Pro hearing aid users), we investigated aided and unaided localisation ability of hearing impaired listeners in anechoic and simulated reverberant environments. 
Participants were seated in the middle of 21 loudspeakers arranged in a frontal horizontal arc (180°) in an anechoic chamber. Simulated reverberation was presented from four corner loudspeakers. We also performed physical measurements of ITDs and ILDs using a KEMAR simulator. Normal hearing listeners were not significantly affected in their ability to localise speech and pink noise stimuli in reverberation; however, reverberation did have a significant effect on localising a 500 Hz pure tone. Hearing impaired listeners performed consistently worse in all simulated reverberant conditions. However, performance for speech stimuli was only significantly worse in the aided conditions. Unaided hearing impaired listeners showed decreased performance in simulated reverberation, specifically when sounds came from lateral directions. Moreover, low-pass pink noise was most affected by simulated reverberation in both aided and unaided conditions, indicating that reverberation mainly affects ITD cues. Hearing impaired listeners performed significantly worse in all conditions when using their hearing aids. Physical measurements and psychoacoustic experiments consistently indicated that amplification mainly affected the ILD cues. We concluded that reverberation destroys the fine structure ITD cues in sound signals to some extent, thereby reducing the localisation performance of hearing impaired listeners for low frequency stimuli. Furthermore, we found that hearing aid compression affects ILD cues, which impairs the ability of hearing impaired listeners to localise a sound source. Aided sound localisation could be improved for bilateral hearing aid users if the aids synchronised compression between the two sides.
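The two binaural cues measured on the KEMAR simulator can be estimated from a pair of ear signals in a few lines. The sketch below uses invented signals: the ITD is found as the best-matching cross-correlation lag, and the ILD as the RMS level difference in dB; a real measurement would use recorded ear-canal signals.

```python
import math

def itd_samples(left, right, max_lag):
    """ITD estimate: the lag (in samples) maximising the cross-correlation
    between the left- and right-ear signals."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        c = sum(left[i] * right[i + lag] for i in range(len(left))
                if 0 <= i + lag < len(right))
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

def ild_db(left, right):
    """ILD estimate: RMS level difference between the ears in dB."""
    rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
    return 20 * math.log10(rms(left) / rms(right))

# invented signals: the right ear receives the left-ear signal delayed by
# 3 samples and attenuated by half (about 6 dB)
left = [math.sin(0.3 * i) for i in range(200)]
right = [0.5 * left[i - 3] if i >= 3 else 0.0 for i in range(200)]
itd = itd_samples(left, right, max_lag=10)
ild = ild_db(left, right)
```

Compression that acts independently at each ear changes the level ratio between the ears, which is why ILD cues are the ones most disturbed by amplification in the findings above.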
5

A feasibility study of visual feedback speech therapy for nasal speech associated with velopharyngeal dysfunction

Phippen, Ginette January 2013 (has links)
Nasal speech associated with velopharyngeal dysfunction (VPD) is seen in children and adults with cleft palate and other conditions that affect soft palate function, with negative effects on quality of life. Treatment options include surgery and prosthetics depending on the nature of the problem. Speech therapy is rarely offered as an alternative treatment as evidence from previous studies is weak. However, there is evidence that visual biofeedback approaches are beneficial in other speech disorders and that this approach could benefit individuals with nasal speech who demonstrate potential for improved speech. Theories of learning and feedback also lend support to the view that a combined feedback approach would be most suitable. This feasibility study therefore aimed to develop and evaluate Visual Feedback Therapy (VFTh), a new behavioural speech therapy intervention, incorporating speech activities supported by visual biofeedback and performance feedback, for individuals with mild to moderate nasal speech. Evaluation included perceptual, instrumental and quality of life measures. Eighteen individuals with nasal speech were recruited from a regional cleft palate centre and twelve completed the study: six female and six male, eleven children (7 to 13 years) and one adult (43 years). Six participants had repaired cleft palate and six had VPD but no cleft. Participants received 8 sessions of VFTh from one therapist. The findings suggest that the intervention is feasible but some changes are required, including participant screening for adverse response and minimising disruptions to intervention scheduling. In blinded evaluation there was considerable variation in individual results, but positive changes occurred in at least one speech symptom between pre- and post-intervention assessment for eight participants. Seven participants also showed improved nasalance scores and seven had improved quality of life scores. 
This small study has provided important information about the feasibility of delivering and evaluating VFTh. It suggests that VFTh shows promise as an alternative treatment option for nasal speech but that further preliminary development and evaluation is required before larger scale research is indicated.
6

Factors affecting speech recognition in noise and hearing loss in adults with a wide variety of auditory capabilities

Athalye, Sheetal Purushottam January 2010 (has links)
Studies concerning speech recognition in noise constitute a very broad spectrum of work including aspects like the cocktail party effect or observing performance of individuals in different types of speech-signal or noise as well as benefit and improvement with hearing aids. Another important area that has received much attention is investigating the inter-relations among various auditory and non-auditory capabilities affecting speech intelligibility. Those studies have focussed on the relationship between auditory threshold (hearing sensitivity) and a number of suprathreshold abilities like speech recognition in quiet and noise, frequency resolution, temporal resolution and the non-auditory ability of cognition. There is considerable discrepancy regarding the relationship between speech recognition in noise and hearing threshold level. Some studies conclude that speech recognition performance in noise can be predicted solely from an individual’s hearing threshold level while others conclude that other supra-threshold factors such as frequency and/or temporal resolution must also play a role. Hearing loss involves more than deficits in recognising speech in noise, raising the question whether hearing impairment is a uni- or multi-dimensional construct. Moreover, different extents of hearing loss may display different relationships among measures of hearing ability, or different dimensionality. The present thesis attempts to address these three issues, by examining a wide range of hearing abilities in large samples of participants having a range of hearing ability from normal to moderate-severe impairment. The research extends previous work by including larger samples of participants, a wider range of measures of hearing ability and by differentiating among levels of hearing impairment. Method: Two large multi-centre studies were conducted, involving 103 and 128 participants respectively. 
A large battery of tests was devised and refined prior to the main studies and implemented on a common PC-based platform. The test domains included measurement of hearing sensitivity, speech recognition in quiet and noise, loudness perception, frequency resolution, temporal resolution, binaural hearing and localization, cognition and subjective measures like listening effort and self-report of hearing disability. Performance tests involved presentation of sounds via circum-aural earphones to one or both ears, as required, at intensities matched to individual hearing impairments to ensure audibility. Most tests involved measurements centred on a low frequency (500 Hz), high frequency (3000 Hz) and broadband. The second study included some refinements based on analysis of the first study. Analyses included multiple regression for prediction of speech recognition in stationary or fluctuating noise and factor analysis to explore the dimensionality of the data. Speech recognition performance was also compared with that predicted using the Speech Intelligibility Index (SII). Findings: Findings from regression analysis pooled across the two studies showed that speech recognition in noise can be predicted from a combination of hearing threshold at higher frequencies (3000/4000 Hz) and frequency resolution at low frequency (500 Hz). This supports previous studies that conclude that resolution is important in addition to hearing sensitivity. This was also confirmed by the fact that SII (representing sensitivity rather than resolution) underpredicted difficulties observed in hearing-impaired ears for speech recognition in noise. Speech recognition in stationary noise was predicted mainly by auditory threshold while speech recognition in fluctuating noise was predicted by a combination having a larger contribution from frequency resolution. 
In mild hearing losses (below 40 dB), speech recognition in noise was predicted mainly by hearing threshold; in moderate hearing losses (above 40 dB) it was predicted mainly by frequency resolution, when data were combined across the two studies. Thus it can be observed that the importance of auditory resolution (in this case frequency resolution) increases and the importance of the audiogram decreases as the degree of hearing loss increases, provided speech is presented at audible levels. However, for all degrees of hearing impairment included in the study, prediction based solely on hearing thresholds was not much worse than prediction based on a combination of thresholds and frequency resolution. Lastly, hearing impairment was shown to be multi-dimensional; main factors included hearing threshold, speech recognition in stationary and fluctuating noise, frequency and temporal resolution, binaural processing, loudness perception, cognition and self-reported hearing difficulties. A clinical test protocol for defining an individual auditory profile is suggested based on these findings. Conclusions: Speech recognition in noise depends on a combination of audibility of the speech components (hearing threshold) and frequency resolution. Models such as SII that do not include resolution tend to somewhat over-predict speech recognition performance in noise, especially for more severe hearing impairments. However, the over-prediction is not great. It follows that for clinical purposes there is not much to be gained from more complex psychoacoustic characterisation of sensorineural hearing impairment, when the purpose is to predict or explain difficulty understanding speech in noise. A conventional audiogram and possibly measurement of frequency resolution at 500 Hz is sufficient. However, if the purpose is to acquire a detailed individual auditory profile, the multidimensional nature of hearing loss should not be ignored. 
Findings from the present study show that, along with loss of sensitivity and reduced frequency resolution ability, binaural processing, loudness perception, cognition and self-report measures help to characterize this multi-dimensionality. Detailed studies should hence focus on these multiple dimensions of hearing loss and incorporate measurement of a wide variety of different auditory capabilities, rather than just a few, in order to gain a complete picture of auditory functioning. Frequency resolution at low frequency (500 Hz) as a predictive factor for speech recognition in noise is a new finding. Few previous studies have included low-frequency measures of hearing, which may explain why it has not emerged previously. Yet this finding appears to be robust, as it was consistent across both of the present studies. It may relate to differentiation of vowel components of speech. The present work was unable to confirm the suggestion from previous studies that measures of temporal resolution help to predict speech recognition in fluctuating noise, possibly because few participants had extremely poor temporal resolution ability.
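The regression analyses described above predict speech recognition in noise from hearing threshold and frequency resolution. A minimal ordinary least-squares sketch is shown below; the predictor values and coefficients are invented for illustration and do not reproduce the study's fitted model.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """OLS coefficients via the normal equations, beta = (X'X)^-1 X'y.
    Rows of X already include a leading 1 for the intercept."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# invented values: SRT (dB SNR) modelled from a high-frequency threshold
# (dB HL) and a 500 Hz frequency-resolution score
thresholds = [20, 35, 50, 65, 40, 55]
resolution = [1.0, 1.5, 2.5, 3.0, 2.0, 2.8]
srt = [2 + 0.05 * t + 1.5 * r for t, r in zip(thresholds, resolution)]
X = [[1.0, t, r] for t, r in zip(thresholds, resolution)]
beta = ols(X, srt)  # recovers intercept and the two slopes
```

Comparing the fit of a thresholds-only model with a thresholds-plus-resolution model is the kind of contrast the pooled regression analysis draws.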
7

Modelling the effect of cochlear implant filterbank characteristics on speech perception

Chowdhury, Shibasis January 2013 (has links)
The characteristics of a cochlear implant (CI) filterbank determine the coding of spectral and temporal information in it. Hence, it is important to optimise the filterbank parameters to achieve optimal benefit in CI users. The present thesis aimed at modelling how the manipulation of the filterbank analysis length and the assignment of spectral channels may affect CI speech perception, using CI acoustical simulation techniques. Investigations were carried out to study the efficacy of providing additional spectral information in low and/or mid frequency channels using a longer filterbank analysis window, with respect to CI processed speech perception in various types of background noise. However, the increase of filterbank analysis length has an associated trade-off, which is a reduction in temporal information. Only a few CI acoustic simulation studies have modelled the characteristics of the FFT filterbank, the most commonly used filterbank in commercial CI processors. An initial experiment was carried out to validate the CI acoustical simulation technique used in the present thesis, which implemented an FFT filterbank analysis. Next, the effect of a reduction in temporal information with the increase of the FFT analysis window length was studied. A filterbank with a 16 ms analysis window, without the implementation of its finer spectral coding abilities, performed marginally worse than one with a 4 ms analysis window in a sentence recognition test. The finer spectral coding abilities of the filterbank with a 16 ms analysis window, when implemented, revealed that CI processed speech perception in noise could be significantly improved if additional spectral information is provided in the low and mid frequencies. The assignment of additional spectral channels to the low and mid frequencies led to a corresponding reduction in spectral channels assigned to high frequencies. 
However, no detrimental effect in speech perception was observed as long as at least two spectral channels represented information above 3 kHz. The assignment of additional low and mid frequency spectral channels also led to significant levels of spectral shift. The significant benefits from additional low and mid frequency information, however, were lost when the effects of spectral shift were introduced in acute experiments, without any training or acclimatisation period. The findings of the present thesis highlight that a longer filterbank analysis, such as 16 ms, may be implemented in CI devices without the fear of any perceptual cost due to a reduction in temporal information, at least for tasks that do not require talker separation. Providing additional low and mid frequency spectral information with a longer filterbank analysis has the potential to improve CI speech perception. However, to obtain potential benefits, the effects of spectral shift should be overcome. The findings of this thesis, however, need to be interpreted considering the limitations of CI acoustical simulation experiments.
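The trade-off discussed above follows directly from the FFT analysis length: at a 16 kHz sampling rate, a 4 ms window (64 samples) gives 250 Hz bin spacing, while a 16 ms window (256 samples) gives 62.5 Hz, allowing more distinct low-frequency channels. The sketch below (naive DFT, invented channel edges rather than a real CI frequency map) shows how a frame's energy is pooled into analysis channels.

```python
import math

def dft_mags(frame):
    """Magnitude spectrum via a naive DFT (fine at illustration sizes;
    a real processor would use an FFT)."""
    N = len(frame)
    mags = []
    for k in range(N // 2 + 1):
        re = sum(frame[n] * math.cos(2 * math.pi * k * n / N)
                 for n in range(N))
        im = -sum(frame[n] * math.sin(2 * math.pi * k * n / N)
                  for n in range(N))
        mags.append(math.hypot(re, im))
    return mags

def channel_energies(frame, fs, edges):
    """Pool DFT bin magnitudes into analysis channels bounded in Hz."""
    N = len(frame)
    energies = [0.0] * (len(edges) - 1)
    for k, m in enumerate(dft_mags(frame)):
        f = k * fs / N
        for c in range(len(edges) - 1):
            if edges[c] <= f < edges[c + 1]:
                energies[c] += m
    return energies

fs = 16000
# a 16 ms analysis window at 16 kHz is 256 samples -> 62.5 Hz bin spacing,
# versus 250 Hz for a 4 ms (64-sample) window
frame = [math.sin(2 * math.pi * 500 * n / fs) for n in range(256)]
energies = channel_energies(frame, fs, edges=[0, 400, 600, 1000, 8001])
# the 500 Hz tone lands in the second (400-600 Hz) channel
```

With a 4 ms window the same 400-600 Hz channel would span less than one bin, which is why the longer window is needed before extra low/mid channels can carry independent information.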
8

Behavioural and neural correlates of tinnitus

Berger, Joel I. January 2014 (has links)
Tinnitus, often defined as the perception of sound in the absence of an external stimulus, affects millions of people worldwide and, in extreme cases, can be severely debilitating. While certain changes within the auditory system have been linked to tinnitus, the exact underlying causes of the phenomenon have not, as yet, been elucidated. Animal models of tinnitus have considerably furthered understanding of some of the changes associated with the condition, allowing researchers to examine changes following noise exposure, the most common trigger for tinnitus. This thesis documents the development of an animal model of tinnitus, using the guinea pig to examine neural changes following induction of tinnitus. In the first study, a novel adaptation of a behavioural test was developed, in order to be able to determine whether guinea pigs were experiencing tinnitus following the administration of sodium salicylate, a common inducer of tinnitus in humans. This test relies on a phenomenon known as prepulse inhibition, whereby a startle response can be reduced in amplitude by placing a gap in a low-level, continuous background noise immediately prior to the startling stimulus. The hypothesis for this test is that if the background sound is adjusted to be similar to an animal’s tinnitus (induced artificially following noise exposure or drug administration), the tinnitus percept will fill in the gap and the startle response will not be reduced. The results from this first study indicated that using the Preyer reflex (a flexion of the pinnae in response to a startling stimulus) as this startle measure was more robust in guinea pigs than the commonly-used whole-body startle. Furthermore, transient tinnitus was reliably identified following salicylate administration. Following the development and validation of this test, a study was conducted to determine whether guinea pigs experienced tinnitus following unilateral noise exposure. 
Neural changes commonly associated with the condition (increases in spontaneous firing rates and changes in auditory brainstem responses) were examined, to determine whether there were any differences between animals that did develop tinnitus following noise exposure and those that did not. Two different methods were applied to the behavioural data to determine which animals were experiencing tinnitus. Regardless of the behavioural criteria used, increased spontaneous firing rates were observed in the inferior colliculus of noise-exposed guinea pigs, in comparison to control animals, but there were no differences between tinnitus and no-tinnitus animals. Conversely, significant reductions in the latency of components of the auditory brainstem response were present only in the tinnitus animals. The final study examined whether the original hypothesis for the behavioural test (that tinnitus is filling in the gap) was valid, or whether there was an alternative explanation for the deficits in behavioural gap detection observed previously, such as changes in the temporal acuity of the auditory system preventing detection of the gap. Recordings were made in the inferior colliculus of noise-exposed animals, separated into tinnitus and no-tinnitus groups according to the behavioural test, as well as unexposed control animals, to determine whether there were changes in the responses of single-units in detecting gaps of varying duration embedded in background noise. While some minor changes were present in no-tinnitus animals, tinnitus animals showed no significant changes in neural gap detection thresholds, demonstrating that changes in temporal acuity cannot account for behavioural gap detection deficits observed following noise exposure. Interestingly, significant shifts in the response types of cells were observed which did appear to relate to tinnitus. 
The present data indicate that the Preyer reflex gap detection test is appropriate for examining tinnitus in guinea pigs. They also suggest that increases in spontaneous firing rates at the level of the inferior colliculus cannot solely account for tinnitus. Changes in auditory brainstem responses, as well as shifts in response types, do appear to relate to tinnitus and warrant further investigation.
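The decision rule in the behavioural test above reduces to comparing startle amplitudes on gap and no-gap trials. A minimal sketch with invented Preyer-reflex amplitudes and an illustrative criterion value (the thesis's actual criteria are not reproduced here):

```python
def gap_inhibition_ratio(startle_gap, startle_no_gap):
    """Ratio of startle amplitude on gap trials to no-gap trials.
    Values near 1 mean the gap failed to inhibit the startle -- the
    pattern interpreted as the tinnitus percept 'filling in' the gap."""
    return startle_gap / startle_no_gap

def classify(ratios, criterion=0.8):
    """Flag animals whose gap/no-gap ratio exceeds a criterion as
    tinnitus-positive (criterion value is illustrative only)."""
    return [r > criterion for r in ratios]

# hypothetical Preyer-reflex amplitudes (arbitrary units)
control = gap_inhibition_ratio(4.0, 10.0)   # strong inhibition
tinnitus = gap_inhibition_ratio(9.0, 10.0)  # little inhibition
groups = classify([control, tinnitus])
```

The final study's point is that such behavioural deficits are not explained by degraded neural gap detection, so the ratio is read as evidence of a percept, not of temporal acuity loss.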
9

Benefits of multichannel recording of auditory late response (ALR)

Mirahmadizoghi, Seyedsiavash January 2015 (has links)
The main purpose of this work is to explore whether, and by how much, multichannel signal processing strategies can improve the detection procedure for the auditory late response (ALR) in clinical applications in comparison with single channel recording. To achieve this, four multichannel noise reduction methods based on independent component analysis (ICA) were proposed for multichannel recordings of the ALR. The four alternative component selection strategies introduced in this work are: Magnitude Squared Coherence (MSC) [based on coherency of the ICs with an evoking stimulus], the maximum Signal to Noise Ratio (Max-SNR) of ICs over a particular interval, kurtosis (maximum non-Gaussianity of the ICs), and minimum entropy of the ICs. The proposed methods were applied to noise reduction of ALR data captured using 63-channel EEG from 10 normal hearing participants. The performances of the proposed methods for improving signal quality were compared with each other and with the single channel alternatives. All automated component selection approaches produced high SNR for multichannel ALR data. MSC-ICs produced significantly higher SNR than Max-Kurt-ICs or Min-Entropy-ICs. However, the performances of MSC-ICs and Max-Fmp-ICs were not significantly different. Therefore, the MSC-ICs approach was selected for further work. MSC-ICs were used for three different clinical applications: finding hearing threshold level, exploring the effect of attention, and exploring inter- and intra-subject variability. The results for MSC-ICs were compared to the single channel signal processing alternative of weighted averaging. The results confirm that multichannel signal processing can significantly improve the detection procedure for threshold measurement and for measuring the effects of attention. 
However, no significant enhancement was found for detecting inter- and intra-subject variability with multichannel processing over the single channel alternative. The MSC-ICs method was also used in an application for removing cardiac artifact from the ALR recordings, and the results were compared with an existing artifact rejection platform based on constrained ICA (cICA). The results of this comparison show that the proposed method can significantly improve the quality of cardiac artifact rejection from ALR data. Finally, the use of MSC-ICs was explored for reducing the time required to record the ALR. Time reduction was investigated in two senses: (1) reducing the number of stimulus repetitions, and (2) optimising the position and number of recording electrodes in multichannel recordings (potentially saving the time required to place many electrodes on the scalp). The results show that using multichannel processing can significantly reduce the number of stimulus repetitions, and consequently the recording time, in comparison with the single channel alternative. The minimum number of stimulus repetitions (averaged over 10 subjects) required to achieve an SNR equal to that of single channel processing at Cz was found to be 74 for unweighted averaging and 85 for weighted averaging. Moreover, the results of the optimal electrode placement procedure confirm that the ALR can be recorded from the vertex, with the same SNR as when recorded using 63 channels, using fewer electrodes. For the data set of this study (10 normal hearing adults), the same SNR as with 63 channels was achieved using 40 channels. Placing 40 electrodes instead of 63 on the scalp decreases the time required for recording the ALR considerably, i.e. by 53%.
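Of the four selection rules above, the kurtosis criterion is the simplest to illustrate: it ranks independent components by non-Gaussianity. Assuming the ICA unmixing has already been performed, the selection step can be sketched as below; the two "components" are invented, one spiky (super-Gaussian, evoked-response-like) and one near-uniform (noise-like).

```python
import statistics

def excess_kurtosis(x):
    """Sample excess kurtosis: fourth standardised moment minus 3.
    Positive values indicate a super-Gaussian (spiky) signal."""
    m = statistics.fmean(x)
    s2 = statistics.pvariance(x)
    return sum((v - m) ** 4 for v in x) / (len(x) * s2 ** 2) - 3

def pick_by_kurtosis(components, n_keep):
    """Max-Kurtosis selection rule: keep the n_keep components with the
    highest non-Gaussianity (indices into the component list)."""
    ranked = sorted(range(len(components)),
                    key=lambda i: excess_kurtosis(components[i]),
                    reverse=True)
    return ranked[:n_keep]

# invented components: a flat (near-uniform) one and a spiky one
flat = [(i - 49.5) / 49.5 for i in range(100)]
spiky = [0.0] * 96 + [5.0, -5.0, 5.0, -5.0]
kept = pick_by_kurtosis([flat, spiky], n_keep=1)
```

The retained components would then be projected back to the electrode space to reconstruct a denoised ALR; the MSC and entropy rules differ only in the scoring function used for the ranking.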
10

Assessing expressive spoken language in children with permanent childhood hearing impairment in mid-childhood

Worsfold, Sarah January 2011 (has links)
No description available.
