  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Investigation of transparent conductive ZnO:Al thin films deposited by RF sputtering

Chang, Chih-Yuan 04 August 2009 (has links)
In this thesis, we focus on the properties of Al-doped ZnO (AZO) thin films for opto-electronic applications. AZO films were deposited by radio-frequency sputtering on silicon and optical-glass substrates from a 98 wt% ZnO / 2 wt% Al2O3 alloy target, under various deposition parameters (RF power, background pressure, Ar flow, and substrate temperature). The optimal parameters for conductive, transparent AZO films are a power of 100 W, a pressure of 3 mTorr, an Ar flow of 5 sccm, and a substrate temperature of 250 °C. The resulting film exhibits a resistivity (ρ) of 2.5×10⁻³ Ω·cm and 85% transparency over the 400-1800 nm range. To find the optimum substrate temperature for AZO films on p-GaAs (p = 2×10¹⁸ cm⁻³), samples were deposited at various temperatures and then annealed at 400 °C for 30 s, and their current-voltage (I-V) characteristics were measured. AZO films make good ohmic contact to p-GaAs and can act as an electrode layer. InGaAs quantum-dot solar cells with AZO contact layers have been fabricated, achieving a fill factor of 52%.
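The reported resistivity can be related to a sheet resistance once a film thickness is fixed; a minimal sketch, assuming an illustrative 300 nm thickness (the thickness is not stated in the abstract):

```python
# Relating bulk resistivity to sheet resistance for a thin film.
# The abstract reports rho = 2.5e-3 ohm*cm for the optimized AZO film;
# the 300 nm thickness below is an assumed illustrative value.
def sheet_resistance(rho_ohm_cm, thickness_nm):
    """Sheet resistance R_s = rho / t, in ohms per square."""
    thickness_cm = thickness_nm * 1e-7  # 1 nm = 1e-7 cm
    return rho_ohm_cm / thickness_cm

rho = 2.5e-3   # ohm*cm, from the abstract
t = 300.0      # nm, assumed
rs = sheet_resistance(rho, t)  # roughly 83 ohms per square
```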
22

A Target Field Based Design of a Phase Gradient Transmit Array for TRASE MRI

Bellec, Jesse 04 September 2015 (has links)
A target field method approach to the design of RF phase gradient fields, intended for TRASE MRI, produced a superposition of axial currents C_m*sin(m*phi) for m=1,2,3..., and a solenoidal current C_0*z (m=0), where C_m are constants. Omission of terms m>2 produced a phase gradient field with a linear phase and uniform magnitude within a target ROI of 2.5 cm diameter. A set of three RF coils (uniform birdcage, gradient mode birdcage, and 4-loop Maxwell) was found to be sufficient to generate both positive and negative x and y phase gradients. In addition, the phase gradient amplitude can be controlled by simply adjusting the power split to the three RF component coils. Bench measurements of an experimentally constructed 1.8 deg/mm transverse phase gradient showed excellent agreement with predicted results. A linear phase and magnitude within ± 4% of the median value was achieved within the ROI. / October 2015
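The target field described above, uniform magnitude with linear phase across the ROI, can be checked numerically; a minimal sketch using the reported 1.8 deg/mm gradient and 2.5 cm ROI diameter:

```python
import numpy as np

# Sketch of the TRASE target field: unit-magnitude transverse RF field
# with a linear phase ramp. The 1.8 deg/mm gradient and 2.5 cm ROI
# are taken from the abstract.
g = np.deg2rad(1.8) * 1e3                # phase gradient in rad/m
x = np.linspace(-0.0125, 0.0125, 101)    # ROI: 2.5 cm diameter
B1 = np.exp(1j * g * x)                  # uniform-magnitude, linear-phase field
phase = np.unwrap(np.angle(B1))

# A linear fit to the phase recovers the imposed gradient
slope = np.polyfit(x, phase, 1)[0]
```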
23

Phase Sensitive Interrogation of an RF Nanostrain Resonator for Structural Health Monitoring Sensing Applications

Meng, Rui 15 July 2010 (has links)
The purpose of this study was to design a passive nanostrain radio-frequency (RF) strain sensor to monitor the strain changes caused by traffic in motion on bridges. A phase-sensitive interrogation method was applied, meaning that strain changes are measured via the phase shift of the cavity sensor. The results revealed that the RF strain sensor could achieve a resolution of a few nanostrain. The principal conclusion was that the designed RF strain sensor has nanostrain sensitivity: the coaxial-cylinder sensor's sensitivity was 8 nanostrain, and the cylindrical volume resonant cavity sensor's sensitivity was 8 nanostrain for high Q and 4 nanostrain for low Q (BW = 160 Hz). These sensitivities were somewhat larger than theoretical estimates due to noise from sources other than the thermal noise used in the theoretical estimation. The sensors will therefore be useful for structural health monitoring applications.
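The phase-sensitive readout principle can be sketched with a textbook one-pole resonator model: near resonance, the transmission phase is roughly -atan(2Q·Δf/f0), so a fractional resonance shift (proportional to strain for a cavity whose resonant length changes) maps to a phase shift of about 2Q·(Δf/f0). The quality factor and phase-noise floor below are assumed illustrative values, not the thesis's measured figures:

```python
import math

def phase_shift(Q, fractional_detuning):
    """Transmission phase of a one-pole resonator near resonance (rad)."""
    return -math.atan(2.0 * Q * fractional_detuning)

Q = 5000            # assumed cavity quality factor
phi_noise = 1e-4    # rad, assumed interrogator phase resolution

# Smallest resolvable fractional shift in the small-angle regime:
# a phase-noise floor of 1e-4 rad with Q = 5000 resolves ~1e-8,
# i.e. on the order of 10 nanostrain.
eps_min = phi_noise / (2.0 * Q)
```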
25

Can the auditory late response indicate audibility of speech sounds from hearing aids with different digital processing strategies?

Ireland, Katie Helen January 2014 (has links)
Auditory late responses (ALRs) have been proposed as a hearing aid (HA) evaluation tool, but there are limited data exploring alterations to the waveform morphology from using digital HAs. The research had two phases: an adult normal-hearing phase and an infant hearing-impaired clinical feasibility phase. The adult normal-hearing study investigated how different HA strategies and stimuli may influence the ALR. ALRs were recorded from 20 normally hearing young adults. Test sounds, /m/, /g/, /t/, processed in four HA conditions (unaided, linear, wide dynamic range compression (WDRC), non-linear frequency compression (NLFC)), were presented at 65 dB nHL. Stimuli were of 100 ms duration with a 3-second inter-stimulus interval. An Fsp measure of ALR quality was calculated and its significance determined using bootstrap analysis to objectively indicate response presence against background noise. Data from 16 subjects were included in the statistical analysis. ALRs were present in 96% of conditions and there was good repeatability between unaided ALRs. Unaided amplitude was significantly larger than all aided amplitudes, and unaided latencies were significantly earlier than aided latencies in most conditions. There was no significant effect of NLFC on the ALR waveforms. Stimulus type had a significant effect on amplitude but not latency. The results showed that ALRs can be recorded reliably through a digital HA. There was an overall effect of aiding on the response, likely due to the delay, compression characteristics, and frequency shaping introduced by the HA. The type of HA strategy did not significantly alter the ALR waveform. The differences found in ALR amplitude due to stimulus type may be due to the tonotopic organisation of the auditory cortex. The infant hearing-impaired study was conducted to explore the feasibility of using ALRs as a means of indicating the audibility of sound from HAs in a clinical population.
ALRs were recorded from 5 infants aged 5-6 months with bilateral sensorineural hearing loss and wearing their customised HAs. The speech sounds /m/ and /t/ from the adult study were presented at an RMS level of 65 dB SPL in 3 conditions: unaided, WDRC, and NLFC. Bootstrap analysis of Fsp was again used to determine response presence, and probe-microphone measures were recorded in the aided conditions to confirm audibility of the test sounds. ALRs were recordable in young infants wearing HAs: 85% of aided responses were present, whereas only 10% of unaided responses were present. Active NLFC improved aided response presence to the high-frequency speech sound /t/ for 1 infant. There were no clear differences in the aided waveforms between the speech sounds. The results showed that it is feasible to record ALRs in an infant clinical population. The response appeared more sensitive to improved audibility than to frequency alterations.
26

Effects of reverberation and amplification on sound localisation

Al Saleh, Hadeel January 2011 (has links)
Communication often takes place in reverberant spaces, making it harder for listeners to understand speech. In such difficult environments, listeners would benefit from being able to locate the sound source. In noisy or reverberant environments, hearing-aid wearers often complain that their aids do not sufficiently help them understand speech or localise a sound source. Simple amplification does not fully resolve the problem and sometimes makes it worse. Recent improvements in hearing aids, such as compression and filtering, can significantly alter the Interaural Time Difference (ITD) and Interaural Level Difference (ILD) cues. Digital signal processing also tends to restrict the availability of fine-structure cues, thereby forcing the listener to rely on envelope and level cues. The effect of digital signal processing on localisation, as experienced by hearing-aid wearers in different listening environments, has not been well investigated. In this thesis, we aimed to investigate the effect of reverberation on the localisation performance of normal-hearing and hearing-impaired listeners, and to determine the effects that hearing aids have on localisation cues. Three sets of experiments were conducted: in the first set (n=22 normal-hearing listeners), results showed that the participants' sound localisation ability in simulated reverberant environments is not significantly different from performance in a real reverberation chamber. In the second set of four experiments (n=16 normal-hearing listeners), sound localisation ability was tested by introducing simulated reverberation and varying signal onset/offset times of different stimuli, i.e. speech, high-pass speech, low-pass speech, pink noise, a 4 kHz pure tone, and a 500 Hz pure tone. In the third set of experiments (n=28 bilateral Siemens Prisma 2 Pro hearing aid users), we investigated the aided and unaided localisation ability of hearing-impaired listeners in anechoic and simulated reverberant environments.
Participants were seated in the middle of 21 loudspeakers arranged in a frontal horizontal arc (180°) in an anechoic chamber. Simulated reverberation was presented from four corner loudspeakers. We also performed physical measurements of ITDs and ILDs using a KEMAR simulator. Normal-hearing listeners were not significantly affected in their ability to localise speech and pink-noise stimuli in reverberation; however, reverberation did have a significant effect on localising a 500 Hz pure tone. Hearing-impaired listeners performed consistently worse in all simulated reverberant conditions. However, performance for speech stimuli was only significantly worse in the aided conditions. Unaided hearing-impaired listeners showed decreased performance in simulated reverberation, specifically when sounds came from lateral directions. Moreover, low-pass pink noise was most affected by simulated reverberation in both aided and unaided conditions, indicating that reverberation mainly affects ITD cues. Hearing-impaired listeners performed significantly worse in all conditions when using their hearing aids. Physical measurements and psychoacoustic experiments consistently indicated that amplification mainly affected the ILD cues. We concluded that reverberation destroys the fine-structure ITD cues in sound signals to some extent, thereby reducing the localisation performance of hearing-impaired listeners for low-frequency stimuli. Furthermore, we found that hearing-aid compression affects ILD cues, which impairs the ability of hearing-impaired listeners to localise a sound source. Aided sound localisation could be improved for bilateral hearing-aid users if the aids synchronized compression between both sides.
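The ITD cues discussed above can be illustrated with Woodworth's classic spherical-head approximation, ITD ≈ (a/c)(θ + sin θ) for a source at azimuth θ; this is a textbook model, not the thesis's KEMAR measurement, and the head radius below is a typical assumed value:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head ITD: (a/c) * (theta + sin(theta)).

    head_radius_m (8.75 cm) and c (343 m/s) are assumed typical values.
    """
    th = math.radians(azimuth_deg)
    return (head_radius_m / c) * (th + math.sin(th))

# A fully lateral source (90 deg azimuth) gives an ITD of roughly 0.66 ms,
# the largest interaural delay a listener normally experiences.
itd_90 = itd_seconds(90.0)
```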
27

A feasibility study of visual feedback speech therapy for nasal speech associated with velopharyngeal dysfunction

Phippen, Ginette January 2013 (has links)
Nasal speech associated with velopharyngeal dysfunction (VPD) is seen in children and adults with cleft palate and other conditions that affect soft-palate function, with negative effects on quality of life. Treatment options include surgery and prosthetics, depending on the nature of the problem. Speech therapy is rarely offered as an alternative treatment, as evidence from previous studies is weak. However, there is evidence that visual biofeedback approaches are beneficial in other speech disorders and that this approach could benefit individuals with nasal speech who demonstrate potential for improved speech. Theories of learning and feedback also lend support to the view that a combined feedback approach would be most suitable. This feasibility study therefore aimed to develop and evaluate Visual Feedback Therapy (VFTh), a new behavioural speech therapy intervention incorporating speech activities supported by visual biofeedback and performance feedback, for individuals with mild to moderate nasal speech. Evaluation included perceptual, instrumental and quality-of-life measures. Eighteen individuals with nasal speech were recruited from a regional cleft palate centre and twelve completed the study: six female and six male, eleven children (7 to 13 years) and one adult (43 years). Six participants had repaired cleft palate and six had VPD but no cleft. Participants received 8 sessions of VFTh from one therapist. The findings suggest that the intervention is feasible but that some changes are required, including participant screening for adverse response and minimising disruptions to intervention scheduling. In blinded evaluation there was considerable variation in individual results, but positive changes occurred in at least one speech symptom between pre- and post-intervention assessments for eight participants. Seven participants also showed improved nasalance scores and seven had improved quality-of-life scores.
This small study has provided important information about the feasibility of delivering and evaluating VFTh. It suggests that VFTh shows promise as an alternative treatment option for nasal speech but that further preliminary development and evaluation is required before larger scale research is indicated.
28

Factors affecting speech recognition in noise and hearing loss in adults with a wide variety of auditory capabilities

Athalye, Sheetal Purushottam January 2010 (has links)
Studies concerning speech recognition in noise constitute a very broad spectrum of work including aspects like the cocktail party effect or observing performance of individuals in different types of speech-signal or noise as well as benefit and improvement with hearing aids. Another important area that has received much attention is investigating the inter-relations among various auditory and non-auditory capabilities affecting speech intelligibility. Those studies have focussed on the relationship between auditory threshold (hearing sensitivity) and a number of suprathreshold abilities like speech recognition in quiet and noise, frequency resolution, temporal resolution and the non-auditory ability of cognition. There is considerable discrepancy regarding the relationship between speech recognition in noise and hearing threshold level. Some studies conclude that speech recognition performance in noise can be predicted solely from an individual’s hearing threshold level while others conclude that other supra-threshold factors such as frequency and/or temporal resolution must also play a role. Hearing loss involves more than deficits in recognising speech in noise, raising the question whether hearing impairment is a uni- or multi-dimensional construct. Moreover, different extents of hearing loss may display different relationships among measures of hearing ability, or different dimensionality. The present thesis attempts to address these three issues, by examining a wide range of hearing abilities in large samples of participants having a range of hearing ability from normal to moderate-severe impairment. The research extends previous work by including larger samples of participants, a wider range of measures of hearing ability and by differentiating among levels of hearing impairment. Method: Two large multi-centre studies were conducted, involving 103 and 128 participants respectively. 
A large battery of tests was devised and refined prior to the main studies and implemented on a common PC-based platform. The test domains included measurement of hearing sensitivity, speech recognition in quiet and noise, loudness perception, frequency resolution, temporal resolution, binaural hearing and localization, cognition and subjective measures like listening effort and self-report of hearing disability. Performance tests involved presentation of sounds via circum-aural earphones to one or both ears, as required, at intensities matched to individual hearing impairments to ensure audibility. Most tests involved measurements centred on a low frequency (500 Hz), high frequency (3000 Hz) and broadband. The second study included some refinements based on analysis of the first study. Analyses included multiple regression for prediction of speech recognition in stationary or fluctuating noise and factor analysis to explore the dimensionality of the data. Speech recognition performance was also compared with that predicted using the Speech Intelligibility Index (SII). Findings: Findings from regression analysis pooled across the two studies showed that speech recognition in noise can be predicted from a combination of hearing threshold at higher frequencies (3000/4000 Hz) and frequency resolution at low frequency (500 Hz). This supports previous studies that conclude that resolution is important in addition to hearing sensitivity. This was also confirmed by the fact that SII (representing sensitivity rather than resolution) underpredicted difficulties observed in hearing-impaired ears for speech recognition in noise. Speech recognition in stationary noise was predicted mainly by auditory threshold while speech recognition in fluctuating noise was predicted by a combination having a larger contribution from frequency resolution.
In mild hearing losses (below 40 dB), speech recognition in noise was predicted mainly by hearing threshold; in moderate hearing losses (above 40 dB), it was predicted mainly by frequency resolution when the two studies were combined. Thus it can be observed that the importance of auditory resolution (in this case frequency resolution) increases and the importance of the audiogram decreases as the degree of hearing loss increases, provided speech is presented at audible levels. However, for all degrees of hearing impairment included in the study, prediction based solely on hearing thresholds was not much worse than prediction based on a combination of thresholds and frequency resolution. Lastly, hearing impairment was shown to be multi-dimensional; main factors included hearing threshold, speech recognition in stationary and fluctuating noise, frequency and temporal resolution, binaural processing, loudness perception, cognition and self-reported hearing difficulties. A clinical test protocol for defining an individual auditory profile is suggested based on these findings. Conclusions: Speech recognition in noise depends on a combination of audibility of the speech components (hearing threshold) and frequency resolution. Models such as SII that do not include resolution tend to somewhat over-predict speech recognition performance in noise, especially for more severe hearing impairments. However, the over-prediction is not great. It follows that for clinical purposes there is not much to be gained from more complex psychoacoustic characterisation of sensorineural hearing impairment, when the purpose is to predict or explain difficulty understanding speech in noise. A conventional audiogram and possibly measurement of frequency resolution at 500 Hz is sufficient. However, if the purpose is to acquire a detailed individual auditory profile, the multidimensional nature of hearing loss should not be ignored.
Findings from the present study show that, along with loss of sensitivity and reduced frequency resolution ability, binaural processing, loudness perception, cognition and self-report measures help to characterize this multi-dimensionality. Detailed studies should hence focus on these multiple dimensions of hearing loss and incorporate measuring a wide variety of different auditory capabilities, rather than inclusion of just a few, in order to gain a complete picture of auditory functioning. Frequency resolution at low frequency (500 Hz) as a predictive factor for speech recognition in noise is a new finding. Few previous studies have included low-frequency measures of hearing, which may explain why it has not emerged previously. Yet this finding appears to be robust, as it was consistent across both of the present studies. It may relate to differentiation of vowel components of speech. The present work was unable to confirm the suggestion from previous studies that measures of temporal resolution help to predict speech recognition in fluctuating noise, possibly because few participants had extremely poor temporal resolution ability.
29

Modelling the effect of cochlear implant filterbank characteristics on speech perception

Chowdhury, Shibasis January 2013 (has links)
The characteristics of a cochlear implant (CI) filterbank determine the coding of spectral and temporal information in it. Hence, it is important to optimise the filterbank parameters to achieve optimal benefit in CI users. The present thesis aimed at modelling how the manipulation of the filterbank analysis length and the assignment of spectral channels may affect CI speech perception, using CI acoustical simulation techniques. Investigations were carried out to study the efficacy of providing additional spectral information in low and/or mid frequency channels using a longer filterbank analysis window, with respect to CI processed speech perception in various types of background noise. However, the increase of filterbank analysis length has an associated trade-off, which is a reduction in temporal information. Only a few CI acoustical simulation studies have modelled the characteristics of the FFT filterbank, the most commonly used filterbank in commercial CI processors. An initial experiment was carried out to validate the CI acoustical simulation technique used in the present thesis that implemented an FFT filterbank analysis. Next, the effect of a reduction in temporal information with the increase of the FFT analysis window length was studied. A filterbank with a 16 ms analysis window, without the implementation of its finer spectral coding abilities, performed marginally worse than one with a 4 ms analysis window in a sentence recognition test. The finer spectral coding abilities of the filterbank with a 16 ms analysis window, when implemented, revealed that CI processed speech perception in noise could be significantly improved if additional spectral information is provided in the low and mid frequencies. The assignment of additional spectral channels to the low and mid frequencies led to a corresponding reduction in spectral channels assigned to high frequencies.
However, no detrimental effect in speech perception was observed as long as at least two spectral channels represented information above 3 kHz. The assignment of additional low and mid frequency spectral channels also led to significant levels of spectral shift. The significant benefits from additional low and mid frequency information, however, were lost when the effects of spectral shift were introduced in acute experiments, without any training or acclimatisation period. The findings of the present thesis highlight that a longer filterbank analysis, such as 16 ms, may be implemented in CI devices without the fear of any perceptual cost due to a reduction in temporal information, at least for tasks that do not require talker separation. Providing additional low and mid frequency spectral information with a longer filterbank analysis has the potential to improve CI speech perception. However, to obtain potential benefits, the effects of spectral shift should be overcome. The findings of this thesis, however, need to be interpreted considering the limitations of CI acoustical simulation experiments.
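The spectral/temporal trade-off of the analysis window length can be made concrete with the basic FFT relation that bin spacing equals sample rate divided by window length; the 16 kHz sample rate below is an assumed typical CI processor rate, not a figure from the thesis:

```python
# FFT bin spacing for the two analysis window lengths compared above.
# A longer window gives finer frequency resolution at the cost of
# coarser temporal resolution.
fs = 16000  # Hz, assumed typical CI processor sample rate

bins = {}
for window_ms in (4, 16):
    n = int(fs * window_ms / 1000)   # samples per analysis window
    bins[window_ms] = fs / n         # FFT bin spacing in Hz

# 4 ms -> 64 samples, 250 Hz bins; 16 ms -> 256 samples, 62.5 Hz bins:
# the 16 ms window resolves spectral detail four times finer.
```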
30

A Broadband Microwave Transceiver Front-End for an Airborne Software Defined Radio Experiment

Blair, Arthur Paul Jr. 26 January 2015 (has links)
This document describes the design, simulation, construction, and test of a wideband analog transceiver front-end for use in an airborne software defined radio (SDR) experiment. The transceiver must operate in the GSM-1800 and IEEE 802.11b/g WiFi frequency bands and accommodate beamforming. It consists of a transmitter and dual band receiver. The receiver input is fed by a helical antenna and the outputs are digitized for use in the SDR. The transmitter is fed by a complex baseband output from a Digital-to-Analog Converter (DAC) and its output fed to another helical antenna. The requirements for the transceiver were driven by a spectral survey of the operating environment and the physical and electrical limitations of the platform. The spectral survey showed a great disparity in the received power levels between the signals of interest and potential interferers. Simulations of several candidate receiver architectures showed that meeting the needs of the experiment would require a high degree of linearity and filtering. It was found that the receiver requirements could be met by a single downconversion with high order filters and passband sampling. A series of analyses determined the requirements of the individual components that make up the system. Performance was verified by simulations using measured data of the individual components and lab tests of the assembled hardware. Suggestions for improved performance and expanded operation are made. / Master of Science
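The passband (bandpass) sampling approach mentioned above can be sketched with the standard constraint that a band [f_lo, f_hi] may be sampled at fs without self-aliasing if 2·f_hi/n ≤ fs ≤ 2·f_lo/(n-1) for some integer n up to floor(f_hi / bandwidth); the IF band in the example is assumed for illustration, not the thesis's actual frequency plan:

```python
# Valid bandpass-sampling rate ranges for a band [f_lo, f_hi] in Hz.
# Each returned (min_fs, max_fs) pair corresponds to one integer n in
# the classic constraint 2*f_hi/n <= fs <= 2*f_lo/(n-1).
def valid_bandpass_rates(f_lo, f_hi):
    bw = f_hi - f_lo
    n_max = int(f_hi // bw)
    return [(2.0 * f_hi / n,
             2.0 * f_lo / (n - 1) if n > 1 else float("inf"))
            for n in range(1, n_max + 1)]

# Assumed example: a 20 MHz-wide IF band from 100 to 120 MHz.
# The highest n gives the lowest workable rate, here exactly 40 MHz.
ranges = valid_bandpass_rates(100e6, 120e6)
```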