61

Design and Implementation of Calculated Readout by Spectral Parallelism (CRISP) in Magnetic Resonance Imaging (MRI)

So, Simon Sai-Man January 2010
CRISP is a data acquisition and image reconstruction technique that offers theoretical increases in signal-to-noise ratio (SNR) and dynamic range over traditional methods in magnetic resonance imaging (MRI). The incoming broadband MRI signal is de-multiplexed into multiple narrow frequency bands using analog filters. The signal from each narrowband channel is then individually captured and digitized. The original signal is recovered by recombining all the channels via weighted addition, where the weights correspond to the frequency responses of the narrowband filters. With ideal bandpasses and bandwidth-dependent noise after filtering, the SNR increase is proportional to sqrt(N), where N is the number of bandpasses. In addition to the SNR improvement, free induction decay (FID) echoes in CRISP experience a slower decay rate. In situations where resolution is limited by digitization noise, CRISP is able to capture data further out into the higher-frequency regions of k-space, which leads to a relative increase in resolution. The conversion from one broadband MR signal into multiple narrowband channels is realized using a comb, or bank, of active analog bandpass filters. A custom CRISP RF receiver chain is implemented to downconvert and demodulate the raw MR signal prior to narrowband filtering, and to digitize the signals from each filter channel simultaneously. Results are presented demonstrating that the CRISP receiver chain can acquire 2D MR images (without narrowband filters) with SNR comparable to that of images obtained with a clinical system. Acquiring 2D CRISP images (with narrowband filters) was not possible due to the lack of phase lock between rows in k-space. The RMS noise of narrowband, broadband, and unfiltered 1D echoes is compared.
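The recombination step described above — de-multiplexing the broadband signal into disjoint bands and summing the channels back with the filters' frequency responses as weights — can be sketched numerically. This is a minimal sketch with ideal (brick-wall) bandpasses; the toy signal and the band count are illustrative and not taken from the thesis:

```python
import numpy as np

n = 1024
t = np.arange(n)
# Toy broadband "MR signal": a few spectral components.
signal = (np.sin(2 * np.pi * 0.03 * t)
          + 0.5 * np.sin(2 * np.pi * 0.11 * t)
          + 0.25 * np.sin(2 * np.pi * 0.23 * t))

n_bands = 8
spectrum = np.fft.rfft(signal)
n_freqs = spectrum.size
edges = np.linspace(0, n_freqs, n_bands + 1).astype(int)

# Ideal (brick-wall) bandpass responses: 1 inside the band, 0 outside.
responses = np.zeros((n_bands, n_freqs))
for k in range(n_bands):
    responses[k, edges[k]:edges[k + 1]] = 1.0

# De-multiplex: each channel sees the signal through its own bandpass.
channels = [np.fft.irfft(spectrum * h, n) for h in responses]

# Recombine by weighted addition; with ideal disjoint bandpasses the
# frequency-response weights tile the spectrum, so the broadband signal
# is recovered to numerical precision.
recombined = np.zeros(n)
for ch, h in zip(channels, responses):
    recombined += np.fft.irfft(np.fft.rfft(ch) * h, n)

print(np.max(np.abs(recombined - signal)) < 1e-9)  # True
```

With real analog filters the responses overlap and are not flat, which is exactly why the weights must track each filter's measured frequency response rather than an ideal mask.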
62

Cognition in Hearing Aid Users : Memory for Everyday Speech / Kognition hos hörapparatsanvändare : Att minnas talade vardagsmeningar

Ng, Hoi Ning Elaine January 2013
The thesis investigated the importance of cognition for speech understanding in experienced and new hearing aid users. The aims were 1) to develop a cognitive test (Sentence-final Word Identification and Recall, or SWIR test) to measure the effects of a noise reduction algorithm on processing of highly intelligible speech (everyday sentences); 2) to investigate, using the SWIR test, whether hearing aid signal processing would affect memory for heard speech in experienced hearing aid users; 3) to test whether the effects of signal processing on the ability to recall speech would interact with background noise and individual differences in working memory capacity; 4) to explore the potential clinical application of the SWIR test; and 5) to examine the relationship between cognition and speech recognition in noise in new users over the first six months of hearing aid use. Results showed that, for experienced users, noise reduction freed up cognitive resources and alleviated the negative impact of noise on memory when speech stimuli were presented in a background of speech babble spoken in the listener’s native language. The possible underlying mechanisms are that noise reduction facilitates auditory stream segregation between target and irrelevant speech and reduces the attention captured by the linguistic information in irrelevant speech. The effects of noise reduction and SWIR performance were modulated by individual differences in working memory capacity. SWIR performance was related to the self-reported outcome of hearing aid use. For new users, working memory capacity played a more important role in speech recognition in noise before acclimatization to hearing aid amplification than after six months. This thesis demonstrates for the first time that hearing aid signal processing can significantly improve the ability of individuals with hearing impairment to recall highly intelligible speech stimuli presented in babble noise. It also adds to the literature showing the key role of working memory capacity in listening with hearing aids, especially for new users. By virtue of its relation to subjective measures of hearing aid outcome, the SWIR test can potentially be used as a tool in assessing hearing aid outcome.
63

Precise Size Control and Noise Reduction of Solid-state Nanopores for the Detection of DNA-protein Complexes

Beamish, Eric 07 December 2012
Over the past decade, solid-state nanopores have emerged as a versatile tool for the detection and characterization of single molecules, showing great promise in the field of personalized medicine as diagnostic and genotyping platforms. While solid-state nanopores offer increased durability and functionality over a wider range of experimental conditions compared to their biological counterparts, reliable fabrication of low-noise solid-state nanopores remains a challenge. In this thesis, a methodology for treating nanopores using high electric fields in an automated fashion by applying short (0.1-2 s) pulses of 6-10 V is presented which drastically improves the yield of nanopores that can be used for molecular recognition studies. In particular, this technique allows for sub-nanometer control over nanopore size under experimental conditions, facilitates complete wetting of nanopores, reduces noise by up to three orders of magnitude and rejuvenates used pores for further experimentation. This improvement in fabrication yield (over 90%) ultimately makes nanopore-based sensing more efficient, cost-effective and accessible. Tuning size using high electric fields facilitates nanopore fabrication and improves functionality for single-molecule experiments. Here, the use of nanopores for the detection of DNA-protein complexes is examined. As proof-of-concept, neutravidin bound to double-stranded DNA is used as a model complex. The creation of the DNA-neutravidin complex using polymerase chain reaction with biotinylated primers and subsequent purification and multiplex creation is discussed. Finally, an outlook for extending this scheme for the identification of proteins in a sample based on translocation signatures is presented which could be implemented in a portable lab-on-a-chip device for the rapid detection of disease biomarkers.
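The automated conditioning protocol — apply a short high-field pulse, measure conductance, estimate pore size, repeat until the target size is reached — can be sketched as a control loop. This is a hypothetical simulation, not the thesis's instrument code: the conductance model is the common cylindrical-pore-plus-access-resistance approximation, and the per-pulse growth rate is invented for illustration; only the pulse range (0.1-2 s, 6-10 V) comes from the abstract:

```python
import math

def diameter_from_conductance(G, L=10e-9, sigma=10.5):
    """Invert the common model G = sigma * (4L/(pi d^2) + 1/d)^(-1)
    for the pore diameter d (L: membrane thickness in m, sigma: S/m)."""
    a = sigma / G
    # a*d^2 - d - 4L/pi = 0  ->  take the positive root.
    return (1 + math.sqrt(1 + 16 * L * sigma / (math.pi * G))) / (2 * a)

def condition_pore(measure_G, apply_pulse, target_d, max_pulses=500):
    """Pulse the pore until its estimated diameter reaches target_d metres."""
    for n in range(max_pulses):
        d = diameter_from_conductance(measure_G())
        if d >= target_d:
            return d, n
        apply_pulse(voltage=8.0, duration_s=0.5)  # within the 6-10 V, 0.1-2 s range
    raise RuntimeError("pore did not reach target size")

# Simulated hardware: each pulse enlarges the pore by ~0.1 nm (invented rate).
state = {"d": 3e-9}

def measure_G(L=10e-9, sigma=10.5):
    d = state["d"]
    return sigma / (4 * L / (math.pi * d ** 2) + 1 / d)

def apply_pulse(voltage, duration_s):
    state["d"] += 0.1e-9

final_d, pulses = condition_pore(measure_G, apply_pulse, target_d=5e-9)
print(f"{final_d * 1e9:.1f} nm after {pulses} pulses")
```

The closed-loop structure is what gives the sub-nanometer control: the stopping condition is evaluated from conductance measured under the actual experimental conditions rather than from fabrication parameters.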
64

Correlated Polarity Noise Reduction: Development, Analysis, and Application of a Novel Noise Reduction Paradigm

Wells, Jered R January 2013
Image noise is a pervasive problem in medical imaging. It is a property endemic to all imaging modalities and one especially familiar in those that employ ionizing radiation. Statistical uncertainty is a major limiting factor in the reduction of ionizing radiation dose; patient exposure must be minimized, but high image quality must also be achieved to retain the clinical utility of medical images. One way to achieve the goal of radiation dose reduction is through image post-processing with noise reduction algorithms. By acquiring images at lower than normal exposure followed by algorithmic noise reduction, it is possible to restore image noise to near-normal levels. However, many denoising algorithms degrade the integrity of other image quality components in the process.

In this dissertation, a new noise reduction algorithm is investigated: Correlated Polarity Noise Reduction (CPNR). CPNR is a novel noise reduction technique that uses a statistical approach to reduce noise variance while maintaining excellent resolution and a "normal" noise appearance. In this work, the algorithm is developed in detail with the introduction of several methods for improving polarity estimation accuracy and maintaining the normality of the residual noise intensity distribution. Several image quality characteristics are assessed in the production of this new algorithm, including its effects on residual noise texture, residual noise magnitude distribution, resolution, and nonlinear distortion. An in-depth review of current linear methods for medical imaging system resolution analysis is presented, along with several newly discovered improvements to existing techniques. This is followed by the presentation of a new paradigm for quantifying the frequency response and distortion properties of nonlinear algorithms. Finally, the new CPNR algorithm is applied to computed tomography (CT) to assess its efficacy as a dose reduction tool in 3-D imaging.

It was found that the CPNR algorithm can be used to reduce x-ray dose in projection radiography by a factor of at least two without objectionable degradation of image resolution. This is comparable to other nonlinear image denoising algorithms such as the bilateral filter and wavelet denoising. However, CPNR can accomplish this level of dose reduction with few edge effects and negligible nonlinear distortion of the anatomical signal, as evidenced by the newly developed nonlinear assessment paradigm. In application to multi-detector CT, XCAT simulations showed that CPNR can be used to reduce noise variance by 40% with minimal blurring of anatomical structures under a filtered back-projection reconstruction paradigm. When an apodization filter was applied, only 33% noise variance reduction was achieved, but the edge-saving qualities were largely retained. In application to cone-beam CT for daily patient positioning in radiation therapy, up to 49% noise variance reduction was achieved with as little as 1% reduction in the task transfer function measured from reconstructed data at the cutoff frequency.

This work concludes that the CPNR paradigm shows promise as a viable noise reduction tool that can maintain current standards of clinical image quality at almost half of normal radiation exposure. The algorithm has favorable resolution and nonlinear distortion properties as measured using a newly developed set of metrics for nonlinear algorithm resolution and distortion assessment. Simulation studies and the initial application of CPNR to cone-beam CT data reveal that CPNR may be used to reduce CT dose by 40%-49% with minimal degradation of image resolution.
65

Compensation for Nonlinear Distortion in Noise for Robust Speech Recognition

Harvilla, Mark J. 01 October 2014
The performance, reliability, and ubiquity of automatic speech recognition systems have flourished in recent years due to steadily increasing computational power and technological innovations such as hidden Markov models, weighted finite-state transducers, and deep learning methods. One problem that plagues speech recognition systems, especially those that operate offline and have been trained on specific in-domain data, is the deleterious effect of noise on the accuracy of speech recognition. Historically, robust speech recognition research has focused on traditional noise types such as additive noise, linear filtering, and reverberation. This thesis describes the effects of nonlinear dynamic range compression on automatic speech recognition and develops a number of novel techniques for characterizing and counteracting it. Dynamic range compression is any function that reduces the dynamic range of an input signal. Dynamic range compression is a widely-used tool in audio engineering and is almost always a component of a practical telecommunications system. Despite its ubiquity, this thesis is the first work to comprehensively study and address the effect of dynamic range compression on speech recognition. More specifically, this thesis treats the problem of dynamic range compression in three ways: (1) blind amplitude normalization methods, which counteract dynamic range compression when its parameter values allow the function to be mathematically inverted, (2) blind amplitude reconstruction techniques, i.e., declipping, which attempt to reconstruct clipped segments of the speech signal that are lost through non-invertible dynamic range compression, and (3) matched-training techniques, which attempt to select the pre-trained acoustic model with the closest set of compression parameters. All three of these methods rely on robust estimation of the dynamic range compression distortion parameters.
Novel algorithms for the blind prediction of these parameters are also introduced. The algorithms' quality is evaluated in terms of the degree to which they decrease speech recognition word error rate, as well as in terms of the degree to which they increase a given speech signal's signal-to-noise ratio. In all evaluations, the possibility of independent additive noise following the application of dynamic range compression is assumed.
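The distinction between invertible and non-invertible dynamic range compression can be made concrete with a memoryless toy model. This is a hedged sketch: the power-law compressor and its exponent are illustrative stand-ins, not the compression functions studied in the thesis:

```python
import numpy as np

def drc_power(x, p):
    """Memoryless power-law dynamic range compression; invertible for p > 0."""
    return np.sign(x) * np.abs(x) ** p

def drc_clip(x, threshold):
    """Hard clipping: non-invertible, so clipped samples must be
    reconstructed (declipping) rather than merely rescaled."""
    return np.clip(x, -threshold, threshold)

def invert_power(y, p):
    """Blind amplitude normalization for the invertible case, assuming the
    compression parameter p has been estimated."""
    return np.sign(y) * np.abs(y) ** (1.0 / p)

rng = np.random.default_rng(1)
speech = rng.normal(0.0, 0.3, 16000)           # stand-in for a speech waveform
compressed = drc_power(speech, p=0.5)
restored = invert_power(compressed, p=0.5)
print(np.allclose(restored, speech))            # True: fully recoverable

clipped = drc_clip(speech, 0.2)
print(np.mean(np.abs(speech) > 0.2))            # fraction of samples lost to clipping
```

The clipped fraction is exactly the information that cases (2) and (3) above must handle: once samples saturate, no pointwise inverse exists, so the signal must be reconstructed or the acoustic model matched to the distortion.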
66

Um método não-limiar para redução de ruído em sinais de voz no domínio wavelet / A non-threshold method for noise reduction in speech signals in the wavelet domain

Soares, Wendel Cleber [UNESP] 29 May 2009
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

In this work, methods for additive noise reduction in speech signals based on wavelets are studied and, building on this study, a new non-threshold method for speech noise reduction in the wavelet domain is proposed. In general, speech signals may be corrupted by artificial or real noise: given a clean signal, white or colored noise is added to it, producing a noisy signal in the time domain. This work proposes applying the wavelet transform to obtain the signal in the wavelet domain, where the noise is reduced or attenuated without the use of a threshold; the signal is then recomposed using the inverse discrete wavelet transform. The most widely used methods in the wavelet domain are the thresholding reduction methods, because they give good results for signals corrupted by white noise; they are not as efficient, however, for signals corrupted by colored noise, which is the most common type of noise in real situations. In those methods the threshold is usually calculated in the silence intervals and applied to the whole signal: the coefficients in the wavelet domain are compared with this threshold, and those whose absolute value falls below it are eliminated or reduced, making a linear application of the threshold. This elimination often causes discontinuities in time and frequency in the processed signal. Moreover, the way the threshold is calculated can degrade the speech segments of the processed signal, mainly in cases where the threshold depends strongly on the last window of the last silence segment.

The method proposed in this research consists of three processing stages, acting according to their characteristics in the speech and silence regions, without the use of a threshold. The three stages are synthesized into a single function, called the transfer function, which acts as a filter in the signal processing. The main objective of this method is the overcoming…
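For contrast, the conventional thresholding baseline that the non-threshold method argues against can be sketched with a one-level Haar transform. This is a generic soft-thresholding illustration, not the proposed method (whose three processing stages are not detailed in the abstract); the test signal, noise level, and threshold rule are illustrative:

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar wavelet transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold_denoise(noisy, threshold):
    """Thresholding baseline: shrink detail coefficients toward zero.
    Works well for white noise but, as the thesis notes, the hard cut
    introduces time/frequency discontinuities and handles colored
    noise poorly."""
    a, d = haar_dwt(noisy)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return haar_idwt(a, d)

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1024))
noisy = clean + rng.normal(0.0, 0.3, 1024)
# Universal threshold sigma * sqrt(2 ln N), with sigma assumed known here.
denoised = soft_threshold_denoise(noisy, 0.3 * np.sqrt(2 * np.log(1024)))
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

A non-threshold approach replaces the hard shrinkage rule with a smooth transfer function applied to the coefficients, avoiding the discontinuities that the zeroed coefficients create.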
67

A New Approach for the Enhancement of Dual-energy Computed Tomography Images

January 2011
Computed tomography (CT) is one of the essential imaging modalities for medical diagnosis. Since its introduction in 1972, CT technology has improved dramatically, especially in terms of acquisition speed. However, the main principle of CT, which consists of acquiring only density information, had not changed at all until recently. Different materials may have the same CT number, which may lead to uncertainty or misdiagnosis. Dual-energy CT (DECT) was reintroduced recently to solve this problem by using the additional spectral information of X-ray attenuation; it aims for accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low- and high-energy images or measurements, so it is difficult to extract accurately because pixel noise is amplified in the resulting difference image. In this work, a new model and an image enhancement technique for DECT are proposed, based on the fact that the attenuation of a high-density material decreases more rapidly as X-ray energy increases. This fact has been ignored in most previous DECT image enhancement techniques. The proposed technique consists of offset correction, spectral error correction, and adaptive noise suppression. It reduced noise and improved contrast effectively, and showed better material differentiation in real patient images as well as in phantom studies.
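The core difficulty the technique addresses — noise amplification in the low/high-energy difference image — can be illustrated numerically. This is a toy sketch in which a dense insert's attenuation drops faster at the high energy; the attenuation values and the 3x3 mean filter standing in for adaptive noise suppression are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (64, 64)
# Toy phantom: soft-tissue background plus a dense insert whose attenuation
# falls faster at the high energy (values invented for illustration).
low = np.full(shape, 0.25)
high = np.full(shape, 0.22)
low[24:40, 24:40], high[24:40, 24:40] = 0.60, 0.35
low_n = low + rng.normal(0.0, 0.02, shape)
high_n = high + rng.normal(0.0, 0.02, shape)

# The spectral (material) information sits in the difference image, where
# the independent pixel noise variances add, roughly doubling the noise:
diff = low_n - high_n
ratio = np.var((low_n - low) - (high_n - high)) / np.var(low_n - low)
print(round(ratio, 1))  # ≈ 2

# Minimal stand-in for "adaptive noise suppression": 3x3 mean filtering of
# the difference image (the thesis's corrections are more elaborate).
pad = np.pad(diff, 1, mode="edge")
smoothed = sum(pad[i:i + shape[0], j:j + shape[1]] / 9.0
               for i in range(3) for j in range(3))
print(np.std(smoothed[:20, :20]) < np.std(diff[:20, :20]))  # True: noise reduced
```

The dense insert still stands out in the smoothed difference image because its low/high attenuation gap (0.25 vs 0.03 in this toy) is far above the residual noise, which is the spectral property the proposed enhancement exploits.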
68

An FPGA Based Software/Hardware Codesign for Real Time Video Processing : A Video Interface Software and Contrast Enhancement Hardware Codesign Implementation using Xilinx Virtex II Pro FPGA

Wang, Jian January 2006
The Xilinx Virtex II Pro FPGA with its integrated PowerPC core offers the opportunity to implement a software/hardware codesign: the software application executes on the PowerPC processor, while hardware cores implemented in the FPGA fabric coprocess with the PowerPC to achieve acceleration. Another benefit of coprocessing with a hardware acceleration core is a reduced processor load. This thesis demonstrates such an FPGA-based software/hardware codesign by implementing a real-time video processing project on the Xilinx ML310 development platform, which features a Virtex II Pro FPGA. The software part of the project performs the video and memory interface tasks, which include capturing images from a camera, storing images in on-board memory, and displaying images on a screen. The hardware coprocessing core performs a contrast enhancement function on the input image. To ease software development and keep the project flexible for future extension, an embedded operating system, MontaVista Linux, is installed on the ML310 platform; the video interface application is therefore developed using Linux programming methods, for example the Video4Linux API. The last but not least implementation topic is the software/hardware interface: the Linux device driver for the hardware core. This thesis report presents all of the above topics: operating system installation, video interface software development, contrast enhancement hardware implementation, and Linux device driver programming for the hardware core. Measurement results are then presented to show the hardware acceleration performance and processor load reduction, by comparison with a software implementation of the same contrast enhancement function. This is followed by a discussion chapter covering the performance analysis, the current design's limitations, and proposals for improvements. The report ends with an outlook from this master's thesis.
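The abstract does not specify which contrast enhancement algorithm the hardware core implements; a common choice for such a core is a linear min-max contrast stretch, sketched here as a hypothetical software reference implementation of the kind the thesis compares the hardware against (function name and parameters are illustrative):

```python
import numpy as np

def contrast_stretch(frame, lo=0, hi=255):
    """Linear min-max contrast stretch of an 8-bit frame: map the darkest
    pixel to lo and the brightest to hi. A plausible stand-in for the
    contrast enhancement core; the thesis's exact algorithm may differ."""
    fmin, fmax = frame.min(), frame.max()
    if fmax == fmin:                      # flat frame: nothing to stretch
        return np.full_like(frame, lo)
    out = (frame.astype(np.float32) - fmin) * (hi - lo) / (fmax - fmin) + lo
    return out.astype(np.uint8)

frame = np.array([[60, 70], [80, 100]], dtype=np.uint8)   # low-contrast patch
out = contrast_stretch(frame)
print(out.min(), out.max())  # 0 255
```

In the codesign, this per-pixel arithmetic is exactly the kind of data-parallel, memory-streaming work that maps well onto FPGA fabric, which is why offloading it relieves the PowerPC.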
69

Adaptive anatomical preservation optimal denoising for radiation therapy daily MRI

Maitree, Rapeepan, Perez-Carrillo, Gloria J. Guzman, Shimony, Joshua S., Gach, H. Michael, Chundury, Anupama, Roach, Michael, Li, H. Harold, Yang, Deshan 01 September 2017
Low-field magnetic resonance imaging (MRI) has recently been integrated with radiation therapy systems to provide image guidance for daily cancer radiation treatments. The main benefit of the low field strength is minimal electron return effects; the main disadvantage is increased image noise compared to diagnostic MRIs conducted at 1.5 T or higher. The increased image noise affects both the discernibility of soft tissues and the accuracy of further image processing tasks for both clinical and research applications, such as tumor tracking, feature analysis, image segmentation, and image registration. An innovative method, adaptive anatomical preservation optimal denoising (AAPOD), was developed for optimal image denoising, i.e., to maximally reduce noise while preserving the tissue boundaries. AAPOD employs a series of adaptive nonlocal means (ANLM) denoising trials with increasing denoising filter strength (i.e., the block similarity filtering parameter in the ANLM algorithm), and then detects the tissue boundary losses in the differences of sequentially denoised images using a zero-crossing edge detection method. The optimal denoising filter strength per voxel is determined by identifying the filter strength value at which boundary losses start to appear around the voxel. The final denoising result is generated by applying the ANLM denoising method with the optimal per-voxel denoising filter strengths. The experimental results demonstrated that AAPOD was capable of reducing noise adaptively and optimally while avoiding tissue boundary losses. AAPOD is useful for improving the quality of MRIs with low contrast-to-noise ratios and could be applied to other medical imaging modalities, e.g., computed tomography. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
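The strength-selection idea can be illustrated in one dimension. This is a deliberately simplified sketch: a moving-average filter stands in for ANLM, a max-difference test stands in for zero-crossing edge detection, and a single global strength replaces the per-voxel search; all names and parameters are illustrative:

```python
import numpy as np

def smooth(x, strength):
    """Stand-in denoiser: moving average of half-width `strength` (AAPOD
    uses adaptive nonlocal means; this only mimics 'stronger = smoother')."""
    k = 2 * strength + 1
    return np.convolve(x, np.ones(k) / k, mode="same")

def boundary_loss(prev, curr):
    """Crude edge-loss detector: a large localized change between successive
    denoising trials suggests a tissue boundary is being blurred away."""
    return np.max(np.abs(curr - prev))

def aapod_1d(noisy, strengths, loss_tol):
    """Pick the largest strength whose incremental change stays below the
    tolerance, mirroring AAPOD's search (here one global strength, not
    per-voxel)."""
    prev = smooth(noisy, strengths[0])
    best = strengths[0]
    for s in strengths[1:]:
        curr = smooth(noisy, s)
        if boundary_loss(prev, curr) > loss_tol:
            break                      # boundary loss appeared: stop here
        best, prev = s, curr
    return best, smooth(noisy, best)

rng = np.random.default_rng(3)
signal = np.r_[np.zeros(200), np.ones(200)]        # one sharp tissue boundary
noisy = signal + rng.normal(0.0, 0.1, 400)
strength, denoised = aapod_1d(noisy, strengths=[1, 2, 4, 8, 16], loss_tol=0.08)
print(np.std(denoised[:150] - signal[:150])
      < np.std(noisy[:150] - signal[:150]))        # True: noise reduced
```

The real algorithm makes this decision independently at every voxel, so flat regions receive strong filtering while voxels near boundaries keep a weak filter strength.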