41

Smart control of electromagnetically driven dosing pumps

Kramer, Thomas, Petzold, Martin, Weber, Jürgen, Ohligschläger, Olaf, Müller, Axel 03 May 2016 (has links) (PDF)
Electromagnetically driven dosing pumps are suitable for precisely metering any kind of liquid in motor vehicles. Due to the working principle and the pump design, an undesired noise occurs when the armature reaches the mechanical end stops. The noise can be reduced by an adequate self-learning control of the supply energy using position estimation and velocity control. Based on preliminary investigations /1/, a noise-reduction method is realised on user-friendly, tiny and cost-efficient hardware, which enables use in series production. The method requires only voltage and current measurements as input signals. The core of the hardware is an 8-bit microcontroller with 8 kilobytes of flash memory and the necessary peripherals. A lean software design allows the entire noise-reduction method to fit into this small flash memory.
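A minimal sketch of the stroke-to-stroke adaptation idea described above, written in Python for illustration rather than as microcontroller firmware; the back-EMF observer, the parameter values (R, L, k_emf) and the adaptation gain are assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_impact_velocity(u, i, dt, R=12.0, L=0.02, k_emf=5.0):
    """Crude back-EMF observer (illustrative parameter values): the motional
    voltage u - R*i - L*di/dt is proportional to armature velocity, so the
    sample closest to the end stop approximates the impact velocity."""
    di_dt = np.gradient(i, dt)
    v = (u - R * i - L * di_dt) / k_emf
    return float(abs(v[-1]))

def adapt_energy(energy, v_impact, v_target=0.05, gain=0.2, lo=0.1, hi=1.0):
    """Self-learning rule: lower the supply energy when the armature arrives
    too fast, raise it when the stroke would otherwise not complete."""
    energy *= 1.0 - gain * (v_impact - v_target) / v_target
    return min(max(energy, lo), hi)
```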
42

Low Light Video Enhancement along with Objective and Subjective Quality Assessment

Dalasari, Venkata Gopi Krishna, Jayanty, Sri Krishna January 2016 (has links)
Enhancing low-light videos has been quite a challenge over the years. A video taken in low light always suffers from low dynamic range and high noise. This master thesis presents a contribution within the field of low-light video enhancement. Three models with different tone-mapping algorithms are proposed for enhancing extremely low-light, low-quality video. For temporal noise removal, a motion-compensated Kalman structure is presented. The dynamic range of the low-light video is stretched using three different methods. In Model 1, the dynamic range is increased by adjusting the RGB histograms using gamma correction with a modified version of adaptive clipping thresholds. In Model 2, a shape-preserving dynamic-range stretch of the RGB histogram is applied using SMQT. In Model 3, contrast enhancement is done using CLAHE. In the final stage, the residual noise is removed using an efficient NLM filter. The performance of the models is compared on several objective VQA metrics such as NIQE, GCF and SSIM. Because most target applications have humans as the end users of the video, subjective tests are also conducted to evaluate the actual performance of the models. The three models are compared on a total of ten real input videos taken in extremely low-light environments. A total of 25 human observers subjectively evaluated the three models with respect to contrast, visibility, visual pleasantness, amount of noise and overall quality. A detailed statistical evaluation of the relative performance of the three models is also provided.
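A sketch of a Model-1-style channel stretch, assuming fixed percentile clipping thresholds and a fixed gamma; the thesis derives its clipping thresholds adaptively, so the numbers here are placeholders rather than the authors' settings.

```python
import numpy as np

def stretch_channel(ch, clip_lo=1.0, clip_hi=99.0, gamma=0.5):
    """Clip one RGB channel at assumed percentile thresholds, stretch it to
    full range, then brighten with gamma correction (gamma < 1 lifts shadows)."""
    lo, hi = np.percentile(ch, [clip_lo, clip_hi])
    ch = np.clip((ch.astype(np.float32) - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (255.0 * ch ** gamma).astype(np.uint8)

def enhance_frame(frame_rgb):
    """Apply the stretch independently to the R, G and B channels."""
    return np.dstack([stretch_channel(frame_rgb[..., c]) for c in range(3)])
```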
43

[en] METHODS FOR THE ACCELERATION OF NON-LOCAL MEANS NOISE REDUCTION ALGORITHM / [pt] MÉTODOS PARA ACELERAÇÃO DO NON-LOCAL MEANS ALGORITMO DE REDUÇÃO DE RUÍDO

NOAM SHAHAM 15 February 2008 (has links)
Non-Local Means is an innovative noise-reduction algorithm for images presented by Buades and Morel in 2004. It performs remarkably better than previous-generation algorithms but carries a performance penalty that prevents its use in mainstream consumer applications. The objective of this work is to find ways of reducing the execution time of the algorithm, enabling its use in mainstream image-processing applications such as home photography or photo-printing centers.
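For reference, a textbook per-pixel non-local means computation with a restricted search window, which is the baseline whose running time the work aims to reduce; border handling and the thesis' specific acceleration strategies are omitted, and the patch size, window size and filtering parameter h are generic choices.

```python
import numpy as np

def nlm_pixel(img, y, x, patch=3, search=10, h=10.0):
    """Non-local means estimate for one pixel (Buades & Morel): average the
    pixels in a restricted search window, weighted by the similarity of their
    surrounding patches. Assumes (y, x) is far enough from the image border."""
    r = patch // 2
    ref = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    num, den = 0.0, 0.0
    for yy in range(y - search, y + search + 1):
        for xx in range(x - search, x + search + 1):
            cand = img[yy - r:yy + r + 1, xx - r:xx + r + 1].astype(np.float64)
            d2 = np.mean((ref - cand) ** 2)   # mean squared patch distance
            w = np.exp(-d2 / (h * h))         # similarity weight
            num += w * img[yy, xx]
            den += w
    return num / den
```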
44

Noise reduction in speech signals in the wavelet domain / Redução de ruído em sinais de voz no domínio wavelet

Duarte, Marco Aparecido Queiroz. January 2005 (has links)
Abstract: In this work, a study of wavelet-based methods for reducing additive noise in speech signals is presented and, based on this study, a new noise-reduction method for speech in the wavelet domain is proposed. The basic idea of most wavelet-based noise-reduction methods is the determination and application of a threshold; this produces good results for signals contaminated by white noise, but it is not very efficient for signals contaminated by colored noise, which is the more common case in real situations. In those methods the threshold is generally computed in the silence intervals and applied to the whole signal. The wavelet-domain coefficients are compared with this threshold and those below it are eliminated, which amounts to a linear application of the threshold. This elimination causes discontinuities in time and frequency in the processed signal. Moreover, the way the threshold is computed can degrade the voice segments of the processed signal, especially when the threshold depends strongly on the last window of the last silence segment. The method proposed in this work is also based on thresholding, but instead of applying the threshold linearly it applies it non-linearly, which avoids the discontinuities caused by other algorithms. The threshold is computed in the silence segments and does not depend only on the last window of the last silence segment but on all of its windows, since it is the average of all thresholds computed in that segment. This makes the noise reduction more uniform and introduces less distortion into the processed signal. In addition, a second threshold is computed in the voice segments and used together with the threshold computed in the silence, so that the energy of the window being processed is also taken into account. This way, it is... (Complete abstract: click the electronic address below.) / Advisor: Francisco Villarreal Alvarado / Co-advisor: Jozué Vieira Filho / Committee: Carlos Roberto Minussi / Committee: Fernando Oscar Runstein / Committee: Roberto Kawakami Harrop Galvão / Committee: Ricardo Tokio Higuti / Doctorate
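A hedged sketch of the silence-averaged threshold idea using PyWavelets; the universal-threshold estimator, the db4 wavelet and the decomposition depth are illustrative assumptions, not necessarily the choices made in the thesis.

```python
import numpy as np
import pywt

def silence_threshold(silence_frames, wavelet="db4", level=4):
    """Average per-window thresholds estimated on the silence segments
    (a universal-threshold estimator is used here for illustration)."""
    thrs = []
    for frame in silence_frames:
        d1 = pywt.wavedec(frame, wavelet, level=level)[-1]   # finest detail band
        sigma = np.median(np.abs(d1)) / 0.6745               # robust noise estimate
        thrs.append(sigma * np.sqrt(2.0 * np.log(len(frame))))
    return float(np.mean(thrs))

def denoise_frame(frame, thr, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of one speech frame with the
    silence-averaged threshold and reconstruct the frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```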
45

A non-thresholding method for noise reduction in speech signals in the wavelet domain / Um método não-limiar para redução de ruído em sinais de voz no domínio wavelet

Soares, Wendel Cleber. January 2009 (has links)
Abstract: In this work, a study of wavelet-based methods for speech noise reduction is presented and, from this study, a new non-thresholding method for speech noise reduction in the wavelet domain is proposed. In general, a speech signal may be corrupted by artificial or real noise: adding white or colored noise to a clean signal yields the noisy signal, both in the time domain. This work proposes applying the wavelet transform and reducing or attenuating the noise in the wavelet domain without the use of a threshold; the signal is then recomposed using the inverse discrete wavelet transform. The most widely used methods in the wavelet domain are threshold-based reduction methods, because they give good results for signals corrupted by white noise, but they are not as efficient for signals corrupted by colored noise, which is the most common noise in real situations. In those methods the threshold is usually computed in the silence intervals and applied to the whole signal. The wavelet-domain coefficients are compared with this threshold and those whose absolute value falls below it are eliminated or reduced, which amounts to a linear application of the threshold. This elimination causes discontinuities in time and frequency in the processed signal. Moreover, the way the threshold is applied can degrade the voice segments of the processed signal, especially when the threshold depends strongly on the last window of the last silence segment. The method proposed in this research consists of three processing steps that act, according to their characteristics, on the voice and silence regions without the use of a threshold. The three steps are synthesized in a single function, called the transfer function, which acts as a filter on the signal. The main objective of this method is to overcome... (Complete abstract: click the electronic address below.) / Advisor: Francisco Villarreal Alvarado / Co-advisor: Jozué Vieira Filho / Committee: Carlos Roberto Minussi / Committee: Ailton Akira Shinoda / Committee: Jorge Diaz Calle / Committee: Leandro de Campos Teixeira Gomes / Doctorate
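To illustrate what a non-thresholding attenuation in the wavelet domain can look like, the sketch below applies a Wiener-style gain to the detail coefficients; the thesis' actual transfer function combines three processing steps for voice and silence regions and is not reproduced here, so this is only an analogy under stated assumptions.

```python
import pywt

def wiener_like_gain(c, noise_var):
    """Smooth attenuation of each wavelet coefficient instead of a hard or
    soft threshold: strong coefficients pass almost unchanged, weak ones are
    suppressed, with no discontinuity at a cut-off value."""
    power = c ** 2
    return power / (power + noise_var)

def denoise_no_threshold(signal, noise_var, wavelet="db4", level=4):
    """Apply the gain to every detail band and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[1:] = [wiener_like_gain(c, noise_var) * c for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```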
46

Spatial, Spectral, and Perceptual Nonlinear Noise Reduction for Hands-free Microphones in a Car

Faneuff, Jeffery J 06 August 2002 (has links)
"Speech enhancement in an automobile is a challenging problem because interference can come from engine noise, fans, music, wind, road noise, reverberation, echo, and passengers engaging in other conversations. Hands-free microphones make the situation worse because the strength of the desired speech signal reduces with increased distance between the microphone and talker. Automobile safety is improved when the driver can use a hands-free interface to phones and other devices instead of taking his eyes off the road. The demand for high quality hands-free communication in the automobile requires the introduction of more powerful algorithms. This thesis shows that a unique combination of five algorithms can achieve superior speech enhancement for a hands-free system when compared to beamforming or spectral subtraction alone. Several different designs were analyzed and tested before converging on the configuration that achieved the best results. Beamforming, voice activity detection, spectral subtraction, perceptual nonlinear weighting, and talker isolation via pitch tracking all work together in a complementary iterative manner to create a speech enhancement system capable of significantly enhancing real world speech signals. The following conclusions are supported by the simulation results using data recorded in a car and are in strong agreement with theory. Adaptive beamforming, like the Generalized Side-lobe Canceller (GSC), can be effectively used if the filters only adapt during silent data frames because too much of the desired speech is cancelled otherwise. Spectral subtraction removes stationary noise while perceptual weighting prevents the introduction of offensive audible noise artifacts. Talker isolation via pitch tracking can perform better when used after beamforming and spectral subtraction because of the higher accuracy obtained after initial noise removal. Iterating the algorithm once increases the accuracy of the Voice Activity Detection (VAD), which improves the overall performance of the algorithm. Placing the microphone(s) on the ceiling above the head and slightly forward of the desired talker appears to be the best location in an automobile based on the experiments performed in this thesis. Objective speech quality measures show that the algorithm removes a majority of the stationary noise in a hands-free environment of an automobile with relatively minimal speech distortion."
47

Denoising of Infrared Images Using Independent Component Analysis

Björling, Robin January 2005 (has links)
<p>Denna uppsats syftar till att undersöka användbarheten av metoden Independent Component Analysis (ICA) för brusreducering av bilder tagna av infraröda kameror. Speciellt fokus ligger på att reducera additivt brus. Bruset delas upp i två delar, det Gaussiska bruset samt det sensorspecifika mönsterbruset. För att reducera det Gaussiska bruset används en populär metod kallad sparse code shrinkage som bygger på ICA. En ny metod, även den byggandes på ICA, utvecklas för att reducera mönsterbrus. För varje sensor utförs, i den nya metoden, en analys av bilddata för att manuellt identifiera typiska mönsterbruskomponenter. Dessa komponenter används därefter för att reducera mönsterbruset i bilder tagna av den aktuella sensorn. Det visas att metoderna ger goda resultat på infraröda bilder. Algoritmerna testas både på syntetiska såväl som på verkliga bilder och resultat presenteras och jämförs med andra algoritmer.</p> / <p>The purpose of this thesis is to evaluate the applicability of the method Independent Component Analysis (ICA) for noise reduction of infrared images. The focus lies on reducing the additive uncorrelated noise and the sensor specific additive Fixed Pattern Noise (FPN). The well known method sparse code shrinkage, in combination with ICA, is applied to reduce the uncorrelated noise degrading infrared images. The result is compared to an adaptive Wiener filter. A novel method, also based on ICA, for reducing FPN is developed. An independent component analysis is made on images from an infrared sensor and typical fixed pattern noise components are manually identified. The identified components are used to fast and effectively reduce the FPN in images taken by the specific sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as on real images and the performance is measured.</p>
48

An FPGA Based Software/Hardware Codesign for Real Time Video Processing : A Video Interface Software and Contrast Enhancement Hardware Codesign Implementation using Xilinx Virtex II Pro FPGA

Wang, Jian January 2006 (has links)
<p>Xilinx Virtex II Pro FPGA with integrated PowerPC core offers an opportunity to implementing a software and hardware codesign. The software application executes on the PowerPC processor while the FPGA implementation of hardware cores coprocess with PowerPC to achieve the goals of acceleration. Another benefit of coprocessing with the hardware acceleration core is the release of processor load. This thesis demonstrates such an FPGA based software and hardware codesign by implementing a real time video processing project on Xilinx ML310 development platform which is featured with a Xilinx Virtex II Pro FPGA. The software part in this project performs video and memory interface task which includes image capture from camera, the store of image into on-board memory, and the display of image on a screen. The hardware coprocessing core does a contrast enhancement function on the input image. To ease the software development and make this project flexible for future extension, an Embedded Operating System MontaVista Linux is installed on the ML310 platform. Thus the software video interface application is developed using Linux programming method, for example the use of Video4Linux API. The last but not the least implementation topic is the software and hardware interface, which is the Linux device driver for the hardware core. This thesis report presents all the above topics of Operating System installation, video interface software development, contrast enhancement hardware implementation, and hardware core’s Linux device driver programming. After this, a measurement result is presented to show the performance of hardware acceleration and processor load reduction, by comparing to the results from a software implementation of the same contrast enhancement function. This is followed by a discussion chapter, including the performance analysis, current design’s limitations and proposals for improvements. This report is ended with an outlook from this master thesis.</p>
49

Precise Size Control and Noise Reduction of Solid-state Nanopores for the Detection of DNA-protein Complexes

Beamish, Eric 07 December 2012 (has links)
Over the past decade, solid-state nanopores have emerged as a versatile tool for the detection and characterization of single molecules, showing great promise in the field of personalized medicine as diagnostic and genotyping platforms. While solid-state nanopores offer increased durability and functionality over a wider range of experimental conditions compared to their biological counterparts, reliable fabrication of low-noise solid-state nanopores remains a challenge. In this thesis, a methodology for treating nanopores using high electric fields in an automated fashion by applying short (0.1-2 s) pulses of 6-10 V is presented which drastically improves the yield of nanopores that can be used for molecular recognition studies. In particular, this technique allows for sub-nanometer control over nanopore size under experimental conditions, facilitates complete wetting of nanopores, reduces noise by up to three orders of magnitude and rejuvenates used pores for further experimentation. This improvement in fabrication yield (over 90%) ultimately makes nanopore-based sensing more efficient, cost-effective and accessible. Tuning size using high electric fields facilitates nanopore fabrication and improves functionality for single-molecule experiments. Here, the use of nanopores for the detection of DNA-protein complexes is examined. As proof-of-concept, neutravidin bound to double-stranded DNA is used as a model complex. The creation of the DNA-neutravidin complex using polymerase chain reaction with biotinylated primers and subsequent purification and multiplex creation is discussed. Finally, an outlook for extending this scheme for the identification of proteins in a sample based on translocation signatures is presented which could be implemented in a portable lab-on-a-chip device for the rapid detection of disease biomarkers.
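For context, the pore diameter in such experiments is commonly estimated from the measured open-pore conductance. The sketch below uses the standard access-resistance conductance model and a hypothetical instrument interface for the automated pulsing loop; the membrane thickness, buffer conductivity and control logic are assumptions, not values or code from the thesis.

```python
import numpy as np

def pore_diameter(G, L=10e-9, sigma=10.5):
    """Estimate the pore diameter d (m) from the measured conductance G (S)
    using G = sigma * [4L/(pi d^2) + 1/d]^(-1), solved for d. L is the
    effective membrane thickness and sigma the electrolyte conductivity
    (typical assumed values, e.g. ~10 nm SiN and 2 M KCl)."""
    return (G / (2 * sigma)) * (1 + np.sqrt(1 + 16 * sigma * L / (np.pi * G)))

def condition_pore(measure_G, apply_pulse, d_target, tol=0.2e-9):
    """Skeleton of an automated high-field treatment: apply short pulses and
    re-measure until the estimated diameter reaches the target. `measure_G`
    and `apply_pulse` are hypothetical instrument callbacks."""
    d = pore_diameter(measure_G())
    while d < d_target - tol:
        apply_pulse(voltage=8.0, duration=0.5)   # within the reported 6-10 V, 0.1-2 s range
        d = pore_diameter(measure_G())
    return d
```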
50

Exploring Discrete Cosine Transform for Multi-resolution Analysis

Abedi, Safdar Ali Syed 10 August 2005 (has links)
Multi-resolution analysis has been a very popular technique in recent years. Wavelets have been used extensively to perform multi-resolution image expansion and analysis. The DCT, however, has been used to compress images but not for multi-resolution image analysis. This thesis is an attempt to explore the possibilities of using the DCT for multi-resolution image analysis. A naive implementation of the block DCT for multi-resolution expansion has many difficulties that lead to signal distortion; one of the main causes of distortion is the blocking artifacts that appear when reconstructing images transformed by the DCT. The new algorithm is based on a line DCT, which eliminates the need for block processing. The line DCT operates on a one-dimensional array obtained by cascading the image rows and columns in one transform operation. Several images have been used to test the algorithm at various resolution levels, with the reconstruction mean-square error used as an indicator of the method's success. The proposed algorithm has also been tested against the traditional block DCT.
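A rough sketch of the line-DCT idea: cascade the image rows into a single one-dimensional signal, transform it once, and form a lower-resolution approximation by keeping only the low-frequency coefficients. The exact cascading of rows and columns and the reconstruction used in the thesis may differ from this sketch.

```python
import numpy as np
from scipy.fft import dct, idct

def line_dct_lowres(img, keep=0.25):
    """Concatenate the image rows into one 1-D signal, take a single DCT,
    zero all but the lowest `keep` fraction of coefficients and invert,
    giving a low-frequency approximation without block processing."""
    line = img.astype(np.float64).ravel()            # cascade rows into one array
    coeffs = dct(line, norm="ortho")
    coeffs[int(keep * coeffs.size):] = 0.0           # discard high frequencies
    return idct(coeffs, norm="ortho").reshape(img.shape)

def reconstruction_mse(img, approx):
    """Mean-square error between the original image and its approximation."""
    return float(np.mean((img.astype(np.float64) - approx) ** 2))
```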
