
Estudo para otimização do algoritmo Non-local means visando aplicações em tempo real / A study on optimization of the Non-local means algorithm for real-time applications

Silva, Hamilton Soares da 25 July 2014 (has links)
The aim of this work is to study the Non-Local Means (NLM) algorithm and to propose techniques for optimizing and implementing it for real-time applications. Two implementation alternatives are suggested. The first is the development of an accelerator card for computers with a PCI bus, containing specialized hardware that implements the NLM filter. The second uses the densely multiprocessed GPU environment found in video controllers. Both proposals significantly accelerate the NLM algorithm while maintaining the same visual quality as traditional software implementations, making real-time use possible. Noise filtering is an important area of digital image processing; it is increasingly used because improvements in acquisition equipment, and the resulting increase in image resolution, favor the appearance of such perturbations. It is widely studied in the fields of image processing, computer vision and predictive maintenance of electrical substations, motors, tires, building installations, pipes and fittings, with the focus on reducing noise without removing details of the original image. Several approaches have been proposed for noise filtering. One of them is the non-local method called Non-Local Means (NLM), which uses not only local information but the entire image, and it stands out as the state of the art. However, a problem with this method is its high computational complexity, which makes it practically unusable in real-time applications, even for small images.
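The NLM filter described above replaces each pixel by a weighted average of other pixels, with weights computed from the similarity of the surrounding patches. The following NumPy sketch illustrates that computation; the patch radius, search-window size and filtering parameter h are illustrative assumptions rather than values from the thesis, and restricting the search to a window is a common simplification of the full-image version whose cost is quadratic in the number of pixels.

```python
import numpy as np

def nlm_denoise(img, patch_radius=3, search_radius=10, h=10.0):
    """Naive Non-Local Means sketch for a 2D grayscale image: each pixel
    becomes a weighted average of pixels in a search window, weighted by
    the similarity of the patches around them."""
    img = img.astype(np.float64)
    pad = patch_radius
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1]  # patch around (i, j)
            weight_sum, value = 0.0, 0.0
            i0, i1 = max(0, i - search_radius), min(rows, i + search_radius + 1)
            j0, j1 = max(0, j - search_radius), min(cols, j + search_radius + 1)
            for k in range(i0, i1):
                for l in range(j0, j1):
                    cand = padded[k:k + 2 * pad + 1, l:l + 2 * pad + 1]
                    d2 = np.mean((ref - cand) ** 2)   # mean squared patch distance
                    w = np.exp(-d2 / (h * h))         # exponential similarity weight
                    weight_sum += w
                    value += w * img[k, l]
            out[i, j] = value / weight_sum
    return out
```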

Proposta de implementação em hardware para o algoritmo non-local means / A proposal for a hardware implementation of the non-local means algorithm

Gambarra, Lucas Lucena 13 August 2012 (has links)
A digital image is a representation of a two-dimensional image using coded binary numbers that allow its storage, transfer, printing, reproduction and processing by electronic means. It is formed by a set of points defined by numerical values (gray levels), in which each point represents a pixel. In any grayscale digital image, the measurement of the gray level observed at each pixel is subject to perturbations. These perturbations, called noise, are due to the random nature of the photon-counting process in the sensors used for image capture. The noise may be amplified by digital corrections or by image-processing software such as tools that increase contrast. Image denoising, whose goal is to recover or estimate the original image, is still one of the most fundamental and widely studied problems in image processing. In many areas, such as aerospace and medical image analysis, noise removal is a key step in improving the quality of the results. Among the alternatives for this purpose, the method proposed by Buades (2005), known as Non-Local Means (NLM), represents the state of the art. Although quite effective at removing noise, NLM is too slow to be used in practice. Its high computational complexity is caused by the need to compute weights over all image pixels during the filtering of each pixel, resulting in quadratic complexity in the number of image pixels. The weights are obtained by computing the difference between the neighborhoods corresponding to each pair of pixels. Many applications have timing requirements for their results to be useful. This work proposes a hardware (FPGA) implementation of the Non-Local Means algorithm for image denoising with a lower computation time, using pipelines, hardware parallelism and piecewise linear approximation. It is about 290 times faster than the original Non-Local Means algorithm in software, yet produces comparable results in terms of mean-squared error (MSE) and perceptual image quality.
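One of the techniques named above, piecewise linear approximation, is typically used to replace the exponential weighting function of the NLM filter with a lookup-and-interpolate unit that maps well onto hardware. The short sketch below shows the idea in Python for exp(-x); the breakpoints and the use of floating point are illustrative assumptions, since an FPGA design would normally use tuned breakpoints and fixed-point arithmetic, and the thesis's actual design is not reproduced here.

```python
import numpy as np

# Hypothetical breakpoints for the approximation; a real hardware design
# would tune these and store the table in fixed-point format.
_XS = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
_YS = np.exp(-_XS)

def pwl_exp_neg(x):
    """Piecewise linear approximation of exp(-x) for x >= 0, the kind of
    table-lookup-and-interpolate unit that is cheap to implement in hardware."""
    x = np.clip(x, _XS[0], _XS[-1])
    return np.interp(x, _XS, _YS)

# Quick check of the approximation error against the exact weight function.
x = np.linspace(0.0, 8.0, 1000)
print("max abs error:", np.max(np.abs(pwl_exp_neg(x) - np.exp(-x))))
```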

Multitemporal SAR images denoising and change detection : applications to Sentinel-1 data / Débruitage et détection de changements pour les séries temporelles d'images SAR : applications aux données Sentinel-1

Zhao, Weiying 21 January 2019 (has links)
The inherent speckle attached to any coherent imaging system affects the analysis and interpretation of synthetic aperture radar (SAR) images. To take advantage of well-registered multitemporal SAR images, we improve the adaptive non-local temporal filter with state-of-the-art adaptive denoising methods and propose a patch-based adaptive temporal filter. To address the bias problem of the denoising results, we propose a fast and efficient multitemporal despeckling method. The key idea of the proposed approach is the use of the ratio image, given by the ratio between an image and the temporal mean of the stack. This ratio image is easier to denoise than a single image thanks to its improved stationarity. Besides, temporally stable thin structures are well preserved thanks to the multitemporal mean. In the absence of a reference image, we propose to use a patch-based auto-covariance residual evaluation method to examine the residual image and look for possible remaining structural content. With the speckle-reduced images, we propose to use the simplified generalized likelihood ratio method to detect changed areas, change magnitudes and change times in long series of well-registered images. Based on spectral clustering, we apply the simplified generalized likelihood ratio to characterize the types of change in the time series. Jet colormap and HSV colorization are then used to visualize the detection results. These methods have been successfully applied to monitor changes in farmland areas, urban areas, harbor regions and flooded areas.
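The central idea of the multitemporal despeckling method, the ratio image, can be sketched in a few lines: divide each image of the stack by the temporal mean, denoise the (more stationary) ratio, and multiply the mean back in. In the sketch below the spatial denoiser is a plain Gaussian filter used only as a placeholder for the patch-based filter of the thesis, and the parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ratio_based_despeckle(stack, sigma=2.0):
    """Sketch of ratio-image despeckling for a temporal stack of SAR
    intensity images of shape (T, H, W): denoise the ratio between each
    image and the temporal mean, then multiply the mean back in.
    gaussian_filter is only a stand-in for the patch-based denoiser."""
    stack = stack.astype(np.float64)
    temporal_mean = stack.mean(axis=0) + 1e-12      # avoid division by zero
    ratios = stack / temporal_mean                  # more stationary than the raw images
    denoised_ratios = np.stack([gaussian_filter(r, sigma) for r in ratios])
    return denoised_ratios * temporal_mean          # restore the temporal mean component
```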

Redukce šumu audionahrávek pomocí hlubokých neuronových sítí / Audio noise reduction using deep neural networks

Talár, Ondřej January 2017 (has links)
The thesis focuses on the use of a deep recurrent neural network, the Long Short-Term Memory (LSTM) architecture, for robust denoising of audio signals. LSTM is currently very attractive because of its ability to remember previous states and to update them not only according to the learning algorithm but also by examining changes in neighboring cells. The work describes the selection of the initial dataset and of the noise used, along with the creation of suitable test data. The Keras framework for Python is selected for network training. Candidate networks for possible solutions are explored and described, followed by several experiments to determine the true behavior of the neural network.
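As a rough illustration of the kind of network the abstract refers to, the following Keras sketch builds a small sequence-to-sequence LSTM that maps noisy feature frames to clean ones. The layer sizes, the number of feature bins and the use of spectrogram-like frames are assumptions for illustration; the thesis's actual architecture and training setup are not reproduced here.

```python
from tensorflow import keras

n_features = 129  # e.g. magnitude-spectrogram bins; an assumption, not from the thesis

# Minimal sequence-to-sequence LSTM denoiser: noisy frames in, clean frames out.
model = keras.Sequential([
    keras.layers.LSTM(256, return_sequences=True, input_shape=(None, n_features)),
    keras.layers.LSTM(256, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(n_features, activation="linear")),
])
model.compile(optimizer="adam", loss="mse")

# Hypothetical training call; arrays would have shape (num_clips, num_frames, n_features).
# model.fit(noisy_frames, clean_frames, batch_size=16, epochs=20)
```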

Moderní metody potlačování šumu v audiosignálu založené na fázi / Modern audio denoising with utilization of phase information

Skyva, Pavel January 2019 (has links)
The thesis deals with modern methods of audio denoising. Reconstruction of the audio signal is primarily based on the phase information of the signal and on phase derivatives. The denoising methods also use sparse signal representations. The thesis describes how the sparse coefficients are found using the proximal Condat algorithm and how the reconstructed signal is then computed from these coefficients. The reconstruction algorithms are implemented in MATLAB using the LTFAT toolbox. The reconstruction results are compared using the objective Signal-to-Noise Ratio (SNR) measure as well as by subjective evaluation.
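As a much simplified stand-in for the sparsity-based reconstruction described above, the sketch below soft-thresholds STFT coefficients and transforms back; soft-thresholding is the proximal operator of the l1 norm that a proximal scheme such as Condat's algorithm applies iteratively. The window length and threshold are assumptions, and this one-pass sketch does not reproduce the Condat algorithm or the LTFAT Gabor frames used in the thesis.

```python
import numpy as np
from scipy.signal import stft, istft

def soft_threshold(c, tau):
    """Complex soft-thresholding, the proximal operator of the l1 norm."""
    mag = np.abs(c)
    return c * np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)

def sparse_denoise(x, fs, tau=0.05):
    """Crude sparsity-based denoiser: soft-threshold the STFT coefficients
    of the signal and transform back to the time domain."""
    _, _, coeffs = stft(x, fs=fs, nperseg=1024)
    coeffs = soft_threshold(coeffs, tau)
    _, x_rec = istft(coeffs, fs=fs, nperseg=1024)
    return x_rec[:len(x)]
```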

Zpracování obrazů raných smrkových kultur snímaných MR technikou / Processing of images of early spruce needles scanned by MR technology

Raichl, Jaroslav January 2009 (has links)
This work deals with the filtering of images acquired by nuclear magnetic resonance (NMR). It covers the theory of nuclear magnetic resonance, digital filters, basic digital filter bank structures, the theory of the wavelet transform, and the calculation of the signal-to-noise ratio. The basic procedure for MR signal denoising is summarized in the theoretical part, together with a description of the denoising of images acquired by nuclear magnetic resonance. The experimental part describes filtering methods for image denoising implemented in Matlab. These methods are based on the wavelet transform and on digital filter banks with appropriate thresholding. The effectiveness of the designed filtering methods was verified on 2D NMR images, all of which were measured on the MR tomograph at the Institute of Scientific Instruments of the Academy of Sciences of the Czech Republic in Brno.
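A minimal sketch of the wavelet-thresholding approach mentioned above is given below, using PyWavelets rather than the Matlab implementation of the thesis: decompose the image, soft-threshold the detail coefficients, and reconstruct. The wavelet, decomposition level and universal-threshold rule are assumptions for illustration.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=3, threshold=None):
    """Sketch of wavelet-thresholding denoising for a 2D MR image:
    decompose, soft-threshold the detail coefficients, reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    if threshold is None:
        # Universal threshold estimated from the finest diagonal details (an assumption).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        threshold = sigma * np.sqrt(2.0 * np.log(image.size))
    denoised = [coeffs[0]]  # keep the approximation coefficients untouched
    for detail_level in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, threshold, mode="soft")
                              for d in detail_level))
    return pywt.waverec2(denoised, wavelet)
```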

Směrové reprezentace obrazů / Directional Image Representations

Zátyik, Ján January 2011 (has links)
Various methods describe an image by specific shapes, called bases or frames. With these bases the image can be transformed into a representation given by transform coefficients. The aim is to describe the image with a small number of coefficients, obtaining a so-called sparse representation. This property can be used, for example, for image compression. However, a basis cannot describe all the shapes that may appear in an image, which increases the number of transform coefficients needed to describe it. The aim of this thesis is to study the general principle of computing the transform coefficients and to compare classical methods of image analysis with some of the newer ones. It compares the effectiveness of the methods for image reconstruction from a limited number of coefficients and from a noisy image, and it compares an image interpolation method combining the characteristics of two different transforms with bicubic interpolation. The theoretical part describes the transform methods from the viewpoints of multiresolution, localization in the time and frequency domains, redundancy and directionality, and gives examples of the transforms applied to a particular image. The practical part compares the efficiency of the Fourier, wavelet, contourlet, ridgelet, Radon, wavelet packet and WaveAtom transforms in image reconstruction from a limited number of the most significant transform coefficients, as well as their ability to denoise images when thresholding techniques are applied to the transform coefficients. The last section deals with image interpolation by combining two methods and compares the results with classical bicubic interpolation.
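Reconstruction from a limited number of the most significant transform coefficients can be illustrated for one of the compared transforms, the Fourier transform, with the short sketch below: keep only the largest-magnitude coefficients, invert the transform and measure the mean squared error. The keep ratio is an assumption, and the other transforms compared in the thesis (wavelet, contourlet, ridgelet, Radon, wavelet packet, WaveAtom) would be substituted in the same way.

```python
import numpy as np

def topk_fourier_reconstruction(image, keep_ratio=0.05):
    """Keep only the largest-magnitude Fourier coefficients of a 2D image
    and reconstruct, illustrating reconstruction from a limited number of
    the most significant transform coefficients for one transform."""
    coeffs = np.fft.fft2(image)
    k = max(1, int(keep_ratio * coeffs.size))
    cutoff = np.sort(np.abs(coeffs).ravel())[-k]          # magnitude of the k-th largest
    sparse = np.where(np.abs(coeffs) >= cutoff, coeffs, 0)
    recon = np.real(np.fft.ifft2(sparse))
    mse = np.mean((image - recon) ** 2)                   # reconstruction error
    return recon, mse
```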

Regularizační metody založené na metodách nejmenších čtverců / Regularization Techniques Based on the Least Squares Method

Michenková, Marie January 2013 (has links)
Title: Regularization Techniques Based on the Least Squares Method Author: Marie Michenková Department: Department of Numerical Mathematics Supervisor: RNDr. Iveta Hnětynková, Ph.D. Abstract: In this thesis we consider a linear inverse problem Ax ≈ b, where A is a linear operator with a smoothing property and b represents an observation vector polluted by unknown noise. It was shown in [Hnětynková, Plešinger, Strakoš, 2009] that high-frequency noise is revealed in the left bidiagonalization vectors during the Golub-Kahan iterative bidiagonalization. We propose a method that identifies the iteration with maximal noise revealing and reduces a portion of the high-frequency noise in the data by subtracting the corresponding (properly scaled) left bidiagonalization vector from b. The method is tested for different types of noise. Further, Hnětynková, Plešinger, and Strakoš provided an estimator of the noise level in the data. We propose a modification of this estimator based on knowledge of the point of noise revealing. Keywords: ill-posed problems, regularization, Golub-Kahan iterative bidiagonalization, noise revealing, noise estimate, denoising
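The Golub-Kahan iterative bidiagonalization on which the noise-revealing approach is built can be sketched directly from its recurrences, as below. The sketch only generates the bidiagonalization vectors; identifying the iteration of maximal noise revealing and the proper scaling of the subtracted left vector, which are the contributions of the thesis, are not implemented here and are indicated only in a comment.

```python
import numpy as np

def golub_kahan(A, b, k):
    """First k steps of the Golub-Kahan iterative bidiagonalization of A
    started from b (no reorthogonalization, which a robust implementation
    would add). Returns the left vectors u_1..u_{k+1}, the right vectors
    v_1..v_{k+1}, and the bidiagonal coefficients."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k + 1))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    betas = [beta]
    v = A.T @ U[:, 0]
    alpha = np.linalg.norm(v)
    V[:, 0] = v / alpha
    alphas = [alpha]
    for j in range(k):
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        betas.append(beta)
        v = A.T @ U[:, j + 1] - beta * V[:, j]
        alpha = np.linalg.norm(v)
        V[:, j + 1] = v / alpha
        alphas.append(alpha)
    return U, V, np.array(alphas), np.array(betas)

# Denoising idea from the abstract (not implemented here): once the iteration
# j_star with maximal noise revealing is identified, subtract the properly
# scaled left vector, e.g.  b_denoised = b - scale * U[:, j_star].
```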

Méthodes et structures non locales pour la restauration d'images et de surfaces 3D / Non local methods and structures for images and 3D surfaces restoration

Guillemot, Thierry 03 February 2014 (has links)
In recent years, digital technologies for acquiring real-world objects and scenes have improved significantly, making it possible to obtain ever higher-quality datasets. However, the acquired signal remains corrupted by defects that cannot be corrected by the hardware and that require the use of adapted restoration methods. Until the mid-2000s, these approaches were based only on local processing of the damaged signal. With the improvement of computing performance, the neighborhood used by the filter was extended to the entire acquired dataset by exploiting its self-similar nature. These non-local approaches have mainly been used to restore regular and structured data such as images, but in the extreme case of irregular and unstructured data such as 3D point sets, their adaptation has so far received little attention. With the increasing amount of data exchanged over communication networks, new non-local methods have recently been proposed that improve the quality of the restoration by using an a priori model extracted from large sets of samples. However, this kind of method is still too costly in time and memory. In this thesis, we first propose to extend non-local methods to 3D point sets by defining a point-set surface that exploits the self-similarity of the point cloud. We then introduce a new flexible and generic data structure, called the CovTree, able to learn the distributions of a large set of samples with a limited memory capacity. Finally, we generalize collaborative restoration methods applied to 2D and 3D data by using our CovTree to learn a statistical a priori model from a large dataset.
