11 |
Agrégation d'estimateurs et méthodes à patch pour le débruitage d'images numériques / Aggregation of estimators and patch-based methods for digital image denoising. Salmon, Joseph, 09 December 2010 (has links) (PDF)
The problem studied in this thesis is the denoising of digital images corrupted by white Gaussian noise. The methods used to recover a better image are patch-based and are variants of Non-Local Means. The contributions of the thesis are both practical and theoretical. First, the influence of the various parameters of the method is studied in detail. A limitation of the usual patch-based methods in the treatment of image borders is then highlighted, and a better way of combining the information provided by the patches to estimate each pixel is given. From a theoretical point of view, a non-asymptotic framework for controlling the estimator is presented, and oracle-inequality results are given for estimators satisfying more restrictive properties. The techniques used rely on the aggregation of estimators, and more specifically on exponentially weighted aggregation. The method typically requires a measure of the risk, obtained through an unbiased estimator of it, for example by Stein's method. The denoising methods studied are analysed numerically through simulations.
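To illustrate the exponentially weighted aggregation idea mentioned above, here is a minimal Python sketch, not the author's implementation: several candidate denoised images are combined with exponential weights driven by an unbiased risk estimate such as SURE; the temperature parameter beta and the candidate estimators themselves are illustrative assumptions.

import numpy as np

def exponential_weight_aggregation(estimates, risk_estimates, beta=1.0):
    # estimates: array of shape (m, H, W) holding m candidate denoised images.
    # risk_estimates: length-m array of unbiased risk estimates (e.g. SURE) per candidate.
    # beta: temperature of the exponential weights (illustrative value, to be tuned).
    risks = np.asarray(risk_estimates, dtype=float)
    w = np.exp(-beta * (risks - risks.min()))  # lower estimated risk -> larger weight
    w /= w.sum()
    # Pixel-by-pixel weighted average of the candidates.
    return np.tensordot(w, np.asarray(estimates, dtype=float), axes=1)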
|
12 |
Odstranění šumu z obrazů kalibračních vzorků získaných elektronovým mikroskopem / Denoising of Images from Electron Microscope. Holub, Zbyněk, January 2017 (has links)
This diploma thesis focuses on removing noise from images acquired with a transmission electron microscope. The thesis describes the principles of digitising the resulting images and the individual noise components that arise during digitisation. These unwanted components degrade the quality of the resulting image. Filtering methods based on total variation minimisation were therefore selected, and their principles are described in this work. Filtering with the Non-local means filter was chosen as the reference method, since it is currently among the most widely used and most effective approaches. The quality of the filtering results was assessed objectively using the SNR, PSNR and SSIM criteria. The final part of the thesis presents all the obtained results and discusses the effectiveness of the individual filtering methods.
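As a quick illustration of the objective criteria named above, the following sketch computes SNR and PSNR for a denoised image against a clean reference (SSIM is more involved and is typically taken from a library such as scikit-image); the 8-bit peak value of 255 is an assumption.

import numpy as np

def snr_db(reference, estimate):
    # Signal-to-noise ratio of the estimate with respect to the clean reference.
    ref = reference.astype(float)
    err = ref - estimate.astype(float)
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))

def psnr_db(reference, estimate, peak=255.0):
    # Peak signal-to-noise ratio, assuming an 8-bit image (peak = 255).
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)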
|
13 |
HARDI Denoising using Non-local Means on the ℝ³ x 𝕊² Manifold. Kuurstra, Alan, 20 December 2011 (has links)
Magnetic resonance imaging (MRI) has long become one of the most powerful and accurate tools of medical diagnostic imaging. Central to the diagnostic capabilities of MRI is the notion of contrast, which is determined by the biochemical composition of examined tissue as well as by its morphology. Despite the importance of the prevalent T₁, T₂, and proton density contrast mechanisms to clinical diagnosis, none of them has demonstrated effectiveness in delineating the morphological structure of the white matter - the information which is known to be related to a wide spectrum of brain-related disorders. It is only with the recent advent of diffusion-weighted MRI that scientists have been able to perform quantitative measurements of the diffusivity of white matter, making possible the structural delineation of neural fibre tracts in the human brain. One diffusion imaging technique in particular, namely high angular resolution diffusion imaging (HARDI), has inspired a substantial number of processing methods capable of obtaining the orientational information of multiple fibres within a single voxel while boasting minimal acquisition requirements.
HARDI characterization of fibre morphology can be enhanced by increasing spatial and angular resolutions. However, doing so drastically reduces the signal-to-noise ratio. Since pronounced measurement noise tends to obscure and distort diagnostically relevant details of diffusion-weighted MR signals, increasing spatial or angular resolution necessitates application of the efficient and reliable tools of image denoising. The aim of this work is to develop an effective framework for the filtering of HARDI measurement noise which takes into account both the manifold to which the HARDI signal belongs and the statistical nature of MRI noise. These goals are accomplished using an approach rooted in non-local means (NLM) weighted averaging. The average includes samples, and therefore dependencies, from the entire manifold and the result of the average is used to deduce an estimate of the original signal value in accordance with MRI statistics. NLM averaging weights are determined adaptively based on a neighbourhood similarity measure. The novel neighbourhood comparison proposed in this thesis is one of spherical neighbourhoods, which assigns large weights to samples with similar local orientational diffusion characteristics. Moreover, the weights are designed to be invariant to both spatial rotations as well as to the particular sampling scheme in use. This thesis provides a detailed description of the proposed filtering procedure as well as experimental results with synthetic and real-life data. It is demonstrated that the proposed filter has substantially better denoising capabilities as compared to a number of alternative methods.
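The following is a minimal sketch of the non-local means weighted averaging described above, written for a plain 2-D image with square patches rather than the spherical neighbourhoods on the HARDI manifold used in the thesis; the patch size, search window and smoothing parameter h are illustrative assumptions.

import numpy as np

def nlm_pixel(image, i, j, patch=3, search=10, h=10.0):
    # Non-local means estimate of pixel (i, j): a weighted average over a search
    # window, where weights decay with the dissimilarity between patches.
    pad = patch // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    ref = padded[i:i + patch, j:j + patch]          # patch centred on (i, j)
    num, den = 0.0, 0.0
    for k in range(max(0, i - search), min(image.shape[0], i + search + 1)):
        for l in range(max(0, j - search), min(image.shape[1], j + search + 1)):
            cand = padded[k:k + patch, l:l + patch]
            d2 = np.mean((ref - cand) ** 2)         # patch dissimilarity
            w = np.exp(-d2 / (h * h))               # similar patches receive large weights
            num += w * image[k, l]
            den += w
    return num / den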
|
15 |
Novel image processing algorithms and methods for improving their robustness and operational performance. Romanenko, Ilya, January 2014 (has links)
Image processing algorithms have developed rapidly in recent years. Imaging functions are becoming more common in electronic devices, demanding better image quality and more robust image capture in challenging conditions. Increasingly complex algorithms are being developed to achieve better signal-to-noise characteristics, more accurate colours and a wider dynamic range, in order to approach the performance of the human visual system.
|
16 |
Performance Analysis of Non Local Means Algorithm using Hardware Accelerators. Antony, Daniel Sanju, January 2016 (links) (PDF)
Image de-noising forms an integral part of image processing. It is used both as a standalone algorithm for improving the quality of images obtained from a camera and as a preprocessing stage for image processing applications such as face recognition and super-resolution. Non-Local Means (NL-Means) and the Bilateral Filter are two computationally complex de-noising algorithms that can provide good de-noising results. Due to their computational complexity, the real-time applications of these filters are limited.
In this thesis, we propose the use of hardware accelerators such as GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays) to speed up filter execution and to implement these filters efficiently. The GPU-based implementation of these filters is carried out using the Open Computing Language (OpenCL). The basic objective of this research is to perform high-speed de-noising without compromising on quality. Here we implement a basic NL-Means filter, a Fast NL-Means filter, and a Bilateral filter using Gauss polynomial decomposition on the GPU. We also propose a modification to the existing NL-Means algorithm and to the Gauss polynomial Bilateral filter: instead of the Gaussian spatial kernel used in the standard algorithms, a box spatial kernel is introduced to improve execution speed. This research work is a step towards making real-time implementation of these algorithms possible. The results show that the NL-Means implementation on the GPU using OpenCL is about 25x faster than a regular CPU-based implementation for larger images (1024x1024). For Fast NL-Means, the GPU-based implementation is about 90x faster than the CPU implementation. Even with the improved execution time, embedded-system applications of NL-Means are limited by the power and thermal restrictions of the GPU device. In order to create a lower-power and faster implementation, we have implemented the algorithm on an FPGA. FPGAs are reconfigurable devices and enable us to create a custom architecture for the parallel execution of the algorithm. The execution time for smaller images (256x256) is about 200x faster than the CPU implementation and about 25x faster than the GPU execution. Moreover, the power requirement of the FPGA design (0.53 W) is much lower than that of the CPU (30 W) and the GPU (200 W).
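To illustrate the box versus Gaussian spatial kernel trade-off described above, here is a minimal CPU-side sketch of a bilateral filter in which the spatial weight can be switched between the two forms; it is not the thesis's OpenCL or FPGA implementation, and the window radius and smoothing parameters are illustrative assumptions.

import numpy as np

def bilateral_pixel(image, i, j, radius=3, sigma_s=2.0, sigma_r=20.0, spatial="gauss"):
    # Bilateral estimate of pixel (i, j) with a Gaussian or box spatial kernel.
    h, w = image.shape
    centre = float(image[i, j])
    num, den = 0.0, 0.0
    for k in range(max(0, i - radius), min(h, i + radius + 1)):
        for l in range(max(0, j - radius), min(w, j + radius + 1)):
            if spatial == "gauss":
                ws = np.exp(-((k - i) ** 2 + (l - j) ** 2) / (2.0 * sigma_s ** 2))
            else:
                # Box kernel: constant weight inside the window; dropping the
                # per-pixel exponential is what makes it cheaper in hardware.
                ws = 1.0
            wr = np.exp(-((float(image[k, l]) - centre) ** 2) / (2.0 * sigma_r ** 2))
            num += ws * wr * image[k, l]
            den += ws * wr
    return num / den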
|
17 |
Zabezpečení senzorů - ověření pravosti obrazu / Sensor Security - Verification of Image Authenticity. Juráček, Ivo, January 2020 (has links)
This diploma thesis deals with image sensor security. The goal of the thesis was to study the integrity of data acquired from image sensors. The proposed method performs source camera identification from the noise characteristics of image sensors. The research investigated the influence of denoising algorithms applied to digital images acquired from 15 different image sensors. Finally, a statistical evaluation of the computed results was carried out.
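The following is a minimal sketch of the general noise-residual approach to source camera identification (a common PRNU-style pipeline, not necessarily the exact method used in the thesis): a reference fingerprint is averaged from the residuals of several images taken by one camera and compared with the residual of a query image by normalised correlation. The Gaussian denoiser is a stand-in assumption; any denoising filter can take its place.

import numpy as np
from scipy.ndimage import gaussian_filter  # stand-in denoiser for illustration

def noise_residual(image, sigma=1.0):
    # Residual = image minus its denoised version; it carries the sensor noise pattern.
    img = image.astype(float)
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    # Average the residuals of several images from the same camera.
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(fingerprint, residual):
    # Normalised cross-correlation; a high value suggests the query image
    # was taken by the camera that produced the fingerprint.
    f = fingerprint - fingerprint.mean()
    r = residual - residual.mean()
    return float(np.sum(f * r) / np.sqrt(np.sum(f ** 2) * np.sum(r ** 2)))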
|
18 |
Analýza 3D CT obrazových dat se zaměřením na detekci a klasifikaci specifických struktur tkání / Analysis of 3D CT image data aimed at detection and classification of specific tissue structures. Šalplachta, Jakub, January 2017 (links)
This thesis deals with the segmentation and classification of paraspinal muscle and subcutaneous adipose tissue in 3D CT image data, in order to use them subsequently as internal calibration phantoms for measuring the bone mineral density (BMD) of a vertebra. The chosen methods were tested and then evaluated in terms of classification correctness and overall suitability for the subsequent BMD calculation. The algorithms were tested in the Matlab® programming environment on a patient database created for this purpose, containing the lumbar spines of twelve patients. The thesis also contains a theoretical survey of bone mineral density measurement, of segmentation and classification methods, and a description of the practical part of the work.
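As a rough illustration of how segmented tissues can serve as internal calibration phantoms, the sketch below fits a two-point linear mapping from the mean CT numbers of fat and muscle to equivalent density and applies it to the vertebral voxels; the reference density values are placeholders, not those used in the thesis.

import numpy as np

def internal_calibration(hu_fat_mean, hu_muscle_mean, rho_fat=-50.0, rho_muscle=50.0):
    # Two-point linear mapping from CT numbers (HU) to equivalent density.
    # rho_fat and rho_muscle (mg/cm^3) are placeholder reference values.
    slope = (rho_muscle - rho_fat) / (hu_muscle_mean - hu_fat_mean)
    intercept = rho_fat - slope * hu_fat_mean
    return slope, intercept

def vertebral_bmd(vertebra_hu_values, slope, intercept):
    # Mean calibrated density over the segmented vertebral voxels.
    hu = np.asarray(vertebra_hu_values, dtype=float)
    return float(np.mean(slope * hu + intercept))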
|
19 |
Blancheur du résidu pour le débruitage d'image / Residual whiteness for image denoising. Riot, Paul, 06 February 2018 (links)
We propose an advanced use of the whiteness hypothesis on the noise to improve denoising performance, and we show the interest of evaluating the residual whiteness through correlation measures in several applicative settings. First, in a variational denoising framework, we show that a cost term locally constraining the residual whiteness can replace the L2 data-fidelity term commonly used in the white Gaussian case, while significantly improving denoising performance. This term is then complemented by cost terms constraining the raw moments of the residual, which provide a means of controlling its distribution. In the second part of our work, we propose an alternative to the likelihood ratio, which leads to the L2 norm in the white Gaussian case, for evaluating the dissimilarity between noisy patches. The introduced metric, based on the autocorrelation of the difference of the patches, achieves better performance both for denoising and for recognising similar patches. Finally, we address no-reference quality evaluation and local model selection. Once again, measuring the residual whiteness provides relevant information for locally estimating the fidelity of the denoising.
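A minimal sketch of a residual-whiteness score of the kind discussed above: the empirical autocorrelation of the denoising residual is computed at a few small offsets and the energy at non-zero lags is reported, which should be close to zero when the residual is white; the particular set of lags is an illustrative assumption.

import numpy as np

def residual_autocorrelation(residual, lags=((0, 1), (1, 0), (1, 1))):
    # Normalised autocorrelation of the residual at small non-negative offsets.
    r = residual.astype(float) - residual.mean()
    var = np.mean(r ** 2)
    h, w = r.shape
    coeffs = {}
    for di, dj in lags:
        a = r[di:, dj:]
        b = r[:h - di, :w - dj]
        coeffs[(di, dj)] = float(np.mean(a * b) / var)
    return coeffs

def whiteness_score(residual):
    # Sum of squared correlations at non-zero lags: near zero for a white residual.
    return sum(c ** 2 for c in residual_autocorrelation(residual).values())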
|