61

奇異值分解在影像處理上之運用 / Singular Value Decomposition: Application to Image Processing

顏佑君, Yen, Yu Chun Unknown Date (has links)
Singular value decomposition (SVD) is a robust and reliable matrix decomposition method. It has many attractive properties, such as low-rank approximation. In the era of big data, vast amounts of data are generated rapidly. Offering attractive visual content and important information, images have become a common and useful type of data. Recently, SVD has been utilized in several image processing and analysis problems. This research focuses on the problems of image compression and image denoising for restoration. We propose to apply the SVD method to capture the main signal image subspace for efficient image compression, and to screen out the noise image subspace for image restoration, using the cumulative proportion of the singular values as the criterion for selecting how many singular values to retain. Simulations are conducted to investigate the proposed method. We find that the SVD method gives satisfactory results for image compression. However, in image denoising, the performance of the SVD method varies depending on the original image, the noise added, and the threshold used.
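As an illustration of this approach, the sketch below builds a rank-k approximation of a grayscale image with NumPy and chooses k from the cumulative proportion of singular values, the selection criterion mentioned in the abstract. It is a minimal illustrative example, not the thesis's implementation; the `energy` threshold and the random test image are assumptions.

```python
import numpy as np

def svd_compress(image, energy=0.95):
    """Rank-k approximation of a grayscale image, with k chosen so that the
    retained singular values account for `energy` of their total sum."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    cumulative = np.cumsum(s) / np.sum(s)      # cumulative proportion of singular values
    k = int(np.searchsorted(cumulative, energy)) + 1
    approx = (U[:, :k] * s[:k]) @ Vt[:k, :]    # rank-k reconstruction
    return approx, k

# Example on a random stand-in for real pixel data.
img = np.random.rand(256, 256)
compressed, k = svd_compress(img, energy=0.95)
print(k, np.linalg.norm(img - compressed) / np.linalg.norm(img))
```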
62

Medical Image Processing on the GPU : Past, Present and Future

Eklund, Anders, Dufort, Paul, Forsberg, Daniel, LaConte, Stephen January 2013 (has links)
Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges.
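The basic operations surveyed in the review (e.g., filtering) map naturally onto a GPU. Below is a minimal sketch of offloading a Gaussian filter to the GPU; the choice of CuPy is an assumption made for illustration (the review itself is not tied to any particular library), and a CUDA-capable GPU is required.

```python
import numpy as np
import cupy as cp
from cupyx.scipy import ndimage as cp_ndimage

volume = np.random.rand(256, 256, 64).astype(np.float32)  # stand-in for a CT/MRI volume

# Move the data to the GPU, filter there, and copy the result back to host memory.
gpu_volume = cp.asarray(volume)
gpu_smoothed = cp_ndimage.gaussian_filter(gpu_volume, sigma=2.0)
smoothed = cp.asnumpy(gpu_smoothed)
print(smoothed.shape)
```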
63

Abordagens não-locais para filtragem de ruído Poisson / Nonlocal approaches to Poisson noise filtering

Bindilatti, André de Andrade 23 May 2014 (has links)
Universidade Federal de São Carlos / A common problem in applications such as positron emission tomography, low-exposure X-ray imaging, fluorescence microscopy, and optical and infrared astronomy, among others, is the degradation of the original signal by Poisson noise. This problem arises in applications in which the image acquisition process is based on counting photons reaching a detector surface during a given exposure time. Recently, a new algorithm for image denoising, called Nonlocal Means (NLM), was proposed. The NLM algorithm consists of a nonlocal approach that exploits the inherent redundancy of the image for denoising, that is, the principle that natural images contain many similar, yet locally disjoint, regions. NLM was originally proposed for additive noise reduction. The goal of this work is to extend the NLM algorithm to the filtering of Poisson noise, which is signal-dependent. To achieve this goal, symmetric divergences, also known as stochastic distances, are applied as similarity metrics in the NLM algorithm. Stochastic distances assume a parametric model for the data distribution and can therefore accommodate different stochastic noise models; however, knowledge of the model parameters is necessary to compute them. In this research, estimation and non-local filtering schemes were considered under the Poisson noise hypothesis, leading to results competitive with the state of the art.
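To make the idea of using a stochastic distance inside NLM concrete, the sketch below uses the symmetrised Kullback–Leibler divergence between pixelwise Poisson intensities as the patch similarity measure. It is a naive, illustrative implementation with assumed parameters (patch and search-window sizes, filtering parameter `h`), not the estimation-and-filtering scheme developed in the dissertation.

```python
import numpy as np

def poisson_skl(p, q, eps=1e-8):
    """Symmetrised Kullback-Leibler divergence between two patches treated as
    pixelwise Poisson intensities (one of several possible stochastic distances)."""
    p, q = p + eps, q + eps
    return np.sum((p - q) * (np.log(p) - np.log(q)))

def nlm_poisson(img, patch=3, search=7, h=2.0):
    """Naive (slow) non-local means using the divergence above as the similarity."""
    pad, off = patch // 2, search // 2
    padded = np.pad(img.astype(float), pad + off, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ic, jc = i + pad + off, j + pad + off
            ref = padded[ic - pad:ic + pad + 1, jc - pad:jc + pad + 1]
            weights, values = [], []
            for di in range(-off, off + 1):
                for dj in range(-off, off + 1):
                    cand = padded[ic + di - pad:ic + di + pad + 1,
                                  jc + dj - pad:jc + dj + pad + 1]
                    d = poisson_skl(ref, cand)
                    weights.append(np.exp(-d / (h * patch * patch)))
                    values.append(padded[ic + di, jc + dj])
            out[i, j] = np.average(values, weights=weights)
    return out

# Example on a small synthetic photon-count image.
counts = np.random.poisson(lam=10.0, size=(32, 32))
print(nlm_poisson(counts).mean())
```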
64

Studies on Kernel Based Edge Detection and Hyper Parameter Selection in Image Restoration and Diffuse Optical Image Reconstruction

Narayana Swamy, Yamuna January 2017 (has links) (PDF)
Computational imaging has been playing an important role in understanding and analysing captured images. Both image segmentation and restoration are integral parts of computational imaging. The studies performed in this thesis are focused on developing novel algorithms for image segmentation and restoration. A study on the use of the Morozov discrepancy principle in diffuse optical imaging is also presented here to show that hyper-parameter selection can be performed with ease. The Laplacian of Gaussian (LoG) and Canny operators use Gaussian smoothing before applying the derivative operator for edge detection in real images. The LoG kernel is based on the second derivative and is highly sensitive to noise when compared to the Canny edge detector. A new edge detection kernel, called Helmholtz of Gaussian (HoG), which provides higher diffusivity, is developed in this thesis and shown to be more robust to noise. The formulation of the developed HoG kernel is similar to that of LoG. It is also shown, both theoretically and experimentally, that LoG is a special case of HoG. This kernel, when used as an edge detector, exhibited superior performance compared to the LoG, Canny and wavelet-based edge detectors for standard test cases in both one and two dimensions. The linear inverse problem encountered in the restoration of blurred noisy images is typically solved via Tikhonov minimization. The outcome (restored image) of such minimization is highly dependent on the choice of regularization parameter. In the absence of prior information about the noise levels in the blurred image, finding this regularization/hyper parameter in an automated way becomes extremely challenging. Available methods like Generalized Cross Validation (GCV) may not yield optimal results in all cases. A novel method that relies on the minimal residual method for finding the regularization parameter automatically is proposed here and systematically compared with the GCV method. It is shown that the proposed method is superior to the GCV method in providing high-quality restored images in cases where the noise levels are high. Diffuse optical tomography uses near infrared (NIR) light as the probing medium to recover the distributions of tissue optical properties, with an ability to provide functional information about the tissue under investigation. As NIR light propagation in the tissue is dominated by scattering, the image reconstruction problem (inverse problem) is non-linear and ill-posed, requiring the use of advanced computational methods to compensate for this. An automated method for selection of the regularization/hyper parameter that incorporates the Morozov discrepancy principle (MDP) into the Tikhonov method is proposed and shown to be a promising method for dynamic diffuse optical tomography.
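The HoG kernel itself is not reproduced here since its formulation is not given in the abstract; the sketch below instead shows the LoG baseline it is compared against, using SciPy's `gaussian_laplace` and a simple zero-crossing test. The parameter values are assumptions chosen for illustration.

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0, thresh=0.02):
    """Laplacian-of-Gaussian edge map: smooth, apply the Laplacian, then mark
    zero crossings whose local response magnitude exceeds a threshold."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    sign = np.sign(log)
    # A zero crossing exists where the sign changes between neighbouring pixels.
    zero_cross = (np.abs(np.diff(sign, axis=0, prepend=sign[:1])) > 0) | \
                 (np.abs(np.diff(sign, axis=1, prepend=sign[:, :1])) > 0)
    strength = ndimage.maximum_filter(np.abs(log), size=3)
    return zero_cross & (strength > thresh * np.abs(log).max())

edges = log_edges(np.random.rand(128, 128))
print(edges.sum(), "edge pixels")
```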
65

New PDE models for imaging problems and applications

Calatroni, Luca January 2016 (has links)
Variational methods and Partial Differential Equations (PDEs) have been extensively employed for the mathematical formulation of a myriad of problems describing physical phenomena such as heat propagation, thermodynamic transformations and many more. In imaging, PDEs following variational principles are often considered. In their general form these models combine a regularisation and a data fitting term, balancing one against the other appropriately. Total variation (TV) regularisation is often used due to its edge-preserving and smoothing properties. In this thesis, we focus on the design of TV-based models for several different applications. We start by considering PDE models encoding higher-order derivatives to overcome well-known TV reconstruction drawbacks. Due to their high differential order and nonlinear nature, the computation of the numerical solution of these equations is often challenging. In this thesis, we propose directional splitting techniques and use Newton-type methods that, despite these numerical hurdles, render reliable and efficient computational schemes. Next, we discuss the problem of choosing the appropriate data fitting term in the case when multiple noise statistics are present in the data due, for instance, to different acquisition and transmission problems. We propose a novel variational model which encodes the different noise distributions appropriately and consistently in this case. Balancing the effect of the regularisation against the data fitting is also crucial. To this end, we consider a learning approach which estimates the optimal ratio between the two by using training sets of examples via bilevel optimisation. Numerically, we use a combination of semismooth Newton (SSN) and quasi-Newton methods to solve the problem efficiently. Finally, we consider TV-based models in the framework of graphs for image segmentation problems. Here, spectral properties combined with matrix completion techniques are needed to overcome the computational limitations due to the large amount of image data. Further, a semi-supervised technique for the measurement of the segmented region by means of the Hough transform is proposed.
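As a point of reference for the TV-based models discussed above, the sketch below implements the classical first-order ROF model with Chambolle's projection algorithm (periodic boundaries for brevity). It is only an illustrative baseline; the higher-order models, directional splitting and Newton-type schemes proposed in the thesis are not reproduced here, and the regularisation weight is an assumption.

```python
import numpy as np

def grad(u):
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    return gx, gy

def div(px, py):
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def tv_denoise_chambolle(f, lam=0.15, iters=100, tau=0.125):
    """Chambolle's projection algorithm for the ROF model
    min_u TV(u) + ||u - f||^2 / (2*lam); tau <= 1/8 ensures convergence."""
    f = f.astype(float)
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
print(tv_denoise_chambolle(noisy).std())
```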
66

Performance Analysis of Non Local Means Algorithm using Hardware Accelerators

Antony, Daniel Sanju January 2016 (has links) (PDF)
Image de-noising forms an integral part of image processing. It is used as a standalone algorithm for improving the quality of images obtained from a camera, as well as a preprocessing stage for image processing applications like face recognition, super resolution, etc. Non Local Means (NL-Means) and the Bilateral Filter are two computationally complex de-noising algorithms which can provide good de-noising results. Due to their computational complexity, the real-time applications of these filters are limited. In this thesis, we propose the use of hardware accelerators such as GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays) to speed up filter execution and to implement the filters efficiently. GPU-based implementation of these filters is carried out using the Open Computing Language (OpenCL). The basic objective of this research is to perform high-speed de-noising without compromising on quality. Here we implement a basic NL-Means filter, a fast NL-Means filter, and a Bilateral filter using Gauss polynomial decomposition on the GPU. We also propose a modification to the existing NL-Means algorithm and the Gauss polynomial Bilateral filter: instead of the Gaussian spatial kernel used in the standard algorithm, a box spatial kernel is introduced to improve the speed of execution of the algorithm. This research work is a step towards making real-time implementation of these algorithms possible. It has been found from the results that the NL-Means implementation on the GPU using OpenCL is about 25x faster than a regular CPU-based implementation for larger images (1024x1024). For fast NL-Means, the GPU-based implementation is about 90x faster than the CPU implementation. Even with the improved execution time, the embedded-system application of NL-Means is limited due to the power and thermal restrictions of the GPU device. In order to create a lower-power and faster implementation, we have implemented the algorithm on an FPGA. FPGAs are reconfigurable devices and enable us to create a custom architecture for the parallel execution of the algorithm. It was found that execution for smaller images (256x256) is about 200x faster than on the CPU and about 25x faster than on the GPU. Moreover, the power requirements of the FPGA design of the algorithm (0.53 W) are much lower than those of the CPU (30 W) and GPU (200 W).
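The modification described above, replacing the Gaussian spatial kernel with a box kernel, can be illustrated on a plain brute-force CPU bilateral filter, as in the sketch below. This is not the OpenCL/FPGA implementation developed in the thesis; the parameter values and the kernel switch are assumptions for illustration only.

```python
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.1, spatial='gaussian'):
    """Brute-force bilateral filter; `spatial` selects either a Gaussian or a
    box (constant) spatial kernel, the substitution discussed in the abstract."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    if spatial == 'gaussian':
        w_s = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    else:                                    # box kernel: every neighbour weighted equally
        w_s = np.ones_like(xs, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_r = np.exp(-((window - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = w_s * w_r
            out[i, j] = np.sum(w * window) / np.sum(w)
    return out

noisy = np.random.rand(64, 64)
print(np.allclose(bilateral(noisy, spatial='box'), bilateral(noisy)))  # generally False
```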
67

Blancheur du résidu pour le débruitage d'image / Residual whiteness for image denoising

Riot, Paul 06 February 2018 (has links)
We propose an advanced use of the noise-whiteness hypothesis to improve denoising performance, and we show the value of evaluating the whiteness of the residual through correlation measures in several application settings. First, in a variational denoising framework, we show that a cost term locally constraining the residual whiteness can replace the L2 data-fidelity term commonly used in the white Gaussian case, while significantly improving denoising performance. This term is then complemented by cost terms constraining the raw moments of the residual, which provide a means of controlling the residual distribution. In the second part of our work, we propose an alternative to the likelihood ratio (which leads to the L2 norm in the white Gaussian case) for evaluating the dissimilarity between noisy patches. The introduced metric, based on the autocorrelation of the difference of patches, achieves better performance for both denoising and the recognition of similar patches. Finally, we address the problems of no-reference quality evaluation and local model selection. Once again, measuring the whiteness of the residual provides meaningful information for locally estimating the fidelity of the denoising.
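One simple way to quantify residual whiteness with correlation measures, in the spirit of the abstract, is to sum the squared normalised autocorrelation coefficients of the residual over small non-zero lags, as sketched below. This is an illustrative measure with assumed parameters, not the exact criterion or cost terms used in the thesis.

```python
import numpy as np
from scipy.signal import fftconvolve

def whiteness_score(residual, max_lag=4):
    """Sum of squared normalised autocorrelation coefficients of the residual
    over small non-zero lags; values closer to zero indicate a whiter residual."""
    r = residual - residual.mean()
    acf = fftconvolve(r, r[::-1, ::-1], mode='full')
    c = np.array(acf.shape) // 2                   # location of the zero-lag coefficient
    acf = acf / acf[c[0], c[1]]                    # normalise so that zero lag == 1
    window = acf[c[0] - max_lag:c[0] + max_lag + 1,
                 c[1] - max_lag:c[1] + max_lag + 1].copy()
    window[max_lag, max_lag] = 0.0                 # discard the zero-lag term
    return np.sum(window ** 2)

# A white-noise residual should score much lower than a spatially correlated one.
rng = np.random.default_rng(0)
noise = rng.normal(size=(128, 128))
structured = np.cumsum(noise, axis=1)              # strongly correlated residual
print(whiteness_score(noise), whiteness_score(structured))
```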
68

Some advances in patch-based image denoising / Quelques avancées dans le débruitage d'images par patchs

Houdard, Antoine 12 October 2018 (has links)
This thesis studies non-local methods for image processing, with denoising as the main application, although the methods studied are generic enough to apply to other inverse problems in imaging. Natural images contain redundant structures, and this property can be exploited for restoration purposes. A common way to use this self-similarity is to decompose the image into patches, which can then be grouped, compared and filtered together. In the first chapter, "global denoising" is reframed in the classical formalism of diagonal estimation and its asymptotic behaviour is studied in the oracle case. Precise conditions on both the image and the global filter are introduced to ensure and quantify convergence. The second chapter is dedicated to the study of Gaussian and Gaussian-mixture priors for patch-based image denoising. Such priors are widely used for image restoration. We propose some ideas to answer the following questions: Why are Gaussian priors so widely used? What information do they encode about the image? The third chapter proposes a probabilistic high-dimensional mixture model for the noisy patches. This model adopts a sparse modelling that assumes the data lie on group-specific subspaces of low dimensionality, yielding a denoising algorithm with state-of-the-art performance. The last chapter explores different ways of aggregating the patches and proposes a framework that expresses the aggregation step in the form of a least-squares problem.
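The patch decomposition/aggregation pipeline underlying such methods can be sketched as follows: overlapping patches are extracted, (hypothetically) denoised, and aggregated back by uniform averaging, which is the least-squares solution when each patch votes for its pixels with equal weight. This is a generic illustration, not the aggregation schemes studied in the final chapter; the patch size and stride are assumptions.

```python
import numpy as np

def extract_patches(img, p=8, stride=4):
    H, W = img.shape
    coords = [(i, j) for i in range(0, H - p + 1, stride)
                     for j in range(0, W - p + 1, stride)]
    patches = np.stack([img[i:i + p, j:j + p] for i, j in coords])
    return patches, coords

def aggregate(patches, coords, shape, p=8):
    """Uniform averaging of overlapping patches: the least-squares solution
    when every patch constrains its pixels with equal weight."""
    acc = np.zeros(shape)
    count = np.zeros(shape)
    for patch, (i, j) in zip(patches, coords):
        acc[i:i + p, j:j + p] += patch
        count[i:i + p, j:j + p] += 1
    return acc / np.maximum(count, 1)

img = np.random.rand(64, 64)
patches, coords = extract_patches(img)
# A real pipeline would denoise `patches` here (e.g., with a Gaussian mixture prior).
rebuilt = aggregate(patches, coords, img.shape)
print(np.abs(rebuilt - img).max())   # ~0 when every pixel is covered by some patch
```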
69

Change Detection Using Multitemporal SAR Images

Yousif, Osama January 2013 (has links)
Multitemporal SAR images have been used successfully for the detection of different types of environmental changes. The detection of urban change using SAR images is complicated by the special characteristics of SAR images—for example, the existence of speckle and the complex mixture of the urban environment. This thesis investigates the detection of urban changes using SAR images with the following specific objectives: (1) to investigate unsupervised change detection, (2) to investigate reduction of the speckle effect and (3) to investigate spatio-contextual change detection. Beijing and Shanghai, the largest cities in China, were selected as study areas. Multitemporal SAR images acquired by the ERS-2 SAR (1998~1999) and Envisat ASAR (2008~2009) sensors were used to detect changes that have occurred in these cities. Unsupervised change detection using SAR images is investigated using the Kittler-Illingworth algorithm. The problem associated with the diversity of urban changes—namely, more than one typology of change—is addressed using the modified ratio operator. This operator clusters both positive and negative changes on one side of the change-image histogram. To model the statistics of the changed and unchanged classes, four different probability density functions were tested. The analysis indicates that the quality of the resulting change map strongly depends on the density model chosen. The analysis also suggests that the use of a local adaptive filter (e.g., enhanced Lee) removes fine geometric details from the scene. Speckle suppression and geometric detail preservation in SAR-based change detection are addressed using the nonlocal means (NLM) algorithm, in which denoising is achieved through a weighted averaging process whose weights are a function of the similarity of small image patches defined around each pixel in the image. To decrease the computational complexity, the PCA technique is used to reduce the dimensionality of the neighbourhood feature vectors. Simple methods to estimate the dimensionality of the new space and the required noise variance are proposed. The experimental results show that the NLM algorithm outperformed traditional local adaptive filters (e.g., enhanced Lee) in eliminating the effect of speckle and in maintaining the geometric structures in the scene. The analysis also indicates that filtering the change variable instead of the individual SAR images is effective in terms of both the quality of the results and the time needed to carry out the computation. The third part of the research focuses on the application of Markov random fields (MRF) to change detection using SAR images. The MRF-based change detection algorithm shows limited capacity to simultaneously maintain fine geometric detail in urban areas and combat the effect of speckle noise. This problem has been addressed through the introduction of a global constraint on the pixels' class labels. Based on NLM theory, a global probability model is developed. The iterated conditional modes (ICM) scheme for the optimization of the MAP-MRF criterion function is extended to include a step that forces the maximization of the global probability model. The experimental results show that the proposed algorithm is better at preserving fine structural detail, effective in reducing the effect of speckle, less sensitive to the value of the contextual parameter, and less affected by the quality of the initial change map compared with the traditional MRF-based change detection algorithm.
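For reference, the Kittler–Illingworth minimum-error thresholding used for the unsupervised step can be sketched as below, here with Gaussian class models for simplicity (the thesis tests four different density models and applies the threshold to a modified-ratio change image). The bin count, tolerances and synthetic demo data are assumptions.

```python
import numpy as np

def kittler_illingworth(change_img, bins=256):
    """Minimum-error (Kittler-Illingworth) threshold assuming two Gaussian
    classes (unchanged / changed) in the change-image histogram."""
    hist, edges = np.histogram(np.asarray(change_img).ravel(), bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_J, best_T = np.inf, centers[0]
    for t in range(1, bins - 1):
        p1, p2 = hist[:t].sum(), hist[t:].sum()
        if p1 < 1e-6 or p2 < 1e-6:
            continue
        m1 = (hist[:t] * centers[:t]).sum() / p1
        m2 = (hist[t:] * centers[t:]).sum() / p2
        v1 = (hist[:t] * (centers[:t] - m1) ** 2).sum() / p1
        v2 = (hist[t:] * (centers[t:] - m2) ** 2).sum() / p2
        if v1 < 1e-12 or v2 < 1e-12:
            continue
        # Kittler-Illingworth criterion J(T) with Gaussian class models.
        J = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if J < best_J:
            best_J, best_T = J, centers[t]
    return best_T

# Demo on a synthetic bimodal "change variable" (most pixels unchanged).
rng = np.random.default_rng(1)
change = np.concatenate([rng.normal(0.2, 0.05, 9000), rng.normal(0.8, 0.1, 1000)])
print(kittler_illingworth(change))
```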
70

Advanced Algorithms for X-ray CT Image Reconstruction and Processing

Madhuri Mahendra Nagare (17897678) 05 February 2024 (has links)
<p dir="ltr">X-ray computed tomography (CT) is one of the most widely used imaging modalities for medical diagnosis. Improving the quality of clinical CT images while keeping the X-ray dosage of patients low has been an active area of research. Recently, there have been two major technological advances in the commercial CT systems. The first is the use of Deep Neural Networks (DNN) to denoise and sharpen CT images, and the second is use of photon counting detectors (PCD) which provide higher spectral and spatial resolution compared to the conventional energy-integrating detectors. While both techniques have potential to improve the quality of CT images significantly, there are still challenges to improve the quality further.</p><p dir="ltr"><br></p><p dir="ltr">A denoising or sharpening algorithm for CT images must retain a favorable texture which is critically important for radiologists. However, commonly used methodologies in DNN training produce over-smooth images lacking texture. The lack of texture is a systematic error leading to a biased estimator.</p><p><br></p><p dir="ltr">In the first portion of this thesis, we propose three algorithms to reduce the bias, thereby to retain the favorable texture. The first method proposes a novel approach to designing a loss function that penalizes bias in the image more while training a DNN, producing more texture and detail in results. Our experiments verify that the proposed loss function outperforms the commonly used mean squared error loss function. The second algorithm proposes a novel approach to designing training pairs for a DNN-based sharpener. While conventional sharpeners employ noise-free ground truth producing over-smooth images, the proposed Noise Preserving Sharpening Filter (NPSF) adds appropriately scaled noise to both the input and the ground truth to keep the noise texture in the sharpened result similar to that of the input. Our evaluations show that the NPSF can sharpen noisy images while producing desired noise level and texture. The above two algorithms merely control the amount of texture retained and are not designed to produce texture that matches to a target texture. A Generative Adversarial Network (GAN) can produce the target texture. However, naive application of GANs can introduce inaccurate or even unreal image detail. Therefore, we propose a Texture Matching GAN (TMGAN) that uses parallel generators to separate anatomical features from the generated texture, which allows the GAN to be trained to match the target texture without directly affecting the underlying CT image. We demonstrate that TMGAN generates enhanced image quality while also producing texture that is desirable for clinical application.</p><p><br></p><p dir="ltr">In the second portion of this research, we propose a novel algorithm for the optimal statistical processing of photon-counting detector data for CT reconstruction. Current reconstruction and material decomposition algorithms for photon counting CT are not able to utilize simultaneously both the measured spectral information and advanced prior models. We propose a modular framework based on Multi-Agent Consensus Equilibrium (MACE) to obtain material decomposition and reconstructions using the PCD data. Our method employs a detector agent that uses PCD measurements to update an estimate along with a prior agent that enforces both physical and empirical knowledge about the material-decomposed sinograms. 
Importantly, the modular framework allows the two agents to be designed and optimized independently. Our evaluations on simulated data show promising results.</p>
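The exact loss proposed in the thesis is not given in the abstract; the sketch below shows one hypothetical way to penalize bias more during DNN training, adding a patchwise mean-error (bias) term to the usual MSE in PyTorch. The `alpha` weight and the patch size are assumptions, and the dummy tensors merely stand in for denoised and ground-truth CT slices.

```python
import torch
import torch.nn.functional as F

def bias_weighted_loss(pred, target, patch=8, alpha=1.0):
    """Illustrative loss: pixelwise MSE plus a penalty on the mean error over
    local non-overlapping patches, i.e. the bias (systematic) component.
    This is a hypothetical formulation, not the loss proposed in the thesis."""
    mse = F.mse_loss(pred, target)
    err = pred - target                                   # (N, C, H, W)
    local_bias = F.avg_pool2d(err, kernel_size=patch)     # patchwise mean error
    return mse + alpha * (local_bias ** 2).mean()

pred = torch.randn(2, 1, 64, 64, requires_grad=True)
target = torch.randn(2, 1, 64, 64)
loss = bias_weighted_loss(pred, target)
loss.backward()
print(float(loss))
```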
