About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Wavelet Shrinkage Based Image Denoising using Soft Computing

Bai, Rong, 08 August 2008
Noise reduction is an open problem that has received considerable attention in the literature for several decades. Over the last two decades, wavelet-based methods have been applied to the problem and shown to outperform the traditional Wiener, median, and modified Lee filters in terms of mean squared error (MSE), peak signal-to-noise ratio (PSNR), and other evaluation metrics. In this research, two approaches to high-performance denoising algorithms are proposed, both based on soft computing tools such as fuzzy logic, neural networks, and genetic algorithms. First, an improved additive noise reduction method for digital grey-scale natural images is proposed, which uses an interval type-2 fuzzy logic system to shrink wavelet coefficients. This method extends a recently published approach to additive noise reduction based on type-1 fuzzy logic system wavelet shrinkage. Unlike the type-1 method, the proposed approach employs a thresholding filter that adjusts the wavelet coefficients according to the linguistic uncertainty in neighborhood values and the inter-scale dependencies and intra-scale correlations of wavelet coefficients at different resolutions, exploiting interval type-2 fuzzy set theory. Experimental results show that the proposed approach can efficiently and rapidly remove additive noise from digital grey-scale images. Objective analysis and visual observation show that it outperforms current fuzzy non-wavelet methods and fuzzy wavelet-based methods, and is comparable with some recent but more complex wavelet methods, such as Hidden Markov Model based denoising. The main differences between the proposed approach and other wavelet shrinkage based approaches, and its main improvements, are also illustrated in this thesis. Second, another improved additive noise reduction method is proposed, based on fusing the results of different filters with a Fuzzy Neural Network (FNN). The method combines the advantages of these filters and smooths out additive noise effectively while preserving image details (e.g. edges and lines). A Genetic Algorithm (GA) is applied to choose the optimal parameters of the FNN. Experimental results show that the method is powerful for removing noise from natural images: its MSE is lower, and its PSNR higher, than those of any of the individual filters used in the fusion. Finally, the two proposed approaches are compared from several points of view: objective analysis in terms of MSE, PSNR, an image quality index (IQI) based on quality assessment of distorted images, and an information-theoretic criterion (ITC) based on a human vision model; computational cost; universality; and human observation. The results show that the GA-optimized FNN-based algorithm performs best among all tested approaches. Important considerations for the proposed approaches and future work are discussed.
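
The interval type-2 fuzzy shrinkage rules above build on classical wavelet-shrinkage denoising. As a point of reference, here is a minimal sketch of that baseline (plain soft thresholding of detail coefficients with PyWavelets, not the thesis's fuzzy method; the wavelet choice, decomposition level, and universal threshold are illustrative assumptions):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_soft_denoise(img, wavelet="db8", level=3):
    """Baseline wavelet-shrinkage denoising of a 2-D array via soft thresholding."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Estimate the noise sigma from the finest diagonal subband (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))  # universal (VisuShrink) threshold
    denoised = [coeffs[0]]  # keep the approximation subband untouched
    for cH, cV, cD in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(c, thresh, mode="soft")
                              for c in (cH, cV, cD)))
    return pywt.waverec2(denoised, wavelet)
```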
23

Variable Splitting as a Key to Efficient Image Reconstruction

Dolui, Sudipto, January 2012
The reconstruction of digital images from their degraded measurements has always been of central importance in numerous applications of imaging sciences. In practice, acquired imaging data is typically contaminated by various types of degradation, usually related to the imperfections of image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover a close approximation of it, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied sciences creates the need for methods of image restoration which are both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction generally comes at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework which allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting as a tool for simplifying complex reconstruction problems by replacing them with a sequence of simpler, easily solvable ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. As specific applications of practical importance, we consider the reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data as well as the blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to the problem of image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of neighbourhood operation as well as to derive a unifying approach to denoising of imaging data under a variety of different noise scenarios.
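
To make the idea of variable splitting concrete, here is a minimal sketch (a generic ADMM-style split, not taken from the thesis): for denoising with an L1 penalty, introducing the auxiliary variable z = x decouples the smooth quadratic data term from the non-smooth regularizer, so each subproblem has a closed-form update. The values of lam and rho are illustrative.

```python
import numpy as np

def admm_l1_denoise(b, lam=0.1, rho=1.0, n_iter=100):
    """Solve min_x 0.5*||x - b||^2 + lam*||x||_1 via the split z = x (ADMM).

    The split turns one hard problem into two easy subproblems:
    a quadratic x-update and a closed-form soft-thresholding z-update.
    """
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)  # scaled dual variable
    for _ in range(n_iter):
        x = (b + rho * (z - u)) / (1.0 + rho)  # quadratic subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrinkage
        u += x - z  # dual update enforcing the split z = x
    return z
```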
24

Wavelet-based Outlier Detection And Denoising Of Airborne Laser Scanning Data

Akyay, Tolga, 01 December 2008
The method of airborne laser scanning (also known as LIDAR) has recently emerged as an efficient way of generating high-quality digital surface and elevation models. In this work, wavelet-based outlier detection and different wavelet thresholding (wavelet shrinkage) methods for denoising airborne laser scanning data are discussed. The task is to investigate the effect of wavelet-based outlier detection and to find out which wavelet thresholding methods provide the best denoising results for post-processing. Data and results are analyzed and visualized using a MATLAB program developed during this work.
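
A small sketch of the kind of comparison the thesis describes (not the author's MATLAB code; it contrasts hard and soft thresholding on a synthetic 1-D elevation profile using PyWavelets, with an assumed noise level and wavelet):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
z = np.repeat([10.0, 12.0, 11.0, 15.0], 256)  # synthetic terrain profile
noisy = z + rng.normal(0.0, 0.3, z.size)      # assumed sensor noise

def denoise(signal, mode):
    coeffs = pywt.wavedec(signal, "sym8", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # MAD noise estimate
    t = sigma * np.sqrt(2 * np.log(signal.size))    # universal threshold
    coeffs[1:] = [pywt.threshold(c, t, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(coeffs, "sym8")

for mode in ("hard", "soft"):
    rmse = np.sqrt(np.mean((denoise(noisy, mode)[: z.size] - z) ** 2))
    print(f"{mode}: RMSE = {rmse:.3f}")
```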
25

Noise Removal from Digital DNA Microarray Images

Καπρινιώτης, Αχιλλέας, 18 June 2009
In microarray experiments, image acquisition is always accompanied by noise, which is inherent in processes of this kind, so techniques to suppress it are indispensable. This thesis analyzes such methods and presents their results on five selected examples. Particular emphasis is placed on wavelet denoising, specifically the soft-thresholding, hard-thresholding and stationary wavelet transform algorithms.
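
A minimal sketch of the stationary-wavelet-transform variant highlighted above (a generic illustration, not the thesis code; the wavelet, level, and threshold are assumptions, and pywt.swt2 requires the image sides to be divisible by 2**level):

```python
import numpy as np
import pywt

def swt_denoise(img, wavelet="db4", level=2, mode="soft"):
    """Undecimated (stationary) wavelet-transform denoising of a 2-D array."""
    coeffs = pywt.swt2(img, wavelet, level=level)  # list of (cA, (cH, cV, cD))
    # MAD noise estimate from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))
    shrunk = [(cA, tuple(pywt.threshold(d, t, mode=mode) for d in details))
              for cA, details in coeffs]
    return pywt.iswt2(shrunk, wavelet)
```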
26

Sparse coding for machine learning, image processing and computer vision

Mairal, Julien, 30 November 2010
We study in this thesis a particular machine learning approach to representing signals, which consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches and the performance of the image denoising algorithms this has yielded. We address several open questions related to this framework: How to efficiently optimize the dictionary? How can the model be enriched by adding structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a task other than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, and computer vision, as well as optimization on graphs.
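
As an illustration of the dictionary-learning framework discussed above, here is a generic sketch using scikit-learn (not the author's implementation; the patch size, number of atoms, and sparsity penalty are arbitrary choices):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_patch_dictionary(img, patch_size=(8, 8), n_atoms=100, alpha=1.0):
    """Learn a sparse-coding dictionary from randomly sampled image patches."""
    patches = extract_patches_2d(img, patch_size, max_patches=5000, random_state=0)
    X = patches.reshape(patches.shape[0], -1)
    X -= X.mean(axis=1, keepdims=True)  # remove each patch's DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                       batch_size=256, random_state=0)
    dico.fit(X)
    codes = dico.transform(X)  # sparse codes: most entries are zero
    return dico.components_, codes
```

Each image patch is then approximated as a linear combination of the few atoms selected by its sparse code, which is the modelling assumption the abstract describes.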
27

Maximum Energy Subsampling: A General Scheme For Multi-resolution Image Representation And Analysis

Zhao, Yanjun, 18 December 2014
Image descriptors play an important role in image representation and analysis. Multi-resolution image descriptors can effectively characterize complex images and extract their hidden information. Wavelet descriptors have been widely used in multi-resolution image analysis. However, making the wavelet transform shift- and rotation-invariant produces redundancy and requires complex matching processes. Other multi-resolution descriptors usually depend on additional theory or information, such as filtering functions or prior domain knowledge, which not only increases computational complexity but also introduces errors. We propose a novel multi-resolution scheme that is capable of transforming any kind of image descriptor into its multi-resolution structure with high accuracy and efficiency. Our scheme is based on sub-sampling an image into an odd-even image tree. By applying image descriptors to the odd-even image tree, we obtain the corresponding multi-resolution image descriptors. Multi-resolution analysis is based on downsampling expansion with maximum energy extraction followed by upsampling reconstruction. Since the maximum energy is usually retained in the lowest-frequency coefficients, we perform maximum energy extraction by keeping the lowest coefficients at each resolution level. Our multi-resolution scheme can analyze images recursively and effectively without introducing artifacts or changes to the original images, produce multi-resolution representations, obtain higher-resolution images using only information from lower resolutions, compress data, filter noise, extract effective image features, and be implemented in parallel.
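
A minimal sketch of the odd-even subsampling step described above (my reading of the scheme, not the author's code: each level splits an image into its four odd/even row-column subimages, from which the tree is built recursively):

```python
import numpy as np

def odd_even_split(img):
    """Split an image into its four odd/even row-column subimages."""
    return (img[0::2, 0::2], img[0::2, 1::2],
            img[1::2, 0::2], img[1::2, 1::2])

def odd_even_tree(img, levels=3):
    """Build a multi-resolution tree by recursive odd-even subsampling."""
    tree = [[img]]
    for _ in range(levels):
        tree.append([sub for node in tree[-1] for sub in odd_even_split(node)])
    return tree  # tree[k] holds 4**k subimages at resolution level k
```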
30

Denoising and Segmentation of MCT Slice Images of Leather Fiber

Hua, Yuai, Lu, Jianmei, Zhang, Huayong, Cheng, Jinyong, Liang, Wei, Li, Tianduo, 26 June 2019
The braiding structure of leather fibers is not yet clearly understood, and it is both useful and interesting to study it. Microscopic X-ray tomography (MCT) can produce cross-sectional images of leather without destroying its structure. The three-dimensional structure of leather fibers can be reconstructed from MCT slice images, revealing the braiding structure and regularity of the fibers. Denoising and segmentation of MCT slice images of leather fibers is the basic procedure for three-dimensional reconstruction. To study the braiding structure of leather fibers comprehensively, MCT slices of resin-embedded leather fibers and of in-situ leather fibers were analyzed and processed. The resin-embedded slices turned out to be quite different from the in-situ slices: in-situ slice images could be denoised relatively easily, but denoising resin-embedded slice images is a challenge because of their strong noise. In addition, some fiber bundles adhere to each other in the slice images, which makes them difficult to segment. There are many methods for image denoising and segmentation, but no general method that handles all types of images. In this paper, a series of computer-aided denoising and segmentation algorithms is designed for in-situ and resin-embedded MCT slice images of leather fibers. The fiber bundles in wide-field MCT images are densely distributed and adherent to each other; many fiber bundles are separated in one image and tightly bound in another, which greatly complicates segmentation. To solve this problem, the following segmentation methods are used: grayscale-threshold segmentation, region-growing segmentation, and three-dimensional image segmentation. The denoising and segmentation algorithms proposed in this paper are highly effective in processing a series of original and resin-embedded leather fiber MCT slice images. A series of three-dimensional images based on this work demonstrates the fine spatial braiding structure of leather fiber, which helps us understand the braiding structure of leather fibers better.
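
A brief sketch of the threshold-plus-region-growing pipeline named above (a generic scikit-image illustration, not the authors' algorithm; Otsu's threshold, the seed point, and the flood-fill tolerance are assumptions):

```python
import numpy as np
from skimage import filters, measure, segmentation

def segment_fiber_slice(slice_img, seed, tolerance=0.1):
    """Grayscale thresholding followed by region growing from a seed pixel."""
    t = filters.threshold_otsu(slice_img)   # global grayscale threshold
    binary = slice_img > t                  # candidate fiber foreground
    grown = segmentation.flood(slice_img, seed, tolerance=tolerance)  # region growing
    labels = measure.label(binary & grown)  # label connected fiber bundles
    return labels
```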
