71

Image processing methods for 3D intraoperative ultrasound

Hellier, Pierre 30 June 2010 (has links) (PDF)
This document is a synthesis of research work submitted for the Habilitation à Diriger des Recherches degree. Following this introduction, written in French, the remainder of the document is in English. I am currently an INRIA researcher (chargé de recherches) at the Rennes Bretagne Atlantique centre. I joined the Vista team led by Patrick Bouthemy in September 2001, then the Visages team led by Christian Barillot in January 2004. Since January 2010 I have been working in the Serpico project-team led by Charles Kervrann, whose focus is the imaging and modelling of intracellular dynamics. Among my past activities, this document concentrates exclusively on work related to image-guided neurosurgery. In particular, the work on non-rigid registration is not presented here. Regarding registration, that work began during my PhD with the development of a 3D registration method based on optical flow [72], the incorporation of local constraints into the registration process [74], and the validation of inter-subject registration methods [71]. After my recruitment I pursued it with Anne Cuzol and Etienne Mémin on fluid models for registration [44], with Nicolas Courty on real-time acceleration of registration methods [42], and on the evaluation of registration methods in two contexts: deep-brain electrode implantation [29] and inter-subject registration [92]. The use of so-called neuronavigation systems is now routine in neurosurgery departments. The benefits, expected or reported in the literature, are reduced mortality and morbidity, improved accuracy, shorter operations and lower hospitalization costs. To my knowledge, not all of these benefits have been demonstrated to date, but that question goes well beyond the scope of this document. Neuronavigation systems make the surgical plan available during the operation, insofar as the patient is placed in geometric correspondence with the preoperative images on which the intervention was planned. Multimodal information, comprising anatomical, vascular and functional data, is now routinely used. Fusing this information makes it possible to plan the surgical gesture: where the target is, what the approach route is, and which areas to avoid. This information can now be used in the operating room and displayed in the oculars of the surgical microscope thanks to the neuronavigation system. Unfortunately, this assumes that a rigid transformation relates the patient to the preoperative images. While that can be considered accurate at the start of the procedure, the assumption quickly breaks down as soft tissue deforms. These deformations, which must be regarded as a spatio-temporal phenomenon, arise from several factors, including gravity, loss of cerebrospinal fluid, and the administration of anaesthetic or diuretic agents; they are very difficult to model and predict, and their amplitude can vary considerably from case to case. To correct for these deformations, intraoperative imaging appears to be the only viable option.
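The cited 3D optical-flow registration method [72] is not reproduced in this abstract; as a rough illustration of the idea, here is a minimal 2D Horn–Schunck sketch under stated assumptions (the function name, smoothness weight and iteration count are illustrative, not the thesis's settings):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
    """Minimal 2D Horn-Schunck optical flow: estimates a dense displacement
    field (u, v) aligning image I1 to I2 under a smoothness prior.
    Parameter values are illustrative assumptions."""
    Iy, Ix = np.gradient(I1.astype(float))    # spatial gradients (row=y, col=x)
    It = I2.astype(float) - I1.astype(float)  # temporal gradient
    u = np.zeros_like(It)
    v = np.zeros_like(It)
    for _ in range(n_iter):
        u_bar = uniform_filter(u, size=3)     # local average of the flow field
        v_bar = uniform_filter(v, size=3)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den            # classical Horn-Schunck update
        v = v_bar - Iy * num / den
    return u, v
```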
72

Wavelet Shrinkage Based Image Denoising using Soft Computing

Bai, Rong 08 August 2008 (has links)
Noise reduction is an open problem that has received considerable attention in the literature for several decades. Over the last two decades, wavelet based methods have been applied to the problem of noise reduction and have been shown to outperform the traditional Wiener filter, median filter, and modified Lee filter in terms of mean squared error (MSE), peak signal-to-noise ratio (PSNR) and other evaluation criteria. In this research, two approaches for the development of high performance denoising algorithms are proposed, both based on soft computing tools such as fuzzy logic, neural networks, and genetic algorithms. First, an improved additive noise reduction method for digital grey scale natural images, which uses an interval type-2 fuzzy logic system to shrink wavelet coefficients, is proposed. This method is an extension of a recently published approach for additive noise reduction using a type-1 fuzzy logic system based wavelet shrinkage. Unlike the type-1 method, the proposed approach employs a thresholding filter that adjusts the wavelet coefficients according to the linguistic uncertainty in neighborhood values, inter-scale dependencies and intra-scale correlations of wavelet coefficients at different resolutions, by exploiting interval type-2 fuzzy set theory. Experimental results show that the proposed approach can efficiently and rapidly remove additive noise from digital grey scale images. Objective analysis and visual observation show that the proposed approach outperforms current fuzzy non-wavelet methods and fuzzy wavelet based methods, and is comparable with some recent but more complex wavelet methods, such as Hidden Markov Model based denoising. The main differences between the proposed approach and other wavelet shrinkage based approaches, and the main improvements it brings, are also illustrated in the thesis. Second, another improved additive noise reduction method is proposed, based on fusing the results of different filters using a Fuzzy Neural Network (FNN). The proposed method combines the advantages of these filters and smooths out additive noise while effectively preserving image details (e.g. edges and lines). A Genetic Algorithm (GA) is applied to choose the optimal parameters of the FNN. The experimental results show that the proposed method is powerful for removing noise from natural images: its MSE is lower, and its PSNR higher, than those of any of the individual filters used in the fusion. Finally, the two proposed approaches are compared with each other from several points of view: objective analysis in terms of MSE, PSNR, the image quality index (IQI) based on quality assessment of distorted images, and an information theoretic criterion (ITC) based on a human vision model, as well as computational cost, universality, and human observation. The results show that the proposed GA-optimized FNN based algorithm has the best performance among all tested approaches. Important considerations for the proposed approaches and future work are discussed.
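The abstract does not spell out the interval type-2 fuzzy shrinkage rule, but the wavelet shrinkage pipeline it builds on is standard. Below is a minimal sketch of a baseline soft-threshold wavelet denoiser using PyWavelets, with a robust MAD noise estimate and the universal threshold; the function name, wavelet choice and decomposition level are illustrative assumptions, not the thesis's settings.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_shrink(img, wavelet="db4", level=3):
    """Baseline wavelet-shrinkage denoiser: universal soft threshold on
    all detail bands. Wavelet and level are illustrative assumptions."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail band (MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))  # universal threshold
    kept = [coeffs[0]]  # keep the approximation band untouched
    for band in coeffs[1:]:
        kept.append(tuple(pywt.threshold(c, thr, mode="soft") for c in band))
    # Crop: reconstruction can be one sample larger for odd-sized images.
    return pywt.waverec2(kept, wavelet)[: img.shape[0], : img.shape[1]]
```

Fuzzy shrinkage methods like the one in this thesis differ precisely in replacing the fixed soft rule above with a coefficient-dependent shrinkage map driven by neighborhood and inter-scale information.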
73

ECG Noise Filtering Using Online Model-Based Bayesian Filtering Techniques

Su, Aron Wei-Hsiang January 2013 (has links)
The electrocardiogram (ECG) is a time-varying electrical signal that reflects the electrical activity of the heart. It is acquired with a non-invasive surface-electrode technique used widely in hospitals. There are many clinical contexts in which ECGs are used, such as medical diagnosis, physiological therapy and arrhythmia monitoring. In medical diagnosis, medical conditions are interpreted by examining information and features in ECGs. Physiological therapy involves the control of some aspect of the physiological effort of a patient, such as the use of a pacemaker to regulate the beating of the heart. Arrhythmia monitoring involves observing and detecting life-threatening conditions, such as myocardial infarction (heart attack), in a patient. ECG signals are usually corrupted with various types of unwanted interference, such as muscle artifacts, electrode artifacts, power line noise and respiration interference, and are distorted in such a way that it can be difficult to perform medical diagnosis, physiological therapy or arrhythmia monitoring. Consequently, signal processing of ECGs is required to remove noise and interference for successful clinical applications. Existing signal processing techniques can remove some of the noise in an ECG signal, but are typically inadequate for extracting weak ECG components contaminated with background noise and for retaining various subtle features of the ECG. For example, noise from surface electromyographic (EMG) activity usually overlaps the fundamental ECG cardiac components in the frequency domain, in the range of 0.01 Hz to 100 Hz, and simple filters cannot remove noise that overlaps the ECG cardiac components. Sameni et al. have proposed a Bayesian filtering framework to resolve these problems, with results clearly superior to those obtained by applying conventional signal processing methods to ECG. However, a drawback of this Bayesian filtering framework is that it must run offline, which is undesirable for clinical applications such as arrhythmia monitoring and physiological therapy, both of which require online operation in near real-time. To resolve this problem, this thesis proposes a dynamical model that permits the Bayesian filtering framework to function online. With the proposed dynamical model, the framework loses less than 4% in performance compared to the previous (offline) version. The proposed dynamical model is based on theory from fixed-lag smoothing.
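The thesis's ECG dynamical model is not reproduced in the abstract; the sketch below only illustrates the fixed-lag smoothing idea it builds on, using a generic scalar random-walk signal model. State augmentation lets an ordinary Kalman filter return estimates refined by the next `lag` measurements, at the cost of a fixed output delay. All model parameters here are illustrative assumptions.

```python
import numpy as np

def fixed_lag_smoother(y, lag=5, q=1e-3, r=1e-1):
    """Fixed-lag Kalman smoother for a scalar random walk observed in noise.
    The state is augmented with the last `lag` values, so each new
    measurement refines estimates up to `lag` steps in the past."""
    L = lag + 1
    F = np.zeros((L, L)); F[0, 0] = 1.0   # random-walk model for the signal
    F[1:, :-1] = np.eye(L - 1)            # shift past states down the vector
    H = np.zeros((1, L)); H[0, 0] = 1.0   # only the current value is observed
    Q = np.zeros((L, L)); Q[0, 0] = q     # process noise drives x_k alone
    R = np.array([[r]])
    x = np.zeros((L, 1)); P = np.eye(L)
    out = []
    for yk in y:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # innovation covariance (1x1)
        K = P @ H.T / S                    # Kalman gain
        x = x + K * (yk - (H @ x)[0, 0])   # update with the new measurement
        P = (np.eye(L) - K @ H) @ P
        out.append(x[-1, 0])               # smoothed estimate, `lag` steps old
    return np.array(out)
```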
75

Self-Similarity of Images and Non-local Image Processing

Glew, Devin January 2011 (has links)
This thesis has two related goals, the first of which involves the concept of self-similarity of images. Image self-similarity is important because it forms the basis for many imaging techniques, such as non-local means denoising and fractal image coding. Research so far has focused largely on self-similarity in the pixel domain, that is, on how well different regions of an image mimic each other, and most works on self-similarity have used only the mean squared error (MSE). In this thesis, self-similarity is examined in terms of both the pixel and wavelet representations of images. In each of these domains, two ways of measuring similarity are considered: the MSE and a relatively new measure of image fidelity called the Structural Similarity (SSIM) Index. We show that the MSE and the SSIM Index give very different answers to the question of how self-similar images really are. The second goal of this thesis involves non-local image processing. First, a generalization of the well known non-local means denoising algorithm is proposed and examined; the groundwork for this generalization is laid by the aforementioned results on image self-similarity with respect to the MSE. This new method is then extended to the wavelet representation of images. Experimental results are given to illustrate the applications of these new ideas.
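The thesis's generalized non-local means is not given in the abstract; as a point of reference, scikit-image ships a standard non-local means implementation and both metrics, so the MSE-versus-SSIM comparison the thesis makes can be reproduced in a few lines. The noise level and filter parameters below are common defaults, not the thesis's choices.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.metrics import mean_squared_error, structural_similarity
from skimage.restoration import denoise_nl_means, estimate_sigma

clean = img_as_float(data.camera())
noisy = clean + np.random.normal(0, 0.08, clean.shape)

sigma_est = estimate_sigma(noisy)   # noise level drives the filter strength
denoised = denoise_nl_means(noisy, h=1.15 * sigma_est,
                            patch_size=7, patch_distance=11, fast_mode=True)

# MSE and SSIM can rank the same result very differently --
# the discrepancy that motivates the thesis's comparison.
print("MSE :", mean_squared_error(clean, denoised))
print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))
```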
76

Variable Splitting as a Key to Efficient Image Reconstruction

Dolui, Sudipto January 2012 (has links)
The problem of reconstructing digital images from degraded measurements has always been of central importance in numerous applications of the imaging sciences. In practice, acquired imaging data is typically contaminated by various types of degradation, usually related to imperfections of the image acquisition devices and/or environmental effects. Accordingly, given degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover a close approximation of it, thereby "reversing" the effect of the degradation. Moreover, the massive production and proliferation of digital data across different fields of applied science creates the need for image restoration methods that are both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction generally comes at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework that allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting as a tool for simplifying complex reconstruction problems by replacing them with a sequence of simpler, and therefore easily solvable, ones. We consider two different types of variable splitting and demonstrate their connection to a number of existing approaches currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to complex reconstruction problems with composite, cross-domain constraints. As specific applications of practical importance, we consider the reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data, as well as blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of a neighbourhood operation and to derive a unifying approach to denoising imaging data under a variety of noise scenarios.
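Neither BTS nor FCS is specified in the abstract, so the sketch below only illustrates the variable-splitting pattern itself, on the classic example of 1D total-variation denoising solved with ADMM (equivalently, split Bregman): the auxiliary variable z = Dx decouples a non-smooth problem into a linear solve plus an elementwise shrinkage. All parameter values are illustrative.

```python
import numpy as np

def soft(v, t):
    """Elementwise soft threshold (the shrinkage subproblem)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_admm(y, lam=1.0, rho=2.0, n_iter=100):
    """1D total-variation denoising, min 0.5||x - y||^2 + lam*||Dx||_1,
    via the splitting z = Dx: alternate a linear solve, a shrinkage,
    and a dual (Bregman) update."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator, (n-1, n)
    A = np.eye(n) + rho * D.T @ D         # normal matrix of the x-subproblem
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = y.copy()
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))  # smooth subproblem
        z = soft(D @ x + u, lam / rho)                   # shrinkage subproblem
        u = u + D @ x - z                                # dual update
    return x
```

For a piecewise-constant signal corrupted by Gaussian noise, `tv_denoise_admm(y)` recovers the flat segments while preserving the jumps, which is exactly the decoupling that makes splitting-based solvers efficient.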
77

Wavelet-Based Outlier Detection and Denoising of Airborne Laser Scanning Data

Akyay, Tolga 01 December 2008 (has links) (PDF)
The method of airborne laser scanning – also known as LIDAR – has recently emerged as an efficient way of generating high quality digital surface and elevation models. In this work, wavelet-based outlier detection and different wavelet thresholding (wavelet shrinkage) methods for denoising airborne laser scanning data are discussed. The task is to investigate the effect of wavelet-based outlier detection and to find out which wavelet thresholding methods provide the best denoising results for post-processing. Data and results are analyzed and visualized using a MATLAB program developed during this work.
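The abstract does not give the detection rule, so the following is only a plausible minimal sketch of wavelet-based outlier detection on a 1D elevation profile (points assumed ordered along a scan line): samples whose finest-scale detail coefficients exceed a multiple of the robust noise scale are flagged. The wavelet and threshold factor are assumptions, and the thesis's own tool was written in MATLAB rather than Python.

```python
import numpy as np
import pywt

def wavelet_outliers(z, wavelet="haar", k=4.0):
    """Flag spikes in a 1D elevation profile whose finest-scale wavelet
    detail coefficients exceed k times the robust noise scale (MAD)."""
    cA, cD = pywt.dwt(z, wavelet)             # one-level decomposition
    sigma = np.median(np.abs(cD)) / 0.6745    # robust noise estimate
    bad = np.abs(cD) > k * sigma              # one flag per coefficient
    # Each Haar detail coefficient covers a pair of samples;
    # expand the flags back to the sample grid.
    return np.repeat(bad, 2)[: len(z)]
```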
78

Noise removal from digital DNA microarray images

Καπρινιώτης, Αχιλλέας 18 June 2009 (has links)
In a microarray experiment, image acquisition is always accompanied by noise, which is inherent to processes of this kind. It is therefore imperative to apply techniques to suppress it. This thesis analyzes such methods and presents their results on five selected examples. Particular emphasis is given to wavelet denoising, specifically the soft thresholding, hard thresholding and stationary wavelet transform algorithms. / The subject of this diploma thesis is the development of a driver assistance system. Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this diploma thesis is on feature extraction and classification for rear-view vehicle detection. Specifically, by treating vehicle detection as a two-class classification problem, we have investigated several feature extraction methods, such as wavelets and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs).
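The soft-versus-hard thresholding comparison emphasized above comes down to two elementwise shrinkage rules, sketched below in plain NumPy; in practice they would be applied to the detail coefficients of a (stationary) wavelet transform, e.g. via pywt.swt. The example coefficients and threshold are illustrative.

```python
import numpy as np

def hard_threshold(c, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return c * (np.abs(c) > t)

def soft_threshold(c, t):
    """Zero small coefficients and shrink the survivors toward zero by t,
    giving a continuous (less artifact-prone) shrinkage rule."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.4, 5.0])
print(hard_threshold(c, 1.0))  # [-3.0, 0.0, 0.0, 1.4, 5.0]
print(soft_threshold(c, 1.0))  # [-2.0, 0.0, 0.0, 0.4, 4.0]
```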
79

Sparse coding for machine learning, image processing and computer vision

Mairal, Julien 30 November 2010 (has links) (PDF)
In this thesis we study a particular machine learning approach to signal representation that consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches and the performance of the image denoising algorithms this has yielded. We address several open questions related to this framework: How can the dictionary be optimized efficiently? How can the model be enriched by adding structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a task other than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, and computer vision, as well as optimization on graphs.
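Online dictionary learning of the kind studied in this line of work is available in scikit-learn as MiniBatchDictionaryLearning, so a hedged end-to-end sketch fits in a few lines: extract image patches, learn an overcomplete dictionary under a sparsity penalty, and sparse-code the patches. The image below is a random stand-in and the hyperparameters are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.RandomState(0)
img = rng.rand(128, 128)  # stand-in image; use real grayscale data in practice

# Extract 8x8 patches and remove each patch's DC component.
patches = extract_patches_2d(img, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)

# Learn an overcomplete dictionary (100 atoms for 64-dimensional patches).
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   batch_size=64, random_state=0)
codes = dico.fit(X).transform(X)   # sparse codes of the training patches
D = dico.components_               # learned dictionary atoms, shape (100, 64)
print("average nonzeros per code:", np.count_nonzero(codes, axis=1).mean())
```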
80

Curvelet denoising of 4D seismic

Bayreuther, Moritz, Cristall, Jamin, Herrmann, Felix J. January 2004 (has links)
With burgeoning world demand and a limited rate of discovery of new reserves, there is increasing impetus on the industry to optimise recovery from existing fields. 4D, or time-lapse, seismic imaging is an emerging technology that holds great promise for better monitoring and optimising reservoir production. The basic idea behind 4D seismic is that when multiple 3D surveys are acquired at separate calendar times over a producing field, the reservoir geology will not change from survey to survey but the state of the reservoir fluids will. Taking the difference between two 3D surveys should therefore remove the static geologic contribution to the data and isolate the time-varying fluid flow component. However, a major challenge in 4D seismic is that acquisition and processing differences between 3D surveys often overshadow the changes caused by fluid flow. This problem is compounded when 4D effects are sought from vintage 3D data sets that were not originally acquired with 4D in mind. The goal of this study is to remove the acquisition and imaging artefacts from a 4D seismic difference cube using curvelet processing techniques.
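The paper's curvelet transform (e.g. CurveLab) is not assumed to be installed here; the sketch below swaps in a wavelet transform as a stand-in to show the same pattern: difference the two vintages, mute weak transform coefficients attributed to acquisition and processing differences, and reconstruct. Function names and parameter values are illustrative assumptions, and curvelets would capture curved seismic events far better than the separable wavelets used in this sketch.

```python
import numpy as np
import pywt

def denoise_4d_difference(survey_a, survey_b, wavelet="db4", k=3.0):
    """Time-lapse (4D) difference denoising sketch. Transform the
    difference cube slice by slice, hard-threshold weak coefficients
    (likely non-repeatable noise), and reconstruct the 4D signal."""
    diff = survey_b - survey_a                 # raw time-lapse difference
    out = np.empty_like(diff)
    for i, sl in enumerate(diff):              # process inline slices
        coeffs = pywt.wavedec2(sl, wavelet, level=3)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # robust scale
        kept = [coeffs[0]] + [
            tuple(pywt.threshold(c, k * sigma, mode="hard") for c in band)
            for band in coeffs[1:]
        ]
        # Crop: reconstruction can be slightly larger for odd-sized slices.
        out[i] = pywt.waverec2(kept, wavelet)[: sl.shape[0], : sl.shape[1]]
    return out
```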
