81

Seismic noise: the good, the bad and the ugly

Herrmann, Felix J.; Wilkinson, Dave. January 2007
In this paper, we present a nonlinear curvelet-based sparsity-promoting formulation for three problems related to seismic noise, namely the 'good', corresponding to noise generated by random sampling; the 'bad', corresponding to coherent noise for which (inaccurate) predictions exist; and the 'ugly', for which no predictions exist. We show that the compressive capabilities of curvelets on seismic data and images can be used to tackle these three categories of noise-related problems.
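To make the sparsity-promoting idea concrete, here is a minimal transform-domain denoising sketch. It uses an orthogonal wavelet basis (PyWavelets) as a stand-in for curvelets and plain soft thresholding in place of the paper's full formulation; the wavelet choice, decomposition level and threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of transform-domain sparsity promotion for denoising.
# Curvelets are replaced by an orthogonal wavelet basis purely for
# illustration; all parameters are assumptions.
import numpy as np
import pywt

def sparse_denoise(image, wavelet="db4", level=3, threshold=0.1):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Soft-threshold the detail coefficients, keep the approximation untouched.
    new_coeffs = [coeffs[0]]
    for detail in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, threshold, mode="soft")
                                for d in detail))
    return pywt.waverec2(new_coeffs, wavelet)

noisy = np.random.randn(128, 128)   # placeholder data
clean_estimate = sparse_denoise(noisy)
```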
82

ECG Noise Filtering Using Online Model-Based Bayesian Filtering Techniques

Su, Aron Wei-Hsiang. January 2013
The electrocardiogram (ECG) is a time-varying electrical signal that reflects the electrical activity of the heart. It is acquired non-invasively with surface electrodes and is used widely in hospitals. ECGs are used in many clinical contexts, such as medical diagnosis, physiological therapy and arrhythmia monitoring. In medical diagnosis, medical conditions are interpreted by examining information and features in ECGs. Physiological therapy involves controlling some aspect of a patient's physiological effort, such as using a pacemaker to regulate the beating of the heart. Arrhythmia monitoring involves observing and detecting life-threatening conditions, such as myocardial infarction (heart attack), in a patient. ECG signals are usually corrupted by various types of unwanted interference, such as muscle artifacts, electrode artifacts, power-line noise and respiration interference, and are distorted in such a way that it can be difficult to perform medical diagnosis, physiological therapy or arrhythmia monitoring. Consequently, signal processing of ECGs is required to remove noise and interference for successful clinical applications. Existing signal processing techniques can remove some of the noise in an ECG signal, but are typically inadequate for extracting weak ECG components contaminated with background noise and for retaining the various subtle features of the ECG. For example, muscle (EMG) noise usually overlaps the fundamental ECG cardiac components in the frequency domain, in the range of 0.01 Hz to 100 Hz, so simple filters cannot remove it without also distorting those components. Sameni et al. have proposed a Bayesian filtering framework to resolve these problems, with results clearly superior to those obtained from conventional signal processing methods applied to ECG. A drawback of this Bayesian filtering framework, however, is that it must run offline, which is undesirable for clinical applications such as arrhythmia monitoring and physiological therapy, both of which require online operation in near real-time. To resolve this problem, this thesis proposes a dynamical model, based on the theory of fixed-lag smoothing, that permits the Bayesian filtering framework to run online. The framework with the proposed dynamical model loses less than 4% in performance compared to the previous (offline) version of the framework.
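The fixed-lag smoothing idea the thesis builds on can be illustrated on a toy scalar model by augmenting the state with delayed copies and running an ordinary Kalman filter. This is only a sketch of the general technique; it is not the thesis's ECG dynamical model, and the AR coefficient, noise variances and lag are made-up values.

```python
# Minimal fixed-lag smoother for a scalar linear-Gaussian AR(1) model,
# implemented by state augmentation plus a standard Kalman filter.
# All parameters are illustrative assumptions.
import numpy as np

def fixed_lag_smoother(y, a=0.99, q=1e-3, r=1e-1, lag=5):
    L = lag + 1
    # Augmented state holds [x_t, x_{t-1}, ..., x_{t-lag}].
    F = np.zeros((L, L))
    F[0, 0] = a                  # current state evolves as an AR(1) process
    F[1:, :-1] = np.eye(L - 1)   # older states shift down by one slot
    Q = np.zeros((L, L)); Q[0, 0] = q
    H = np.zeros((1, L)); H[0, 0] = 1.0   # only the current state is observed
    x = np.zeros((L, 1)); P = np.eye(L)
    smoothed = []
    for yt in y:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + K * (yt - (H @ x).item())
        P = (np.eye(L) - K @ H) @ P
        # The last slot is the lag-delayed (smoothed) estimate x_{t-lag | t}.
        smoothed.append(x[-1, 0])
    return np.array(smoothed)

noisy_signal = np.sin(np.linspace(0, 20, 500)) + 0.3 * np.random.randn(500)
estimate = fixed_lag_smoother(noisy_signal)
```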
83

Maximum Energy Subsampling: A General Scheme For Multi-resolution Image Representation And Analysis

Zhao, Yanjun. 18 December 2014
Image descriptors play an important role in image representation and analysis. Multi-resolution image descriptors can effectively characterize complex images and extract their hidden information. Wavelet descriptors have been widely used in multi-resolution image analysis; however, making the wavelet transform shift- and rotation-invariant introduces redundancy and requires complex matching processes. Other multi-resolution descriptors usually depend on additional theory or information, such as filtering functions or prior domain knowledge, which not only increases the computational complexity but also introduces errors. We propose a novel multi-resolution scheme that can transform any kind of image descriptor into its multi-resolution structure with high accuracy and efficiency. Our scheme is based on sub-sampling an image into an odd-even image tree; applying image descriptors to the odd-even image tree yields the corresponding multi-resolution image descriptors. Multi-resolution analysis is based on downsampling expansion with maximum energy extraction followed by upsampling reconstruction. Since the maximum energy is usually retained in the lowest-frequency coefficients, we perform maximum energy extraction by keeping the lowest coefficients from each resolution level. Our multi-resolution scheme can analyze images recursively and effectively without introducing artifacts or changes to the original images, produce multi-resolution representations, obtain higher-resolution images using only information from lower resolutions, compress data, filter noise, extract effective image features and be implemented in parallel.
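A small sketch of the odd-even sub-sampling step described above: the image is split into four sub-images by the parity of row and column indices, and the split is applied recursively to form a tree. The thesis's maximum-energy selection is not reproduced here, and the tree depth is an arbitrary illustrative choice.

```python
# Sketch of the odd-even image tree; depth and structure are assumptions.
import numpy as np

def odd_even_split(img):
    """Return the four parity sub-images: even/even, even/odd, odd/even, odd/odd."""
    return [img[0::2, 0::2], img[0::2, 1::2], img[1::2, 0::2], img[1::2, 1::2]]

def odd_even_tree(img, depth):
    """Recursively build the odd-even image tree down to the given depth."""
    if depth == 0 or min(img.shape) < 2:
        return img
    return [odd_even_tree(sub, depth - 1) for sub in odd_even_split(img)]

image = np.arange(64, dtype=float).reshape(8, 8)   # toy image
tree = odd_even_tree(image, depth=2)               # 4 children, each with 4 children
```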
85

Self-Similarity of Images and Non-local Image Processing

Glew, Devin. January 2011
This thesis has two related goals: the first involves the concept of self-similarity of images. Image self-similarity is important because it forms the basis for many imaging techniques, such as non-local means denoising and fractal image coding. Research so far has focused largely on self-similarity in the pixel domain, that is, on how well different regions in an image mimic each other, and most work on self-similarity has used only the mean squared error (MSE). In this thesis, self-similarity is examined in both the pixel and wavelet representations of images. In each of these domains, two ways of measuring similarity are considered: the MSE and a relatively new measure of image fidelity, the Structural Similarity (SSIM) Index. We show that the MSE and the SSIM Index give very different answers to the question of how self-similar images really are. The second goal of this thesis involves non-local image processing. First, a generalization of the well-known non-local means denoising algorithm is proposed and examined; the groundwork for this generalization is laid by the aforementioned results on image self-similarity with respect to the MSE. This new method is then extended to the wavelet representation of images. Experimental results are given to illustrate the applications of these new ideas.
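To make the notion of pixel-domain self-similarity concrete, here is a deliberately simple (and slow) non-local means sketch in which each pixel is replaced by a weighted average of pixels whose surrounding patches are similar under the MSE. Patch size, search window and the weighting parameter h are illustrative choices, not values from the thesis.

```python
# Bare-bones pixel-domain non-local means; parameters are assumptions.
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]
            weights, values = [], []
            for di in range(-(search // 2), search // 2 + 1):
                for dj in range(-(search // 2), search // 2 + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        cand = padded[ii:ii + patch, jj:jj + patch]
                        d2 = np.mean((ref - cand) ** 2)   # patch-wise MSE
                        weights.append(np.exp(-d2 / h ** 2))
                        values.append(img[ii, jj])
            out[i, j] = np.dot(weights, values) / np.sum(weights)
    return out

noisy = np.random.randn(32, 32)      # toy noisy image
denoised = nl_means(noisy)
```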
86

Variable Splitting as a Key to Efficient Image Reconstruction

Dolui, Sudipto. January 2012
The problem of reconstructing digital images from their degraded measurements has always been of central importance in numerous applications of the imaging sciences. In practice, acquired imaging data are typically contaminated by various types of degradation, usually related to imperfections of image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover a close approximation to it, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied science create the need for methods of image restoration that are both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework that allows one to tackle a wide range of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting as a tool for simplifying complex reconstruction problems by replacing them with a sequence of simpler, and therefore easily solvable, ones. We consider two different types of variable splitting and demonstrate their connection to a number of existing approaches currently used to solve various inverse problems. In particular, we refer to the first type as Bregman Type Splitting (BTS) and demonstrate its applicability to complex reconstruction problems with composite, cross-domain constraints. As specific applications of practical importance, we consider the reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data, as well as the blind deconvolution of medical ultrasound images. Further, we refer to the second type as Fuzzy Clustering Splitting (FCS) and show its application to image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of a neighbourhood operation and to derive a unifying approach to denoising imaging data under a variety of noise scenarios.
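The splitting idea itself is easy to illustrate with a generic example: ADMM applied to an l1-regularized least-squares problem, min_x 0.5*||Ax - y||^2 + lam*||x||_1, where the auxiliary variable z = x splits the smooth and non-smooth terms into two simple subproblems. This is standard ADMM, not the thesis's BTS or FCS formulations, and A, y and the parameters are placeholders.

```python
# Generic variable splitting via ADMM for an l1-regularized least-squares
# problem; all data and parameters are illustrative assumptions.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.1, rho=1.0, iters=100):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # system matrix for the smooth subproblem
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(AtA, Aty + rho * (z - u))   # quadratic subproblem
        z = soft(x + u, lam / rho)                       # simple proximal step
        u = u + x - z                                    # dual (Bregman-like) update
    return z

A = np.random.randn(40, 80)
x_true = np.random.randn(80) * (np.random.rand(80) < 0.1)   # sparse signal
y = A @ x_true + 0.01 * np.random.randn(40)
x_hat = admm_lasso(A, y)
```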
87

Automatic denoising, alignment and reconstruction in electron tomography: materials science applications

Printemps, Tony. 24 November 2016
Electron tomography is a non-destructive 3D nano-characterization technique. It is an essential tool in the field of nanotechnology for characterizing complex structures, particularly when 2D projections obtained with a transmission electron microscope (TEM) are insufficient for understanding the 3D morphology of the sample. This thesis studies each of the steps required to obtain a 3D reconstruction in electron tomography, from sample preparation and TEM acquisition to projection alignment and inversion algorithms. Its main contributions are (i) the development of a complete procedure for automatic denoising, alignment and reconstruction, making the technique robust enough for routine use; (ii) the extension of the technique to thicker specimens and to specimens that deform during acquisition; and (iii) the improvement of chemical tomography reconstructions by exploiting as much of the available information as possible. All of these contributions were made possible by working with needle-shaped samples, which allow projections to be acquired over an ideal tilt range of 180°. Software was also developed during this thesis that brings together most of these advances, so that all post-acquisition electron tomography steps can be carried out simply.
88

Spatio-temporal denoising using a multi-scale approach: application to fluoroscopic X-ray image sequences

Amiot, Carole. 18 December 2014
Fluoroscopic sequences, acquired at low X-ray doses, are used to guide medical staff during certain procedures. However, image quality is inversely proportional to the acquisition dose. In this work we propose a noise reduction algorithm that compensates for the effects of acquiring at a reduced dose, thereby offering better protection for both the patient and the medical staff. The proposed method is a spatio-temporal filter applied to the 2D multi-scale representations of the sequence images, which allows for greater noise reduction. A motion-compensated, first-order recursive temporal filter accounts for most of the noise reduction; it relies on detecting and tracking objects in the sequence, and the output of these two steps determines how each multi-scale coefficient is filtered. Spatial filtering is a contextual thresholding that uses the multi-scale neighbourhood of each coefficient to avoid introducing shape-like artifacts in the reconstructed images. The proposed method is evaluated in two multi-scale domains, curvelets and dual-tree complex wavelets, and outperforms the best state-of-the-art methods.
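The temporal part of such a scheme reduces, in its simplest form, to a first-order recursive blend of each new frame with the previously filtered one. The sketch below shows only that recursion; real motion compensation (warping the previous estimate towards the current frame) and the multi-scale contextual thresholding are omitted, and the blending factor alpha is an arbitrary illustrative value.

```python
# Bare-bones first-order recursive temporal filter over a frame sequence;
# motion compensation is omitted and alpha is an assumption.
import numpy as np

def recursive_temporal_filter(frames, alpha=0.8):
    """frames: iterable of 2D arrays with identical shape."""
    filtered = []
    prev = None
    for frame in frames:
        if prev is None:
            prev = frame.astype(float)
        else:
            # Blend the current frame with the previously filtered one
            # (a motion-compensated version would warp `prev` first).
            prev = alpha * prev + (1.0 - alpha) * frame
        filtered.append(prev.copy())
    return filtered

sequence = [np.random.randn(64, 64) for _ in range(10)]   # toy noisy frames
denoised = recursive_temporal_filter(sequence)
```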
89

Exploring Video Denoising using Matrix Completion

January 2013
Video denoising is an important task in many multimedia and computer vision applications. Recent developments in matrix completion theory, and the emergence of new numerical methods that can efficiently solve the matrix completion problem, have paved the way for new techniques for some classical image processing tasks; recent literature shows that many computer vision and image processing problems can be solved using matrix completion. This thesis explores the application of matrix completion to video denoising. A state-of-the-art video denoising algorithm, in which the denoising task is modelled as a matrix completion problem, is chosen for detailed study. The contribution of this thesis lies both in providing extensive analysis to bridge the gap in the existing literature on the matrix completion framework for video denoising and in proposing novel techniques to improve the performance of the chosen denoising algorithm. The chosen algorithm is implemented for thorough analysis, and experiments and discussion are presented to enable a better understanding of the problem. Instability shown by the algorithm at some parameter values, in a particular case of low levels of pure Gaussian noise, is identified, and the artifacts introduced in such cases are analyzed. A novel way of grouping structurally relevant patches is proposed to improve the algorithm; experiments show that this technique is useful, especially in videos containing large amounts of motion. Based on the observation that matrix completion is not suitable for denoising patches containing relatively little image detail, a framework is designed to separate patches corresponding to weakly structured regions of a noisy image. Experiments are conducted in which such patches are not subjected to matrix completion but are instead denoised in a different way. The resulting improvement in performance suggests that denoising weakly structured patches does not require a method as complex as matrix completion, and that subjecting such patches to matrix completion is in fact counter-productive. These results also indicate an inherent limitation of matrix completion in cases where noise dominates the structural properties of an image. A novel method for introducing priorities among the ranked patches in matrix completion is also presented and yields improved performance in general. It is observed that, after introducing priorities, the artifacts seen at low levels of pure Gaussian noise appear differently and occur over a wider range of parameter values. Results and a discussion of future directions for this problem are also presented. (M.S. thesis, Electrical Engineering, 2013)
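The core low-rank operation behind such patch-matrix approaches can be sketched with iterative singular value thresholding: observed entries of a patch matrix are kept, missing or unreliable entries are filled from a shrunken-rank estimate. This is a generic completion heuristic, not the specific algorithm studied in the thesis; the mask, threshold tau and iteration count are illustrative.

```python
# Minimal singular-value-thresholding sketch for matrix completion;
# data, mask and parameters are assumptions.
import numpy as np

def svt_complete(M, mask, tau=5.0, iters=200):
    """Fill unobserved entries (mask == False) of M with a low-rank estimate."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # shrink singular values
        low_rank = (U * s) @ Vt
        # Keep observed entries, fill the rest from the low-rank estimate.
        X = np.where(mask, M, low_rank)
    return X

rank2 = (np.outer(np.random.randn(30), np.random.randn(40))
         + np.outer(np.random.randn(30), np.random.randn(40)))
mask = np.random.rand(30, 40) < 0.6           # 60% of entries observed
completed = svt_complete(rank2, mask)
```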
90

Signal subspace identification for epileptic source localization from electroencephalographic data

Hajipour Sardouie, Sepideh. 9 October 2014
In the process of recording the electrical activity of the brain, the signal of interest is usually contaminated by activities arising from various sources of noise and artifact, such as muscle activity. This makes denoising an important preprocessing stage in some electroencephalography (EEG) applications, such as source localization. In this thesis, we propose six methods for noise cancellation of epileptic signals. The first two methods, based on the Generalized EigenValue Decomposition (GEVD) and Denoising Source Separation (DSS) frameworks, are used to denoise interictal data. To extract the a priori information required by GEVD and DSS, we propose a series of preprocessing stages, including spike peak detection, extraction of the exact time support of the spikes, and clustering of the spikes involved in each source of interest. Two other methods, called Time-Frequency (TF)-GEVD and TF-DSS, are also proposed in order to denoise ictal EEG signals, for which the time-frequency signature is extracted using the Canonical Correlation Analysis method. We also propose a deflationary Independent Component Analysis (ICA) method, called JDICA, based on Jacobi-like iterations. Moreover, we propose a new direct algorithm, called SSD-CP, to compute the Canonical Polyadic (CP) decomposition of complex-valued multi-way arrays. The proposed algorithm is based on the Simultaneous Schur Decomposition (SSD) of particular matrices derived from the array to process, and we also propose a new Jacobi-like algorithm to compute the SSD of several complex-valued matrices. The last two algorithms are used to denoise both interictal and ictal data. We evaluate the performance of the proposed methods on both simulated and real epileptic EEG data with interictal or ictal activity contaminated by muscular activity. For simulated data, the effectiveness of the proposed algorithms is evaluated in terms of the relative root mean square error between the original noise-free signals and the denoised ones, the number of required operations, and the locations of the original and denoised epileptic sources. For both interictal and ictal data, we also present examples on real data recorded in patients with drug-resistant partial epilepsy.
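A compact sketch of the GEVD-style denoising idea described above: the covariance of the epochs of interest (for example, samples around detected spikes) is diagonalized against the covariance of the whole recording, a few dominant components are kept, and the channels are reconstructed. The synthetic data, number of retained components and epoch selection are assumptions, and this is only the generic GEVD step, not the thesis's full pipeline.

```python
# GEVD-based denoising sketch; data and parameters are assumptions.
import numpy as np
from scipy.linalg import eigh

def gevd_denoise(X, interest_idx, n_keep=2):
    """X: channels x samples EEG array; interest_idx: sample indices of interest."""
    C_all = np.cov(X)                       # covariance of the full recording
    C_int = np.cov(X[:, interest_idx])      # covariance of the epochs of interest
    # Generalized eigendecomposition: C_int w = lambda * C_all w.
    eigvals, W = eigh(C_int, C_all)
    order = np.argsort(eigvals)[::-1]       # sort by decreasing eigenvalue
    W = W[:, order]
    sources = W.T @ X                       # unmixed components
    sources[n_keep:, :] = 0.0               # keep only the dominant components
    A = np.linalg.pinv(W.T)                 # back-projection (mixing) matrix
    return A @ sources

X = np.random.randn(16, 5000)               # toy 16-channel recording
denoised = gevd_denoise(X, np.arange(1000, 1500))
```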
