91

Analysis and Visualization of the Two-Dimensional Blood Flow Velocity Field from Videos

Jun, Yang January 2015 (has links)
We estimate the velocity field of the blood flow in a human face from videos. Our approach first performs spatial preprocessing to improve the signal-to-noise ratio (SNR) and the computational efficiency. The discrete Fourier transform (DFT) and a temporal band-pass filter are then applied to extract the frequency component corresponding to the subject's heart rate. We propose a multiple-kernel k-NN classification to remove noisy positions from the resulting phase and amplitude maps. The 2D blood flow field is then estimated from the relative phase shift between pixels. We evaluate both the segmentation and the estimated velocity field on real and synthetic face videos, reporting the average recall and precision of the segmentation and the average angular and magnitude errors of the velocity field.
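The frequency-extraction step described above can be sketched as follows: take the temporal DFT of each pixel, keep the bin nearest the heart rate, and read off per-pixel amplitude and phase maps. This is only an illustrative reconstruction, not the thesis implementation; the function name, array layout, and the default 1 Hz heart rate are assumptions.

import numpy as np

def heart_rate_phase_map(video, fps, heart_rate_hz=1.0):
    """video: array of shape (T, H, W), grayscale intensities over time."""
    T = video.shape[0]
    spectrum = np.fft.rfft(video, axis=0)          # per-pixel temporal DFT
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)        # frequency of each DFT bin
    k = np.argmin(np.abs(freqs - heart_rate_hz))   # bin nearest the heart rate
    amplitude = np.abs(spectrum[k])                # (H, W) amplitude map
    phase = np.angle(spectrum[k])                  # (H, W) phase map
    return amplitude, phase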
92

Débruitage, séparation et localisation de sources EEG dans le contexte de l'épilepsie / Denoising, separation and localization of EEG sources in the context of epilepsy

Becker, Hanna 24 October 2014 (has links)
Electroencephalography (EEG) is a routinely used technique for the diagnosis and management of epilepsy. In this context, the objective of this thesis is to provide algorithms for the extraction, separation, and localization of epileptic sources from EEG recordings. In the first part of the thesis, we consider two preprocessing steps applied to raw EEG data. The first step aims at removing muscle artifacts by means of Independent Component Analysis (ICA). In this context, we propose a new semi-algebraic deflation algorithm that extracts the epileptic sources more efficiently than conventional methods, as we demonstrate on simulated and real EEG data. The second step consists in separating correlated sources that can be involved in the propagation of epileptic phenomena. To this end, we explore deterministic tensor decomposition methods exploiting space-time-frequency or space-time-wave-vector data. We compare the two preprocessing methods using computer simulations to determine in which cases ICA, tensor decomposition, or a combination of both should be used. The second part of the thesis is devoted to distributed source localization techniques. After providing a survey and a classification of current state-of-the-art methods, we present an algorithm for distributed source localization that builds on the results of the tensor-based preprocessing. The algorithm is evaluated on simulated and real EEG data. Furthermore, we propose several improvements of a source imaging method based on structured sparsity. Finally, a comprehensive performance study of various brain source imaging methods is conducted on physiologically plausible, simulated EEG data.
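As a rough illustration of the ICA-based artifact-removal step that the thesis improves upon, the generic pipeline could look like the sketch below, here using scikit-learn's FastICA rather than the proposed semi-algebraic deflation algorithm. The choice of which components to treat as muscle artifacts is assumed to be made separately (e.g., by inspection or a heuristic).

import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_components(eeg, artifact_idx):
    """eeg: array of shape (n_samples, n_channels); artifact_idx: list of component indices."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)        # estimated independent sources
    sources[:, artifact_idx] = 0.0          # zero out the artifact components
    return ica.inverse_transform(sources)   # back-project to channel space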
93

A Comparison of Standard Denoising Methods for Peptide Identification

Carpenter, Skylar 01 May 2019 (has links)
Peptide identification using tandem mass spectrometry depends on matching the observed spectrum with the theoretical spectrum. The raw data from tandem mass spectrometry, however, are often not optimal because they may contain noise or measurement errors. Denoising these data can improve the alignment between observed and theoretical spectra and reduce the number of peaks. The method of Lewis et al. (2018) uses a combined constant and moving threshold to denoise spectra. We compare the effects of applying the standard preprocessing methods of baseline removal, wavelet smoothing, and binning to spectra alongside Lewis et al.'s threshold method. We consider the individual methods and their combinations, using distance measures derived from Lewis et al.'s scoring function for comparison. We found that no single standard method gave better results than Lewis et al.'s approach, but combining these techniques with it reduced both the distance measurements and the size of the data set for many peptides.
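A minimal sketch of a combined constant-plus-moving-threshold filter of the kind attributed to Lewis et al. above might look as follows; the threshold fractions and window width are illustrative assumptions, not the published parameters.

import numpy as np

def threshold_denoise(mz, intensity, const_frac=0.01, window=50.0, local_frac=0.1):
    """mz, intensity: 1-D numpy arrays of peak positions (m/z) and heights."""
    keep = np.zeros(len(mz), dtype=bool)
    const_cut = const_frac * intensity.max()                  # constant threshold
    for i, (m, inten) in enumerate(zip(mz, intensity)):
        in_window = np.abs(mz - m) <= window / 2              # peaks near this m/z
        local_cut = local_frac * intensity[in_window].max()   # moving (local) threshold
        keep[i] = inten >= max(const_cut, local_cut)
    return mz[keep], intensity[keep]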
94

Denoising and Segmentation of MCT Slice Images of Leather Fiber

Hua, Yuai, Lu, Jianmei, Zhang, Huayong, Cheng, Jinyong, Liang, Wei, Li, Tianduo 26 June 2019 (has links)
The braiding structure of leather fibers is not yet clearly understood, and it is both useful and interesting to study it. Microscopic X-ray tomography (MCT) technology can produce cross-sectional images of leather without destroying its structure, and the three-dimensional structure of leather fibers can be reconstructed from MCT slice images so as to reveal the braiding structure and its regularity. Denoising and segmentation of MCT slice images of leather fibers is the basic procedure for this three-dimensional reconstruction. In order to study the braiding structure of leather fibers comprehensively, MCT slices of resin-embedded leather fibers and of in-situ leather fibers were analyzed and processed. The resin-embedded slices turned out to be quite different from the in-situ slices: the in-situ slice images can be denoised relatively easily, whereas denoising the resin-embedded slice images is a challenge because of their strong noise. In addition, some fiber bundles adhere to each other in the slice images and are difficult to segment. Many image denoising and segmentation methods exist, but no general method handles all types of images. In this paper, a series of computer-aided denoising and segmentation algorithms is designed for in-situ and resin-embedded MCT slice images of leather fibers. The fiber bundles in wide-field MCT images are densely distributed and adherent to each other, and bundles that are separated in one image may be tightly bound in another, which makes segmentation difficult. To address this, the following segmentation methods are used: grayscale-threshold segmentation, region-growing segmentation, and three-dimensional image segmentation. The proposed denoising and segmentation algorithms perform remarkably well on a series of original and resin-embedded leather fiber MCT slice images, and the resulting three-dimensional images demonstrate the fine spatial braiding structure of leather fibers, helping us to understand it better.
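As a rough illustration of the grayscale-threshold segmentation step mentioned above, a generic single-slice pipeline with standard tools (median denoising, Otsu thresholding, connected-component labeling) could look like the following sketch; this is a common baseline, not the paper's tuned algorithm.

import numpy as np
from skimage import filters, measure

def segment_slice(slice_img):
    """slice_img: 2D array, one MCT slice."""
    denoised = filters.median(slice_img)          # suppress impulsive noise
    thresh = filters.threshold_otsu(denoised)     # global grayscale threshold
    binary = denoised > thresh                    # fiber vs. background
    labels = measure.label(binary)                # label connected fiber bundles
    return labels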
95

Defending against Adversarial Attacks in Speaker Verification Systems

Li-Chi Chang (11178210) 26 July 2021 (has links)
With the advance of Internet-of-Things technologies, smart devices and virtual personal assistants at home, such as Google Assistant, Apple Siri, and Amazon Alexa, have been widely used to control and access different objects like door locks, bulbs, air conditioners, and even bank accounts, which makes our lives convenient. Because of its ease of operation, voice control has become a main interface between users and these smart devices. To make voice control more secure, speaker verification systems have been researched, applying the human voice as a biometric to accurately identify a legitimate user and prevent illegal access. Recent studies, however, have shown that speaker verification systems are vulnerable to different security attacks such as replay, voice cloning, and adversarial attacks. Among these, adversarial attacks are the most dangerous and very challenging to defend against. Currently, there is no known method that can effectively defend against such an attack in speaker verification systems. The goal of this project is to design and implement a defense system that is simple, lightweight, and effective against adversarial attacks on speaker verification. To achieve this goal, we study the audio samples from adversarial attacks in both the time domain and the Mel spectrogram, and find that the generated adversarial audio is simply a clean illegal audio signal with small perturbations that resemble white noise but are well designed to fool speaker verification. Our intuition is that if these perturbations can be removed or modified, adversarial attacks may lose their attacking ability. Therefore, we propose to add a plug-in function module that preprocesses the input audio before it is fed into the verification system. As a first attempt, we study two opposite plug-in functions: denoising, which attempts to remove or reduce the perturbations, and noise-adding, which adds small Gaussian noise to the input audio. We show through experiments that both methods can significantly degrade the performance of a state-of-the-art adversarial attack. Specifically, denoising and noise-adding reduce the targeted success rate of the attack from 100% to only 56% and 5.2%, respectively. Moreover, noise-adding slows the attack down by a factor of 25 and has only a minor effect on the normal operation of a speaker verification system. Therefore, we believe that noise-adding can be applied to any speaker verification system to counter adversarial attacks. To the best of our knowledge, this is the first attempt to apply the noise-adding method to defend against adversarial attacks in speaker verification systems.
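A minimal sketch of the noise-adding plug-in described above follows: small Gaussian noise, scaled relative to the signal RMS, is added to the waveform before it reaches the verifier. The noise level and function name are illustrative assumptions.

import numpy as np

def add_defensive_noise(waveform, noise_ratio=0.01, rng=None):
    """waveform: 1-D float array of audio samples in [-1, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    rms = np.sqrt(np.mean(waveform ** 2))                        # signal level
    noise = rng.normal(0.0, noise_ratio * rms, size=waveform.shape)
    return np.clip(waveform + noise, -1.0, 1.0)                  # perturbed but still intelligible audio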
96

Fast and accurate image registration. Applications to on-board satellite imaging. / Recalage rapide et précis des images. Applications pour l'imagerie satellite

Rais, Martin 09 December 2016 (has links)
This thesis starts with an in-depth study of fast and accurate sub-pixel shift estimation methods. A full comparison is performed based on the shift estimation problems that occur in real-life applications, namely varying SNR conditions, different displacement magnitudes, non-preservation of the brightness constancy constraint, aliasing, and, most importantly, limited computational resources. Based on this study, in collaboration with CNES (the French space agency), two problems that are crucial for the digital optics of Earth-observation satellites are analyzed. We first study the wavefront correction problem in an active optics context. We propose a fast and accurate algorithm to measure the wavefront aberrations on a Shack-Hartmann Wavefront Sensor (SHWFS) observing the Earth. We review state-of-the-art methods for SHWFS used on extended scenes (such as the Earth) and devise a new method for improving wavefront estimation, based on a carefully refined approach built on the optical flow equation. This method takes advantage of the small shifts observed in a closed-loop wavefront correction system, yielding improved accuracy with fewer computational resources. We also propose two validation methods to ensure a correct wavefront estimation on extended scenes: the first is based on a numerical adaptation of the theoretical lower bounds of image registration, while the second rapidly discards unsuitable landscapes based on the gradient distribution, inferred from the eigenvalues of the structure tensor. The second satellite-based application that we address is the numerical design of a new generation of Time Delay Integration (TDI) sensor. In this new concept, active real-time stabilization of the TDI is performed to considerably extend the integration time and therefore boost the SNR of the images. The stripes of the TDI cannot be fused directly by addition because their position is altered by microvibrations, which must be compensated in real time, with high sub-pixel accuracy, using limited on-board computational resources. We study the fundamental performance limits of this problem and propose a real-time solution that nonetheless comes close to the theoretical limits. We introduce a scheme using temporal convolution together with online noise estimation, gradient-based shift estimation, and a non-conventional multi-frame method for measuring global displacements. The obtained results are conclusive in terms of accuracy and complexity and have strongly influenced the final decisions on the future configurations of Earth-observation satellites at CNES. For more complex transformation models, we propose a new image registration method performing accurate and robust model estimation from point matches between images. The difficulty coming from the presence of outliers causes traditional regression methods to fail. In computer vision, RANSAC is certainly the most renowned method that overcomes such difficulties: it discriminates outliers by randomly generating minimal sampled hypotheses and verifying their consensus over the input data. However, its response is based on the single iteration that achieved the largest inlier support, discarding all other generated hypotheses. We show here that the resulting accuracy can be improved by aggregating all hypotheses. We also propose a simple strategy that allows 2D transformations to be averaged rapidly, at an almost negligible extra computational cost. We give practical applications to the estimation of projective transforms and homography-plus-distortion transforms. By including a straightforward adaptation of the locally optimized RANSAC (LO-RANSAC) in our framework, the proposed approach improves over every available state-of-the-art method. A complete analysis of the proposed approach is performed, demonstrating its improved accuracy, stability, and versatility.
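A toy sketch of the hypothesis-aggregation idea described above follows: instead of keeping only the best RANSAC hypothesis, all hypotheses are averaged with weights given by their inlier counts. For simplicity the sketch uses 2D affine transforms, which can be averaged linearly; the thesis handles more general models, so this is only illustrative and all names and thresholds are assumptions.

import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine from src to dst; both (n, 2) arrays, n >= 3."""
    A = np.hstack([src, np.ones((len(src), 1))])            # rows [x y 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)             # (3, 2) solution
    return M.T                                              # (2, 3) affine matrix

def ransac_aggregate(src, dst, iters=200, tol=2.0, rng=None):
    """src, dst: (n, 2) arrays of matched points; returns a (2, 3) averaged affine."""
    rng = np.random.default_rng() if rng is None else rng
    models, weights = [], []
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        M = estimate_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]                   # apply the hypothesis
        inliers = np.sum(np.linalg.norm(pred - dst, axis=1) < tol)
        models.append(M)
        weights.append(inliers)
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), np.stack(models), axes=1)   # inlier-weighted mean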
97

Quelque progrès en débruitage d'images / Advances in Image Denoising

Pierazzo, Nicola 20 September 2016 (has links)
This thesis explores the latest developments in image denoising and attempts to set a new, more coherent background for the different techniques involved. In consequence, it also presents a new image denoising algorithm with minimal artifacts and the best PSNR performance known so far. A first result is DA3D, a frequency-based guided denoising algorithm inspired by DDID [Knaus-Zwicker 2013]. It demonstrates that, contrary to what was thought, frequency-based denoising can beat state-of-the-art algorithms without producing artifacts. The algorithm achieves good results not only in terms of PSNR, but also (and especially) with respect to visual quality; DA3D works particularly well at enhancing image textures and removing staircasing effects. DA3D works on top of another denoising algorithm, used as a guide, and almost always improves its results. In this way, frequency-based denoising can be applied on top of patch-based denoising algorithms, resulting in a hybrid method that keeps the strengths of both. The second result is Multi-Scale Denoising, a framework that allows any denoising algorithm to be applied in a multi-scale fashion. A qualitative analysis shows that current denoising algorithms behave better on high-frequency noise, due to the relatively small size of the patches and search windows they use. Instead of enlarging those patches, which can cause other problems, the work proposes to decompose the image into a pyramid with the aid of the Discrete Cosine Transform. A quantitative study is performed to recompose this pyramid in a way that avoids ringing artifacts. This method removes most of the low-frequency noise and improves both PSNR and visual results on smooth and textured areas. A third main issue addressed in this thesis is the evaluation of denoising algorithms. Experiments indicate that PSNR is not always a good indicator of visual quality, since, for example, an artifact on a smooth area can be more noticeable than a subtle change in a texture. A new metric is proposed to address this: instead of a single value, a "Smooth PSNR" and a "Texture PSNR" are reported, measuring the result of an algorithm on those two types of image regions. We claim that a denoising algorithm, in order to be considered acceptable, must perform well with respect to both metrics. Following this claim, an analysis of current algorithms is performed and compared with the combined results of the Multi-Scale framework and DA3D. We find that the best solution for image denoising is to apply frequency shrinkage to regular regions only, while a multi-scale patch-based method serves as guide. This seems to resolve a long-standing question for which DDID gave the first clue: what are the respective roles of frequency shrinkage and self-similarity-based methods in image denoising? We describe an image denoising algorithm that appears to perform better in quality and PSNR than any other, based on the right combination of both denoising principles. In addition, a study on the feasibility of external denoising is carried out, in which images are denoised by means of a large database of external noiseless patches. This follows work by Levin and Nadler (2011) claiming that state-of-the-art results are achieved with this method if a large enough database is used. The thesis shows that, with some observations, the space of all patches can be factorized, thereby reducing the number of patches needed to achieve this result. Finally, secondary results are presented: a brief study of how to apply denoising algorithms to real RAW images, an improved, better-performing version of the Non-Local Bayes algorithm, and a two-step version of DCT Denoising, the latter interesting for its extreme simplicity and speed.
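As a small illustration of the two-region metric discussed above, PSNR can be computed separately over smooth and textured pixels, with the split decided here by a simple local-gradient criterion; the threshold, window size, and classifier are assumptions and may differ from the thesis.

import numpy as np
from scipy import ndimage

def smooth_texture_psnr(clean, denoised, grad_thresh=5.0):
    """clean, denoised: 2D arrays with values in [0, 255]; returns (smooth PSNR, texture PSNR)."""
    gy, gx = np.gradient(clean.astype(float))
    grad = ndimage.uniform_filter(np.hypot(gx, gy), size=7)   # local mean gradient magnitude
    smooth = grad < grad_thresh                                # smooth-region mask
    def psnr(mask):
        mse = np.mean((clean[mask].astype(float) - denoised[mask]) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse)
    return psnr(smooth), psnr(~smooth)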
98

Algorithms for Guaranteed Denoising of Data and Their Applications

Wang, Jiayuan 01 October 2020 (has links)
No description available.
99

Accelerated T1 and T2 Parameter Mapping and Data Denoising Methods for 3D Quantitative MRI

Zhao, Nan January 2020 (has links)
No description available.
100

A Curvelet Prescreener for Detection of Explosive Hazards in Handheld Ground-Penetrating Radar

White, Julie 11 August 2017 (has links)
Explosive hazards, above and below ground, are a serious threat to civilians and soldiers. In an attempt to mitigate these threats, different forms of explosive hazard detection (EHD) exist, e.g., multi-sensor hand-held platforms, downward-looking and forward-looking vehicle-mounted platforms, etc. Robust detection of these threats relies on the processing and fusion of data from multiple sensing modalities, e.g., radar, infrared, electromagnetic induction (EMI), etc. The focus of this thesis is the implementation of two new algorithms that form a new energy-based prescreener for hand-held ground-penetrating radar (GPR). First, B-scan signal data is curvelet filtered using either Reverse Reconstruction followed by Enhancement (RRE) or selectivity with respect to wedge information in the curvelet transform, called Wedge Selection (WS). Next, the results of a bank of matched filters are aggregated and passed through a size-contrast filter based on the Bhattacharyya distance. Alarms are then combined using weighted mean-shift clustering. Results are reported in terms of receiver operating characteristic (ROC) curve performance on data from a U.S. Army test site that contains multiple target and clutter types, burial depths, and times of day.
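A hedged sketch of the size-contrast scoring step described above follows: the Bhattacharyya distance between the energy values inside a candidate window and those in its surrounding halo, each modeled as a one-dimensional Gaussian. The window geometry and parameter values are assumptions for illustration, not the thesis configuration.

import numpy as np

def bhattacharyya_gaussian(x, y):
    """Bhattacharyya distance between two samples modeled as 1-D Gaussians."""
    m1, m2 = np.mean(x), np.mean(y)
    v1, v2 = np.var(x) + 1e-12, np.var(y) + 1e-12
    return 0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0)) \
         + 0.25 * (m1 - m2) ** 2 / (v1 + v2)

def size_contrast_score(energy_map, r, c, inner=5, outer=15):
    """Contrast of an inner window at (r, c) against its surrounding halo.
    (r, c) must be at least `outer` pixels from the image border."""
    patch = energy_map[r - outer:r + outer + 1, c - outer:c + outer + 1]
    mask = np.zeros(patch.shape, dtype=bool)
    mask[outer - inner:outer + inner + 1, outer - inner:outer + inner + 1] = True
    return bhattacharyya_gaussian(patch[mask], patch[~mask])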
