131

Image Formation from a Large Sequence of RAW Images: Performance and Accuracy

Briand, Thibaud 13 November 2018 (has links)
The aim of this thesis is to build a high-quality color image, containing a low level of noise and aliasing, from a large sequence (e.g. hundreds or thousands) of RAW images taken with a consumer camera. This is a challenging problem requiring on-the-fly demosaicking, denoising and super-resolution. Existing algorithms produce high-quality images, but the number of input images is limited by severe computational and memory costs. In this thesis we propose an image fusion algorithm that processes the images sequentially, so that the memory cost depends only on the size of the output image. After a preprocessing step, the mosaicked (or CFA) images are aligned in a common coordinate system using a two-step registration method that we introduce. Then, a color image is computed by accumulation of the irregularly sampled data using classical kernel regression. Finally, the blur introduced is removed by applying the inverse of the corresponding asymptotic equivalent filter (which we introduce). We evaluate the performance and the accuracy of each step of our algorithm on synthetic and real data. We find that for a large sequence of RAW images, our method successfully performs super-resolution, and the residual noise decreases as expected. We obtain results similar to those of slower and more memory-intensive methods. As generating synthetic data requires an interpolation method, we also study in detail trigonometric polynomial and B-spline interpolation methods, and derive from this study new fine-tuned interpolation methods.
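The sequential accumulation step can be pictured with a short sketch: each registered sample updates running numerator and denominator buffers, so memory stays proportional to the output image regardless of the number of input frames. The kernel width `h`, the splat radius, and the synthetic sample generation below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def accumulate_samples(num, den, xs, ys, vals, h=0.7, radius=2):
    """Splat irregularly sampled values (xs, ys, vals) into running
    numerator/denominator buffers using a Gaussian kernel of width h.
    Memory use depends only on the output size (num.shape)."""
    H, W = num.shape
    for x, y, v in zip(xs, ys, vals):
        i0, j0 = int(round(y)), int(round(x))
        for i in range(max(0, i0 - radius), min(H, i0 + radius + 1)):
            for j in range(max(0, j0 - radius), min(W, j0 + radius + 1)):
                w = np.exp(-((i - y) ** 2 + (j - x) ** 2) / (2 * h * h))
                num[i, j] += w * v
                den[i, j] += w

# Sequential fusion: one frame at a time, constant memory.
H, W = 256, 256
num = np.zeros((H, W)); den = np.zeros((H, W))
rng = np.random.default_rng(0)
for _ in range(10):  # stand-in for hundreds of registered RAW frames
    xs = rng.uniform(0, W - 1, 5000); ys = rng.uniform(0, H - 1, 5000)
    vals = rng.uniform(0, 1, 5000)   # registered sample values (assumed)
    accumulate_samples(num, den, xs, ys, vals)
fused = num / np.maximum(den, 1e-12)  # kernel regression estimate
```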
132

Subspace Methods and Bio-Inspired Optimization Algorithms for Denoising of Multidimensional Signals and Applications

Zidi, Abir 12 June 2017 (has links)
This thesis is devoted to the study of matrix and tensor ranks of multidimensional signals and to the development of methods for estimating these ranks in the framework of the wavelet transform. For this study, we used the wavelet packet decomposition and multilinear algebra. A bio-inspired stochastic optimization method was adapted, with the ultimate objective of suppressing noise in multidimensional images. To this end, we estimated the dimensions of the tensor signal subspace for all modes of the wavelet packet coefficients. We applied the proposed denoising methods to various multidimensional images: RGB images, multispectral images extracted from hyperspectral images of metal parts, plant fluorescence images, and multispectral X-ray images. Finally, a comparative study was carried out with three main types of algorithms: first, the Perona-Malik method based on diffusion; second, the truncation of HOSVD (Higher-Order Singular Value Decomposition) and MWF (Multiway Wiener Filtering); and third, a method based on wavelet packet decomposition and MWF, where the dimensions of the signal subspace are estimated by a statistical criterion rather than by an optimization method. The results are promising in terms of denoising against ground truth. Ultimately, we achieve an advantageous time saving during the processing of hyperspectral images.
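As a rough illustration of the subspace idea, a truncated HOSVD of a coefficient tensor keeps only the leading singular vectors of each mode unfolding. The ranks `(1, 1, 1)` below stand in for the values the thesis estimates by bio-inspired optimization; the toy data is not from the thesis.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3D tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd_denoise(T, ranks):
    """Project a 3D tensor onto the leading `ranks` left singular
    vectors of each mode unfolding (truncated HOSVD)."""
    Us = []
    for mode, k in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :k])
    out = T
    for mode, U in enumerate(Us):
        M = unfold(out, mode)
        M = U @ (U.T @ M)               # multilinear projection on mode
        out = np.moveaxis(M.reshape(U.shape[0], *np.delete(out.shape, mode)), 0, mode)
    return out

# Toy use: rank-one tensor plus noise, then subspace projection.
rng = np.random.default_rng(1)
clean = np.einsum('i,j,k->ijk', rng.normal(size=32), rng.normal(size=32), rng.normal(size=8))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = truncated_hosvd_denoise(noisy, ranks=(1, 1, 1))
```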
133

Digital Night Vision: Automatic Restoration and Multimodal Registration of Low-Light-Level Images

Sutour, Camille 10 July 2015 (has links)
Night vision for helicopter pilots is artificially enhanced by a night vision system. It consists of a light intensifier (LI) coupled with a digital camera, on one hand, and an infrared (IR) camera on the other. The goal of this thesis is to improve this device by identifying its defects in order to correct them. The first part consists in reducing the noise that affects the LI images. This requires evaluating the nature of the noise corrupting these images, so an automatic noise estimation method has been developed. The estimation is based on a non-parametric detection of homogeneous areas of the image. The noise statistics are then estimated from these homogeneous regions by performing a robust l1 estimation of the noise level function. Using this noise estimate, the LI images can be denoised. In the second part we have developed a denoising algorithm that combines non-local means with variational methods by applying an adaptive regularization weighted by a non-local data-fidelity term. This algorithm is then adapted to video denoising using the redundancy provided by the image sequence, guaranteeing temporal stability and preservation of fine structures. Finally, in the third part, data from the optical and infrared sensors are registered in a common reference frame. We propose an edge-based multimodal registration criterion. Combined with a gradient-ascent optimization and a temporal scheme, the proposed method allows robust registration of the two modalities for later fusion.
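A minimal sketch of the homogeneous-block idea: collect (mean, variance) pairs from the flattest blocks and fit an affine noise level function with an L1-robust fit via iteratively reweighted least squares. Block size, the fraction of blocks kept, and the affine model are illustrative assumptions, not the thesis's exact detection test.

```python
import numpy as np

def noise_level_function(img, block=8, keep=0.2, iters=20):
    """Estimate an affine noise level function var = a*mean + b from
    the most homogeneous blocks, with an L1 fit via IRLS."""
    H, W = img.shape
    means, vars_ = [], []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            p = img[i:i+block, j:j+block]
            means.append(p.mean()); vars_.append(p.var())
    means = np.array(means); vars_ = np.array(vars_)
    idx = np.argsort(vars_)[: int(keep * len(vars_))]  # keep flattest blocks
    x, y = means[idx], vars_[idx]
    A = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(y)
    for _ in range(iters):  # IRLS: weights 1/|residual| approximate l1
        coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        r = np.abs(y - A @ coef)
        w = 1.0 / np.maximum(r, 1e-6) ** 0.5
    return coef  # (a, b): noise variance as a function of intensity
```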
134

Towards Automatic Segmentation of the Organs at Risk in Brain Cancer Context via a Deep Learning Classification Scheme

Dolz, Jose 15 June 2016 (has links)
Brain cancer is a leading cause of death and disability worldwide, accounting for 14.1 million new cancer cases and 8.2 million deaths in 2012 alone. Radiotherapy and radiosurgery are among the arsenal of available techniques to treat it. Because both techniques involve the delivery of a very high dose of radiation, the tumor as well as the surrounding healthy tissues must be precisely delineated. In practice, delineation is performed manually by experts, with little machine assistance. It is thus a highly time-consuming process with significant variation between the contours produced by different experts. Radiation oncologists, radiology technologists, and other medical specialists therefore spend a substantial portion of their time on medical image segmentation. If, by automating this process, it were possible to achieve a more repeatable set of contours that the majority of oncologists can agree upon, this would improve the quality of treatment. Additionally, any method that reduces the time taken to perform this step increases patient throughput and makes more effective use of the oncologist's skills.
Nowadays, automatic segmentation techniques are rarely employed in clinical routine. When they are, they typically rely on registration approaches. In these techniques, anatomical information is exploited by means of images already annotated by experts, referred to as atlases, which are deformed and matched to the patient under examination. The quality of the deformed contours directly depends on the quality of the deformation. Registration techniques, however, rely on regularization models of the deformation field whose parameters are complex to adjust and whose quality is difficult to evaluate. Tools that assist in the segmentation task are therefore highly desirable in clinical practice.
The main objective of this thesis is to provide radio-oncology specialists with automatic tools to delineate the organs at risk of patients undergoing brain radiotherapy or stereotactic radiosurgery. To achieve this goal, the main contributions of this thesis are presented along two major axes. First, we consider the use of one of the latest hot topics in artificial intelligence to tackle the segmentation problem, namely deep learning. This set of techniques presents some advantages over classical machine learning methods, which are exploited throughout this thesis. The second axis is dedicated to image features mainly associated with texture and contextual information of MR images. These features, which are absent from classical machine learning methods used to segment brain structures, lead to improvements in segmentation performance. We therefore propose the inclusion of these features into a deep network. We demonstrate in this work the feasibility of using such a deep learning based classification scheme for this particular problem, and show that the proposed method leads to high performance, both in accuracy and efficiency. We also show that the automatic segmentations provided by our method lie within the variability of the experts. The results demonstrate that our method not only outperforms a state-of-the-art classifier, but also provides results that would be usable in radiation treatment planning.
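As a hedged illustration of a patch-wise classification scheme (not the thesis's exact architecture), a small 3D CNN can map an MR patch to an organ-at-risk label. Patch size, channel counts, and the number of classes below are assumptions.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Toy 3D CNN that classifies the center voxel of an MR patch
    into one of n_classes organ-at-risk labels."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 6 * 6, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, 24, 24, 24) patches
        return self.head(self.features(x))

logits = PatchClassifier()(torch.randn(4, 1, 24, 24, 24))  # -> (4, 5)
```

In the spirit of the second axis described above, extra texture or context features could be concatenated to the flattened activations before the fully connected head.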
135

Characterization and application of analysis methods for ECG and time interval variability data

Tikkanen, P. (Pauli) 09 April 1999 (has links)
Abstract: The quantitation of the variability in cardiovascular signals provides information about the autonomic neural regulation of the heart and the circulatory system. Several factors have an indirect effect on these signals, and artifacts and several types of noise are contained in the recorded signal. The dynamics of RR and QT interval time series have also been analyzed in order to predict and diagnose the risk of adverse cardiac events. Ambulatory measurement is an important and demanding setting for the recording and analysis of these signals, so sophisticated and robust signal analysis schemes are increasingly needed. In this thesis, essential points related to ambulatory data acquisition and analysis of cardiovascular signals are discussed, including the accuracy and reproducibility of the variability measurement. The origin of artifacts in RR interval time series is discussed, and their effects and possible correction procedures are considered. Time series that include intervals differing from normal sinus rhythm sometimes carry important information, but may not as such be suitable for analysis by all approaches. A significant variation in the results of either intra- or inter-subject analysis is unavoidable and should be kept in mind when interpreting the results. In addition to heart rate variability (HRV) measurement using RR intervals, the dynamics of ventricular repolarization duration (VRD) is considered using the invasively obtained action potential duration (APD) and different estimates of the QT interval taken from a surface electrocardiogram (ECG). Estimating the low quantity of VRD variability obviously involves potential errors and stricter requirements. In this study, the accuracy of VRD measurement was improved by a better time resolution obtained through interpolating the ECG. Furthermore, the RTmax interval was chosen as the best QT interval estimate using simulated noise tests. A computer program was developed for time interval measurement from ambulatory ECGs. This thesis reviews the most commonly used analysis methods for cardiovascular variability signals, including time and frequency domain approaches. The estimation of the power spectrum is presented using an autoregressive (AR) model of the time series, and a method for estimating the powers and spectra of its components is also presented. Time-frequency and time-variant spectral analysis schemes with applications to HRV analysis are presented. As a novel approach, wavelet and wavelet packet transforms and the theory of signal denoising with several principles for threshold selection are examined. The wavelet packet based noise removal approach makes use of an optimized signal decomposition scheme called the best tree structure. Wavelet and wavelet packet transforms are further tested for their efficiency in removing simulated noise from the ECG. Power spectrum analysis is examined by means of wavelet transforms, which are then applied to estimate nonstationary RR interval variability. Chaotic modelling is discussed with important questions related to HRV analysis.
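To make the AR spectral step concrete, here is a minimal sketch (model order, sampling rate, and the synthetic RR series are assumptions, not the thesis's code) that fits an AR model to a resampled RR series via the Yule-Walker equations and evaluates its power spectrum:

```python
import numpy as np

def ar_psd(rr, order=8, nfft=512, fs=4.0):
    """Fit an AR model to a (resampled) RR series with Yule-Walker
    and return frequencies and the AR power spectral density."""
    x = rr - rr.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)  # autocorr
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])          # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]              # innovation variance
    freqs = np.linspace(0, fs / 2, nfft)
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1 - sum(a[k] * z ** (k + 1) for k in range(order))
    return freqs, sigma2 / np.abs(denom) ** 2

# Example: LF/HF band powers could then be read off the PSD.
rng = np.random.default_rng(2)
t = np.arange(512) / 4.0
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * rng.normal(size=512)
f, psd = ar_psd(rr)
```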
136

Image Denoising Using Weighted Local Regression

Šťasta, Jakub January 2017 (has links)
The problem of accurately simulating light transport using Monte Carlo integration can be very difficult. In particular, scenes with complex illumination effects or complex materials can converge very slowly and demand a lot of computational time. To overcome this problem, image denoising algorithms have become popular in recent years. In this work we first review known approaches to denoising and adaptive rendering. We implement one of the promising algorithms, by Moon et al. [2014], in the commercial rendering system Corona Standalone Renderer, and evaluate its performance, strengths and weaknesses on 14 test scenes. These include rendering effects that are difficult to denoise and slow to converge, such as fine sub-pixel geometry, participating media, extreme depth of field, highlights, motion blur, and others. We propose corrections which make the algorithm more stable and robust. We show that it is possible to denoise renderings with weighted local regression using only a CPU. However, even after our corrections, it is not possible to filter scenes consistently without over-blurring or leaving some regions unfiltered.
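A bare-bones sketch of the weighted local regression idea (illustrative only; Moon et al.'s method additionally uses auxiliary feature buffers and adaptive bandwidth selection): each pixel is re-estimated by a weighted first-order fit over its neighborhood.

```python
import numpy as np

def wlr_denoise(img, radius=3, h=0.4):
    """Re-estimate each pixel by first-order weighted local regression
    on its (2*radius+1)^2 neighborhood; weights combine spatial and
    range distances (the bandwidth h is an illustrative choice)."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            y = img[i0:i1, j0:j1].ravel()
            dx = (ii - i).ravel(); dy = (jj - j).ravel()
            w = np.exp(-(dx**2 + dy**2) / (2 * radius**2)
                       - (y - img[i, j])**2 / (2 * h**2))
            A = np.stack([np.ones_like(y), dx, dy], axis=1)
            Aw = A * w[:, None]
            # Weighted normal equations: (A^T W A) beta = A^T W y
            beta = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)[0]
            out[i, j] = beta[0]  # intercept = regression value at center
    return out
```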
137

Feasibility and Advantages of Real-Time Monitoring of Diaphragmatic Ventilatory Fatigue in Anesthesiology and in the Intensive Care Unit

Morel, Guy Louis 04 September 2014 (has links)
Muscular activity can be described in terms of performance and fatigue. The diaphragmatic muscle is characterized by its resistance to fatigue, making it a good indicator of ventilatory autonomy. While of clinical interest, measuring its fatigue state is difficult. We approached this measurement by analyzing the electrical diaphragmatic signal gathered from direct contact recordings. Obtaining the relevant parameters requires signal processing. We developed the tools to record and to filter this signal, and validated them in clinical settings during anesthesia and intensive care. A multielectrode probe and the associated hardware and software were developed for signal recording. The subsequent filtering compared real-time denoising methods on RISC-ARM processors: algorithms based on two types of wavelets (MuRw, LiFw) and a morphological filter (MoFi). MuRw was ultimately chosen as the best compromise between computation time and signal-to-noise ratio. Clinical evaluation of patients and healthy volunteers demonstrated the relevance of the frequency parameters of the filtered diaphragmatic electrical activity as indicators of its fatigue state, in particular the high-to-low frequency ratio obtained by spectral analysis.
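The high-to-low frequency ratio can be sketched as follows; the band edges and sampling rate here are illustrative assumptions, not the thesis's values.

```python
import numpy as np
from scipy.signal import welch

def high_low_ratio(emg, fs=1000.0, low=(20, 45), high=(130, 250)):
    """Ratio of high-band to low-band power of a diaphragmatic EMG,
    estimated from a Welch periodogram. Band edges are illustrative."""
    f, psd = welch(emg, fs=fs, nperseg=1024)
    p_low = psd[(f >= low[0]) & (f < low[1])].sum()
    p_high = psd[(f >= high[0]) & (f < high[1])].sum()
    return p_high / p_low  # a drop over time suggests fatigue

rng = np.random.default_rng(3)
emg = rng.normal(size=10_000)  # stand-in for a filtered EMG segment
ratio = high_low_ratio(emg)
```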
138

Denoising and Inpainting of Textured Images via Anisotropic Diffusion

Marcos Proença de Almeida 07 December 2016 (has links)
In this work, two image restoration techniques are presented, complemented and improved: one addressing the image inpainting/object removal problem, the other the image denoising problem. In both cases, the core idea is to process images containing textures and other features perceptible to a human observer, such as patterns, contours, structures and oscillatory information. The image inpainting technique combines anisotropic diffusion, texture synthesis, dynamic search and a mechanism to set the order of priority during the image completion process. More precisely, given an image and a target region to be inpainted, an anisotropic diffusion technique is applied in order to generate a saliency map containing edges, structures and other low-frequency parts of the image. Next, a priority mechanism based on a new biased confidence term is computed from this map together with the transport equation, to define the priority of the pixels during the filling procedure. To accomplish this task, the presented approach employs a novel similarity measure between blocks of pixels (sampled dynamically to speed up the process) in order to find the best candidates to be placed in the damaged regions. The denoising technique combines the theory of anisotropic diffusion, harmonic analysis techniques and numerical models into a regularized partial differential equation, which diffuses pixels more strongly in homogeneous regions of the image while acting gently in regions formed by textures and edges, thus preserving that information. Moreover, the proposed PDE aims at recovering texturized regions degraded during the denoising process by employing harmonic analysis tools. A theoretical and experimental validation of this PDE and a study of the parametric adjustment of the PDE-based denoising method were carried out in this work. The effectiveness and performance of the proposed approaches are attested through a comprehensive set of comparisons against other representative techniques in the literature.
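For reference, a minimal Perona-Malik style anisotropic diffusion step (the classical baseline these techniques build on, not the thesis's regularized PDE) looks like this; iteration count, conductance parameter, and time step are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classical Perona-Malik diffusion: smooth homogeneous areas
    while the conductance g = exp(-(|grad|/kappa)^2) slows diffusion
    across strong edges. Boundaries wrap via np.roll (a shortcut)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Differences toward the four neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```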
139

A Hybrid Genetic Algorithm for Image Denoising

Jônatas Lopes de Paiva 01 December 2015 (has links)
Digital images are used for many purposes, ranging from a simple picture with friends to the identification of diseases in medical exams. Even though image acquisition technology has evolved, every digitally acquired image contains intrinsic noise, normally introduced during capture or transmission. The big challenge in this kind of problem is to recover the image while losing as few of its important features as possible, such as corners, edges and textures. This work proposes an approach based on a Hybrid Genetic Algorithm (HGA) to deal with this problem. The HGA combines a genetic algorithm with some of the best image denoising methods found in the literature, using them as local search operators. The HGA was tested on benchmark images corrupted with additive white Gaussian noise at several standard deviation levels. Its results, measured by the PSNR and SSIM metrics, were compared with the results obtained by different methods. The HGA was also tested on SAR (Synthetic Aperture Radar) images corrupted by multiplicative speckle noise, and its results were compared against methods specialized in restoring SAR images. Through this hybrid approach, the HGA obtained competitive results in both types of tests, in many cases surpassing the methods from the literature.
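A schematic of the hybrid loop (all operators, the pool of denoisers, and the population parameters below are illustrative stand-ins, not the thesis's exact configuration): candidate restored images are evolved by selection, crossover and mutation, with off-the-shelf denoisers applied as local search operators.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def psnr(x, ref):
    mse = np.mean((x - ref) ** 2)
    return 10 * np.log10(1.0 / max(mse, 1e-12))  # images in [0, 1]

def hybrid_ga(noisy, denoisers, fitness, pop_size=8, gens=30, rng=None):
    """Evolve candidate restored images; denoisers act as local search."""
    rng = rng or np.random.default_rng()
    pop = [noisy + 0.01 * rng.normal(size=noisy.shape) for _ in range(pop_size)]
    for _ in range(gens):
        pop = [denoisers[rng.integers(len(denoisers))](ind) for ind in pop]
        order = np.argsort([fitness(ind) for ind in pop])[::-1]
        elite = [pop[i] for i in order[:pop_size // 2]]   # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.integers(len(elite), size=2)
            mask = rng.random(noisy.shape) < 0.5          # uniform crossover
            child = np.where(mask, elite[a], elite[b])
            child += 0.005 * rng.normal(size=child.shape)  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Example with two stand-in denoisers; a ground-truth PSNR fitness is
# used for illustration (a no-reference quality measure would replace it).
rng = np.random.default_rng(5)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoisers = [lambda x: gaussian_filter(x, 1.0), lambda x: median_filter(x, 3)]
best = hybrid_ga(noisy, denoisers, fitness=lambda x: psnr(x, clean), rng=rng)
```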
140

New strategies for the identification and enumeration of macromolecules in 3D images of cryo electron tomography

Moebel, Emmanuel 01 February 2019 (has links)
Cryo-electron tomography (cryo-ET) is an imaging technique capable of producing 3D views of biological specimens. This technology makes it possible to capture large fields of view of vitrified cells at nanometer resolution. These features allow several scales of understanding of the cellular machinery to be combined, from the interactions between groups of proteins to their atomic structure. Cryo-ET therefore has the potential to act as a link between in vivo cell imaging and atomic resolution techniques. However, cryo-ET images suffer from a high level of noise and imaging artifacts, and their interpretability heavily depends on computational image analysis methods. Existing methods can identify large macromolecules such as ribosomes, but there is evidence that the detections are incomplete. In addition, these methods are limited when the searched objects are smaller or have more structural variability. The purpose of this thesis is to propose new image analysis methods in order to enable a more robust identification of macromolecules of interest. We propose two computational methods to achieve this goal. The first aims at reducing noise and imaging artifacts, and operates by iteratively adding artificial noise to the image and removing it. We provide both mathematical and experimental evidence that this concept enhances signal in cryo-ET images. The second method builds on recent advances in machine learning to improve macromolecule localization. It is based on a convolutional neural network, and we show how it can be adapted to achieve better detection rates than the current state of the art.
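A toy sketch of the add-then-remove idea: artificial noise is injected, a restoration operator removes it, and the results are averaged over iterations. The Gaussian filter below is only a stand-in denoiser, and the iteration count and noise level are assumptions; the thesis's actual operator and analysis are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_noise_enhance(vol, n_iter=10, sigma_noise=0.05, rng=None):
    """Iteratively inject artificial Gaussian noise and denoise,
    averaging the results; a stand-in denoiser (Gaussian filter)
    replaces the method's actual restoration operator."""
    rng = rng or np.random.default_rng()
    acc = np.zeros_like(vol, dtype=float)
    for _ in range(n_iter):
        noisy = vol + sigma_noise * rng.normal(size=vol.shape)  # add step
        acc += gaussian_filter(noisy, sigma=1.0)                # remove step
    return acc / n_iter

vol = np.random.default_rng(4).normal(size=(32, 64, 64))  # toy tomogram
enhanced = iterative_noise_enhance(vol)
```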
