1.
Methods for large volume image analysis: applied to early detection of Alzheimer's disease by analysis of FDG-PET scans. Kodewitz, Andreas. 18 March 2013.
In this thesis we explore novel image analysis methods for the early detection of metabolic changes in the human brain caused by Alzheimer's disease (AD). We present two methodological contributions and their application to a real-life data set.
We present a machine-learning-based method to create a map of the local distribution of classification-relevant information in an image set. The method can be applied using different image characteristics, which makes it adaptable to many kinds of images. The resolution of the generated maps can be refined as desired, and the maps are well localized and fully consistent with prior findings based on voxel-wise statistics. Further, we present an algorithm to draw a Monte-Carlo sample of patches according to the distribution given by such a map. Implementing a patch-based classification procedure that uses this algorithm for data reduction, we were able to significantly reduce the number of patches that must be analyzed in order to obtain good classification results.

We also present a novel non-negative tensor factorization (NTF) algorithm for the decomposition of large higher-order tensors. This algorithm considerably reduces memory consumption and avoids memory overhead, allowing the fast decomposition even of tensors with very unbalanced dimensions. We apply this algorithm as the feature extraction method in a computer-aided diagnosis (CAD) scheme designed to recognize early-stage AD and mild cognitive impairment (MCI) using fluorodeoxyglucose (FDG) positron emission tomography (PET) scans only, and achieve state-of-the-art classification rates.
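The patch-sampling step described above can be illustrated with a short sketch. All names and parameters below are illustrative, not the thesis's actual implementation; it assumes only that an importance map assigns each voxel a non-negative weight:

```python
import numpy as np

def sample_patches(importance_map, patch_size=8, n_patches=50, rng=None):
    """Draw patch-centre positions with probability proportional to an
    importance map, as a Monte-Carlo data-reduction step (sketch)."""
    rng = np.random.default_rng(rng)
    # Restrict to centres where the full patch fits inside the volume.
    half = patch_size // 2
    valid = importance_map[half:-half, half:-half, half:-half]
    probs = valid.ravel() / valid.sum()
    idx = rng.choice(valid.size, size=n_patches, p=probs)
    # Convert flat indices back to 3-D voxel coordinates.
    coords = np.stack(np.unravel_index(idx, valid.shape), axis=1) + half
    return coords  # (n_patches, 3) voxel coordinates of patch centres

# Toy example: a 32^3 "volume" whose importance peaks at the centre.
grid = np.indices((32, 32, 32))
dist2 = sum((g - 16) ** 2 for g in grid)
imp = np.exp(-dist2 / 50.0)
centres = sample_patches(imp, patch_size=8, n_patches=100, rng=0)
```

Patches centred on the sampled coordinates would then be fed to feature extraction and classification, so that highly informative regions are visited far more often than uninformative background.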
2.
Importance Prioritised Image Coding in JPEG 2000. Nguyen, Anthony Ngoc. January 2005.
Importance prioritised coding is a principle aimed at improving the interpretability (or image content recognition) versus bit-rate performance of image coding systems. This can be achieved by (1) detecting and tracking image content or regions of interest (ROIs) that are crucial to the interpretation of an image, and (2) compressing them in a manner that enables ROIs to be encoded with higher fidelity and prioritised for dissemination or transmission. Traditional image coding systems prioritise image data according to an objective measure of distortion, and this measure does not correlate well with image quality or interpretability. Importance prioritised coding, on the other hand, aims to prioritise image contents according to an 'importance map', which provides a means for modelling and quantifying the relative importance of parts of an image. In such a coding scheme the importance of parts of an image containing ROIs would be higher than that of other parts, and the encoding and prioritisation of ROIs means that interpretability in these regions would be improved at low bit-rates. An importance prioritised image coder incorporated within the JPEG 2000 international standard for image coding, called IMP-J2K, is proposed to encode and prioritise ROIs according to an importance map. The map can be generated automatically using image processing algorithms that yield a limited number of ROIs, or constructed manually by hand-marking ROIs using a priori knowledge. The proposed coder gives the user of the encoder great flexibility in defining single or multiple ROIs with arbitrary degrees of importance and prioritising them using IMP-J2K. Furthermore, IMP-J2K codestreams can be reconstructed by generic JPEG 2000 decoders, which is important for interoperability between imaging systems and processes.
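The prioritisation principle can be sketched independently of the codec. The sketch below is not IMP-J2K; real JPEG 2000 rate allocation orders coding passes by rate-distortion slope, and the names here are hypothetical. It only shows how weighting per-block distortion gains by an importance map changes the transmission order:

```python
import numpy as np

def prioritise_blocks(distortion_gain, importance, block_ids):
    """Order code blocks so that bits go first to blocks whose distortion
    reduction, weighted by ROI importance, is largest (sketch)."""
    score = distortion_gain * importance
    order = np.argsort(-score)  # descending importance-weighted gain
    return [block_ids[i] for i in order]

gains = np.array([4.0, 9.0, 1.0, 6.0])   # per-block distortion reduction
imp   = np.array([0.2, 0.9, 0.9, 0.1])   # importance-map score per block
ids   = ["B0", "B1", "B2", "B3"]
order = prioritise_blocks(gains, imp, ids)
```

Note how B3, despite a large raw distortion gain, is demoted behind the high-importance blocks; a purely distortion-driven coder would transmit it earlier.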
The interpretability performance of IMP-J2K was quantitatively assessed using the subjective National Imagery Interpretability Rating Scale (NIIRS). The effect of importance prioritisation on image interpretability was investigated, and a methodology to relate the NIIRS ratings, ROI importance scores and bit-rates was proposed to facilitate NIIRS specifications for importance prioritised coding. In addition, a technique is proposed to construct an importance map by allowing a user of the encoder to use gaze patterns to automatically determine and assign importance to fixated regions (or ROIs) in an image. The importance map can be used by IMP-J2K to bias the encoding of the image to these ROIs, and subsequently to allow a user at the receiver to reconstruct the image as desired by the user of the encoder. Ultimately, with the advancement of automated importance mapping techniques that can reliably predict regions of visual attention, IMP-J2K may play a significant role in matching an image coding scheme to the human visual system.
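A gaze-driven importance map of the kind described can be sketched as a sum of Gaussians centred on fixation points (hypothetical function and parameters; the actual construction used with IMP-J2K is not reproduced here):

```python
import numpy as np

def gaze_to_importance_map(shape, fixations, sigma=10.0):
    """Build a 2-D importance map by placing a Gaussian at each gaze
    fixation and normalising the result to [0, 1] (sketch)."""
    ys, xs = np.indices(shape)
    imp = np.zeros(shape, dtype=float)
    for fy, fx in fixations:
        imp += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    return imp / imp.max()

# Two fixations on a 64x64 image; sigma sets how far importance spreads.
imp = gaze_to_importance_map((64, 64), [(20, 20), (40, 50)], sigma=6.0)
```

The resulting map could then bias encoding toward the fixated regions, as the abstract describes.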
3.
Accelerating Monte Carlo particle transport with adaptively generated importance maps. Nowak, Michel. 12 October 2018.
Monte Carlo methods are a reference asset for the study of radiation transport in shielding problems. Their use naturally implies the sampling of rare events and needs to be tackled with variance reduction methods. These methods require the definition of an importance function/map. The aim of this study is to propose an adaptive strategy for the generation of such importance maps during the Monte Carlo simulation itself. The work was performed within TRIPOLI-4®, a Monte Carlo transport code developed at the nuclear energy division of CEA in Saclay, France.
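The splitting idea behind such rare-event variance reduction can be illustrated with a fixed-level toy model (the thesis relies on Adaptive Multilevel Splitting, where the levels are chosen on the fly; all names and parameters below are illustrative):

```python
import numpy as np

def multilevel_splitting(n_walkers=200, levels=(1.0, 2.0, 3.0), n_split=5,
                         n_steps=50, step_std=0.1, rng=None):
    """Fixed-level splitting estimate of the rare-event probability that a
    Gaussian random walk exceeds the top level within n_steps (sketch)."""
    rng = np.random.default_rng(rng)
    # Each walker carries (current position, steps already used).
    walkers = [(0.0, 0) for _ in range(n_walkers)]
    for li, level in enumerate(levels):
        survivors = []
        for pos, used in walkers:
            for t in range(used, n_steps):
                pos += rng.normal(0.0, step_std)
                if pos >= level:
                    survivors.append((pos, t + 1))
                    break
        if li < len(levels) - 1:
            # Clone each survivor; clones diverge via later random steps.
            walkers = [w for w in survivors for _ in range(n_split)]
        else:
            walkers = survivors
    # Each split inflated the population, so divide that factor back out.
    return len(walkers) / (n_walkers * n_split ** (len(levels) - 1))

p_hat = multilevel_splitting(rng=0)
```

Splitting concentrates simulation effort on trajectories that have already progressed toward the rare event, which is the same goal an importance map serves in particle transport.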
The core of this PhD thesis is the implementation of a forward-weighted adjoint score that relies on the trajectories sampled with Adaptive Multilevel Splitting, a robust variance reduction method. It was validated through the integration of a deterministic module in TRIPOLI-4®. Three strategies were proposed for the reintegration of this score as an importance map, and accelerations were observed. Two of these strategies assess the convergence of the adjoint score during exploitation phases by evaluating the figure of merit yielded by the use of the current adjoint score. Finally, the smoothing of the importance map with machine learning algorithms concludes this work, with a special focus on kernel density estimators.
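The kernel-based smoothing mentioned in the conclusion can be sketched as Nadaraya-Watson-style kernel smoothing of noisy score samples (an illustrative stand-in for the kernel density estimators referred to above; names and parameters are hypothetical):

```python
import numpy as np

def kde_smooth(sample_points, sample_scores, query_points, bandwidth=1.0):
    """Smooth a noisy score onto query positions with a Gaussian kernel:
    each query gets a kernel-weighted average of nearby samples (sketch)."""
    diffs = query_points[:, None, :] - sample_points[None, :, :]
    w = np.exp(-np.sum(diffs ** 2, axis=2) / (2 * bandwidth ** 2))
    return (w @ sample_scores) / w.sum(axis=1)

# Toy 1-D example: noisy samples of a smoothly decaying importance profile.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)[:, None]
score = np.exp(-x[:, 0] / 3) + rng.normal(0, 0.05, size=200)
grid = np.linspace(0, 10, 50)[:, None]
smooth = kde_smooth(x, score, grid, bandwidth=0.5)
```

Smoothing a Monte-Carlo-estimated score in this way trades a little bias for much lower variance, which is what makes the result usable as an importance map.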