21

A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques.

Nassar, Alaa S.N. January 2018
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on combining the face with the left and right irises in a unified hybrid multimodal biometric identification system, using different fusion approaches at the score and rank levels. Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which merges the advantages of the Curvelet transform with the Fractal dimension. Secondly, a novel framework, the Multimodal Deep Face Recognition (MDFR) framework, is proposed to address the face recognition problem in unconstrained conditions by merging the advantages of local handcrafted feature descriptors with deep learning approaches. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed; its architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image. Finally, the performance of the unimodal and multimodal systems has been evaluated in extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1 and IITD) and on the SDUMLA-HMT multimodal dataset.
The results demonstrate the superiority of the proposed systems over previous work, achieving new state-of-the-art recognition rates on all the employed datasets while requiring less time to recognize a person's identity. / Higher Committee for Education Development in Iraq
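To make the fusion idea concrete, below is a minimal, hypothetical sketch of score-level fusion for a face-plus-iris identification system. The matchers, weights and score values are invented for illustration and are not taken from the thesis, which evaluates several fusion rules at both score and rank level.

```python
# Illustrative sketch of score-level fusion for a face + iris identification
# system. Scores, weights and gallery size are hypothetical placeholders.
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def weighted_sum_fusion(face_scores, iris_scores, w_face=0.5, w_iris=0.5):
    """Fuse per-candidate match scores from two modalities by weighted sum."""
    return w_face * min_max_normalize(face_scores) + w_iris * min_max_normalize(iris_scores)

# Toy gallery of 4 enrolled identities: higher score = better match.
face_scores = [0.62, 0.91, 0.40, 0.55]   # from a face matcher (hypothetical)
iris_scores = [0.30, 0.85, 0.20, 0.95]   # from an iris matcher (hypothetical)

fused = weighted_sum_fusion(face_scores, iris_scores, w_face=0.4, w_iris=0.6)
print("fused scores:", np.round(fused, 3))
print("identified as gallery subject", int(np.argmax(fused)))
```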
22

Séparation de signaux en mélanges convolutifs : contributions à la séparation aveugle de sources parcimonieuses et à la soustraction adaptative des réflexions multiples en sismique / Signal separation in convolutive mixtures : contributions to blind separation of sparse sources and adaptive subtraction of seismic multiples

Batany, Yves-Marie 14 November 2016
The recovery of correlated signals from their linear combinations is a challenging task with many applications in signal processing. We focus on two problems: the blind separation of sparse sources and the adaptive subtraction of multiple events in seismic processing. A special focus is put on convolutive mixtures: for both problems, finite impulse response filters can be estimated to recover the desired signals. For instantaneous and convolutive mixing models, we give the necessary and sufficient conditions for the exact extraction and separation of sparse sources using the L0 pseudo-norm as a contrast function. Equivalences between sparse component analysis and disjoint component analysis are investigated. For adaptive multiple subtraction, we discuss the limits of methods based on independent component analysis and highlight their equivalence with Lp-norm-based methods. We investigate how the regularization parameters may have more influence on the estimation of the desired primaries. Finally, we propose to improve the robustness of adaptive subtraction by estimating the adaptive convolutive filters directly in the curvelet domain. Computation and memory costs are kept down by using the uniform discrete curvelet transform.
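As a point of reference for the adaptive subtraction problem, here is a small sketch of the classic least-squares (L2) baseline: a short FIR matching filter is estimated to shape a predicted multiple model to the recorded data, and the shaped multiples are subtracted. The thesis goes further with Lp norms and curvelet-domain filters, which are not shown; all signals and filter lengths below are hypothetical.

```python
import numpy as np

def convolution_matrix(m, n_taps):
    """Matrix M such that M @ f equals the first len(m) samples of np.convolve(m, f)."""
    M = np.zeros((len(m), n_taps))
    for j in range(n_taps):
        M[j:, j] = m[:len(m) - j]
    return M

def adaptive_subtraction_l2(data, multiple_model, n_taps=15):
    """Estimate an FIR matching filter by least squares and subtract the shaped multiples."""
    M = convolution_matrix(multiple_model, n_taps)
    f, *_ = np.linalg.lstsq(M, data, rcond=None)
    return data - M @ f, f

# Toy 1-D trace: a primary spike plus a mis-scaled, slightly shifted multiple.
rng = np.random.default_rng(0)
n = 200
primaries = np.zeros(n); primaries[40] = 1.0
true_multiples = np.zeros(n); true_multiples[120] = 0.7
data = primaries + true_multiples + 0.01 * rng.standard_normal(n)
multiple_model = np.zeros(n); multiple_model[118] = 1.0   # imperfect prediction

estimated_primaries, f = adaptive_subtraction_l2(data, multiple_model)
print("residual multiple energy:", float(np.sum(estimated_primaries[100:140] ** 2)))
```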
23

Multiple prediction from incomplete data with the focused curvelet transform

Herrmann, Felix J., Wang, Deli, Hennenfent, Gilles January 2007
Incomplete data represents a major challenge for a successful prediction and subsequent removal of multiples. In this paper, a new method is presented that tackles this challenge in a two-step approach. During the first step, the recently developed curvelet-based recovery by sparsity-promoting inversion (CRSI) is applied to the data, followed by a prediction of the primaries. During the second, high-resolution step, the estimated primaries are used to improve the frequency content of the recovered data by combining the focal transform, defined in terms of the estimated primaries, with the curvelet transform. This focused curvelet transform leads to an improved recovery, which can subsequently be used as input for a second stage of multiple prediction and primary-multiple separation.
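The sketch below illustrates the kind of sparsity-promoting recovery CRSI relies on, using a 1-D discrete cosine transform as a stand-in sparsifying dictionary instead of curvelets: missing samples are filled in by iterating between thresholding in the transform domain (with a threshold that shrinks each iteration) and re-inserting the recorded samples. It is an illustrative toy on a synthetic signal, not the authors' implementation.

```python
import numpy as np
from scipy.fft import dct, idct

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def recover_by_sparsity(observed, mask, n_iter=100):
    """POCS-style recovery: keep the largest DCT coefficients (decaying threshold),
    then re-insert the samples that were actually recorded."""
    x = observed.copy()
    c0 = np.abs(dct(observed, norm="ortho"))
    t_max, t_min = c0.max(), 0.01 * c0.max()
    for i in range(n_iter):
        t = t_max * (t_min / t_max) ** (i / (n_iter - 1))   # exponential decay
        coeffs = dct(x, norm="ortho")
        x = idct(soft_threshold(coeffs, t), norm="ortho")
        x[mask] = observed[mask]          # data-consistency step
    return x

# Toy "trace": a signal that is nearly sparse in the DCT, with ~50% of samples missing.
rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
signal = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.cos(2 * np.pi * 17 * t / n)
mask = rng.random(n) < 0.5                # True where a sample was recorded
observed = np.where(mask, signal, 0.0)

recovered = recover_by_sparsity(observed, mask)
print("relative error:", float(np.linalg.norm(recovered - signal) / np.linalg.norm(signal)))
```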
24

Supressão do ruído de rolamento superficial utilizando a transformada Curvelet / Suppression of ground roll noise using the Curvelet transform

Oliveira, Nisar Rocha de 08 May 2009
Among the many types of noise observed in land seismic acquisition is the ground roll, produced by surface waves: a particular type of Rayleigh wave characterized by high amplitude, low frequency and low velocity, generating a steeply dipping cone in the seismogram. Ground roll contaminates the relevant signals and can mask the information carried by waves scattered in deeper regions of the geological layers. This thesis presents a method that attenuates the ground roll. The technique consists in decomposing the seismogram into a basis of curvelet functions, which are localized in time and frequency and also incorporate an angular orientation. These characteristics allow the construction of a curvelet filter that takes into account the localization of the noise in scales, the thresholds applied to the curvelet coefficients, and the angles in the seismogram. The method was tested on real data and the results were very good.
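Since a curvelet toolbox may not be at hand, the sketch below uses a plain f-k fan filter as a crude stand-in for the curvelet filter described above: it mutes the low-apparent-velocity cone where ground roll concentrates, whereas the curvelet approach selects coefficients jointly by scale and angle. The gather, sampling intervals and velocity cut-off are hypothetical.

```python
import numpy as np

def fk_groundroll_mute(seismogram, dt, dx, v_cut=800.0):
    """Crude f-k fan filter: zero the region of apparent velocity below v_cut (m/s),
    where high-amplitude, low-velocity ground roll lives."""
    nt, nx = seismogram.shape
    spec = np.fft.fft2(seismogram)
    f = np.fft.fftfreq(nt, d=dt)[:, None]        # temporal frequency (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]        # spatial wavenumber (1/m)
    apparent_velocity = np.abs(f) / np.maximum(np.abs(k), 1e-12)
    keep = apparent_velocity >= v_cut            # True outside the ground-roll cone
    return np.real(np.fft.ifft2(spec * keep))

# Hypothetical shot gather: 1000 time samples (2 ms) by 96 receivers (10 m spacing).
rng = np.random.default_rng(2)
gather = 0.1 * rng.standard_normal((1000, 96))
filtered = fk_groundroll_mute(gather, dt=0.002, dx=10.0, v_cut=800.0)
print(filtered.shape)
```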
25

Analyse multi échelle et multi observation pour l'imagerie multi modale en oncologie / A multi resolution and multi observation framework for multi modal medical images processing and analysis in oncology

Hanzouli, Houda 15 December 2016
This thesis is part of the development of more personalized and preventive medicine, for which the fusion of multimodal information, and of different representations of the same modality, is needed in order to obtain accurate and reliable quantification of medical images in oncology. We present two applications of medical image processing and analysis: PET denoising and multimodal PET/CT tumor segmentation. The PET filtering approach, called WCD, takes advantage of the complementary features of the wavelet and curvelet transforms in order to better represent isotropic and anisotropic structures in PET images; it reduces the noise while minimizing the loss of useful information. The PET/CT tumor segmentation is performed with a Markov model on a probabilistic quadtree graph, namely a Hidden Markov Tree (HMT). Our motivation for using such a model is fast computation, improved robustness and an effective interpretational framework for image analysis in oncology. Thanks to its multi-observation and multi-resolution aspects, the HMT exploits joint statistical dependencies between hidden states to handle the whole data stack. This model, called WCHMT, takes advantage of the high resolution of the anatomical imaging (CT) and the high contrast of the functional imaging (PET), with the data represented in the contourlet domain. The denoising approach led to a significant increase in signal-to-noise ratio (SNR) with minimal change in local intensity and contrast, giving the best trade-off between denoising quality and structure preservation.
PET/CT segmentation with the WCHMT method proved accurate in determining the tumor volume, with a high Dice Similarity Coefficient (DSC) and the best trade-off between sensitivity (SE) and positive predictive value (PPV) with respect to the ground truth.
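For reference, the segmentation scores quoted above (DSC, SE, PPV) can be computed from binary masks as in the following sketch; the masks here are synthetic discs, not PET/CT data.

```python
import numpy as np

def segmentation_scores(predicted, reference):
    """Dice similarity coefficient, sensitivity and positive predictive value
    for two binary masks (e.g. a tumour segmentation vs. ground truth)."""
    predicted = predicted.astype(bool)
    reference = reference.astype(bool)
    tp = np.logical_and(predicted, reference).sum()
    fp = np.logical_and(predicted, ~reference).sum()
    fn = np.logical_and(~predicted, reference).sum()
    dsc = 2.0 * tp / (2.0 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    se = tp / (tp + fn) if (tp + fn) else 1.0
    ppv = tp / (tp + fp) if (tp + fp) else 1.0
    return dsc, se, ppv

# Toy 2-D masks: a reference disc and a slightly shifted predicted disc.
yy, xx = np.mgrid[:64, :64]
reference = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
predicted = (yy - 34) ** 2 + (xx - 30) ** 2 < 10 ** 2
print("DSC=%.3f  SE=%.3f  PPV=%.3f" % segmentation_scores(predicted, reference))
```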
26

Ensemble baseado em métodos de Kernel para reconhecimento biométrico multimodal / Ensemble Based on Kernel Methods for Multimodal Biometric Recognition

Costa, Daniel Moura Martins da 31 March 2016
With the advancement of technology, traditional strategies for identifying people have become more susceptible to failure; to overcome these difficulties, several approaches have been proposed in the literature, among which Biometrics stands out. The field of Biometrics encompasses a wide variety of technologies used to identify and verify a person's identity through the measurement and analysis of physiological and/or behavioural traits. As a result, biometrics has a wide field of application in systems that require secure identification of their users. The most popular biometric systems are based on face recognition or fingerprint matching; other biometric systems use the iris, retinal scans, voice, hand geometry and facial thermograms.
In recent years, biometric recognition has seen improvements in reliability and accuracy, with some modalities offering good overall performance. However, even the most advanced biometric systems still face problems. Recently, efforts have been made to employ multiple biometric modalities so as to make the identification process less vulnerable to attack. Multimodal biometrics is a relatively new approach to biometric representation that consolidates multiple biometric modalities. Multimodality is based on the concept that information obtained from different modalities is complementary; consequently, an appropriate combination of such information can be more useful than information from any single modality alone. The main issues in building a unimodal biometric system concern the choice of the feature extraction technique and of the classifier. In a multimodal biometric system, it is additionally necessary to define the fusion level and the fusion strategy to be adopted. The aim of this dissertation is to investigate the use of ensembles (committee machines) to fuse biometric modalities, considering different fusion strategies and drawing on advanced image processing techniques (such as the Wavelet, Contourlet and Curvelet transforms) and machine learning. In particular, emphasis is given to the study of different types of learning machines based on kernel methods and their organization into ensemble arrangements, aiming at biometric identification based on face and iris. The results show that the proposed approach is capable of producing a multimodal biometric system with a recognition rate higher than those obtained by unimodal biometric systems.
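As an illustration of an ensemble of kernel machines fusing two modalities, the sketch below trains several SVMs with different kernels on concatenated (feature-level fused) face-like and iris-like vectors and combines them by majority vote with scikit-learn. The random features, class structure and kernel choices are placeholders, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_subjects, samples_per_subject, dim = 10, 20, 32
labels = np.repeat(np.arange(n_subjects), samples_per_subject)
centers = rng.standard_normal((n_subjects, 2 * dim))
# First `dim` columns stand in for face features, the rest for iris features,
# concatenated as a simple feature-level fusion.
X = centers[labels] + 0.5 * rng.standard_normal((labels.size, 2 * dim))

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)

ensemble = VotingClassifier(
    estimators=[
        ("rbf", SVC(kernel="rbf", gamma="scale")),
        ("poly", SVC(kernel="poly", degree=3, gamma="scale")),
        ("linear", SVC(kernel="linear")),
    ],
    voting="hard",          # majority vote over the kernel machines
)
ensemble.fit(X_train, y_train)
print("identification accuracy: %.3f" % ensemble.score(X_test, y_test))
```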
28

Correction des effets de volume partiel en tomographie d'émission / Partial volume effect correction in emission tomography

Le Pogam, Adrien 29 April 2010
Partial volume effects (PVE) designate the blur commonly found in nuclear medicine images, and this PhD work is dedicated to their correction, with the objective of qualitative and quantitative improvement of such images. PVE arise from the limited spatial resolution of functional imaging with either Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT). They can be defined as a signal loss in tissues whose size is comparable to the Full Width at Half Maximum (FWHM) of the point spread function (PSF) of the imaging device. In addition, PVE induce activity cross-contamination between adjacent structures with different tracer uptakes, which can lead to under- or over-estimation of the real activity of the analyzed regions.
Various methodologies currently exist to compensate or even correct for PVE, and they may be classified according to their place in the processing chain (before, during or after the image reconstruction process) and their dependency on co-registered anatomical images with higher spatial resolution, for instance Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). A voxel-based, post-reconstruction approach was chosen for this work, avoiding region-of-interest definition and dependency on the proprietary reconstruction developed by each manufacturer. Two contributions were made: the first is a multi-resolution methodology in the wavelet domain that uses the higher-resolution details of a co-registered anatomical image associated with the functional dataset to be corrected; the second is an improvement of iterative-deconvolution-based methodologies using tools such as directional wavelets and their curvelet extensions. The developed approaches were applied and validated on synthetic, simulated and clinical images, with neurology and oncology applications in mind. Finally, as currently available PET/CT scanners incorporate more and more spatial resolution corrections in their reconstruction algorithms, we compared such approaches in SPECT and PET to the iterative deconvolution methodology developed in this work.
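To ground the idea of voxel-wise iterative deconvolution, here is a plain Richardson-Lucy sketch with a Gaussian PSF model on a toy PET-like slice. The thesis builds on such schemes with wavelet/curvelet regularization, which is not shown; the PSF width and the image are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy_pvc(image, psf_sigma, n_iter=30, eps=1e-8):
    """Plain Richardson-Lucy deconvolution with a Gaussian PSF model, one of the
    classic iterative schemes behind voxel-wise partial volume correction."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = gaussian_filter(estimate, psf_sigma)
        ratio = image / np.maximum(blurred, eps)
        estimate *= gaussian_filter(ratio, psf_sigma)   # Gaussian PSF is symmetric
    return estimate

# Toy PET-like slice: a small hot lesion blurred by the scanner PSF.
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 10.0
observed = gaussian_filter(truth, 2.0) + 0.01
corrected = richardson_lucy_pvc(observed, psf_sigma=2.0)
print("max before: %.2f  after: %.2f  (truth: 10.00)" % (observed.max(), corrected.max()))
```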
