  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Factor Analysis with Prior Information - Application to Dynamic PET Imaging

Lee, Dong-Chang Unknown Date
No description available.
2

Automatic and deterministic spectral clustering for segmentation of dynamic PET images

Zbib, Hiba 09 December 2013 (has links)
Quantification of dynamic PET images is a powerful tool for the in vivo study of tissue function. However, this quantification requires the definition of regions of interest for extracting the time-activity curves. These regions are usually delineated manually by an expert operator, which makes them subjective. As a result, there is growing interest in the development of clustering methods that aim to separate the dynamic PET sequence into functional regions based on the temporal profiles of voxels. In this thesis, a spectral clustering method of the temporal profiles of voxels is developed; it has the advantage of handling nonlinear clusters. The method is then extended to make it better suited for clinical use. First, a global search procedure is used to locate, in a deterministic way, the optimal cluster centroids in the projected data. Second, an unsupervised clustering-quality criterion is proposed and optimised by simulated annealing to automatically estimate the scale parameter and the temporal weighting factors involved in the method. The proposed automatic and deterministic spectral clustering method is validated on simulated and real images and compared with two other segmentation methods from the literature. It improves ROI definition and appears to be a promising pre-processing tool for ROI-based quantification and arterial input function estimation.
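The pipeline described above (an affinity built from voxel time-activity curves, a spectral embedding, then clustering in the embedded space) can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: the scale parameter is fixed by hand rather than optimised by simulated annealing, temporal weights are omitted, and determinism comes simply from a seeded k-means.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

def spectral_cluster_tacs(tacs, n_clusters, scale, seed=0):
    """Cluster time-activity curves (rows of `tacs`) with a basic
    spectral clustering: RBF affinity -> normalized Laplacian ->
    k-means on the leading eigenvectors (seeded for determinism)."""
    # Pairwise squared distances between temporal profiles
    sq = np.sum((tacs[:, None, :] - tacs[None, :, :]) ** 2, axis=-1)
    affinity = np.exp(-sq / (2.0 * scale ** 2))
    d_inv_sqrt = 1.0 / np.sqrt(affinity.sum(axis=1))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    lap = np.eye(len(tacs)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    # The eigenvectors of the smallest eigenvalues span the cluster space
    _, vecs = eigh(lap, subset_by_index=[0, n_clusters - 1])
    rows = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    _, labels = kmeans2(rows, n_clusters, seed=seed, minit='++')
    return labels
```

In practice the scale parameter controls how sharply the affinity falls off between dissimilar kinetic profiles, which is exactly why the thesis estimates it automatically rather than fixing it as done here.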
3

Factor analysis of dynamic PET images

Cruz Cavalcanti, Yanna 31 October 2018 (has links) (PDF)
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool to quantify biological processes. Several quantification techniques from the PET imaging literature require a prior estimation of global time-activity curves (TACs), herein called factors, representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first is the assumption that the elementary response of each tissue to the tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models proposed herein introduce an additional degree of freedom into the factors related to specific binding. To this end, a spatially variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. We therefore propose to consider a popular and quite general loss function, the β-divergence, which generalizes conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, corresponding respectively to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
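The β-divergence family mentioned in the abstract can be written down directly: the special cases β = 2, β = 1, and β = 0 recover the half squared Euclidean distance, the Kullback-Leibler divergence, and the Itakura-Saito divergence, respectively. A minimal sketch of the standard elementwise definition:

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Elementwise beta-divergence d_beta(x | y), summed over entries.
    beta = 2 -> half squared Euclidean distance (Gaussian noise model),
    beta = 1 -> Kullback-Leibler divergence (Poisson noise model),
    beta = 0 -> Itakura-Saito divergence (Gamma noise model)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 1:  # limit case: KL divergence
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:  # limit case: Itakura-Saito divergence
        return np.sum(x / y - np.log(x / y) - 1.0)
    return np.sum((x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1))
                  / (beta * (beta - 1)))
```

Plugging this loss into a factor analysis model in place of the least-squares fit changes how residuals are weighted: small β penalizes relative errors (useful for low-count regions), while β = 2 penalizes absolute errors.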
4

Redefining Dynamic PET to Improve Clinical Feasibility

Liu, Xiaoli January 2016 (has links)
No description available.
5

Factor analysis of dynamic PET images

Cruz Cavalcanti, Yanna 31 October 2018 (has links)
Positron emission tomography (PET) is a noninvasive nuclear imaging technique that quantifies the metabolic functions of organs from the distribution of a radiotracer injected into the body. While static imaging is often used to obtain a spatial distribution of the tracer concentration, dynamic acquisitions provide a better assessment of tracer kinetics. Dynamic PET has therefore attracted growing interest in recent years, since it provides both spatial and temporal information on tracer uptake in vivo. The most effective quantification techniques in dynamic PET often require the estimation of reference time-activity curves (TACs) representing the tissues, or of an input function characterising blood flow. In this context, many methods have been developed to perform a noninvasive extraction of the global kinetics of a tracer, generically referred to as factor analysis. Factor analysis is a popular unsupervised learning technique for identifying a physically meaningful model from multivariate data. It describes each voxel of the image as a combination of elementary signatures, called factors, providing not only a global TAC for each tissue but also a set of coefficients relating each voxel to each tissue TAC. In parallel, unmixing, a particular instance of factor analysis, is a widely used tool in the hyperspectral imaging literature. In dynamic PET imaging, it can be highly relevant for TAC extraction, since it directly accounts for both the non-negativity of the data and the sum-to-one constraint on the factor proportions, which can be estimated from the diffusion of blood into plasma and tissues. Inspired by the hyperspectral unmixing literature, this manuscript addresses two major drawbacks of general factor analysis techniques applied to dynamic PET. The first is the assumption that the response of each tissue to the tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it does not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To address this limitation, the models proposed here allow an additional degree of freedom in the factors related to specific binding. To this end, a spatially variant perturbation is introduced on top of a nominal and common TAC. This variation is spatially indexed and constrained with a dictionary that is either learned beforehand or explicitly modelled by convolutional nonlinearities affecting non-specific binding tissues. The second drawback relates to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the residual noise in reconstructed PET images cannot generally be modelled simply by Poisson or Gaussian laws. We therefore propose to consider a generic loss function, the β-divergence, able to generalise conventional loss functions such as the Euclidean distance and the Kullback-Leibler and Itakura-Saito divergences, corresponding respectively to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
6

Novel Approaches for Application of Principal Component Analysis on Dynamic PET Images for Improvement of Image Quality and Clinical Diagnosis

Razifar, Pasha January 2005 (has links)
Positron emission tomography (PET) can be used for dynamic studies in humans. In such studies a selected part of the body, often the whole brain, is imaged repeatedly after administration of a radiolabelled tracer. Such studies are performed to provide sequences of images reflecting the tracer's kinetic behaviour, which may be related to physiological, biochemical and functional properties of tissues. This information can be obtained by analyzing the distribution and kinetic behaviour of the administered tracers in different regions, tissues and organs. Each image in the sequence thus contains part of the kinetic information about the administered tracer.

Several factors make the analysis of PET images difficult, such as a high noise magnitude and correlation between image elements, in conjunction with a high level of non-specific binding to the target and a sometimes small difference in target expression between pathological and healthy regions. It is therefore important to understand how these factors affect the derived quantitative measurements when using different methods such as kinetic modelling and multivariate image analysis.

In this thesis, a new method to explore the properties of the noise in dynamic PET images was introduced and implemented. The method is based on an analysis of the autocorrelation function of the images. This was followed by proposing and implementing three novel approaches for the application of Principal Component Analysis (PCA) to dynamic human PET studies. The common underlying idea of these approaches was that the images need to be normalized before PCA is applied, to ensure that the PCA is signal driven rather than noise driven. Different ways to estimate and correct for the noise variance were investigated. Normalizations were carried out both slice-wise (SW) and for the whole volume at once, in the image domain as well as in the sinogram domain. We also investigated the value of masking out and removing the area outside the brain for the analysis.

The results were very encouraging. We could demonstrate that, for phantoms as well as for real image data, the applied normalizations allow PCA to reveal the signal much more clearly than what can be seen in the original image data sets. Using our normalizations, PCA can thus be used as a multivariate analysis technique that, without any modelling assumptions, can separate important kinetic information into different component images. Furthermore, these images showed an improved signal-to-noise ratio (SNR), low levels of noise, and thus better quality and contrast. This should allow more accurate visualization and better precision in discriminating between pathological and healthy regions, which may in turn lead to improved clinical diagnosis.
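The core normalization idea (rescale each frame so that its noise variance is comparable to that of the other frames before applying PCA) can be sketched as follows. This is an illustrative simplification: the per-frame noise standard deviation is estimated here from a user-supplied background mask, which is just one possible strategy and not necessarily the one used in the thesis.

```python
import numpy as np

def noise_normalized_pca(frames, background_mask, n_components):
    """PCA on a dynamic sequence after per-frame noise normalization.
    `frames`: (n_frames, n_voxels) array; `background_mask`: boolean
    mask of voxels assumed to contain noise only (e.g. outside the
    object). Each frame is divided by its background standard deviation
    so that noise variance is comparable across frames, letting the
    leading components follow the signal rather than the noise."""
    noise_std = frames[:, background_mask].std(axis=1, keepdims=True)
    normalized = frames / noise_std
    # Center each frame over voxels, then take the SVD: the rows of vt
    # are component images over the voxels.
    centered = normalized - normalized.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components], s
```

Without the division by `noise_std`, frames with high noise variance would dominate the covariance structure and the leading components would mostly capture noise, which is the failure mode the normalization is meant to avoid.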
8

Vector-valued image processing with PDEs: application to dynamic PET imaging

Jaouen, Vincent 26 January 2016 (has links)
This thesis presents several methodological contributions to the processing of vector-valued images, with dynamic positron emission tomography (dPET) imaging as its target application. dPET is a functional imaging modality that produces highly degraded images composed of successive temporal acquisitions. Vector-valued images often present some level of redundancy or complementarity of information across channels, which can be exploited to improve processing results. Our first contribution exploits these properties for robust segmentation of target volumes with deformable models. We propose a new external force field that guides deformable models toward the vector edges of the regions to be delineated. Our second contribution deals with the restoration of such images to further facilitate their analysis. We propose a new partial differential equation (PDE) based approach that enhances the signal-to-noise ratio of degraded images while sharpening their edges. Applied to dPET imaging, we show to what extent our methodological contributions can help solve an open problem in neuroscience: the noninvasive quantification of a neuroinflammation radiotracer.
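To give a flavour of PDE-based restoration of multicomponent images, the sketch below runs a classic Perona-Malik-style diffusion in which all channels share a conductivity computed from the joint gradient magnitude, so an edge present in any channel slows smoothing in all of them. This is a textbook baseline shown only for illustration; it is not the restoration scheme proposed in the thesis.

```python
import numpy as np

def coupled_diffusion(image, n_iter=20, kappa=0.1, dt=0.2):
    """Minimal Perona-Malik-style diffusion for a multicomponent image
    of shape (channels, H, W). The channels are coupled through a
    shared conductivity derived from the joint gradient magnitude:
    strong vector edges get low conductivity and are preserved, while
    flat noisy regions are smoothed."""
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences with replicated borders (zero flux at edges)
        gx = np.diff(u, axis=2, append=u[:, :, -1:])
        gy = np.diff(u, axis=1, append=u[:, -1:, :])
        # Joint (vector) gradient magnitude couples the channels
        mag2 = (gx**2 + gy**2).sum(axis=0, keepdims=True)
        c = 1.0 / (1.0 + mag2 / kappa**2)
        # Explicit update: u += dt * div(c * grad u), backward differences
        fx, fy = c * gx, c * gy
        div = (fx - np.roll(fx, 1, axis=2)) + (fy - np.roll(fy, 1, axis=1))
        u += dt * div
    return u
```

The explicit scheme is stable for `dt <= 0.25` with this 4-neighbour stencil; `kappa` sets the gradient magnitude above which structures are treated as edges rather than noise.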
9

Geometrical method for non-negative source separation: application to dynamic PET imaging and mass spectrometry

Ouedraogo, Wendyam 28 November 2012 (has links)
This thesis addresses the problem of non-negative blind source separation (that is, separation of quantities that are positive or zero). Linear instantaneous mixtures of non-negative sources occur in many signal and image processing problems, such as the decomposition of signals measured by a spectrometer (mass spectra, Raman spectra, infrared spectra), the decomposition of images (medical, multispectral or hyperspectral), or the estimation of the activity of a radionuclide. In these problems the sources are inherently non-negative, and this property should be preserved during their estimation in order to obtain physically meaningful components. Most existing non-negative blind source separation methods require "strong" assumptions on the sources (such as mutual independence, local dominance, or total additivity), which are not always satisfied in practice. In this work, we propose a new method for separating non-negative sources based on the geometric distribution of the scatter plot of the observations. The mixing matrix and the sources are estimated by finding the minimum-aperture simplicial cone containing the scatter plot of the mixed data. The proposed method requires neither the mutual independence of the sources, nor their decorrelation, local dominance, or total additivity. One condition is necessary and sufficient: the positive orthant must be the unique minimum-aperture simplicial cone containing the scatter plot of the sources. The proposed algorithm is successfully evaluated in two very different non-negative source separation problems. In the first, we separate mass spectra measured at the output of a high-precision liquid chromatograph, in order to identify and quantify the different metabolites (small molecules) present in the urine of rats treated with phenobarbital. In the second, we estimate the different pharmacokinetic compartments of the fluorine-18-labelled FluoroDeoxyGlucose radiotracer ([18F]-FDG) in a human brain from a series of 3D PET images of this organ, without blood sampling. Among these pharmacokinetics, the arterial input function is of great interest for evaluating the effectiveness of anti-cancer treatments in oncology.
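The geometric picture (non-negative mixtures normalized onto the simplex, with the mixing directions appearing as edges of the enclosing cone) can be made concrete with a toy sketch. Note that the sketch uses the successive projection heuristic, which requires some observations to lie exactly on the cone edges, precisely the kind of pure-source assumption the minimum-aperture method avoids; it is shown only to illustrate the geometry, not the thesis's algorithm.

```python
import numpy as np

def find_cone_edges(X, n_sources):
    """Toy geometric separation: the columns of X are non-negative
    linear mixtures. After normalizing each observation to unit sum,
    the edges of the simplicial cone appear as extreme points of the
    scatter plot; they are picked here with the successive projection
    heuristic (repeatedly take the point of largest residual norm and
    project it out)."""
    S = X / X.sum(axis=0, keepdims=True)   # project observations onto the simplex
    R = S.copy()
    edges = []
    for _ in range(n_sources):
        j = np.argmax(np.linalg.norm(R, axis=0))  # most extreme remaining point
        edges.append(j)
        v = R[:, j:j + 1]
        # Remove the component along the direction just found
        R = R - v @ (v.T @ R) / (v.T @ v)
    return S[:, edges]  # estimated (sum-normalized) mixing directions
```

Once the cone edges are estimated, the source abundances can be recovered, for example by non-negative least squares against the estimated mixing directions.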
