1

Segmentation d'images TEP dynamiques par classification spectrale automatique et déterministe / Automatic and deterministic spectral clustering for segmentation of dynamic PET images

Zbib, Hiba 09 December 2013 (has links)
Quantification of dynamic PET images is a powerful tool for the in vivo study of tissue function. However, this quantification requires the definition of regions of interest for extracting the time-activity curves. These regions are usually identified manually by an expert operator, which makes the result subjective. As a consequence, there is growing interest in the development of clustering methods that aim to separate the dynamic PET sequence into functional regions based on the temporal profiles of the voxels. In this thesis, a spectral clustering method of the voxel temporal profiles, which has the advantage of handling nonlinear clusters, is developed. The method is then extended to make it better suited to clinical use. First, a global search procedure is used to locate, in a deterministic way, the optimal cluster centroids in the projected data. Second, an unsupervised segmentation-quality criterion is proposed and optimised by simulated annealing to automatically estimate the scale parameter and the temporal weighting factors involved in the method. The proposed automatic and deterministic spectral clustering method is validated on simulated and real images and compared with two other segmentation methods from the literature. It improves ROI definition and appears to be a promising pre-processing tool before any ROI-based quantification or arterial input function estimation task.
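As a rough illustration of the kind of pipeline this abstract describes (not the thesis implementation), the Python sketch below clusters synthetic voxel time-activity curves with Gaussian-affinity spectral clustering; the scale parameter sigma, the number of clusters, and the synthetic data are all assumptions of the example.

# Illustrative sketch only: spectral clustering of voxel time-activity curves (TACs).
# The scale parameter sigma, the cluster count, and the synthetic data are assumed.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Synthetic dynamic PET data: n_voxels TACs sampled over n_frames time frames.
n_voxels, n_frames, n_clusters = 500, 20, 3
t = np.linspace(0, 60, n_frames)                                  # acquisition times (min)
prototypes = np.stack([t * np.exp(-t / tau) for tau in (5.0, 15.0, 40.0)])
labels_true = rng.integers(0, n_clusters, n_voxels)
tacs = prototypes[labels_true] + 0.05 * rng.standard_normal((n_voxels, n_frames))

# Gaussian (RBF) affinity between temporal profiles; gamma = 1 / (2 * sigma^2).
sigma = 0.5                                                       # hypothetical scale parameter
clustering = SpectralClustering(
    n_clusters=n_clusters,
    affinity="rbf",
    gamma=1.0 / (2.0 * sigma**2),
    assign_labels="kmeans",
    random_state=0,
)
labels = clustering.fit_predict(tacs)
print("cluster sizes:", np.bincount(labels))

In the thesis, the final k-means step is replaced by a deterministic global search for the centroids, and the scale parameter and temporal weights are tuned automatically by simulated annealing; in this sketch they are simply fixed by hand.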
2

Factor analysis of dynamic PET images

Cruz Cavalcanti, Yanna 31 October 2018 (has links) (PDF)
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool for quantifying biological processes. Several quantification techniques from the PET imaging literature require a prior estimation of global time-activity curves (TACs), herein called factors, representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first one is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models proposed herein introduce an additional degree of freedom in the factors related to specific binding. To this end, a spatially variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. Therefore, we propose to consider a popular and quite general loss function, called the β-divergence, that is able to generalize conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson, and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
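For reference, a minimal sketch of the β-divergence mentioned in the abstract, using the standard formulation from the NMF literature (not code from the manuscript); it reduces to the Itakura-Saito, Kullback-Leibler, and (half) squared Euclidean losses for β = 0, 1, 2.

# Minimal sketch of the beta-divergence used as a data-fitting loss in factor analysis.
import numpy as np

def beta_divergence(x, y, beta):
    """Beta-divergence d_beta(x | y), summed over all entries (x, y > 0).

    beta = 2 -> half squared Euclidean distance (Gaussian noise)
    beta = 1 -> Kullback-Leibler divergence     (Poisson noise)
    beta = 0 -> Itakura-Saito divergence        (Gamma / multiplicative noise)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 2:
        return 0.5 * np.sum((x - y) ** 2)
    if beta == 1:
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:
        return np.sum(x / y - np.log(x / y) - 1.0)
    return np.sum((x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1))
                  / (beta * (beta - 1)))

# Example: compare a noisy observation against a model prediction.
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.2, 1.8, 3.5])
for b in (0, 1, 2, 0.5):
    print(f"beta={b}: {beta_divergence(x, y, b):.4f}")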
3

Improving deep neural network training with batch size and learning rate optimization for head and neck tumor segmentation on 2D and 3D medical images

Douglas, Zachariah 13 May 2022 (has links) (PDF)
Medical imaging is a key tool used in healthcare to diagnose and prognose patients by aiding the detection of a variety of diseases and conditions. In practice, medical image screening must be performed by clinical practitioners who rely primarily on their expertise and experience for disease diagnosis. The ability of convolutional neural networks (CNNs) to extract hierarchical features and determine classifications directly from raw image data makes CNNs a potentially useful adjunct to the medical image analysis process. A common challenge in successfully implementing CNNs is optimizing the hyperparameters used for training. In this study, we propose a method that uses scheduled hyperparameters and Bayesian optimization to classify cancerous and noncancerous tissues (i.e., segmentation) from head and neck computed tomography (CT) and positron emission tomography (PET) scans. The results of this method are compared for 2D and 3D image segmentation models using CT imaging with and without PET imaging.
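A toy sketch of the hyperparameter-search step described above, using the Optuna library as a stand-in for whatever tooling the thesis actually used; the search ranges and the objective are placeholders, not details from the study.

# Toy sketch of Bayesian hyperparameter search over learning rate and batch size.
# The objective below is a stand-in for training a segmentation network and
# returning its validation Dice score.
import optuna

def objective(trial):
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-1, log=True)
    batch_size = trial.suggest_categorical("batch_size", [2, 4, 8, 16, 32])
    # Placeholder for: train the 2D/3D segmentation model with (lr, batch_size)
    # on CT (and optionally PET) volumes, then return the validation Dice score.
    fake_dice = 1.0 - abs(lr - 1e-3) * 100 - abs(batch_size - 8) * 0.01
    return fake_dice

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print("best hyperparameters:", study.best_params)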
4

Exploring the Diagnostic Potential of Radiomics-Based PET Image Analysis for T-Stage Tumor Diagnosis

Aderanti, Victor 01 August 2024 (has links) (PDF)
Cancer is a leading cause of death globally, and early detection is crucial for better outcomes. This research aims to improve region-of-interest (ROI) segmentation and feature extraction in medical image analysis using radiomics techniques with 3D Slicer, Pyradiomics, and Python. Dimension-reduction methods, including PCA, K-means, t-SNE, ISOMAP, and hierarchical clustering, were applied to the high-dimensional features to enhance interpretability and efficiency. The study assessed the ability of the reduced feature set to predict T-staging, an essential component of the TNM system for cancer diagnosis. Multinomial logistic regression models were developed and evaluated using MSE, AIC, BIC, and the deviance test. The dataset consisted of CT and PET-CT DICOM images from 131 lung cancer patients. Results showed that PCA identified 14 features, hierarchical clustering 17, t-SNE 58, and ISOMAP 40, with texture-based features being the most critical. This study highlights the potential of integrating radiomics and unsupervised learning techniques to enhance cancer prediction from medical images.
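As an illustration of the dimension-reduction step (not the study's code), the sketch below applies PCA to a matrix standing in for Pyradiomics output; the feature count is assumed, and only the 131-patient cohort size and the 14-component target are taken from the abstract.

# Sketch of PCA-based dimension reduction on a radiomics feature matrix.
# The random matrix stands in for features exported from Pyradiomics.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(131, 107))   # 131 patients x 107 radiomics features (assumed feature count)

X_scaled = StandardScaler().fit_transform(X)   # radiomics features live on very different scales
pca = PCA(n_components=14)
X_reduced = pca.fit_transform(X_scaled)

print("reduced shape:", X_reduced.shape)
print("explained variance:", pca.explained_variance_ratio_.sum().round(3))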
5

Factor analysis of dynamic PET images

Cruz Cavalcanti, Yanna 31 October 2018 (has links)
Positron emission tomography (PET) is a non-invasive nuclear imaging technique that makes it possible to quantify the metabolic functions of organs from the distribution of a radiotracer injected into the body. While static imaging is often used to obtain a spatial distribution of the tracer concentration, a better assessment of tracer kinetics is obtained with dynamic acquisitions. Dynamic PET has therefore attracted growing interest in recent years, since it provides both spatial and temporal information on tracer uptake in vivo. The most effective quantification techniques in dynamic PET often require an estimation of reference time-activity curves (TACs) representing the tissues, or of an input function characterizing the blood flow. In this context, many methods have been developed to perform a non-invasive extraction of the global kinetics of a tracer, generically referred to as factor analysis. Factor analysis is a popular unsupervised learning technique for identifying a physically meaningful model from multivariate data. It describes each voxel of the image as a combination of elementary signatures, called factors, providing not only a global TAC for each tissue but also a set of coefficients relating each voxel to each tissue TAC. In parallel, unmixing - a particular instance of factor analysis - is a tool widely used in the hyperspectral imaging literature. In dynamic PET imaging, it can be highly relevant for TAC extraction, since it directly takes into account both the non-negativity of the data and the sum-to-one constraint on the factor proportions, which can be estimated from the distribution of blood in plasma and tissues. Inspired by the hyperspectral unmixing literature, this manuscript addresses two major drawbacks of general factor analysis techniques applied to dynamic PET. The first is the assumption that the response of each tissue to the tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it does not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To address this limitation, the models proposed here allow an additional degree of freedom in the factors related to specific binding. To this end, a spatially variant perturbation is introduced on top of a nominal and common TAC. This variation is spatially indexed and constrained with a dictionary that is either learned beforehand or explicitly modelled by convolutional nonlinearities affecting the non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the residual noise in reconstructed PET images cannot generally be modelled simply by Poisson or Gaussian distributions. We therefore propose to consider a generic loss function, called the β-divergence, able to generalize conventional loss functions such as the Euclidean distance and the Kullback-Leibler and Itakura-Saito divergences, corresponding respectively to Gaussian, Poisson, and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
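The unmixing formulation mentioned above can be sketched as follows; this is an illustrative example rather than the thesis code, and it enforces the non-negativity and sum-to-one constraints on the factor proportions with the common trick of appending a heavily weighted sum-to-one row to a non-negative least-squares solve. The factor TACs and the data are synthetic.

# Sketch of simplex-constrained unmixing of voxel TACs into factor proportions
# (nonnegative, summing to one) via an augmented non-negative least-squares solve.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_frames, n_factors, n_voxels = 20, 3, 4

F = np.abs(rng.normal(size=(n_frames, n_factors)))          # factor TACs (one per column)
A_true = rng.dirichlet(np.ones(n_factors), size=n_voxels)   # true proportions per voxel
Y = A_true @ F.T + 0.01 * rng.standard_normal((n_voxels, n_frames))

delta = 100.0                                                # weight of the sum-to-one row
F_aug = np.vstack([F, delta * np.ones((1, n_factors))])

abundances = np.empty((n_voxels, n_factors))
for v in range(n_voxels):
    y_aug = np.concatenate([Y[v], [delta]])
    abundances[v], _ = nnls(F_aug, y_aug)

print("estimated proportions (rows approximately sum to 1):")
print(abundances.round(3))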
6

Quantificação da dinâmica de estruturas em imagens de medicina nuclear na modalidade PET. / Quantification of dynamic structures in nuclear medicine images in the PET modality.

Flórez Pacheco, Edward 10 February 2012 (has links)
The role of Nuclear Medicine today as a modality for obtaining medical images is very important, and it has become one of the main procedures used in health care centers. Its great advantage is the ability to analyze the metabolic behavior of the patient, allowing early diagnosis. This project is based on medical images obtained with the PET (Positron Emission Tomography) modality, which has gained growing acceptance. To this end, we have developed an integral framework for processing three-dimensional PET images, composed of consecutive steps that start with obtaining gold-standard images, using simulated volumes or phantoms of the left ventricle of the heart created as part of the project, as well as volumes generated with the NCAT-4D software. Poisson quantum noise, the characteristic noise of PET images, is then introduced into the simulated volumes, and an analysis is performed to verify that the injected noise effectively follows a Poisson distribution. Next, the pre-processing stage is executed with a set of filters: the median filter, the weighted Gaussian filter, and a filter that combines the Anscombe Transformation with the pointwise Wiener filter. The segmentation stage, considered the central part of the processing pipeline, is then applied. The segmentation process is based on the Fuzzy Connectedness theory, for which four different approaches were implemented: the Generic algorithm, the LIFO algorithm, the kTetaFOEMS algorithm, and the Dynamic Weights algorithm. Since the first three algorithms use specific weights selected by the user, an additional analysis was performed to determine the segmentation weights that yield the most accurate segmentation. Finally, an evaluation procedure quantified three parameters (True Positive, False Positive, and Maximum Distance) to measure the efficiency and precision of the process and of the project in general. The implemented algorithms (filters and segmentation algorithms) proved quite robust and achieved very good results, reaching, for the simulated left-ventricle volume, TP and FP rates of 98.49 ± 0.27% and 2.19 ± 0.19%, respectively. With the set of procedures and choices made along the processing structure, the project was concluded with the analysis and quantification of a group of volumes from a real PET exam.
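Two of the steps described above, Poisson noise injection and Anscombe/Wiener filtering, can be sketched as follows; the spherical phantom, the window size, and the simple algebraic inverse of the Anscombe transform are assumptions of this example, not details taken from the thesis.

# Sketch: add Poisson quantum noise to a simulated volume, then denoise it with an
# Anscombe transform followed by a pointwise Wiener filter.
import numpy as np
from scipy.signal import wiener

# Synthetic activity phantom: a bright sphere in a 64^3 volume (not the NCAT-4D ventricle).
z, y, x = np.mgrid[:64, :64, :64]
phantom = np.where((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 12 ** 2, 50.0, 5.0)

# Poisson quantum noise: each voxel value is used as the mean of a Poisson draw.
rng = np.random.default_rng(0)
noisy = rng.poisson(phantom).astype(float)

# The Anscombe transform makes the noise approximately Gaussian with unit variance,
# so a pointwise Wiener filter can be applied before inverting the transform.
anscombe = 2.0 * np.sqrt(noisy + 3.0 / 8.0)
filtered = wiener(anscombe, mysize=3)
denoised = (filtered / 2.0) ** 2 - 3.0 / 8.0          # simple algebraic inverse

print("noisy RMSE:   ", np.sqrt(np.mean((noisy - phantom) ** 2)).round(3))
print("denoised RMSE:", np.sqrt(np.mean((denoised - phantom) ** 2)).round(3))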
