11 |
Automatic Bayesian Segmentation of Human Facial Tissue Using 3D MR-CT Fusion by Incorporating Models of Measurement Blurring, Noise and Partial Volume
Sener, Emre, 01 September 2012
Segmentation of the human head in medical images is an important process in a wide array of applications such as diagnosis, facial surgery planning, prosthesis design, and forensic identification. In this study, a new Bayesian method for segmentation of facial tissues is presented. Segmentation classes include muscle, bone, fat, air and skin. The method incorporates a model that accounts for image blurring during data acquisition, a prior that helps to reduce noise, and a partial volume model. Regularization based on isotropic and directional Markov Random Field priors is integrated into the algorithm, and the effects of these priors on segmentation accuracy are investigated. The Bayesian model is solved iteratively, yielding tissue class labels at every voxel of an image. Sub-methods, as variations of the main method, are generated by switching a combination of the models on or off. Testing of the sub-methods is performed on two patients using single-modality three-dimensional (3D) images as well as registered multi-modal 3D images (Magnetic Resonance and Computerized Tomography). Numerical, visual and statistical analyses of the methods are conducted. Improved segmentation accuracy is obtained through the use of the proposed image models and multi-modal data. The methods are also compared with the Level Set method and an adaptive Bayesian segmentation method proposed in a previous study.
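Purely as an illustration of the iterative voxel-labelling idea, and not the thesis's full model (which also accounts for blur, noise and partial volume), a minimal iterated-conditional-modes sketch with a Gaussian likelihood and an isotropic Potts/MRF prior might look as follows; the class means and variances are assumed to be known, e.g. from an initial clustering.

```python
import numpy as np

def icm_segment(image, means, variances, beta=1.0, n_iter=5):
    """Iterated conditional modes for a Gaussian likelihood + Potts MRF prior.

    image: 3D array; means/variances: per-class parameters (assumed known);
    beta: strength of the isotropic smoothness prior.
    """
    K = len(means)
    # Data term: negative Gaussian log-likelihood of each class at each voxel.
    data = np.stack([
        0.5 * np.log(2 * np.pi * variances[k]) + (image - means[k]) ** 2 / (2 * variances[k])
        for k in range(K)
    ], axis=-1)
    labels = data.argmin(axis=-1)                      # maximum-likelihood initialization
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(n_iter):
        energy = data.copy()
        for k in range(K):
            disagree = np.zeros(image.shape)
            for off in offsets:                        # 6-neighbourhood Potts penalty
                disagree += (np.roll(labels, off, axis=(0, 1, 2)) != k)
            energy[..., k] += beta * disagree          # note: np.roll wraps at the borders
        labels = energy.argmin(axis=-1)
    return labels
```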
|
12 |
Machine learning methods for brain tumor segmentation / Méthodes d'apprentissage automatique pour la segmentation de tumeurs au cerveau
Havaei, Seyed Mohammad, January 2017
Abstract: Malignant brain tumors are the second leading cause of cancer-related deaths in children under 20. There are nearly 700,000 people in the U.S. living with a brain tumor, and 17,000 people are likely to lose their lives to primary malignant brain and central nervous system tumors every year.
To identify non-invasively whether a patient has a brain tumor, an MRI scan of the brain is acquired and then manually examined by an expert who looks for lesions (i.e. clusters of cells that deviate from healthy tissue). For treatment purposes, the tumor and its sub-regions are outlined in a procedure known as brain tumor segmentation. Brain tumor segmentation is primarily done manually, which is very time-consuming, and the segmentation is subject to both inter-observer and intra-observer variation. To address these issues, a number of automatic and semi-automatic methods have been proposed over the years to help physicians in the decision-making process.
Methods based on machine learning have been the subject of great interest in brain tumor segmentation. With the advent of deep learning methods and their success in many computer vision applications such as image classification, these methods have also started to gain popularity in medical image analysis.
In this thesis, we explore different machine learning and deep learning methods applied to brain tumor segmentation.
|
13 |
Interactive segmentation of multiple 3D objects in medical images by optimum graph cuts / Segmentação interativa de múltiplos objetos 3D em imagens médicas por cortes ótimos em grafo
Moya, Nikolas, 03 December 2015
Advisor: Alexandre Xavier Falcão / Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Computação
Abstract: Medical image segmentation is crucial to extract measures from 3D objects (body anatomical structures) that are useful for diagnosis and treatment of diseases. In such applications, interactive segmentation is necessary whenever automated methods fail or are not feasible. Graph-cut methods are considered the state of the art in interactive segmentation, but most approaches rely on the min-cut/max-flow algorithm, which is limited to binary segmentation, while multi-object segmentation can considerably save user time and effort. This work revisits the differential image foresting transform (DIFT), a graph-cut approach suitable for multi-object segmentation in linear time, and solves several problems related to it. Indeed, the DIFT algorithm can take time proportional to the number of voxels in the regions modified at each segmentation execution (sublinear time). Such a characteristic is highly desirable in 3D interactive segmentation in order to respond to the user's actions as close as possible to real time.
Segmentation using the DIFT works as follows: the user draws labeled markers (strokes of connected seed voxels) inside each object and the background, while the computer interprets the image as a graph, whose nodes are the voxels and whose arcs are defined by neighboring voxels, and outputs an optimum-path forest (image partition) rooted at the seed nodes of the graph. In the forest, each object is represented by the optimum-path trees rooted at its internal seeds. Such trees are painted with the same color associated with the label of the corresponding marker. By adding or removing markers, the user can correct the segmentation until the forest (its object label map) represents the desired result. For the sake of consistency in segmentation, seed-based methods should always maintain the connectivity between voxels and the seeds that have labeled them. However, this does not hold in some approaches, such as random walkers, or when the segmentation is filtered to smooth object boundaries. That connectivity is also paramount to making corrections without starting the process over at each user intervention. However, we observed that the DIFT algorithm fails to maintain segmentation consistency in some cases. We have fixed this problem both in the DIFT algorithm and when the obtained object boundaries are smoothed. These results are presented and evaluated on several 3D body anatomical structures from MR and CT images.
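For readers unfamiliar with the underlying (non-differential) image foresting transform, a toy 2D sketch of the seed-competition idea is given below. The f_max path-cost function and the 4-neighbourhood are illustrative choices, and the differential updates that give the DIFT its name are not implemented here.

```python
import heapq
import numpy as np

def ift_segment(image, seeds):
    """Toy (non-differential) image foresting transform on a 2D image.

    `seeds` maps (row, col) -> object label. The path cost used here is
    f_max, the largest intensity step along the path.
    """
    cost = np.full(image.shape, np.inf)
    label = np.zeros(image.shape, dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c], label[r, c] = 0.0, lab
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r, c]:
            continue                                   # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rn, cn = r + dr, c + dc
            if 0 <= rn < image.shape[0] and 0 <= cn < image.shape[1]:
                step = abs(float(image[rn, cn]) - float(image[r, c]))
                new_cost = max(d, step)                # f_max path-cost function
                if new_cost < cost[rn, cn]:
                    cost[rn, cn] = new_cost
                    label[rn, cn] = label[r, c]        # propagate the seed's label
                    heapq.heappush(heap, (new_cost, rn, cn))
    return label
```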
|
14 |
Segmentering av medicinska bilder med inspiration från en quantum walk algoritm / Segmentation of Medical Images Inspired by a Quantum Walk Algorithm
Altuni, Bestun; Aman Ali, Jasin, January 2023
Currently, quantum walk is being explored as a potential method for analyzing medical images. Taking inspiration from Grady's random walk algorithm for image processing, we have developed an approach that leverages the quantum mechanical advantages inherent in quantum walk to detect and segment medical images. Furthermore, the segmented images have been evaluated in terms of clinical relevance. Theoretically, quantum walk algorithms have the potential to offer a more efficient method for medical image analysis compared to traditional methods of image segmentation, such as classical random walk, which do not rely on quantum mechanics. Within this field, there is significant potential for development, and it is of utmost importance to continue exploring and refining these methods. However, it should be noted that there is a long way to go before this becomes something that can be applied in a clinical environment.
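The classical random walker that the thesis takes as its starting point is available in scikit-image; a minimal sketch on a synthetic image is shown below. The quantum-walk variant developed in the thesis is not reproduced here, and the seed positions and beta value are illustrative.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic 2D image: a bright disc on a noisy background
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:128, :128]
image = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
image += 0.3 * rng.standard_normal(image.shape)

# Seeds: 0 = unlabeled, 1 = object, 2 = background
labels = np.zeros(image.shape, dtype=int)
labels[64, 64] = 1
labels[5, 5] = 2

# Each unlabeled pixel receives the label whose seeds a random walker,
# biased by image gradients (controlled by beta), is most likely to reach first.
segmentation = random_walker(image, labels, beta=130)
```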
|
15 |
Self-supervised pre-training of an attention-based model for 3D medical image segmentation / Självövervakad förberedande träning av en attention-baserad model för 3D medicinsk bildsegmentering
Sund Aillet, Albert, January 2023
Accurate segmentation of anatomical structures is crucial for radiation therapy in cancer treatment. Deep learning methods have been demonstrated to be effective for segmentation of 3D medical images, establishing the current standard. However, they require large amounts of labelled data and suffer from reduced performance under domain shift. A possible solution to these challenges is self-supervised learning, which uses unlabelled data to learn representations and could therefore reduce the need for labelled data and produce more robust segmentation models. This thesis investigates the impact of self-supervised pre-training on an attention-based model for 3D medical image segmentation, focusing on single-organ semantic segmentation and exploring whether self-supervised pre-training enhances segmentation performance on CT scans with and without domain shift. The Swin UNETR is chosen as the deep learning model since it has been shown to be a successful attention-based architecture for semantic segmentation. During the pre-training stage, the contracting path is trained on three self-supervised pretext tasks using a large dataset of 5,465 unlabelled CT scans. The model is then fine-tuned using labelled datasets with 97, 142 and 288 segmentations of the stomach, the sternum and the pancreas. The results indicate that a substantial performance gain from self-supervised pre-training is not evident. Parameter freezing of the contracting path suggests that the representational power of the contracting path is not as critical for model performance as expected. Decreasing the amount of supervised training data shows that while pre-training improves model performance when the amount of training data is restricted, the improvements are strongly reduced when more supervised training data is used.
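A minimal PyTorch sketch of the fine-tuning setup with a frozen contracting path is given below; `encoder` and `decoder` are placeholder modules standing in for the Swin UNETR's contracting and expanding paths (the real attribute names differ), and the checkpoint path is hypothetical.

```python
import torch
import torch.nn as nn

class TinySegModel(nn.Module):
    """Stand-in for an encoder-decoder segmentation network."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv3d(8, n_classes, 1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegModel()
# model.encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # hypothetical checkpoint

# Freeze the pre-trained contracting path and fine-tune only the decoder.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```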
|
16 |
Dealing With Speckle Noise in Deep Neural Network Segmentation of Medical Ultrasound Images / Hantering av brus i segmentering med djupinlärning i medicinska ultraljudsbilder
Daniel, Olmo, January 2022
Segmentation of ultrasound images is a common task in healthcare that requires time and attention from healthcare professionals. Automation of medical image segmentation using deep learning is a fast-growing field and has been shown to be capable of near-human performance. Ultrasound images suffer from a low signal-to-noise ratio and speckle patterns; noise filtering is a common pre-processing step in non-deep-learning image segmentation methods, used to improve segmentation results. In this thesis the effect of speckle filtering of echocardiographic images on deep learning segmentation using U-Net is investigated. When trained with speckle-reduced and despeckled datasets, a U-Net model with 0.5·10⁶ trainable parameters saw an average Dice score improvement of +0.15 in the 17 out of 32 categories that were found to be statistically different, compared to the same network trained with unfiltered images. The U-Net model with 1.9·10⁶ trainable parameters saw a decrease in performance in only 5 out of 32 categories, and the U-Net model with 31·10⁶ trainable parameters saw a decrease in performance in 10 out of 32 categories when trained with the speckle-filtered datasets. No definite differences in performance between the use of speckle suppression and full speckle removal were observed. This result shows potential for speckle filtering to be used as a means to reduce the complexity required of deep learning models in ultrasound segmentation tasks. The use of the wavelet transform as a down- and up-sampling layer in U-Net was also investigated. The speckle patterns in ultrasound images can contain information about the tissue. The wavelet transform is capable of lossless down- and up-sampling, in contrast to the commonly used down-sampling methods, which could enable the network to make use of textural information and improve segmentations. The U-Net modified with the wavelet transform shows slightly improved results when trained with despeckled datasets compared to the unfiltered dataset, suggesting that it was not capable of extracting any information from the speckle. The experiments with the wavelet transform were far from exhaustive and more research is needed for a proper assessment.
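As a sketch of how a lossless wavelet down-/up-sampling layer can be realised, the PyTorch module below implements a fixed orthonormal Haar transform with stride 2 together with its exact inverse. The choice of the Haar wavelet and the 2D formulation are assumptions, not necessarily the configuration used in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarDownsample(nn.Module):
    """Lossless 2x down-sampling via an orthonormal 2D Haar transform.

    Each input channel becomes four sub-band channels (LL, LH, HL, HH);
    `inverse` reconstructs the input exactly, unlike max/average pooling.
    """
    def __init__(self, channels):
        super().__init__()
        h = torch.tensor([1.0, 1.0]) / 2 ** 0.5       # low-pass filter
        g = torch.tensor([1.0, -1.0]) / 2 ** 0.5      # high-pass filter
        kernels = torch.stack([torch.outer(a, b) for a in (h, g) for b in (h, g)])
        weight = kernels.unsqueeze(1).repeat(channels, 1, 1, 1)  # (4*C, 1, 2, 2)
        self.register_buffer("weight", weight)
        self.channels = channels

    def forward(self, x):                             # (N, C, H, W) -> (N, 4C, H/2, W/2)
        return F.conv2d(x, self.weight, stride=2, groups=self.channels)

    def inverse(self, y):                             # exact reconstruction of the input
        return F.conv_transpose2d(y, self.weight, stride=2, groups=self.channels)
```

Because the transform is orthonormal and non-overlapping, the transposed convolution with the same weights is its exact inverse, so no information is discarded during down-sampling.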
|
17 |
GAN-based Automatic Segmentation of Thoracic Aorta from Non-contrast-Enhanced CT Images / GAN-baserad automatisk segmentering av thoraxaorta från icke-kontrastförstärkta CT-bilder
Xu, Libo, January 2021
Deep learning-based automatic segmentation methods have developed rapidly in recent years and show promising performance on medical image segmentation tasks, providing clinical medicine with an accurate and fast computer-aided diagnosis method. Generative adversarial networks and their extended frameworks have achieved encouraging results on image-to-image translation problems. In this report, the proposed hybrid network combined cycle-consistent adversarial networks, which translated contrast-enhanced images from computed tomography angiography (CTA) into conventional low-contrast CT scans, with a segmentation network, and trained them simultaneously in an end-to-end manner. The trained segmentation network was tested on non-contrast-enhanced CT images. The synthesis and segmentation processes were also implemented in a two-stage manner. The two-stage process achieved a higher Dice similarity coefficient than the baseline U-Net did on the test data, but the proposed hybrid network did not outperform the baseline, due to the field-of-view difference between the two training data sets.
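A heavily simplified sketch of such a joint objective is given below, combining cycle-consistency and segmentation terms on stand-in single-layer networks; the adversarial losses and the actual CycleGAN and segmentation architectures of the report are omitted, and all shapes, weights and network definitions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in networks; the report uses CycleGAN generators and a segmentation
# network, and also trains discriminators (omitted here for brevity).
G_ct2cta = nn.Conv3d(1, 1, 3, padding=1)   # non-contrast CT -> synthetic CTA
G_cta2ct = nn.Conv3d(1, 1, 3, padding=1)   # CTA -> synthetic non-contrast CT
segmenter = nn.Conv3d(1, 2, 3, padding=1)  # aorta / background

def hybrid_step(ct, cta, cta_labels, lam_cyc=10.0, lam_seg=1.0):
    """One adversarial-term-free step of the joint objective: cycle
    consistency on both domains plus a segmentation loss on synthetic CT."""
    fake_cta, fake_ct = G_ct2cta(ct), G_cta2ct(cta)
    cycle = F.l1_loss(G_cta2ct(fake_cta), ct) + F.l1_loss(G_ct2cta(fake_ct), cta)
    seg = F.cross_entropy(segmenter(fake_ct), cta_labels)  # labels drawn on the CTA
    return lam_cyc * cycle + lam_seg * seg

# Toy tensors with hypothetical shapes (batch, channel, depth, height, width)
ct = torch.randn(1, 1, 16, 32, 32)
cta = torch.randn(1, 1, 16, 32, 32)
labels = torch.randint(0, 2, (1, 16, 32, 32))
loss = hybrid_step(ct, cta, labels)
loss.backward()
```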
|
18 |
Efficient hierarchical layered graph approach for multi-region segmentation / Abordagem eficiente baseada em grafo hierárquico em camadas para a segmentação de múltiplas regiões
Leon, Leissi Margarita Castaneda, 15 March 2019
Image segmentation refers to the process of partitioning an image into meaningful regions of interest (objects) by assigning distinct labels to their composing pixels. Images are usually composed of multiple objects with distinctive features, thus requiring distinct high-level priors for their appropriate modeling. In order to obtain a good segmentation result, the segmentation method must attend to all the individual priors of each object, as well as capture their inclusion/exclusion relations. However, many existing classical approaches do not combine any form of structural information with different high-level priors for each object in a single energy optimization; consequently, they may be inappropriate in this context. We propose a novel efficient seed-based method for multiple-object segmentation of images based on graphs, named Hierarchical Layered Oriented Image Foresting Transform (HLOIFT). It uses a tree of the relations between the image objects, with each object represented by a node. Each tree node may contain different individual high-level priors and defines a weighted digraph, called a layer. The layer graphs are then integrated into a hierarchical graph, considering the hierarchical relations of inclusion and exclusion. A single energy optimization is performed on the hierarchical layered weighted digraph, leading to globally optimal results satisfying all the high-level priors. The experimental evaluations of HLOIFT and its extensions, on medical, natural and synthetic images, indicate promising results comparable to the state-of-the-art methods, but with lower computational complexity. Compared to hierarchical segmentation by the min-cut/max-flow algorithm, our approach is less restrictive, leading to globally optimal results in more general scenarios, and has a better running time.
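To make the notion of a hierarchy of objects with per-node priors concrete, a small illustrative data structure is sketched below; the field names and prior types are hypothetical and do not correspond to the dissertation's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObjectNode:
    """One node of an object hierarchy: each object carries its own high-level
    priors and is nested inside its parent (inclusion), while siblings are
    mutually exclusive. Names and fields here are illustrative only."""
    name: str
    boundary_polarity: str = "none"   # e.g. "dark-to-bright" or "bright-to-dark"
    min_margin: int = 0               # minimum distance (in voxels) to the parent boundary
    children: List["ObjectNode"] = field(default_factory=list)

background = ObjectNode("background")
body = ObjectNode("body", boundary_polarity="dark-to-bright")
organ = ObjectNode("organ", boundary_polarity="bright-to-dark", min_margin=2)
lesion = ObjectNode("lesion", min_margin=1)
organ.children.append(lesion)         # lesion is included in the organ
body.children.append(organ)           # organ is included in the body
background.children.append(body)
```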
|
19 |
Image Processing Methods for Myocardial Scar Analysis from 3D Late-Gadolinium Enhanced Cardiac Magnetic Resonance Images
Usta, Fatma, 25 July 2018
Myocardial scar, non-viable tissue that forms in the myocardium due to insufficient blood supply to the heart muscle, is one of the leading causes of life-threatening heart disorders, including arrhythmias. Analysis of myocardial scar is important for predicting the risk of arrhythmia and the locations of re-entrant circuits in patients' hearts. For applications such as computational modeling of cardiac electrophysiology, aimed at stratifying patient risk for post-infarction arrhythmias, reconstruction of the intact geometry of the scar is required.
Currently, 2D multi-slice late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) is widely used to detect and quantify myocardial scar regions of the heart. However, due to the anisotropic spatial dimensions of 2D LGE-MR images, creating scar geometry from these images results in substantial reconstruction errors. For applications requiring reconstruction of the intact geometry of scar surfaces, 3D LGE-MR images are better suited as they are isotropic in voxel dimensions and have a higher resolution.
While many techniques have been reported for segmentation of scar using 2D LGE-MR images, equivalent studies for 3D LGE-MRI are limited. Most of these 2D and 3D techniques are basic intensity-threshold-based methods. However, due to the lack of an optimal threshold value, these intensity-threshold-based methods are not robust in dealing with complex scar segmentation problems. In this study, we propose an algorithm for segmentation of myocardial scar from 3D LGE-MR images based on a Markov random field based continuous max-flow (CMF) method. We utilize the segmented myocardium as the region of interest for our algorithm.
We evaluated our CMF method for accuracy by comparing its results to manual delineations using 3D LGE-MR images of 34 patients. We also compared the results of the CMF technique to those of the conventional full-width-at-half-maximum (FWHM) and signal-threshold-to-reference-mean (STRM) methods. The CMF method yields a Dice similarity coefficient (DSC) of 71 ± 8.7% and an absolute volume error (|VE|) of 7.56 ± 7 cm³. Overall, the CMF method outperformed the conventional methods for almost all reported metrics in scar segmentation. We also present a comparison study for scar geometries obtained from 2D vs 3D LGE-MRI.
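For reference, the two conventional threshold baselines can be sketched in a few lines of NumPy using their common literature definitions (the exact parameter choices and region definitions used in the thesis may differ): FWHM keeps voxels above half of the maximum intensity in the myocardial region of interest, and STRM keeps voxels more than n standard deviations above the mean of remote, healthy myocardium.

```python
import numpy as np

def fwhm_threshold(image, myocardium_mask):
    # Scar = voxels above half of the maximum intensity inside the myocardium.
    peak = image[myocardium_mask].max()
    return (image > 0.5 * peak) & myocardium_mask

def strm_threshold(image, myocardium_mask, remote_mask, n_sd=5):
    # Scar = voxels more than n_sd standard deviations above the mean of
    # remote (healthy) myocardium; values of 2-6 SD are common in the literature.
    mu, sd = image[remote_mask].mean(), image[remote_mask].std()
    return (image > mu + n_sd * sd) & myocardium_mask
```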
As myocardial scar geometry greatly influences the sensitivity of risk prediction in patients, we compare and analyze the differences in the reconstructed scar geometry generated using 2D versus 3D LGE-MR images, besides providing a scar segmentation study. We use a retrospectively acquired dataset of 24 patients with a myocardial scar who underwent both 2D and 3D LGE-MR imaging. We use manually segmented scar volumes from 2D and 3D LGE-MRI. We then reconstruct the 2D scar segmentation boundaries to 3D surfaces using a LogOdds-based interpolation method. We use numerous metrics to quantify and analyze the scar geometry, including fractal dimensions, the number of connected components, and the mean volume difference. The higher 3D fractal dimension results indicate that 3D LGE-MRI produces a more complex surface geometry by better capturing the sparse nature of the scar. Finally, 3D LGE-MRI produces a larger scar surface volume (27.49 ± 20.38 cm³) than 2D-reconstructed LGE-MRI (25.07 ± 16.54 cm³).
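One common way to estimate a fractal dimension of a binary scar mask is box counting; a minimal NumPy sketch is shown below (the thesis may use a different estimator or box sizes, and the mask is assumed to be non-empty).

```python
import numpy as np

def box_count(mask, size):
    # Crop so each dimension is a multiple of the box size, then test
    # which size^3 blocks contain at least one foreground voxel.
    shape = (np.array(mask.shape) // size) * size
    m = mask[:shape[0], :shape[1], :shape[2]]
    m = m.reshape(shape[0] // size, size, shape[1] // size, size, shape[2] // size, size)
    return np.count_nonzero(m.any(axis=(1, 3, 5)))

def fractal_dimension(mask, sizes=(2, 4, 8, 16)):
    counts = [box_count(mask, s) for s in sizes]
    # Slope of log(count) vs log(1/size) estimates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```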
|
20 |
Construção e aplicação de atlas de pontos salientes 3D na inicialização de modelos geométricos deformáveis em imagens de ressonância magnética / Construction and application of a 3D salient-point atlas for the initialization of geometric deformable models in magnetic resonance images
Pinto, Carlos Henrique Villa, 10 March 2016
Magnetic resonance (MR) imaging has become an indispensable tool for the diagnosis and study of various diseases and syndromes of the central nervous system, such as Alzheimer's disease (AD). In order to perform the precise diagnosis of a disease, as well as the evolutionary monitoring of a given treatment, the neuroradiologist often needs to measure and assess volume and shape changes in certain brain structures along a series of MR images. For that, prior delineation of the structures of interest is necessary. In general, this task is done manually, with limited help from a computer, and therefore has several problems. For this reason, many researchers have turned their efforts towards the development of automatic techniques for segmentation of brain structures in MR images. Among the various approaches proposed in the literature, techniques based on deformable models and anatomical atlases are among those that present the best results. However, one of the main difficulties in applying geometric deformable models is the initial positioning of the model. Thus, this research aimed to develop an atlas of 3D salient points (automatically detected from a set of MR images) and to investigate the applicability of such an atlas in guiding the initial positioning of geometric deformable models representing brain structures, with the purpose of helping the automatic segmentation of such structures in MR images. The processing pipeline included the use of a 3D salient point detector based on the phase congruency measure, an adaptation of the shape contexts technique to create point descriptors, and the estimation of a B-spline transform to map pairs of matching points. The results, evaluated using the Jaccard and Dice metrics before and after the model initializations, showed a significant gain in tests involving synthetically deformed images of normal patients, but for images of clinical patients with AD the gain was marginal and can still be improved in future research. Some ways to make such improvements are discussed in this work. / FAPESP: 2015/02232-1 / CAPES: 2014/11988-0
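Both evaluation metrics are simple overlap measures between binary masks; a short NumPy sketch, as they are commonly defined, is given below.

```python
import numpy as np

def jaccard_and_dice(pred, truth):
    """Overlap metrics between two binary masks, e.g. a deformable-model
    initialization and the reference delineation."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.count_nonzero(pred & truth)
    union = np.count_nonzero(pred | truth)
    total = pred.sum() + truth.sum()
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return jaccard, dice
```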
|