1

Positron Emission Tomography (PET) Tumor Segmentation and Quantification: Development of New Algorithms

Bhatt, Ruchir N 09 November 2012 (has links)
Tumor functional volume (FV) and mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used to estimate radiation dose for therapy, to evaluate disease progression, and as prognostic indicators for predicting outcome. PET images have low resolution and high noise, and are affected by the partial volume effect (PVE). Manually segmenting each tumor is cumbersome and hard to reproduce. To address these problems I developed the iterative deconvolution thresholding segmentation (IDTS) algorithm, which segments the tumor, measures the FV, corrects for the PVE, and calculates the mAC. The algorithm corrects for the PVE without needing to estimate the camera's point spread function (PSF) and does not require optimization for a specific camera. It was tested in physical phantom studies, where hollow spheres (0.5-16 ml) represented tumors with a homogeneous activity distribution, and on irregularly shaped tumors with heterogeneous activity profiles acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBR) and different acquisition times (1-5 min). The algorithm was also applied to ten clinical datasets, where its results were compared with manual segmentation and with fixed-percentage thresholding methods (T50 and T60, which use 50% and 60% of the maximum intensity, respectively, as the threshold). The average errors in FV and mAC were 30% and -35% for the 0.5 ml tumor, and ~5% for the 16 ml tumor. The overall FV error was ~10% for heterogeneous tumors in the physical and simulated phantom data. For the clinical images, the FV and mAC errors relative to manual segmentation were around -17% and 15%, respectively. In summary, the algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera's PSF, and it can also improve dose estimation and treatment planning.
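The fixed-percentage thresholding baselines (T50/T60) referred to above are simple enough to illustrate directly. The following is a minimal sketch, assuming a NumPy array holding PET activity values and a known voxel volume; the array, values, and function names are hypothetical, not material from the thesis:

```python
import numpy as np
from scipy import ndimage

def percent_threshold_segment(pet_volume, percent=0.5):
    """Segment a tumor VOI by thresholding at a fixed percentage of the
    maximum voxel value (percent=0.5 gives T50, percent=0.6 gives T60)."""
    mask = pet_volume >= percent * pet_volume.max()
    # Keep only the largest connected component, assuming one tumor per VOI.
    labels, n = ndimage.label(mask)
    if n > 1:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    return mask

def functional_volume_and_mac(pet_volume, mask, voxel_volume_ml):
    """Functional volume (ml) and mean activity concentration inside the mask."""
    return mask.sum() * voxel_volume_ml, pet_volume[mask].mean()

# Hypothetical noisy volume with a hot "tumor" cube:
volume = np.random.poisson(5.0, size=(64, 64, 64)).astype(float)
volume[28:36, 28:36, 28:36] += 40.0
mask = percent_threshold_segment(volume, percent=0.5)      # T50
fv, mac = functional_volume_and_mac(volume, mask, voxel_volume_ml=0.064)
print(f"FV = {fv:.2f} ml, mAC = {mac:.2f}")
```

The IDTS algorithm itself adds iterative deconvolution on top of such a thresholding step to compensate for the PVE, which this sketch does not attempt to reproduce.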
2

Deep Learning based 3D Image Segmentation Methods and Applications

Chen, Yani 05 June 2019 (has links)
No description available.
3

Multi Planar Conditional Generative Adversarial Networks

Somosmita Mitra (11197152) 30 July 2021 (has links)
Brain tumor sub-region segmentation is a challenging problem in magnetic resonance imaging. The tumor regions tend to suffer from lack of homogeneity, textural differences, variable location, and their ability to proliferate into surrounding tissue. The segmentation task thus requires an algorithm that is indifferent to such influences and robust to external interference. In this work we propose a conditional generative adversarial network which learns from multiple planes of reference. Using this learning, we evaluate the quality of the segmentation and back-propagate the loss to improve the learning. The results produced by the network show competitive quality on both the training and the testing data-sets.
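The abstract gives no architectural details; as a rough illustration of the conditional-GAN core (a generator proposing a mask conditioned on an image, and a discriminator judging image-mask pairs), a minimal PyTorch sketch might look like the following. The layer sizes, single-slice input, and training step are assumptions for illustration, and the multi-planar conditioning that is the thesis's contribution is not reproduced here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy segmentation generator: maps an MRI slice to a tumor mask."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Judges (image, mask) pairs: expert mask vs. generated mask."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

image = torch.randn(4, 1, 64, 64)                       # hypothetical MRI slices
truth = (torch.rand(4, 1, 64, 64) > 0.9).float()        # hypothetical expert masks

# Discriminator step: expert pairs labelled 1, generated pairs labelled 0.
fake = G(image).detach()
loss_d = bce(D(image, truth), torch.ones(4, 1)) + bce(D(image, fake), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while matching the expert mask.
pred = G(image)
loss_g = bce(D(image, pred), torch.ones(4, 1)) + F.binary_cross_entropy(pred, truth)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```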
4

Federated Learning for Brain Tumor Segmentation

Evaldsson, Benjamin January 2024 (has links)
This thesis investigates the potential of federated learning (FL) in medical image analysis, addressing the challenges posed by data privacy regulations in accessing medical datasets. The motivation stems from the increasing interest in artificial intelligence (AI) research, particularly in medical imaging for tumor detection using magnetic resonance imaging (MRI) and computed tomography (CT) scans. However, data accessibility remains a significant hurdle due to privacy regulations like the General Data Protection Regulation (GDPR). FL emerges as a solution by sharing network parameters instead of raw medical data, thus ensuring patient confidentiality. The aims of the study are to understand the requirements for FL models to perform comparably to centrally trained models, explore the impact of different aggregation functions, assess dataset heterogeneity, and evaluate the generalization of FL models. To achieve these goals, this thesis uses the BraTS 2021 dataset, which contains 1251 cases of brain tumor volumes from 23 distinct sites, with different distributions of the data across 3-8 nodes in a federation. The federation is set up to perform brain tumor segmentation, using different aggregation functions (FedAvg, FedOpt, and FedProx) to finalize a global model. The final FL models demonstrate performance similar to that of centralized and local models, with minor variations. However, FL model performance varies depending on the dataset distribution and aggregation method used. Additionally, this study explores the impact of privacy-preserving techniques, such as differential privacy (DP), on FL model performance. While DP methods generally result in lower performance than non-DP methods, their effectiveness varies across different data distributions and aggregation functions.
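Of the aggregation functions listed above, FedAvg is the simplest: the server averages each client's parameters, weighted by the size of its local dataset. A minimal sketch, assuming clients ship their parameters as NumPy dictionaries (the names and numbers below are illustrative, not the thesis setup):

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Weighted average of client parameter dicts (FedAvg).

    client_params: list of {name: np.ndarray} dicts, one per node
    client_sizes:  number of local training cases per node
    """
    total = float(sum(client_sizes))
    global_params = {}
    for name in client_params[0]:
        global_params[name] = sum(
            (n / total) * params[name]
            for params, n in zip(client_params, client_sizes)
        )
    return global_params

# Hypothetical 3-node federation with a single weight tensor per node:
clients = [{"conv1.weight": np.full((2, 2), float(i))} for i in (1, 2, 3)]
sizes = [100, 300, 600]          # e.g. unequal splits of brain tumor cases
print(fed_avg(clients, sizes)["conv1.weight"])   # weighted mean = 2.5
```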
5

Liver Tumor Segmentation Using Level Sets and Region Growing

Thomasson, Viola January 2011 (has links)
Medical imaging is an important tool for diagnosis and treatment planning today. However, as the demand for efficiency increases at the same time as data volumes grow immensely, the need for computer-assisted analysis, such as image segmentation, to help and guide the practitioner increases. Medical image segmentation can be used for various tasks; the localization and delineation of pathologies such as cancer tumors is just one example. Numerous problems with noise and image artifacts in the generated images make segmentation a difficult task, and the developer is forced to choose between speed and performance. In clinical practice, however, this is impossible, as both speed and performance are crucial. One solution to this problem might be to involve the user more in the segmentation, using interactive algorithms where the user can influence the segmentation for an improved result. This thesis has concentrated on finding a fast and interactive segmentation method for liver tumor segmentation. Various methods were explored, and a few were chosen for implementation and further development. Two methods appeared to be the most promising, Bayesian Region Growing (BRG) and Level Set. An interactive Level Set algorithm emerged as the best alternative for the interactivity of the algorithm, and could be used in combination with both BRG and Level Set. A new data term based on a probability model instead of image edges was also explored for the Level Set method, and proved to be more promising than the original one. The probability-based Level Set and the BRG method both provided good-quality results, but the faster of the two was the BRG method, which could segment a tumor present in 25 CT image slices in less than 10 seconds when implemented in Matlab and mex-C++ code on an ACPI x64-based PC with two 2.4 GHz Intel(R) Core(TM) 2 CPUs and 8 GB of RAM. The interactive Level Set could be successfully used as an interactive addition to the automatic method, but its usefulness was somewhat reduced by its slow processing time (1.5 s/slice) and the relative complexity of the needed user interactions.
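As an illustration of the region-growing idea (not the Bayesian formulation actually used by the BRG method), a minimal intensity-based region grower might look like the sketch below; the seed, tolerance, and synthetic volume are assumptions for the example:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance):
    """Grow a region from a user-supplied seed voxel, accepting 26-connected
    neighbours whose intensity stays within `tolerance` of the seed value."""
    mask = np.zeros(image.shape, dtype=bool)
    seed_value = image[seed]
    queue = deque([seed])
    mask[seed] = True
    offsets = [(dz, dy, dx)
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < image.shape[0] and 0 <= ny < image.shape[1]
                    and 0 <= nx < image.shape[2] and not mask[nz, ny, nx]
                    and abs(image[nz, ny, nx] - seed_value) <= tolerance):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Hypothetical usage on a synthetic CT-like volume of 25 slices:
ct = np.random.normal(60, 5, size=(25, 128, 128))                  # background
ct[10:15, 40:70, 40:70] = np.random.normal(100, 5, (5, 30, 30))    # "tumor"
tumor_mask = region_grow(ct, seed=(12, 55, 55), tolerance=15)
print(tumor_mask.sum(), "voxels segmented")
```

An interactive workflow of the kind described above would let the user adjust the seed and tolerance and immediately re-run the growth to refine the result.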
6

Segmentace nádorových lézí ledvin v CT datech / Segmentation of kidney tumor in CT data

Urbanová, Hedvika January 2020 (has links)
This diploma thesis deals with kidney tumor segmentation in CT data. First, kidney anatomy and pathology are discussed, followed by conventional segmentation techniques and segmentation techniques using machine learning. The final part discusses the convolutional neural network, as this approach was used for segmentation in the practical part, in which a segmentation algorithm was designed in the Python programming language. The algorithm was tested and evaluated using the KiTS19 database.
7

Exploring Radiomics and Unveiling Novel Qualitative Imaging Biomarkers for Glioma Diagnosis in Dogs

Garcia Mora, Josefa Karina 07 January 2025 (has links)
Radiomics integrates machine learning (ML) and radiology to extract and analyze quantitative features from medical imaging modalities such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), ultrasound (US) and digital radiographs (DX). By extracting pixel/voxel-level data, followed by standardization and feature selection, radiomics enables ML algorithms to assist in diagnosis and prognosis. While extensively researched in human medicine, its application in veterinary medicine remains limited. Radiomics offers objective, data-driven insights, surpassing qualitative evaluations by revealing micromolecular disease features invisible to the human eye. Radiomics holds significant promise for diagnosing gliomas (GM), a challenging brain tumor where histopathology, the diagnostic gold standard, is seldom performed in veterinary medicine due to logistical and financial barriers, and it is also limited by inherent pathologist subjectivity and disagreement. Additionally, qualitative MRI demonstrates limited accuracy in identifying GM type and grade. By offering non-invasive and reproducible diagnostic and prognostic solutions, radiomics has the potential to overcome these challenges, enhancing brain tumor evaluation in both veterinary and human medicine. The primary goal of this study is to enhance the diagnosis and prognosis of GM by exploring both conventional and innovative non-invasive imaging techniques, with a focus on qualitative and quantitative MRI approaches. We hypothesize that quantitative and novel qualitative methods will surpass conventional expert qualitative assessments in accurately diagnosing GM type, grade, and progression. By doing so, we aim to improve the precision of GM imaging diagnoses, offering clinicians a more accessible and reliable tool to support their diagnostic and treatment decisions. Chapter 1 of this dissertation presents a comprehensive review of the challenges associated with diagnosing GM using MRI. It also introduces the principles of radiomics, a novel and relatively underexplored field in veterinary medicine centered on quantitative imaging analysis for diagnostic and prognostic purposes. This includes an in-depth discussion of the radiomics workflow and associated ML methods. Chapter 2 demonstrates the use and efficacy of quantitative MRI for determination of GM size and therapeutic response assessments using both linear and volumetric techniques. Chapter 3 investigates the T2-weighted–FLAIR mismatch sign (T2FMM) in dogs, a well-established imaging biomarker of human low-grade astrocytomas, and demonstrates that the T2FMM is a highly specific biomarker for oligodendrogliomas, the first such imaging biomarker for GM to be discovered in veterinary research. Finally, Chapter 4 illustrates a structured radiomics pipeline for the standardized quantitative analysis of brain tumors on MRI and demonstrates that the use of radiomics ML models results in superior ability to diagnose canine GM subtypes and grades and discriminate GM from non-neoplastic intra-axial lesions when compared to expert rater opinions derived from qualitative MRI evaluations. / Doctor of Philosophy / Radiomics is a cutting-edge approach that combines advanced computer algorithms with medical imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), ultrasound, and X-rays to uncover patterns invisible to the human eye. By analyzing detailed image data and using artificial intelligence (AI), radiomics provides new ways to diagnose and predict diseases. While this field has been widely studied in human medicine, its use in veterinary medicine is just beginning to be explored. Radiomics could transform how we diagnose gliomas (GM), a type of brain tumor that is particularly hard to identify in medical imaging studies in animals due to cost, logistical issues, and shared features with other diseases. Additionally, conventional MRI techniques often fail to accurately determine GM type and aggressiveness. This research aims to enhance GM diagnosis by using advanced imaging methods, combining both traditional visual and innovative quantitative MRI techniques. We believe that objective, measurable approaches and novel qualitative imaging features will be more effective than relying solely on radiologists' conventional visual assessments. The goal is to develop a more accurate, accessible, and objective tool to assist veterinary clinicians in diagnosing and treating their patients. Chapter 1 reviews the challenges in diagnosing GM with conventional MRI and introduces radiomics as a promising solution, discussing how it integrates AI with quantitative imaging analysis. Chapter 2 demonstrates how tumor size can be effectively assessed to predict response to treatments using simple quantitative measurement methods. Chapter 3 explores the T2-weighted–FLAIR mismatch sign (T2FMM), a key imaging biologic marker in human brain tumors, and evaluates its application in dogs, a pioneering effort in veterinary science. Finally, Chapter 4 outlines a radiomics-based pipeline for analyzing brain tumors, focusing on identifying GM type and aggressiveness, distinguishing tumors from non-tumor conditions, and comparing the performance of AI against expert diagnoses. This work has the potential to revolutionize veterinary brain tumor diagnostics and advance care for both animals and humans.
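To make the quantitative side of a radiomics workflow concrete, the sketch below computes a few first-order (histogram) features from the voxels inside a tumor mask. It is a toy illustration with made-up data, not the dissertation's pipeline, which additionally involves standardization, higher-order texture features, feature selection, and ML modeling:

```python
import numpy as np
from scipy import stats

def first_order_features(image, mask, n_bins=64):
    """First-order radiomics features from voxels inside a segmentation mask."""
    voxels = image[mask.astype(bool)]
    hist, _ = np.histogram(voxels, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before the log
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "skewness": float(stats.skew(voxels)),
        "kurtosis": float(stats.kurtosis(voxels)),
        "entropy": float(-(p * np.log2(p)).sum()),
        "volume_voxels": int(mask.sum()),
    }

# Hypothetical MRI volume and tumor mask:
mri = np.random.normal(0, 1, size=(32, 128, 128))
mask = np.zeros_like(mri, dtype=bool)
mask[12:20, 50:80, 50:80] = True
mri[mask] += 2.0                       # brighter "lesion" inside the mask
print(first_order_features(mri, mask))
```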
8

Machine learning methods for brain tumor segmentation / Méthodes d'apprentissage automatique pour la segmentation de tumeurs au cerveau

Havaei, Seyed Mohammad January 2017 (has links)
Abstract: Malignant brain tumors are the second leading cause of cancer-related deaths in children under 20. There are nearly 700,000 people in the U.S. living with a brain tumor, and 17,000 people are likely to lose their lives to primary malignant and central nervous system brain tumors every year. To identify whether a patient has a brain tumor in a non-invasive way, an MRI scan of the brain is acquired, followed by a manual examination of the scan by an expert who looks for lesions (i.e. clusters of cells which deviate from healthy tissue). For treatment purposes, the tumor and its sub-regions are outlined in a procedure known as brain tumor segmentation. Although brain tumor segmentation is primarily done manually, it is very time consuming and the segmentation is subject to variations both between observers and within the same observer. To address these issues, a number of automatic and semi-automatic methods have been proposed over the years to help physicians in the decision-making process. Methods based on machine learning have been subjects of great interest in brain tumor segmentation. With the advent of deep learning methods and their success in many computer vision applications such as image classification, these methods have also started to gain popularity in medical image analysis. In this thesis, we explore different machine learning and deep learning methods applied to brain tumor segmentation.
9

Comparative Analysis of Transformer and CNN Based Models for 2D Brain Tumor Segmentation

Träff, Henrik January 2023 (has links)
A brain tumor is an abnormal growth of cells within the brain, which can be categorized into primary and secondary tumor types. The most common type of primary tumor in adults is the glioma, which can be further classified into high-grade gliomas (HGGs) and low-grade gliomas (LGGs). Approximately 50% of patients diagnosed with HGG pass away within 1-2 years. Therefore, the early detection and prompt treatment of brain tumors are essential for effective management and improved patient outcomes. Brain tumor segmentation is a task in medical image analysis that entails distinguishing brain tumors from normal brain tissue in magnetic resonance imaging (MRI) scans. Computer vision algorithms and deep learning models capable of analyzing medical images can be leveraged for brain tumor segmentation. These algorithms and models have the potential to provide automated, reliable, and non-invasive screening for brain tumors, thereby enabling earlier and more effective treatment. For a considerable time, Convolutional Neural Networks (CNNs), including the U-Net, have served as the standard backbone architectures employed to address challenges in computer vision. In recent years, the Transformer architecture, which has already firmly established itself as the new state of the art in natural language processing (NLP), has been adapted to computer vision tasks. The Vision Transformer (ViT) and the Swin Transformer are two architectures derived from the original Transformer architecture that have been successfully employed for image analysis. The emergence of Transformer-based architectures in the field of computer vision calls for an investigation of whether CNNs can be rivaled as the de facto architecture in this field. This thesis compares the performance of four model architectures, namely the Swin Transformer, the Vision Transformer, the 2D U-Net, and the 2D U-Net implemented with the nnU-Net framework. These model architectures are trained using increasing amounts of brain tumor images from the BraTS 2020 dataset and subsequently evaluated on the task of brain tumor segmentation for HGG and LGG together, as well as HGG and LGG individually. The model architectures are compared on total training time, segmentation time, GPU memory usage, and the evaluation metrics Dice coefficient, Jaccard index, precision, and recall. The 2D U-Net implemented using the nnU-Net framework performs best in correctly segmenting HGG and LGG, followed by the Swin Transformer, 2D U-Net, and Vision Transformer. The Transformer-based architectures improve the least when going from 50% to 100% of the training data. Furthermore, when data augmentation is applied during training, the nnU-Net outperforms the other model architectures, followed by the Swin Transformer, 2D U-Net, and Vision Transformer. The nnU-Net benefited the least from employing data augmentation during training, while the Transformer-based architectures benefited the most. In this thesis we were able to perform a successful comparative analysis, effectively showcasing the distinct advantages of the four model architectures under discussion. Future comparisons could incorporate training the model architectures on a larger set of brain tumor images, such as the BraTS 2021 dataset. Additionally, it would be interesting to explore how Vision Transformers and Swin Transformers, pre-trained on either ImageNet-21K or RadImageNet, compare to the model architectures of this thesis on brain tumor segmentation.
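The overlap metrics used in the comparison (Dice coefficient, Jaccard index, precision, and recall) can be computed from binary masks as in the sketch below; the masks here are synthetic stand-ins, not BraTS data:

```python
import numpy as np

def segmentation_metrics(pred, truth, eps=1e-8):
    """Dice, Jaccard, precision, and recall for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "jaccard": tp / (tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }

# Hypothetical predicted vs. ground-truth tumor masks on a 2D slice:
truth = np.zeros((240, 240), dtype=bool)
truth[100:140, 100:140] = True
pred = np.zeros_like(truth)
pred[105:145, 100:140] = True          # slightly shifted prediction
print(segmentation_metrics(pred, truth))
```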
10

Deep Brain Dynamics and Images Mining for Tumor Detection and Precision Medicine

Lakshmi Ramesh (16637316) 30 August 2023 (has links)
Automatic brain tumor segmentation in Magnetic Resonance Imaging scans is essential for the diagnosis, treatment, and surgery of cancerous tumors. However, identifying the hardly detectable tumors poses a considerable challenge, as they are usually of different sizes, irregular shapes, and vague invasion areas. Current advancements have not yet fully leveraged the dynamics in the multiple modalities of MRI, since they usually treat multi-modality as multi-channel, and the early channel merging may not fully reveal inter-modal couplings and complementary patterns. In this thesis, we propose a novel deep cross-attention learning algorithm that maximizes the subtle dynamics mined from each of the input modalities and then boosts feature fusion capability. More specifically, we have designed a Multimodal Cross-Attention Module (MM-CAM), equipped with a 3D Multimodal Feature Rectification and Feature Fusion Module. Extensive experiments have shown that the proposed novel deep learning architecture, empowered by the innovative MM-CAM, produces higher-quality segmentation masks of the tumor subregions. Further, we have enhanced the algorithm with image matting refinement techniques. We propose to integrate a Progressive Refinement Module (PRM) and perform Cross-Subregion Refinement (CSR) for the precise identification of tumor boundaries. A Multiscale Dice Loss was also successfully employed to enforce additional supervision for the auxiliary segmentation outputs. This enhancement will facilitate effective matting-based refinement for medical image segmentation applications. Overall, this thesis, with deep learning, transformer-empowered pattern mining, and sophisticated architecture designs, will greatly advance deep brain dynamics and images mining for tumor detection and precision medicine.
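The MM-CAM itself is only summarized above; as a rough sketch of how cross-attention between two MRI modalities can be expressed in PyTorch, the module below lets tokens from one modality attend to another and fuses the result with a residual connection. The dimensions, modality names, and module structure are assumptions for illustration, not the thesis implementation:

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Toy cross-attention: tokens of one modality attend to another modality."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens, context_tokens):
        # query_tokens:   features of e.g. the T1ce modality, shape (B, N, dim)
        # context_tokens: features of e.g. the FLAIR modality, shape (B, M, dim)
        attended, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return self.norm(query_tokens + attended)   # residual fusion

# Hypothetical patch tokens from two MRI modalities:
t1ce = torch.randn(2, 256, 64)
flair = torch.randn(2, 256, 64)
fusion = CrossModalAttention(dim=64, heads=4)
print(fusion(t1ce, flair).shape)    # torch.Size([2, 256, 64])
```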
