1

Coronary Artery Plaque Segmentation with CTA Images Based on Deep Learning

Shuli, Zhang January 2022 (has links)
Atherosclerotic plaque is currently the leading cause of coronary artery disease (CAD). CT images reveal the size and type of plaque, which helps doctors make a correct diagnosis; to do this, plaques must first be segmented from the CT images. Plaque segmentation remains challenging, however, because it demands considerable time and effort from radiologists. With advances in technology, segmentation algorithms based on deep learning have been applied in this field. These algorithms tend to be fully automated and achieve high segmentation accuracy, showing great potential. In this thesis, we apply deep learning methods to segment plaques from 3D cardiac CT images. The work proceeds in two steps. The first part extracts the coronary artery from the CT image with a U-Net. In the second part, a fully convolutional network segments the plaques from the artery. In each part, the algorithm undergoes 5-fold cross-validation. In the first part, we achieve a Dice coefficient of 0.8954. In the second part, we achieve an AUC score of 0.9202, which is higher than the auto-encoder method and very close to the state-of-the-art method.
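The abstract above reports a Dice coefficient and 5-fold cross-validation. As an illustration only (not code from the thesis), both can be sketched in a few lines of Python:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

In a 5-fold setup like the one described, each of the two stages would be trained five times, each time validating on the held-out fold and averaging the resulting Dice (or AUC) scores.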
2

Optic nerve sheath diameter semantic segmentation and feature extraction

Bonato, Simone January 2023 (has links)
Traumatic brain injury (TBI) affects millions of people worldwide, leading to significant mortality and disability rates. Elevated intracranial pressure (ICP) resulting from TBI can cause severe complications and requires early detection to improve patient outcomes. While invasive methods are commonly used to measure ICP accurately, non-invasive techniques such as optic nerve sheath diameter (ONSD) measurement show promise. This study aims to create a tool that automatically segments the ONS from a head computed tomography (CT) scan and extracts meaningful measures from the segmentation mask, for use by radiologists and physicians treating people affected by TBI. This was achieved using a deep learning model called nnU-Net, commonly adopted for semantic segmentation in medical contexts. The project uses manually labeled head CT scans from a public dataset named CQ500 to train the segmentation model in an iterative fashion. The initial training on 33 manually segmented samples produced highly satisfactory segmentations, with good performance indicated by Dice scores. A subsequent training round, combined with manual corrections of 44 unseen samples, further improved the segmentation quality. The segmentation masks enabled the development of an automatic tool to extract and straighten optic nerve volumes, facilitating the extraction of relevant measures. Correlation analysis with a binary label indicating potentially raised ICP showed a stronger correlation when measurements were taken closer to the eyeball. Additionally, a comparison between manual and automated ONSD measures, taken at a 3 mm distance from the eyeball, showed close agreement between the two methods. Overall, this thesis lays the foundation for an automatic tool whose purpose is to enable faster and more accurate diagnosis by automatically segmenting the optic nerve and extracting useful prognostic predictors.
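The abstract describes measuring the sheath diameter at a fixed distance (3 mm) from the eyeball on a straightened nerve volume. A minimal sketch of that measurement, assuming a hypothetical 2D mask whose rows run along the straightened nerve axis starting at the eyeball (the thesis's actual pipeline is not reproduced here):

```python
import numpy as np

def diameter_at_distance(straightened_mask, distance_mm, spacing_mm):
    """Measure structure width at a given distance from the eyeball.

    straightened_mask: 2D binary array, row 0 at the eyeball, rows running
    along the (straightened) nerve axis, columns transverse to it.
    spacing_mm: isotropic pixel spacing in millimetres.
    """
    row = int(round(distance_mm / spacing_mm))
    cols = np.flatnonzero(straightened_mask[row])
    if cols.size == 0:
        return 0.0
    # Width in mm: span of foreground pixels in the transverse direction.
    return (cols[-1] - cols[0] + 1) * spacing_mm
```

In practice such a measurement would be taken on each transverse slice of the straightened 3D volume and compared against the manual caliper measurement, as the thesis does at 3 mm.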
3

Biological Semantic Segmentation on CT Medical Images for Kidney Tumor Detection Using nnU-Net Framework

Bergsneider, Andres 01 March 2021 (has links) (PDF)
Healthcare systems are constantly challenged by bottlenecks in human-reliant operations, such as analyzing medical images. High precision and repeatability are necessary when performing diagnostics on patients with tumors. Over the years, an increasing number of advances have been made using various machine learning algorithms for tumor detection, helping to fast-track diagnosis and treatment decisions. "Black box" systems such as the complex deep learning networks discussed in this paper rely heavily on hyperparameter optimization to obtain their best performance, which requires a significant time investment in tuning to acquire cutting-edge results. This paper implements a state-of-the-art deep learning framework, nnU-Net, to label computed tomography (CT) images from patients with kidney cancer through semantic segmentation: raw CT images are fed through a deep architecture to obtain pixel-wise mask classifications. Taking advantage of nnU-Net's versatility, various configurations of the architecture are explored and applied, with the resulting performance benchmarked and ranked, including variations of 2D and 3D convolutions as well as distinct cost functions such as the Sørensen-Dice coefficient, cross entropy, and a compound of the two. The accuracy currently reported for the detection of benign and malignant tumors in CT imagery by medical practitioners is 79%. The best iteration and mixture of parameters in this work resulted in an accuracy of 83% for tumor labelling. This study further demonstrates the performance of a versatile and groundbreaking deep learning framework designed for biomedical image segmentation.
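The abstract mentions a compound of the Sørensen-Dice coefficient and cross entropy as a cost function. As a hedged illustration of that idea (a generic sketch, not nnU-Net's actual implementation), a binary version can be written as:

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-7):
    """1 minus the soft Dice score over foreground probabilities."""
    intersection = (probs * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)

def binary_cross_entropy(probs, target, eps=1e-7):
    """Mean binary cross-entropy, with clipping for numerical stability."""
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(target * np.log(probs) + (1 - target) * np.log(1 - probs))

def compound_loss(probs, target, w_dice=1.0, w_ce=1.0):
    """Weighted sum of soft Dice loss and cross-entropy."""
    return w_dice * soft_dice_loss(probs, target) + w_ce * binary_cross_entropy(probs, target)
```

The Dice term directly rewards region overlap (robust to class imbalance), while the cross-entropy term provides smooth per-pixel gradients; summing them is a common way to get both benefits.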
4

Comparative Analysis of Transformer and CNN Based Models for 2D Brain Tumor Segmentation

Träff, Henrik January 2023 (has links)
A brain tumor is an abnormal growth of cells within the brain, which can be categorized into primary and secondary tumor types. The most common primary tumors in adults are gliomas, which can be further classified into high-grade gliomas (HGGs) and low-grade gliomas (LGGs). Approximately 50% of patients diagnosed with HGG pass away within 1-2 years. Early detection and prompt treatment of brain tumors are therefore essential for effective management and improved patient outcomes.

Brain tumor segmentation is a task in medical image analysis that entails distinguishing brain tumors from normal brain tissue in magnetic resonance imaging (MRI) scans. Computer vision algorithms and deep learning models capable of analyzing medical images can be leveraged for brain tumor segmentation. These algorithms and models have the potential to provide automated, reliable, and non-invasive screening for brain tumors, thereby enabling earlier and more effective treatment. For a considerable time, Convolutional Neural Networks (CNNs), including the U-Net, have served as the standard backbone architectures for computer vision tasks. In recent years, the Transformer architecture, which has already firmly established itself as the new state of the art in natural language processing (NLP), has been adapted to computer vision. The Vision Transformer (ViT) and the Swin Transformer are two architectures derived from the original Transformer that have been successfully employed for image analysis. The emergence of Transformer-based architectures in computer vision calls for an investigation into whether CNNs can be rivaled as the de facto architecture in this field.

This thesis compares the performance of four model architectures: the Swin Transformer, the Vision Transformer, the 2D U-Net, and the 2D U-Net implemented with the nnU-Net framework. These architectures are trained using increasing amounts of brain tumor images from the BraTS 2020 dataset and subsequently evaluated on brain tumor segmentation for HGG and LGG together, as well as for HGG and LGG individually. The architectures are compared on total training time, segmentation time, GPU memory usage, and the evaluation metrics Dice coefficient, Jaccard index, precision, and recall. The 2D U-Net implemented with the nnU-Net framework performs best in correctly segmenting HGG and LGG, followed by the Swin Transformer, 2D U-Net, and Vision Transformer. The Transformer-based architectures improve the least when going from 50% to 100% of the training data. Furthermore, when data augmentation is applied during training, the nnU-Net outperforms the other architectures, again followed by the Swin Transformer, 2D U-Net, and Vision Transformer. The nnU-Net benefited the least from data augmentation during training, while the Transformer-based architectures benefited the most.

This thesis presents a successful comparative analysis that showcases the distinct advantages of the four model architectures under discussion. Future comparisons could train the architectures on a larger set of brain tumor images, such as the BraTS 2021 dataset. It would also be interesting to explore how Vision Transformers and Swin Transformers pre-trained on either ImageNet-21K or RadImageNet compare to the model architectures of this thesis on brain tumor segmentation.
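The abstract evaluates models on the Jaccard index, precision, and recall (alongside Dice). As a generic illustration of how these overlap metrics are computed from binary masks (not the thesis's evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Jaccard index, precision, and recall for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    tp = np.logical_and(pred, target).sum()   # correctly predicted tumor pixels
    fp = np.logical_and(pred, ~target).sum()  # background predicted as tumor
    fn = np.logical_and(~pred, target).sum()  # tumor predicted as background
    jaccard = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"jaccard": jaccard, "precision": precision, "recall": recall}
```

Reporting Jaccard alongside Dice is largely redundant (the two are monotonically related), but precision and recall separate the two failure modes: over-segmentation lowers precision, under-segmentation lowers recall.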
