Self-learning for 3D segmentation of medical images from single and few-slice annotation

Lassarat, Côme. January 2023
Training deep-learning networks to segment a particular region of interest (ROI) in 3D medical acquisitions (also called volumes) usually requires annotating a lot of data upstream, because of the predominantly fully supervised nature of the existing state-of-the-art models. To alleviate this annotation burden for medical experts and the associated cost, leveraging self-learning models, whose strength lies in their ability to be trained with unlabeled data, is a natural and straightforward approach. This work thus investigates a self-supervised model (called "self-learning" in this study) to segment the liver as a whole in medical acquisitions, which is very valuable for doctors as it provides insights for improved patient care. The self-learning pipeline utilizes only a single-slice (or few-slice) ground-truth annotation, propagating the annotation iteratively in 3D to predict the complete segmentation mask for the entire volume. The segmentation accuracy of the tested models is evaluated using the Dice score, a metric commonly employed for this task. Conducting this study on Computed Tomography (CT) acquisitions to annotate the liver, the initial implementation of the self-learning framework achieved a segmentation accuracy of 0.86 Dice score. Improvements were explored to address the drifting of the mask propagation, which eventually proved to be of limited benefit. The proposed method was then compared to the fully supervised nnU-Net baseline, the state-of-the-art deep-learning model for medical image segmentation, trained on fully 3D ground truth (Dice score ∼ 0.96). The final framework was assessed as an annotation tool. This was done by evaluating the segmentation accuracy of the state-of-the-art nnU-Net trained with annotations predicted by the self-learning pipeline for a given expert annotation budget.
While the self-learning framework did not generate sufficiently accurate annotations from a single-slice annotation, yielding an average Dice score of ∼ 0.85, it demonstrated encouraging results when two ground-truth slice annotations per volume were provided for the same annotation budget (Dice score of ∼ 0.90).
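The Dice score used throughout the abstract above measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch of how it is typically computed for binary 3D masks (an illustrative implementation, not the thesis code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # convention: two empty masks overlap perfectly
    return 2.0 * intersection / total

# Toy 2D example (the same formula applies voxel-wise in 3D):
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_score(a, a))  # 1.0 (perfect overlap)
print(dice_score(a, b))  # 2*1/(2+1) ≈ 0.667
```

A Dice score of 1.0 means the predicted and ground-truth masks coincide exactly; 0.0 means they are disjoint, which is why the reported ∼0.96 baseline represents near-perfect liver segmentation.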

Methods for automatic analysis of glucose uptake in adipose tissue using quantitative PET/MRI data

Andersson, Jonathan. January 2014
Brown adipose tissue (BAT) is the main tissue involved in non-shivering heat production. A greater understanding of BAT could possibly lead to new ways of preventing and treating obesity and type 2 diabetes. The increasing prevalence of these conditions, and the problems they cause for society and individuals, make this subject important to study. An ongoing study performed at the Turku University Hospital uses images acquired with PET/MRI using 18F-FDG as the tracer. Scans are performed on sedentary and athlete subjects at normal room temperature and during cold stimulation. Sedentary subjects then undergo scanning during cold stimulation again after a six-week exercise training intervention. This degree project used images from that study. The objective of this degree project was to examine methods to automatically and objectively quantify parameters relevant for activation of BAT in combined PET/MRI data. A secondary goal was to create images showing glucose uptake changes in subjects from images taken at different times. Parameters were quantified in adipose tissue directly without registration (image matching), and for neck scans also after registration. Results for the first three subjects who have completed the study are presented. Larger registration errors were encountered near moving organs and in regions with less information. The creation of images showing changes in glucose uptake seems to work well for the neck scans, and somewhat well for other sub-volumes. These images can be useful for identification of BAT. Examples of these images are shown in the report.
