121
Estimation of Height, Weight, Sex and Age from Magnetic Resonance Images using 3D Convolutional Neural Networks / Estimering av längd, vikt, kön och ålder från MR-bilder med 3D neurala faltningsnät. Nimhed, Carl. January 2022.
Magnetic resonance imaging is a non-invasive 3D imaging technology widely used in the medical field for partial and full body scans. AMRA Medical AB is a medical company which combines MR images with additional patient attributes such as height, weight, sex and age to perform analyses such as body composition profiling. However, the additional information required is not always available or accurate. Manual measurements are prone to human error, and retrieving them from patient journals is complicated due to the sensitive nature of the information. This thesis investigates methods to instead estimate height, weight, sex and age automatically from the MR images alone using deep learning. If successful, such methods could eliminate the reliance on additional subject information. Alternatively, they could serve as an error detection mechanism to flag possible inaccuracies in the data, which could be of use both for AMRA and for operators in a clinical scenario.
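To make the approach concrete, the following is a minimal sketch of a 3D convolutional network that maps an MR volume to the four target attributes, written in PyTorch as an illustration; the architecture, input size and layer widths are assumptions for this example and are not taken from the thesis.

```python
# Minimal sketch (not from the thesis): a small 3D CNN that maps an MR volume
# to height, weight and age (regression) plus sex (binary classification).
# Input shape and channel counts are assumptions for illustration only.
import torch
import torch.nn as nn

class BodyAttributeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling handles variable volume sizes
        )
        self.regression_head = nn.Linear(64, 3)  # height, weight, age
        self.sex_head = nn.Linear(64, 1)         # logit for sex

    def forward(self, x):                        # x: (batch, 1, D, H, W)
        h = self.features(x).flatten(1)
        return self.regression_head(h), self.sex_head(h)

model = BodyAttributeNet()
volume = torch.randn(2, 1, 64, 96, 96)           # dummy MR volumes
regression, sex_logit = model(volume)
```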
122
Evaluating Segmentation of MR Volumes Using Predictive Models and Machine Learning. Kantedal, Simon. January 2020.
A reliable evaluation system is essential for every automatic process. While techniques for automatic segmentation of images have been extensively researched in recent years, evaluation of such segmentations has not received an equal amount of attention. AMRA Medical AB has developed a system for automatic segmentation of magnetic resonance (MR) images of human bodies using an atlas-based approach. Through their software, AMRA is able to derive body composition measurements, such as muscle and fat volumes, from the segmented MR images. As of now, the automatic segmentations are quality controlled by clinical experts to ensure their correctness. This thesis investigates the possibility of leveraging predictive modelling to reduce the need for a manual quality control (QC) step in an otherwise automatic process. Two different regression approaches have been implemented as part of this study: body composition measurement prediction (BCMP) and manual correction prediction (MCP). BCMP aims at predicting the derived body composition measurements and comparing the predictions to the actual measurements; the theory is that large deviations between predictions and measurements signify an erroneously segmented sample. MCP instead tries to directly predict the amount of manual correction needed for each sample. Several regression models have been implemented and evaluated for the two approaches. Comparison of the regression models shows that local linear regression (LLR) is the most performant model for both BCMP and MCP. The results show that the inaccuracies in the BCMP models in practice render this approach useless. MCP proved to be a far more viable approach; using MCP together with LLR achieves a high true positive rate with a reasonably low false positive rate for several body composition measurements. These results suggest that the type of system developed in this thesis has the potential to reduce the need for manual inspections of the automatic segmentation masks.
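As an illustration of the MCP idea, the sketch below flags samples whose predicted manual correction exceeds a tolerance. It uses a k-nearest-neighbours regressor from scikit-learn as a simple stand-in for the local linear regression used in the thesis; the features, correction values and tolerance are synthetic assumptions.

```python
# Sketch of the MCP idea (illustrative, not the thesis implementation):
# predict how much manual correction a segmentation will need and flag
# samples whose predicted correction exceeds a tolerance for manual QC.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor  # stand-in for local linear regression

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 8))       # e.g. image/segmentation summary features
correction = np.abs(features[:, 0]) + 0.1 * rng.normal(size=500)  # synthetic correction amounts

model = KNeighborsRegressor(n_neighbors=15, weights="distance")
model.fit(features[:400], correction[:400])

predicted = model.predict(features[400:])
tolerance = 0.5                             # assumed acceptable correction
needs_manual_qc = predicted > tolerance     # True -> send to clinical expert
print(f"{needs_manual_qc.mean():.0%} of new samples flagged for manual QC")
```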
123
Prostate Segmentation according to the PI-RADS standard using a 3D-CNN. Holmlund, William. January 2022.
No description available.
124
Designing an AI-driven System at Scale for Detection of Abusive Head Trauma using Domain Modeling. January 2020.
Traumatic injuries are the leading cause of death in children under 18, with head trauma being the leading cause of death in children below 5. A large but unknown number of traumatic injuries are non-accidental, i.e. inflicted. The lack of sensitivity and specificity required to diagnose Abusive Head Trauma (AHT) from radiological studies puts children at risk of re-injury and death. Modern deep learning techniques can be utilized to detect Abusive Head Trauma using Computed Tomography (CT) scans. Training models using these techniques is only one part of building AI-driven computer-aided diagnostic systems; there are also challenges in deploying the models to make them highly available and scalable.
The thesis models the domain of Abusive Head Trauma using deep learning techniques and builds an AI-driven system at scale using best software engineering practices. It has been done in collaboration with Phoenix Children's Hospital (PCH). The thesis breaks AHT down into the sub-domains of medical knowledge, data collection, data pre-processing, image generation, image classification, building APIs, containers and Kubernetes. Data collection and pre-processing were done at PCH with the help of trauma researchers and radiologists. Experiments are run using deep learning models such as a DCGAN for image generation and pretrained 2D and custom 3D CNN classifiers for the classification tasks. The trained models are exposed as APIs using the Flask web framework, containerized using Docker and deployed on a Kubernetes cluster, as sketched below.
The results are analyzed based on the accuracy of the models, the feasibility of their implementation as APIs, and load testing of the Kubernetes cluster. They suggest a need for data annotation at the slice level for the CT scans and for expanding the data collection process. Load testing demonstrates the cluster's ability to auto-scale and serve a high number of requests. / Dissertation/Thesis / Master's Thesis, Software Engineering, 2020
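The deployment pattern described above can be sketched as a minimal Flask service; the endpoint path, payload format and dummy prediction function are assumptions for illustration and not the thesis code.

```python
# Minimal sketch (assumed, not the thesis code) of serving a trained classifier
# behind a Flask endpoint, ready to be containerized and deployed on Kubernetes.
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_aht_probability(volume):
    # Placeholder for the trained CNN; a real service would load model weights
    # at startup and run inference here.
    return float(np.clip(volume.mean(), 0.0, 1.0))

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    volume = np.asarray(payload["volume"], dtype=np.float32)  # pre-processed CT volume
    probability = predict_aht_probability(volume)
    return jsonify({"aht_probability": probability})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```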
125
Using convolutional neural network to generate neuro image template. Qian, Songyue. January 2018.
No description available.
126
Deep Learning Based Multi-Label Classification of Radiotherapy Target Volumes for Prostate Cancer / Djupinlärningsbaserad fler-etikett klassificering av målvolymer för prostatacancer inom strålterapi. Welander, Lina. January 2019.
An initiative to standardize the nomenclature in Sweden started in 2016, along with the creation of the local database Medical Information Quality Archive (MIQA) and a national radiotherapy register on the Information Network for CAncercare (INCA). A problem of identifying the clinical tumor volume (CTV) structures and the prescribed dose arose because the consecutive number added to the CTV name was assigned inconsistently in MIQA and INCA. Deep neural networks (DNNs) are promising tools for solving the multi-label classification of CTVs, enabling automatic labeling in the database. Prostate cancer patients, who often have more than one type of organ in the same CTV structure, were chosen for the proof of concept. The DNN used supervised training in a 2D fashion, where the radiation therapy (RT) structures along with the CT image were fed, slice by slice, to AlexNet and VGGNet to label the CTV structures in the local database system MIQA and in INCA. Since the model makes predictions on each slice, the study also includes three methods for assigning a final label to the CTV structure: the maximum method, which takes the maximum prediction for each class; the minimum method, which takes the minimum prediction for each class; and the occurrence method, which chooses the maximum prediction if the network has predicted the class above 0.5 on at least two slices and the minimum prediction otherwise. The DNN and the volume classification methods performed well, with the maximum and occurrence methods performing best, and can be used to interpret RT volumes in MIQA and INCA for prostate cancer patients. This novel study gives promising results for the future development of deep neural networks classifying RT structures for more than one type of cancer patient.
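A minimal sketch of the three slice-aggregation rules described above, assuming the per-slice outputs are sigmoid probabilities stored in a NumPy array; this is an illustration, not the thesis implementation.

```python
# Sketch of the three ways to turn per-slice predictions into one volume label
# (maximum, minimum, occurrence), as described above. Illustrative only.
import numpy as np

def aggregate_slice_predictions(slice_probs, method="occurrence", threshold=0.5):
    """slice_probs: array of shape (num_slices, num_classes) with sigmoid outputs."""
    maximum = slice_probs.max(axis=0)
    minimum = slice_probs.min(axis=0)
    if method == "maximum":
        return maximum
    if method == "minimum":
        return minimum
    # Occurrence: trust the maximum only if the class was predicted above the
    # threshold on at least two slices, otherwise fall back to the minimum.
    counts = (slice_probs > threshold).sum(axis=0)
    return np.where(counts >= 2, maximum, minimum)

probs = np.array([[0.9, 0.2], [0.8, 0.6], [0.1, 0.3]])  # 3 slices, 2 classes
print(aggregate_slice_predictions(probs, "occurrence"))  # [0.9 0.2]
```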
127
A Segmentation Network with a Class-Agnostic Loss Function for Training on Incomplete Data / Ett segmenteringsnätverk med en klass-agnostisk förlustfunktion för att träna på inkomplett data. Norman, Gabriella. January 2020.
The use of deep learning methods is increasing in medical image analysis, e.g., for segmentation of organs in medical images. Deep learning methods are highly dependent on large amounts of training data, a common obstacle in medical image analysis. This master thesis proposes a class-agnostic loss function as a method for training on incomplete data. The project used CT images from 1587 breast cancer patients, with a varying set of available segmentation masks for each patient. The class-agnostic loss function is given a label for each class of each sample (in this project, for each segmentation mask of each CT slice) telling the loss function whether the mask is an actual mask or just a placeholder. If it is a placeholder, the comparison between the predicted mask and the placeholder does not contribute to the loss value. The results show that it was possible to use the class-agnostic loss function to train a segmentation model with eight output masks, using data in which all eight masks were never present at the same time, and to achieve approximately the same performance as single-mask models.
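A minimal sketch of such a class-agnostic loss, assuming a PyTorch setting with per-class availability labels; the variable names and the use of binary cross-entropy are assumptions for illustration, not the thesis code.

```python
# Minimal sketch (assumed, not the thesis code) of a class-agnostic loss:
# per-class labels mark whether a ground-truth mask exists, and placeholder
# classes are excluded from the loss so incomplete annotations can be used.
import torch
import torch.nn.functional as F

def class_agnostic_bce_loss(pred_logits, target_masks, mask_available):
    """
    pred_logits:    (batch, classes, H, W) raw network outputs
    target_masks:   (batch, classes, H, W) ground truth; placeholders where unavailable
    mask_available: (batch, classes) 1.0 if the class has a real mask, else 0.0
    """
    per_pixel = F.binary_cross_entropy_with_logits(pred_logits, target_masks, reduction="none")
    per_class = per_pixel.mean(dim=(2, 3))        # (batch, classes)
    weighted = per_class * mask_available         # zero out placeholder classes
    return weighted.sum() / mask_available.sum().clamp(min=1.0)

# Dummy example: 8 output classes, only classes 0 and 3 annotated for this slice
logits = torch.randn(1, 8, 64, 64)
targets = torch.zeros(1, 8, 64, 64)
available = torch.zeros(1, 8)
available[0, [0, 3]] = 1.0
loss = class_agnostic_bce_loss(logits, targets, available)
```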
128
Deep Learning Based Deformable Image Registration of Pelvic Images / Bildregistrering av bäckenbilder baserade på djupinlärning. Cabrera Gil, Blanca. January 2020.
Deformable image registration is usually performed manually by clinicians, which is time-consuming and costly, or using optimization-based algorithms, which are not always optimal for registering images of different modalities. In this work, a deep learning-based method for MR-CT deformable image registration is presented. First, a neural network is optimized to register CT pelvic image pairs. Later, the model is trained on MR-CT image pairs to register CT images to match their MR counterparts. To solve the problem of unavailable ground-truth data, two approaches were used. For the CT-CT case, perfectly aligned image pairs were the starting point, and random deformations were generated to create ground-truth deformation fields. For the multi-modal case, synthetic CT images were generated from T2-weighted MR using a CycleGAN model, and synthetic deformations were applied to the MR images to generate ground-truth deformation fields. The synthetic deformations were created by combining a coarse and a fine deformation grid, obtaining a field with deformations of different scales. Several models were trained on images of different resolutions, and their performance was benchmarked against an analytic algorithm used in an actual registration workflow. The CT-CT models were tested using image pairs created by applying synthetic deformation fields. The MR-CT models were tested using two types of test images: the first contained synthetic CT images and MR images deformed by synthetically generated deformation fields, while the second contained real MR-CT image pairs. The test performance was measured using the Dice coefficient. The CT-CT models obtained Dice scores higher than 0.82, even for the models trained on lower resolution images. Although all MR-CT models experienced a drop in performance, the biggest decrease came from the analytic method used as a reference, for both synthetic and real test data. This means that the deep learning models outperformed the state-of-the-art analytic benchmark method. Even though the obtained Dice scores would need further improvement for use in a clinical setting, the results show great potential for deep learning-based methods for multi- and mono-modal deformable image registration.
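The synthetic-deformation idea can be sketched as follows, assuming NumPy and SciPy; the grid sizes and amplitudes are illustrative assumptions, not values from the thesis.

```python
# Sketch (illustrative assumption, not the thesis code) of building a synthetic
# ground-truth deformation field by combining a coarse and a fine random grid,
# then warping an image with it.
import numpy as np
from scipy.ndimage import zoom, map_coordinates

def random_deformation_field(shape, grid_points, amplitude, rng):
    # Random displacements on a sparse grid, upsampled to full resolution.
    coarse = rng.normal(scale=amplitude, size=(2, grid_points, grid_points))
    factors = (1, shape[0] / grid_points, shape[1] / grid_points)
    return zoom(coarse, factors, order=3)

rng = np.random.default_rng(42)
shape = (128, 128)
field = (random_deformation_field(shape, 4, 6.0, rng)     # coarse, large-scale deformation
         + random_deformation_field(shape, 16, 1.5, rng)) # fine, small-scale deformation

image = rng.random(shape)                                  # stand-in for an MR slice
rows, cols = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
warped = map_coordinates(image, [rows + field[0], cols + field[1]], order=1)
```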
129
Automation of Kidney Perfusion Analysis from Dynamic Phase-Contrast MRI using Deep Learning / Automatisering av analys av njurperfusion från faskontrast MRT med djupinlärning. Martínez Mora, Andrés. January 2020.
Renal phase-contrast magnetic resonance imaging (PC-MRI) is an MRI modality in which the phase component of the MR signal is made sensitive to the velocity of water molecules in the kidneys. PC-MRI can assess renal blood flow (RBF), an important biomarker in the development of kidney disease. RBF is analyzed through manual or semi-automatic delineation of the renal arteries in PC-MRI by experts, a time-consuming and operator-dependent process. We have therefore trained, validated and tested a fully automated deep learning model for faster and more objective renal artery segmentation. The PC-MRI data used for model training, validation and testing come from four studies (N=131 subjects), with images acquired from three manufacturers using different imaging parameters. The best deep learning model found consists of a deeply supervised 2D attention U-Net with residual skip connections. The output of this model was re-introduced as an extra channel in a second iteration to refine the segmentation result. The flow values in the segmented regions were integrated to quantify the mean arterial flow in the segmented renal arteries. The automated segmentation was evaluated on all images that had manual segmentation ground truths from a single operator, using the Dice coefficient as the segmentation accuracy metric. The mean arterial flow values quantified from the automated segmentation were also evaluated against ground-truth flow values from semi-automatic software. The deep learning model was trained and validated on images with segmentation ground truths using 4-fold cross-validation. A Dice segmentation accuracy of 0.71±0.21 was achieved (N=73 subjects). Although segmentation results were accurate for most arteries, the algorithm failed in ten out of 144 arteries. The flow quantification from the segmentation was highly correlated with the ground-truth flow measurements, without significant bias. This method shows promise for supporting RBF measurements from PC-MRI and can likely be used to save analysis time in future studies. More training data should be used for further improvement, both in terms of accuracy and generalizability.
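A minimal sketch of the flow-quantification step, integrating velocity over the segmented artery cross-section; the units, array shapes and averaging over the cardiac cycle are assumptions for illustration, not the thesis code.

```python
# Sketch (assumed, not the thesis code) of quantifying arterial flow from a
# PC-MRI velocity map and a predicted artery segmentation mask.
import numpy as np

def mean_arterial_flow(velocity_cm_s, artery_mask, pixel_area_cm2):
    """
    velocity_cm_s: (frames, H, W) through-plane velocity maps over the cardiac cycle
    artery_mask:   (H, W) boolean segmentation of the renal artery
    Returns mean flow in mL/min (1 cm^3 = 1 mL).
    """
    per_frame_flow = (velocity_cm_s * artery_mask).sum(axis=(1, 2)) * pixel_area_cm2  # mL/s per frame
    return per_frame_flow.mean() * 60.0

velocity = np.random.default_rng(1).normal(loc=20.0, scale=5.0, size=(30, 64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 30:34] = True                     # toy 16-pixel artery cross-section
print(f"{mean_arterial_flow(velocity, mask, pixel_area_cm2=0.01):.1f} mL/min")
```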
130
Off-resonance correction for magnetic resonance imaging with spiral trajectories. Nylund, Andreas. January 2014.
The procedure of cardiac magnetic resonance imaging requires patients to hold their breath for up to twenty seconds, creating an uncomfortable situation for many patients. It is proposed that an acquisition scheme using spiral trajectories is preferable due to their much shorter total scan time; however, spiral trajectories suffer from a blurring effect caused by off-resonance frequencies in the image area. There are several methods for reconstructing images with reduced blur, and Conjugate Phase Reconstruction was chosen for implementation as a MATLAB script and evaluation with regard to image reconstruction quality and computation time. This method uses a field map to find a conjugate to the off-resonance phase and demodulate the image. An algorithm for frequency-segmented Conjugate Phase Reconstruction is implemented, along with an improvement called Multi-frequency Interpolation. The implementation is tested by simulating spiral magnetic resonance imaging of a Shepp-Logan phantom. Different off-resonance frequencies and field maps are used to provide a broad view of the functionality of the code, and the two algorithms are compared to each other in terms of computation speed and image quality. It is concluded that this implementation may reconstruct images well, but that further testing on actual scan sequences is required to determine its usefulness. The Multi-frequency Interpolation algorithm yields images that are not useful in a clinical context. Further study of other methods that do not require a field map is suggested for comparison.
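A minimal sketch of the frequency-segmented Conjugate Phase Reconstruction idea is given below; the thesis implementation is in MATLAB, so this Python version, which assumes a Cartesian readout, a simple nearest-frequency pixel selection and a particular sign convention for the demodulation, is only an illustrative approximation of the general approach.

```python
# Sketch (illustrative only; the thesis used MATLAB) of frequency-segmented
# conjugate phase reconstruction: the raw data are demodulated at a small set
# of frequencies, reconstructed, and each pixel takes the reconstruction whose
# demodulation frequency is nearest to its field-map value.
import numpy as np

def freq_segmented_cpr(kspace, sample_times, field_map_hz, segment_freqs_hz):
    recons = []
    for f in segment_freqs_hz:
        # Sign convention of the demodulation depends on the acquisition.
        demodulated = kspace * np.exp(-1j * 2 * np.pi * f * sample_times)
        recons.append(np.fft.ifft2(np.fft.ifftshift(demodulated)))
    recons = np.stack(recons)                                   # (segments, H, W)
    nearest = np.abs(field_map_hz[None] - np.asarray(segment_freqs_hz)[:, None, None]).argmin(axis=0)
    return np.take_along_axis(recons, nearest[None], axis=0)[0]

# Toy data: 64x64 k-space, linearly increasing sample times, smooth field map
shape = (64, 64)
kspace = np.fft.fftshift(np.fft.fft2(np.random.default_rng(0).random(shape)))
sample_times = np.linspace(0, 5e-3, shape[0] * shape[1]).reshape(shape)         # seconds
field_map = 50.0 * np.linspace(-1, 1, shape[1])[None].repeat(shape[0], axis=0)  # Hz
image = freq_segmented_cpr(kspace, sample_times, field_map, segment_freqs_hz=[-50, -25, 0, 25, 50])
```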