91. Automated methods in the diagnosing of retinal images. Jönsson, Marthina, January 2012.
This report summarises a number of articles that were read and analysed. Each article describes methods that can be used to detect lesions, optic disks, drusen and exudates in retinal images, i.e. to diagnose conditions such as diabetic retinopathy and age-related macular degeneration. A general approach, on which all of the methods are more or less based, is also presented. Methods to locate the optic disk include PCA, kNN regression, the Hough transform, fuzzy convergence and the vessel direction matched filter. Judged on results, reliability, number of images and publisher, the best of these is kNN regression; its results are remarkably good, which raises some doubt about their reliability, but the method was published by the IEEE, which lends it credibility. The next best, and also very useful, method is the vessel direction matched filter. Methods to detect drusen, and thereby diagnose age-related macular degeneration, include the PNN classifier and a histogram approach. The best of these, judged on the same criteria, is the PNN classifier, which achieved a sensitivity of 94% and a specificity of 95% on 300 images in an experiment published by the IEEE in 2011. Methods to detect exudates, and thereby diagnose diabetic retinopathy, include morphological techniques and a pipeline based on the Luv colour space, a Wiener filter and the Canny edge detector. The best of these, judged on the same criteria, is an experiment called "Feature Extraction", which combines the Luv colour space, a Wiener filter for noise removal and the Canny edge detector.
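To illustrate the exudate-detection pipeline singled out above (Luv colour space, Wiener filter for noise, Canny edge detector), here is a minimal Python sketch; it is not the code from the reviewed article, and the filter window and sigma values are assumptions.

```python
# Minimal sketch of the exudate-detection pipeline described in the abstract:
# convert to the Luv colour space, denoise with a Wiener filter, then run Canny.
import numpy as np
from scipy.signal import wiener
from skimage import color, feature, io

def candidate_exudate_edges(rgb_image, sigma=1.5):
    """Return a binary edge map of bright lesion candidates in a fundus image."""
    luv = color.rgb2luv(rgb_image)               # work in the Luv colour space
    lightness = luv[..., 0] / 100.0              # exudates are bright: use the L channel, scaled to [0, 1]
    denoised = wiener(lightness, (5, 5))         # Wiener filter suppresses sensor noise
    return feature.canny(denoised, sigma=sigma)  # Canny edge detector on the smoothed channel

# Usage (the file name is a placeholder):
# edges = candidate_exudate_edges(io.imread("fundus.png"))
```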
92. The use of a body-wide automatic anatomy recognition system in image analysis of kidneys. Mohammadianrasanani, Seyedmehrdad, January 2013.
No description available.
93. Combining Register Data and X-Ray Images for a Precision Medicine Prediction Model of Thigh Bone Fractures. Nilsson, Alva and Andlid, Oliver, January 2022.
The purpose of this master thesis was to investigate whether using both X-ray images and patients' register data could increase the performance of a neural network in discriminating between two types of fractures in the thigh bone: atypical femoral fractures (AFF) and normal femoral fractures (NFF). We also examined and evaluated how the fusion of the two data types could be done and how different types of fusion affect performance. Finally, we evaluated how the number of variables in the register data affects a network's performance. Our image dataset consisted of 1,442 unique images from 580 patients (16.85% of the images were labelled AFF, corresponding to 15.86% of the patients). Since the dataset is very imbalanced, sensitivity is a prioritised evaluation metric. The register data network was evaluated using five different versions of the register data parameters: two (age and sex), seven (in binary and in non-binary form) and 44 (in binary and in non-binary form). Having only age and sex as input resulted in a classifier predicting all samples as class 0 (NFF) for all tested network architectures. Using a certain network structure (called register data model 2) in combination with the seven non-binary parameters outperforms using both two and 44 (binary and non-binary) parameters regarding mean AUC and sensitivity. The highest mean accuracy is obtained by using 44 non-binary parameters. The seven register data parameters have a known connection to AFF and include age and sex. The network with X-ray images as input uses a transfer learning approach with a pre-trained ResNet50 base. This model performed better than all the register data models regarding all considered evaluation metrics. Three fusion architectures were implemented and evaluated: probability fusion (PF), feature fusion (FF) and learned feature fusion (LFF). PF concatenates the predictions provided by the two separate baseline models; the combined vector is fed into a shallow neural network, which is the only trainable part of this architecture. FF fuses a feature vector provided by the image baseline model with the raw register data parameters; prior to the concatenation, both vectors are normalised, and the fused vector is then fed into a shallow trainable network. The final architecture, LFF, does not have completely frozen baseline models but instead learns two separate feature vectors, which are then concatenated and fed into a shallow neural network to obtain a final prediction. The three fusion architectures were evaluated twice: using the seven non-binary register data parameters, or only age and sex. When evaluated patient-wise, all three fusion architectures using the seven non-binary parameters obtain higher mean AUC and sensitivity than the single-modality baseline models. All fusion architectures with only age and sex as register data parameters result in higher mean sensitivity than the baseline models. Overall, probability fusion with the seven non-binary parameters results in the highest mean AUC and sensitivity, and learned feature fusion with the seven non-binary parameters results in the highest mean accuracy.
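As an illustration of the feature fusion (FF) architecture described above (a frozen image backbone whose feature vector is concatenated with the register-data variables and fed to a shallow trainable network), here is a hedged PyTorch sketch. The layer sizes and variable names are assumptions, not the thesis configuration.

```python
# Sketch of feature fusion: frozen ResNet50 features + register data -> shallow trainable head.
import torch
import torch.nn as nn
from torchvision import models

class FeatureFusionNet(nn.Module):
    def __init__(self, n_register_vars=7, n_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=None)   # pre-trained weights would be loaded in practice
        backbone.fc = nn.Identity()                # expose the 2048-dimensional feature vector
        for p in backbone.parameters():
            p.requires_grad = False                # the image baseline stays frozen in FF
        self.backbone = backbone
        self.head = nn.Sequential(                 # shallow trainable fusion network
            nn.Linear(2048 + n_register_vars, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, xray, register_vars):
        img_feat = self.backbone(xray)                        # (batch, 2048)
        img_feat = nn.functional.normalize(img_feat, dim=1)   # normalise before concatenation
        fused = torch.cat([img_feat, register_vars], dim=1)   # register_vars assumed pre-normalised
        return self.head(fused)

# Example forward pass with dummy data:
# logits = FeatureFusionNet()(torch.randn(4, 3, 224, 224), torch.randn(4, 7))
```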
94. Kamerakalibrering i MATLAB: Komplement till studier av kompression av sulor i kolfiberskor / Camera Calibration in MATLAB: Complement to Studies of Compression in Carbon Fiber Shoe Soles. Hagberg, Lina and Hed, Linnéa, January 2023.
The purpose of this work was to calibrate a camera in MATLAB using the Computer Vision Toolbox add-on and to design a script that converts pixel coordinates in MATLAB into spatial coordinates in the room. The result of this work serves as a complement to a study conducted at Gymnastik- och idrottshögskolan in Stockholm, where the compression of shoe soles is investigated with a camera and a MATLAB script. Several tests were conducted to ensure the reliability of the calibration and the compatibility of the two MATLAB scripts. The calibration is considered reliable, and the compatibility of the two scripts is considered satisfactory. Furthermore, a smaller compression study was performed on a treadmill. The results from this study are considered unreliable, as very large errors had to be allowed to enable the so-called pixel tracking of the MATLAB script. This is considered to be due to poor lighting, and because the video recording did not have a high enough frame rate to follow the high velocity of the shoes on the treadmill. Further compression studies are recommended to be performed on stable, non-moving surfaces, where the foot is essentially still in the camera's field of view during ground contact.
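The thesis used MATLAB's Computer Vision Toolbox; as a rough illustration of the same idea (calibrate from checkerboard images, then map a pixel to planar spatial coordinates), here is a Python/OpenCV sketch. The image paths, board dimensions and square size are assumptions.

```python
# Sketch: checkerboard camera calibration and pixel-to-plane (Z = 0) conversion.
import glob
import cv2
import numpy as np

board = (9, 6)        # inner corners of the checkerboard (assumed)
square_size = 25.0    # checkerboard square size in mm (assumed)

objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calibration_images/*.png"):        # placeholder folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

_, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

def pixel_to_plane(u, v, rvec, tvec):
    """Back-project an (undistorted) pixel onto the Z = 0 calibration plane, in mm."""
    R, _ = cv2.Rodrigues(rvec)
    H = K @ np.column_stack([R[:, 0], R[:, 1], tvec.ravel()])  # plane-induced homography
    xyw = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return xyw[:2] / xyw[2]

# Usage for the pose of the first calibration image:
# xy_mm = pixel_to_plane(640.0, 360.0, rvecs[0], tvecs[0])
```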
95. Multiclass Brain Tumour Tissue Classification on Histopathology Images Using Vision Transformers. Spyretos, Christoforos, January 2023.
Histopathology refers to inspecting and analysing tissue samples under a microscope to identify and examine signs of disease. Manual investigation of histology slides by pathologists is time-consuming and susceptible to misinterpretation. Deep learning models have demonstrated outstanding performance in digital histopathology, providing doctors and clinicians with immediate and reliable decision-making assistance in their workflow. In this study, deep learning models, including vision transformers (ViT) and convolutional neural networks (CNN), were employed to compare their performance in a patch-level classification task on feature annotations of glioblastoma multiforme in H&E histology whole slide images (WSI). The dataset utilised in this study was obtained from the Ivy Glioblastoma Atlas Project (IvyGAP). The pre-processing steps included stain normalisation of the images, and patches of size 256x256 pixels were extracted from the WSIs. In addition, a per-subject split was implemented to prevent data leakage between the training, validation and test sets. Three models were employed to perform the classification task on the IvyGAP image data: two scratch-trained models, a ViT and a CNN (a variant of VGG16), and a pre-trained ViT. The models were assessed using various metrics such as accuracy, F1-score, confusion matrices, Matthews correlation coefficient (MCC), area under the curve (AUC) and receiver operating characteristic (ROC) curves. In addition, experiments were conducted to calibrate the models to reflect the ground truth of the task using the temperature scaling technique, and their uncertainty was estimated through the Monte Carlo dropout approach. Lastly, the models were statistically compared using the Wilcoxon signed-rank test. Among the evaluated models, the scratch-trained ViT exhibited the best test accuracy of 67%, with an MCC of 0.45. The scratch-trained CNN obtained a test accuracy of 49% and an MCC of 0.15, whereas the pre-trained ViT only achieved a test accuracy of 28% and an MCC of 0.034. The reliability diagrams and metrics indicated that the scratch-trained ViT demonstrated better calibration. After applying temperature scaling, only the scratch-trained CNN showed improved calibration; therefore, the calibrated CNN was used for subsequent experiments. The scratch-trained ViT and the calibrated CNN exhibited different uncertainty levels: the scratch-trained ViT had moderate uncertainty, the calibrated CNN showed modest to high uncertainty across classes, and the pre-trained ViT had an overall high uncertainty. Finally, the statistical tests reported that the scratch-trained ViT performed best among the three models at a significance level of approximately 0.0167 after applying the Bonferroni correction. In conclusion, the scratch-trained ViT model achieved the highest test accuracy and better class discrimination, whereas the scratch-trained CNN and pre-trained ViT performed poorly and were comparable to random classifiers. The scratch-trained ViT demonstrated better calibration, while the calibrated CNN showed varying levels of uncertainty.
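The temperature-scaling step mentioned above can be sketched generically in PyTorch as fitting a single temperature on held-out validation logits by minimising the negative log-likelihood; the optimiser settings and variable names below are assumptions, not the thesis code.

```python
# Sketch of temperature scaling: fit one scalar T on validation logits, then divide
# test logits by T before the softmax to obtain calibrated probabilities.
import torch
import torch.nn as nn

def fit_temperature(val_logits, val_labels, max_iter=50):
    """Return a temperature T > 0 fitted by minimising cross-entropy on validation data."""
    log_t = torch.zeros(1, requires_grad=True)          # optimise log(T) so that T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)
    nll = nn.CrossEntropyLoss()

    def closure():
        optimizer.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage with dummy logits for a 3-class problem:
# T = fit_temperature(torch.randn(100, 3), torch.randint(0, 3, (100,)))
# calibrated_probs = torch.softmax(torch.randn(10, 3) / T, dim=1)
```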
96. Optic nerve sheath diameter semantic segmentation and feature extraction / Semantisk segmentering och funktionsextraktion med diameter på synnerven. Bonato, Simone, January 2023.
Traumatic brain injury (TBI) affects millions of people worldwide, leading to significant mortality and disability rates. Elevated intracranial pressure (ICP) resulting from TBI can cause severe complications and requires early detection to improve patient outcomes. While invasive methods are commonly used to measure ICP accurately, non-invasive techniques such as optic nerve sheath diameter (ONSD) measurement show promise. This study aims to create a tool that automatically segments the ONS from a head computed tomography (CT) scan and extracts meaningful measures from the segmentation mask, which can be used by radiologists and clinicians when treating people affected by TBI. This has been achieved using a deep learning model called nnU-Net, commonly adopted for semantic segmentation in medical contexts. The project makes use of manually labelled head CT scans from a public dataset named CQ500 to train the segmentation model using an iterative approach. The initial training on 33 manually segmented samples produced highly satisfactory segmentations, with good performance indicated by Dice scores. A subsequent training round, combined with manual corrections of 44 unseen samples, further improved the segmentation quality. The segmentation masks enabled the development of an automatic tool to extract and straighten optic nerve volumes, facilitating the extraction of relevant measures. Correlation analysis with a binary label indicating potentially raised ICP showed a stronger correlation when measurements were taken closer to the eyeball. Additionally, a comparison between manual and automated measures of the optic nerve sheath diameter, taken at a distance of 3 mm from the eyeball, revealed similarity between the two methods. Overall, this thesis lays the foundation for an automatic tool whose purpose is to enable faster and more accurate diagnosis by automatically segmenting the optic nerve and extracting useful prognostic predictors.
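As a rough illustration of reading a diameter measure from a straightened binary segmentation at a fixed distance behind the eyeball, here is a simplified NumPy sketch; the array orientation, voxel spacing and the widest-extent definition of the diameter are assumptions, not the tool developed in the thesis.

```python
# Sketch: ONSD at a given distance behind the eyeball, from a straightened nerve mask.
import numpy as np

def onsd_at_distance(straight_mask, eyeball_index, distance_mm, voxel_mm):
    """ONSD in mm, measured distance_mm behind the eyeball along the straightened axis.

    straight_mask: 3D boolean array with axis 0 running from the eyeball into the orbit.
    voxel_mm: voxel spacing (axis 0, axis 1, axis 2) in mm.
    """
    offset = int(round(distance_mm / voxel_mm[0]))
    cross_section = straight_mask[eyeball_index + offset]   # slice perpendicular to the nerve
    if not cross_section.any():
        return 0.0
    rows = np.where(cross_section.any(axis=1))[0]
    cols = np.where(cross_section.any(axis=0))[0]
    height = (rows.max() - rows.min() + 1) * voxel_mm[1]
    width = (cols.max() - cols.min() + 1) * voxel_mm[2]
    return max(height, width)                                # widest extent of the sheath

# Usage (all values are placeholders):
# d = onsd_at_distance(mask, eyeball_index=12, distance_mm=3.0, voxel_mm=(0.5, 0.5, 0.5))
```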
97. Automatic Detection of Common Signal Quality Issues in MRI Data using Deep Neural Networks. Ax, Erika and Djerf, Elin, January 2023.
Magnetic resonance imaging (MRI) is a commonly used non-invasive imaging technique that provides high-resolution images of soft tissue. One problem with MRI is that it is sensitive to signal quality issues. These issues can arise for various reasons, for example from metal located either inside or outside the body. Another common signal quality issue is caused by the patient being partly placed outside the field of view of the MRI scanner. This thesis aims to investigate the possibility of automatically detecting these signal quality issues using deep neural networks. More specifically, two different 3D CNN approaches were studied: a classification-based approach and a reconstruction-based approach. The datasets used consist of MRI volumes from UK Biobank which have been processed and manually annotated by operators at AMRA Medical. For the classification method, four different network architectures were explored, utilising supervised learning with multi-label classification. The classification method was evaluated using accuracy and label-based evaluation metrics, such as macro-precision, macro-recall and macro-F1. The reconstruction method was based on anomaly detection using an autoencoder trained to reconstruct volumes without any artefacts. A mean squared prediction error was calculated for the reconstructed volume and compared against a threshold in order to classify a volume as containing artefacts or not; the idea was that volumes containing artefacts should be more difficult to reconstruct and thus result in a higher prediction error. The reconstruction method was evaluated using accuracy, precision, recall and F1-score. The results show that the classification method has overall higher performance than the reconstruction method. The achieved accuracy for the classification method was 98.0% for metal artefacts and 97.5% for outside-field-of-view artefacts, and the best architecture proved to be DenseNet201. The reconstruction method worked for metal artefacts, with an achieved accuracy of 75.7%, but it was concluded that it did not work for detection of outside-field-of-view artefacts. The results from the classification method indicate that there is a possibility to automatically detect artefacts with deep neural networks. However, the method needs further improvement before it can completely replace a manual quality control step prior to using the volumes for calculation of biomarkers.
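The reconstruction-based detector can be sketched as follows in PyTorch: an autoencoder trained only on artefact-free volumes, with the mean squared reconstruction error thresholded at inference time. The tiny architecture and the threshold value are illustrative assumptions, not the networks evaluated in the thesis.

```python
# Sketch of reconstruction-error anomaly detection for 3D MRI volumes.
import torch
import torch.nn as nn

class ConvAutoencoder3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def has_artefact(model, volume, threshold=0.01):
    """Flag a volume whose mean squared reconstruction error exceeds the threshold."""
    with torch.no_grad():
        error = torch.mean((model(volume) - volume) ** 2).item()
    return error > threshold

# Usage with a dummy 64^3 volume (batch and channel dimensions included):
# model = ConvAutoencoder3D()
# flagged = has_artefact(model, torch.randn(1, 1, 64, 64, 64))
```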
98. Incremental Learning of Deep Convolutional Neural Networks for Tumour Classification in Pathology Images. Johansson, Philip, January 2019.
Understaffing of medical doctors is becoming a pressing problem in many healthcare systems. This problem can be alleviated by utilising Computer-Aided Diagnosis (CAD) systems to substitute for doctors in different tasks, for instance histopathological image classification. The recent surge of deep learning has allowed CAD systems to perform this task with very competitive performance. However, a major challenge with this task is the need to periodically update the models with new data and/or new classes or diseases. These periodical updates will result in catastrophic forgetting, as convolutional neural networks typically require the entire data set beforehand and tend to lose knowledge about old data when trained on new data. Incremental learning methods have been proposed to alleviate this problem in deep learning. In this thesis, two incremental learning methods, Learning without Forgetting (LwF) and a generative rehearsal-based method, are investigated. They are evaluated on two criteria: the first is the capability of incrementally adding new classes to a pre-trained model, and the second is the ability to update the current model with a new, unbalanced data set. Experiments show that LwF does not retain knowledge properly in either case. Further experiments are needed to draw any definite conclusions, for instance using another training approach for the classes and trying different combinations of losses. On the other hand, the generative rehearsal-based method tends to work for one class, showing good potential if better-quality images were generated. Additional experiments are also required to investigate new architectures and approaches for more stable training.
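As an illustration of the Learning without Forgetting objective investigated here, the following PyTorch sketch adds a knowledge-distillation term on the old classes to the cross-entropy loss on the new classes; the temperature and loss weight are assumptions, not the thesis settings.

```python
# Sketch of the LwF loss: cross-entropy on new classes plus distillation on old classes.
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, old_logits_new_model, old_logits_old_model,
             targets, temperature=2.0, distill_weight=1.0):
    """new_logits: logits for the new classes; old_logits_*: logits for the old classes."""
    ce = F.cross_entropy(new_logits, targets)   # supervised loss on the new task
    # Distillation: keep the new model's softened old-class outputs close to the old model's.
    soft_targets = F.softmax(old_logits_old_model / temperature, dim=1)
    log_probs = F.log_softmax(old_logits_new_model / temperature, dim=1)
    distill = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
    return ce + distill_weight * distill

# Usage with dummy tensors (5 old classes, 3 new classes, batch of 8):
# loss = lwf_loss(torch.randn(8, 3), torch.randn(8, 5), torch.randn(8, 5),
#                 torch.randint(0, 3, (8,)))
```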
99. Generating Synthetic CT Images Using Diffusion Models / Generering av sCT bilder med en generativ diffusionsmodell. Saleh, Salih, January 2023.
Magnetic resonance (MR) images together with computed tomography (CT) images are used in many medical practices, such as radiation therapy. To capture these images, patients have to undergo two separate scans: one for the MR image, which involves using strong magnetic fields, and one for the CT image, which involves using radiation (x-rays). Another approach is to generate synthetic CT (sCT) images from MR images, so that patients only have to undergo one scan (the MR scan), making the whole process easier and more efficient. One way of generating sCT images is by using generative diffusion models, which are a relatively new class of generative models. To this end, this project aims to investigate whether generative diffusion models are capable of generating viable and realistic sCT images from MR images. Firstly, a denoising diffusion probabilistic model (DDPM) with a U-Net backbone neural network is implemented and tested on the MNIST dataset; it is then implemented on a pelvis dataset consisting of 41,600 pairs of images, where each pair is made up of an MR image and its corresponding CT image. The MR images were added at each sampling step in order to condition the sampled sCT images on the MR images. After successful implementation and training, the developed diffusion model achieved a Fréchet inception distance (FID) score of 14.45 and performed as well as the current state-of-the-art model, without any major optimisation of the hyperparameters or of the model itself. The results are very promising and demonstrate the capabilities of this new generative modelling framework.
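One reverse (sampling) step of the MR-conditioned DDPM described above can be sketched as follows; the noise-schedule handling and the U-Net call signature are assumptions rather than the thesis implementation.

```python
# Sketch of one conditioned DDPM reverse step: the MR image is concatenated with the
# current noisy sCT estimate before the noise prediction, as the abstract describes.
import torch

def ddpm_sample_step(unet, x_t, mr_image, t, betas):
    """Draw x_{t-1} from x_t at integer timestep t, conditioned on the MR image."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    beta_t, alpha_t, alpha_bar_t = betas[t], alphas[t], alpha_bars[t]

    # Predict the noise from the noisy sCT concatenated with the conditioning MR image.
    eps = unet(torch.cat([x_t, mr_image], dim=1), t)

    # Standard DDPM posterior mean, with sigma_t^2 = beta_t as the sampling variance.
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    if t == 0:
        return mean
    return mean + torch.sqrt(beta_t) * torch.randn_like(x_t)
```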
100. Coil Sensitivity Estimation and Intensity Normalisation for Magnetic Resonance Imaging / Spolkänslighetsbestämning och intensitetsnormalisering för magnetresonanstomografi. Herterich, Rebecka and Sumarokova, Anna, January 2019.
The quest for improved efficiency in magnetic resonance imaging has motivated the development of strategies like parallel imaging, where arrays of multiple receiver coils are operated simultaneously. The objective of this project was to estimate the phased-array coil sensitivity profiles of magnetic resonance images of the human body. These sensitivity maps can then be used to perform an intensity inhomogeneity correction of the images. Through investigative work in Matlab, a script was developed that uses data embedded in the raw data from a magnetic resonance scan to generate coil sensitivities for each voxel of the volume of interest and recalculates them into two-dimensional sensitivity maps of the corresponding diagnostic images. The resulting mapped sensitivity profiles can be used in Sensitivity Encoding (SENSE), where a more exact solution can be obtained using the carefully estimated sensitivity maps of the images.
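A common simplified way to approximate receive-coil sensitivity maps from the individual coil images, in the spirit of the abstract, is sketched below in Python; the smoothing width is an assumption, and the thesis's Matlab script is not reproduced here.

```python
# Sketch: coil sensitivity maps as smoothed coil images divided by the sum-of-squares image.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_sensitivity_maps(coil_images, smooth_sigma=5.0, eps=1e-8):
    """coil_images: complex array of shape (n_coils, ny, nx)."""
    sos = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))   # sum-of-squares combination
    maps = coil_images / (sos + eps)                          # raw per-coil sensitivity estimate
    # Sensitivities vary slowly in space, so smooth the real and imaginary parts separately.
    maps = gaussian_filter(maps.real, sigma=(0, smooth_sigma, smooth_sigma)) \
         + 1j * gaussian_filter(maps.imag, sigma=(0, smooth_sigma, smooth_sigma))
    return maps, sos

# A common use of the maps: intensity-inhomogeneity correction of the combined image.
# maps, sos = estimate_sensitivity_maps(coil_images)
# corrected = sos / (np.sqrt(np.sum(np.abs(maps) ** 2, axis=0)) + 1e-8)
```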