111

Neural Networks for Material Decomposition in Photon-Counting Spectral CT

Charrier, Hugo January 2022 (has links)
Photon-counting computed tomography scanners constitute a major improvement in the field of computed tomography, opening various prospects and enabling the decomposition of CT images into different materials. The material decomposition algorithm, which maps photon counts to material pathlengths, relies on a forward model with Poisson statistics. This model, however, suffers from noise and residual bias due to its sensitivity to calibration errors and to single-pixel response characteristics that are not captured by the material decomposition model. This study proposes a pixel-specific, projection-based correction of the residual bias in the material decomposition estimates using artificial neural networks trained for each pixel of the detector. The neural network models were trained under supervised learning using material decomposition calibration data: scans of PE and PVC slabs of various thicknesses acquired for the calibration of the model. The method aims to map the singularities of the pixels' responses and correct them in the projection domain. The trained models were evaluated on a set of evaluation slabs and on scans of a water phantom in order to assess homogeneity and bias-correction performance. The implemented solution exhibited promising results for the correction of residual bias in single pixels without impairing noise levels. An array of trained neural networks demonstrated its ability to correct calibration and evaluation slab data while preserving pixel-to-pixel differences. The application of the correction to the water phantom, however, gave more nuanced results, which call for further investigation of the identified issues and corresponding improvements to the model.
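The abstract does not include an implementation; as a rough sketch of the pixel-specific correction idea it describes, the Python snippet below fits a small fully connected network per detector pixel that maps biased (PE, PVC) pathlength estimates to the known calibration slab thicknesses. The network size, training loop, and tensor shapes are assumptions for illustration, not the thesis' actual model.

```python
import torch
import torch.nn as nn

def train_pixel_corrector(estimates, truths, epochs=200, lr=1e-3):
    """Fit a small per-pixel MLP mapping biased material-decomposition
    estimates (PE, PVC pathlengths) to the known calibration thicknesses.

    estimates, truths: float tensors of shape (n_calibration_scans, 2).
    """
    model = nn.Sequential(
        nn.Linear(2, 16), nn.ReLU(),
        nn.Linear(16, 16), nn.ReLU(),
        nn.Linear(16, 2),            # corrected (PE, PVC) pathlengths
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(estimates), truths)
        loss.backward()
        opt.step()
    return model

# Hypothetical usage: one corrector per detector pixel, applied to the
# material-decomposition estimates in the projection domain before
# reconstruction.
```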
112

4D-Flow MRI Reconstruction using Locally Low Rank Regularized Compressed Sensing : Implementation and Evaluation of initial conditions

Vigren Näslund, Viktor January 2024 (has links)
4D-Flow MRI is a non-invasive imaging technique that measures temporally resolved 3D images, capturing the flow velocity in each pixel. The quality of the images and the temporal resolution largely depend on two factors: the acquisition protocol used by the MRI scanner and the reconstruction method used to go from signal to images. In MRI, the measured signal samples are Fourier coefficients of the sought-after image, and reconstruction is an inverse problem that classically requires sampling at at least the Nyquist rate. Compressed sensing is a framework that allows reconstruction from fewer samples than the Nyquist rate by incorporating other known information about the images. In this thesis, we evaluate the efficiency of compressed sensing for 4D-Flow MRI reconstruction of undersampled signals on synthetic data and compare it to classical reconstruction methods (gridding and view-shared gridding). We specifically focus on Locally Low Rank (LLR) regularization. The importance of the initial guess, and whether it can be beneficial to estimate the temporal images by solving for the difference from the mean, is investigated. After calculating velocity profiles in vessels, we compare the reconstructed velocity profiles to the true velocity profiles, looking at relative errors and pixel-wise maximum errors as well as visual inspection. We introduce a velocity error metric that aims to capture how accurately the reconstructed velocity profile matches our synthetic truth. We show that for good choices of regularization strength, the relative, maximum and velocity errors are significantly lower for the compressed sensing LLR method than for the classical methods. We conclude that compressed sensing with LLR regularization can significantly improve the reconstruction quality of 4D-Flow MRI data.
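As an illustration of the Locally Low Rank regularization the abstract focuses on, the sketch below applies one proximal step of the LLR penalty: singular-value soft-thresholding of local space-time (Casorati) matrices. The block size, array layout, and thresholding rule are assumptions; a full reconstruction would alternate this step with a data-consistency step, which is not shown.

```python
import numpy as np

def llr_prox(x, lam, block=8):
    """Soft-threshold the singular values of each local Casorati matrix,
    i.e. one proximal step of Locally Low Rank (LLR) regularization.

    x: complex array of shape (nt, ny, nx), a time series of 2D frames.
    lam: singular-value threshold (regularization strength).
    """
    nt, ny, nx = x.shape
    out = np.empty_like(x)
    for y0 in range(0, ny, block):
        for x0 in range(0, nx, block):
            patch = x[:, y0:y0 + block, x0:x0 + block]
            casorati = patch.reshape(nt, -1)        # time x voxels-in-block
            u, s, vh = np.linalg.svd(casorati, full_matrices=False)
            s = np.maximum(s - lam, 0.0)            # singular-value soft thresholding
            out[:, y0:y0 + block, x0:x0 + block] = ((u * s) @ vh).reshape(patch.shape)
    return out
```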
113

Deep Learning-Based Pipeline for Acanthamoeba Keratitis Cyst Detection : Image Processing and Classification Utilizing In Vivo Confocal Microscopy Images

Ji, Meichen, Song, Yan January 2024 (has links)
The aim of this work is to enhance the detection and classification pipelines of an artificial intelligence (AI)-based decision support system (DSS) for diagnosing acanthamoeba keratitis (AK), a vision-threatening disease. The images used are acquired with in vivo confocal microscopy (IVCM), a complementary tool for clinical assessment of the cornea that requires manual human analysis to support diagnosis. The DSS facilitates automated image analysis and currently aids in diagnosing AK; however, the accuracy of AK detection needs to improve before it can be used in clinical practice. To address this challenge, we apply image brightness processing through multiscale retinex (MSR) and develop a custom-built image processing pipeline combining a deep learning model with rule-based strategies. The proposed pipeline replaces two deep learning models in the original DSS, resulting in an overall accuracy improvement of 10.23% on average. Additionally, our improved pipeline not only enhances the original system's ability to aid AK diagnosis, but also provides a versatile set of functions that can be used to create pipelines for detecting similar keratitis diseases.
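Multiscale retinex (MSR) is a standard brightness-normalization technique; the sketch below shows the usual formulation (log image minus log of a Gaussian-blurred illumination estimate, averaged over several scales). The scale choices and output rescaling are assumptions and need not match the parameters used in the thesis pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
    """Multiscale retinex brightness normalization of a 2D grayscale image."""
    img = img.astype(np.float64) + eps            # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        illumination = gaussian_filter(img, sigma) + eps
        msr += np.log(img) - np.log(illumination)  # remove smooth illumination
    msr /= len(sigmas)
    # rescale to [0, 1] for display or as input to downstream models
    return (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
```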
114

Improving Semi-Automated Segmentation Using Self-Supervised Learning

Blomlöf, Alexander January 2024 (has links)
DeepPaint is a semi-automated segmentation tool that utilises a U-net architecture to perform binary segmentation. To maximise the model's performance and minimise user time, it is advisable to apply Transfer Learning (TL) and reuse a model trained on a similar segmentation task. However, due to the sensitivity of medical data and the unique properties of certain segmentation tasks, TL is not feasible for some applications. In such circumstances, Self-Supervised Learning (SSL) emerges as the most viable option to minimise the time a user spends in DeepPaint. Various pretext tasks, exploring both corruption segmentation and corruption restoration using superpixels and square patches, were designed and evaluated. With a limited number of iterations in both the pretext and downstream tasks, significant improvements across four different datasets were observed. The results reveal that SSL models, particularly those pre-trained on corruption segmentation tasks where square patches were corrupted, consistently outperformed models without pre-training with regard to a cumulative Dice Similarity Coefficient (DSC). To examine whether a model could learn relevant features from a pretext task, Centred Kernel Alignment (CKA) was used to measure the similarity of feature spaces across a model's layers before and after fine-tuning on the downstream task. Surprisingly, no significant positive correlation between downstream DSC and CKA was observed in the encoder, likely due to the limited fine-tuning allowed. Furthermore, it was examined whether pre-training on the entire dataset, as opposed to only the training subset, yielded different downstream results. As expected, a significantly higher DSC in the downstream task is more likely if the model had access to all data during the pretext task. The differences in downstream segmentation performance between models that accessed different data subsets during pre-training varied across datasets.
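Centred Kernel Alignment, used in the abstract to compare feature spaces before and after fine-tuning, has a simple closed form in its linear variant; the sketch below computes it for two activation matrices. This is the generic linear-CKA formula, not code from the thesis.

```python
import numpy as np

def linear_cka(x, y):
    """Linear Centred Kernel Alignment between two activation matrices of
    shape (n_samples, n_features_x) and (n_samples, n_features_y)."""
    x = x - x.mean(axis=0, keepdims=True)   # centre features
    y = y - y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, ord="fro")
    norm_y = np.linalg.norm(y.T @ y, ord="fro")
    return cross / (norm_x * norm_y)        # 1 = identical subspaces, 0 = orthogonal
```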
115

Digital image processing via combination of low-level and high-level approaches

Wang, Dong January 2011 (has links)
With the growth of computer power, digital image processing plays an increasingly important role in the modern world, in fields including industry, medicine, communications and spaceflight technology. There is no single definition of how to divide up digital image processing, but it normally comprises three main steps: low-level, mid-level and high-level processing. Low-level processing involves primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. Mid-level processing involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. Finally, high-level processing involves "making sense" of an ensemble of recognised objects, as in image analysis. Based on this division, the thesis is organised in three parts: colour edge and face detection; hand motion detection; and hand gesture detection and medical image processing. In colour edge detection, two new images, a G-image and an R-image, are built through a colour space transform; the two edges extracted from the G-image and the R-image respectively are then combined to obtain the final edge. In face detection, a skin model is built first, and the boundary condition of this skin model is extracted to cover almost all of the skin pixels. After skin detection, knowledge about size, size ratio, and the locations of the ears and mouth is used to recognise the face within the skin regions. In hand motion detection, the frame difference is compared with an automatically chosen threshold in order to identify the moving object. For some special situations, with slow or smooth object motion, background modelling and frame differencing are combined in order to improve performance. In hand gesture recognition, three features of every test image are input to a Gaussian Mixture Model (GMM), and the Expectation Maximization (EM) algorithm is used to compare the GMM from the test images with the GMM from the training images in order to classify the results. In medical image processing (mammograms), an Artificial Neural Network (ANN) and a clustering rule are applied for feature selection. Two classifiers, an ANN and a Support Vector Machine (SVM), have been applied to classify the results; in this process, balanced-learning theory and optimized decision making have been developed and applied to improve the performance.
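For the hand-motion detection step, the abstract describes comparing the frame difference against an automatically chosen threshold; the sketch below uses Otsu's method as one plausible automatic choice. The use of Otsu, and the OpenCV-based formulation, are assumptions for illustration only.

```python
import cv2

def moving_object_mask(prev_frame, frame):
    """Frame differencing with an automatically chosen (Otsu) threshold."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    # Otsu picks the threshold automatically from the difference histogram
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```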
116

Fast Methods for Vascular Segmentation Based on Approximate Skeleton Detection

Lidayová, Kristína January 2017 (has links)
Modern medical imaging techniques have revolutionized health care over the last decades, providing clinicians with high-resolution 3D images of the inside of the patient's body without the need for invasive procedures. Detailed images of the vascular anatomy can be captured by angiography, providing a valuable source of information when deciding whether a vascular intervention is needed, for planning treatment, and for analyzing the success of therapy. However, the increasing level of detail in the images, together with a wide availability of imaging devices, leads to an urgent need for automated techniques for image segmentation and analysis in order to assist the clinicians in performing a fast and accurate examination. To reduce the need for user interaction and increase the speed of vascular segmentation, we propose a fast and fully automatic vascular skeleton extraction algorithm. This algorithm first analyzes the volume's intensity histogram in order to automatically adapt its internal parameters to each patient, and then produces an approximate skeleton of the patient's vasculature. The skeleton can serve as a seed region for subsequent surface extraction algorithms. Further improvements of the skeleton extraction algorithm include an extension to detect the skeleton of diseased arteries and a convolutional neural network classifier that reduces false positive detections of vascular cross-sections. In addition to the complete skeleton extraction algorithm, the thesis presents a segmentation algorithm based on modified onion-kernel region growing. It initiates the growing from the previously extracted skeleton and provides a rapid binary segmentation of tubular structures. To provide the possibility of extracting precise measurements from this segmentation, we introduce a method for obtaining a segmentation with subpixel precision from the binary segmentation and the original image. This method is especially suited for thin and elongated structures, such as vessels, since it does not shrink long protrusions. The method supports both 2D and 3D image data. The methods were validated on real computed tomography datasets and are primarily intended for applications in vascular segmentation; however, they are robust enough to work with other anatomical tree structures after adequate parameter adjustment, as demonstrated on an airway-tree segmentation.
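As a simplified illustration of skeleton-seeded region growing (the onion-kernel variant in the thesis grows layer by layer and is more elaborate), the sketch below performs breadth-first growing from skeleton voxels whose neighbours fall inside an intensity interval. The connectivity and the intensity test are assumptions, not the thesis' actual criteria.

```python
import numpy as np
from collections import deque

def grow_from_skeleton(volume, skeleton_mask, low, high):
    """Breadth-first region growing seeded by an approximate vascular skeleton.

    volume: 3D intensity array; skeleton_mask: boolean seed mask of the same shape;
    [low, high]: intensity interval accepted as vessel.
    """
    segmented = skeleton_mask.astype(bool).copy()
    queue = deque(map(tuple, np.argwhere(segmented)))
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not segmented[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                segmented[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return segmented
```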
117

Computer-assisted volumetric tumour assessment for the evaluation of patient response in malignant pleural mesothelioma

Chen, Mitchell January 2011 (has links)
Malignant pleural mesothelioma (MPM) is a form of aggressive tumour that is almost always associated with prior exposure to asbestos. Currently responsible for over 47,000 deaths worldwide each year and rising, it poses a serious threat to global public health. Many clinical studies of MPM, including its diagnosis, prognostic planning, and the evaluation of a treatment, necessitate the accurate quantification of tumours based on medical image scans, primarily computed tomography (CT). Currently, clinical best practice requires application of the MPM-adapted Response Evaluation Criteria in Solid Tumours (MPM-RECIST) scheme, which provides a uni-dimensional measure of the tumour's size. However, the low CT contrast between the tumour and surrounding tissues, the extensive elongated growth pattern characteristic of MPM and, as a consequence, the pronounced partial volume effect collectively contribute to the significant intra- and inter-observer variations in MPM-RECIST values seen in clinical practice, which in turn greatly affect clinical judgement and outcome. In this thesis, we present a novel computer-assisted approach to evaluate MPM patient response to treatments, based on the volumetric segmentation of tumours (VTA) on CT. We have developed a 3D segmentation routine based on the Random Walk (RW) segmentation framework by L. Grady, which is notable for its good performance in handling weak tissue boundaries and its ability to segment arbitrary shapes given appropriately placed initialisation points. Results also show its benefit with regard to computation time, compared to other candidate methods such as level sets. We have also added a boundary-enhancement regulariser, inspired by anisotropic diffusion, to RW to improve its performance on smooth MPM boundaries. To reduce the required level of user supervision, we developed a registration-assisted segmentation option. Finally, we achieved effective and highly manoeuvrable partial volume correction by applying a reverse diffusion-based interpolation. To assess its clinical utility, we applied our method to a set of 48 CT studies from a group of 15 MPM patients and compared the findings to the MPM-RECIST observations made by a clinical specialist. Correlations confirm the utility of our algorithm for assessing MPM treatment response. Furthermore, our 3D algorithm found applications in monitoring patient quality of life and in palliative care planning. For example, segmented aerated lungs demonstrated very good correlation with the VTA-derived patient responses, suggesting their use in assessing the pulmonary function impairment caused by the disease. Likewise, segmented fluids highlight sites of pleural effusion and may potentially assist in intra-pleural fluid drainage planning. Throughout this thesis, to meet the demands of probabilistic analyses of data, we have used the Non-Parametric Windows (NPW) probability density estimator. NPW outperforms the histogram in terms of smoothness and the kernel density estimator in terms of parameter setting, and preserves signal properties such as the order of occurrence and the band-limitedness of the sample, which are important for tissue reconstruction from discrete image data. We have also worked on extending this estimator to analysing vector-valued quantities, which are essential for multi-feature studies involving values such as image colour, texture, heterogeneity and entropy.
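The Random Walk framework by Grady that the thesis builds on is available off the shelf; the sketch below shows seed-based random-walker segmentation using scikit-image as a stand-in, without the thesis' boundary-enhancement regulariser or registration assistance. The seed handling and the beta value are assumptions for illustration.

```python
import numpy as np
from skimage.segmentation import random_walker

def segment_tumour(volume, tumour_seeds, background_seeds, beta=130):
    """Random Walk segmentation of a CT volume from user-placed seeds.

    volume: 3D CT array; tumour_seeds / background_seeds: boolean masks of the
    same shape marking a few voxels inside the tumour and the background.
    """
    labels = np.zeros(volume.shape, dtype=np.int32)
    labels[tumour_seeds] = 1        # foreground (tumour) seeds
    labels[background_seeds] = 2    # background seeds
    # beta controls how strongly intensity differences penalise the walk
    result = random_walker(volume, labels, beta=beta)
    return result == 1              # boolean tumour mask
```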
118

Generative Adversarial Networks to enhance decision support in digital pathology

De Biase, Alessia January 2019 (has links)
Histopathological evaluation and Gleason grading of Hematoxylin and Eosin (H&E) stained specimens is the clinical standard for grading prostate cancer. Recently, deep learning models have been trained to assist pathologists in detecting prostate cancer. However, these predictions could be improved further with respect to variations in morphology and staining, and to differences across scanners. One approach to tackling such problems is to employ conditional GANs for style transfer. A total of 52 prostatectomies from 48 patients were scanned with two different scanners. The data were split into 40 images for training and 12 images for testing, and all images were divided into overlapping 256x256 patches. A segmentation model was trained using images from scanner A, and the model was tested on images from both scanner A and scanner B. Next, GANs were trained to perform style transfer from scanner A to scanner B. The training was performed using unpaired training images and different types of unsupervised image-to-image translation GANs (CycleGAN and UNIT). Besides the common CycleGAN architecture, a modified version was also tested, adding a Kullback-Leibler (KL) divergence term to the loss function. The segmentation model was then tested on the augmented images from scanner B. The models were evaluated on 2,000 randomly selected 256x256-pixel patches from 10 prostatectomies, and the resulting predictions were evaluated both qualitatively and quantitatively. All proposed methods improved the AUC; in the best case the improvement was 16%. However, only CycleGAN trained on a large dataset proved capable of improving the segmentation tool's performance while preserving tissue morphology, obtaining higher results on all evaluation measurements. All the models were analyzed and, finally, the significance of the difference between the segmentation model's performance on style-transferred images and on untransferred images was assessed using statistical tests.
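As a minimal sketch of the CycleGAN objective used for scanner-A-to-scanner-B style transfer, the snippet below combines a least-squares adversarial term with an L1 cycle-consistency term for one generator direction. The generator/discriminator interfaces and the loss weighting are assumptions, and the thesis' KL-augmented variant is not reproduced.

```python
import torch
import torch.nn.functional as F

def cyclegan_generator_loss(real_a, gen_ab, gen_ba, disc_b, lambda_cycle=10.0):
    """Generator-side CycleGAN loss for one direction (scanner A -> scanner B)."""
    fake_b = gen_ab(real_a)                 # translate an A patch to B style
    reconstructed_a = gen_ba(fake_b)        # translate back to A
    disc_out = disc_b(fake_b)
    # least-squares adversarial loss against the scanner-B discriminator
    adv = F.mse_loss(disc_out, torch.ones_like(disc_out))
    # cycle consistency: A -> B -> A should return the original patch
    cycle = F.l1_loss(reconstructed_a, real_a)
    return adv + lambda_cycle * cycle
```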
119

Super-Resolution for Fast Multi-Contrast Magnetic Resonance Imaging

Nilsson, Erik January 2019 (has links)
There are many clinical situations where magnetic resonance imaging (MRI) is preferable over other imaging modalities, but its major disadvantage is the relatively long scan time. Due to limited resources, this means that not all patients can be offered an MRI scan, even though it could provide crucial information; it can even be deemed unsafe for a critically ill patient to undergo the examination. In MRI there is a trade-off between resolution, signal-to-noise ratio (SNR) and the time spent gathering data. When time is of utmost importance, we seek other methods to increase the resolution while preserving SNR and imaging time. In this work, I have studied one of the most promising methods for this task: constructing super-resolution algorithms that learn the mapping from a low-resolution image to a high-resolution image using convolutional neural networks. More specifically, I constructed networks capable of transferring high-frequency (HF) content, responsible for details in an image, from one kind of image to another. In this context, contrast or weighting describes what kind of image we are looking at. This work only explores the possibility of transferring HF content from T1-weighted images, which can be obtained quite quickly, to T2-weighted images, which would take much longer to acquire at similar quality. By doing so, the hope is to contribute to increased efficacy of MRI and to reduce the problems associated with long scan times. At first, a relatively simple network was implemented to show that transferring HF content between contrasts is possible, as a proof of concept. Next, a much more complex network was proposed, which increased the resolution of MR images better than the commonly used bicubic interpolation method; this conclusion is drawn from a test in which 12 participants were asked to rate the two methods (p=0.0016). Both visual comparisons and quality measures, such as PSNR and SSIM, indicate that the proposed network outperforms a similar network that only utilizes images of one contrast. This suggests that HF content was successfully transferred between images of different contrasts, which improves the reconstruction process. Thus, it could be argued that the proposed multi-contrast model could decrease scan time even further than its single-contrast counterpart would. Hence, this way of performing multi-contrast super-resolution has the potential to increase the efficacy of MRI.
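The PSNR and SSIM measures quoted in the abstract can be computed with scikit-image; the sketch below shows one way to evaluate a super-resolved T2-weighted image against a full-resolution reference. The function wrapper and argument choices here are illustrative, not the thesis' evaluation code.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_super_resolution(reference_t2, upscaled_t2):
    """Return (PSNR, SSIM) between a reference image and a super-resolved image,
    both given as 2D float arrays on the same intensity scale."""
    data_range = reference_t2.max() - reference_t2.min()
    psnr = peak_signal_noise_ratio(reference_t2, upscaled_t2, data_range=data_range)
    ssim = structural_similarity(reference_t2, upscaled_t2, data_range=data_range)
    return psnr, ssim
```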
120

Dual Energy CT as a Foundation for Proton Therapy Treatment Planning - A pilot study

Näsmark, Torbjörn January 2019 (has links)
The treatment plan for radiation therapy with protons is based on images from a computed tomography (CT) scanner. This is problematic since the photons in the X-ray beam from the CT scanner and the protons are affected differently by the tissue in the patient, which introduces an uncertainty in the track length of the protons. The hypothesis of this study is that a new generation of CT scanner (DECT), with the capacity to scan the patient simultaneously with two photon spectra of different mean energy, will improve tissue characterisation and in turn reduce the uncertainty in the track length of the protons. In this study, the accuracy and precision of a DECT-based method from the literature are compared to the conventional calibration method used today at the university clinics in Sweden to relate the attenuation of the photon beam to the slowing down of the protons. The methods are tested on CT images of a phantom, a plastic body containing tissue-equivalent plastic inserts of known elemental composition. The results turned out to be inconclusive, as there were large uncertainties in the measurements. The method has potential, as has been shown in the literature, but many questions need to be answered before it is ready to be implemented in the clinic. / A proton travelling through the human body deposits only a small part of its energy along the way before suddenly depositing everything at the end of its track. How long that track is depends on the proton's initial energy and on the atomic composition of the tissue it passes through. If the composition is known, the track length can be set by adjusting the initial energy. This property makes the proton very attractive for radiation therapy, since it offers the possibility of treating with high precision while sparing healthy tissue from unnecessary dose. Proton radiotherapy is today planned using images from a computed tomography (CT) scanner. One problem with this is that the X-rays from the CT scanner are affected by the tissue differently than the protons are, which introduces an uncertainty in the protons' track length. The hypothesis of this study is that a new generation of CT scanner (DECT), able to scan the patient simultaneously with two photon spectra of different mean energy, can better determine the atomic composition of the tissue and thereby reduce the uncertainty in the protons' track length. The accuracy and precision of a DECT-based method from the literature are compared with the SECT-based calibration method used today at the university hospitals in Sweden to relate the attenuation of the photon beam in tissue to the slowing down of the protons. The methods are tested on CT images of a phantom, a plastic body containing cylinders of tissue-equivalent plastic of known atomic composition. The result of this study is not strong enough to prove its hypothesis. The collected image material contains high noise levels compared with those reported in the literature; the noise levels are so high that most of the results cannot be considered statistically significant. It is also difficult to make a direct performance comparison with existing theory for tissue characterisation, since the image material from the compared CT scanners is of different types. The results published in the literature show that the DECT-based method has potential, but this study makes clear that there are still questions to be answered before the method is ready to be implemented clinically.
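The link between tissue characterisation and proton range runs through the stopping power ratio (SPR) relative to water; the sketch below evaluates the truncated Bethe-formula approximation commonly used in the DECT literature, given a relative electron density and a mean excitation energy. How those two quantities are derived from the low- and high-energy CT numbers is method specific and not shown; the proton energy and constants are illustrative assumptions.

```python
import numpy as np

ELECTRON_REST_ENERGY_MEV = 0.511
PROTON_REST_ENERGY_MEV = 938.272
WATER_MEAN_EXCITATION_MEV = 75e-6    # ~75 eV for water, expressed in MeV

def stopping_power_ratio(rho_e_rel, i_medium_ev, proton_energy_mev=200.0):
    """Stopping power ratio relative to water from the truncated Bethe formula.

    rho_e_rel: electron density relative to water (e.g. from a DECT method).
    i_medium_ev: mean excitation energy of the medium in eV.
    """
    # relativistic beta^2 for a proton with the given kinetic energy
    gamma = 1.0 + proton_energy_mev / PROTON_REST_ENERGY_MEV
    beta2 = 1.0 - 1.0 / gamma ** 2
    i_medium = i_medium_ev * 1e-6    # eV -> MeV

    def bethe_term(i):
        return np.log(2.0 * ELECTRON_REST_ENERGY_MEV * beta2 / (i * (1.0 - beta2))) - beta2

    return rho_e_rel * bethe_term(i_medium) / bethe_term(WATER_MEAN_EXCITATION_MEV)
```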
