11

An Analysis of Context Channel Integration Strategies for Deep Learning-Based Medical Image Segmentation / Strategier för kontextkanalintegrering inom djupinlärningsbaserad medicinsk bildsegmentering

Stoor, Joakim January 2020 (has links)
This master's thesis investigates different approaches for integrating prior information into a neural network for segmentation of medical images. In the study, liver and liver tumor segmentation is performed in a cascading fashion. Context channels in the form of previous segmentations are integrated into a segmentation network at multiple positions and network depths using different integration strategies. Comparisons are made with the traditional integration approach, where the input image is concatenated with the context channels at the network's input layer. The aim is to analyze whether context information is lost in the upper network layers when the traditional approach is used, and whether better results can be achieved if prior information is propagated to deeper layers. The intention is to support further improvements in interactive image segmentation, where extra input channels are common. The results, however, are inconclusive. The methods cannot be differentiated from each other based on the quantitative results, and all of them show the ability to generalize to an unseen object class after training. Compared with the other evaluated methods, there is no indication that the traditional concatenation approach underachieves, and it cannot be concluded that meaningful context information is lost in the deeper network layers.
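As a rough illustration of the two integration strategies compared in this thesis, the following minimal PyTorch-style sketch contrasts concatenating a context channel at the input layer with injecting it at a deeper encoder stage. The module, channel sizes, and layer layout are illustrative assumptions, not the thesis architecture.

```python
# Minimal sketch contrasting two context-channel integration strategies:
# concatenation at the input layer vs. injection at a deeper layer.
# Channel sizes and layers are placeholders, not the thesis network.
import torch
import torch.nn as nn

class ContextNetStub(nn.Module):
    def __init__(self, integrate_at_input=True):
        super().__init__()
        self.integrate_at_input = integrate_at_input
        in_ch = 2 if integrate_at_input else 1      # image (+ prior segmentation)
        self.enc1 = nn.Conv2d(in_ch, 16, 3, padding=1)
        mid_ch = 16 if integrate_at_input else 17   # room for the deep context channel
        self.enc2 = nn.Conv2d(mid_ch, 32, 3, padding=1)
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, image, context):
        # context: a previous (e.g. liver) segmentation used as prior information
        if self.integrate_at_input:
            x = torch.relu(self.enc1(torch.cat([image, context], dim=1)))
        else:
            x = torch.relu(self.enc1(image))
            ctx = nn.functional.interpolate(context, size=x.shape[2:])
            x = torch.cat([x, ctx], dim=1)          # deep integration of the context
        x = torch.relu(self.enc2(x))
        return torch.sigmoid(self.head(x))

net = ContextNetStub(integrate_at_input=False)
out = net(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```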
12

Pattern recognition systems design on parallel GPU architectures for breast lesions characterisation employing multimodality images

Sidiropoulos, Konstantinos January 2014 (has links)
The aim of this research was to address the computational complexity of designing multimodality Computer-Aided Diagnosis (CAD) systems for characterising breast lesions, by harnessing the general-purpose computational potential of consumer-level Graphics Processing Units (GPUs) through parallel programming methods. The complexity of designing such systems lies in the increased dimensionality of the problem, due to the multiple imaging modalities involved, in the inherent complexity of the optimal design methods needed to secure high precision, and in assessing the performance of the design prior to deployment in a clinical environment using unbiased system evaluation methods. For the purposes of this research, a Pattern Recognition (PR) system was designed to provide the highest possible precision by programming in parallel the multiprocessors of NVIDIA GPU cards (GeForce 8800GT or 580GTX), using the CUDA programming framework and C++. The PR-system was built around the Probabilistic Neural Network classifier, and its performance was evaluated by the re-substitution method, to estimate the system's highest accuracy, and by the external cross-validation method, to assess the PR-system's unbiased accuracy on new data "unseen" by the system. The data comprised images of patients with histologically verified (benign or malignant) breast lesions who underwent both ultrasound (US) and digital mammography (DM). Lesions were outlined on the images by an experienced radiologist, and textural features were calculated. Regarding breast lesion classification, the accuracies for discriminating malignant from benign lesions were 85.5% using US features alone, 82.3% employing DM features alone, and 93.5% combining US and DM features. The mean accuracy on new "unseen" data for the combined US and DM features was 81%. These classification accuracies were about 10% higher than those achieved on a single CPU using sequential programming methods, and the design process was 150-fold faster. In addition, benign lesions were found to be smoother, more homogeneous, and to contain larger structures. The PR-system design was also adapted to other medical problems, as a proof of its generalisability. These included classification of rare brain tumours (achieving 78.6% overall accuracy (OA) and 73.8% estimated generalisation accuracy (GA), with a 267-fold acceleration of system design), discrimination of patients with micro-ischemic and multiple sclerosis lesions (90.2% OA and 80% GA, with 32-fold design acceleration), classification of normal and pathological knee cartilages (93.2% OA and 89% GA, with 257-fold design acceleration), and separation of low- from high-grade laryngeal cancer cases (93.2% OA and 89% GA, with 130-fold design acceleration). The proposed PR-system improves breast-lesion discrimination accuracy, may be redesigned on site when new verified data are added to its repository, and may serve as a second-opinion tool in a clinical environment.
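For orientation, the classification rule of a Probabilistic Neural Network, around which the PR-system is built, can be sketched in a few lines of NumPy. The CUDA/C++ GPU parallelisation that is the core of the thesis is not reproduced here; the smoothing parameter and the toy feature data are illustrative assumptions.

```python
# Minimal NumPy sketch of a Probabilistic Neural Network (PNN) classifier:
# each test vector is assigned to the class with the largest Parzen-window
# (Gaussian kernel) density estimate. Sigma and the toy data are placeholders.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    scores = np.zeros((X_test.shape[0], classes.size))
    for k, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # squared Euclidean distances between every test and training vector
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, k] = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return classes[np.argmax(scores, axis=1)]

# Toy usage with random "textural features" (0 = benign, 1 = malignant)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (rng.random(40) > 0.5).astype(int)
print(pnn_predict(X, y, rng.normal(size=(3, 5))))
```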
13

Methods and models for 2D and 3D image analysis in microscopy, in particular for the study of muscle cells / Metoder och modeller för två- och tredimensionell bildanalys inom mikroskopi, speciellt med inrikting mot muskelceller

Karlsson Edlund, Patrick January 2008 (has links)
Many research questions in biology lead to numerous microscope images that need to be evaluated. Here, digital image cytometry, i.e., quantitative, automated or semi-automated analysis of the images, is an important and rapidly growing discipline. This thesis presents contributions to that field. The work has been carried out in close cooperation with biomedical research partners, successfully solving real-world problems.

The world is 3D, and modern imaging methods such as confocal microscopy provide 3D images. Hence, a large part of the work has dealt with the development of new and improved methods for quantitative analysis of 3D images, in particular of fluorescently labeled skeletal muscle cells.

A geometrical model for robust segmentation of skeletal muscle fibers was developed. Images of the multinucleated muscle cells were pre-processed using a novel spatially modulated transform, producing images with reduced complexity and facilitating easy nuclei segmentation. Fibers from several mammalian species were modeled, and features were computed based on cell nuclei positions. Features such as myonuclear domain size and nearest-neighbor distance were shown to correlate with body mass and femur length. Human muscle fibers from young and old males and females were related to fiber type and the extracted features, and myonuclear domain size variations were shown to increase with age irrespective of fiber type and gender.

A segmentation method for severely clustered point-like signals was developed and applied to images of fluorescent probes, quantifying the amount and location of mitochondrial DNA within cells. A synthetic cell model was developed to provide a controllable gold standard for performance evaluation of both expert manual and fully automated segmentations. The proposed method matches the correctness achieved by manual quantification.

An interactive segmentation procedure was successfully applied to treated testicle sections of boar, showing how a common industrial plastic softener significantly affects testosterone concentrations.
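Features of the kind mentioned above, derived from cell nuclei positions, can be sketched with SciPy as below: nearest-neighbour distance and a crude "domain size" proxy per nucleus. The coordinates, fibre volume, and sampling scheme are synthetic assumptions; the thesis derives the positions from segmented 3D fluorescence images and uses its own domain definition.

```python
# Sketch of nuclei-position features: nearest-neighbour distance and an
# approximate per-nucleus domain size. Coordinates and volume are synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
nuclei = rng.uniform(0, 100, size=(50, 3))       # (x, y, z) positions, e.g. in micrometres

# Nearest-neighbour distance: query the 2 closest points and discard the self-match
tree = cKDTree(nuclei)
dist, _ = tree.query(nuclei, k=2)
nn_distance = dist[:, 1]

# Crude domain-size proxy: fraction of the fibre volume closest to each nucleus,
# estimated by nearest-nucleus assignment of random sample points.
samples = rng.uniform(0, 100, size=(20000, 3))
_, owner = tree.query(samples, k=1)
fibre_volume = 100.0 ** 3
domain_size = np.bincount(owner, minlength=len(nuclei)) / len(samples) * fibre_volume

print(nn_distance.mean(), domain_size.mean())
```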
14

Automatic measurements of femoral characteristics using 3D ultrasound images in utero

Yaqub, Mohammad January 2011 (has links)
Vitamin D is very important for endochondral ossification, and it is commonly insufficient during pregnancy (Javaid et al., 2006). Insufficiency of vitamin D during pregnancy predicts bone mass and hence adult osteoporosis (Javaid et al., 2006). The relationship between maternal vitamin D and manually measured fetal biometry has been studied (Mahon et al., 2009). However, manual fetal biometry, and especially volumetric measurement, is subjective, time-consuming and possibly irreproducible. Computerised measurements can overcome, or at least reduce, such problems, and this thesis concerns the development and evaluation of novel methods to do this. The thesis makes three contributions. Firstly, we have developed a novel technique based on the Random Forests (RF) classifier to automatically segment and measure several fetal femoral characteristics from 3D ultrasound volumes. We propose a feature selection step in the training stage to eliminate irrelevant features and utilise the "good" ones. We also develop a weighted voting mechanism to weight the tree probabilistic decisions in the RF classifier. We show that the new RF classifier is more accurate than the classic method (Yaqub et al., 2010b, Yaqub et al., 2011b). We achieved 83% segmentation precision with the proposed technique compared to manually segmented volumes. The proposed segmentation technique was also validated on segmenting adult brain structures in MR images, where it showed excellent accuracy. The second contribution is a wavelet-based image fusion technique to enhance the quality of the fetal femur image and to compensate for information missing in one volume due to signal attenuation and acoustic shadowing. We show that using image fusion to increase the quality of ultrasound images of bony structures leads to more accurate and reproducible assessment and measurement, both qualitatively and quantitatively (Yaqub et al., 2010a, Yaqub et al., 2011a). The third contribution concerns the analysis of data from a cohort study of 450 fetal femoral ultrasound volumes (18-21 weeks' gestation). The femur length, cross-sectional areas, volume, splaying indices and angles were automatically measured using the RF method, and the relationship between these measurements, fetal gestational age and maternal vitamin D was investigated. Segmentation of a fetal femur is fast (2.3 s/volume), thanks to the parallel implementation. The femur volume, length and splaying index were found to correlate significantly with fetal gestational age. Furthermore, significant correlations were found between the automatic measurements and a 10 nmol increment in maternal 25OHD during the second trimester.
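The two RF ideas mentioned above, feature selection at training time followed by retraining on the retained features, can be sketched with scikit-learn as below. The weighted tree-voting mechanism of the thesis is not reproduced, and the per-voxel features, labels and importance threshold are synthetic placeholders.

```python
# Hedged sketch: rank features with a first random forest, keep the "good" ones,
# then retrain on the reduced feature set. Data and threshold are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 20))                  # per-voxel feature vectors
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

# Stage 1: train, rank features, keep those above an (illustrative) threshold
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
keep = rf.feature_importances_ > rf.feature_importances_.mean()

# Stage 2: retrain on the selected features only
rf_sel = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, keep], y)
prob = rf_sel.predict_proba(X[:, keep])[:, 1]    # per-voxel "femur" probability
print(keep.sum(), "features kept;", (prob > 0.5).mean(), "voxels labelled femur")
```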
15

Segmentation and sizing of breast cancer masses with ultrasound elasticity imaging

von Lavante, Etienne January 2009 (has links)
Uncertainty in the sizing of breast cancer masses is a major issue in breast screening programs, as there is a tendency to severely underestimate the size of malignant masses, especially with ultrasound imaging as part of the standard triple assessment. Because of this, about 20% of all surgically treated women have to undergo a second resection; the aim of this thesis is therefore to address the issue by developing novel image analysis methods. Ultrasound elasticity imaging has been shown to differentiate soft tissues better than standard B-mode imaging, so a novel segmentation algorithm is presented that employs elasticity imaging to improve the sizing of malignant breast masses in ultrasound. The main contributions of this work are the introduction of a novel filtering technique that significantly improves the quality of the B-mode image, the development of a segmentation algorithm, and their application to an ongoing clinical trial. Due to the limitations of the employed ultrasound device, a method to improve the contrast and signal-to-noise ratio of B-mode images was required. An autoregressive-model-based filter operating on the radio-frequency signal is therefore presented, which reduces the misclassification error on a phantom by up to 90% compared to the employed device, achieving results similar to those of a state-of-the-art ultrasound system. By combining the output of this filter with elasticity data in a region-based segmentation framework, a computationally highly efficient segmentation algorithm using graph cuts is presented. This method is shown to successfully and reliably segment objects on which previous, highly cited methods have failed. Applying this method to 18 cases from a clinical trial, it is shown that the mean absolute sizing error is reduced by 2 mm and that the tendency of B-mode sizing to underestimate size is overcome. Furthermore, the ability to detect widespread DCIS is demonstrated.
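The autoregressive modelling step can be sketched as below: an AR model is fitted to sliding windows of a radio-frequency scan line and the resulting per-window parameters can feed a region-based segmentation. This is only the general idea; the actual filter design in the thesis is not reproduced, and the model order, window length and synthetic RF line are assumptions.

```python
# Rough sketch: Yule-Walker AR coefficient estimation on windows of an ultrasound
# radio-frequency (RF) scan line. Order, window size and the signal are placeholders.
import numpy as np

def ar_coefficients(x, order=4):
    """Estimate AR coefficients from the windowed signal via the Yule-Walker equations."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

rng = np.random.default_rng(3)
rf_line = rng.normal(size=2048)                   # placeholder RF signal

# Slide a window along the scan line; keep one AR coefficient vector per window
window, step = 128, 64
features = np.array([ar_coefficients(rf_line[i:i + window])
                     for i in range(0, len(rf_line) - window, step)])
print(features.shape)
```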
16

Respiratory motion correction in positron emission tomography

Bai, Wenjia January 2010 (has links)
In this thesis, we develop a motion correction method to overcome the degradation of image quality introduced by respiratory motion in positron emission tomography (PET), so that diagnostic performance for lung cancer can be improved. Lung cancer is currently the most common cause of cancer death both in the UK and worldwide. PET/CT, a combination of PET and CT that provides clinicians with both functional and anatomical information, is routinely used as a non-invasive imaging technique to diagnose and stage lung cancer. However, since a PET scan normally takes 15-30 minutes, respiration is inevitable during data acquisition. As a result, thoracic PET images are substantially degraded by respiratory motion: they are not only blurred but also inaccurately attenuation corrected, due to the mismatch between PET and CT. If these challenges are not addressed, the diagnosis of lung cancer may be compromised. The main contribution of this thesis is a novel process for respiratory motion correction, in which non-attenuation-corrected PET images (PET-NAC) are registered to a reference position for motion correction and then multiplied by a voxel-wise attenuation correction factor (ACF) image for attenuation correction. The ACF image is derived from a CT image that matches the reference position, so that no attenuation correction artefacts occur. In experiments, the motion-corrected PET images show significant improvements over the uncorrected images, which represent the acquisitions typical of current clinical practice. The enhanced image quality means that our method has the potential to improve diagnostic performance for lung cancer. We also develop an automatic lesion detection method based on the motion-corrected images. A small lung lesion is only 2 or 3 voxels in diameter and of marginal contrast, and can easily be missed by human observers. Our method aims to provide radiologists with a map of potential lesions to support their decisions, so that diagnostic efficiency can be improved. It utilises both PET and CT images: the CT image provides a lung mask, to which lesion detection is confined, whereas the PET image provides the distribution of glucose metabolism, from which lung lesions are detected. Experimental results show that respiratory motion correction significantly increases the success of lesion detection, especially for small lesions, and that most lung lesions can be detected by our method. The method can serve as a useful computer-aided image analysis tool to help radiologists read images and find malignant lung lesions. Finally, we explore the possibility of incorporating temporal information into respiratory motion correction. Conventionally, respiratory-gated PET images are individually registered to the reference position, and temporal continuity across the respiratory cycle is not considered. We propose a spatio-temporal registration algorithm, which models temporally smooth deformation in order to improve registration performance. However, we find that the improvement introduced by the temporal information is relatively small and comes at the cost of a much longer computation time; spatial registration with regularisation yields similar results but is much faster, and is therefore preferable for respiratory motion correction.
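The correction pipeline described above can be summarised schematically: each gated PET-NAC frame is warped to the reference respiratory position, the warped frames are averaged, and the result is multiplied voxel-wise by the ACF image. In the sketch below the displacement fields stand in for the registration step, which is not implemented, and all volumes are synthetic.

```python
# Schematic sketch of gated PET motion correction followed by voxel-wise
# attenuation correction. Registration is assumed done (fields given); data are synthetic.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume, displacement):
    """Resample a volume along identity + displacement (one field component per axis)."""
    grid = np.indices(volume.shape).astype(float)
    return map_coordinates(volume, grid + displacement, order=1)

rng = np.random.default_rng(4)
shape = (32, 32, 32)
gated_pet_nac = [rng.random(shape) for _ in range(4)]      # respiratory gates
fields = [np.zeros((3,) + shape) for _ in range(4)]        # from registration (identity here)
acf = np.exp(rng.random(shape))                            # voxel-wise ACF image (>= 1)

pet_nac_ref = np.mean([warp(g, f) for g, f in zip(gated_pet_nac, fields)], axis=0)
pet_corrected = pet_nac_ref * acf                          # attenuation correction
print(pet_corrected.shape)
```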
17

Quantitative analysis and segmentation of knee MRI using layered optimal graph segmentation of multiple objects and surfaces

Kashyap, Satyananda 01 December 2016 (has links)
Knee osteoarthritis is one of the most debilitating diseases of aging, as it causes loss of cartilage in the knee joint. It affects quality of life and increases the burden on health care costs. With no disease-modifying osteoarthritis drug currently available, there is an immediate need to understand the factors triggering the onset and progression of the disease. Developing robust segmentation techniques and quantitative analysis helps identify potential imaging-based biomarkers that indicate the onset and progression of osteoarthritis. This thesis developed knee MRI segmentation algorithms in 3D and longitudinal 3D (4D) based on the layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) framework. A hierarchical random forest classifier was developed to improve the cartilage cost functions for the LOGISMOS framework; the new cost function design significantly improved segmentation accuracy over existing state-of-the-art methods. Disease progression produces more artifacts that appear similar to cartilage in MRI. 4D LOGISMOS segmentation was therefore developed to simultaneously segment multiple time points of a single patient by incorporating information from earlier time points, when the knee is relatively healthier in the early stage of the disease. Our experiments showed consistently higher segmentation accuracy across all time points compared with 3D LOGISMOS segmentation of each time point individually. The proposed fully automated segmentation algorithms are not 100% accurate, especially for patient MRIs showing severe osteoarthritis, and require interactive correction. An interactive technique called just-enough interaction (JEI) was developed, which adds a fast correction step to the automated LOGISMOS, speeding up the interaction substantially over current slice-by-slice manual editing while maintaining high accuracy. JEI editing modifies the graph nodes instead of the boundary surfaces of the bones and cartilages, providing globally optimal corrected results. 3D JEI was extended to 4D JEI, allowing simultaneous visualization of and interaction with multiple time points of the same patient. Further quantitative analysis tools were developed to study cartilage thickness losses. A nomenclature-compliant sub-plate detection algorithm was developed to quantify thickness in the smaller load-bearing regions of the knee, to help understand the varying rates of thickness loss in different regions. Regression models were developed to predict cartilage thickness at a later follow-up from the thickness information available from the LOGISMOS segmentation of the patient's current MRI scans. In addition, non-cartilage-based imaging biomarker quantification was developed to analyze bone shape changes between progressing and non-progressing osteoarthritic populations; the algorithm quantified statistically significant local shape changes between the two populations. Overall, this work improved the state of the art in the segmentation of the bones and cartilage of the femur and tibia. Interactive 3D and 4D JEI were developed, allowing fast correction of the segmentations and thus significantly improving accuracy while running many times faster. Furthermore, the quantitative analysis tools developed robustly analyzed the segmentations, providing measurable metrics of osteoarthritis progression.
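The core idea behind optimal-surface graph methods such as LOGISMOS can be illustrated with a toy 2D dynamic-programming version: choose one surface position per image column so that the summed node costs are minimal subject to a smoothness constraint between neighbouring columns. This is far simpler than the multi-object, multi-surface graph of the thesis, and the cost image below is synthetic.

```python
# Toy single-surface search over per-column node costs with a smoothness constraint,
# solved by dynamic programming. Illustrative only; not the LOGISMOS graph construction.
import numpy as np

def optimal_surface_2d(cost, max_delta=1):
    """cost[row, col]: node cost; return one row index per column."""
    n_rows, n_cols = cost.shape
    acc = cost.copy()
    back = np.zeros_like(cost, dtype=int)
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_delta), min(n_rows, r + max_delta + 1)
            prev = acc[lo:hi, c - 1]
            back[r, c] = lo + int(np.argmin(prev))   # best compatible row in previous column
            acc[r, c] += prev.min()
    surface = np.zeros(n_cols, dtype=int)
    surface[-1] = int(np.argmin(acc[:, -1]))
    for c in range(n_cols - 1, 0, -1):               # backtrack the optimal path
        surface[c - 1] = back[surface[c], c]
    return surface

rng = np.random.default_rng(5)
cost = rng.random((40, 60))
cost[20, :] = 0.0                                    # cheap row the surface should follow
print(optimal_surface_2d(cost)[:10])
```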
18

Segmentation Methods for Medical Image Analysis : Blood vessels, multi-scale filtering and level set methods

Läthén, Gunnar January 2010 (has links)
Image segmentation is the problem of partitioning an image into meaningful parts, often consisting of an object and background. As an important part of many imaging applications, e.g., face recognition and tracking of moving cars and people, it is of general interest to design robust and fast segmentation algorithms. However, it is well accepted that there is no general method for solving all segmentation problems; instead, the algorithms have to be highly adapted to the application in order to achieve good performance. In this thesis, we study segmentation methods for blood vessels in medical images. The need for accurate segmentation tools in medical applications is driven by the increased capacity of the imaging devices. Common modalities such as CT and MRI generate images which simply cannot be examined manually, due to high resolutions and a large number of image slices. Furthermore, it is very difficult to visualize complex structures in three-dimensional image volumes without cutting away large portions of, perhaps important, data. Tools such as segmentation can aid the medical staff in browsing through such large images by highlighting objects of particular importance. In addition, segmentation can output models of organs, tumors, and other structures for further analysis, quantification or simulation.

We have divided the segmentation of blood vessels into two parts. First, we model the vessels as a collection of lines and edges (linear structures) and use filtering techniques to detect such structures in an image. Second, the output from this filtering is used as input for segmentation tools. Our contributions mainly lie in the design of a multi-scale filtering and integration scheme for detecting vessels of varying widths, and in the modification of optimization schemes for finding better segmentations than traditional methods do. We validate our ideas on synthetic images mimicking typical blood vessel structures, and show proof-of-concept results on real medical images.
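A rough stand-in for this two-part pipeline is sketched below: a multi-scale vesselness filter (Frangi, as available in scikit-image) detects linear structures across several widths, and a simple threshold replaces the level-set segmentation stage used in the thesis. The synthetic image and the threshold are assumptions.

```python
# Multi-scale vesselness filtering followed by a placeholder threshold.
# The level-set stage of the thesis is not reproduced here.
import numpy as np
from skimage.filters import frangi, gaussian

# Synthetic image with one bright, curved "vessel" on a noisy background
yy, xx = np.mgrid[0:128, 0:128]
image = np.exp(-((yy - 64 - 15 * np.sin(xx / 20.0)) ** 2) / (2 * 2.0 ** 2))
image = gaussian(image, sigma=1) + 0.1 * np.random.default_rng(6).random((128, 128))

# Vesselness computed at several widths (sigmas) and integrated internally by frangi
vesselness = frangi(image, sigmas=(1, 2, 3, 4), black_ridges=False)

segmentation = vesselness > 0.5 * vesselness.max()   # placeholder for the level set
print(segmentation.sum(), "vessel pixels")
```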
19

Co-dimension 2 Geodesic Active Contours for MRA Segmentation

Lorigo, Liana M., Faugeras, Olivier, Grimson, W.E.L., Keriven, Renaud, Kikinis, Ron, Westin, Carl-Fredrik 11 August 1999 (has links)
Automatic and semi-automatic magnetic resonance angiography (MRA) segmentation techniques can potentially save radiologists large amounts of time required for manual segmentation and can facilitate further data analysis. The proposed MRA segmentation method uses a mathematical modeling technique which is well-suited to the complicated curve-like structure of blood vessels. We define the segmentation task as an energy minimization over all 3D curves and use a level set method to search for a solution. Our approach is an extension of previous level set segmentation techniques to higher co-dimension.
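For context only, a standard geodesic active contour (co-dimension 1) can be run with scikit-image as below; the paper's contribution, the co-dimension 2 formulation for curves in 3D, is not reproduced by this sketch. The synthetic image, initial region and parameter values are illustrative.

```python
# Standard geodesic active contour via scikit-image, for orientation only;
# this is NOT the co-dimension 2 formulation of the paper.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

# Synthetic 2D slice with a bright tubular structure
yy, xx = np.mgrid[0:96, 0:96]
image = np.exp(-((xx - 48) ** 2) / (2 * 3.0 ** 2)).astype(float)

gimage = inverse_gaussian_gradient(image)          # edge-stopping function
init = np.zeros(image.shape, dtype=np.int8)
init[10:86, 40:56] = 1                             # rough initial region around the tube

contour = morphological_geodesic_active_contour(gimage, 100, init,
                                                smoothing=1, balloon=0)
print(contour.sum(), "pixels inside the final contour")
```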
20

Visualization and Haptics for Interactive Medical Image Analysis / Visualisering och Haptik för Interaktiv Medicinsk Bildanalys

Vidholm, Erik January 2008 (has links)
Modern medical imaging techniques provide an increasing amount of high-dimensional and high-resolution image data that need to be visualized, analyzed, and interpreted for diagnostic and treatment planning purposes. As a consequence, efficient ways of exploring these images are needed. In order to work with specific patient cases, it is necessary to be able to work directly with the medical image volumes and to generate the relevant 3D structures directly as they are needed for visualization and analysis. This requires efficient tools for segmentation, i.e., separation of objects from each other and from the background. Segmentation is hard to automate due to, e.g., high shape variability of organs and limited contrast between tissues. Manual segmentation, on the other hand, is tedious and error-prone. An approach combining the merits from automatic and manual methods is semi-automatic segmentation, where the user interactively provides input to the methods. For complex medical image volumes, the interactive part can be highly 3D oriented and is therefore dependent on the user interface. This thesis presents methods for interactive segmentation and visualization where true 3D interaction with haptic feedback and stereo graphics is used. Well-known segmentation methods such as fast marching, fuzzy connectedness, live-wire, and deformable models, have been tailored and extended for implementation in a 3D environment where volume visualization and haptics are used to guide the user. The visualization is accelerated with graphics hardware and therefore allows for volume rendering in stereo at interactive rates. The haptic feedback is rendered with constraint-based direct volume haptics in order to convey information about the data that is hard to visualize and thereby facilitate the interaction. The methods have been applied to real medical images, e.g., 3D liver CT data and 4D breast MR data with good results. To provide a tool for future work in this area, a software toolkit containing the implementations of the developed methods has been made publicly available.
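In the spirit of the seeded, interactive methods listed above, the sketch below shows a fast marching step using the scikit-fmm package (assumed available): user-placed seeds define the initial front, image intensities define a speed map, and the travel-time map is thresholded to grow a region. The haptic feedback and stereo rendering of the thesis are of course not reproduced, and the image, seed and threshold are illustrative.

```python
# Minimal fast marching sketch with scikit-fmm: seed -> speed map -> travel time -> threshold.
import numpy as np
import skfmm

rng = np.random.default_rng(7)
image = rng.random((64, 64))
image[20:40, 20:40] += 1.0                         # brighter "organ" region

# phi < 0 inside the user-placed seed, > 0 elsewhere; the zero level is the initial front
phi = np.ones_like(image)
phi[30, 30] = -1.0

gy, gx = np.gradient(image)
speed = 1.0 / (1.0 + np.abs(gy) + np.abs(gx))      # slow down at strong edges
arrival = skfmm.travel_time(phi, speed)

segmentation = arrival < 15.0                      # grow until a chosen arrival time
print(segmentation.sum(), "pixels segmented")
```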
