101 |
Methods and models for 2D and 3D image analysis in microscopy, in particular for the study of muscle cells / Metoder och modeller för två- och tredimensionell bildanalys inom mikroskopi, speciellt med inriktning mot muskelceller
Karlsson Edlund, Patrick, January 2008 (has links)
<p>Many research questions in biological research lead to numerous microscope images that need to be evaluated. Here digital image cytometry, i.e., quantitative, automated or semi-automated analysis of the images, is an important and rapidly growing discipline. This thesis presents contributions to that field. The work has been carried out in close cooperation with biomedical research partners, successfully solving real world problems.</p><p>The world is 3D, and modern imaging methods such as confocal microscopy provide 3D images. Hence, a large part of the work has dealt with the development of new and improved methods for quantitative analysis of 3D images, in particular of fluorescently labeled skeletal muscle cells.</p><p>A geometrical model for robust segmentation of skeletal muscle fibers was developed. Images of the multinucleated muscle cells were pre-processed using a novel spatially modulated transform, producing images with reduced complexity and facilitating easy nuclei segmentation. Fibers from several mammalian species were modeled, and features were computed based on cell nuclei positions. Features such as myonuclear domain size and nearest neighbor distance were shown to correlate with body mass and femur length. Human muscle fibers from young and old males and females were related to fiber type and the extracted features, where the variation in myonuclear domain size was shown to increase with age irrespective of fiber type and gender.</p><p>A segmentation method for severely clustered point-like signals was developed and applied to images of fluorescent probes, quantifying the amount and location of mitochondrial DNA within cells. A synthetic cell model was developed to provide a controllable gold standard for performance evaluation of both expert manual and fully automated segmentations. The proposed method matches the correctness achieved by manual quantification. 
</p><p>An interactive segmentation procedure was successfully applied to treated testicle sections of boar, showing how a common industrial plastic softener significantly affects testosterone concentrations.</p>
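The nuclei-position features mentioned above (myonuclear domain size, nearest neighbor distance) reduce to simple geometry once the nuclei centroids have been segmented. A minimal sketch with made-up centroids, not the thesis' actual implementation:

```python
import math

def nearest_neighbour_distances(centroids):
    # For each nucleus centroid (x, y, z), the Euclidean distance
    # to its closest neighbouring nucleus.
    dists = []
    for i, p in enumerate(centroids):
        best = min(math.dist(p, q)
                   for j, q in enumerate(centroids) if j != i)
        dists.append(best)
    return dists

def myonuclear_domain_size(fibre_volume, n_nuclei):
    # Mean cytoplasmic volume "served" by a single nucleus.
    return fibre_volume / n_nuclei

nuclei = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (3.0, 4.0, 0.0)]
print(nearest_neighbour_distances(nuclei))  # [3.0, 3.0, 4.0]
print(myonuclear_domain_size(1200.0, 3))    # 400.0
```

Features of this kind, computed per fiber, are what the thesis correlates with body mass and femur length.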
|
102 |
Structure analysis and lesion detection from retinal fundus images
Gonzalez, Ana Guadalupe Salazar, January 2011 (has links)
Ocular pathology is one of the main health problems worldwide. The number of people with retinopathy symptoms has increased considerably in recent years. Early, adequate treatment has been shown to be effective in avoiding loss of vision. The analysis of fundus images is a non-intrusive option for periodical retinal screening. Many models designed for the analysis of retinal images are based on supervised methods, which require hand-labelled images and processing time as part of the training stage. Moreover, most methods have been designed around specific characteristics of the retinal images (e.g. field of view, resolution), which restricts their performance to a reduced group of retinal images with similar features. For these reasons an unsupervised model for the analysis of retinal images is required: a model that can work without human supervision or interaction, and that is able to perform on retinal images with different characteristics. In this research, we have worked on the development of this type of model. The system first locates the eye structures (e.g. optic disc and blood vessels). These structures are then masked out from the retinal image in order to create a clear field in which to perform lesion detection. We selected the Graph Cut technique as the basis for designing the retinal structure segmentation methods. This choice allows prior knowledge to be incorporated to constrain the search for the optimal segmentation. Different link weight assignments were formulated in order to address the specific needs of the retinal structures (e.g. shape). This research project has brought together the fields of image processing and ophthalmology to create a novel system that contributes significantly to the state of the art in medical image analysis. This new knowledge provides an alternative approach to the analysis of medical images and opens a new panorama for researchers exploring this area.
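The link weight assignments at the heart of a Graph Cut formulation can be sketched as follows. The Gaussian forms and the parameter values are illustrative assumptions, not the weights derived in the thesis:

```python
import math

def n_link_weight(i_a, i_b, sigma=10.0):
    # Neighbour ("n-link") weight: near 1 for similar intensities,
    # near 0 across a strong edge, so the minimum cut prefers to
    # pass along object boundaries.
    return math.exp(-((i_a - i_b) ** 2) / (2.0 * sigma ** 2))

def t_link_weights(intensity, mu_obj, mu_bkg, sigma=15.0):
    # Terminal ("t-link") weights tying a pixel to the object and
    # background terminals; prior knowledge (e.g. expected vessel or
    # optic-disc intensity) enters through mu_obj / mu_bkg.
    to_obj = math.exp(-((intensity - mu_obj) ** 2) / (2.0 * sigma ** 2))
    to_bkg = math.exp(-((intensity - mu_bkg) ** 2) / (2.0 * sigma ** 2))
    return to_obj, to_bkg

print(round(n_link_weight(120, 120), 3))  # 1.0 (smooth region)
print(round(n_link_weight(120, 80), 3))   # 0.0 (strong edge)
```

A max-flow/min-cut solver over a graph with these weights then yields the optimal segmentation.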
|
103 |
Level Set Segmentation and Volume Visualization of Vascular Trees
Läthén, Gunnar, January 2013 (has links)
Medical imaging is an important part of the clinical workflow. With the increasing amount and complexity of image data comes the need for automatic (or semi-automatic) analysis methods which aid the physician in the exploration of the data. One specific imaging technique is angiography, in which the blood vessels are imaged using an injected contrast agent that increases the contrast between blood and surrounding tissue. In these images, the blood vessels appear as tubular structures with varying diameters. Deviations from this structure are signs of disease, such as stenoses, which reduce blood flow, or aneurysms, which carry a risk of rupture. This thesis focuses on segmentation and visualization of the blood vessels constituting the vascular tree in angiography images. Segmentation is the problem of partitioning an image into separate regions. There is no general segmentation method which achieves good results for all possible applications; instead, algorithms use prior knowledge and data models adapted to the problem at hand. We study blood vessel segmentation based on a two-step approach. First, we model the vessels as a collection of linear structures which are detected using multi-scale filtering techniques. Second, we develop machine-learning based level set segmentation methods to separate the vessels from the background, based on the output of the filtering. In many applications the three-dimensional structure of the vascular tree has to be presented to a radiologist or another member of the medical staff. For this, a visualization technique such as direct volume rendering is often used. In the case of computed tomography angiography one has to take into account that the image depends on both the geometrical structure of the vascular tree and the varying concentration of the injected contrast agent. The visualization should have an easy-to-understand interpretation for the user, to make diagnostic interpretations reliable. 
The mapping from the image data to the visualization should therefore closely follow routines that are commonly used by the radiologist. We developed an automatic method which adapts the visualization locally to the contrast agent, revealing a larger portion of the vascular tree while minimizing the manual intervention required from the radiologist. The effectiveness of this method is evaluated in a user study involving radiologists as domain experts.
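The two-step approach can be illustrated with a deliberately minimal 1-D level-set evolution, where the hypothetical speed values stand in for the output of the multi-scale vessel filter. Real level-set schemes also include the gradient-magnitude and curvature terms omitted in this toy:

```python
def evolve_level_set(phi, speed, dt=0.5, steps=20):
    # Naive 1-D level-set update: the front phi = 0 moves with a
    # pointwise speed (no curvature or |grad phi| factor here).
    for _ in range(steps):
        phi = [p - dt * s for p, s in zip(phi, speed)]
    return phi

# speed > 0 where the (hypothetical) vessel filter responded,
# speed < 0 in background, so the contour expands into the vessel.
speed = [-1.0, -1.0, 1.0, 1.0, 1.0, -1.0, -1.0]
phi = evolve_level_set([1.0] * 7, speed)  # start entirely outside
segmentation = [p < 0 for p in phi]
print(segmentation)  # [False, False, True, True, True, False, False]
```

In the thesis, the speed function is learned from the filter output rather than fixed by hand; the evolution principle is the same.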
|
104 |
Multi-Manifold learning and Voronoi region-based segmentation with an application in hand gesture recognition
Hettiarachchi, Randima, 12 1900 (has links)
A computer vision system consists of many stages, depending on its application. Feature extraction and segmentation are two key stages of a typical computer vision system and hence developments in feature extraction and segmentation are significant in improving the overall performance of a computer vision system. There are many inherent problems associated with feature extraction and segmentation processes of a computer vision system. In this thesis, I propose novel solutions to some of these problems in feature extraction and segmentation.
First, I explore manifold learning, which is a non-linear dimensionality reduction technique for feature extraction in high dimensional data. The classical manifold learning techniques perform dimensionality reduction assuming that original data lie on a single low dimensional manifold. However, in reality, data sets often consist of data belonging to multiple classes, which lie on their own manifolds. Thus, I propose a multi-manifold learning technique to simultaneously learn multiple manifolds present in a data set, which cannot be achieved through classical single manifold learning techniques.
Secondly, in image segmentation, when the number of segments of the image is not known, automatically determining the number of segments becomes a challenging problem. In this thesis, I propose an adaptive unsupervised image segmentation technique based on spatial and feature space Dirichlet tessellation as a solution to this problem. Skin segmentation is an important as well as a challenging problem in computer vision applications. Thus, thirdly, I propose a novel skin segmentation technique by combining the multi-manifold learning-based feature extraction and Voronoi region-based image segmentation.
Finally, I explore hand gesture recognition, a prevalent topic in intelligent human-computer interaction, and demonstrate that the proposed improvements in the feature extraction and segmentation stages improve the overall recognition rates of the proposed hand gesture recognition framework. I use the proposed skin segmentation technique to segment the hand, the object of interest in hand gesture recognition, and manifold learning to automatically extract the salient features. Furthermore, in this thesis, I show that different instances of the same dynamic hand gesture have similar underlying manifolds, which allows manifold-matching based hand gesture recognition. / February 2017
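In its discrete form, a spatial or feature-space Dirichlet (Voronoi) tessellation amounts to nearest-seed assignment. A minimal sketch with made-up 2-D points, not the adaptive technique of the thesis:

```python
import math

def voronoi_labels(points, seeds):
    # Assign each point to its nearest seed: the discrete Dirichlet
    # (Voronoi) tessellation used for region-based grouping.
    return [min(range(len(seeds)), key=lambda k: math.dist(p, seeds[k]))
            for p in points]

seeds = [(0.0, 0.0), (10.0, 0.0)]
pts = [(1.0, 1.0), (9.0, 1.0), (4.0, 0.0)]
print(voronoi_labels(pts, seeds))  # [0, 1, 0]
```

The adaptive part of the proposed method lies in choosing the seeds and the number of regions automatically, which this sketch does not attempt.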
|
105 |
Automatic measurements of femoral characteristics using 3D ultrasound images in utero
Yaqub, Mohammad, January 2011 (has links)
Vitamin D is very important for endochondral ossification and is commonly insufficient during pregnancy (Javaid et al., 2006). Insufficiency of vitamin D during pregnancy predicts bone mass and hence adult osteoporosis (Javaid et al., 2006). The relationship between maternal vitamin D and manually measured fetal biometry has been studied (Mahon et al., 2009). However, manual fetal biometry, especially volumetric measurement, is subjective, time-consuming and possibly irreproducible. Computerised measurements can overcome, or at least reduce, such problems. This thesis concerns the development and evaluation of novel methods to do this, and makes three contributions. Firstly, we have developed a novel technique based on the Random Forests (RF) classifier to segment and measure several fetal femoral characteristics from 3D ultrasound volumes automatically. We propose a feature selection step in the training stage to eliminate irrelevant features and utilise the "good" ones. We also develop a weighted voting mechanism to weight the probabilistic decisions of the trees in the RF classifier. We show that the new RF classifier is more accurate than the classic method (Yaqub et al., 2010b, Yaqub et al., 2011b); we achieved 83% segmentation precision with the proposed technique compared to manually segmented volumes. The proposed segmentation technique was also validated on segmenting adult brain structures in MR images, where it showed excellent accuracy. The second contribution is a wavelet-based image fusion technique to enhance the quality of the fetal femur image and to compensate for information missing in one volume due to signal attenuation and acoustic shadowing. We show that using image fusion to increase the quality of ultrasound images of bony structures leads to more accurate and reproducible assessment and measurement, both qualitatively and quantitatively (Yaqub et al., 2010a, Yaqub et al., 2011a). 
The third contribution concerns the analysis of data from a cohort study of 450 fetal femoral ultrasound volumes (18-21 weeks gestation). The femur length, cross-sectional areas, volume, splaying indices and angles were automatically measured using the RF method, and the relationship between these measurements and fetal gestational age and maternal vitamin D was investigated. Segmentation of a fetal femur is fast (2.3 s/volume), thanks to the parallel implementation. The femur volume, length and splaying index were found to correlate significantly with fetal gestational age. Furthermore, significant correlations were found between the automatic measurements and 10 nmol increments in maternal 25OHD during the second trimester.
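The weighted voting idea, in which per-tree decisions are no longer pooled uniformly, can be sketched as follows. The tree outputs and weights below are made-up numbers for illustration, not values from the thesis:

```python
def weighted_vote(tree_probs, tree_weights):
    # Fuse per-tree class probabilities with per-tree weights,
    # instead of the classic unweighted average over all trees.
    n_classes = len(tree_probs[0])
    total = sum(tree_weights)
    fused = [sum(w * p[c] for p, w in zip(tree_probs, tree_weights)) / total
             for c in range(n_classes)]
    return fused.index(max(fused)), fused

# three trees, two classes (background / femur), hypothetical outputs
probs = [[0.9, 0.1], [0.4, 0.6], [0.3, 0.7]]
label, _ = weighted_vote(probs, [1.0, 1.0, 1.0])    # plain average
label_w, _ = weighted_vote(probs, [0.2, 1.0, 1.0])  # down-weight tree 0
print(label, label_w)  # 0 1
```

Down-weighting an unreliable tree can flip the fused decision, which is exactly the leverage a weighted voting mechanism provides.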
|
106 |
Automatic analysis of magnetic resonance images of speech articulation
Raeesy, Zeynabalsadat, January 2013 (has links)
Magnetic resonance imaging (MRI) technology has facilitated capturing the dynamics of speech production at fine temporal and spatial resolutions, thus generating substantial quantities of images to be analysed. Manual processing of large MRI databases is labour intensive and time consuming. Hence, to study articulation on a large scale, techniques for automatic feature extraction are needed. This thesis investigates approaches for automatic information extraction from an MRI database of dynamic articulation. We first study articulation by observing the pixel intensity variations in image sequences. The correspondence between acoustic segments and images is established by forced alignment of the speech signals recorded during articulation. We obtain speaker-specific typical phoneme articulations that represent general articulatory configurations in running speech. Articulation dynamics are parametrised by measuring the magnitude of change in intensities over time. We demonstrate a direct correlation between the dynamics of articulation thus measured and the energy of the generated acoustic signals. For more sophisticated applications, a parametric description of vocal tract shape is desired. We investigate different shape extraction techniques and present a framework that can automatically identify and extract vocal tract shapes. The framework incorporates shape prior information and intensity features in recognising and delineating the shape. The new framework is a promising tool for automatic identification of vocal tract boundaries in large MRI databases, as demonstrated through extensive assessments. The segmentation framework proposed in this thesis is, to the best of our knowledge, novel in the field of speech production. The methods investigated in this thesis facilitate automatic information extraction from images, either for studying the dynamics of articulation or for vocal tract shape modelling. 
This thesis advances the state-of-the-art by bringing new perspectives to studying articulation, and introducing a segmentation framework that is automatic, does not require extensive initialisation, and reports a minimum number of failures.
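Parametrising articulation dynamics by the magnitude of intensity change over time can be sketched on flattened toy frames. This is an illustration only, not the thesis' exact measure:

```python
def articulation_dynamics(frames):
    # Mean absolute pixel-intensity change between consecutive
    # frames: a scalar proxy for how much the articulators move.
    out = []
    for a, b in zip(frames, frames[1:]):
        diffs = [abs(x - y) for x, y in zip(a, b)]
        out.append(sum(diffs) / len(diffs))
    return out

# flattened toy frames: static, then a movement, then static again
frames = [[0, 0, 0, 0], [0, 0, 0, 0], [4, 0, 0, 4], [4, 0, 0, 4]]
print(articulation_dynamics(frames))  # [0.0, 2.0, 0.0]
```

Peaks in such a curve mark intervals of articulatory movement, which is what the thesis correlates with the energy of the acoustic signal.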
|
107 |
Analysis of Medical Images by Colonies of Prehending Entities
Smith, Rebecca, 11 May 2010 (has links)
The concept of emergent behavior is difficult to define, but can be considered as higher-level activity created by the individual actions of a population of simple agents. A potential means to model such behavior has been previously developed using Alfred North Whitehead's concept of Actual Entities. In computational form, actual entities are agents which evolve over time in response to interactions with their environment via the process of prehension. This occurs within the context of a Colony of Prehending Entities, a framework for implementation of AE concepts. This thesis explores the practical application of this framework in analysis of medical images, with specific focus on Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans. Specialized Slice COPEs are developed for analysis of individual image slices from these scans, focusing on the detection and segmentation of structures of interest (such as bone matter, ventricular tissue, and tumors). These structures exist in 3D and can be extracted across multiple consecutive scan slices. Therefore, a specialized Scan COPE is also proposed which aims to render the structure's volume via interpolation between previously analyzed slice images. The software developed for the specified application also provides visualization of a COPE's evolution toward its goal. This has additional value in general study of the COPE framework and the emergent behavior it generates.
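As a loose illustration only (the actual prehension process in a COPE is far richer), an agent that repeatedly "prehends" its neighbourhood and drifts toward bright pixels behaves like a primitive structure-seeking entity; a colony of such agents accumulates on high-intensity regions:

```python
def prehend_step(position, image, width):
    # One toy "prehension" step: the agent inspects its
    # 4-neighbourhood in a flattened image and moves to the
    # brightest reachable pixel (staying put if it is brightest).
    x, y = position
    height = len(image) // width
    candidates = [(x, y)]
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            candidates.append((nx, ny))
    return max(candidates, key=lambda p: image[p[1] * width + p[0]])

# 3x3 image with intensity rising toward the bottom-right "lesion"
img = [1, 2, 3,
       2, 4, 6,
       3, 6, 9]
pos = (0, 0)
for _ in range(6):
    pos = prehend_step(pos, img, 3)
print(pos)  # (2, 2)
```

Emergent segmentation in a COPE arises from many such entities interacting, not from any single agent's rule.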
|
108 |
Automatic Segmentation of Pressure Images Acquired in a Clinical Setting
Pepperl, Anathea, 09 May 2013 (has links)
One of the major obstacles to pressure ulcer research is the difficulty in accurately measuring mechanical loading of specific anatomical sites. A human motion analysis system capable of automatically segmenting a patient's body into high-risk areas can greatly improve the ability of researchers and clinicians to understand how pressure ulcers develop in a hospital environment. This project has developed automated computational methods and algorithms to analyze pressure images acquired in a hospital setting. The algorithm achieved 99% overall accuracy for the classification of pressure images into three pose classes (left lateral, supine, and right lateral). An applied kinematic model estimated the overall pose of the patient. The algorithm accuracy depended on the body site, with the sacrum, left trochanter, and right trochanter achieving an accuracy of 87-93%. This project reliably segments pressure images into high-risk regions of interest.
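The reported figures come down to straightforward accuracy bookkeeping over pose labels; a sketch (the label lists below are invented examples, and the actual classifier features are not described in the abstract):

```python
POSES = ("left lateral", "supine", "right lateral")

def overall_accuracy(truth, predicted):
    # Fraction of pressure images assigned the correct pose class.
    return sum(t == p for t, p in zip(truth, predicted)) / len(truth)

def per_class_accuracy(truth, predicted, label):
    # Accuracy restricted to one pose or body-site label.
    pairs = [(t, p) for t, p in zip(truth, predicted) if t == label]
    return sum(t == p for t, p in pairs) / len(pairs)

truth = ["supine", "left lateral", "supine", "right lateral"]
pred = ["supine", "left lateral", "supine", "supine"]
print(overall_accuracy(truth, pred))              # 0.75
print(per_class_accuracy(truth, pred, "supine"))  # 1.0
```

Per-class breakdowns of this kind are what reveal the site-dependent 87-93% accuracies alongside the 99% overall figure.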
|
109 |
Synthesis of Thoracic Computer Tomography Images using Generative Adversarial Networks
Hagvall Hörnstedt, Julia, January 2019 (has links)
The use of machine learning algorithms to enhance and facilitate medical diagnosis and analysis is a promising and important area, which could substantially reduce the workload of clinicians. In order for machine learning algorithms to learn a certain task, large amounts of data need to be available. Data sets for medical image analysis are rarely public due to restrictions concerning the sharing of patient data. The production of synthetic images could act as an anonymization tool to enable the distribution of medical images and facilitate the training of machine learning algorithms for use in practice. This thesis investigates the use of Generative Adversarial Networks (GAN) for the synthesis of new thoracic computed tomography (CT) images with no connection to real patients. It also examines the usefulness of the images by comparing the quantitative performance of a segmentation network trained with the synthetic images against that of the same segmentation network trained with real thoracic CT images. The synthetic thoracic CT images were generated using CycleGAN for image-to-image translation between label map ground truth images and thoracic CT images. The synthetic images were evaluated using different set-ups of synthetic and real images for training the segmentation network. All set-ups were evaluated according to sensitivity, accuracy, Dice and F2-score and compared to the same metrics evaluated from a segmentation network trained with 344 real images. The thesis shows that it was possible to generate synthetic thoracic CT images using GAN. However, within the scope of this thesis, a segmentation network trained with synthetic data alone could not match the quantitative performance of one trained with the same amount of real images. 
Performance equal to that of a segmentation network trained on real images could, however, be achieved by training with a combination of real and synthetic images in which the majority of images were synthetic and only a minority real. Using a combination of 59 real and 590 synthetic images, performance equal to that of a segmentation network trained with 344 real images was achieved with regard to sensitivity, Dice and F2-score. Equal quantitative performance could thus be achieved using fewer real images together with an abundance of synthetic images, created at close to no cost, indicating the usefulness of synthetically generated images.
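The evaluation metrics used here are standard; a sketch of Dice and F2-score computed from flattened binary masks:

```python
def dice_f2(truth, pred):
    # Dice and F2-score from flattened binary masks. F2 is the
    # F-beta score with beta = 2, weighting recall (sensitivity)
    # four times as heavily as precision.
    tp = sum(t and p for t, p in zip(truth, pred))
    fp = sum((not t) and p for t, p in zip(truth, pred))
    fn = sum(t and (not p) for t, p in zip(truth, pred))
    dice = 2 * tp / (2 * tp + fp + fn)
    f2 = 5 * tp / (5 * tp + 4 * fn + fp)
    return dice, f2

truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 1, 0, 0, 0]
print(dice_f2(truth, pred))  # (0.75, 0.75)
```

Dice equals the F1-score on binary masks; reporting F2 alongside it emphasises missed foreground voxels, which matters when under-segmentation is the costlier error.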
|
110 |
Abordagens para a segmentação de coronárias em ecocardiografia. / Approaches for coronary segmentation in echocardiography.
Souza, André Fernando Lourenço de, 03 August 2010 (has links)
A Ecocardiografia continua sendo a técnica de captura de imagens mais promissora, não-invasiva, sem radiação ionizante e de baixo custo para avaliação de condições cardíacas. Porém, é afetada consideravelmente por ruídos do tipo speckle, que são difíceis de serem filtrados. Por isso fez-se necessário fazer a escolha certa entre filtragem e segmentador para a obtenção de resultados melhores na segmentação de estruturas. O objetivo dessa pesquisa foi estudar essa combinação entre filtro e segmentador. Para isso, foi desenvolvido um sistema segmentador, a fim de sistematizar essa avaliação. Foram implementados dois filtros para atenuar o efeito do ruído speckle - Linear Scaling Mean Variance (LSMV) e o filtro de Chitwong - testados em imagens simuladas. Foram simuladas 60 imagens com 300 por 300 pixels, 3 modelos, 4 espessuras e 5 níveis de contrastes diferentes, todas com ruído speckle. Além disso, foram feitos testes com a combinação de filtros. Logo após, foi implementado um algoritmo de conectividade Fuzzy para fazer a segmentação e um sistema avaliador, seguindo os critérios descritos por Loizou, que faz a contagem de verdadeiro-positivos (VP) e falso-positivos (FP). Foi verificado que o filtro LSMV é a melhor opção para segmentação por conectividade Fuzzy. Foram obtidas taxas de VP e FP na ordem de 95% e 5%, respectivamente, e acurácia em torno de 95%. Para imagens ruidosas com alto contraste, aplicando a segmentação sem filtragem, a acurácia obtida foi na ordem de 60%. / Echocardiography remains the most promising imaging technique for assessing heart conditions: non-invasive, free of ionizing radiation, and inexpensive. On the other hand, it is considerably affected by noise, such as speckle, which is very difficult to filter. It is therefore necessary to choose the right combination of filter and segmentation method to obtain the best structure segmentation results; the goal of this research was to evaluate that combination. 
To that end, a segmentation system was developed to support the assessment. Two filters were implemented to mitigate the effect of speckle noise - Linear Scaling Mean Variance (LSMV) and the filter presented by Chitwong - and tested on simulated images. We simulated 60 images of 300 by 300 pixels, with 3 models, 4 thicknesses and 5 different levels of contrast, all with speckle noise. In addition, tests were made with combinations of filters. Furthermore, a Fuzzy Connectedness segmentation algorithm was implemented, along with an evaluation system following the criteria described by Loizou, which counts true positives (TP) and false positives (FP). It was found that the LSMV filter is the best option for Fuzzy Connectedness segmentation. We obtained TP and FP rates on the order of 95% and 5% using LSMV, and accuracy of about 95%. Using high-contrast noisy images without filtering, the accuracy obtained was on the order of 60%.
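One common formulation of the LSMV filter rescales each local window toward the global image statistics, suppressing the multiplicative, speckle-like local fluctuations. A 1-D sketch of that formulation (the exact variant used in the thesis may differ):

```python
import statistics

def lsmv_1d(signal, radius=2):
    # LSMV sketch: linearly rescale each sample so that its local
    # window's mean/std are mapped onto the global mean/std.
    # Zero standard deviations are replaced by 1.0 to avoid
    # division by zero on flat regions.
    gm = statistics.mean(signal)
    gs = statistics.pstdev(signal) or 1.0
    out = []
    for i, x in enumerate(signal):
        win = signal[max(0, i - radius): i + radius + 1]
        lm = statistics.mean(win)
        ls = statistics.pstdev(win) or 1.0
        out.append(gm + (x - lm) * gs / ls)
    return out

print(lsmv_1d([5.0] * 6))  # a flat signal passes through unchanged
```

A filter of this family smooths speckle while preserving larger-scale structure, which is why it pairs well with a connectivity-based segmenter.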
|