421 |
Microarray image processing: a novel neural network framework. Zineddin, Bachar, January 2011 (has links)
Due to the vast success of bioengineering techniques, a series of large-scale analysis tools has been developed to discover the functional organization of cells. Among them, cDNA microarray technology has emerged as a powerful tool that enables biologists to study thousands of genes simultaneously within an entire organism, and thus obtain a better understanding of the gene interaction and regulation mechanisms involved. Although microarray technology has been developed to offer high tolerances, there is considerable signal irregularity across the surface of the microarray image. Imperfections in the image generation process introduce noise of many types, which contaminates the resulting image. These errors and noise propagate through, and can significantly affect, all subsequent processing and analysis. Therefore, to realize the potential of the technology, it is crucial to obtain high-quality image data that truly reflect the underlying biology in the samples. One of the key steps in extracting information from a microarray image is segmentation: identifying which pixels within an image represent which gene. This area of spotted microarray image analysis has received relatively little attention compared with the advances in subsequent analysis stages, yet the lack of advanced image analysis, including segmentation, means that sub-optimal data feed into all downstream analysis methods. Although much research has recently been devoted to microarray image analysis and many methods have been proposed, some produce better results than others, and in general the most effective approaches require considerable run-time (processing) power to process an entire image.
Furthermore, there has been little progress on sufficiently fast yet efficient and effective algorithms for segmenting microarray images using a highly sophisticated framework such as Cellular Neural Networks (CNNs). It is, therefore, the aim of this thesis to investigate and develop novel methods for processing microarray images. The goal is to produce results that outperform the currently available approaches in terms of PSNR, k-means and ICC measurements.
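The PSNR measurement mentioned here is a standard image-quality metric. As a minimal illustrative sketch (not the thesis's evaluation code; the toy images and noise level are invented for the example), it can be computed as:

```python
import numpy as np

def psnr(reference, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a clean ramp image vs. a noisy copy of it.
clean = np.tile(np.arange(64, dtype=np.float64), (64, 1))
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0.0, 2.0, clean.shape)
print(round(psnr(clean, noisy), 1))
```

Higher PSNR indicates a processed image closer to the reference; identical images give an infinite value.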
|
422 |
Privacy Protecting Surveillance: A Proof-of-Concept Demonstrator / Demonstrator för integritetsskyddad övervakning. Hemström, Fredrik, January 2015 (has links)
Visual surveillance systems are increasingly common in our society today. There is a conflict between the public's demand for security and the demand to preserve personal integrity. This thesis suggests a solution in which parts of the surveillance images are covered in order to conceal the identities of persons appearing in the video, but not their actions or activities. The covered parts could be encrypted and unlocked only by the police or another legal authority in case of a crime. This thesis implements a proof-of-concept demonstrator using a combination of image processing techniques such as foreground segmentation, mathematical morphology, geometric camera calibration and region tracking. The demonstrator is capable of tracking a moderate number of moving objects and concealing their identity by replacing them with a mask or a blurred image. Functionality for replaying recorded data and unlocking individual persons is included. The concept demonstrator shows the chain from concealing the identities of persons to unlocking only a single person in recorded data. Evaluation on a publicly available dataset shows overall good performance.
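The core masking chain (foreground segmentation against a reference frame, morphological cleanup, then replacing the region with a flat mask) can be sketched with numpy alone. This is a simplified stand-in for the demonstrator, assuming a static background frame; the threshold, dilation radius and grey mask value are illustrative choices, not taken from the thesis.

```python
import numpy as np

def dilate(mask, it=1):
    """Naive 4-neighbour binary dilation (a stand-in for mathematical morphology)."""
    out = mask.copy()
    for _ in range(it):
        padded = np.pad(out, 1)
        out = (padded[1:-1, 1:-1] | padded[:-2, 1:-1] | padded[2:, 1:-1]
               | padded[1:-1, :-2] | padded[1:-1, 2:])
    return out

def conceal(frame, background, thresh=30):
    """Replace foreground pixels (difference from an empty reference frame)
    with a flat mask value, concealing identity but not position."""
    fg = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > thresh
    fg = dilate(fg, it=2)            # close small gaps around the silhouette
    out = frame.copy()
    out[fg] = 128                    # grey mask; a blurred patch would also work
    return out, fg

background = np.zeros((32, 32), dtype=np.uint8)
frame = background.copy()
frame[10:20, 12:18] = 200            # a "person" entering the scene
masked, fg = conceal(frame, background)
print(fg.sum() > 0, masked[15, 15])
```

In the real demonstrator, the concealed region would additionally be stored encrypted so that an authority could later unlock it.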
|
423 |
Classification of skin tumours through the analysis of unconstrained images. Viana, Joaquim Mesquita da Cunha, January 2009 (has links)
Skin cancer is the most frequent malignant neoplasm in Caucasian individuals. According to the Skin Cancer Foundation, the incidence of melanoma, the most malignant of skin tumours, and the resulting mortality have increased exponentially during the past 30 years and continue to grow [1]. Although often intractable in advanced stages, skin cancer in general, and melanoma in particular, can achieve cure ratios of over 95% if detected at an early stage [1,55]. Early screening of lesions is therefore crucial if a cure is to be achieved. Most skin lesion classification systems rely on a human expert supported by dermatoscopy, an enhanced and zoomed photograph of the lesion zone. Nevertheless, and although contrary claims exist, as far as is known by the author, classification results are currently rather inaccurate and need to be verified through laboratory analysis of a piece of the lesion's tissue. The aim of this research was to design and implement a system able to automatically classify skin spots as inoffensive or dangerous with a small margin of error; if possible, with higher accuracy than a human expert normally achieves, and certainly better than any existing automatic system. The system described in this thesis meets these criteria. It captures an unconstrained image of the affected skin area and extracts a set of relevant features that may lead to, and be representative of, the four main classification characteristics of skin lesions: Asymmetry, Border, Colour and Diameter. These features are then evaluated through a Bayesian statistical process, both simple and Fuzzy k-Nearest Neighbour classifiers, a Support Vector Machine and an Artificial Neural Network in order to classify the skin spot as a melanoma or not. The characteristics selected and used throughout this work are, to the author's knowledge, combined in an innovative manner.
Rather than simply selecting absolute values from the image characteristics, those numbers were combined into ratios, providing much greater independence from environmental conditions during image capture. During this work, image gathering became one of the most challenging activities; several of the initially promising sources failed, so the author had to use all the pictures he could find, namely on the Internet. This limited the test set to only 136 images. Nevertheless, the results were excellent. The algorithms developed were implemented in a fully working system which was extensively tested. It gives a correct classification of between 76% and 92%, depending on the percentage of pictures used to train the system. In particular, the system gave no false negatives. This is crucial, since a system which gives false negatives may deter a patient from seeking further treatment, with a disastrous outcome. These results are achieved by detecting precise edges for every lesion image, extracting the features considered relevant, giving different weights to the various extracted features, and submitting these values to six classification algorithms (k-Nearest Neighbour, Fuzzy k-Nearest Neighbour, Naïve Bayes, Tree Augmented Naïve Bayes, Support Vector Machine and Multilayer Perceptron) in order to determine the most reliable combined process. Training was carried out in a supervised way: all the lesions were previously classified by an expert in the field before being subjected to the scrutiny of the system. The author is convinced that the work presented in this PhD thesis is a valid contribution to the field of skin cancer diagnostics. Although its scope is limited to one lesion per image, the results achieved by this arrangement of segmentation, feature extraction and classification algorithms show that this is the right path towards a reliable early screening system.
If and when values for age, gender and lesion evolution can be added to these data as classification features, the results will no doubt become even more accurate, allowing for an improvement in the survival rates of skin cancer patients.
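Scale-invariant ratio features of the kind described (derived from the Asymmetry and Border characteristics of a segmented lesion) can be sketched as follows. This is an illustrative simplification, not the thesis's actual feature set; the asymmetry and compactness definitions below are common textbook choices.

```python
import numpy as np

def lesion_ratios(mask):
    """Scale-invariant shape ratios for a binary lesion mask:
    asymmetry (area mismatch under a horizontal flip, per unit area)
    and compactness (perimeter^2 / area, smallest for a circle)."""
    area = mask.sum()
    asym = np.logical_xor(mask, mask[:, ::-1]).sum() / area
    # crude perimeter: foreground pixels with at least one background 4-neighbour
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    return asym, perimeter ** 2 / area

# A filled square is symmetric under the flip; an L-shape is not.
square = np.zeros((20, 20), dtype=bool)
square[5:15, 5:15] = True
lshape = square.copy()
lshape[5:15, 10:15] = False
print(lesion_ratios(square)[0], lesion_ratios(lshape)[0] > 0)
```

Because both quantities are ratios, they are unchanged when the same lesion is photographed at a different distance, which is the independence from capture conditions the abstract emphasises.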
|
424 |
4D MR phase and magnitude segmentations with GPU parallel computing. Bergen, Robert, 26 May 2014 (has links)
Analysis of phase-contrast MR images yields cardiac flow information which can be manipulated to produce accurate segmentations of the aorta. New phase contrast segmentation algorithms are proposed that use mean-based calculations and least mean squared curve fitting techniques. A GPU is used to accelerate these algorithms and it is shown that it is possible to achieve up to a 2760x speedup relative to the CPU computation times. Level sets are applied to a magnitude image, where initial conditions are given by the previous segmentation algorithms. A qualitative comparison of results shows that the algorithm parallelized on the GPU appears to produce the most accurate segmentation. After segmentation, particle trace simulations are run to visualize flow patterns in the aorta. A procedure for the definition of analysis planes is proposed from which virtual particles can be emitted/collected within the vessel, which is useful for future quantification of various flow parameters. / October 2014
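Least-squares curve fitting of the kind mentioned can be sketched with a design-matrix solve. This is a generic illustration, not the thesis's GPU implementation; the data and polynomial degree are invented for the example.

```python
import numpy as np

def fit_poly(x, y, degree=2):
    """Fit y ~ polynomial(x) by linear least squares; returns coefficients
    from highest to lowest order."""
    A = np.vander(x, degree + 1)          # design matrix: columns x^2, x, 1
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

x = np.linspace(-1.0, 1.0, 50)
y = 3.0 * x ** 2 + 0.5 * x + 1.0          # noiseless quadratic test signal
c = fit_poly(x, y)
print(np.round(c, 3))
```

On a GPU, the same normal-equation arithmetic is applied to many vessel cross-sections in parallel, which is where speedups of the reported magnitude come from.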
|
425 |
Apprentissage statistique, variétés de formes et applications à la segmentation d'images (Statistical learning, shape manifolds and applications to image segmentation). Etyngier, Patrick, 21 January 2008 (has links) (PDF)
Image segmentation with shape priors has received particular attention in recent years. Most existing work relies on linearized shape spaces with small deformation modes around a mean shape, an approach that is only relevant when the shapes are fairly similar. In this thesis, we introduce a new framework in which more general shape priors can be handled. We model a category of shapes as a finite-dimensional manifold, the shape prior manifold, which we analyse from shape samples using dimensionality reduction techniques such as diffusion maps. An embedding into a reduced space is learned from the samples. However, this model provides no explicit projection operator onto the underlying manifold, and we tackle this problem. The contributions of this work fall into three parts. First, we propose different solutions to the out-of-sample problem and define three attracting forces directed towards the manifold: 1. projection to the closest point; 2. projection with the same embedding value; 3. projection at constant embedding value. Second, we introduce a shape prior term for active contours/regions: a non-linear energy term is constructed to attract shapes towards the manifold. Finally, we describe a variational framework for manifold denoising. Results on real objects such as car silhouettes and anatomical structures demonstrate the capabilities of our method.
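A minimal diffusion-maps embedding of the kind used to learn the shape manifold can be sketched as below. This is a toy version under simplifying assumptions (random vectors standing in for shape descriptors, a fixed kernel bandwidth, no out-of-sample extension), not the thesis's method.

```python
import numpy as np

def diffusion_map(points, eps=1.0, dim=2):
    """Toy diffusion-maps embedding: Gaussian kernel, row-normalised into a
    Markov transition matrix, then the top non-trivial eigenvectors used
    as reduced coordinates."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, order[1:dim + 1]].real

rng = np.random.default_rng(1)
shapes = rng.normal(size=(30, 5))              # stand-in for shape descriptors
emb = diffusion_map(shapes)
print(emb.shape)
```

The out-of-sample problem addressed in the thesis arises precisely because this eigen-decomposition only assigns coordinates to the training samples, not to a new shape.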
|
426 |
Adaptive biological image-guided radiation therapy in pharyngo-laryngeal squamous cell carcinoma. Geets, Xavier, 28 April 2008 (has links)
In recent years, the impressive progress in imaging, computational and technological fields has made possible the emergence of image-guided radiation therapy (IGRT) and adaptive radiation therapy (ART). The accuracy in radiation dose delivery reached by IMRT offers the possibility of increasing locoregional dose intensity, potentially overcoming the poor tumor control achieved by standard approaches. However, before implementing such a technique in clinical routine, particular attention has to be paid to the target volume definition and delineation procedures to avoid inadequate dosage to TVs/OARs.
In head and neck squamous cell carcinoma (HNSCC), the GTV is typically defined on CT acquired prior to treatment. However, by providing functional information about the tumor, FDG-PET might advantageously complement the classical CT scan to better define the TVs. Similarly, re-imaging the tumor with the optimal imaging modality might account for the constantly changing anatomy and tumor shape during the course of fractionated radiotherapy. Integrating this information into treatment planning might ultimately lead to a much tighter dose distribution.
From a methodological point of view, the delineation of TVs on anatomical or functional images is not a trivial task. Firstly, the poor soft-tissue contrast provided by CT results in large interobserver variability in GTV delineation. In this regard, we showed that the use of consistent delineation guidelines significantly improved consistency between observers, with both CT and MRI. Secondly, the intrinsic characteristics of PET images, including the blur effect and the high level of noise, make the detection of tumor edges arduous. In this context, we developed specific image restoration tools: edge-preserving filters for denoising, and deconvolution algorithms for deblurring. This procedure restores image quality, allowing the use of gradient-based segmentation techniques. The method was validated on phantom and patient images, and proved more accurate and reliable than threshold-based methods.
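The idea behind gradient-based segmentation of a restored PET spot can be sketched on a synthetic image: after denoising and deblurring, the tumor edge is located at the ridge of maximal intensity gradient rather than at a fixed intensity threshold. The sigmoid "spot" below is an invented stand-in for a restored PET uptake profile, not thesis data.

```python
import numpy as np

def gradient_edge(image):
    """Gradient-magnitude map; a gradient-based method places the segmenting
    contour on the ridge where this map is maximal."""
    gy, gx = np.gradient(image.astype(np.float64))
    return np.hypot(gx, gy)

# Synthetic blurred "PET spot": smooth sigmoid falloff with edge near radius 15.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
spot = 1.0 / (1.0 + np.exp(r - 15.0))
grad = gradient_edge(spot)
edge_radius = float(r[np.unravel_index(grad.argmax(), grad.shape)])
print(round(edge_radius))
```

Unlike a threshold at, say, 40% of maximum uptake, the gradient ridge does not shift when the overall image intensity or background level changes, which is one reason gradient-based methods proved more reliable here.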
Using these segmentation methods, we showed that GTVs shrank significantly during radiotherapy in patients with HNSCC, whatever the imaging modality used (MRI, CT, FDG-PET). No clinically significant difference was found between CT and MRI, while FDG-PET provided significantly smaller volumes than those based on anatomical imaging. Refining target volume delineation by means of functional and sequential imaging ultimately led to a more optimal dose distribution to TVs, with subsequent soft-tissue sparing.
In conclusion, we demonstrated that multi-modality-based adaptive planning is feasible in HN tumors and potentially opens new avenues for dose escalation strategies. As a high level of accuracy is required by such an approach, however, the delineation of TVs requires special care.
|
427 |
Image segmentation using MRFs and statistical shape modeling. Besbes, Ahmed, 13 September 2010 (has links) (PDF)
In this thesis we present a new statistical shape model and use it for prior-based image segmentation. The model is represented by a Markov random field whose graph nodes correspond to control points located on the contour of the shape, and whose edges represent the dependencies between control points. The structure of the Markov field is learned from a set of shapes, using manifold learning and unsupervised clustering techniques. The constraints between points are captured by estimating the probability density functions of normalized chord lengths. In a second step, we build a segmentation algorithm that integrates the statistical shape model and links it to the image through a region term, via the use of Voronoi diagrams. In this approach, a deformable shape contour evolves towards the object to be segmented. We also formulate a segmentation algorithm based on interest-point detectors, where the regularization term is tied to the shape prior; here the aim is to match the model to the best candidate points extracted from the image by the detector. Optimization for both algorithms is performed with recent, efficient methods. We validate our approach on several 2D and 3D datasets, for computer vision applications as well as medical image analysis.
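The normalized chord lengths whose densities the model estimates can be sketched directly: for a set of contour control points, take all pairwise distances and normalize by the longest one, which makes the descriptor invariant to scale. This is a minimal illustration with an invented toy contour, not the thesis's training pipeline.

```python
import numpy as np

def normalized_chords(points):
    """Pairwise chord lengths between contour control points, normalised by
    the longest chord so the descriptor is invariant to uniform scaling."""
    diffs = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    return d / d.max()

# Eight control points on a circle; scaling the shape changes nothing.
theta = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
chords = normalized_chords(3.0 * circle)
print(np.allclose(chords, normalized_chords(circle)))
```

In the full model, a probability density is estimated for each graph edge's normalized chord length, and these densities act as the pairwise constraints of the Markov random field.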
|
428 |
A Probabilistic Approach to Image Feature Extraction, Segmentation and Interpretation. Pal, Chris, January 2000 (has links)
This thesis describes a probabilistic approach to image segmentation and interpretation. The focus of the investigation is the development of a systematic way of combining color, brightness, texture and geometric features extracted from an image to arrive at a consistent interpretation for each pixel in the image. The contribution of this thesis is thus the presentation of a novel framework for the fusion of extracted image features producing a segmentation of an image into relevant regions. Further, a solution to the sub-pixel mixing problem is presented based on solving a probabilistic linear program. This work is specifically aimed at interpreting and digitizing multi-spectral aerial imagery of the Earth's surface. The features of interest for extraction are those of relevance to environmental management, monitoring and protection. The presented algorithms are suitable for use within a larger interpretive system. Some results are presented and contrasted with other techniques. The integration of these algorithms into a larger system is based firmly on a probabilistic methodology and the use of statistical decision theory to accomplish uncertain inference within the visual formalism of a graphical probability model.
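The sub-pixel mixing problem asks what fraction of each land-cover class is present in a single mixed pixel. A simplified stand-in for the probabilistic linear program described here is to estimate class proportions by projected gradient descent onto the probability simplex; the endmember spectra and mixing fractions below are invented for the example.

```python
import numpy as np

def unmix(pixel, endmembers, steps=500, lr=0.05):
    """Estimate class proportions for a mixed pixel: minimise the squared
    reconstruction error subject to proportions being nonnegative and
    summing to one (a crude substitute for the linear-program approach)."""
    k = endmembers.shape[0]
    p = np.full(k, 1.0 / k)
    for _ in range(steps):
        grad = 2.0 * endmembers @ (endmembers.T @ p - pixel)
        p = np.clip(p - lr * grad, 0.0, None)   # enforce nonnegativity
        p /= p.sum()                            # enforce sum-to-one
    return p

# Two hypothetical spectral endmembers (e.g. water, vegetation) mixed 70/30.
E = np.array([[0.1, 0.2, 0.9],
              [0.8, 0.6, 0.1]])
pixel = 0.7 * E[0] + 0.3 * E[1]
print(np.round(unmix(pixel, E), 2))
```

The recovered proportions form a probability distribution over classes for the pixel, which is what makes a probabilistic interpretation of mixed pixels possible downstream.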
|
429 |
Volume Visualisation Via Variable-Detail Non-Photorealistic Illustration. McKinley, Joanne, January 2002 (has links)
The rapid proliferation of 3D volume data, including MRI and CT scans, is prompting the search within computer graphics for more effective volume visualisation techniques. Partially because of the traditional association with medical subjects, concepts borrowed from the domain of scientific illustration show great promise for enriching volume visualisation. This thesis describes the first general system dedicated to creating user-directed, variable-detail, scientific illustrations directly from volume data. In particular, using volume segmentation for explicit abstraction in non-photorealistic volume renderings is a new concept. The unique challenges and opportunities of volume data require rethinking many non-photorealistic algorithms that traditionally operate on polygonal meshes. The resulting 2D images are qualitatively different from but complementary to those normally seen in computer graphics, and inspire an analysis of the various artistic implications of volume models for scientific illustration.
|
430 |
Cool-Season Moisture Delivery and Multi-Basin Streamflow Anomalies in the Western United States. Malevich, Steven Brewster, January 2017 (has links)
Widespread droughts can have a significant impact on western United States streamflow, but the causes of these events are not fully understood. This dissertation examines streamflow from multiple western US basins and establishes the robust leading modes of variability in interannual streamflow throughout the past century. I show that approximately 50% of this variability is associated with spatially widespread streamflow anomalies that are statistically independent of streamflow's response to the El Niño-Southern Oscillation (ENSO). The ENSO teleconnection accounts for approximately 25% of the interannual variability in streamflow across this network. The atmospheric circulation anomalies associated with the most spatially widespread variability involve the Aleutian low and the persistent coastal atmospheric ridge in the Pacific Northwest. I use a watershed segmentation algorithm to explicitly track the position and intensity of these features and compare their variability to the multi-basin streamflow variability. Results show that latitudinal shifts in the coastal atmospheric ridge are more strongly associated with streamflow's north-south dipole response to ENSO variability, while more spatially widespread anomalies in streamflow relate most strongly to seasonal changes in the coastal ridge intensity. This likely reflects persistent coastal ridge blocking of cool-season precipitation into western US river basins. I utilize the 35 model runs of the Community Earth System Model Large Ensemble (CESMLE) to determine whether the model ensemble simulates the anomalously strong coastal ridges and extreme widespread wintertime precipitation anomalies found in the observational record. Though there is considerable bias in the CESMLE, its runs simulate extremely widespread dry precipitation anomalies with a frequency of approximately one extreme event per century during the historical simulations (1920-2005).
These extremely widespread dry events correspond significantly with anomalously intense coastal atmospheric ridges. The results from these three papers connect widespread interannual streamflow anomalies in the western US, and especially extremely widespread streamflow droughts, with semi-permanent atmospheric ridge anomalies near the coastal Pacific Northwest. This is important to western US water managers because these widespread events appear to have been a robust feature of the past century. The semi-permanent atmospheric features associated with these widespread dry streamflow anomalies are projected to change position significantly in the next century in response to global climate change. This may change the characteristics of widespread streamflow anomalies in the western US, though my results do not show evidence of these changes within the instrumental record of the last century.
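"Leading modes of variability" in a multi-basin streamflow network are conventionally extracted with principal component analysis of a years-by-basins anomaly matrix. The sketch below illustrates the statistic (fraction of variance in the leading mode) on synthetic data with a shared widespread signal; the data are invented and this is not the dissertation's analysis code.

```python
import numpy as np

def leading_mode_fraction(X):
    """Fraction of total variance carried by the leading principal component
    of a (years x basins) anomaly matrix."""
    Xc = X - X.mean(axis=0)
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2
    return var[0] / var.sum()

rng = np.random.default_rng(2)
years, basins = 100, 12
common = rng.normal(size=(years, 1))            # shared widespread anomaly
X = 2.0 * common + rng.normal(size=(years, basins))
frac = leading_mode_fraction(X)
print(0.5 < frac < 1.0)
```

When many basins share a common anomaly, as in widespread drought years, the leading mode dominates the variance in exactly the way the roughly 50% figure quoted above describes.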
|