101

Methodology of surface defect detection using machine vision with magnetic particle inspection on tubular material / Méthodologie de détection des défauts de surface par vision artificielle avec magnetic particle inspection sur le matériel tubulaire

Mahendra, Adhiguna 08 November 2012 (has links)
[...] L’inspection des surfaces considérées est basée sur la technique d’Inspection par Particules Magnétiques (Magnetic Particle Inspection, MPI) qui révèle les défauts de surface après les traitements suivants : la surface est enduite d’une solution contenant les particules, puis magnétisée et soumise à un éclairage ultra-violet. La technique de contrôle non destructif MPI est une méthode bien connue qui permet de révéler la présence de fissures en surface d’un matériau métallique. Cependant, une fois le défaut révélé par le procédé, la détection automatique sans intervention de l’opérateur est toujours problématique et, à ce jour, l’inspection basée sur le procédé MPI des matériaux tubulaires sur les sites de production de Vallourec repose toujours sur le jugement d’un opérateur humain. Dans cette thèse, nous proposons une approche par vision artificielle pour détecter automatiquement les défauts à partir des images de la surface de tubes après traitement MPI. Nous avons développé étape par étape une méthodologie de vision artificielle, de l'acquisition d'images à la classification. [...] La première étape est la mise au point d’un prototype d'acquisition d’images de la surface des tubes. Une série d’images a tout d’abord été stockée afin de produire une base de données. La version actuelle du logiciel permet soit d’enrichir la base de données, soit d’effectuer le traitement direct d’une nouvelle image : segmentation et saisie de la géométrie (caractéristiques de courbure) des défauts. Mis à part les caractéristiques géométriques et d’intensité, une analyse multirésolution a été réalisée sur les images pour extraire des caractéristiques texturales. Enfin, la classification est effectuée selon deux classes : défauts et non-défauts. Celle-ci est réalisée avec le classificateur des forêts aléatoires (Random Forest), dont les résultats sont comparés avec les méthodes Support Vector Machine et les arbres de décision. La principale contribution de cette thèse est l'optimisation des paramètres utilisés dans les étapes de segmentation, dont ceux des filtres de morphologie mathématique, du filtrage linéaire et de la classification, avec la méthode robuste des plans d’expériences (Taguchi), très utilisée dans le secteur de la fabrication. Cette étape d’optimisation a été complétée par des algorithmes génétiques. Cette méthodologie d’optimisation des paramètres des algorithmes a permis un gain de temps et d’efficacité significatif. La seconde contribution concerne la méthode d’extraction et de sélection des caractéristiques des défauts. Au cours de cette thèse, nous avons travaillé sur deux bases de données d’images correspondant à deux types de tubes : « Tool Joints » et « Tubes Coupling ». Dans chaque cas, un tiers des images est utilisé pour l’apprentissage. Nous concluons que le classifieur de type « Random Forest », combiné avec les caractéristiques géométriques et les caractéristiques de texture extraites à partir d’une décomposition en ondelettes, donne le meilleur taux de classification pour les défauts sur des pièces de « Tool Joints » (95,5 %) (Figure 1). Dans le cas des « Tubes Coupling », le meilleur taux de classification a été obtenu par les SVM avec l’analyse multirésolution (89,2 %) (Figure 2), mais l’approche Random Forest donne un bon compromis à 82,4 %. En conclusion, la principale contrainte industrielle d’obtenir un taux de détection de défaut de 100 % est ici approchée, mais avec un taux de l’ordre de 90 %.
Les taux de mauvaises détections (faux positifs ou faux négatifs) peuvent être améliorés, leur origine étant dans l’aspect de l’usinage du tube dans certaines parties (« Hard Bending »). De plus, la méthodologie développée peut être appliquée à l’inspection, par MPI ou non, de différentes lignes de produits métalliques. / Industrial surface inspection of tubular material based on Magnetic Particle Inspection (MPI) is a challenging task. Magnetic Particle Inspection is a well-known method for Non-Destructive Testing whose goal is to detect the presence of cracks in the tubular surface. Currently, Magnetic Particle Inspection of tubular material at the Vallourec production site is still based on the human inspector's judgment. It is a time-consuming and tedious job. In addition, it is prone to error due to human eye fatigue. In this thesis we propose a machine vision approach in order to detect defects in MPI images of the tubular surface automatically, without human supervision and with the best possible detection rate. We focus on crack-like defects since they represent the major ones. In order to fulfill this objective, a methodology of machine vision techniques is developed step by step, from image acquisition to defect classification. The proposed framework was developed according to industrial constraints and standards; hence accuracy, computational speed and simplicity were very important. Based on Magnetic Particle Inspection principles, an acquisition system is developed and optimized in order to acquire tubular material images for storage or processing. The characteristics of the crack-like defects, with respect to their geometric model and curvature, are used as prior knowledge for mathematical morphology and linear filtering. After the segmentation and binarization of the image, a vast number of defect candidates exist. Aside from geometrical and intensity features, multi-resolution analysis was performed on the images to extract textural features. Finally, classification is performed with a Random Forest classifier, chosen for its robustness and speed, and compared with other classifiers such as the Support Vector Machine. The parameters for mathematical morphology, linear filtering and classification are analyzed and optimized with Design of Experiments based on the Taguchi approach and a Genetic Algorithm. The most significant parameters obtained may be analyzed and tuned further. Experiments are performed on tubular materials and evaluated for accuracy and robustness by comparing ground truth and processed images. This methodology can be replicated for different surface inspection applications, especially those related to surface crack detection.
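
A minimal sketch of the classification stage described in this abstract: wavelet-energy texture features combined with simple geometric/intensity cues, with a Random Forest compared against an SVM on a one-third training split. It assumes PyWavelets and scikit-learn; the feature set, patch inputs and parameter values are illustrative assumptions, not the thesis's actual pipeline.

```python
# Illustrative sketch (not the thesis code): wavelet-energy texture features for
# segmented defect candidates, classified with Random Forest and compared to an SVM.
# Assumes PyWavelets and scikit-learn; candidate patches and labels are hypothetical inputs.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def wavelet_energy_features(patch, wavelet="db2", levels=3):
    """Energy of detail sub-bands from a 2-D wavelet decomposition of a grayscale patch."""
    coeffs = pywt.wavedec2(patch, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:               # (cH, cV, cD) per level
        feats.extend(float(np.mean(np.square(d))) for d in detail)
    return np.array(feats)

def describe(patch):
    """Concatenate simple geometric/intensity cues with wavelet texture energies."""
    geom = [patch.mean(), patch.std(), patch.max() - patch.min()]
    return np.concatenate([geom, wavelet_energy_features(patch)])

def compare_classifiers(patches, labels):
    """One third of the samples for training (as in the abstract), two thirds for testing."""
    X = np.stack([describe(p) for p in patches])
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=2 / 3, stratify=labels)
    rf = RandomForestClassifier(n_estimators=200).fit(Xtr, ytr)
    svm = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
    return rf.score(Xte, yte), svm.score(Xte, yte)
```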
102

Off-line and On-line Affective Recognition of a Computer User through a Biosignal Processing Approach

Ren, Peng 29 March 2013 (has links)
Physiological signals, which are controlled by the autonomic nervous system (ANS), could be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The Pupil Diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not been fully investigated yet. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment (“relaxation” vs. “stress”) are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw PD signal. Then three features (PDmean, PDmax and PDWalsh) are extracted from the preprocessed PD signal for affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and subject self-evaluation, respectively. In addition, five different kinds of classifiers are implemented on the selected data, achieving average accuracies of up to 86.43% and 87.20%, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90. For the on-line affective assessment, a hard threshold is first implemented in order to remove eye blinks from the PD signal, and then a moving average window is utilized to obtain the representative value PDr for every one-second interval of PD. There are three main steps in the on-line affective assessment algorithm: preparation, feature-based decision voting and affective determination. The final results show that the accuracies are 72.30% and 73.55% for the data subsets chosen using the two data selection methods (paired t-test and subject self-evaluation, respectively). In order to further analyze the efficiency of affective recognition through the PD signal, the Galvanic Skin Response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing is only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered one of the most powerful physiological signals to involve in future automated real-time affective recognition systems, especially for detecting the “relaxation” vs. “stress” states.
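
A hedged sketch of the on-line preprocessing step described above: remove eye blinks from the pupil-diameter stream with a hard threshold and summarize each one-second window by a representative value PDr. The threshold value and sampling rate are assumptions, not values from the thesis.

```python
# Illustrative sketch (assumptions, not the thesis code): blink removal by hard
# threshold, followed by a one-second moving-average summary PDr of the PD stream.
import numpy as np

def remove_blinks(pd_samples, min_valid=2.0):
    """Treat samples below a hard threshold (blinks read as near-zero diameter) as
    invalid and replace them by the last valid sample. min_valid is an assumed value."""
    cleaned = np.asarray(pd_samples, dtype=float).copy()
    last = None
    for i, v in enumerate(cleaned):
        if v < min_valid:
            cleaned[i] = last if last is not None else np.nan
        else:
            last = v
    return cleaned

def representative_per_second(pd_samples, sampling_rate_hz=60):
    """Moving-average style summary: one PDr value per one-second interval."""
    cleaned = remove_blinks(pd_samples)
    n_full = len(cleaned) // sampling_rate_hz
    windows = cleaned[: n_full * sampling_rate_hz].reshape(n_full, sampling_rate_hz)
    return np.nanmean(windows, axis=1)
```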
103

Towards spectral mathematical morphology / Vers la morphologie mathématique spectrale

Deborah, Hilda 21 December 2016 (has links)
En fournissant, en plus de l'information spatiale, une mesure spectrale en fonction des longueurs d'onde, l'imagerie hyperspectrale s'enorgueillit d'atteindre une précision bien plus importante que l'imagerie couleur. Grâce à cela, elle a été utilisée en contrôle qualité, inspection de matériaux, etc. Cependant, pour exploiter pleinement ce potentiel, il est important de traiter la donnée spectrale comme une mesure, d'où la nécessité de la métrologie, pour laquelle exactitude, incertitude et biais doivent être maîtrisés à tous les niveaux de traitement. Face à cet objectif, nous avons choisi de développer une approche non linéaire, basée sur la morphologie mathématique, et de l'étendre au domaine spectral par le biais d'une relation d'ordre spectral basée sur les fonctions de distance. Une nouvelle fonction de distance spectrale et une nouvelle relation d'ordonnancement sont ainsi proposées. De plus, un nouvel outil d'analyse basé sur les histogrammes de différences spectrales a été développé. Afin d'assurer la validité des opérateurs, une validation théorique rigoureuse et une évaluation métrologique ont été mises en œuvre à chaque étape du développement. Des protocoles d'évaluation de la qualité des traitements morphologiques sont proposés, exploitant des jeux de données artificielles pour la validation théorique, des ensembles de données dont certaines caractéristiques sont connues pour évaluer la robustesse et la stabilité, et des jeux de données de cas réels pour prouver l'intérêt des approches en contexte applicatif. Les applications sont développées dans le contexte du patrimoine culturel pour l'analyse de peintures et de pigments. / Providing not only spatial information but also a spectral measure as a function of wavelength, hyperspectral imaging achieves a much greater accuracy than traditional color imaging. For this capability, hyperspectral imaging has been employed for quality control and inspection of materials in various fields. However, to fully exploit this potential, it is important to process the spectral data as a measure. This induces the need for metrology, where accuracy, uncertainty, and bias are managed at every level of processing. Aiming at developing a metrological image processing framework for spectral data, we chose to develop a nonlinear approach using the mathematical morphology framework and extended it to the spectral domain by means of a distance-based ordering relation. A novel spectral distance function and a spectral ordering relation are proposed, in addition to a new analysis tool based on histograms of spectral differences. To ensure the validity of the spectral mathematical morphology framework, rigorous theoretical validation and metrological assessment are carried out at each development stage. Thus, protocols for quality assessment of spectral image processing tools are developed. These protocols consist of artificial datasets to completely validate the theoretical requirements, datasets with known characteristics to assess robustness and stability, and datasets from real cases to prove the usefulness of the framework in an applicative context. The application tasks themselves are within the cultural heritage domain, where the target images come from pigments and paintings. / Hyperspektral avbildning muliggjør mye mer nøyaktige målinger enn tradisjonelle gråskala og fargebilder, gjennom både høy romlig og spektral oppløsning (funksjon av bølgelengde).
På grunn av dette har hyperspektral avbildning blitt anvendt i økende grad i ulike applikasjoner som kvalitetskontroll og inspeksjon av materialer. Men for å fullt ut utnytte sitt potensiale, er det viktig å være i stand til å behandle spektrale bildedata som målinger på en gyldig måte. Dette induserer behovet for metrologi, der nøyaktighet, usikkerhet og skjevhet blir adressert og kontrollert på alle nivå av bildebehandlingen. Med sikte på å utvikle et metrologisk rammeverk for spektral bildebehandling valgte vi en ikke-lineær metodikk basert på det etablerte matematisk morfologi-rammeverket. Vi har utvidet dette rammeverket til det spektrale domenet ved hjelp av en avstandsbasert sorteringsrelasjon. En ny spektral avstandsfunksjon og nye spektrale sorteringsrelasjoner ble foreslått, samt nye verktøy for spektral bildeanalyse basert på histogrammer av spektrale forskjeller. For å sikre gyldigheten av det nye spektrale rammeverket for matematisk morfologi, har vi utført en grundig teoretisk validering og metrologisk vurdering på hvert trinn i utviklingen. Dermed er også nye protokoller for kvalitetsvurdering av spektrale bildebehandlingsverktøy utviklet. Disse protokollene består av kunstige datasett for å validere de teoretiske måletekniske kravene, bildedatasett med kjente egenskaper for å vurdere robustheten og stabiliteten, og datasett fra reelle anvendelser for å bevise nytten av rammeverket i en anvendt sammenheng. De valgte anvendelsene er innenfor kulturminnefeltet, hvor de analyserte bildene er av pigmenter og malerier.
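
A minimal sketch of the core idea described above, namely extending erosion and dilation to spectral pixels through a distance-based ordering. The Euclidean distance and the single reference spectrum are assumptions; the thesis proposes its own spectral distance function and ordering relation.

```python
# Minimal sketch (not the thesis's operators): order pixel spectra by their distance
# to a reference spectrum and take the farthest/closest spectrum in each window as
# the dilation/erosion result. Distance choice and reference are assumptions.
import numpy as np

def spectral_dilate_erode(cube, reference, radius=1):
    """cube: (H, W, B) hyperspectral image; reference: (B,) reference spectrum.
    Returns (dilation, erosion), each (H, W, B): per square window, the pixel spectrum
    farthest from (dilation) or closest to (erosion) the reference."""
    H, W, B = cube.shape
    dist = np.linalg.norm(cube - reference, axis=2)          # scalar ordering key per pixel
    dil = np.empty_like(cube)
    ero = np.empty_like(cube)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            window = dist[i0:i1, j0:j1]
            fi, fj = np.unravel_index(np.argmax(window), window.shape)
            ci, cj = np.unravel_index(np.argmin(window), window.shape)
            dil[i, j] = cube[i0 + fi, j0 + fj]
            ero[i, j] = cube[i0 + ci, j0 + cj]
    return dil, ero
```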
104

Rozpoznávání topologických informací z plánu křižovatky / Topology Recognition from Crossroad Plan

Huták, Petr January 2016 (has links)
This master's thesis describes the research, design and development of a system for topology recognition from a crossroad plan. It explains the methods used for image processing, image segmentation and object recognition. It describes approaches to processing maps represented by raster images, as well as the target software in which the final product of the practical part of the project will be integrated. The thesis focuses mainly on comparing different approaches to feature extraction from raster maps and to determining their semantic meaning. The practical part of the project is implemented in C# with the OpenCV library.
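
One possible way to recover simple topology from a raster crossroad plan, shown only as an illustration of the task described above (the thesis itself works in C# with OpenCV and compares several approaches): binarize the roadway, skeletonize it, and count skeleton endpoints as junction arms. The color threshold and the 8-neighbour endpoint test are assumptions.

```python
# Hedged illustration, not the thesis's method: roadway binarization + skeletonization,
# with skeleton endpoints taken as a rough count of crossroad arms.
import numpy as np
from skimage.morphology import skeletonize

def crossroad_arms(plan_rgb, road_threshold=100):
    """plan_rgb: (H, W, 3) uint8 raster plan where roadways are assumed to be dark.
    Returns the number of skeleton endpoints, a rough count of crossroad arms."""
    gray = plan_rgb.mean(axis=2)
    road_mask = gray < road_threshold                # dark pixels assumed to be roadway
    skel = skeletonize(road_mask)
    padded = np.pad(skel, 1)
    endpoints = 0
    for i, j in zip(*np.nonzero(padded)):
        neighbours = padded[i - 1:i + 2, j - 1:j + 2].sum() - 1
        if neighbours == 1:                          # exactly one neighbour -> line end
            endpoints += 1
    return endpoints
```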
105

Interaktivní segmentace popředí/pozadí na mobilním telefonu / Interactive Foreground/Background Segmentation on Mobile Phone

Studený, Petr January 2015 (has links)
This thesis deals with the problem of foreground extraction on mobile devices. The main goal of the project is to find or design segmentation methods for separating a user-selected object from an image (or video). The main requirements for these methods are processing time and segmentation quality. Some existing solutions to this problem are mentioned and their usability on mobile devices is discussed. A mobile application is created within the project, demonstrating the implemented real-time foreground extraction algorithm.
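
As an illustration of the interactive foreground/background problem discussed above (not necessarily the algorithm chosen in the thesis), one well-known method is OpenCV's GrabCut, seeded with a user-drawn rectangle around the object of interest.

```python
# Illustrative use of OpenCV GrabCut for user-guided foreground extraction;
# the rectangle seed and iteration count are assumptions.
import numpy as np
import cv2

def extract_foreground(image_bgr, rect, iterations=5):
    """image_bgr: uint8 BGR image; rect: (x, y, w, h) user selection.
    Returns the image with background pixels zeroed out."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)        # internal GMM state required by OpenCV
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model, iterations,
                cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    return image_bgr * fg[:, :, None]
```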
106

Automated lung screening system of multiple pathological targets in multislice CT / Système automatisé de dépistage pulmonaire de multiples cibles pathologiques en tomodensitométrie multicoupe

Chang Chien, Kuang Che 30 September 2011 (has links)
Cette recherche vise à développer un système de diagnostic assisté par ordinateur pour la détection automatique et la classification des pathologies du parenchyme pulmonaire telles que les pneumonies interstitielles idiopathiques et l'emphysème, en tomodensitométrie multicoupe. L’approche proposée repose sur la morphologie mathématique 3-D, l'analyse de texture et la logique floue, et peut être divisée en quatre étapes : (1) un schéma de décomposition multi-résolution basé sur un filtre morphologique 3-D est exploité pour discriminer les régions pulmonaires selon différentes échelles d’analyse. (2) Un partitionnement spatial supplémentaire du poumon basé sur la texture du tissu pulmonaire a été introduit afin de renforcer la séparation spatiale entre les motifs extraits au même niveau de résolution dans la pyramide de décomposition. Puis, (3) une structure d'arbre hiérarchique a été construite pour décrire la relation d’adjacence entre les motifs à différents niveaux de résolution, et pour chaque motif, six fonctions d'appartenance floue ont été établies pour attribuer une probabilité d'association avec un tissu normal ou une cible pathologique. Enfin, (4) une étape de décision exploite les classifications par la logique floue afin de sélectionner la classe cible de chaque motif du poumon parmi les catégories suivantes : normal, emphysème, fibrose/rayon de miel, et verre dépoli. La validation expérimentale du système développé a permis de définir des spécifications relatives aux valeurs recommandées pour le nombre de niveaux de résolution, NRL = 12, et le protocole d'acquisition comportant le noyau de reconstruction “LUNG” / ”BONPLUS” et des collimations fines (1,25 mm ou moins). Elle souligne aussi la difficulté d'évaluer quantitativement la performance de l'approche proposée en l'absence d'une vérité terrain, notamment une évaluation volumétrique, la sélection large des bords de la pathologie, et la distinction entre la fibrose et les structures (vasculaires) de haute densité. / This research aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on 3-D mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchical tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG).
The experimental validation of the developed CAD system allowed defining some specifications related to the recommended values for the number of resolution levels, NRL = 12, and the CT acquisition protocol, including the “LUNG” / ”BONPLUS” reconstruction kernel and thin collimations (1.25 mm or less). It also stresses the difficulty of quantitatively assessing the performance of the proposed approach in the absence of a ground truth, such as volumetric assessment, wide selection of pathology margins, and the distinction between fibrosis and high-density (vascular) structures.
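
A simplified sketch of the fuzzy decision step (4) described above: each lung pattern carries a membership degree per target class, and the decision picks the class with the highest degree. The triangular membership functions, the single mean-attenuation feature and the Hounsfield ranges below are placeholders, not the six functions established in the thesis.

```python
# Toy sketch of fuzzy membership assignment followed by a max-membership decision;
# all numeric ranges are assumed placeholders.
import numpy as np

CLASSES = ["normal", "emphysema", "fibrosis/honeycombing", "ground glass"]

def triangular(x, a, b, c):
    """Membership that is 0 at a and c, and 1 at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def classify_pattern(mean_hu):
    """Decision from a single feature (mean attenuation in Hounsfield units);
    the real system combines several features across resolution levels."""
    memberships = {
        "emphysema":             triangular(mean_hu, -1100, -950, -850),
        "normal":                triangular(mean_hu, -950, -850, -700),
        "ground glass":          triangular(mean_hu, -800, -650, -450),
        "fibrosis/honeycombing": triangular(mean_hu, -600, -300, 0),
    }
    return max(memberships, key=memberships.get), memberships
```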
107

Detekce a sledování malých pohybujících se objektů / Detection and Tracking of Small Moving Objects

Filip, Jan Unknown Date (has links)
This thesis deals with the detection and tracking of small moving objects in images from a static camera. It gives a general overview of methods and approaches to object detection and tracking, and describes some alternative approaches to the overall solution. It also includes basic definitions, such as noise, convolution and mathematical morphology. The work describes Bayesian filtering and the Kalman filter, as well as the theory of wavelets, wavelet filters and transforms. It deals with different ways of detecting blobs. The thesis presents the design and implementation of an application based on wavelet filters and the Kalman filter. Several methods of background subtraction are implemented and compared by testing. The tests and the application are designed to detect vehicles moving far away (at least 200 m from the camera).
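
A sketch in the spirit of the pipeline described above: background subtraction to find small moving blobs and a Kalman filter to track one of them. The MOG2 subtractor and the constant-velocity state model are illustrative choices, not necessarily those made in the thesis.

```python
# Hedged sketch: OpenCV background subtraction + a constant-velocity Kalman tracker
# for the largest moving blob per frame.
import numpy as np
import cv2

def make_tracker():
    kf = cv2.KalmanFilter(4, 2)                      # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track_largest_blob(frames):
    """frames: iterable of grayscale images from a static camera.
    Yields the (x, y) of the largest moving blob, Kalman-predicted when no blob is found."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kf = make_tracker()
    for frame in frames:
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        prediction = kf.predict()
        if contours:
            (x, y), _ = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
            kf.correct(np.array([[x], [y]], np.float32))
            yield float(x), float(y)
        else:
            yield float(prediction[0, 0]), float(prediction[1, 0])
```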
108

Estudo de porosidade por processamento de imagens aplicada a patologias do concreto / Computer vision system for identification of alkali aggregate in concrete image

Rodrigo Erthal Wilson 11 August 2015 (has links)
A reação álcali-agregado - RAA é uma patologia de ação lenta que tem sido observada em construções de concreto e é capaz de comprometer suas estruturas. Sabe-se que a reação álcali-agregado é um fenômeno bastante complexo em virtude da grande variedade de rochas na natureza que são empregadas como agregados no preparo do concreto, podendo cada mineral utilizado afetar de forma distinta a reação ocorrida. Em função dos tipos de estrutura, das suas condições de exposição e dos materiais empregados, a RAA não se comporta sempre da mesma forma; em virtude disto, a pesquisa constante neste tema é necessária para o meio técnico e a sociedade. Pesquisas laboratoriais, empíricas e experimentais têm sido rotina em muitos dos estudos da RAA, dada ainda a carência de certas definições mais precisas a respeito dos métodos de ensaio, mas também em função da necessidade do melhor conhecimento dos materiais de uso em concretos, como os agregados, cimentos, adições e aditivos, entre outros, e do comportamento da estrutura. Embora técnicas de prevenção possam reduzir significativamente a incidência da RAA, muitas estruturas foram construídas antes que tais medidas fossem conhecidas, havendo no Brasil vários casos de estruturas afetadas, sendo custosos os reparos dessas estruturas. Em estudos recentes sobre o tamanho das partículas de álcali-agregado e sua distribuição foi concluído que o tamanho do agregado está relacionado com o potencial danoso da RAA. Existem ainda indícios de que o tamanho e a distribuição dos poros do concreto também sejam capazes de influenciar o potencial reativo do concreto. Neste trabalho desenvolvemos um Sistema de Visão Artificial (SVA) que, com o uso de técnicas de Processamento de Imagens, é capaz de identificar, em imagens de concreto, agregados e poros que atendam, em sua forma, às especificações do usuário, possibilitando o cálculo da porosidade e produzindo imagens segmentadas a partir das quais será possível extrair dados relativos à geometria desses elementos. São feitas duas abordagens para a obtenção das imagens: uma por escâner comercial, que possui vantagens relacionadas à facilidade de aquisição do equipamento, e outra por microtomógrafo. Uma vez obtidas informações sobre as amostras de concreto, estas podem ser utilizadas para pesquisar a RAA, comparar estruturas de risco com estruturas antigas de forma a melhorar a previsão de risco de ocorrência, bem como ser aplicadas ao estudo de outras patologias do concreto menos comuns no nosso país, como o efeito gelo/degelo. / The alkali-aggregate reaction (AAR) is a slow-acting pathology observed in concrete constructions that can compromise their structures. It is known that the alkali-aggregate reaction is a very complex phenomenon because of the great variety of rocks in nature that are used as aggregates for concrete, and each mineral used affects the reaction differently. Depending on the type of structure, its exposure conditions and the materials used, this phenomenon does not always behave the same way; because of this, constant research in this area is needed by the technical community and by society. Laboratory, empirical and experimental research has been routine in many AAR studies, given the lack of more precise definitions concerning the testing methods, but also because of the need for a better understanding of the materials used in concrete, such as aggregates, cement, additions and admixtures, and of the behavior of the structure.
Although prevention techniques can significantly reduce the incidence of AAR, many structures were built before such measures were known; several cases of affected structures have been found in Brazil, all requiring costly repairs. Recent studies on alkali-aggregate particle size and distribution concluded that aggregate size is related to the damaging potential of AAR. There are also indications that the size and distribution of concrete pores are capable of influencing the reactive potential of the concrete. In the present work we developed an Artificial Vision System (AVS) that uses image processing techniques to identify aggregates and pores in hardened concrete images, enabling the calculation of porosity and producing segmented images from which data about the geometry of these elements can be extracted. Two approaches were used to obtain the images: one with a commercial scanner, which has the advantage of readily available equipment, and the other with a micro-CT scanner. Once information on the concrete samples is obtained, it can be used to investigate AAR, to compare at-risk structures with older ones so as to improve the prediction of the risk of occurrence, and to study other concrete pathologies less common in our country, such as freeze/thaw damage.
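
A hedged sketch of the porosity measurement described above (not the thesis's SVA/AVS): segment dark regions of a grayscale concrete-section image as pore candidates, keep only those whose size and roundness meet user-given specifications, and report the pore area fraction. The Otsu threshold and the shape criteria are assumptions.

```python
# Illustrative porosity estimate from a segmented concrete-section image;
# thresholding method and shape filters are assumed, not taken from the thesis.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def porosity(gray_image, min_area=20, min_roundness=0.5):
    """gray_image: 2-D array from a scanned concrete section or a micro-CT slice.
    Returns (porosity_fraction, pore_mask)."""
    pores = gray_image < threshold_otsu(gray_image)          # pores assumed darker than paste
    labeled = label(pores)
    accepted = np.zeros_like(pores, dtype=bool)
    for region in regionprops(labeled):
        roundness = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        if region.area >= min_area and roundness >= min_roundness:
            accepted[labeled == region.label] = True
    return accepted.mean(), accepted
```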
109

Contribució a l'estudi de les uninormes en el marc de les equacions funcionals. Aplicacions a la morfologia matemàtica / Contribution to the study of uninorms in the framework of functional equations. Applications to mathematical morphology

Ruiz Aguilera, Daniel 04 June 2007 (has links)
Les uninormes són uns operadors d'agregació que, per la seva definició, es poden considerar com a conjuncions o disjuncions, i que han estat aplicades a camps molt diversos. En aquest treball s'estudien algunes equacions funcionals que tenen com a incògnites les uninormes, o operadors definits a partir d'elles. Una d'elles és la distributivitat, que és resolta per les classes d'uninormes conegudes, solucionant, en particular, un problema obert en la teoria de l'anàlisi no-estàndard. També s'estudien les implicacions residuals i fortes definides a partir d'uninormes, trobant solució a la distributivitat d'aquestes implicacions sobre uninormes. Com a aplicació d'aquests estudis, es revisa i s'amplia la morfologia matemàtica borrosa basada en uninormes, que proporciona un marc inicial favorable per a un nou enfocament en l'anàlisi d'imatges, que haurà de ser estudiat en més profunditat. / Las uninormas son unos operadores de agregación que, por su definición, se pueden considerar como conjunciones o disyunciones y que han sido aplicados a campos muy diversos. En este trabajo se estudian algunas ecuaciones funcionales que tienen como incógnitas las uninormas, o operadores definidos a partir de ellas. Una de ellas es la distributividad, que se resuelve para las clases de uninormas conocidas, solucionando, en particular, un problema abierto en la teoría del análisis no estándar. También se estudian las implicaciones residuales y fuertes definidas a partir de uninormas, encontrando solución a la distributividad de estas implicaciones sobre uninormas. Como aplicación de estos estudios, se revisa y amplía la morfología matemática borrosa basada en uninormas, que proporciona un marco inicial favorable para un nuevo enfoque en el análisis de imágenes, que tendrá que ser estudiado en más profundidad. / Uninorms are aggregation operators that, due to their definition, can be considered as conjunctions or disjunctions, and they have been applied to very different fields. In this work, some functional equations are studied, involving uninorms, or operators defined from them, as unknowns. One of them is the distributivity equation, which is solved for all the known classes of uninorms, providing, in particular, the solution to an open problem in non-standard analysis theory. Residual implications, as well as strong implications defined from uninorms, are studied, obtaining the solution to the distributivity of these implications over uninorms. As an application of all these studies, the fuzzy mathematical morphology based on uninorms is revised and studied in depth, yielding a new framework in image processing that will have to be studied in more detail.
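
For readers unfamiliar with the connection between uninorms and morphology mentioned above, one standard way (a sketch under common conventions, not necessarily the exact definitions adopted in the thesis) of building fuzzy dilation and erosion from a conjunctive uninorm U and its residual implication is:

```latex
% Sketch of uninorm-based fuzzy morphology: U is a conjunctive uninorm, I_U its
% residual implication, A a grayscale image and B a structuring element, all valued in [0,1].
\[
  I_U(x, y) = \sup\{\, z \in [0,1] : U(x, z) \le y \,\},
\]
\[
  D_U(A, B)(y) = \sup_{x} \, U\bigl(B(x - y), A(x)\bigr),
  \qquad
  E_{I_U}(A, B)(y) = \inf_{x} \, I_U\bigl(B(x - y), A(x)\bigr).
\]
```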
110

Εντοπισμός θέσης υπομικροσυστοιχιών και spots σε ψηφιακές εικόνες μικροσυστοιχιών / Subgrid and spot localization in digital microarray images

Μαστρογιάννη, Αικατερίνη 05 January 2011 (has links)
Η τεχνολογία των DNA μικροσυστοιχιών είναι μια υψηλής απόδοσης τεχνική που καθορίζει το κατά πόσο ένα κύτταρο μπορεί να ελέγξει, ταυτόχρονα, την έκφραση ενός πολύ μεγάλου αριθμού γονιδίων. Οι DNA μικροσυστοιχίες χρησιμοποιούνται για την παρακολούθηση και τον έλεγχο των αλλαγών που υφίστανται τα επίπεδα της γονιδιακής έκφρασης λόγω περιβαλλοντικών συνθηκών ή των αλλαγών που λαμβάνουν χώρα σε ασθενή κύτταρα σε σχέση με τα υγιή, χρησιμοποιώντας εξελιγμένες μεθόδους επεξεργασίας πληροφοριών. Εξαιτίας του τρόπου με τον οποίον παράγονται οι μικροσυστοιχίες, κατά την πειραματική επεξεργασία τους, εμφανίζεται ένας μεγάλος αριθμός διαδικασιών που εισάγουν σφάλματα, γεγονός που αναπόφευκτα οδηγεί στην δημιουργία υψηλού επιπέδου θορύβου και σε κατασκευαστικά προβλήματα στα προκύπτοντα δεδομένα. Κατά την διάρκεια των τελευταίων δεκαπέντε ετών, έχουν προταθεί από αρκετούς ερευνητές, πολλές και ικανές μέθοδοι που δίνουν λύσεις στο πρόβλημα της ενίσχυσης και της βελτίωσης των εικόνων μικροσυστοιχίας. Παρά το γεγονός της ευρείας ενασχόλησης των ερευνητών με τις μεθόδους επεξεργασίας των εικόνων μικροσυστοιχίας, η διαδικασία βελτίωσης τους αποτελεί ακόμη, ένα θέμα που προκαλεί ενδιαφέρον καθώς η ανάγκη για καλύτερα αποτελέσματα δεν έχει μειωθεί. Στόχος της διδακτορικής διατριβής είναι να συνεισφέρει σημαντικά στην προσπάθεια βελτίωσης των αποτελεσμάτων προτείνοντας μεθόδους ψηφιακής επεξεργασίας εικόνας που επιφέρουν βελτίωση της ποιότητας των εικόνων μέσω της μείωσης των συνιστωσών του θορύβου και της τεμαχιοποίησης της εικόνας. Πιο συγκεκριμένα, στα πλαίσια εκπόνησης της διατριβής παρουσιάζεται μια νέα αυτόματη μέθοδος εντοπισμού της θέσης των υπομικροσυστοιχιών σκοπός της οποίας είναι να καλυφθεί εν μέρει το κενό που υπάρχει στην βιβλιογραφία των μικροσυστοιχιών για το βήμα της προεπεξεργασίας που αφορά στην αυτόματη εύρεση της θέσης των υπομικροσυστοιχιών σε μια μικροσυστοιχία. Το βήμα αυτό της προεπεξεργασίας, σπανίως, λαμβάνεται υπόψιν καθώς στις περισσότερες εργασίες σχετικές με τις μικροσυστοιχίες, γίνεται μια αυθαίρετη υπόθεση ότι οι υπομικροσυστοιχίες έχουν με κάποιον τρόπο ήδη εντοπιστεί. Στα πραγματικά συστήματα αυτόματης ανάλυσης της εικόνας μικροσυστοιχίας, την αρχική εκτίμηση της θέσης των υπομικροσυστοιχιών, συνήθως, ακολουθεί η διόρθωση που πραγματοποιείται σε κάθε μια από τις θέσεις αυτές από τους χειριστές των συστημάτων. Η αυτοματοποίηση της εύρεσης θέσης των υπομικροσυστοιχιών οδηγεί σε πιο γρήγορους και ακριβείς υπολογισμούς που αφορούν στην πληροφορία που προσδιορίζεται από την εικόνα μικροσυστοιχίας. Στην συνέχεια της διατριβής, παρουσιάζεται μια συγκριτική μελέτη για την αποθορυβοποίηση των εικόνων μικροσυστοιχίας χρησιμοποιώντας τον μετασχηματισμό κυματιδίου και τα χωρικά φίλτρα ενώ επιπλέον με την βοήθεια τεχνικών της μαθηματικής μορφολογίας πραγματοποιείται δραστική μείωση του θορύβου που έχει την μορφή «αλάτι και πιπέρι». Τέλος, στα πλαίσια της εκπόνησης της διδακτορικής διατριβής, παρουσιάζεται μια μέθοδος κατάτμησης των περιοχών των spot των μικροσυστοιχιών, βασιζόμενη στον αλγόριθμο Random Walker. Κατά την πειραματική διαδικασία επιτυγχάνεται επιτυχής κατηγοριοποίηση των spot, ακόμα και στην περίπτωση εικόνων μικροσυστοιχίας με σοβαρά προβλήματα (θόρυβος, κατασκευαστικά λάθη, λάθη χειρισμού κατά την διαδικασία κατασκευής της μικροσυστοιχίας κ.α.), απαιτώντας σαν αρχική γνώση μόνο ένα μικρό αριθμό από εικονοστοιχεία προκειμένου να επιτευχθεί υψηλής ποιότητας κατάτμηση εικόνας. 
Τα πειραματικά αποτελέσματα συγκρίνονται ποιοτικά με αυτά που προκύπτουν με την εφαρμογή του μοντέλου κατάτμησης Chan-Vese το οποίο χρησιμοποιεί μια αρχική υπόθεση των συνόρων που υπάρχουν μεταξύ των ομάδων προς ταξινόμηση, αποδεικνύοντας ότι η ακρίβεια με την οποία η προτεινόμενη μέθοδος ταξινομεί τις περιοχές των spot στην σωστή κατηγορία σε μια μικροσυστοιχία, είναι σαφώς καλύτερη και πιο ακριβής. / DNA microarray technology is a high-throughput technique that determines how a cell can control the expression of large numbers of genes simultaneously. Microarrays are used to monitor changes in the expression levels of genes in response to changes in environmental conditions, or in healthy versus diseased cells, by using advanced information processing methods. Due to the nature of the acquisition process, microarray experiments involve a large number of error-prone procedures that lead to a high level of noise and structural problems in the resulting data. During the last fifteen years, robust methods have been proposed by many researchers, resulting in several solutions for the enhancement of microarray images. Although microarray image analysis has been studied extensively, the enhancement process is still an open issue, as the need for even better results has not decreased. The goal of this PhD thesis is to contribute significantly to the above effort by proposing enhancement methods (denoising, segmentation) for microarray image analysis. More specifically, a novel automated subgrid detection method is presented, introducing subgrid localization as a pre-processing step. This step is rarely taken into consideration, as most microarray enhancement methods arbitrarily assume that the subgrids have already been identified. The automation of subgrid detection leads to faster and more accurate information extraction from microarray images. The thesis then presents a comparative framework for microarray image denoising that includes wavelet and spatial filters, and uses mathematical morphology methods to reduce “salt-and-pepper”-like noise in microarray images. Finally, a method for microarray spot segmentation is proposed, based on the Random Walker algorithm. During the experimental process, accurate spot segmentation is obtained even for highly distorted images, requiring only an initial annotation of a small number of pixels to achieve high-quality segmentation. The experimental results are qualitatively compared to the Chan-Vese segmentation model, showing that the proposed method delineates spot regions and assigns them to the correct category more accurately than the borders defined by the compared method.
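
A hedged sketch of Random Walker spot segmentation as described above (not the thesis code): seed a few pixels at the expected spot centre as "spot" and the patch border as "background", then let scikit-image's random_walker label the remaining pixels. The seed geometry and the beta value are assumptions.

```python
# Illustrative Random Walker segmentation of a single microarray spot patch.
import numpy as np
from skimage.segmentation import random_walker

def segment_spot(cell, centre_radius=2, beta=130):
    """cell: 2-D grayscale patch containing a single microarray spot.
    Returns a boolean mask of the spot region."""
    labels = np.zeros(cell.shape, dtype=np.uint8)          # 0 = unlabeled
    cy, cx = np.array(cell.shape) // 2
    labels[cy - centre_radius:cy + centre_radius + 1,
           cx - centre_radius:cx + centre_radius + 1] = 1  # spot seeds at the centre
    labels[0, :] = labels[-1, :] = labels[:, 0] = labels[:, -1] = 2  # background seeds on the border
    result = random_walker(cell.astype(float), labels, beta=beta)
    return result == 1
```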
