About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Detecção de estruturas finas e ramificadas em imagens usando campos aleatórios de Markov e informação perceptual / Detection of thin and ramified structures in images using Markov random fields and perceptual information

Talita Perciano Costa Leite, 28 August 2012
Line-like, curve-like, elongated, and ramified structures are commonly found in many known ecosystems. In biomedicine and the biosciences, for instance, many applications can be observed, which is precisely why extracting this kind of structure from images is a constant challenge in image analysis. Several difficulties are involved in the process. The spectral and spatial characteristics of these structures are usually complex and variable, and the thinnest ones are so "fragile" that almost any processing applied to the image can easily destroy important information. Another very common problem is the absence of part of the structures, whether due to low resolution, acquisition problems, or occlusion. This work aims to explore, describe, and develop techniques for the detection/segmentation of thin and ramified structures. Different methods are combined in search of a better topological and perceptual representation of the structures and, consequently, better results. Graphs are used to represent the structures, a data structure that has been used successfully in the literature to solve many image processing and analysis problems. Because of the fragility of the structures under study, computer vision principles are applied in addition to the usual image processing techniques, seeking a better "perceptual understanding" of these structures in the image. This perceptual information, together with contextual information about the structures, is fed into a Markov random field model, and the final detection is obtained through an optimization process. Finally, we also propose the combined, simultaneous use of different image modalities. The developed framework was implemented as a software package and evaluated in two applications: extraction of road networks from satellite images and extraction of plant roots from soil-profile images. For road extraction, the proposed approach outperforms an existing method from the literature, and the proposed fusion technique yields a significant further improvement. Original and promising results are presented for the extraction of plant roots.
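
As a rough illustration of the MRF-plus-optimization step this abstract describes, the sketch below labels candidate curve segments on a graph with a binary Potts-style Markov random field minimized by iterated conditional modes. It is a minimal sketch, not the thesis implementation: the unary costs (standing in for the perceptual evidence), the neighbour structure, and the smoothness weight `beta` are all assumptions.

```python
# Minimal sketch: binary MRF over a graph of candidate segments, solved by ICM.
import numpy as np

def icm_mrf(unary, edges, beta=1.0, n_iter=10):
    """unary: (n, 2) label costs for {0: background, 1: structure};
    edges: iterable of (i, j) neighbour pairs from the structure graph."""
    n = len(unary)
    labels = np.argmin(unary, axis=1)          # greedy start from unary costs
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(n_iter):                    # iterated conditional modes
        changed = False
        for i in range(n):
            # Potts pairwise term: pay beta for each disagreeing neighbour
            cost = [unary[i, l] + beta * sum(labels[j] != l for j in nbrs[i])
                    for l in (0, 1)]
            new = int(np.argmin(cost))
            changed |= new != labels[i]
            labels[i] = new
        if not changed:
            break
    return labels
```

ICM only finds a local minimum and the thesis's optimizer may differ, but the energy structure (a data term plus contextual smoothness over the graph) is the part the abstract emphasizes.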
412

Corte normalizado em grafos: um algoritmo aglomerativo para segmentação de imagens de colônias de bactérias / Normalized cut on graphs: an agglomerative algorithm for bacterial colony image segmentation

Costa, André Luis da, 1982- 22 August 2018
Advisor: Marco Antonio Garcia de Carvalho / Master's thesis, Universidade Estadual de Campinas, Faculdade de Tecnologia / Abstract: The problem of segmenting bacterial colonies in Petri dishes has characteristics quite different from those found, for example, in natural-image segmentation. The main one is the large number of colonies that can appear on a single plate, so it is essential that the segmentation algorithm be capable of partitioning the image into a very large number of regions. This extreme scenario is ideal for analyzing the limitations of segmentation algorithms. In fact, this study shows that the original normalized cut algorithm, which is based on spectral graph theory, is inappropriate for applications that require segmentation into a large number of regions. However, using the normalized cut criterion to segment bacterial colony images is still possible thanks to a new algorithm introduced in this work. The new algorithm is based on hierarchical clustering of the graph nodes instead of spectral theory concepts. Experiments also show that bipartitioning a graph with the new algorithm yields an average normalized cut value about 40 times smaller than bipartitioning with the spectral algorithm. / Master's degree in Technology (Tecnologia e Inovação)
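
For readers unfamiliar with the criterion, the following is a simplified sketch, not the dissertation's algorithm: it evaluates the normalized cut value of a bipartition and performs a naive bottom-up merge of graph nodes. The affinity-based merging rule and the stopping condition are assumptions made for the example.

```python
# Sketch: normalized-cut value of a bipartition, plus greedy agglomeration.
import numpy as np

def ncut_value(W, mask):
    """Normalized cut of the bipartition (mask, ~mask) of affinity matrix W."""
    cut = W[mask][:, ~mask].sum()
    assoc_a = max(W[mask].sum(), 1e-12)   # assoc(A, V): links from A to all nodes
    assoc_b = max(W[~mask].sum(), 1e-12)
    return cut / assoc_a + cut / assoc_b

def agglomerate(W, n_regions):
    """Greedy bottom-up merging of graph nodes into n_regions clusters."""
    clusters = [{i} for i in range(len(W))]
    while len(clusters) > n_regions:
        best, pair = -np.inf, None
        for a in range(len(clusters)):        # pick the strongest-linked pair
            for b in range(a + 1, len(clusters)):
                w = W[np.ix_(sorted(clusters[a]), sorted(clusters[b]))].sum()
                if w > best:
                    best, pair = w, (a, b)
        a, b = pair
        clusters[a] |= clusters.pop(b)        # merge cluster b into cluster a
    return clusters
```

Even this toy version shows the appeal of the hierarchical route: no eigenvector computation is needed, so nothing prevents the partition count from growing very large.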
413

Semi-automatic classification of remote sensing images / Classificação semi-automática de imagens de sensoriamento remoto

Santos, Jefersson Alex dos, 1984- 25 March 2013
Advisors: Ricardo da Silva Torres, Alexandre Xavier Falcão / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação / Abstract: A huge effort has been made in the development of image classification systems with the objective of creating high-quality thematic maps and establishing precise inventories about land cover use. The peculiarities of remote sensing images (RSIs), combined with the traditional image classification challenges, make RSI classification a hard task. Many of the problems are related to the representation scale of the data and to both the size and the representativeness of the training set used. In this work, we address four research issues in order to develop effective solutions for interactive classification of remote sensing images. The first concerns the fact that image descriptors proposed in the literature achieve good results in various applications, but many of them have never been used in remote sensing classification tasks. We tested twelve descriptors that encode spectral/color properties and seven texture descriptors, and we propose a methodology based on the K-nearest neighbors (KNN) classifier for the evaluation of descriptors in a classification context. Experiments demonstrate that Joint Auto-Correlogram (JAC), Color Bitmap, Invariant Steerable Pyramid Decomposition (SID), and Quantized Compound Change Histogram (QCCH) yield the best results in coffee and pasture recognition tasks. The second research issue refers to the problem of selecting the segmentation scale for object-based remote sensing classification. Recently proposed methods exploit features extracted from segmented objects to improve high-resolution image classification, but defining an appropriate segmentation scale is a challenging task. We propose two multiscale classification approaches based on boosting of weak classifiers. The first, the Multiscale Classifier (MSC), builds a strong classifier that combines features extracted from multiple segmentation scales. The other, the Hierarchical Multiscale Classifier (HMSC), exploits the hierarchical topology of segmented regions to improve training efficiency without accuracy loss relative to the MSC. Experiments show that it is better to use multiple scales than a single segmentation scale, and we also analyze and discuss the correlation between the descriptors used and the segmentation scales. The third research issue concerns the selection of training examples and the refinement of classification results through multiscale segmentation. We propose an approach for interactive multiscale classification of remote sensing images: an active learning strategy that allows the user to refine the classification result across iterations. Experimental results show that combining scales produces better results than isolated scales in a relevance feedback process, and the interactive method achieves good results with few user interactions; it needs only a small portion of the training set to build classifiers as strong as the ones generated by a supervised method using the whole available training set. The fourth research issue refers to the extraction of features from a hierarchy of regions for multiscale classification. We propose a strategy, called BoW-Propagation, that exploits the bag-of-visual-words model to propagate features along multiple scales, and we extend this idea to propagate histogram-based global descriptors, the H-Propagation method. The proposed methods speed up the feature extraction process and yield good results compared with global low-level extraction approaches. / PhD in Computer Science
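
The H-Propagation idea lends itself to a short sketch. The rendering below is hypothetical: it assumes the segmentation hierarchy is given as a parent-to-children mapping and that histograms have already been extracted for the leaf (finest-scale) regions; the normalization choice is also an assumption.

```python
# Sketch: propagate leaf-region histograms up a segmentation hierarchy,
# so coarse scales need no feature re-extraction (H-Propagation-style).
import numpy as np

def h_propagation(hierarchy, leaf_hist):
    """hierarchy: {parent_id: [child_ids]}; leaf_hist: {leaf_id: np.array}."""
    hist = dict(leaf_hist)

    def descriptor(region):
        if region not in hist:
            children = [descriptor(c) for c in hierarchy[region]]
            h = np.sum(children, axis=0)       # a parent pools its children
            hist[region] = h / max(h.sum(), 1e-12)
        return hist[region]

    for region in hierarchy:
        descriptor(region)
    return hist
```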
414

Etudes de méthodes et outils pour la cohérence visuelle en réalité mixte appliquée au patrimoine / Studies of methods and tools for visual coherence in mixed reality applied to cultural heritage

Durand, Emmanuel 19 November 2013
The work described in this thesis is set in the context of the mixed-reality device ray-on, designed by the company on-situ. This device, dedicated to the presentation of architectural heritage and historic buildings in particular, is installed on-site and offers the user an uchronic view of its surroundings. Since the chosen stance is to display photo-realistic images, two directions were pursued: improving the real-virtual merging by accurately reproducing the real lighting on the virtual objects, and developing a real-time segmentation method resilient to lighting changes. For lighting reproduction, an image-based rendering method is combined with a high-dynamic-range capture of the lighting environment, with particular attention to the photometric and colorimetric correctness of both steps. To measure the quality of the lighting reproduction chain, a test scene is set up with a calibrated color checker and captured under multiple illuminations by a pair of cameras, one grabbing an image of the chart, the other an image of the lighting environment. The real image is then compared with a rendering of the same scene lit by that light probe. The segmentation resilient to lighting changes is built on a class of global image segmentation algorithms that treat the image as a graph in which a minimum cut separating foreground from background is sought. The manual interaction these algorithms normally require is replaced by a lower-quality pre-segmentation derived from a depth map, which is then used as a seed for the final segmentation.
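
The seeding step can be pictured with OpenCV's GrabCut standing in for the min-cut segmentation, a substitution made for the example; the depth thresholds and variable names are likewise assumptions, not values from the thesis.

```python
# Sketch: a coarse depth-map thresholding seeds a graph-cut segmentation.
import cv2
import numpy as np

def depth_seeded_graphcut(image_bgr, depth, near=1.5, far=3.0):
    """Pixels nearer than `near` (metres) seed the foreground, farther than
    `far` the background; the rest stays 'probable' for the graph cut."""
    mask = np.full(depth.shape, cv2.GC_PR_BGD, np.uint8)
    mask[depth < near] = cv2.GC_FGD            # confident foreground seeds
    mask[depth > far] = cv2.GC_BGD             # confident background seeds
    bgd = np.zeros((1, 65), np.float64)        # GrabCut's internal GMM buffers
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```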
415

Segmentation Methods for Medical Image Analysis: Blood vessels, multi-scale filtering and level set methods

Läthén, Gunnar January 2010
Image segmentation is the problem of partitioning an image into meaningful parts, often consisting of an object and background. As an important part of many imaging applications, e.g., face recognition or the tracking of moving cars and people, it is of general interest to design robust and fast segmentation algorithms. However, it is well accepted that there is no general method for solving all segmentation problems; the algorithms have to be highly adapted to the application in order to achieve good performance. In this thesis, we study segmentation methods for blood vessels in medical images. The need for accurate segmentation tools in medical applications is driven by the increased capacity of the imaging devices. Common modalities such as CT and MRI generate images which simply cannot be examined manually, due to high resolutions and a large number of image slices. Furthermore, it is very difficult to visualize complex structures in three-dimensional image volumes without cutting away large portions of, perhaps important, data. Tools such as segmentation can aid the medical staff in browsing through such large images by highlighting objects of particular importance, and segmentation in particular can output models of organs, tumors, and other structures for further analysis, quantification, or simulation. We divide the segmentation of blood vessels into two parts. First, we model the vessels as a collection of lines and edges (linear structures) and use filtering techniques to detect such structures in an image. Second, the output from this filtering is used as input for segmentation tools. Our contributions mainly lie in the design of a multi-scale filtering and integration scheme for detecting vessels of varying widths and in the modification of optimization schemes for finding better segmentations than traditional methods do. We validate our ideas on synthetic images mimicking typical blood vessel structures, and show proof-of-concept results on real medical images.
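
A condensed sketch of the first part, multi-scale line filtering, is given below: a Hessian-based ridge measure is computed at several Gaussian scales and integrated by a per-pixel maximum. The scale set and the gamma normalization are common choices assumed for the example, not necessarily the filters designed in the thesis.

```python
# Sketch: multi-scale ridge (vessel) filtering with per-pixel scale integration.
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_strength(img, sigma):
    hxx = gaussian_filter(img, sigma, order=(0, 2))   # Hessian entries of the
    hyy = gaussian_filter(img, sigma, order=(2, 0))   # Gaussian-smoothed image
    hxy = gaussian_filter(img, sigma, order=(1, 1))
    tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
    lam2 = (hxx + hyy) / 2 - tmp          # most negative across bright ridges
    return np.maximum(-lam2, 0) * sigma ** 2          # sigma^2: scale-normalize

def multiscale_vesselness(img, sigmas=(1, 2, 4, 8)):
    # integrate over scales: keep, per pixel, the strongest response
    return np.max([ridge_strength(img.astype(float), s) for s in sigmas], axis=0)
```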
416

Fusion d'informations par la théorie de l'évidence pour la segmentation d'images / Information fusion using theory of evidence for image segmentation

Chahine, Chaza 31 October 2016
Information fusion has been widely studied in the field of artificial intelligence. Information is generally considered imperfect, so combining several (possibly heterogeneous) sources can lead to more comprehensive and complete information. The fusion field generally distinguishes probabilistic approaches from non-probabilistic ones; the latter include the theory of evidence, developed in the 1970s. This method represents both the uncertainty and the imprecision of information by assigning masses not to a single hypothesis (the most common case for probabilistic methods) but to sets of hypotheses. The work presented in this thesis concerns information fusion for image segmentation. Our starting point is the watershed algorithm, one of the most widely used methods for edge detection. Intuitively, the watershed treats the image as a topographic relief in which the height of a point corresponds to its grey level. Assuming the local minima are pierced with holes and the landscape is immersed in a lake, the water rising from these minima generates catchment basins, while the watershed lines are the dams built to prevent waters from different basins from mixing. The watershed is usually applied to the gradient magnitude, and a region is associated with each minimum; fluctuations in the gradient image and the large number of local minima therefore produce many small regions, yielding an over-segmented result that can hardly be useful. Meyer and Beucher proposed the seeded (marker-controlled) watershed to overcome this over-segmentation problem: a set of markers (or seeds) is specified as the only minima to be flooded, so the number of detected objects equals the number of seeds and the result is marker-dependent. Automatic extraction of markers from the images does not always lead to a satisfying result, especially for complex images, and several methods have been proposed to determine these markers automatically. We are particularly interested in the stochastic approach of Angulo and Jeulin, who estimate a probability density function (pdf) of contours from M simulations of the conventional watershed segmentation, with N markers randomly selected for each simulation. A high pdf value is thus assigned to strong contour points that are detected repeatedly across the simulations, but the decision that a point belongs to the "contour" class still depends on a threshold value, so a single result cannot be obtained. To increase the robustness of this method and the uniqueness of its response, we propose to combine information using the theory of evidence. The watershed is generally computed on the gradient image, a first-order derivative that gives global information on the contours, while the Hessian matrix, composed of second-order derivatives, gives more local contour information. Our goal is therefore to combine these two complementary sources using the theory of evidence. The different fusion variants are tested on real images from the Berkeley database, and the results are compared with the five manual segmentations provided with this database as ground truth. The quality of the segmentations obtained by our methods is assessed with several measures: uniformity, precision, recall, specificity, sensitivity, and the Hausdorff metric distance.
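
As a toy illustration of the evidence-theory combination step, the sketch below applies Dempster's rule to two mass functions over the frame {contour, background}; the masses attributed to the gradient and Hessian cues are invented for the example, not taken from the thesis.

```python
# Sketch: Dempster's rule of combination for two sources of evidence.
from itertools import product

def dempster(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to masses that sum to 1."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb              # mass lost to contradiction
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

C, B = frozenset({"contour"}), frozenset({"background"})
gradient_cue = {C: 0.6, B: 0.1, C | B: 0.3}  # mass on C|B encodes ignorance
hessian_cue = {C: 0.5, B: 0.2, C | B: 0.3}
print(dempster(gradient_cue, hessian_cue))   # belief in "contour" reinforced
```

Note how assigning mass to the whole frame (C | B) lets each source say "I don't know" instead of being forced to commit, which is the feature the abstract contrasts with purely probabilistic fusion.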
417

Machine learning methods for brain tumor segmentation / Méthodes d'apprentissage automatique pour la segmentation de tumeurs au cerveau

Havaei, Seyed Mohammad January 2017
Abstract: Malignant brain tumors are the second leading cause of cancer-related deaths in children under 20. Nearly 700,000 people in the U.S. are living with a brain tumor, and 17,000 people are likely to lose their lives to primary malignant brain and central nervous system tumors every year. To identify non-invasively whether a patient has a brain tumor, an MRI scan of the brain is acquired and then examined manually by an expert who looks for lesions (i.e., clusters of cells that deviate from healthy tissue). For treatment purposes, the tumor and its sub-regions are outlined in a procedure known as brain tumor segmentation. Although brain tumor segmentation is primarily done manually, it is very time-consuming, and the segmentation is subject to variations both between observers and within the same observer. To address these issues, a number of automatic and semi-automatic methods have been proposed over the years to help physicians in the decision-making process. Methods based on machine learning have been the subject of great interest in brain tumor segmentation, and with the advent of deep learning methods and their success in many computer vision applications such as image classification, these methods have also started to gain popularity in medical image analysis. In this thesis, we explore different machine learning and deep learning methods applied to brain tumor segmentation.
418

SEGMENTAÇÃO DE GRÃOS DE HEMATITA EM AMOSTRAS DE MINÉRIO DE FERRO POR ANÁLISE DE IMAGENS DE LUZ POLARIZADA / HEMATITE GRAIN SEGMENTATION OF IRON ORE SAMPLES BY POLARIZED LIGHT IMAGE ANALYSIS

Rosa, Marlise 19 February 2008
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The aim of the present work is to classify co-registered pixels of stacks of polarized light images of iron ore into their respective crystalline grains or pores, thus producing grain-segmented images that can be analyzed for their size, shape, and orientation distributions, as well as their porosity and the size and morphology of the pores. Polished sections of samples of hematite-rich ore are digitally imaged in a rotating-polarizer microscope at varying plane-polarization angles. An image stack is produced for every field of view, where each image corresponds to a polarizer position; any point in the sample is registered to the same pixel coordinates in all images of the stack. The resulting set of intensities for each pixel is directly related to the orientation of the crystal sampled at the corresponding position, and multivariate analysis of these intensity sets leads to the classification of the pixels into their respective crystalline grains. Individual hematite grains of iron ore, as well as their pores, are segmented, and the results are compared with those obtained by visual point-counting methods.
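
The pixel-classification step can be pictured with a minimal sketch, assuming scikit-learn's k-means as the multivariate analysis and a known number of grains; both are assumptions made for the example rather than the method used in the dissertation.

```python
# Sketch: cluster per-pixel intensity profiles across polarizer angles.
import numpy as np
from sklearn.cluster import KMeans

def segment_grains(stack, n_grains=20):
    """stack: (n_angles, H, W) co-registered polarized-light images.
    A pixel's intensity vector over angles reflects crystal orientation,
    so clustering those vectors groups pixels into grains."""
    n_angles, h, w = stack.shape
    features = stack.reshape(n_angles, -1).T   # one row of intensities per pixel
    labels = KMeans(n_clusters=n_grains, n_init=10).fit_predict(features)
    return labels.reshape(h, w)                # grain-label image
```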
419

Modélisation statistique pour la prédiction du pronostic de patients atteints d’un Accident Vasculaire Cérébral / Statistical modeling for predicting the prognosis of stroke patients

Ozenne, Brice 23 October 2015
Stroke is a serious disease that requires emergency health care. Because of potential side effects, patients must fulfil very restrictive criteria to be eligible for the curative treatment in the acute phase. These criteria drastically limit access to treatment: currently, an estimated 10% of stroke patients are treated. The purpose of this work is to develop a statistical framework for stroke predictive models that can identify each patient's volume of tissue at risk. This volume, which corresponds to the potential benefit of treatment, should better guide the physician's decision to treat. We deal with assessing predictive models in a low-prevalence context, building predictive models for spatial data, making volumetric predictions depending on the treatment option, and performing image segmentation in the presence of image artefacts. The tools developed in this thesis have been collected in an R package named MRIaggr.
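
A small worked example, not from the thesis, of why assessment is delicate in a low-prevalence context: with roughly 10% of voxels at risk, a model that predicts "not at risk" everywhere already reaches about 90% accuracy, so class-specific measures must be reported.

```python
# Sketch: accuracy vs. precision/recall at 10% prevalence (invented counts).
def evaluate(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0  # positive predictive value
    recall = tp / (tp + fn) if tp + fn else 0.0     # sensitivity
    return accuracy, precision, recall

print(evaluate(tp=60, fp=50, fn=40, tn=850))  # ~0.91 accuracy, ~0.55 precision
```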
420

Particle swarm optimization methods for pattern recognition and image processing

Omran, Mahamed G.H. 17 February 2005
Pattern recognition aims to classify objects into different categories and classes; it is a fundamental component of artificial intelligence and computer vision. This thesis investigates the application of an efficient optimization method, known as Particle Swarm Optimization (PSO), to the fields of pattern recognition and image processing. First, a clustering method based on PSO is proposed, and its application to the problem of unsupervised classification and segmentation of images is investigated. A new automatic image generation tool, tailored specifically for the verification and comparison of various unsupervised image classification algorithms, is then developed, followed by a dynamic clustering algorithm that automatically determines the "optimum" number of clusters and simultaneously clusters the data set with minimal user interference. Finally, PSO-based approaches are proposed to tackle the color image quantization and spectral unmixing problems. In all the proposed approaches, the influence of the PSO parameters on the performance of the algorithms is evaluated. / Thesis (PhD), University of Pretoria, 2006 / Computer Science
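
A condensed sketch of the PSO-based clustering idea follows; the inertia and acceleration coefficients are common defaults and the quantization-error fitness is one typical choice, both assumed here rather than taken from the thesis.

```python
# Sketch: PSO where each particle encodes a full set of k cluster centroids.
import numpy as np

rng = np.random.default_rng(0)

def quantization_error(centroids, data):
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()              # mean distance to nearest centroid

def pso_cluster(data, k=3, n_particles=20, n_iter=100, w=0.72, c1=1.49, c2=1.49):
    n = len(data)
    pos = data[rng.integers(0, n, (n_particles, k))]   # init centroids from data
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([quantization_error(p, data) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([quantization_error(p, data) for p in pos])
        better = f < pbest_f                  # update personal and global bests
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest                              # assign pixels to nearest centroid
```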
