501

Αυτόματη ανίχνευση του αρτηριακού τοιχώματος της καρωτίδας από εικόνες υπερήχων β-σάρωσης / Automatic detection of the carotid arterial wall in B-mode ultrasound images

Ματσάκου, Αικατερίνη 10 August 2011 (has links)
Σε αυτή την εργασία παρουσιάζεται μια πλήρως αυτοματοποιημένη μεθοδολογία κατάτμησης για την ανίχνευση των ορίων του αρτηριακού τοιχώματος σε διαμήκεις εικόνες καρωτίδας β-σάρωσης. Συγκεκριμένα υλοποιείται ένας συνδυασμός της μεθοδολογίας του μετασχηματισμού Hough για την ανίχνευση ευθειών με μια μεθοδολογία ενεργών καμπυλών. Η μεθοδολογία του μετασχηματισμού Hough χρησιμοποιείται για τον ορισμό της αρχικής καμπύλης, η οποία στη συνέχεια παραμορφώνεται σύμφωνα με ένα μοντέλο ενεργών καμπυλών βασισμένων σε πεδίο ροής του διανύσματος κλίσης (Gradient Vector Flow - GVF). Το GVF μοντέλο ενεργών καμπυλών βασίζεται στον υπολογισμό του χάρτη ακμών της εικόνας και τον μετέπειτα υπολογισμό του διανυσματικού πεδίου ροής κλίσης, το οποίο με τη σειρά του προκαλεί την παραμόρφωση της αρχικής καμπύλης με σκοπό την εκτίμηση των πραγματικών ορίων του αρτηριακού τοιχώματος. Η προτεινόμενη μεθοδολογία εφαρμόστηκε σε είκοσι (20) εικόνες υγιών περιπτώσεων και δεκαοχτώ (18) εικόνες περιπτώσεων με αθηρωμάτωση για τον υπολογισμό της διαμέτρου του αυλού και την αξιολόγηση της μεθόδου από ποσοτικούς δείκτες ανάλυσης κατά ROC (Receiver Operating Characteristic – ROC). Σύμφωνα με τα αποτελέσματα, δεν παρατηρήθηκαν στατιστικά σημαντικές διαφορές ανάμεσα στις μετρήσεις της διαμέτρου που πραγματοποιήθηκαν από τη διαδικασία της αυτόματης ανίχνευσης και τις αντίστοιχες μετρήσεις που προέκυψαν από την χειροκίνητη ανίχνευση. Οι τιμές της ευαισθησίας, της ειδικότητας και της ακρίβειας στις υγιείς περιπτώσεις ήταν αντίστοιχα 0.97, 0.99 και 0.98 για τις διαστολικές και τις συστολικές εικόνες. Στις παθολογικές περιπτώσεις οι αντίστοιχες τιμές ήταν μεγαλύτερες από 0.89, 0.96 και 0.93. Συμπερασματικά, η προτεινόμενη μεθοδολογία αποτελεί μια ακριβή και αξιόπιστη μέθοδο κατάτμησης εικόνων καρωτίδας και μπορεί να χρησιμοποιηθεί στην κλινική πράξη. 
/ In this thesis, a fully automatic segmentation method combining the Hough transform for straight-line detection with active contours is presented, for detecting the carotid artery wall in longitudinal B-mode ultrasound images. A Hough-transform-based methodology is used for the definition of the initial snake, followed by a gradient vector flow (GVF) snake deformation. The GVF snake is based on the calculation of the image edge map and of the gradient vector flow field, which guides the deformation toward the real arterial wall boundaries. The proposed methodology was applied to twenty images of healthy carotids and eighteen images of atherosclerotic carotids, in order to calculate the lumen diameter and evaluate the method by means of receiver operating characteristic (ROC) analysis. According to the results, there was no significant difference between the automated and the manual diameter measurements. In healthy cases the sensitivity, specificity and accuracy were 0.97, 0.99 and 0.98, respectively, for both the diastolic and the systolic phase. In atherosclerotic cases the calculated values of the indices were larger than 0.89, 0.96 and 0.93, respectively. In conclusion, the proposed methodology provides an accurate and reliable way to segment ultrasound images of the carotid wall and can be used in clinical practice.
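The GVF computation at the heart of the snake deformation is compact enough to sketch. The following is a minimal illustration, not the authors' implementation; the synthetic ridge image, the value of `mu` and the iteration count are assumptions made for the example:

```python
import numpy as np

def gvf(edge_map, mu=0.2, iters=200):
    """Gradient Vector Flow (Xu & Prince): diffuse the edge-map gradients
    (fx, fy) into a smooth field (u, v) by iterating
        u += mu * Laplacian(u) - (fx^2 + fy^2) * (u - fx)
    so that pixels far from any edge still feel a pull toward it."""
    fy, fx = np.gradient(edge_map.astype(float))
    mag2 = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u += mu * lap_u - mag2 * (u - fx)
        v += mu * lap_v - mag2 * (v - fy)
    return u, v

# Synthetic edge map: a bright horizontal ridge standing in for a wall echo.
f = np.zeros((32, 32))
f[16, :] = 1.0
u, v = gvf(f)
```

An initial contour sampled from the Hough-detected line would then be moved along (u, v) until it settles on the wall boundary; above and below the ridge the diffused field points back toward it.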
502

Sistema de agentes poligínicos para esteganálise de imagens digitais / System of polygynous agents for digital image steganalysis

Azevedo, Samuel Oliveira de 06 August 2007 (has links)
Made available in DSpace on 2014-12-17T15:47:44Z (GMT). No. of bitstreams: 1 SamuelOA.pdf: 1023593 bytes, checksum: 651d5e25960d6664c54a1e7690f2acb6 (MD5) Previous issue date: 2007-08-06 / Conselho Nacional de Desenvolvimento Científico e Tecnológico / In this work, we propose a multi-agent system for digital image steganalysis, based on the polygynous bees model. This approach aims to solve the problem of automatic steganalysis for digital media, with a case study on digital images. The system architecture was designed not only to detect whether a file is suspected of carrying a hidden message, but also to extract the hidden message or information about it. Several experiments were performed whose results confirm a substantial improvement (from 67% to 82% success rate) with the multi-agent approach, a gain not observed in traditional systems. An ongoing application of the technique is the detection of anomalies in digital data produced by sensors that capture brain emissions in small animals. The detection of such anomalies can be used to support theories and evidence of imagery completion during sleep provided by the brain in visual cortex areas. / Neste trabalho, propomos um sistema multi-agentes para esteganálise em imagens digitais, baseado na metáfora das abelhas poligínicas. Tal abordagem visa resolver o problema da esteganálise automática de mídias digitais, com estudo de caso para imagens digitais. A arquitetura do sistema foi projetada não só para detectar se um arquivo é ou não suspeito de possuir uma mensagem oculta em si, como também para extrair essa mensagem ou informações acerca dela. Foram realizados vários experimentos cujos resultados confirmam uma melhoria substancial (de 67% para 82% de acertos) com o uso da abordagem multi-agente, fato não observado em outros sistemas tradicionais. Uma aplicação atualmente em andamento com o uso da técnica é a detecção de anomalias em dados digitais produzidos por sensores que captam emissões cerebrais em pequenos animais. A detecção de tais anomalias pode ser usada para comprovar teorias e evidências de complementação do imageamento durante o sono, provida pelo cérebro nas áreas visuais do córtex cerebral.
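The bee-inspired agent architecture itself cannot be reconstructed from the abstract, but the kind of statistical detector such agents typically wrap is well known. A hedged sketch of the classical pair-of-values chi-square attack on LSB embedding (Westfeld and Pfitzmann's test, not this thesis's system; the exponential cover model is a synthetic assumption for illustration):

```python
import numpy as np

def chi_square_lsb(pixels):
    """Pair-of-values chi-square statistic for LSB steganalysis.

    Random LSB embedding equalises each histogram pair (2k, 2k+1), so a
    *small* statistic is evidence that a message may be hidden."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    even, odd = hist[0::2], hist[1::2]
    expected = (even + odd) / 2.0
    mask = expected > 0
    return float(np.sum((even[mask] - expected[mask]) ** 2 / expected[mask]))

rng = np.random.default_rng(42)
# Skewed synthetic cover: adjacent histogram bins differ strongly.
cover = np.minimum(rng.exponential(8.0, size=(256, 256)).astype(np.int64), 255)
# Stego copy: least-significant bits replaced by a random payload.
stego = (cover & ~1) | rng.integers(0, 2, size=cover.shape)
```

An analysis agent could flag the stego file because its statistic collapses relative to the untouched cover, while the higher bit planes remain identical.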
503

Segmentation et interprétation d'images naturelles pour l'identification de feuilles d'arbres sur smartphone / Segmentation and interpretation of natural images for tree leaf identification on smartphones

Cerutti, Guillaume 21 November 2013 (has links)
Les espèces végétales, et en particulier les espèces d'arbres, forment un cadre de choix pour un processus de reconnaissance automatique basé sur l'analyse d'images. Les critères permettant de les identifier sont en effet le plus souvent des éléments morphologiques visuels, bien décrits et référencés par la botanique, qui laissent à penser qu'une reconnaissance par la forme est envisageable. Les feuilles constituent dans ce contexte les organes végétaux discriminants les plus faciles à appréhender, et sont de ce fait les plus communément employés pour ce problème qui connaît actuellement un véritable engouement. L'identification automatique pose toutefois un certain nombre de problèmes complexes, que ce soit dans le traitement des images ou dans la difficulté même de la classification en espèces, qui en font une application de pointe en reconnaissance de formes. Cette thèse place le problème de l'identification des espèces d'arbres à partir d'images de leurs feuilles dans le contexte d'une application pour smartphones destinée au grand public. Les images sur lesquelles nous travaillons sont donc potentiellement complexes et leur acquisition peu supervisée. Nous proposons alors des méthodes d'analyse d'images dédiées, permettant la segmentation et l'interprétation des feuilles d'arbres, en se basant sur une modélisation originale de leurs formes, et sur des approches basées modèles déformables. L'introduction de connaissances a priori sur la forme des objets améliore ainsi de façon significative la qualité et la robustesse de l'information extraite de l'image. Le traitement se déroulant sur l'appareil, nous avons développé ces algorithmes en prenant en compte les contraintes matérielles liées à leur utilisation. Nous introduisons également une description spécifique des formes des feuilles, inspirée par les caractéristiques déterminantes recensées dans les ouvrages botaniques.
Ces différents descripteurs fournissent des informations de haut niveau qui sont fusionnées en fin de processus pour identifier les espèces, tout en permettant une interprétation sémantique intéressante dans le cadre de l'interaction avec un utilisateur néophyte. Les performances obtenues en termes de classification, sur près de 100 espèces d'arbres, se situent par ailleurs au niveau de l'état de l'art dans le domaine, et démontrent une robustesse particulière sur les images prises en environnement naturel. Enfin, nous avons intégré l'implémentation de notre système de reconnaissance dans l'application Folia pour iPhone, qui constitue une validation de nos approches et méthodes dans un cadre réel. / Plant species, and especially tree species, constitute a well-adapted target for an automatic recognition process based on image analysis. The criteria that make their identification possible are indeed often morphological visual elements, which are well described and referenced by botany. This suggests that recognition by shape is feasible. Leaves stand out in this context as the most accessible discriminative plant organs, and are consequently the most often used for this problem, which has recently received particular attention. Automatic identification however gives rise to a fair amount of complex problems, whether in the processing of the images or in the difficult nature of the species classification itself, which make it an advanced application for pattern recognition. This thesis considers the problem of tree species identification from leaf images within the framework of a smartphone application intended for a non-specialist audience. The images we expect to work on are therefore potentially very complex scenes, acquired with little supervision. We consequently propose dedicated methods for image analysis, in order to segment and interpret tree leaves, using an original shape modelling approach and deformable templates.
The introduction of prior knowledge on the shape of objects significantly enhances the quality and the robustness of the information extracted from the image. Since all processing is carried out on the mobile device, we developed these algorithms taking into account the hardware constraints of their use. We also introduce a specific description of leaf shapes, inspired by the determining characteristics listed in botanical references. These different descriptors constitute independent sources of high-level information that are fused at the end of the process to identify species, while providing the user with a possible semantic interpretation. The classification performance, demonstrated over approximately 100 tree species, is competitive with state-of-the-art methods in the domain, and shows a particular robustness to difficult natural background images. Finally, we integrated the implementation of our recognition system into the Folia application for iPhone, which constitutes a validation of our approaches and methods in real-world use.
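The botanically inspired shape descriptors are not specified in the abstract, but their flavour can be suggested with two toy global measures of a binary leaf mask. The features below (elongation from second moments, circularity from a pixel-count perimeter) are illustrative stand-ins, far simpler than the dissertation's descriptors:

```python
import numpy as np

def leaf_descriptors(mask):
    """Two global shape descriptors of a binary leaf mask: elongation
    (ratio of principal-axis lengths of the central second moments) and
    circularity 4*pi*area/perimeter^2 (higher for compact, disc-like
    blades; lower for elongated or lobed ones)."""
    ys, xs = np.nonzero(mask)
    area = float(len(xs))
    cov = np.cov(np.stack([xs - xs.mean(), ys - ys.mean()]))
    evals = np.linalg.eigvalsh(cov)                 # ascending order
    elongation = float(np.sqrt(evals[1] / max(evals[0], 1e-9)))
    # Crude perimeter: pixels of the mask with a 4-neighbour outside it.
    pad = np.pad(mask.astype(bool), 1)
    interior = (pad[2:, 1:-1] & pad[:-2, 1:-1] & pad[1:-1, 2:] & pad[1:-1, :-2])
    perimeter = float(np.count_nonzero(mask.astype(bool) & ~interior))
    circularity = 4 * np.pi * area / max(perimeter, 1.0) ** 2
    return elongation, circularity

yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 20 ** 2        # round blade
blade = (((yy - 32) / 6.0) ** 2 + ((xx - 32) / 30.0) ** 2) <= 1.0  # narrow blade
e_disc, c_disc = leaf_descriptors(disc)
e_blade, c_blade = leaf_descriptors(blade)
```

Fusing several such high-level measurements, as the abstract describes, also supports a semantic reading for the user ("narrow, elongated blade" versus "rounded blade").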
504

Image Processing Methods for Myocardial Scar Analysis from 3D Late-Gadolinium Enhanced Cardiac Magnetic Resonance Images

Usta, Fatma 25 July 2018 (has links)
Myocardial scar, a non-viable tissue which forms in the myocardium due to insufficient blood supply to the heart muscle, is one of the leading causes of life-threatening heart disorders, including arrhythmias. Analysis of myocardial scar is important for predicting the risk of arrhythmia and the locations of re-entrant circuits in patients' hearts. For applications such as computational modeling of cardiac electrophysiology, aimed at stratifying patient risk for post-infarction arrhythmias, reconstruction of the intact geometry of the scar is required. Currently, 2D multi-slice late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) is widely used to detect and quantify myocardial scar regions of the heart. However, due to the anisotropic spatial dimensions of 2D LGE-MR images, creating scar geometry from these images results in substantial reconstruction errors. For applications requiring reconstruction of the intact geometry of scar surfaces, 3D LGE-MR images are better suited, as they are isotropic in voxel dimensions and have a higher resolution. While many techniques have been reported for segmentation of scar using 2D LGE-MR images, the equivalent studies for 3D LGE-MRI are limited. Most of these 2D and 3D techniques are basic intensity-threshold-based methods. However, due to the lack of an optimal threshold value, these intensity-threshold-based methods are not robust in dealing with complex scar segmentation problems. In this study, we propose an algorithm for segmentation of myocardial scar from 3D LGE-MR images based on a Markov-random-field-based continuous max-flow (CMF) method. We utilize the segmented myocardium as the region of interest for our algorithm. We evaluated our CMF method for accuracy by comparing its results to manual delineations using 3D LGE-MR images of 34 patients. We also compared the results of the CMF technique to those of the conventional full-width-at-half-maximum (FWHM) and signal-threshold-to-reference-mean (STRM) methods.
The CMF method yields a Dice similarity coefficient (DSC) of 71 ± 8.7% and an absolute volume error (|VE|) of 7.56 ± 7 cm³. Overall, the CMF method outperformed the conventional methods on almost all reported metrics in scar segmentation. We also present a comparison study of scar geometries obtained from 2D versus 3D LGE-MRI. As the myocardial scar geometry greatly influences the sensitivity of risk prediction in patients, we compare and analyze the differences in the reconstructed geometry of scar generated using 2D versus 3D LGE-MR images, in addition to the scar segmentation study. We use a retrospectively acquired dataset of 24 patients with myocardial scar who underwent both 2D and 3D LGE-MR imaging, with manually segmented scar volumes from both. We then reconstruct the 2D scar segmentation boundaries into 3D surfaces using a LogOdds-based interpolation method. We use numerous metrics to quantify and analyze the scar geometry, including fractal dimension, the number of connected components, and mean volume difference. The higher 3D fractal dimension results indicate that 3D LGE-MRI produces a more complex surface geometry by better capturing the sparse nature of the scar. Finally, 3D LGE-MRI produces a larger scar surface volume (27.49 ± 20.38 cm³) than 2D-reconstructed LGE-MRI (25.07 ± 16.54 cm³).
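The two conventional baselines named in the abstract are simple enough to state exactly. A sketch under synthetic intensities (the Gaussian remote and scar distributions, and the choice n = 5 for STRM, are assumptions for illustration; published STRM variants use n between roughly 2 and 6):

```python
import numpy as np

def fwhm_threshold(myo_intensities):
    """Full-width-at-half-maximum: voxels brighter than half of the
    maximal myocardial intensity are labelled scar."""
    return 0.5 * float(np.max(myo_intensities))

def strm_threshold(remote_intensities, n_sd=5.0):
    """Signal-threshold-to-reference-mean: remote-myocardium mean plus
    n standard deviations."""
    r = np.asarray(remote_intensities, dtype=float)
    return float(r.mean() + n_sd * r.std())

rng = np.random.default_rng(1)
remote = rng.normal(100.0, 10.0, size=500)   # healthy (remote) myocardium
scar = rng.normal(300.0, 10.0, size=100)     # hyperenhanced tissue
myo = np.concatenate([remote, scar])
seg_fwhm = myo >= fwhm_threshold(myo)
seg_strm = myo >= strm_threshold(remote)
```

On such well-separated intensities both thresholds recover the scar voxels; the abstract's point is precisely that real 3D LGE-MR data are not this clean, which motivates the CMF formulation.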
505

Limited angular range X-ray micro-computerized tomography : derivation of anatomical information as a prior for optical luminescence tomography / Micro-tomographie par rayons X à angle limité : dérivation d’une information anatomique a priori pour la tomographie optique par luminescence

Barquero, Harold 22 May 2015 (has links)
Cette thèse traite du couplage d'un tomographe optique par luminescence (LCT) et d'un tomographe par rayons X (XCT), en présence d'une contrainte sur la géométrie d'acquisition du XCT. La couverture angulaire du XCT est limitée à 90 degrés pour satisfaire des contraintes spatiales imposées par le LCT existant dans lequel le XCT doit être intégré. L'objectif est de dériver une information anatomique, à partir de l'image morphologique issue du XCT. Notre approche a consisté i) en l'implémentation d'un algorithme itératif régularisé pour la reconstruction tomographique à angle limité, ii) en la construction d'un atlas anatomique statistique de la souris et iii) en l'implémentation d'une chaîne automatique réalisant la segmentation des images XCT, l'attribution d'une signification anatomique aux éléments segmentés, le recalage de l'atlas statistique sur ces éléments et ainsi l'estimation des contours de certains tissus à faible contraste non identifiables en pratique dans une image XCT standard. / This thesis addresses the combination of an Optical Luminescence Tomograph (OLT) and an X-ray Computerized Tomograph (XCT), dealing with geometrical constraints defined by the existing OLT system in which the XCT must be integrated. The result is an XCT acquisition geometry with a 90-degree angular range only. The aim is to derive anatomical information from the morphological image obtained with the XCT. Our approach consisted i) in the implementation of a regularized iterative algorithm for tomographic reconstruction with limited-angle data, ii) in the construction of a statistical anatomical atlas of the mouse and iii) in the implementation of an automatic segmentation workflow performing the segmentation of XCT images, the labelling of the segmented elements, the registration of the statistical atlas on these elements and consequently the estimation of the outlines of low-contrast tissues that cannot be identified in practice in a standard XCT image.
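The first contribution, a regularized iterative reconstruction for missing-angle data, can be miniaturized to show why regularization matters: with too few view angles the linear system is underdetermined, yet a Landweber-type iteration with Tikhonov damping and a nonnegativity prior still converges to a stable image. The two-view projector and all parameter values below are toy assumptions, not the thesis's 90-degree XCT geometry:

```python
import numpy as np

def forward(img):
    """Toy two-view projector: vertical and horizontal ray sums."""
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def adjoint(sino, shape):
    """Backprojection (transpose of `forward`)."""
    col, row = sino[:shape[1]], sino[shape[1]:]
    return np.tile(col, (shape[0], 1)) + row[:, None]

def reconstruct(sino, shape, step=0.05, alpha=0.01, iters=500):
    """Regularized Landweber iteration with a nonnegativity prior:
    x <- max(0, x - step * (A^T (A x - b) + alpha * x))."""
    x = np.zeros(shape)
    for _ in range(iters):
        grad = adjoint(forward(x) - sino, shape) + alpha * x
        x = np.clip(x - step * grad, 0.0, None)
    return x

truth = np.zeros((8, 8))
truth[2, 3] = 1.0                      # a single bright voxel
sino = forward(truth)
recon = reconstruct(sino, truth.shape)
```

Here the priors pin down the nonnegative image consistent with both views; without them, plain least squares would smear a cross of intensity through the point.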
506

[en] POPULATION DISTRIBUTION MAPPING THROUGH THE DETECTION OF BUILDING AREAS IN GOOGLE EARTH IMAGES OF HETEROGENEOUS REGIONS USING DEEP LEARNING / [pt] MAPEAMENTO DA DISTRIBUIÇÃO POPULACIONAL ATRAVÉS DA DETECÇÃO DE ÁREAS EDIFICADAS EM IMAGENS DE REGIÕES HETEROGÊNEAS DO GOOGLE EARTH USANDO DEEP LEARNING

CASSIO FREITAS PEREIRA DE ALMEIDA 08 February 2018 (has links)
[pt] Informações precisas sobre a distribuição da população são reconhecidamente importantes. A fonte de informação mais completa sobre a população é o censo, cujos dados são disponibilizados de forma agregada em setores censitários. Esses setores são unidades operacionais de tamanho e forma irregulares, o que dificulta a análise espacial dos dados associados. Assim, a mudança de setores censitários para um conjunto de células regulares com estimativas adequadas facilitaria a análise. Uma metodologia a ser utilizada para essa mudança poderia ser baseada na classificação de imagens de sensoriamento remoto para a identificação de domicílios, que é a base das pesquisas envolvendo a população. A detecção de áreas edificadas é uma tarefa complexa devido à grande variabilidade de características de construção e de imagens. Os métodos usuais são complexos e muito dependentes de especialistas. Os processos automáticos dependem de grandes bases de imagens para treinamento e são sensíveis à variação de qualidade de imagens e características das construções e do ambiente. Nesta tese propomos a utilização de um método automatizado para detecção de edificações em imagens Google Earth que mostrou bons resultados utilizando um conjunto de imagens relativamente pequeno e com grande variabilidade, superando as limitações dos processos existentes. Este resultado foi obtido com uma aplicação prática. Foi construído um conjunto de imagens com anotação de áreas construídas para 12 regiões do Brasil. Estas imagens, além de diferentes na qualidade, apresentam grande variabilidade nas características das edificações e no ambiente geográfico. Uma prova de conceito foi feita na utilização da classificação de área construída nos métodos dasimétricos para a estimação de população em grade. Ela mostrou um resultado promissor quando comparado com o método usual, possibilitando a melhoria da qualidade das estimativas.
/ [en] The importance of precise information about the population distribution is widely acknowledged. The census is considered the most reliable and complete source of this information, and its data are delivered in an aggregated form in sectors. These sectors are operational units with irregular shapes, which hinder the spatial analysis of the data. Thus, the transformation of sectors onto a regular grid would facilitate such analysis. A methodology to achieve this transformation could be based on remote sensing image classification to identify the buildings where the population lives. Building detection is considered a complex task, since there is great variability both in building characteristics and in the quality of the images themselves. The majority of methods are complex and highly dependent on specialists. Automatic methods require a large annotated dataset for training and are sensitive to image quality, building characteristics and the environment. In this thesis, we propose an automatic method for building detection based on a deep learning architecture that uses a relatively small dataset with large variability. The proposed method shows good results when compared to the state of the art. An annotated dataset has been built that covers 12 cities distributed across different regions of Brazil. These images not only have different qualities, but also show large variability in building characteristics and geographic environments. A very important application of this method is the use of the building area classification in dasymetric methods for population estimation on a grid. The proof of concept in this application showed a promising result when compared to the usual method, allowing an improvement in the quality of the estimates.
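The dasymetric step at the end of the pipeline has a very compact core: each sector's census count is spread over its grid cells in proportion to the built-up area detected in them. A sketch of that allocation (the uniform-spread fallback for sectors with no detected buildings is an assumption of this illustration, not taken from the thesis):

```python
import numpy as np

def dasymetric_allocate(sector_pop, sector_of_cell, built_area):
    """Redistribute census-sector populations onto a regular grid in
    proportion to the detected built-up area of each cell."""
    grid_pop = np.zeros_like(built_area, dtype=float)
    for s, pop in sector_pop.items():
        cells = (sector_of_cell == s)
        weights = built_area * cells
        total = weights.sum()
        if total > 0:
            grid_pop += pop * weights / total
        else:
            # No buildings detected: fall back to a uniform spread.
            grid_pop += pop * cells / cells.sum()
    return grid_pop

# Two sectors on a 2x4 grid; buildings concentrated in a few cells.
sector_of_cell = np.array([[0, 0, 1, 1],
                           [0, 0, 1, 1]])
built_area = np.array([[1.0, 0.0, 2.0, 2.0],
                       [1.0, 0.0, 0.0, 0.0]])
grid_pop = dasymetric_allocate({0: 100.0, 1: 50.0}, sector_of_cell, built_area)
```

Cells without detected buildings receive no population, which is exactly how a better building classifier translates into better grid estimates.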
507

Construção e aplicação de atlas de pontos salientes 3D na inicialização de modelos geométricos deformáveis em imagens de ressonância magnética / Construction and application of 3D salient point atlases for the initialization of geometric deformable models in magnetic resonance images

Pinto, Carlos Henrique Villa 10 March 2016 (has links)
Submitted by Luciana Sebin (lusebin@ufscar.br) on 2016-09-30T13:54:49Z No. of bitstreams: 1 DissCHVP.pdf: 4899707 bytes, checksum: e7de60b5431e48ddbc2b9016dae268c7 (MD5) / Made available in DSpace on 2016-10-14T14:06:58Z (GMT). No. of bitstreams: 1 DissCHVP.pdf: 4899707 bytes, checksum: e7de60b5431e48ddbc2b9016dae268c7 (MD5) Previous issue date: 2016-03-10 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Magnetic resonance (MR) imaging has become an indispensable tool for the diagnosis and study of various diseases and syndromes of the central nervous system, such as Alzheimer's disease (AD). In order to perform the precise diagnosis of a disease, as well as the evolutionary monitoring of a certain treatment, the neuroradiologist often needs to measure and assess volume and shape changes in certain brain structures along a series of MR images. For that, previous delineation of the structures of interest is necessary. In general, such a task is done manually, with limited help from a computer, and therefore has several problems. For this reason, many researchers have turned their efforts towards the development of automatic techniques for segmentation of brain structures in MR images. Among the various approaches proposed in the literature, techniques based on deformable models and anatomical atlases are among those which present the best results. However, one of the main difficulties in applying geometric deformable models is the initial positioning of the model.
Thus, this research aimed to develop an atlas of 3D salient points (automatically detected from a set of MR images) and to investigate the applicability of such an atlas in guiding the initial positioning of geometric deformable models representing brain structures, with the purpose of helping the automatic segmentation of such structures in MR images. The processing pipeline included the use of a 3D salient point detector based on the phase congruency measure, an adaptation of the shape contexts technique to create point descriptors and the estimation of a B-spline transform to map pairs of matching points. The results, evaluated using the Jaccard and Dice metrics before and after the model initializations, showed a significant gain in the tests involving synthetically deformed images of normal patients, but for images of clinical patients with AD the gain was marginal and can still be improved in future research. Some ways to make such improvements are discussed in this work. / O imageamento por ressonância magnética (RM) tornou-se uma ferramenta indispensável no diagnóstico e estudo de diversas doenças e síndromes do sistema nervoso central, tais como a doença de Alzheimer (DA). Para que se possa realizar o diagnóstico preciso de uma doença, bem como o acompanhamento evolutivo de um determinado tratamento, o médico neurorradiologista frequentemente precisa medir e avaliar alterações de volume e forma em determinadas estruturas do cérebro ao longo de uma série de imagens de RM. Para isso, a delineação prévia das estruturas de interesse nas imagens é necessária. Em geral, essa tarefa é realizada manualmente, com ajuda limitada de um computador, e portanto possui diversos problemas. Por esse motivo, vários pesquisadores têm voltado seus esforços para o desenvolvimento de técnicas automáticas de segmentação de estruturas cerebrais em imagens de RM.
Dentre as várias abordagens propostas na literatura, técnicas baseadas em modelos deformáveis e atlas anatômicos estão entre as que apresentam os melhores resultados. No entanto, uma das principais dificuldades na aplicação de modelos geométricos deformáveis é o posicionamento inicial do modelo. Assim, esta pesquisa teve por objetivo desenvolver um atlas de pontos salientes 3D (automaticamente detectados em um conjunto de imagens de RM) e investigar a aplicabilidade de tal atlas em guiar o posicionamento inicial de modelos geométricos deformáveis representando estruturas cerebrais, com o propósito de auxiliar a segmentação automática de tais estruturas em imagens de RM. O arcabouço de processamento incluiu o uso de um detector de pontos salientes 3D baseado na medida de congruência de fase, uma adaptação da técnica shape contexts para a criação de descritores de pontos e a estimação de uma transformação B-spline para mapear pares de pontos correspondentes. Os resultados, avaliados com as métricas Jaccard e Dice antes e após a inicialização dos modelos, mostraram um ganho significativo em testes envolvendo imagens sinteticamente deformadas de pacientes normais, mas em imagens de pacientes clínicos com DA o ganho foi marginal e ainda pode ser melhorado em pesquisas futuras. Algumas maneiras de se realizar tais melhorias são discutidas neste trabalho. / FAPESP: 2015/02232-1 / CAPES: 2014/11988-0
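Of the pipeline pieces listed in the abstract, the shape-contexts descriptor is the easiest to sketch in isolation: each salient point is described by a log-polar histogram of where all other points lie relative to it. A minimal 2D version follows (the dissertation's 3D adaptation, and the bin counts chosen here, go beyond what the abstract specifies):

```python
import numpy as np

def shape_context(points, index, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Log-polar histogram of the other points' positions relative to
    points[index]; radii are normalized by the mean pairwise distance,
    making the descriptor tolerant to scaling."""
    diff = points[:, None, :] - points[None, :, :]
    dists = np.hypot(diff[..., 0], diff[..., 1])
    mean_d = dists[dists > 0].mean()

    rel = np.delete(points, index, axis=0) - points[index]
    r = np.hypot(rel[:, 0], rel[:, 1]) / mean_d
    theta = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)

    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = (theta // (2 * np.pi / n_theta)).astype(int) % n_theta

    hist = np.zeros((n_r, n_theta), dtype=int)
    np.add.at(hist, (r_bin, t_bin), 1)
    return hist

# One point at the centre, twelve evenly spaced on a surrounding circle.
angles = np.deg2rad(15 + 30 * np.arange(12))
points = np.vstack([[0.0, 0.0], np.c_[np.cos(angles), np.sin(angles)]])
desc = shape_context(points, 0)
```

Matching such histograms between an atlas and a patient image is what allows point correspondences, and hence the B-spline mapping, to be estimated.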
508

Zona de empate : o elo entre transformadas de watershed e conexidade nebulosa / Tie-zone : the bridge between watershed transforms and fuzzy connectedness

Audigier, Romaric Matthias Michel 13 August 2018 (has links)
Orientador: Roberto de Alencar Lotufo / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-13T08:32:02Z (GMT). No. of bitstreams: 1 Audigier_RomaricMatthiasMichel_D.pdf: 1753584 bytes, checksum: 1d31eb6f095099ffb5c3ec8d0a96a9cf (MD5) Previous issue date: 2007 / Resumo: Esta tese introduz o novo conceito de transformada de zona de empate que unifica as múltiplas soluções de uma transformada de watershed, conservando apenas as partes comuns em todas estas, tal que as partes que diferem constituem a zona de empate. A zona de empate aplicada ao watershed via transformada imagem-floresta (TZ-IFT-WT) se revela um elo inédito entre transformadas de watershed baseadas em paradigmas muito diferentes: gota d'água, inundação, caminhos ótimos e floresta de peso mínimo. / Abstract: This thesis introduces the new concept of tie-zone transform, which unifies the multiple solutions of a watershed transform by conserving only the common parts among them, such that the differing parts constitute the tie zone. The tie zone applied to the watershed via image-foresting transform (TZ-IFT-WT) proves to be a novel link between watershed transforms based on very different paradigms: drop of water, flooding, optimal paths and forest of minimum weight.
For all these paradigms and the derived algorithms, it is a challenge to get a unique and thin solution which is consistent with a definition. That is why we propose a unique and consistent thinning of the tie zone. In addition, we demonstrate that the TZ-IFT-WT is also the dual of segmentation methods based on fuzzy connectedness. Thus, the bridge between the morphological and the fuzzy approaches makes it possible to benefit from advances in both. As a consequence, the concept of cores of robustness for the seeds is exploited in the case of watersheds. / Doutorado / Engenharia de Computação / Doutor em Engenharia Elétrica
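The tie-zone idea can be made concrete on a toy image-foresting transform: propagate Dijkstra-style from labelled seeds under the max-arc path cost, and mark every pixel whose optimal cost is offered by seeds with different labels. This simplified sketch (single-pass tie marking, 4-connectivity; the thesis's TZ-IFT-WT is more general) illustrates the concept:

```python
import heapq
import numpy as np

def tz_ift_watershed(img, seeds):
    """IFT watershed with path cost max(arc weights). Pixels whose best
    cost is reachable from seeds of different labels get label 0, the
    tie zone; everything else gets its unique winning seed label."""
    h, w = img.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (y, x), lab in seeds.items():
        cost[y, x] = img[y, x]
        label[y, x] = lab
        heapq.heappush(heap, (cost[y, x], y, x))
    while heap:
        c, y, x = heapq.heappop(heap)
        if c > cost[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                continue
            nc = max(c, img[ny, nx])
            if nc < cost[ny, nx]:
                cost[ny, nx] = nc
                label[ny, nx] = label[y, x]
                heapq.heappush(heap, (nc, ny, nx))
            elif nc == cost[ny, nx] and label[ny, nx] != label[y, x]:
                label[ny, nx] = 0  # equally good paths, different labels
    return label, cost

# A 1D relief with a ridge in the middle and a seed at each end.
img = np.array([[0.0, 1.0, 2.0, 1.0, 0.0]])
label, cost = tz_ift_watershed(img, {(0, 0): 1, (0, 4): 2})
```

The ridge pixel is reached at equal cost from both seeds, so it falls in the tie zone; any single-solution watershed algorithm would have to assign it arbitrarily, which is exactly the ambiguity the tie-zone transform makes explicit.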
509

New PDE models for imaging problems and applications

Calatroni, Luca January 2016 (has links)
Variational methods and Partial Differential Equations (PDEs) have been extensively employed for the mathematical formulation of a myriad of problems describing physical phenomena such as heat propagation, thermodynamic transformations and many more. In imaging, PDEs following variational principles are often considered. In their general form these models combine a regularisation and a data fitting term, balancing one against the other appropriately. Total variation (TV) regularisation is often used due to its edge-preserving and smoothing properties. In this thesis, we focus on the design of TV-based models for several different applications. We start by considering PDE models encoding higher-order derivatives to overcome well-known TV reconstruction drawbacks. Due to their high differential order and nonlinear nature, the computation of the numerical solution of these equations is often challenging. In this thesis, we propose directional splitting techniques and use Newton-type methods that, despite these numerical hurdles, render reliable and efficient computational schemes. Next, we discuss the problem of choosing the appropriate data fitting term in the case when multiple noise statistics are present in the data due, for instance, to different acquisition and transmission problems. We propose a novel variational model which encodes the different noise distributions appropriately and consistently. Balancing the effect of the regularisation against the data fitting is also crucial. To this end, we consider a learning approach which estimates the optimal ratio between the two by using training sets of examples via bilevel optimisation. Numerically, we use a combination of semismooth Newton (SSN) and quasi-Newton methods to solve the problem efficiently. Finally, we consider TV-based models in the framework of graphs for image segmentation problems.
Here, spectral properties combined with matrix completion techniques are needed to overcome the computational limitations due to the large amount of image data. Further, a semi-supervised technique for the measurement of the segmented region by means of the Hough transform is proposed.
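The building block recurring throughout the thesis, a TV regulariser balanced against a data-fidelity term, can be sketched as plain gradient descent on the smoothed ROF energy E(u) = Σ sqrt(|∇u|² + ε²) + (λ/2)‖u − f‖². The parameter values are illustrative, and this simple explicit scheme stands in for the semismooth and quasi-Newton solvers the thesis actually develops:

```python
import numpy as np

def grad(u):
    """Forward differences with periodic wrap."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(px, py):
    """Backward-difference divergence (negative adjoint of `grad`)."""
    return px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)

def rof_energy(u, f, lam, eps):
    ux, uy = grad(u)
    return np.sum(np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)) \
        + 0.5 * lam * np.sum((u - f) ** 2)

def tv_denoise(f, lam=0.2, eps=0.1, tau=0.01, iters=300):
    """Explicit gradient descent on the smoothed ROF energy."""
    u = f.copy()
    for _ in range(iters):
        ux, uy = grad(u)
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        u -= tau * (lam * (u - f) - div(ux / norm, uy / norm))
    return u

rng = np.random.default_rng(7)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                     # a step edge to be preserved
f = clean + 0.2 * rng.normal(size=clean.shape)
u = tv_denoise(f)
```

Raising λ trusts the data more, lowering it smooths more; the bilevel learning approach described above is precisely a principled way of choosing that ratio from training examples.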
510

Automated assessment of cardiac morphology and function : An integrated B-spline framework for real-time segmentation and tracking of the left ventricle / Caractérisation automatique de la morphologie et de la fonction cardiaque : Une cadre B-spline intégré pour la segmentation et le suivi en temps réel du ventricule gauche

Barbosa, Daniel 28 October 2013 (has links)
L’objectif principal de cette thèse est le développement de techniques de segmentation et de suivi totalement automatisées du ventricule gauche (VG) en RT3DE. Du fait de la nature difficile et complexe des données RT3DE, l’application directe des algorithmes classiques de vision par ordinateur est le plus souvent impossible. Les solutions proposées ont donc été formalisées et implémentées de sorte à satisfaire les contraintes suivantes : elles doivent permettre une analyse complètement automatique (ou presque) et le temps de calcul nécessaire doit être faible afin de pouvoir fonctionner en temps réel pour une utilisation clinique optimale. Dans ce contexte, nous avons donc proposé un nouveau cadre ou les derniers développements en segmentation d’images par ensembles de niveaux peuvent être aisément intégrés, tout en évitant les temps de calcul importants associés à ce type d’algorithmes. La validation clinique de cette approche a été effectuée en deux temps. Tout d’abord, les performances des outils développés ont été évaluées dans un contexte global se focalisant sur l’utilisation en routine clinique. Dans un second temps, la précision de la position estimée du contour du ventricule gauche a été mesurée. Enfin, les méthodes proposées ont été intégrées dans une suite logicielle utilisée à des fins de recherche. Afin de permettre une utilisation quotidienne efficace, des solutions conviviales ont été proposées incluant notamment un outil interactif pour corriger la segmentation du VG. / The fundamental goal of the present thesis was the development of automatic strategies for left ventricular (LV) segmentation and tracking in RT3DE data. Given the challenging nature of RT3DE data, classical computer vision algorithms often face complications when applied to ultrasound. 
Furthermore, the proposed solutions were formalized and built to respect the following requirements: they should allow (nearly) fully automatic analysis and their computational burden should be low, thus enabling real-time processing for optimal online clinical use. With this in mind, we have proposed a novel segmentation framework where the latest developments in level-set-based image segmentation algorithms could be straightforwardly integrated, while avoiding the heavy computational burden often associated with level-set algorithms. Furthermore, a strong validation component was included in order to assess the performance of the proposed algorithms in realistic scenarios comprising clinical data. First, the performance of the developed tools was evaluated from a global perspective, focusing on its use in clinical daily practice. Secondly, also the spatial accuracy of the estimated left ventricular boundaries was assessed. As a final step, we aimed at the integration of the developed methods in an in-house developed software suite used for research purposes. This included user-friendly solutions for efficient daily use, namely user interactive tools to adjust the segmented left ventricular boundaries.
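The family of region-driven level-set methods the framework integrates can be illustrated with a stripped-down Chan-Vese-style iteration; the B-spline machinery and the real-time concerns of the thesis are omitted, the curvature term is dropped, and all parameters are illustrative:

```python
import numpy as np

def region_levelset(img, iters=50, dt=0.5):
    """Minimal region-competition level set (Chan-Vese without the
    curvature term): each pixel pushes phi up if its intensity resembles
    the inside mean c1 more than the outside mean c2."""
    h, w = img.shape
    phi = -np.ones((h, w))
    phi[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0  # box initialization
    for _ in range(iters):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        phi += dt * ((img - c2) ** 2 - (img - c1) ** 2)
    return phi

# Synthetic "chamber": a bright disc on a dark background.
yy, xx = np.mgrid[:32, :32]
disc = (((yy - 16) ** 2 + (xx - 16) ** 2) <= 10 ** 2).astype(float)
phi = region_levelset(disc)
seg = phi > 0
```

Real RT3DE data are far noisier than this toy image, which is why the thesis wraps such an evolution in a B-spline parameterisation and adds the interactive correction tools mentioned above.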
