About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Normalized Cut Approximations

Monroe, William Stonewall 01 May 2011 (has links)
Image segmentation is an important task in computer vision and image understanding. Graph cuts have been shown to be useful in image segmentation problems: using a criterion for segmentation optimality, they can obtain a segmentation without relying heavily on a priori information about the specific type of object. Discussed here are a few approximations to the Normalized Cut criterion, whose exact minimization has been shown to be NP-hard. Two Normalized Cut approximation algorithms have been proposed previously, and a third is proposed here that accomplishes the approximation by a method similar to one of the previous algorithms while being more efficient than either of the previously proposed approximations.
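For context, the two-way Normalized Cut is commonly approximated by the spectral relaxation of Shi and Malik, which solves a generalized eigenvalue problem on the graph Laplacian. The abstract does not specify which approximations the thesis studies, so the sketch below is only a minimal illustration of that standard relaxation, assuming a precomputed pixel affinity matrix W.

    import numpy as np
    from scipy.linalg import eigh

    def ncut_bipartition(W):
        """Approximate two-way Normalized Cut of a graph with affinity matrix W.

        Solves the relaxed problem (D - W) y = lambda * D y and thresholds the
        eigenvector with the second-smallest eigenvalue (Shi & Malik style).
        """
        d = W.sum(axis=1)
        D = np.diag(d)
        L = D - W                            # unnormalized graph Laplacian
        vals, vecs = eigh(L, D)              # generalized symmetric eigenproblem, ascending
        fiedler = vecs[:, 1]                 # second-smallest generalized eigenvector
        return fiedler > np.median(fiedler)  # simple median split into two groups

    # Toy example: two well-separated clusters of 1-D points.
    pts = np.concatenate([np.random.randn(20), np.random.randn(20) + 10.0])
    W = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / 2.0)
    print(ncut_bipartition(W).astype(int))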
2

Geometric Scene Labeling for Long-Range Obstacle Detection

Hillgren, Patrik January 2015 (has links)
Autonomous driving, or self-driving vehicles, refers to vehicles that know their environment and perform driving manoeuvres without instructions from a driver. The concept has been around for decades but has improved significantly in recent years as research in this area has made substantial progress. Benefits of autonomous driving include the possibility of decreasing the number of accidents in traffic and thereby saving lives. A major challenge in autonomous driving is to acquire 3D information about, and relations between, all objects in the surrounding traffic. This is referred to as spatial perception. Stereo camera systems have become a central sensor module for advanced driver assistance systems and autonomous driving. For object detection and measurement at large distances, stereo vision encounters difficulties: objects are small, have low contrast, and image noise is present. Having an accurate perception of the environment at large distances is, however, of high interest for many applications, especially autonomous driving. This thesis proposes a method that tries to increase the range at which generic objects are first detected using a given stereo camera setup. Objects are represented by planes in 3D space. The input image is segmented into the various objects and the 3D plane parameters are estimated jointly; the plane parameters are estimated directly from the stereo image pairs. In particular, this thesis investigates methods for introducing geometric constraints into the segmentation or labeling task, i.e., assigning each considered pixel in the image to a plane. The methods provided in this thesis show that, despite the difficulties at large distances, it is possible to exploit planar primitives in 3D space for obstacle detection at distances where other methods fail. / An autonomous car has an understanding of its surroundings and can, based on that, make decisions about how it should be manoeuvred. The concept of self-driving cars has existed for decades but has developed rapidly in recent years as cheaper computing power has become more readily available. Benefits of autonomous cars include reducing the number of traffic accidents and thereby saving lives. One of the biggest challenges with autonomous cars is obtaining 3D information and relations between the objects in the surrounding traffic environment. This is called spatial perception and involves detecting all objects and assigning a correct position to them. Stereo camera systems have taken on a central role in advanced driver assistance systems and autonomous cars. For detection of objects at large distances, stereo systems encounter difficulties, including very small objects, low contrast, and the presence of image noise. Having an accurate perception at large distances is, however, vital for many applications, not least autonomous driving. This thesis proposes a method that tries to increase the distance at which objects are first detected. Objects are represented by planes in 3D space. Images from stereo pairs are segmented into the various objects while plane parameters are estimated jointly; the plane parameters are estimated directly from the stereo image pairs. This thesis investigates methods for introducing geometric constraints into the segmentation task. The methods presented show that, despite the high level of noise at large distances, it is possible to estimate geometric primitives strong enough to enable detection of objects at distances where other methods fail.
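The thesis estimates plane parameters jointly with the segmentation, directly from the stereo pairs; the sketch below shows only a much simpler, related building block rather than the proposed method: back-projecting a disparity map under an assumed pinhole stereo model (the focal length f, baseline B, and principal point cx, cy are placeholder parameters) and fitting a single 3D plane with RANSAC.

    import numpy as np

    def disparity_to_points(disp, f, B, cx, cy):
        """Back-project a disparity map to 3D points with a pinhole stereo model.
        Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f; invalid (zero) disparities are skipped."""
        v, u = np.nonzero(disp > 0)
        Z = f * B / disp[v, u]
        X = (u - cx) * Z / f
        Y = (v - cy) * Z / f
        return np.column_stack([X, Y, Z])

    def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
        """Fit a plane n.x + d = 0 to 3D points with RANSAC; returns (n, d, inlier mask).
        tol is an assumed inlier distance threshold in the units of the points."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        best_plane = None
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:                      # degenerate (collinear) sample
                continue
            n = n / norm
            d = -n @ sample[0]
            inliers = np.abs(points @ n + d) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_plane = inliers, (n, d)
        return best_plane[0], best_plane[1], best_inliers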
3

Planar segmentation of range images

Muller, Simon Adriaan 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: Range images are images that store at each pixel the distance between the sensor and a particular point in the observed scene, instead of the colour information. They provide a convenient storage format for 3-D point cloud information captured from a single point of view. Range image segmentation is the process of grouping the pixels of a range image into regions of points that belong to the same surface. Segmentations are useful for many applications that require higher-level information, and with range images they also represent a significant step towards complete scene reconstruction. This study considers the segmentation of range images into planar surfaces. It discusses the theory and also implements and evaluates some current approaches found in the literature. The study then develops a new approach based on the theory of graph cut optimization, which has been successfully applied to various other image processing tasks but, according to a search of the literature, has not otherwise been used for segmenting range images. This new approach is notable for its strong guarantees in optimizing a specific energy function that has a rigorous theoretical underpinning for handling noise in images. It proves to be very robust to noise and to different values of the few parameters that need to be trained. Results are evaluated quantitatively using a standard evaluation framework and datasets that allow us to compare against various other approaches found in the literature. We find that our approach delivers results that are competitive with the current state of the art, and it can easily be applied to images captured with different techniques that present varying noise and processing challenges. / AFRIKAANS SUMMARY: Range images are images that store, for each pixel, the distance between the sensor and a specific point in the observed scene instead of the colour. They provide a convenient storage format for 3-D point clouds captured from a single viewpoint. Range image segmentation is the process by which the pixels of a range image are divided into regions, so that points are grouped together if they lie on the same surface. Segmentation is useful for various applications that require higher-level information and, in the case of range images, represents a significant step towards complete scene reconstruction. This study investigates segmentation in which range images are divided into planar surfaces. It discusses the theory, and also implements and evaluates some of the current techniques found in the literature. The study then develops a new technique based on the theory of graph cut optimization, which has been successfully applied to various other image processing problems but, as far as a study of the literature shows, has not yet been used to segment range images. This new approach is notable for its strong guarantees in optimizing a specific energy function with a sound theoretical foundation for handling noise in images. The technique proves to be robust to noise as well as to the choice of values for the few parameters that need to be trained. Results are evaluated quantitatively using a standard evaluation framework and datasets that allow us to compare this technique with other techniques in the literature. We find that our technique delivers results that are competitive with the current state of the art and that it can easily be applied to images captured by various techniques, even though they present different noise types and processing challenges.
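The thesis formulates planar segmentation as graph-cut energy minimization. As a hedged illustration only, the sketch below sets up the same kind of energy (a point-to-plane data term plus a Potts smoothness term on a 4-connected grid) but minimizes it with simple Iterated Conditional Modes rather than the max-flow machinery used in the thesis; the smoothness weight and plane list are placeholders.

    import numpy as np

    def plane_data_cost(points, planes):
        """points: (H, W, 3) range image as 3D points; planes: list of (n, d) with |n| = 1.
        Returns an (H, W, L) array of point-to-plane distances used as the data term."""
        H, W, _ = points.shape
        costs = np.empty((H, W, len(planes)))
        for k, (n, d) in enumerate(planes):
            costs[:, :, k] = np.abs(points @ n + d)
        return costs

    def icm_potts(data_cost, lam=1.0, n_sweeps=5):
        """Iterated Conditional Modes for E(f) = sum_p D_p(f_p) + lam * sum_{pq} [f_p != f_q].
        A simple local minimizer standing in for alpha-expansion style graph cuts."""
        H, W, L = data_cost.shape
        labels = data_cost.argmin(axis=2)          # initialize from the data term alone
        for _ in range(n_sweeps):
            for y in range(H):
                for x in range(W):
                    best, best_e = labels[y, x], np.inf
                    for k in range(L):
                        e = data_cost[y, x, k]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != k:
                                e += lam           # Potts penalty for disagreeing neighbours
                        if e < best_e:
                            best, best_e = k, e
                    labels[y, x] = best
        return labels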
4

ESTIMATION OF DEPTH FROM DEFOCUS BLUR IN VIRTUAL ENVIRONMENTS COMPARING GRAPH CUTS AND CONVOLUTIONAL NEURAL NETWORK

Prodipto Chowdhury (5931032) 17 January 2019 (has links)
Depth estimation is one of the most important problems in computer vision. It has attracted a lot of attention because it has applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one of the methods of depth estimation. In this thesis, we have researched this technique in virtual environments, and virtual datasets have been created for this purpose. In this research, we have applied graph cuts and a convolutional neural network (DfD-net) to estimate depth from defocus blur using a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance for both the natural and the virtual dataset in terms of NMAE and NRMSE. However, with regard to SSIM, the performance of graph cuts is 4% better for Middlebury than for Maya. We have trained the DfD-net using the natural dataset, the virtual dataset, and then a combination of both datasets. The network trained on the virtual dataset performed best for both datasets. The performance of graph cuts and DfD-net has been compared: graph cuts is 7% better than DfD-net in terms of SSIM for Middlebury images, while for Maya images DfD-net outperforms graph cuts by 2%. With regard to NRMSE, graph cuts and DfD-net show similar performance for Maya images; for Middlebury images, graph cuts is 1.8% better. The algorithms show no difference in performance in terms of NMAE. The time DfD-net takes to generate depth maps is 500 times less than graph cuts for Maya images and 200 times less for Middlebury images.
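The abstract reports results in NMAE and NRMSE without defining the normalization; the sketch below assumes normalization by the ground-truth depth range, which is one common choice and not necessarily the definition used in the thesis.

    import numpy as np

    def nmae(pred, gt):
        """Normalized mean absolute error between predicted and ground-truth depth maps.
        Normalization by the ground-truth range is an assumption, not the thesis's definition."""
        return np.mean(np.abs(pred - gt)) / (gt.max() - gt.min())

    def nrmse(pred, gt):
        """Normalized root-mean-square error under the same range normalization."""
        return np.sqrt(np.mean((pred - gt) ** 2)) / (gt.max() - gt.min())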
5

Vyhodnocování nádorů pomocí analýz DCE-MRI snímků / Tumor assessment using DCE-MRI image analysis

Šilhán, Jiří January 2012 (has links)
This thesis deals with the processing of data obtained by DCE-MRI, which uses magnetic resonance imaging to track the propagation of a contrast agent in the bloodstream. The patient is given a contrast agent and then a series of images of the target area is taken. The output is a set of image data and perfusion maps. The work employs a segmentation method that uses graph cuts to interactively look for the tumor, and evaluates the tumor according to its shape properties. The study of whole data sets is simplified by image fusion methods.
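The abstract says the segmented tumor is evaluated by its shape properties without listing them; as an assumption, the sketch below computes two common shape descriptors (area and circularity) from a binary segmentation mask, approximating the perimeter by a 4-connectivity boundary-pixel count.

    import numpy as np

    def shape_properties(mask):
        """Area and circularity (4*pi*area / perimeter^2) of a binary segmentation mask."""
        mask = mask.astype(bool)
        area = mask.sum()
        # A pixel is on the boundary if any 4-neighbour lies outside the mask.
        padded = np.pad(mask, 1, constant_values=False)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        perimeter = (mask & ~interior).sum()
        circularity = 4.0 * np.pi * area / max(perimeter, 1) ** 2
        return area, circularity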
6

Graph-based Methods for Interactive Image Segmentation

Malmberg, Filip January 2011 (has links)
The subject of digital image analysis deals with extracting relevant information from image data, stored in digital form in a computer. A fundamental problem in image analysis is image segmentation, i.e., the identification and separation of relevant objects and structures in an image. Accurate segmentation of objects of interest is often required before further processing and analysis can be performed. Despite years of active research, fully automatic segmentation of arbitrary images remains an unsolved problem. Interactive, or semi-automatic, segmentation methods use human expert knowledge as additional input, thereby making the segmentation problem more tractable. The goal of interactive segmentation methods is to minimize the required user interaction time, while maintaining tight user control to guarantee the correctness of the results. Methods for interactive segmentation typically operate under one of two paradigms for user guidance: (1) Specification of pieces of the boundary of the desired object(s). (2) Specification of correct segmentation labels for a small subset of the image elements. These types of user input are referred to as boundary constraints and regional constraints, respectively. This thesis concerns the development of methods for interactive segmentation, using a graph-theoretic approach. We view an image as an edge weighted graph, whose vertex set is the set of image elements, and whose edges are given by an adjacency relation among the image elements. Due to its discrete nature and mathematical simplicity, this graph based image representation lends itself well to the development of efficient, and provably correct, methods. The contributions in this thesis may be summarized as follows: Existing graph-based methods for interactive segmentation are modified to improve their performance on images with noisy or missing data, while maintaining a low computational cost. Fuzzy techniques are utilized to obtain segmentations from which feature measurements can be made with increased precision. A new paradigm for user guidance, that unifies and generalizes regional and boundary constraints, is proposed. The practical utility of the proposed methods is illustrated with examples from the medical field.
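As an illustration of the regional-constraint paradigm described above (and not of the specific methods contributed by the thesis), the sketch below propagates seed labels over an edge-weighted 4-connected grid graph along minimum-cost paths, in the spirit of shortest-path-forest segmentation; the absolute-intensity-difference edge weight is an assumption.

    import heapq
    import numpy as np

    def seeded_segmentation(image, seeds):
        """Propagate seed labels along cheapest paths in an edge-weighted grid graph.

        image: 2D float array; edge weight = absolute intensity difference.
        seeds: dict mapping (row, col) -> label (the regional constraints).
        Each pixel receives the label of the seed reachable at the lowest
        accumulated edge cost (Dijkstra over the 4-connected grid)."""
        H, W = image.shape
        cost = np.full((H, W), np.inf)
        labels = np.full((H, W), -1, dtype=int)
        heap = []
        for (y, x), lab in seeds.items():
            cost[y, x] = 0.0
            labels[y, x] = lab
            heapq.heappush(heap, (0.0, y, x))
        while heap:
            c, y, x = heapq.heappop(heap)
            if c > cost[y, x]:
                continue                      # stale queue entry
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    nc = c + abs(image[y, x] - image[ny, nx])
                    if nc < cost[ny, nx]:
                        cost[ny, nx] = nc
                        labels[ny, nx] = labels[y, x]
                        heapq.heappush(heap, (nc, ny, nx))
        return labels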
7

Segmentação de imagens digitais combinando watershed e corte normalizado em grafos / Digital image segmentation combining watershed and normalized cut

Pinto, Tiago Willian, 1985- 25 August 2018 (has links)
Advisors: Marco Antonio Garcia de Carvalho, Paulo Sérgio Martins Pedro / Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Tecnologia / Previous issue date: 2014 / Summary: In computer vision, the importance of image segmentation is comparable only to its complexity. Interpreting the semantics of an image accurately involves countless variables and conditions, which leaves a vast field open to researchers. The aim of this work is to implement an image segmentation method by combining four computational techniques: the Watershed Transform, the Hierarchical Watershed, the Contextual Spaces Algorithm, and the Normalized Cut. The Watershed Transform is an image segmentation technique from the field of Mathematical Morphology based on region growing, and an efficient way to implement it is through the Image Foresting Transform. This technique produces an over-segmentation of the image, which hinders visual interpretation of the result. One way to simplify and reduce this quantity of regions is to construct a scale space called the Hierarchical Watershed, which groups regions through a threshold that represents a characteristic of the relief. The Contextual Spaces Algorithm is a re-ranking technique used in the field of context-based image retrieval; it explores the similarity between the different objects of a collection by analyzing the context between them. The Normalized Cut is a technique that explores the degree of dissimilarity between regions and has its foundations in spectral graph theory. The Hierarchical Watershed is a multiscale approach for analyzing the watershed regions, which enables the extraction of metrics that can serve as a basis for applying the Normalized Cut. The proposal of this project is to combine these techniques, implementing a segmentation method that exploits the benefits achieved by each one, varying among different metrics of the Hierarchical Watershed with the Normalized Cut and comparing the results obtained. / Abstract: In computer vision, the importance of image segmentation is comparable only to its complexity. Interpreting the semantics of an image accurately involves many variables and conditions, which leaves a vast field open to researchers. The purpose of this work is to implement a method of image segmentation by combining four computing techniques: the Watershed Transform, the Hierarchical Watershed, the Contextual Spaces Algorithm, and the Normalized Cut. The Watershed Transform is a technique for image segmentation from the field of Mathematical Morphology based on region growing, and an efficient way to implement it is through the Image Foresting Transform. This technique produces an over-segmented image, which makes visual interpretation of the result very hard. One way to simplify and reduce the quantity of regions is to construct a scale space called the Hierarchical Watershed, which groups regions through a threshold that represents a characteristic of the relief. The Contextual Spaces Algorithm is a re-ranking technique used in the field of context-based image retrieval, and it explores the similarity between different objects in a collection by analyzing the context between them. The Normalized Cut is a technique that exploits the analysis of the degree of dissimilarity between regions and has its foundations in spectral graph theory. The Hierarchical Watershed is a multiscale approach for analyzing the regions of the watershed, which enables the extraction of metrics that can serve as a basis for applying the Normalized Cut. The purpose of this project is to combine these techniques, implementing a segmentation method that exploits the benefits achieved by each one, varying among different metrics of the Hierarchical Watershed with the Normalized Cut and comparing the results obtained. / Mestrado / Tecnologia e Inovação / Mestre em Tecnologia
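As a rough illustration of how a watershed over-segmentation can feed a region-level Normalized Cut (a sketch only, not the combination developed in the dissertation), the code below builds a region adjacency graph from a label image, weighting edges by a Gaussian on the difference of region mean intensities; the sigma parameter is an assumed placeholder, and the eigen-decomposition of the Normalized Cut itself is omitted.

    import numpy as np

    def region_adjacency_graph(labels, image, sigma=0.1):
        """Build a region adjacency graph from a watershed label image.

        labels: 2D int array of region ids 0..R-1; image: 2D float array.
        Returns an (R, R) affinity matrix with Gaussian weights on the mean-intensity
        difference of adjacent regions, suitable as input to a region-level cut."""
        R = labels.max() + 1
        sums = np.bincount(labels.ravel(), weights=image.ravel(), minlength=R)
        counts = np.bincount(labels.ravel(), minlength=R)
        means = sums / np.maximum(counts, 1)        # mean intensity per region
        W = np.zeros((R, R))
        # Adjacency from horizontally and vertically neighbouring pixel pairs.
        for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
            touching = a != b
            for i, j in zip(a[touching], b[touching]):
                w = np.exp(-((means[i] - means[j]) ** 2) / (2 * sigma ** 2))
                W[i, j] = W[j, i] = w
        return W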
8

Brain MRI segmentation for the longitudinal follow-up of regional atrophy in Alzheimer’s Disease

Petit, Clemence January 2014 (has links)
Brain atrophy measurement is increasingly important in studies of neurodegenerative diseases such as Alzheimer’s disease. From this perspective, a regional segmentation framework for magnetic resonance images has recently been developed by the team that I joined for my master thesis. It combines atlas fusion and tissue classification, and a graph-cuts optimization step is then applied to obtain the final segmentation from the combined probability maps. To begin with, neighboring constraints were integrated into the optimization step so as to prevent certain labels from being adjacent, in accordance with anatomical criteria. They were successfully tested on a restricted set of patient images that had previously presented segmentation errors. Secondly, a multigrid tissue classification was implemented in order to compensate for the effects of intensity inhomogeneities; however, visual observations on a few cases showed little improvement relative to the increased computation time. Consequently, another way of modifying the classification was investigated: an atlas-based classification was implemented and tested on both a small and a large scale. The efficiency of the proposed method was visually assessed on a few patients, especially regarding the separation between grey and white matter. The process was then applied to a database containing several hundred patients, and the results demonstrated an improved group separation based on grey matter volume, whose reduction is particularly significant in patients suffering from Alzheimer’s Disease. To conclude, several stages of the segmentation framework have been upgraded, which promises good results for future regional atrophy studies.
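The abstract describes combining atlas fusion with tissue classification before the graph-cuts step but does not detail the combination rule; the sketch below shows one simple, assumed possibility (majority voting over registered atlases weighted by intensity-based tissue probabilities), not the team's actual framework.

    import numpy as np

    def fuse_atlases_with_tissue_probs(atlas_labels, tissue_probs):
        """Combine registered atlas segmentations with tissue-classification probabilities.

        atlas_labels: (A, H, W) integer label maps from A registered atlases.
        tissue_probs: (L, H, W) per-label probability maps from an intensity classifier.
        Returns an (H, W) label map maximizing vote_fraction * tissue_probability,
        one simple fusion rule among many."""
        A, H, W = atlas_labels.shape
        L = tissue_probs.shape[0]
        votes = np.zeros((L, H, W))
        for k in range(L):
            votes[k] = (atlas_labels == k).sum(axis=0) / A   # fraction of atlases voting k
        combined = votes * tissue_probs                       # elementwise combination
        return combined.argmax(axis=0)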
9

Estimation of Defocus Blur in Virtual Environments Comparing Graph Cuts and Convolutional Neural Network

Chowdhury, Prodipto 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Depth estimation is one of the most important problems in computer vision. It has attracted a lot of attention because it has applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one of the methods of depth estimation. In this thesis, we have researched this technique in virtual environments, and virtual datasets have been created for this purpose. In this research, we have applied graph cuts and a convolutional neural network (DfD-net) to estimate depth from defocus blur using a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance for both the natural and the virtual dataset in terms of NMAE and NRMSE. However, with regard to SSIM, the performance of graph cuts is 4% better for Middlebury than for Maya. We have trained the DfD-net using the natural dataset, the virtual dataset, and then a combination of both datasets. The network trained on the virtual dataset performed best for both datasets. The performance of graph cuts and DfD-net has been compared: graph cuts is 7% better than DfD-net in terms of SSIM for Middlebury images, while for Maya images DfD-net outperforms graph cuts by 2%. With regard to NRMSE, graph cuts and DfD-net show similar performance for Maya images; for Middlebury images, graph cuts is 1.8% better. The algorithms show no difference in performance in terms of NMAE. The time DfD-net takes to generate depth maps is 500 times less than graph cuts for Maya images and 200 times less for Middlebury images.
10

Segmentation d'objets déformables en imagerie ultrasonore / Deformable object segmentation in ultra-sound images

Massich, Joan 04 December 2013 (has links)
Breast cancer is the most widespread type of cancer and the leading cause of death among women, in Western as well as in developing countries. Medical imaging plays a key role in reducing breast cancer mortality by facilitating early detection through screening, diagnosis, and guided biopsy. Although Digital Mammography (DM) remains the reference among existing examination methods, ultrasound has proven its place as a complementary modality: its images provide information for differentiating between benign and malignant solid lesions, which cannot be detected by DM. Despite their clinical utility, ultrasound images are noisy, which compromises the diagnoses that radiologists make from them. This is why one of the primary objectives of medical imaging researchers is to improve image quality and methodologies in order to simplify and systematize the reading and interpretation of these images. The proposed method treats the segmentation process as the minimization of a multi-label probabilistic structure, using a Max-Flow/Min-Cut minimization algorithm to assign the appropriate label, from a set of labels representing tissue types, to every pixel of the image. The image is divided into adjacent regions so that all pixels of the same region receive the same label at the end of the process. Stochastic models for the labeling are built from a training dataset. / Breast cancer is the second most common type of cancer and the leading cause of cancer death among females, both in Western and in economically developing countries. Medical imaging is key for early detection, diagnosis and treatment follow-up. Although Digital Mammography (DM) remains the reference imaging modality, Ultra-Sound (US) imaging has proven to be a successful adjunct modality for breast cancer screening, especially as a consequence of the discriminative capabilities that US offers for differentiating between solid lesions that are benign or malignant. Despite its usability, US suffers from its inherent noise, which compromises radiologists' diagnostic capabilities. Hence the research interest in providing radiologists with Computer Aided Diagnosis (CAD) tools to assist them during decision making. This thesis analyzes the current strategies for segmenting breast lesions in US data in order to infer meaningful information to be fed to a CAD system, and proposes a fully automatic methodology for generating accurate segmentations of breast lesions in US data with low false positive rates.
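The résumé describes assigning tissue-type labels to image regions by minimizing a multi-label probabilistic energy with a Max-Flow/Min-Cut algorithm, using stochastic models learned from training data. As a hedged sketch of only the data-term side (the pairwise smoothness and the graph-cut optimization are omitted), the code below fits per-label Gaussian intensity models from training pixels and assigns each region its most likely label.

    import numpy as np

    def fit_gaussian_models(train_values, train_labels, n_labels):
        """Per-label Gaussian intensity models (mean, variance) from labeled training pixels."""
        models = []
        for k in range(n_labels):
            v = train_values[train_labels == k]
            models.append((v.mean(), v.var() + 1e-6))   # small variance floor for stability
        return models

    def label_regions(region_means, models):
        """Assign each region the label with the highest Gaussian likelihood of its mean
        intensity (data term only; no pairwise smoothness or graph cut)."""
        region_means = np.asarray(region_means)
        costs = np.stack([
            0.5 * np.log(2 * np.pi * var) + (region_means - mu) ** 2 / (2 * var)
            for mu, var in models
        ])                                   # negative log-likelihood per (label, region)
        return costs.argmin(axis=0)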
