71

Restoring the balance between stuff and things in scene understanding

Caesar, Holger January 2018 (has links)
Scene understanding is a central field in computer vision that attempts to detect objects in a scene and reason about their spatial, functional and semantic relations. While many works focus on things (objects with a well-defined shape), less attention has been given to stuff classes (amorphous background regions). However, stuff classes are important, as they allow us to explain many aspects of an image, including the scene type, the thing classes likely to be present, and the physical attributes of all objects in the scene. The goal of this thesis is to restore the balance between stuff and things in scene understanding. In particular, we investigate how the recognition of stuff differs from that of things and develop methods suitable for dealing with both. We use stuff to find things and annotate a large-scale dataset to study stuff and things in context. First, we present two methods for semantic segmentation of stuff and things. Most methods require manual class weighting to counter imbalanced class-frequency distributions, particularly on datasets with both stuff and thing classes. We develop a novel joint calibration technique that takes into account class imbalance, class competition and overlapping regions by calibrating directly for the pixel-level evaluation criterion. The second method shows how to unify the advantages of region-based approaches (accurately delineated object boundaries) and fully convolutional approaches (end-to-end training) in a universal framework that is equally suited to stuff and things. Second, we propose to help weakly supervised object localization for classes where location annotations are not available by transferring things-and-stuff knowledge from a source set with available annotations. This is particularly important if we want to scale scene understanding to real-world applications with thousands of classes, without having to exhaustively annotate millions of images. Finally, we present COCO-Stuff, the largest existing dataset with dense stuff and thing annotations. Existing datasets are much smaller and were made with expensive polygon-based annotation. We use a very efficient stuff annotation protocol to densely annotate 164K images. We provide a detailed analysis of this new dataset and visualize how stuff and things co-occur spatially in an image. Based on visual and linguistic analysis, we revisit the questions of whether stuff or things are easier to detect, and which is more important.
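
To make the class-imbalance problem concrete, the sketch below computes the kind of manual class weights (median-frequency balancing) that the joint calibration technique is designed to replace. The weighting scheme and toy labels are our own illustration, not the thesis's method, and the sketch assumes equal-sized images.

    import numpy as np

    def median_frequency_weights(label_maps, num_classes):
        """label_maps: list of equal-sized 2D integer arrays of class indices."""
        pixel_counts = np.zeros(num_classes, dtype=np.int64)  # pixels per class
        image_counts = np.zeros(num_classes, dtype=np.int64)  # images containing the class
        for lm in label_maps:
            ids, counts = np.unique(lm, return_counts=True)
            pixel_counts[ids] += counts
            image_counts[ids] += 1
        # Class frequency = its pixels / total pixels of the images it appears in.
        freq = pixel_counts / np.maximum(image_counts * label_maps[0].size, 1)
        weights = np.median(freq[freq > 0]) / np.maximum(freq, 1e-12)
        weights[pixel_counts == 0] = 0.0  # classes absent from the training data
        return weights

    # Toy "stuff-heavy" image: sky dominates, a small thing (car) is rare.
    lm = np.zeros((100, 100), dtype=np.int64)  # class 0: sky
    lm[60:, :] = 1                             # class 1: road
    lm[70:80, 40:45] = 2                       # class 2: car
    print(median_frequency_weights([lm], num_classes=3))  # rare class 2 gets the largest weight
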
72

Réseaux de neurones convolutifs pour la segmentation sémantique et l'apprentissage d'invariants de couleur / Convolutional neural networks for semantic segmentation and color constancy

Fourure, Damien 12 December 2017 (has links)
Computer vision is an interdisciplinary field that investigates how computers can gain a high-level understanding from digital images or videos. In artificial intelligence, and more precisely in machine learning, the field in which this thesis is positioned, computer vision involves extracting characteristics from images and then generalizing concepts related to these characteristics. This field of research has become very popular in recent years, particularly thanks to the results of the convolutional neural networks that form the basis of so-called deep learning methods. Today, neural networks make it possible, among other things, to recognize the different objects present in an image, to generate very realistic images, or even to beat the champions at the game of Go. Their performance is not limited to the image domain, since they are also used in other fields such as natural language processing (e.g. machine translation) or sound recognition. In this thesis, we study convolutional neural networks in order to develop architectures and loss functions specialized for low-level tasks (color constancy) as well as high-level tasks (semantic segmentation). Color constancy is the ability of the human visual system to perceive constant colors for a surface despite changes in the spectrum of the illumination (lighting changes). In computer vision, the main approach consists in estimating the color of the illuminant and then suppressing its impact on the perceived color of objects. We approach the task of color constancy with neural networks by developing a new architecture composed of a subsampling operator inspired by traditional methods. Our experiments show that our method obtains performance competitive with the state of the art. Nevertheless, our architecture requires a large amount of training data. In order to partially correct this problem and improve the training of neural networks, we present several techniques for artificial data augmentation. We also make two contributions on a high-level task: semantic segmentation. This task, which consists of assigning a semantic class to each pixel of an image, is a challenge in computer vision because of its complexity. On the one hand, it requires many training examples whose ground truths are costly to obtain. On the other hand, it requires adapting traditional convolutional neural networks in order to obtain a so-called dense prediction, i.e., a prediction for each pixel present in the input image. To address the difficulty of acquiring training data, we propose an approach that exploits several databases annotated with different label sets at the same time. To do this, we define a selective loss function that allows a convolutional neural network to be trained from data drawn from multiple databases. We also develop a self-context approach that captures the correlations between the labels of the different databases. Finally, we present our third contribution: a new convolutional neural network architecture called GridNet, specialized for semantic segmentation. Unlike traditional networks, which are implemented with a single path from the input (image) to the output (prediction), our architecture is implemented as a 2D grid allowing several interconnected streams to operate at different resolutions. In order to exploit all the paths of the grid, we propose a training technique inspired by dropout. In addition, we empirically show that our architecture generalizes many well-known state-of-the-art networks. We conclude with an analysis of the empirical results obtained with our architecture which, although trained from scratch, achieves very good performance, exceeding popular approaches that are often pre-trained.
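
A rough illustration of the selective loss idea described above: the network scores the union of all datasets' label sets, but each sample is penalized only on the classes its source dataset annotates. All names, shapes and the subset mechanism are assumptions on our part; the exact formulation in the thesis may differ.

    import torch
    import torch.nn.functional as F

    def selective_cross_entropy(logits, target, class_subset):
        """
        logits:       (N, C_total, H, W) scores over the union of all label sets.
        target:       (N, H, W) indices into class_subset.
        class_subset: 1D LongTensor of the global class ids this dataset annotates.
        """
        sub_logits = logits[:, class_subset, :, :]  # keep only this dataset's classes
        return F.cross_entropy(sub_logits, target)  # softmax renormalized over the subset

    # Usage with a hypothetical 10-class union; dataset A annotates classes 0-3.
    logits = torch.randn(2, 10, 8, 8, requires_grad=True)
    subset_a = torch.tensor([0, 1, 2, 3])
    target_a = torch.randint(0, 4, (2, 8, 8))
    loss = selective_cross_entropy(logits, target_a, subset_a)
    loss.backward()
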
73

Apprentissage autosupervisé de modèles prédictifs de segmentation à partir de vidéos / Self-supervised learning of predictive segmentation models from video

Luc, Pauline 25 June 2019 (has links)
Predictive models of the environment hold promise for allowing the transfer of recent reinforcement learning successes to many real-world contexts, by decreasing the number of interactions needed with the real world. Video prediction has been studied in recent years as a particular case of such predictive models, with broad applications in robotics and navigation systems. While RGB frames are easy to acquire and hold a lot of information, they are extremely challenging to predict and cannot be directly interpreted by downstream applications. Here we introduce the novel tasks of predicting the semantic and instance segmentation of future frames. The abstract feature spaces we consider are better suited for recursive prediction and allow us to develop models which convincingly predict segmentations up to half a second into the future. Predictions are more easily interpretable by downstream algorithms and remain rich, spatially detailed and easy to obtain, relying on state-of-the-art segmentation methods. We first focus on the task of semantic segmentation, for which we propose a discriminative approach based on adversarial training. Then, we introduce the novel task of predicting future semantic segmentation and develop an autoregressive convolutional neural network to address it. Finally, we extend our method to the more challenging problem of predicting future instance segmentation, which additionally segments out individual objects. To deal with a varying number of output labels per image, we develop a predictive model in the space of high-level convolutional image features of the Mask R-CNN instance segmentation model. We are able to produce visually pleasing segmentations at high resolution for complex scenes involving a large number of instances, with convincing accuracy up to half a second ahead.
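
The core mechanism behind such models is an autoregressive rollout: predict one future segmentation, feed it back as input, repeat. The sketch below shows that loop with a toy convolutional predictor of our own invention; the models in the thesis operate on richer multi-scale features (e.g., Mask R-CNN features for instance segmentation).

    import torch
    import torch.nn as nn

    class SegPredictor(nn.Module):
        def __init__(self, num_classes, context=4):
            super().__init__()
            # Maps `context` past per-class score maps to the next one.
            self.net = nn.Sequential(
                nn.Conv2d(context * num_classes, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, num_classes, 3, padding=1),
            )
            self.num_classes = num_classes

        def rollout(self, past, steps):
            """past: (N, context*num_classes, H, W); returns `steps` future score maps."""
            preds = []
            for _ in range(steps):
                nxt = self.net(past)  # predict S_{t+1}
                preds.append(nxt)
                past = torch.cat([past[:, self.num_classes:], nxt], dim=1)  # slide the window
            return preds

    model = SegPredictor(num_classes=19)
    future = model.rollout(torch.randn(1, 4 * 19, 32, 32), steps=3)  # three frames ahead
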
74

Segmentation and structuring of video documents for indexing applications

Tapu, Ruxandra Georgina 07 December 2012 (has links) (PDF)
Recent advances in telecommunications, combined with the development of image and video processing and acquisition devices, have led to spectacular growth in the amount of visual content stored, transmitted and exchanged over the Internet. In this context, elaborating efficient tools to access, browse and retrieve video content has become a crucial challenge. In Chapter 2 we introduce and validate a novel shot-boundary detection algorithm able to identify both abrupt and gradual transitions. The technique is based on an enhanced graph partition model, combined with multi-resolution analysis and a non-linear filtering operation. The overall computational complexity is reduced by implementing a two-pass strategy. In Chapter 3 the video abstraction problem is considered: we have developed a keyframe representation system that extracts a variable number of images from each detected shot, depending on the variation of the visual content. Chapter 4 deals with the issue of high-level semantic segmentation into scenes. Here, a novel scene/DVD-chapter detection method is introduced and validated: spatio-temporally coherent shots are clustered into the same scene based on a set of temporal constraints, adaptive thresholds and neutralized shots. Chapter 5 considers the issue of object detection and segmentation. Here we introduce a novel spatio-temporal visual saliency system based on region contrast, interest-point correspondence, geometric transforms, motion-class estimation and the temporal consistency of regions. The proposed technique is extended to 3D videos by representing the stereoscopic perception as a 2D video and its associated depth.
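
For context, here is a deliberately simple shot-boundary detector: frame-to-frame color-histogram distance with an adaptive local threshold. It conveys only the basic signal that such detectors threshold; the enhanced graph partition model with multi-resolution analysis used in the thesis is considerably more robust.

    import numpy as np

    def histogram(frame, bins=16):
        h, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins,) * 3, range=((0, 256),) * 3)
        return h.ravel() / h.sum()

    def detect_cuts(frames, window=10, k=3.0):
        dists = np.array([np.abs(histogram(a) - histogram(b)).sum() / 2
                          for a, b in zip(frames, frames[1:])])
        cuts = []
        for i, d in enumerate(dists):
            local = dists[max(0, i - window): i + window]
            if d > local.mean() + k * local.std():  # adaptive, not global, threshold
                cuts.append(i + 1)                  # cut between frames i and i+1
        return cuts

    # Synthetic demo: 20 dark frames then 20 bright ones -> one cut at index 20.
    frames = [np.full((48, 64, 3), 30, np.uint8)] * 20 + \
             [np.full((48, 64, 3), 220, np.uint8)] * 20
    print(detect_cuts(frames))  # expected: [20]
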
75

Living in a dynamic world : semantic segmentation of large scale 3D environments

Miksik, Ondrej January 2017 (has links)
As we navigate the world, for example when driving a car from our home to the workplace, we continuously perceive the 3D structure of our surroundings and intuitively recognise the objects we see. Such capabilities help us in our everyday lives and enable free and accurate movement even in completely unfamiliar places. We largely take these abilities for granted, but for robots, the task of understanding large outdoor scenes remains extremely challenging. In this thesis, I develop novel algorithms for (near) real-time dense 3D reconstruction and semantic segmentation of large-scale outdoor scenes from passive cameras. Motivated by "smart glasses" for partially sighted users, I show how such modeling can be integrated into an interactive augmented reality system which puts the user in the loop and allows her to physically interact with the world to learn personalized, semantically segmented, dense 3D models. In the next part, I show how sparse but very accurate 3D measurements can be incorporated directly into the dense depth estimation process and propose a probabilistic model for incremental dense scene reconstruction. To relax the assumption of a stereo camera, I address dense 3D reconstruction in its monocular form and show how the local model can be improved by joint optimization over depth and pose. The world around us is not stationary; however, reconstructing dynamically moving and potentially non-rigidly deforming texture-less objects typically requires "contour correspondences" for shape-from-silhouette methods. Hence, I propose a video segmentation model which encodes a single object instance as a closed curve, maintains correspondences across time and provides very accurate segmentation close to object boundaries. Finally, instead of evaluating performance in an isolated setup (IoU scores), which does not measure the impact on decision-making, I show how semantic 3D reconstruction can be incorporated into standard Deep Q-learning to improve the decision-making of agents navigating complex 3D environments.
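
As a toy illustration of the incremental fusion such systems perform, the sketch below maintains a running per-voxel label distribution and refines it with each per-frame semantic observation (Bayesian fusion in log space). The grid, indices and probabilities are placeholders, not the thesis's model.

    import numpy as np

    class SemanticVoxelGrid:
        def __init__(self, shape, num_classes):
            # Log-probabilities, initialised to a uniform prior per voxel.
            self.logp = np.full(shape + (num_classes,), -np.log(num_classes))

        def integrate(self, voxel_indices, class_probs):
            """Fuse one frame: multiply (add in log space), then renormalise."""
            self.logp[voxel_indices] += np.log(np.clip(class_probs, 1e-6, 1.0))
            lse = np.logaddexp.reduce(self.logp[voxel_indices], axis=-1, keepdims=True)
            self.logp[voxel_indices] -= lse

        def labels(self):
            return self.logp.argmax(axis=-1)

    grid = SemanticVoxelGrid((8, 8, 8), num_classes=3)
    idx = (np.array([1, 1]), np.array([2, 3]), np.array([4, 4]))  # two observed voxels
    obs = np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]])            # per-voxel class probabilities
    grid.integrate(idx, obs)
    print(grid.labels()[1, 2, 4], grid.labels()[1, 3, 4])         # -> 0 2
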
76

Active Learning for Road Segmentation using Convolutional Neural Networks

Sörsäter, Michael January 2018 (has links)
In recent years, the development of convolutional neural networks has enabled high-performing semantic segmentation models. Generally, these deep-learning-based segmentation methods require a large amount of annotated data, and acquiring such annotated data for semantic segmentation is a tedious and expensive task. Within machine learning, active learning involves the selection of new data so as to limit the amount of annotated data needed. In active learning, the model is trained for several iterations, and additional samples that the model is uncertain about are selected; the model is then retrained with the additional samples and the process is repeated. In this thesis, an active learning framework has been applied to road segmentation, i.e., semantic segmentation of objects related to road scenes. The uncertainty of the samples is estimated with Monte Carlo dropout: several dropout masks are applied to the model and the variance across the resulting predictions is captured, serving as an estimate of the model's uncertainty (a minimal sketch of this procedure follows below). Other metrics for ranking uncertainty evaluated in this work are a baseline method that selects samples randomly, the entropy of the default predictions, and three variations/extensions of Monte Carlo dropout. Both the active learning framework and the uncertainty estimation are implemented in the thesis. Monte Carlo dropout performs slightly better than the baseline on 3 out of 4 metrics. Entropy outperforms all other implemented methods on all metrics. The three additional methods do not perform better than Monte Carlo dropout. An analysis of what kind of uncertainty Monte Carlo dropout captures is performed, together with a comparison of the samples selected by the baseline and by Monte Carlo dropout. Future development and possible improvements are also discussed.
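
A minimal sketch of the Monte Carlo dropout scoring, assuming a PyTorch model with dropout layers: keep dropout active at inference, run T stochastic passes, and rank unlabeled samples by prediction variance (or predictive entropy). The stand-in model and scores are illustrative only.

    import torch
    import torch.nn as nn

    model = nn.Sequential(                  # stand-in segmentation head
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Dropout2d(p=0.5),                # the layer MC dropout samples from
        nn.Conv2d(16, 5, 3, padding=1),
    )

    def mc_dropout_scores(model, image, T=20):
        model.train()                       # keeps dropout stochastic at inference
        with torch.no_grad():
            probs = torch.stack([model(image).softmax(dim=1) for _ in range(T)])
        mean = probs.mean(dim=0)                                    # (N, C, H, W)
        variance = probs.var(dim=0).sum(dim=1)                      # MC dropout score
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=1)  # predictive entropy
        return variance.mean().item(), entropy.mean().item()

    # Rank unlabeled images by either score and send the top-k for annotation.
    var_score, ent_score = mc_dropout_scores(model, torch.randn(1, 3, 64, 64))
    print(f"variance={var_score:.4f}  entropy={ent_score:.4f}")
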
77

Information fusion for scene understanding / Fusion d'informations pour la compréhesion de scènes

Xu, Philippe 28 November 2014 (has links)
Image understanding is a key issue in modern robotics, computer vision and machine learning. In particular, driving-scene understanding is very important in the context of advanced driver assistance systems for intelligent vehicles. In order to recognize the large number of objects that may be found on the road, several sensors and decision algorithms are necessary. To make the most of existing state-of-the-art methods, we address the issue of scene understanding from an information fusion point of view. The combination of many diverse detection modules, which may deal with distinct classes of objects and different data representations, is handled by reasoning in the image space. We consider image understanding at two levels: object detection and semantic segmentation. The theory of belief functions is used to model and combine the outputs of these detection modules. We emphasize the need for a fusion framework flexible enough to easily include new classes, new sensors and new object detection algorithms. In this thesis, we propose a general method to model the outputs of classical machine learning techniques as belief functions. Next, we apply our framework to the combination of pedestrian detectors using the Caltech Pedestrian Detection Benchmark. The KITTI Vision Benchmark Suite is then used to validate our approach in a semantic segmentation context using multi-modal information.
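
Since the fusion machinery rests on the theory of belief functions, a direct (unoptimised) implementation of Dempster's rule of combination may help fix ideas. The detectors, frame of discernment and mass values below are invented for illustration.

    from itertools import product

    def dempster_combine(m1, m2):
        """m1, m2: dicts mapping frozenset (focal element) -> mass, each summing to 1."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass committed to contradictory hypotheses
        if conflict >= 1.0:
            raise ValueError("total conflict: sources are irreconcilable")
        return {s: w / (1.0 - conflict) for s, w in combined.items()}

    # Two detectors reasoning over {pedestrian, vehicle, background}:
    P, V, B = "pedestrian", "vehicle", "background"
    m_cam = {frozenset({P}): 0.6, frozenset({P, V}): 0.3, frozenset({P, V, B}): 0.1}
    m_lidar = {frozenset({V}): 0.5, frozenset({P, V, B}): 0.5}
    print(dempster_combine(m_cam, m_lidar))
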
78

Melanoma Diagnostics Using Fully Convolutional Networks on Whole Slide Images

Phillips, Adon January 2017 (has links)
Semantic segmentation as an approach to recognizing and localizing objects within an image is a major research area in computer vision. Now that convolutional neural networks are increasingly used for such tasks, there have been many improvements in grand challenge results, and many new research opportunities in previously untenable areas. Using fully convolutional networks, we have developed a semantic segmentation pipeline for the identification of melanocytic tumor regions, epidermis, and dermis layers in whole slide microscopy images of cutaneous melanoma or cutaneous metastatic melanoma. This pipeline covers the entire process from annotating and preparing a dataset from the output of a tissue slide scanner to patch-based training and inference with an artificial neural network. We have curated a large dataset of 50 whole slide images containing cutaneous melanoma or cutaneous metastatic melanoma, fully annotated at 40× objective resolution by an expert pathologist. We will publish the source images of this dataset online. We also present two new FCN architectures that fuse multiple deconvolutional strides, combining coarse and fine predictions to improve accuracy over similar networks without multi-stride information. Our results show that the system performs better than our comparators. We include inference results on thousands of patches from four whole slide images, reassembling them into whole-slide segmentation masks to demonstrate how our system generalizes to novel cases.
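
The patch-based inference and reassembly step can be sketched as follows, with a trivial placeholder standing in for the FCN; real whole slide images are typically tiled with overlap and blending, which this sketch omits.

    import numpy as np

    def segment_patch(patch):
        # Placeholder for the trained FCN: label pixels by a simple intensity rule.
        return (patch.mean(axis=-1) > 128).astype(np.uint8)

    def segment_slide(slide, patch_size=256):
        h, w, _ = slide.shape
        mask = np.zeros((h, w), dtype=np.uint8)
        for y in range(0, h, patch_size):
            for x in range(0, w, patch_size):
                patch = slide[y:y + patch_size, x:x + patch_size]
                mask[y:y + patch_size, x:x + patch_size] = segment_patch(patch)
        return mask

    slide = np.random.randint(0, 256, (1024, 768, 3), dtype=np.uint8)  # toy "slide"
    print(segment_slide(slide).shape)  # (1024, 768): one label per source pixel
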
79

Modélisation géométrique à différent niveau de détails d'objets fabriqués par l'homme / Geometric modeling of man-made objects at different level of details

Fang, Hao 16 January 2019 (has links)
Geometric modeling of man-made objects from 3D data is one of the biggest challenges in computer vision and computer graphics. The long-term goal is to generate a CAD-style model in an as-automatic-as-possible way. To achieve this goal, difficult issues have to be addressed, including (i) the scalability of the modeling process with respect to massive input data, (ii) the robustness of the methodology to various defect-laden input measurements, and (iii) the geometric quality of the output models. Existing methods work well for recovering the surface of free-form objects. However, in the case of man-made objects, it is difficult to produce results that approach the quality of highly structured representations such as CAD models. In this thesis, we present a series of contributions to the field. First, we propose a classification method based on deep learning to distinguish objects in raw 3D point clouds. Second, we propose an algorithm to detect planar primitives in 3D data at different levels of abstraction. Finally, we propose a mechanism to assemble planar primitives into compact polygonal meshes. These contributions are complementary and can be used sequentially to reconstruct city models at various levels of detail from airborne 3D data. We illustrate the robustness, scalability and efficiency of our methods on both laser and multi-view stereo data composed of man-made objects.
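
As a baseline illustration of the planar-primitive detection step, the sketch below runs a bare-bones RANSAC plane fit on a synthetic point cloud. The thesis's detector works at multiple abstraction levels and then assembles primitives into meshes, which this does not attempt.

    import numpy as np

    def ransac_plane(points, iters=500, tol=0.02, seed=0):
        """points: (N, 3) array; returns (unit normal, offset d, inlier mask)."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        best = (None, None)
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-9:  # degenerate (collinear) sample
                continue
            n = n / np.linalg.norm(n)
            d = -n @ p0
            inliers = np.abs(points @ n + d) < tol  # point-to-plane distance test
            if inliers.sum() > best_inliers.sum():
                best_inliers, best = inliers, (n, d)
        return best[0], best[1], best_inliers

    # Noisy horizontal plane z = 1 plus uniform outliers:
    rng = np.random.default_rng(1)
    plane = np.c_[rng.uniform(-1, 1, (500, 2)), 1 + rng.normal(0, 0.005, 500)]
    noise = rng.uniform(-1, 2, (100, 3))
    n, d, inl = ransac_plane(np.vstack([plane, noise]))
    print(np.round(n, 2), round(float(d), 2), inl.sum())  # ~[0 0 ±1], |d| ~= 1, ~500 inliers
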
80

Semantic Segmentation of Iron Pellets as a Cloud Service

Rosenvall, Christopher January 2020 (has links)
This master's thesis evaluates automatic data annotation and machine learning predictions of iron ore pellets using tools provided by Amazon Web Services (AWS) in the cloud. The main tool in focus is Amazon SageMaker, which is capable of automatic data annotation as well as building, training and deploying machine learning models quickly. Three different models were trained using SageMaker's built-in semantic segmentation algorithm with the PSP, FCN and DeepLabV3 variants. The dataset used for training and evaluation contains 180 images of iron ore pellets collected from LKAB's experimental blast furnace in Luleå, Sweden. The Amazon Web Services solution for automatic annotation was shown to be of no use when annotating microscopic images of iron ore pellets; Ilastik, an interactive learning and segmentation toolkit, proved far superior for the task at hand. Of the three trained networks, the Fully Convolutional Network (FCN) performed best with respect to inference and training times: it was the quickest network to train and was within 1% of the fastest in terms of inference time. The Fully Convolutional Network had an average accuracy of 85.8% on the dataset, with PSP and DeepLabV3 showing similar performance. From the results in this thesis, it was concluded that there are benefits to running deep neural networks as a cloud service for the analysis and management of iron ore pellets.
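
For readers curious how such a training job is launched, here is a sketch using the SageMaker Python SDK (v2 assumed). The S3 paths, IAM role and hyperparameter values are placeholders; the channel and hyperparameter names follow the built-in semantic segmentation algorithm's documented interface to the best of our knowledge.

    import sagemaker
    from sagemaker import image_uris
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    image = image_uris.retrieve("semantic-segmentation", session.boto_region_name)

    estimator = Estimator(
        image_uri=image,
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        output_path="s3://my-bucket/pellet-models/",          # placeholder bucket
        sagemaker_session=session,
    )
    estimator.set_hyperparameters(
        backbone="resnet-50",
        algorithm="fcn",           # or "psp" / "deeplab", the variants compared above
        num_classes=4,             # assumed number of pellet phases
        num_training_samples=150,  # ~180 images minus a validation split
        epochs=30,
    )
    estimator.fit({
        "train": "s3://my-bucket/pellets/train/",
        "validation": "s3://my-bucket/pellets/validation/",
        "train_annotation": "s3://my-bucket/pellets/train_annotation/",
        "validation_annotation": "s3://my-bucket/pellets/validation_annotation/",
    })
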
