21

Semantic Segmentation of Oblique Views in a 3D-Environment

Tranell, Victor January 2019 (has links)
This thesis presents and evaluates methods for semantically segmenting 3D models via rendered 2D views. The 2D views are segmented separately and the results are then merged. The thesis evaluates three merge strategies, two classification architectures, how many views should be rendered, and how these views should be arranged. The results are evaluated both quantitatively and qualitatively and compared with the current classifier at Vricon presented in [30]. The conclusion is that the method yields a performance gain: the best model uses two views and attains an accuracy of 90.89%, compared with the 84.52% achieved by the single-view network from [30]. The best nine-view system achieved 87.72%. The difference in accuracy between the two-view and nine-view systems is attributed to the higher-quality mesh on the sunny side of objects, which is typically the south side. The thesis provides a proof of concept, and there are still many areas where the system can be improved, one of them being the extraction of training data, which would seemingly have a large impact on performance.
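As a rough illustration of the render-segment-merge idea described in this abstract (not the thesis' exact pipeline), the following sketch averages per-face class scores produced by a 2D segmentation network over several rendered views; the face-to-pixel mapping, the number of classes, and the network itself are placeholders.

```python
import numpy as np

def merge_view_predictions(face_scores_per_view, num_faces, num_classes):
    """Average per-face class scores collected from several rendered views.

    face_scores_per_view: list of dicts {face_id: softmax scores (num_classes,)}
    Returns an array of per-face class labels for the 3D mesh.
    """
    accum = np.zeros((num_faces, num_classes))
    counts = np.zeros(num_faces)
    for view_scores in face_scores_per_view:
        for face_id, scores in view_scores.items():
            accum[face_id] += scores          # sum softmax scores per face
            counts[face_id] += 1
    counts[counts == 0] = 1                   # avoid division by zero; unseen faces keep all-zero scores
    merged = accum / counts[:, None]
    return merged.argmax(axis=1)              # averaged-vote label per mesh face

# Hypothetical usage: two rendered views, each segmented by a 2D network,
# with a renderer that maps pixels back to mesh face ids.
# labels = merge_view_predictions([scores_view_a, scores_view_b], num_faces=10000, num_classes=5)
```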
22

Apprentissage de co-similarités pour la classification automatique de données monovues et multivues / Clustering of monoview and multiview data via co-similarity learning

Grimal, Clément 11 October 2012 (has links)
Machine learning consists in designing computer programs capable of learning from their environment or from data. Different kinds of learning exist, depending on what the program is learning and on the context in which it learns, which naturally forms different tasks. Similarity measures play a predominant role in most of these tasks, which is why this thesis focuses on their study. More specifically, we focus on data clustering, an unsupervised learning task in which the program must organize a set of objects into several clusters so that similar objects are grouped together. In many applications, these objects (documents, for instance) are described by their links to other types of objects (words, for instance), which can be clustered as well. This case is referred to as co-clustering, and in this thesis we study and improve the co-similarity algorithm XSim. We demonstrate that these improvements enable the algorithm to outperform state-of-the-art methods. Additionally, these objects are frequently linked to more than one other type of object; data describing such multiple relations between different types of objects are called multiview. Classical methods are generally not able to exploit all the information contained in these data. For this reason, we present a new multiview similarity algorithm called MVSim, which can be considered a multiview extension of the XSim algorithm. We demonstrate that this method outperforms state-of-the-art multiview methods as well as classical single-view approaches, thus validating the benefit of the multiview aspect. Finally, we also describe how to use the MVSim algorithm to cluster large-scale single-view data by first splitting it into multiple subsets. We demonstrate that this approach significantly reduces the running time and memory footprint of the method, while only slightly lowering the quality of the obtained clustering compared to a straightforward approach without splitting.
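To make the co-similarity principle concrete, here is a minimal sketch in the spirit of XSim (the actual algorithm's normalization and convergence details differ and are not reproduced here): document and word similarity matrices are updated in alternation from a document-word matrix.

```python
import numpy as np

def co_similarity(A, n_iter=4):
    """Alternate updates of row (document) and column (word) similarities.

    A: (n_docs, n_words) document-word matrix.
    Returns (SR, SC): document-document and word-word similarity matrices.
    Simplified sketch of the co-similarity principle; normalization is an assumption.
    """
    n_docs, n_words = A.shape
    SR = np.eye(n_docs)      # document similarities, start from identity
    SC = np.eye(n_words)     # word similarities, start from identity
    for _ in range(n_iter):
        SR = A @ SC @ A.T    # documents are similar if they share similar words
        SR /= np.linalg.norm(SR) + 1e-12
        SC = A.T @ SR @ A    # words are similar if they occur in similar documents
        SC /= np.linalg.norm(SC) + 1e-12
    return SR, SC

# Hypothetical usage: cluster documents with any standard algorithm applied to
# the learned similarity, e.g. spectral or agglomerative clustering on SR.
```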
23

MuVArch : une approche de méta-modélisation pour la représentation multi-vues des architectures hétérogènes embarqués / MuVARCH : a (meta) modeling approach for multi-view representation of heterogeneous embedded architectures

Khecharem, Amani 03 May 2016 (has links)
With the MuVarch approach we defined and implemented a (meta-)modeling environment for the multi-view representation of heterogeneous embedded architectures (of the "smartphone" type, for instance). In addition to the backbone architectural view that supports all the others, we considered performance, power, and thermal views, as well as a functional applicative view that provides typical use-case scenarios for the platform. It was important to describe in MuVarch how the various views attach to the base architectural view and how they relate to one another (how temperature depends on power consumption, for instance). The ultimate objective is to let the framework support alternative mapping/allocation policies for applicative tasks on architectural resources (the definition of such policies remaining outside the scope of this thesis), so the appropriate representation of this allocation relation, which may be quite involved, is an important aspect of the work.
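Purely as an illustration of the multi-view structure sketched in this abstract (the real MuVarch metamodel is not reproduced here), the snippet below models a base architectural view, annotation views that attach metrics to its resources, and an allocation relation from applicative tasks to resources; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Resource:                 # element of the base architectural view
    name: str

@dataclass
class ArchitecturalView:        # backbone view that all others refer to
    resources: Dict[str, Resource] = field(default_factory=dict)

@dataclass
class AnnotationView:           # e.g. performance, power or thermal view
    kind: str                                                  # "power", "thermal", ...
    values: Dict[str, float] = field(default_factory=dict)     # resource name -> metric

@dataclass
class Allocation:               # mapping/allocation relation: task name -> resource name
    mapping: Dict[str, str] = field(default_factory=dict)

# Hypothetical usage: a thermal view derived from a power view, echoing the
# inter-view dependency mentioned in the abstract (coefficient is illustrative).
def derive_thermal(power: AnnotationView, ambient: float = 25.0) -> AnnotationView:
    return AnnotationView("thermal",
                          {r: ambient + 0.1 * w for r, w in power.values.items()})
```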
24

Cubic-Panorama Image Dataset Analysis for Storage and Transmission

Salehi Doolabi, Saeed January 2013 (has links)
This thesis involves systems for virtual presence in remote locations, a field referred to as telepresence. Recent image-based representations such as Google Maps' Street View provide a familiar example. Several areas of research are open: such image-based representations are huge in size, so efficient compression of the data for storage is unavoidable; on the other hand, users are usually in remote locations, so efficient transmission of the visual information is another issue of great importance. In this work, real-world images are used in preference to computer-graphics representations, mainly for the photorealism they provide and to avoid the high computational cost of simulating large-scale environments. The cubic format is selected for the panoramas in this thesis. A major feature of the cubic-panorama image datasets captured in this work is the assumption of static scenes, and the major issues for the system are compression efficiency and random access for storage, as well as computational complexity for transmission upon remote users' requests. First, in order to enable smooth navigation across different viewpoints, a method for aligning cubic-panorama image datasets using the geometry of the scene is proposed and tested. Feature detection and camera calibration are incorporated, and unlike the existing method, which is limited to a pair of panoramas, our approach is applicable to datasets with a large number of panoramic images, with no need for extra numerical estimation. Second, the problem of cubic-panorama image dataset compression is addressed in a number of ways. Two state-of-the-art approaches, namely the standardized H.264 scheme and a wavelet-based codec named Dirac, are used and compared for the application of virtual navigation in image-based representations of real-world environments. Different frame-prediction structures and group-of-pictures lengths are investigated and compared for this new type of visual data. At this stage, based on the obtained results, an efficient prediction structure and bitstream syntax are proposed that exploit features of the data and satisfy the major requirements of the system. Third, novel methods are proposed to address the important issue of disparity estimation. A client-server scheme is assumed, in which a remote user requests information at each navigation step. For the compression stage, a fast method that uses our previous work on the geometry of the scene, the proposed prediction structure, and the cubic format of the panoramas is used to estimate disparity vectors efficiently. For the transmission stage, a new transcoding scheme is introduced, and a number of frame-format conversion scenarios are addressed toward the goal of free navigation. Different navigation scenarios, including forward and backward navigation as well as user pan, tilt, and zoom, are addressed. In all the aforementioned cases, results are compared both visually, through error images and videos, and with objective measures. Altogether, our work facilitates free navigation within the captured panoramic image datasets and can be incorporated into emerging state-of-the-art cubic-panorama image dataset compression and transmission schemes.
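As a simplified illustration of disparity estimation between two panoramas (a plain block-matching search, not the geometry-assisted method proposed in the thesis), the sketch below finds, for each block of a cube face, the best horizontally shifted match in the corresponding face of a neighbouring panorama; block size and search range are arbitrary assumptions.

```python
import numpy as np

def block_disparity(face_a, face_b, block=16, search=32):
    """Estimate per-block horizontal disparity between two corresponding cube faces.

    face_a, face_b: 2D grayscale arrays of equal shape.
    Returns an array of disparities (in pixels), one per block.
    """
    h, w = face_a.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = face_a[y:y + block, x:x + block].astype(float)
            best, best_d = np.inf, 0
            for d in range(-search, search + 1):          # horizontal search window
                xs = x + d
                if xs < 0 or xs + block > w:
                    continue
                cand = face_b[y:y + block, xs:xs + block].astype(float)
                cost = np.abs(ref - cand).sum()           # SAD matching cost
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp
```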
25

A live imaging paradigm for studying Drosophila development and evolution

Schmied, Christopher 30 March 2016 (has links) (PDF)
Proper metazoan development requires that genes are expressed in a spatiotemporally controlled manner, with tightly regulated levels. Altering the expression of genes that govern development leads mostly to aberrations. However, alterations can also be beneficial, leading to the formation of new phenotypes, which contributes to the astounding diversity of animal forms. In the past, the expression of developmental genes has been studied mostly in fixed tissues, an approach that cannot visualize these highly dynamic processes. We combine genomic fosmid transgenes, which express genes of interest close to endogenous conditions, with Selective Plane Illumination Microscopy (SPIM) to image gene expression live, with high temporal resolution and at the single-cell level, in the entire embryo. In an effort to expand the toolkit for studying Drosophila development, we have characterized the global expression patterns of various developmentally important genes in the whole embryo. To process the large datasets generated by SPIM, we have developed an automated workflow for processing on a High Performance Computing (HPC) cluster. In a parallel project, we wanted to understand how spatiotemporally regulated gene expression patterns and levels lead to different morphologies across Drosophila species. To this end, we have compared by SPIM the expression of transcription factors (TFs) encoded by Drosophila melanogaster fosmids to their orthologous Drosophila pseudoobscura counterparts by expressing both fosmids in D. melanogaster. Here, we present an analysis of the divergence of expression of orthologous genes compared A) directly, by expressing the fosmids, tagged with different fluorophores, in the same D. melanogaster embryo, or B) indirectly, by expressing the fosmids, tagged with the same fluorophore, in separate D. melanogaster embryos. Our workflow provides a powerful methodology for studying gene expression patterns and levels during development; such knowledge is a basis for understanding both their evolutionary relevance and their developmental function.
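The automated SPIM processing workflow mentioned above is not described in detail in this abstract; as a loose, hypothetical sketch of that kind of HPC batching, the snippet below submits one SLURM job per timepoint, assuming an `sbatch` command is available and using a placeholder `process_timepoint.py` script.

```python
import subprocess

def submit_spim_jobs(dataset_dir, num_timepoints, angles, partition="batch"):
    """Submit one SLURM job per SPIM timepoint (hypothetical sketch).

    Each job would fuse/deconvolve all acquisition angles of a single timepoint,
    so timepoints are processed in parallel on the cluster.
    """
    for t in range(num_timepoints):
        cmd = [
            "sbatch",
            f"--partition={partition}",
            f"--job-name=spim_t{t:04d}",
            "--wrap",
            f"python process_timepoint.py --dir {dataset_dir} "
            f"--timepoint {t} --angles {','.join(map(str, angles))}",
        ]
        subprocess.run(cmd, check=True)   # relies on a SLURM installation being present

# Example (would require a cluster): submit_spim_jobs("/data/embryo01", 300, [0, 72, 144, 216, 288])
```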
26

A live imaging paradigm for studying Drosophila development and evolution

Schmied, Christopher 27 January 2016 (has links)
Proper metazoan development requires that genes are expressed in a spatiotemporally controlled manner, with tightly regulated levels. Altering the expression of genes that govern development leads mostly to aberrations. However, alterations can also be beneficial, leading to the formation of new phenotypes, which contributes to the astounding diversity of animal forms. In the past, the expression of developmental genes has been studied mostly in fixed tissues, an approach that cannot visualize these highly dynamic processes. We combine genomic fosmid transgenes, which express genes of interest close to endogenous conditions, with Selective Plane Illumination Microscopy (SPIM) to image gene expression live, with high temporal resolution and at the single-cell level, in the entire embryo. In an effort to expand the toolkit for studying Drosophila development, we have characterized the global expression patterns of various developmentally important genes in the whole embryo. To process the large datasets generated by SPIM, we have developed an automated workflow for processing on a High Performance Computing (HPC) cluster. In a parallel project, we wanted to understand how spatiotemporally regulated gene expression patterns and levels lead to different morphologies across Drosophila species. To this end, we have compared by SPIM the expression of transcription factors (TFs) encoded by Drosophila melanogaster fosmids to their orthologous Drosophila pseudoobscura counterparts by expressing both fosmids in D. melanogaster. Here, we present an analysis of the divergence of expression of orthologous genes compared A) directly, by expressing the fosmids, tagged with different fluorophores, in the same D. melanogaster embryo, or B) indirectly, by expressing the fosmids, tagged with the same fluorophore, in separate D. melanogaster embryos. Our workflow provides a powerful methodology for studying gene expression patterns and levels during development; such knowledge is a basis for understanding both their evolutionary relevance and their developmental function.
27

A Hybrid Approach For Full Frame Loss Concealment Of Multiview Video

Bilen, Cagdas 01 August 2007 (has links) (PDF)
Multiview video is one of the emerging research areas, especially within the video coding community. Transmission of multiview video over an error-prone network requires efficient compression of these videos, but alongside work on efficient compression, new error concealment and error protection methods are also necessary to cope with erroneous channel conditions in practical applications. In packet-switched networks, packet losses may lead to block losses within a frame or to the loss of an entire frame of an encoded video sequence. In recent years, several algorithms have been proposed to handle the loss of an entire frame efficiently; however, methods for full frame losses in stereoscopic or multiview videos remain limited in the literature. In this thesis, a stereoscopic approach for full frame loss concealment of multiview video is proposed. In the proposed methods, the redundancy and disparity between the views and the motion information between previously decoded frames are used to estimate the lost frame. Even though multiview video can consist of more than two views, at most three views are utilized for concealment. The performance of the proposed algorithms is tested against monoscopic methods, and the conditions under which the proposed methods are superior are investigated. The proposed algorithms are applied to both stereoscopic and multiview video.
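As a rough sketch of the general idea (not the specific hybrid algorithm proposed in the thesis), the code below reconstructs a lost frame by blending a motion-compensated prediction from the previous frame of the same view with a disparity-compensated prediction from a neighbouring view; the per-pixel motion and disparity fields and the fixed blending weight are assumptions.

```python
import numpy as np

def conceal_lost_frame(prev_frame, neighbor_frame, motion, disparity, alpha=0.5):
    """Estimate a lost frame from temporal and inter-view redundancy.

    prev_frame:     previous decoded frame of the lost view, shape (H, W)
    neighbor_frame: co-temporal frame of a neighbouring view, shape (H, W)
    motion:         per-pixel (dy, dx) motion field w.r.t. prev_frame, shape (H, W, 2)
    disparity:      per-pixel horizontal disparity w.r.t. neighbor_frame, shape (H, W)
    alpha:          blending weight between the two predictions
    """
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Motion-compensated prediction from the previous frame of the same view.
    my = np.clip(ys + motion[..., 0], 0, h - 1).astype(int)
    mx = np.clip(xs + motion[..., 1], 0, w - 1).astype(int)
    temporal = prev_frame[my, mx]

    # Disparity-compensated prediction from the neighbouring view.
    dx = np.clip(xs + disparity, 0, w - 1).astype(int)
    inter_view = neighbor_frame[ys, dx]

    return alpha * temporal + (1 - alpha) * inter_view
```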
28

Handling domain knowledge in system design models. An ontology based approach. / Explicitation de la sémantique du domaine dans les modèles de systèmes : une approche à base d'ontologies

Hacid, Kahina 06 March 2018 (has links)
Complex systems models are designed in heterogeneous domains, and this heterogeneity is rarely considered explicitly when these systems are described and validated. Moreover, such systems usually involve several domain experts and several design models corresponding to different analyses (views) of the same system, yet no explicit information about the characteristics of the domain or of the performed analyses is given. In this thesis, we propose a general framework offering, first, the formalization of domain knowledge using ontologies and, second, the capability to enrich design models with explicit references to the domain knowledge formalized in these ontologies. The framework also makes the characteristics of an analysis explicit by formalizing them in models qualified as "points of view". We have set up two deployments of our approach: one based on Model Driven Engineering (MDE) and one based on formal methods relying on proof and refinement. The framework has been validated on several non-trivial case studies drawn from systems engineering.
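As an illustration only (not the thesis' actual metamodel or its MDE/formal-methods deployments), the snippet below attaches explicit ontology-concept references to design-model elements and checks that every referenced concept exists in the ontology; names and structure are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OntologyConcept:
    uri: str
    label: str

@dataclass
class Ontology:
    concepts: Dict[str, OntologyConcept] = field(default_factory=dict)

@dataclass
class ModelElement:
    name: str
    concept_refs: List[str] = field(default_factory=list)  # explicit references to ontology concepts

def check_annotations(elements: List[ModelElement], ontology: Ontology) -> List[str]:
    """Return dangling references, i.e. concept URIs not defined in the ontology."""
    return [ref for e in elements for ref in e.concept_refs
            if ref not in ontology.concepts]

# Hypothetical usage: a sensor design element annotated with a domain concept.
onto = Ontology({"dom:Sensor": OntologyConcept("dom:Sensor", "Physical sensor")})
elems = [ModelElement("speed_sensor", ["dom:Sensor"]), ModelElement("ctrl", ["dom:Controller"])]
print(check_annotations(elems, onto))   # -> ['dom:Controller']
```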
29

Génération d'images 3D HDR / Generation of 3D HDR images

Bonnard, Jennifer 11 December 2015 (has links)
HDR imaging and 3D imaging are two areas whose simultaneous but independent development has kept growing in recent years. On the one hand, HDR (High Dynamic Range) imaging extends the dynamic range of conventional images, called LDR (Low Dynamic Range). On the other hand, 3D imaging offers immersion in the projected film, with the feeling of being part of the acquired scene. Recently, these two areas have been combined to provide 3D HDR images or videos, but few viable solutions exist and none of them is available to the general public. In this thesis, we propose a method to generate 3D HDR images for autostereoscopic displays by adapting a multi-viewpoint camera to multiple-exposure acquisition. To do so, neutral-density filters are fixed on the lenses of the camera. Pixel matching is then applied to aggregate the pixels that represent the same point in the acquired scene. Finally, a radiance value is computed for each pixel of the image set as a weighted average of the LDR values of the homologous pixels. An additional step is necessary because some pixels end up with an erroneous radiance. We propose a method based on the color of neighboring pixels and two methods based on correcting the disparity of the erroneous pixels: the first uses the disparity of the pixels in the neighborhood, and the second the disparity computed independently on each color channel. This pipeline generates one HDR image per viewpoint. A tone-mapping algorithm is then applied to each of them so that they can be composed with the filters corresponding to the targeted autostereoscopic screen, allowing the 3D HDR image to be visualized.
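To illustrate the weighted-average radiance computation described above (a generic HDR merge, not the exact weighting used in the thesis), the sketch below combines homologous LDR pixel values acquired through neutral-density filters of known transmission; the hat-shaped weighting function and the filter factors are assumptions.

```python
import numpy as np

def merge_radiance(ldr_values, attenuations):
    """Estimate scene radiance from homologous LDR pixels captured through ND filters.

    ldr_values:   pixel values in [0, 1], one per viewpoint/exposure
    attenuations: effective exposure factor of each viewpoint (ND filter transmission)
    """
    ldr_values = np.asarray(ldr_values, dtype=float)
    attenuations = np.asarray(attenuations, dtype=float)

    # Hat weighting: trust mid-range pixels, down-weight under/over-exposed ones.
    weights = 1.0 - np.abs(2.0 * ldr_values - 1.0)
    weights = np.maximum(weights, 1e-3)

    # Each LDR value divided by its exposure factor is one estimate of radiance.
    estimates = ldr_values / attenuations
    return np.sum(weights * estimates) / np.sum(weights)

# Hypothetical usage: three homologous pixels seen through 100%, 50% and 25% filters.
print(merge_radiance([0.9, 0.55, 0.3], [1.0, 0.5, 0.25]))
```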
30

Všesměrová detekce objektů / Multiview Object Detection

Lohniský, Michal January 2014 (has links)
This thesis focuses on modifying feature extraction and the learning process for multiview object detection. We add new channels to detectors based on the "Aggregate Channel Features" framework; these channels are created by filtering the image with kernels learned by autoencoders, followed by a nonlinear function. Experiments show that these channels are effective for detection but also more computationally expensive, so the thesis discusses possibilities for improvement. Finally, the thesis evaluates an artificial car dataset and discusses its small benefit for several detectors.
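As a rough sketch of this channel-augmentation idea (kernel size, the nonlinearity, and the use of random stand-in kernels are assumptions, not the thesis' configuration), the code below convolves a grayscale image with kernels taken from an autoencoder's learned weights, applies a nonlinearity, and stacks the responses as extra feature channels.

```python
import numpy as np
from scipy.signal import convolve2d

def augmented_channels(image, autoencoder_kernels):
    """Build extra detection channels by filtering with autoencoder kernels.

    image:               2D grayscale array
    autoencoder_kernels: list of 2D kernels (e.g. reshaped encoder weights)
    Returns a (num_kernels, H, W) stack of nonlinearly processed responses.
    """
    channels = []
    for k in autoencoder_kernels:
        response = convolve2d(image, k, mode="same", boundary="symm")
        channels.append(np.maximum(response, 0.0))   # ReLU-style nonlinearity (assumed)
    return np.stack(channels)

# Hypothetical usage with random stand-ins for learned 5x5 autoencoder kernels.
rng = np.random.default_rng(0)
kernels = [rng.standard_normal((5, 5)) for _ in range(4)]
extra = augmented_channels(rng.random((64, 64)), kernels)
print(extra.shape)   # (4, 64, 64)
```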
