  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A Fusion Model For Enhancement of Range Images / English

Hua, Xiaoben, Yang, Yuxia January 2012 (has links)
In this thesis, we would like to present a new way to enhance the “depth map” image which is called as the fusion of depth images. The goal of our thesis is to try to enhance the “depth images” through a fusion of different classification methods. For that, we will use three similar but different methodologies, the Graph-Cut, Super-Pixel and Principal Component Analysis algorithms to solve the enhancement and output of our result. After that, we will compare the effect of the enhancement of our result with the original depth images. This result indicates the effectiveness of our methodology. / Room 401, No.56, Lane 21, Yin Gao Road, Shanghai, China
32

INTERACTIVE IMAGE-BASED RENDERING FOR VIRTUAL VIEW SYNTHESIS FROM DEPTH IMAGES

CESAR MORAIS PALOMO 19 September 2017 (has links)
[pt] Modelagem e renderização baseadas em imagem tem sido uma área de pesquisa muito ativa nas últimas décadas, tendo recebido grande atenção como uma alternativa às técnicas tradicionais de síntese de imagens baseadas primariamente em geometria. Nesta área, algoritmos de visão computacional são usados para processar e interpretar fotos ou vídeos do mundo real a fim de construir um modelo representativo de uma cena, ao passo que técnicas de computação gráfica são usadas para tomar proveito desta representação e criar cenas foto-realistas. O propósito deste trabalho é investigar técnicas de renderização capazes de gerar vistas virtuais de alta qualidade de uma cena, em tempo real. Para garantir a performance interativa do algoritmo, além de aplicar otimizações a métodos de renderização existentes, fazemos uso intenso da GPU para o processamento de geometria e das imagens para gerar as imagens finais. Apesar do foco deste trabalho ser a renderização, sem reconstruir o mapa de profundidade a partir das fotos, ele implicitamente contorna possíveis problemas na estimativa da profundidade para que as cenas virtuais geradas apresentem um nível aceitável de realismo. Testes com dados públicos são apresentados para validar o método proposto e para ilustrar deficiências dos métodos de renderização baseados em imagem em geral. / [en] Image-based modeling and rendering has been a very active research topic as a powerful alternative to traditional geometry-based techniques for image synthesis. In this area, computer vision algorithms are used to process and interpret real-world photos or videos in order to build a model of a scene, while computer graphics techniques use this model to create photorealistic images based on the captured photographs or videos. The purpose of this work is to investigate rendering techniques capable of delivering visually accurate virtual views of a scene in real-time. 
Even though this work is mainly focused on the rendering task, without the reconstruction of the depth map, it implicitly overcomes common errors in depth estimation, yielding virtual views with an acceptable level of realism. Tests with publicly available datasets are also presented to validate our framework and to illustrate some limitations in the IBR general approach.
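The depth-based warping at the heart of such view synthesis can be sketched in a few lines: back-project a pixel with its depth, move it into the virtual camera's frame, and re-project. This is a generic illustration with assumed intrinsics and poses, not the thesis' GPU implementation.

```python
import numpy as np

def reproject_pixel(u, v, depth, K, R, t):
    """Warp a pixel with known depth into a virtual camera with
    relative pose (R, t): back-project, transform, re-project.
    Minimal sketch of depth-image-based view synthesis."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray * depth        # 3D point in the source camera frame
    p_virt = R @ p_cam + t     # same point in the virtual camera frame
    uvw = K @ p_virt
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Assumed pinhole intrinsics for illustration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# Identity pose maps a pixel onto itself.
u_id, v_id = reproject_pixel(100.0, 120.0, 2.0, K, np.eye(3), np.zeros(3))
# A 10 cm sideways move shifts a 2 m point by f*tx/Z = 25 px.
u_tr, _ = reproject_pixel(100.0, 120.0, 2.0, K, np.eye(3),
                          np.array([0.1, 0.0, 0.0]))
```

Holes and occlusions appear wherever several source pixels land on the same target pixel or none do, which is exactly where IBR methods need the filtering the abstract alludes to.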
33

From images to point clouds: practical considerations for three-dimensional computer vision

Herrera Castro, D. (Daniel) 04 August 2015 (has links)
Abstract Three-dimensional scene reconstruction has been an important area of research for many decades, with a myriad of applications ranging from entertainment to medicine. This thesis explores the 3D reconstruction pipeline and proposes novel methods to improve many of the steps necessary to achieve a high-quality reconstruction, in the areas of depth sensor calibration, simultaneous localization and mapping, depth map inpainting, point cloud simplification, and free-viewpoint rendering. Geometric camera calibration is necessary in every 3D reconstruction pipeline. This thesis focuses on the calibration of depth sensors: it presents a review of sensor models and how they can be calibrated, then examines the case of the well-known Kinect sensor and proposes a novel calibration method using only planar targets. Reconstructing a scene using only color cameras entails different challenges than using depth sensors. Moreover, online applications require real-time response and must update the model as new frames are received. The thesis looks at these challenges and presents a novel simultaneous localization and mapping system using only color cameras. It adaptively triangulates points based on the detected baseline while still utilizing non-triangulated features for pose estimation. The thesis then addresses the problem of extrapolating missing information in depth maps, presenting three novel methods for depth map inpainting. The first utilizes random sampling to fit planes in the missing regions. The second utilizes a second-order prior aligned with intensity edges. The third learns natural filters to apply a Markov random field with a joint intensity and depth prior. The thesis also looks at reducing the quantity of 3D information to a manageable size: it examines how to merge depth maps from multiple views without storing redundant information, and presents a method to discard this redundant information while still maintaining the naturally variable resolution. Finally, transparency estimation is examined in the context of free-viewpoint rendering, and a procedure to estimate transparency maps for the foreground layers of a multi-view scene is presented. The results obtained reinforce the need for a high-accuracy 3D reconstruction pipeline including all the previously presented steps.
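The first inpainting idea, fitting planes in missing regions by random sampling, can be illustrated with a minimal RANSAC-style sketch. Parameters and the plane parameterization are assumptions for illustration, not the thesis' actual method.

```python
import numpy as np

def fit_plane_ransac(points, iters=200, thresh=0.01, rng=None):
    """Fit a plane z = a*x + b*y + c to 3D points by repeatedly
    sampling minimal triples and keeping the model with the most
    inliers -- a rough analogue of sampling-based plane fitting
    for filling holes in a depth map."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        A = np.c_[p[:, 0], p[:, 1], np.ones(3)]
        try:
            a, b, c = np.linalg.solve(A, p[:, 2])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        resid = np.abs(points[:, 0] * a + points[:, 1] * b + c
                       - points[:, 2])
        inliers = int((resid < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b, c)
    return best_model

# Synthetic test: points on z = 0.5x - 0.2y + 1, plus a few outliers.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (100, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1.0
pts = np.c_[xy, z]
pts[:5, 2] += 5.0  # corrupt five points
a, b, c = fit_plane_ransac(pts)
```

Once the plane is found, missing depth values inside the hole are filled by evaluating a*x + b*y + c at the missing pixels.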
34

Methods for image-based 3-D modeling using color and depth cameras

Ylimäki, M. (Markus) 05 December 2017 (has links)
Abstract This work addresses problems related to the three-dimensional modeling of scenes and objects and to model evaluation. The work is divided into four main parts: the first concentrates on purely image-based reconstruction, the second presents a modeling pipeline based on an active depth sensor, the third introduces methods for producing surface meshes from point clouds, and the fourth presents a novel approach for model evaluation. In the first part, this work proposes a multi-view stereo (MVS) reconstruction method that takes a set of images as input and outputs a model represented as a point cloud. The method is based on match propagation, where a set of initial corresponding points between images is expanded iteratively into larger regions by searching for new correspondences in the spatial neighborhood of the existing ones. The expansion follows a best-first strategy, where the most reliable match is always expanded first. The method produces results comparable with the state of the art, but significantly faster. In the second part, this work presents a method that merges a sequence of depth maps into a single non-redundant point cloud. In areas where the depth maps overlap, the method fuses points together, giving more weight to points that seem more reliable. The method outperforms its predecessor in both accuracy and robustness. In addition, this part introduces a method for depth camera calibration, which builds on an existing calibration approach originally designed for the first-generation Microsoft Kinect device. The third part of the thesis addresses the problem of converting point clouds to surface meshes. The work briefly reviews two well-known approaches and compares their ability to produce sparse mesh models without sacrificing accuracy. Finally, the fourth part describes the development of a novel approach for the performance evaluation of reconstruction algorithms. In addition to accuracy and completeness, the metrics commonly used in existing evaluation benchmarks, the method also takes the compactness of the models into account, enabling evaluation of the accuracy-compactness trade-off of the models.
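The reliability-weighted merging of overlapping depth measurements described in the second part can be illustrated with a minimal sketch; the thesis' actual weighting scheme is more elaborate, so the weights here are assumed values.

```python
import numpy as np

def fuse_measurements(depths, weights):
    """Fuse overlapping depth measurements of the same surface point
    into one value by reliability-weighted averaging -- the basic idea
    behind keeping a single non-redundant point per surface location."""
    depths = np.asarray(depths, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float((depths * weights).sum() / weights.sum())

# Three depth maps observe the same point; the middle measurement
# is considered four times more reliable than the others.
fused = fuse_measurements([2.00, 2.10, 1.90], [1.0, 4.0, 1.0])
```

With these weights the fused depth is pulled toward the trusted measurement (2.05 m instead of the plain mean 2.00 m), while the two redundant points are discarded.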
35

Realization of camera module for mobile robot as independent ROS node

Albrecht, Ladislav January 2020 (has links)
Stereo vision is one of the most popular elements in the field of mobile robots and contributes significantly to their autonomous behaviour. The aim of the diploma thesis was to design and implement a camera module as a hardware sensor input, one that is independent and can be supplemented with further cameras, and to create a depth map from a pair of cameras. The diploma thesis consists of a theoretical and a practical part, including a conclusion of the results. The theoretical part introduces the ROS framework, discusses methods of creating depth maps, and provides an overview of the most popular stereo cameras in robotics. The practical part describes in detail the preparation of the experiment and its implementation, including the camera calibration and the creation of the depth map. The last chapter contains an evaluation of the experiment.
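The geometric relation behind creating a depth map from a calibrated camera pair is the classic rectified-stereo formula Z = f·B/d. A minimal sketch with assumed calibration values, not the thesis' ROS implementation:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Rectified stereo: depth = focal length * baseline / disparity.
    Real pipelines add matching, subpixel refinement, and filtering;
    this shows only the final conversion per pixel."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed rig: 700 px focal length, 12 cm baseline.
# A 35 px disparity then corresponds to a 2.4 m deep point.
z = disparity_to_depth(35.0, 700.0, 0.12)
```

Applying this per pixel over a disparity image yields the depth map; points with zero disparity (no match) stay undefined, which is why such maps typically contain holes.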
36

3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations

Konradsson, Albin, Bohman, Gustav January 2021 (has links)
This thesis provides a comparison between instance segmentation methods using point clouds and depth images; specifically, their performance on cluttered scenes of irregular objects in an industrial environment is investigated. Recent work by Wang et al. [1] has suggested potential benefits of a point cloud representation when performing deep learning on data from 3D cameras. However, little work has been done to enable quantifiable comparisons between methods based on different representations, particularly on industrial data. Generating synthetic data provides accurate grayscale, depth map, and point cloud representations for a large number of scenes and can thus be used to compare methods regardless of data type. The datasets in this work are created using a tool provided by SICK. They simulate postal packages on a conveyor belt scanned by a LiDAR, closely resembling a common industry application. Two datasets are generated: one of low complexity, containing only boxes, and one of higher complexity, containing a combination of boxes and multiple types of irregularly shaped parcels. State-of-the-art instance segmentation methods are selected based on their performance on existing benchmarks: PointGroup by Jiang et al. [2], which uses point clouds, and Mask R-CNN by He et al. [3], which uses images. The results support that there may be benefits to using a point cloud representation over depth images: PointGroup performs better in terms of the chosen metric on both datasets. On low-complexity scenes, the inference times of the two methods are similar; on higher-complexity scenes, however, Mask R-CNN is significantly faster.
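Quantifiable comparison across representations ultimately rests on per-instance overlap scores. As an illustration, the intersection-over-union underlying such segmentation metrics can be sketched as follows; this is a generic sketch on flat binary masks, not the thesis' evaluation code.

```python
def mask_iou(pred, truth):
    """Intersection-over-union of two binary masks given as flat
    sequences of 0/1 values -- the standard per-instance overlap
    score behind instance segmentation benchmarks."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 0.0

# Toy 5-pixel masks: 2 pixels overlap, 4 pixels in the union.
pred = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
iou = mask_iou(pred, truth)
```

The same score applies whether the mask indexes image pixels or point cloud points, which is what makes a cross-representation comparison like this one possible.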
37

Robust Learning of a depth map for obstacle avoidance with a monocular stabilized flying camera

Pinard, Clément 24 June 2019 (has links)
Consumer unmanned aerial vehicles (UAVs) are mainly flying cameras: stabilized, good-quality cameras that have democratized aerial footage. With their growing success, however, came safety concerns. This work aims at improving UAV safety through obstacle avoidance while keeping a smooth flight for the user. In this technological context, we use only one stabilized camera, because of weight and cost constraints. For their proven effectiveness in computer vision and their capacity to solve complex tasks, we chose to use convolutional neural networks (CNNs). Our strategy is based on a system of incrementally learned tasks of increasing complexity, whose first steps are to estimate a depth map from the camera. This thesis studies the ability of a CNN to perform this task. Since the depth map is closely linked to optical flow in the case of stabilized footage, we adapt FlowNet, a network known for optical flow estimation, to output depth directly from two stabilized frames. This network is called DepthNet. The method works in a simulator with supervised training, but is not robust enough for real videos. We therefore study self-supervised training based on differentiable image reprojection. This technique is relatively new for CNNs and requires a detailed study in order not to depend too much on heuristic parameters. Finally, we develop a depth map fusion algorithm to use DepthNet efficiently on real videos: several different frame pairs are fed to DepthNet in order to obtain a wide range of measured depths.
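The link between stabilized optical flow and depth, and the reason several frame pairs widen the measurable range, follows from the relation flow = f·t/Z for a rotation-free translating camera. A sketch with assumed focal length and flow bounds, not DepthNet itself:

```python
def depth_from_flow(flow_px, focal_px, displacement_m):
    """For a stabilized (rotation-free) camera translating by t,
    a static point's optical flow is inversely proportional to its
    depth: flow = f * t / Z, hence Z = f * t / flow."""
    if flow_px <= 0:
        raise ValueError("flow must be positive")
    return focal_px * displacement_m / flow_px

def usable_depth_range(focal_px, displacement_m, flow_min, flow_max):
    """Depth interval measurable from one frame pair, given the flow
    magnitudes a network is assumed to handle well. Pairs with
    different displacements cover different intervals, so feeding
    several pairs widens the overall sensing range."""
    return (focal_px * displacement_m / flow_max,
            focal_px * displacement_m / flow_min)

# Assumed: 500 px focal length, 20 cm between frames,
# flow reliably estimated between 2 px and 50 px.
z = depth_from_flow(25.0, 500.0, 0.2)        # one measurement
z_near, z_far = usable_depth_range(500.0, 0.2, 2.0, 50.0)
```

Doubling the frame spacing doubles both ends of the interval, which is the fusion idea: pick pairs so the intervals tile the depths the vehicle cares about.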
38

The effect of the number of cross-sections on depth models produced by the SeaFloor HydroLite™ single-beam echo sounder: A comparison with depth models produced by the Kongsberg EM 2040P MKII multi-beam echo sounder

Hägg, Linnéa, Stenberg Jönsson, Simon January 2023 (has links)
Hydroacoustic measurements have been conducted for almost two hundred years. They can be compared to topographic measurements on land and show the appearance of lake or sea floors. Today, echo sounders are used: the instrument sends sound waves into the water and measures the time it takes for the sound to bounce off the bottom and return, after which the depth can be calculated using sound velocity computations. The use of cross-sections is recommended as a data control for single-beam echo sounding, whereas multi-beam echo sounders instead use overlap between survey lines as control. This study examines how the number of cross-sections affects depth maps created by the SeaFloor HydroLite™ single-beam echo sounder, and how those depth maps differ from depth maps produced by the Kongsberg EM 2040P MKII multi-beam echo sounder. The study area covers 1820 m² and is located at Forsbacka harbor in Storsjön, Gävle municipality. A minimum overlap of 50% was used when surveying with the multi-beam echo sounder. Five main lines and seven cross-sections were measured with the single-beam echo sounder. Depth maps with different numbers of cross-sections were created in Surfer 10 from the single-beam data. These maps were then compared to maps created from the multi-beam data to assess both the differences between the systems and the impact of varying numbers of cross-sections on the single-beam depth maps. Using the multi-beam echo sounder as reference, adding cross-sections decreased the RMS value by 1 cm and the standard uncertainty by 2 cm. The comparison between the two echo sounder systems revealed depth differences of around 10 cm. The conclusions from this study are that cross-sections improve the quality of depth maps only marginally over even and uniform bottom topography, but serve an important function in checking the quality of the survey data, and that the SeaFloor HydroLite™ meets Order 1b at depths of about one to four meters if the requirement for full bottom coverage is disregarded. The SeaFloor HydroLite™ produces a general overview depth map, while the depth models from the Kongsberg EM 2040P MKII show more detail.
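The depth computation underlying echo sounding reduces to half the two-way travel time multiplied by the speed of sound in water. A minimal sketch with an assumed sound speed; real surveys measure a sound-velocity profile rather than using a constant:

```python
def echo_depth(travel_time_s, sound_speed_ms=1480.0):
    """Single-beam echo sounding: depth is half the two-way travel
    time times the speed of sound in water. The default 1480 m/s is
    an assumed typical freshwater value for illustration."""
    return sound_speed_ms * travel_time_s / 2.0

# A ping returning after 4 ms corresponds to roughly 2.96 m of water.
d = echo_depth(0.004)
```

The centimeter-level differences reported above are plausible given this relation: a 5 m/s error in the assumed sound speed changes a 3 m depth by about 1 cm.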
39

Tridimensional reconstruction for virtual heritage objects.

Miranda, Hardy José Santos de 28 May 2018 (has links)
At first glance, new technologies can provide an engaging way to interact with a subject, which may induce meaningful learning, but this effect soon falls short once the interaction becomes common or even repetitive. As a new technology becomes natural to the user, it no longer relies on novelty and turns into a tool. The use of Computer-Generated Imagery (CGI) experienced exactly this decades ago, yet as it is constantly iterated upon, it needs frequent reassessment. As CGI evolved, tridimensional imagery stopped being an overcomplicated format, as new hardware and concepts made their way into everyday objects such as smartphones, webcams, cameras, 3D mesh generation apps, and so on. Its use for museological purposes became clear in the field of cultural heritage, for archiving and communication. To verify the viability of a low-cost, easy-to-use solution aimed at novice users, different types of non-destructive, surface-based reconstruction methods were analyzed to identify the quality of the resulting mesh in terms of precision, traceability, and compatibility. To this end, a method with a set of metrics was proposed that can be used to determine the usability of a reconstructed object for a specific purpose. Four archaeological artifacts were scanned using a video photogrammetry method and a depth video method, and compared with laser-scanned surrogates. After analyzing the scans of the same objects with these different methods, the conclusion is that photogrammetry can generate a highly detailed model very quickly, but with several distortions, whereas the depth camera produced smoother surfaces and a higher incidence of errors. Ultimately, each method presents multiple possibilities for materialization, depending on the target, the resolution, and how detailed the object must be in order to be correctly understood.
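A precision comparison against a laser-scanned surrogate comes down to a distance statistic between point sets. As an illustration, an RMS nearest-neighbor distance can be sketched as follows; this is a brute-force O(n·m) sketch of the general idea, not the thesis' metric set:

```python
import math

def rms_nearest_distance(scan, reference):
    """Crude accuracy measure for a reconstructed point set against
    a reference scan: RMS of each scan point's distance to its
    nearest reference point. Brute force, for illustration only;
    real tools use spatial indexes and point-to-mesh distances."""
    total_sq = 0.0
    for p in scan:
        total_sq += min((p[0] - q[0]) ** 2
                        + (p[1] - q[1]) ** 2
                        + (p[2] - q[2]) ** 2
                        for q in reference)
    return math.sqrt(total_sq / len(scan))

# Toy example: a reference triangle and a slightly deformed scan.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
scan = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.0), (0.0, 1.2, 0.0)]
r = rms_nearest_distance(scan, ref)
```

Such a score captures the photogrammetry-versus-depth-camera trade-off described above: many small distortions and a few large errors can yield the same RMS, so complementary metrics (traceability, compatibility) are still needed.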
40

MPEG Z/Alpha and high-resolution MPEG

Ziegler, Gernot January 2003 (has links)
The progression of technical development has yielded practicable camera systems for the acquisition of so-called depth maps, images with depth information. Images and movies with depth information open the door to new types of applications in the areas of computer graphics and vision, which implies that they will need to be processed in ever-increasing volumes. Increased depth image processing puts forth the demand for a standardized data format for the exchange of image data with depth information, both still and animated, and software to convert acquired depth data to such video formats is highly necessary. This diploma thesis sheds light on many of the issues that come with this new task group, spanning from data acquisition over readily available software for data encoding to possible future applications. Further, a software architecture fulfilling all of the mentioned demands is presented. The encoder comprises a collection of UNIX programs that generate MPEG Z/Alpha, an MPEG-2-based video format. MPEG Z/Alpha contains, besides MPEG-2's standard data streams, one extra data stream to store image depth information (and transparency). The decoder suite, called TexMPEG, is a C library for the in-memory decompression of MPEG Z/Alpha. Much effort has been put into video decoder parallelization, and TexMPEG is now capable of decoding multiple video streams, not only internally in parallel, but also with inherent frame synchronization between MPEG videos decoded in parallel.
