1

AUTOMATIC IMAGE TO MODEL ALIGNMENT FOR PHOTO-REALISTIC URBAN MODEL RECONSTRUCTION

Partington, Mike 01 January 2001 (has links)
We introduce a hybrid approach in which images of an urban scene are automatically aligned with a base geometry of the scene to determine model-relative external camera parameters. The algorithm takes as input a model of the scene and images with approximate external camera parameters, and aligns the images to the model by extracting the facades from the images and aligning the facades with the model by minimizing over a multivariate objective function. The resulting image-pose pairs can be used to render photo-realistic views of the model via texture mapping.

Several natural extensions to the base hybrid reconstruction technique are also introduced. These extensions, which include vanishing point based calibration refinement and video stream based reconstruction, increase the accuracy of the base algorithm, reduce the amount of data that must be provided by the user as input to the algorithm, and provide a mechanism for automatically calibrating a large set of images for post-processing steps such as automatic model enhancement and fly-through model visualization.

Traditionally, photo-realistic urban reconstruction has been approached from purely image-based or model-based approaches. Recently, research has been conducted on hybrid approaches, which combine the use of images and models. Such approaches typically require user assistance for camera calibration. Our approach is an improvement over these methods because it does not require user assistance for camera calibration.
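As a rough illustration of the alignment step described above, the sketch below refines an approximate camera pose by minimizing a reprojection-error objective over the external parameters, broadly in the spirit of the multivariate optimization the abstract mentions. The pinhole projection, the optimizer choice, and all names are assumptions made for the example, not the thesis's actual implementation.

```python
# Hypothetical sketch: refine approximate external camera parameters by
# minimizing reprojection error between model facade points and their
# observed image locations. Names and the pinhole model are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation


def project(points_3d, rvec, tvec, focal, center):
    """Pinhole projection of world points into the image."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points_3d @ R.T + tvec          # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]         # perspective divide
    return focal * uv + center            # pixel coordinates


def reprojection_error(params, points_3d, points_2d, focal, center):
    """Sum of squared distances between projected and observed points."""
    rvec, tvec = params[:3], params[3:]
    proj = project(points_3d, rvec, tvec, focal, center)
    return np.sum((proj - points_2d) ** 2)


def refine_pose(points_3d, points_2d, rvec0, tvec0, focal, center):
    """Start from the approximate external parameters and minimize the objective."""
    x0 = np.concatenate([rvec0, tvec0])
    res = minimize(reprojection_error, x0,
                   args=(points_3d, points_2d, focal, center),
                   method="Nelder-Mead")
    return res.x[:3], res.x[3:]
```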
2

Combining Street View and Aerial Images to Create Photo-Realistic 3D City Models

Ivarsson, Caroline January 2014 (has links)
This thesis evaluates two different approaches to using panoramic street view images for creating more photo-realistic 3D city models, compared to 3D city models based on aerial images alone. The thesis work was carried out at Blom Sweden AB using their software and data. The main purpose was to investigate whether street view images can help create more photo-realistic 3D city models at street level through an automatic or semi-automatic approach. Two approaches were investigated: using the street view images to texture already generated 3D building models, and using the street view images directly to reconstruct 3D city models. Data was collected over the study area of KTH, Stockholm, Sweden, and the models were created with two software packages: TerraPhoto for texturing and Smart3DCapture for reconstruction. The created models were analyzed and compared with models based on aerial images alone, and the two approaches were compared to each other. Using street view images in addition to aerial images is shown to produce models that are more photo-realistic at street level than models based on aerial images alone. The two tested approaches create very different 3D city models in terms of what is visible in the final model, a textured building model versus a fully reconstructed environment, and they involve different amounts of manual work. The textured models contain only the buildings and tend to look very flat, because the street images are projected onto flat building walls, whereas the reconstructed models reconstruct everything visible in the images, trees, cars etc., and create a full-scale 3D city model. There are, however, limitations to using street view images for modeling cities: they only contain information at ground level and nothing about roofs or higher parts of the city environment, so they need to be used in combination with aerial images. Using the street view images to reconstruct city models showed some complications in the form of wavy facades, bumpy roads, and objects such as trees being inaccurately modeled. The texturing approach creates less visually pleasing models, since objects such as trees and lighting poles are projected onto the facades. The models based on aerial images alone look more visually appealing than those that also use street view images, but lack the resolution to be considered photo-realistic at street level. This thesis work has shown that there is potential in using street view images when creating photo-realistic 3D city models at street level, even though the workflow is not yet semi-automatic or automatic.
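As a loose illustration of the texturing idea above, the sketch below samples a color for a point on a building facade from an equirectangular street view panorama by projecting the 3D point into the panorama. The camera pose, the image layout, and all names are assumptions made for the example; this is not the workflow of TerraPhoto or Smart3DCapture.

```python
# Hypothetical sketch: sample a facade point's color from an equirectangular
# street view panorama, assuming a known panorama position and orientation.
import numpy as np


def sample_panorama(panorama, facade_point, cam_position, cam_rotation):
    """panorama: HxWx3 image array; facade_point, cam_position: 3-vectors;
    cam_rotation: 3x3 world-to-camera rotation matrix."""
    h, w, _ = panorama.shape
    d = cam_rotation @ (facade_point - cam_position)   # direction in camera frame
    d = d / np.linalg.norm(d)
    lon = np.arctan2(d[0], d[2])                       # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))          # latitude in [-pi/2, pi/2]
    u = int((lon / (2 * np.pi) + 0.5) * (w - 1))       # panorama column
    v = int((0.5 - lat / np.pi) * (h - 1))             # panorama row
    return panorama[v, u]
```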
3

2D Aesthetics with a 3D Pipeline : Achieving a 2D Aesthetic with 3D Geometry

Nilsson, Morgan, Lundmark, Andreas January 2017 (has links)
This thesis evaluates and tests different methods for producing a 2D aesthetic within a 3D pipeline, that is, rendering 3D geometry in an aesthetic similar to hand-drawn classic films such as Snow White and the Seven Dwarfs. It explores methods for producing both exterior and interior lines that indicate the shape and form of 3D models. The conclusion from the tested methods is that the human factor is unlikely ever to be entirely replaced by automated solutions; instead, a mixed approach that combines shader-based solutions with texturing techniques providing artistic control where necessary is deemed the most effective way of preserving the hand-drawn 2D aesthetic within a 3D pipeline.
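To make the idea of exterior lines and flat 2D-style shading more concrete, the sketch below quantizes diffuse lighting into cel-style bands and marks silhouette lines where the surface turns nearly edge-on to the viewer. This is a generic toon-shading illustration under assumed inputs, not one of the specific methods evaluated in the thesis.

```python
# Hypothetical sketch: quantized "cel" shading plus a simple silhouette test.
# Per-pixel normals and light/view directions are assumed inputs; this is a
# generic illustration, not a method from the thesis.
import numpy as np


def cel_shade(normals, light_dir, bands=3):
    """Quantize diffuse lighting into a few flat bands (normals: HxWx3)."""
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, 1.0)
    levels = np.minimum(np.floor(diffuse * bands), bands - 1)
    return levels / (bands - 1)                        # flat, stepped shading


def silhouette_mask(normals, view_dir, threshold=0.2):
    """Mark exterior lines where the surface is nearly edge-on to the viewer."""
    v = view_dir / np.linalg.norm(view_dir)
    facing = np.abs(np.einsum("hwc,c->hw", normals, v))
    return facing < threshold                          # True along silhouettes
```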
4

Etudes de méthodes et outils pour la cohérence visuelle en réalité mixte appliquée au patrimoine / Studies of methods and tools for visual coherence in mixed reality applied to cultural heritage

Durand, Emmanuel 19 November 2013 (has links)
The work presented in this thesis is set in the context of the ray-on mixed reality device, designed by the company on-situ. This device, dedicated to showcasing architectural heritage and historic buildings in particular, is installed on the site of the building and offers the user an uchronic view of it. Since the chosen stance is photo-realism, two directions were pursued: improving the blending of real and virtual by reproducing the real lighting on the virtual objects, and developing an image segmentation method that is resilient to lighting changes. For lighting reproduction, an image-based rendering method is used together with a high dynamic range capture of the lighting environment, with particular attention paid to the photometric and colorimetric correctness of these two steps. To evaluate the quality of the lighting reproduction chain, a test scene containing a calibrated color checker is set up and captured under multiple lighting conditions by a pair of cameras, one capturing an image of the color checker, the other an image of the lighting environment. The real image is then compared to a virtual rendering of the same scene lit by this second image. The segmentation resilient to lighting changes was developed from a class of global image segmentation algorithms that treat the image as a graph in which a minimal cut separating foreground from background is sought. The manual intervention these algorithms require is replaced by a lower-quality pre-segmentation derived from a depth map, which is then used as a seed for the final segmentation.
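The seeded graph-cut segmentation described above can be loosely illustrated with OpenCV's GrabCut, initializing the foreground/background labels from a thresholded depth map instead of a user-drawn region. The depth threshold, the labelling scheme, and the function names are assumptions made for this sketch, not the thesis's actual pipeline.

```python
# Hypothetical sketch: graph-cut segmentation seeded from a depth map, using
# OpenCV's GrabCut in mask-initialization mode. The depth threshold and the
# "probable" labelling are assumptions, not the thesis's implementation.
import cv2
import numpy as np


def segment_with_depth_seed(image_bgr, depth, near_threshold):
    """image_bgr: HxWx3 uint8, depth: HxW float; returns a binary foreground mask."""
    mask = np.full(depth.shape, cv2.GC_PR_BGD, dtype=np.uint8)  # probable background
    mask[depth < near_threshold] = cv2.GC_PR_FGD                # probable foreground
    bgd_model = np.zeros((1, 65), np.float64)                   # internal GMM state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```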
5

Advances in Modelling, Animation and Rendering

Vince, J.A., Earnshaw, Rae A. January 2002 (has links)
This volume contains the papers presented at Computer Graphics International 2002, held in July at the University of Bradford, UK. These papers represent original research in computer graphics from around the world.
6

Hessian-based occlusion-aware radiance caching

Zhao, Yangyang 10 1900 (has links)
Efficiently simulating global illumination is one of the most important open problems in computer graphics. Accurately computing the effects of indirect illumination, caused by secondary bounces of light off surfaces in a 3D scene, is generally expensive and often solved using algorithms such as path tracing or photon mapping. These approaches numerically solve the rendering equation using stochastic Monte Carlo ray tracing. Ward et al. proposed irradiance caching to accelerate these techniques when computing the indirect illumination component on diffuse surfaces. Krivanek extended the approach of Ward and Heckbert to handle the more complex case of glossy surfaces, introducing an approach referred to as radiance caching. Jarosz et al. and Schwarzhaupt et al. proposed a more accurate visibility-aware Hessian-based model that greatly improves the placement of records in an irradiance caching context, significantly increasing the quality and performance of the baseline approach. In this thesis, we extend the approaches introduced in the aforementioned works to the problem of radiance caching in order to improve the placement of records. We also discovered a crucial problem overlooked in previous work due to the choice of test scenes; we present a preliminary study of this problem and identify several potential solutions worth further investigation.
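As a rough sketch of the caching idea discussed above, the code below implements the classic Ward-Heckbert style record weighting used in irradiance caching: a cached record contributes at a shading point if its weight, driven by distance and normal divergence, exceeds a threshold, and a new record is computed where no cached record is valid. This is a textbook illustration under assumed data structures, not the Hessian-based, occlusion-aware placement developed in the thesis.

```python
# Hypothetical sketch of classic irradiance-cache record reuse (Ward-Heckbert
# style weighting). The record layout and error tolerance `alpha` are assumed;
# this does not implement the Hessian-based placement studied in the thesis.
import numpy as np


def record_weight(p, n, rec_p, rec_n, rec_radius):
    """Weight of a cached record (position rec_p, normal rec_n, harmonic-mean
    distance rec_radius) at shading point p with normal n."""
    dist_term = np.linalg.norm(p - rec_p) / rec_radius
    normal_term = np.sqrt(max(0.0, 1.0 - float(np.dot(n, rec_n))))
    return 1.0 / max(dist_term + normal_term, 1e-6)


def interpolate_irradiance(p, n, records, alpha=0.2):
    """Blend nearby records whose weight exceeds 1/alpha; return None when no
    record is valid, signalling that a new record should be computed here."""
    total_w, total_e = 0.0, np.zeros(3)
    for rec in records:                       # rec: dict with keys p, n, radius, E
        w = record_weight(p, n, rec["p"], rec["n"], rec["radius"])
        if w > 1.0 / alpha:
            total_w += w
            total_e += w * rec["E"]
        # else: record is too far away or oriented too differently to reuse
    return total_e / total_w if total_w > 0.0 else None
```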
