21

Improving the Perception of Depth of Image-Based Objects in a Virtual Environment

Whang, JooYoung 29 July 2020 (has links)
With the continuing growth of High-Performance Computing, modern scientific simulations are scaling to millions and even billions of grid points. As we enter the exascale era, new strategies are required for visualization and analysis. While Image-Based Rendering (IBR) has emerged as a viable answer to the asymmetry between data size and available storage and rendering power, it is limited by its 2D image portrayal of 3D spatial objects. This work describes a novel technique to capture, represent, and render depth information in the context of 3D IBR. With our technique, we evaluated the value of displacement via a displacement map, shading via normal maps, and the angular interval between images. We ran an online user study with 60 participants to evaluate the value of adding depth information back to Image-Based Rendering and found significant benefits. / Master of Science / In scientific research, data visualization is important for better understanding data. Modern experiments and simulations are expanding rapidly in scale, and there will come a day when rendering the entire 3D geometry becomes impossible with available resources. Cinema was proposed as an image-based solution to this problem, in which the model is represented by an interpolated series of images. However, flat images cannot fully express the 3D characteristics of the data. In this work, we therefore improve the depth portrayal of the images by protruding the pixels and applying shading. We present the results of a user study conducted with 60 participants on the effects of pixel protrusion, shading, and varying the number of images representing the object. The results show that this method is useful for 3D scientific visualizations: the resulting object closely resembles the original 3D object.
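
The two depth cues evaluated above, protruding pixels by a displacement map and shading by per-pixel normals, can be illustrated with a minimal numpy sketch. All names and data here are hypothetical stand-ins, not the thesis implementation:

```python
import numpy as np

def shade_with_normals(color, normals, light_dir):
    """Lambertian shading of an RGB image (H, W, 3) from a per-pixel
    normal map (H, W, 3) -- the 'shading by normal' cue."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    n_dot_l = np.clip((normals * l).sum(axis=-1), 0.0, 1.0)
    return color * n_dot_l[..., None]

def protrude_pixels(depth, scale=10.0):
    """Turn a displacement map (H, W) into a point cloud by pushing
    each image-plane pixel out along the view axis -- the
    'displacement by displacement map' cue."""
    h, w = depth.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([x, y, scale * depth], axis=-1)

# Toy usage with random stand-in data:
h, w = 64, 64
depth = np.random.rand(h, w)                 # stand-in displacement map
normals = np.dstack([np.zeros((h, w)), np.zeros((h, w)), np.ones((h, w))])
color = np.ones((h, w, 3))
lit = shade_with_normals(color, normals, [0.3, 0.3, 1.0])
cloud = protrude_pixels(depth)               # (H, W, 3) displaced points
```

Displacing the image plane by depth and re-shading with stored normals is what lets a flat IBR proxy respond to viewpoint and lighting changes more like true 3D geometry.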
22

A complete and practical system for interactive walkthroughs of arbitrarily complex scenes

Yang, Lining 06 August 2003 (has links)
No description available.
23

AI-based image matching for high-altitude vehicle navigation / AI-baserad bildmatchning för navigation av höghöjdsfordon

Lernholt, Oskar January 2024 (has links)
Localization without Global Navigation Satellite Systems (GNSS) is an area of interest for autonomous operations of aerial vehicles. A promising navigation method involves using onboard images and comparing them to geo-tagged reference images for global localization. This study investigates algorithms for global localization of flying vehicles at altitudes around 1-3 km using images. The focus is on matching onboard camera images with georeferenced images under significant appearance variations due to seasonal changes or different image sources. Four methods are evaluated: two traditional correlation techniques, the cross mutual information function (CMIF) method, and the Deep Phase Correlation Network (DPCN), which uses neural networks. Synthetic data from the X-Plane 12 flight simulator, featuring built-in graphics and satellite imagery, is used to generate datasets over varying locations and under diverse conditions. The results indicate that the DPCN method, combining deep learning and correlation, achieves the highest accuracy across most test scenarios. This study underscores the potential of DPCN for robust aerial vehicle image matching in GNSS-denied environments, while also noting its limitations in challenging weather conditions. Future improvements could involve larger training datasets, the use of real-world images, and integration with additional navigation methods such as inertial and visual odometry.
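
For context, the classical phase correlation that DPCN builds on registers two images by locating the peak of the normalized cross-power spectrum. A minimal, translation-only numpy sketch (a simplification, not the thesis code):

```python
import numpy as np

def phase_correlation(ref, query, eps=1e-8):
    """Estimate the (dy, dx) shift of `query` relative to `ref` via the
    normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(query)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + eps          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                       # unwrap shifts past half-size
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Roughly speaking, DPCN replaces the fixed Fourier-magnitude normalization with learned feature extraction, which is what makes it robust to the seasonal and cross-source appearance changes described above.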
24

Learning Geometry-free Face Re-lighting

Moore, Thomas Brendan 01 January 2007 (has links)
The accurate modeling of the variability of illumination in a class of images is a fundamental problem that occurs in many areas of computer vision and graphics. For instance, in computer vision there is the problem of facial recognition: simply put, one would hope to be able to identify a known face under any illumination. In graphics, on the other hand, one could imagine a system that, given an image, identifies the illumination model and then uses it to create new images. In this thesis we describe a method for learning the illumination model for a class of images. Once the model is learnt, it is used to render new images of the same class under new illumination. Results are shown for both synthetic and real images. The key contribution of this work is that images of known objects can be re-illuminated using small patches of image data and relatively simple kernel regression models. Additionally, our approach does not require any knowledge of the geometry of the class of objects under consideration, making it relatively straightforward to implement. As part of this work we examine existing geometric and image-based re-lighting techniques; give a detailed description of our geometry-free face re-lighting process; present non-linear regression and basis selection with respect to image synthesis; discuss system limitations; and look at possible extensions and future work.
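
A minimal sketch of the kind of patch-based kernel regression described, here written as kernel ridge regression from (patch, lighting-parameter) features to re-lit intensities; the feature layout and hyperparameters are illustrative assumptions, not the thesis design:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian (RBF) kernel between row-feature matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_relighting(patches, light_params, targets, lam=1e-3, gamma=0.1):
    """Kernel ridge regression from (image patch, lighting parameters)
    to re-lit pixel intensities -- geometry-free, as only image data
    and light descriptors enter the model."""
    X = np.hstack([patches, light_params])          # (N, d) features
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), targets)
    return X, alpha

def predict_relit(X_train, alpha, patch, light, gamma=0.1):
    """Render one pixel of the object under a new illumination."""
    x = np.hstack([patch, light])[None, :]
    return rbf_kernel(x, X_train, gamma) @ alpha
```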
25

Image-based approaches for photo-realistic rendering of complex objects

Hilsmann, Anna 03 April 2014 (has links)
One principal intention of computer graphics is the achievement of photorealism. With physically-based methods, achieving photorealism is still computationally demanding. This dissertation proposes new approaches for image-based visualization of complex objects, concentrating on clothes. The developed methods use real images as appearance examples to guide complex animation or texture modification processes, combining the photorealism of images with the ability to animate or modify an object. Under the assumption that wrinkling depends mainly on the pose of the human body (for tight-fitting clothes), a new image-based rendering approach is proposed which synthesizes images of clothing from a database of images based on pose information. Pose-dependent appearance and shading information is extracted by image warps and interpolated in pose-space using scattered data interpolation. To allow for appearance changes in image-based methods, a retexturing approach is proposed which enables texture exchange without a-priori knowledge of the underlying scene properties. Texture deformation and shading are extracted from the input image by a warp to an appropriate reference image. In contrast to classical image-based visualization methods, where animation is restricted to viewpoint change and appearance modification is not possible, the proposed methods allow for complex pose animations and appearance changes. Both approaches build on image warps, not only in the spatial but also in the photometric domain. A new framework for joint spatial and photometric warp optimization is introduced, which estimates mesh-based warp models under a modified brightness constancy assumption. The presented approaches shift computational complexity from the rendering to an a-priori training phase and allow for photo-realistic visualization and modification of clothes, including fine and characteristic details, without computationally demanding simulation of the underlying scene and object properties.
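
The joint spatial and photometric warp can be summarized by its residual: the target image sampled at warped coordinates, scaled by a per-pixel photometric gain, should reproduce the reference under the (modified) brightness constancy assumption. A schematic sketch with the mesh parameterization omitted; all names are illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_residual(I_ref, I_tgt, warp_y, warp_x, gain):
    """Brightness-constancy residual of a joint warp: sample the target
    image at the warped coordinates (spatial warp), scale by a per-pixel
    photometric gain, and compare against the reference. An optimizer
    would drive this residual toward zero."""
    warped = map_coordinates(I_tgt, [warp_y.ravel(), warp_x.ravel()],
                             order=1).reshape(I_ref.shape)
    return gain * warped - I_ref

# Identity warp on the same image gives a (near-)zero residual:
h, w = 32, 32
I = np.random.rand(h, w)
ys, xs = np.mgrid[0:h, 0:w].astype(float)
r = warp_residual(I, I, ys, xs, np.ones((h, w)))
```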
26

Camera positioning for 3D panoramic image rendering

Audu, Abdulkadir Iyyaka January 2015 (has links)
Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component that affects the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in the implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and the use of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with ones generated using a dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and superposing them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of pixel points from multiple image samples with a single centre of projection, using a sparse bundle adjustment algorithm. The statistical summary obtained after the application of this algorithm provides a gauge of the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Occlusion therefore becomes an extremely challenging problem, and a robust camera set-up is required to resolve the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. This thesis therefore also explores a trapezoidal camera structure for image acquisition. The approach is to assess the feasibility and potential of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya. The depth maps rendered in Matlab are of better quality.
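
A representative region match measure for the image point correspondence step is sum-of-squared-differences block matching along rectified scanlines. A brute-force sketch (illustrative, not the thesis code):

```python
import numpy as np

def ssd_disparity(left, right, max_disp=32, block=7):
    """For each pixel in the left image, find the horizontal disparity
    that minimizes the sum of squared differences over a small block --
    one classical region match measure, the baseline against which
    dynamic-programming stereo is often compared."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.sum((patch - right[y - r:y + r + 1,
                                           x - d - r:x - d + r + 1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```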
27

Modeling object identification and tracking errors on automated spatial safety assessment of earthmoving operations

Chi, Seok Ho 01 October 2010 (has links)
Recent research studies have been conducted to automate the safety assessment process in order to identify risks and safety hazards on a job site without human intervention. Despite the benefits of automated assessment, safety planners still face challenges in selecting applicable devices, methods, and algorithms for safety assessment. This is due to the fact that (1) such devices, methods, and algorithms typically have measurement and processing errors, (2) construction operations and sites are unique and complex, and (3) the impact of the errors differs between workspaces. The primary objective of this research is to develop an error impact analysis method to model the data collection and data processing errors caused by image-based devices and algorithms, and to analyze the impact of these errors on spatial safety assessment of earthmoving and surface mining activities. The literature review revealed the possible causes of accidents in earthmoving activities, investigated the spatial risk factors of these types of accident, and identified spatial data needs for safety assessment based on current safety regulations. Image-based data collection devices and algorithms for safety assessment were then evaluated. Analysis methods and rules for monitoring safety violations were also discussed. Finally, a testbed was designed to model and simulate workspaces and related spatial safety violations. Using the testbed, the impacts of image-based algorithm and device errors (more specifically, object identification and tracking errors) on the data collected and processed were investigated for safety planning purposes. Field experiments assessed the feasibility of automated spatial data collection and analysis methods. Industrial project and safety experts verified the proposed safety rules and the testbed design. Computer simulations were conducted to test the proposed testbed. The testbed was used to model several earthmoving operation scenarios, detect simulated safety violations using safety rules, and finally evaluate the impact of different object identification and tracking errors on the safety analyses. The results of this research could be used to improve site safety assessment and planning by helping safety planners understand workspaces and evaluate errors related to the use of different image-based technologies for safety assessment of earthmoving and surface mining activities.
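
The error-impact idea can be pictured as a Monte-Carlo comparison of true versus detected violations of a simple proximity rule when identification misses and tracking noise are injected. A sketch under entirely hypothetical rule parameters, not the testbed's actual rules:

```python
import numpy as np

def violations_under_tracking_error(workers, equipment, danger_radius,
                                    sigma=0.5, miss_rate=0.1,
                                    trials=1000, seed=0):
    """Count true vs. detected worker-equipment proximity violations
    when each observation may be missed (identification error) or
    perturbed by Gaussian noise (tracking error)."""
    rng = np.random.default_rng(seed)
    equipment = np.asarray(equipment, dtype=float)
    true_hits = detected_hits = 0
    for _ in range(trials):
        for wpos in workers:
            wpos = np.asarray(wpos, dtype=float)
            if np.linalg.norm(wpos - equipment) < danger_radius:
                true_hits += 1
            if rng.random() < miss_rate:        # identification failure
                continue
            noisy = wpos + rng.normal(0.0, sigma, size=2)
            if np.linalg.norm(noisy - equipment) < danger_radius:
                detected_hits += 1
    return true_hits, detected_hits

# Toy scenario: two workers near an excavator at the origin.
print(violations_under_tracking_error([(3.0, 0.0), (8.0, 2.0)],
                                      (0.0, 0.0), danger_radius=5.0))
```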
28

Incident Light Fields

Unger, Jonas January 2009 (has links)
Image-based lighting (IBL) is a computer graphics technique for creating photorealistic renderings of synthetic objects so that they can be placed into real-world scenes. IBL has been widely recognized and is today used in commercial production pipelines. However, current techniques only use illumination captured at a single point in space. This means that traditional IBL cannot capture or recreate effects such as cast shadows, shafts of light, or other important spatial variations in the illumination. Such lighting effects are, in many cases, artistically created or serve to emphasize certain features, and are therefore a very important part of the visual appearance of a scene. This thesis and the included papers present methods that extend IBL to allow capture and rendering with spatially varying illumination. This is accomplished by measuring the light field incident onto a region in space, called an Incident Light Field (ILF), and using it as illumination in renderings. This requires the illumination to be captured at a large number of points in space instead of just one, which significantly increases the complexity of the capture methods and rendering algorithms. The technique for measuring spatially varying illumination in real scenes is based on the capture of High Dynamic Range (HDR) image sequences. For efficient measurement, image capture is performed at video frame rates. The captured illumination information in the image sequences is processed so that it can be used in computer graphics rendering. By extracting high-intensity regions from the captured data and representing them separately, this thesis also describes a technique for increasing rendering efficiency, and methods for editing the captured illumination, for example artificially moving or turning on and off individual light sources.
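
The HDR capture that ILF measurement relies on can be sketched as a weighted merge of an exposure bracket into scene radiance, assuming a linear camera response; the weighting function here is an illustrative choice, not the method used in the thesis:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge a stack of exposures (pixel values normalized to [0, 1])
    into a radiance map: each exposure contributes its per-pixel
    radiance estimate img / t, weighted to trust well-exposed pixels
    over clipped or noisy ones."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - 2.0 * np.abs(img - 0.5)   # hat weight: mid-range best
        w = np.clip(w, 1e-3, 1.0)
        acc += w * (img / t)                # per-pixel radiance estimate
        wsum += w
    return acc / wsum
```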
29

An Analytic Image-Technology Inventory of National Tourism Organizations (NTOs)

Chang, Lung-chiuan 15 December 2006 (has links)
The Internet is playing an increasingly crucial role in destination marketing, and it is used as a major marketing tool among National Tourism Organizations (NTOs). Website design influences consumers' website preference and destination selection. This study aims to understand the application of image-based technology by the major NTOs through the collection and comparison of static and dynamic images presented on their official tourism websites. Data collected from a sample of the world's top 25 tourism destination nations reveals that all NTOs use either static or dynamic images on their websites, but the use of static images is far more popular than that of dynamic images.
30

Environnements lumineux naturels en mode : Spectral et Polarisé. Modélisation, Acquisition, Simulation / Spectral and Polarized Natural Light Environment

Porral, Philippe 16 December 2016 (has links)
In the field of computer graphics, the simulation of the visual appearance of materials requires a rigorous solving of the light transport equation. This implies incorporating into the models all elements that can influence the spectral radiance received by the human eye. The characterization of the reflectance properties of materials, still the subject of much research, is very advanced. However, the environment maps used to simulate their visual behavior remain essentially trichromatic. Characterizing natural light with precision is an old question, and today there are no environment maps that include both the spectral radiance and the polarization information corresponding to real skies. It therefore appeared necessary to us to offer the computer graphics community complete light environments exploitable in a rendering engine adapted accordingly. In this work, we use results from other scientific fields, such as meteorology and climatology, to propose a model of clear (i.e., cloudless) sky. As not all real situations can be addressed by this method, we also develop and characterize a device for capturing light environments that incorporates the dynamic range of the lighting, its spectral distribution, and its polarization states. Finally, with the aim of standardizing exchanges, we propose a data format usable in a spectral rendering engine that exploits the "Stokes - Mueller" formalism.
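
The "Stokes - Mueller" formalism mentioned at the end represents polarized light as a four-component Stokes vector (total intensity I, linear polarization terms Q and U, circular term V) that each optical interaction transforms by a 4x4 Mueller matrix; in standard notation, with an ideal horizontal linear polarizer as an example:

```latex
S = \begin{pmatrix} I \\ Q \\ U \\ V \end{pmatrix},
\qquad
S_{\mathrm{out}} = M \, S_{\mathrm{in}},
\qquad
M_{\mathrm{pol}} = \frac{1}{2}
\begin{pmatrix}
1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
```

Storing a Stokes vector per pixel and wavelength band, rather than a single RGB triple, is what lets an environment map carry the spectral and polarization information the abstract calls for.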
