About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

A true virtual window

Radikovic, Adrijan Silvester 17 February 2005 (has links)
Previous research from environmental psychology shows that human well-being suffers in windowless environments in many ways and that a window view of nature is psychologically and physiologically beneficial to humans. Current window substitutes, still images and video, lack the three-dimensional properties necessary for a realistic viewing experience, primarily motion parallax. We present a new system using a head-coupled display and image-based rendering to simulate a photorealistic artificial window view of nature with motion parallax. Evaluation data obtained from human subjects suggest that the system prototype is a better window substitute than a static image and has significantly more positive effects on observers' moods. The test subjects judged the system prototype as a good simulation of, and acceptable replacement for, a real window, and accorded it much higher ratings for realism and preference than a static image.
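A rough sketch of the motion-parallax idea behind such a window substitute is given below: a tracked head position drives per-layer image shifts, with nearer layers moving more than distant ones. The layer depths, scale factor and function names are illustrative assumptions, not details taken from the thesis.

```python
# A minimal sketch of head-coupled parallax for a layered "virtual window",
# assuming a tracked head position and a stack of pre-rendered image layers
# at known depths (layer depths, scale and names are illustrative).
import numpy as np

def parallax_offsets(head_xy, layer_depths, window_depth=1.0, scale=100.0):
    """Per-layer pixel shift: nearer layers move more against head motion."""
    head_xy = np.asarray(head_xy, dtype=float)
    shifts = []
    for d in layer_depths:
        # Relative parallax falls off with layer depth behind the window plane.
        shifts.append(-head_xy * scale * window_depth / (window_depth + d))
    return shifts

# Example: head moved 0.1 m to the right; layers 2 m, 10 m and 100 m behind the window.
print(parallax_offsets([0.1, 0.0], [2.0, 10.0, 100.0]))
```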
22

3D Reconstruction of Human Faces from Reflectance Fields

Johansson, Erik January 2004 (has links)
Human viewers are extremely sensitive to the appearance of people's faces, which makes the rendering of realistic human faces a challenging problem. Techniques for doing this have been continuously invented and refined for more than thirty years. This thesis makes use of recent methods within the area of image-based rendering, namely the acquisition of reflectance fields from human faces. The reflectance fields are used to synthesize and realistically render models of human faces. A shape-from-shading technique, assuming that human skin adheres to the Phong model, has been used to estimate surface normals. Belief propagation in graphs has then been used to enforce integrability before reconstructing the surfaces. Finally, the additivity of light has been used to realistically render the models. The resulting models closely resemble the subjects from which they were created, and can realistically be rendered from novel directions in any illumination environment.
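As an illustration of the surface-reconstruction step, the sketch below integrates an estimated normal field into a height map with the classic Frankot-Chellappa Fourier projection, which enforces integrability globally. The thesis itself uses belief propagation for this step, so this is a stand-in under assumed inputs, not the author's method.

```python
# A minimal sketch (not the thesis' belief-propagation step) of integrating a
# normal field into a height map with the Frankot-Chellappa projection.
# Inputs are assumed to be unit normals n = (nx, ny, nz) from shape from shading.
import numpy as np

def integrate_normals(normals):
    """normals: (H, W, 3) unit normals; returns an (H, W) relative height map."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    p = -nx / np.clip(nz, 1e-6, None)   # dz/dx
    q = -ny / np.clip(nz, 1e-6, None)   # dz/dy
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi
    wy = np.fft.fftfreq(h) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                    # avoid division by zero at the DC term
    Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                        # height is recovered up to a constant
    return np.real(np.fft.ifft2(Z))
```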
23

Real-time Arbitrary View Rendering From Stereo Video And Time-of-flight Camera

Ates, Tugrul Kagan 01 January 2011 (has links) (PDF)
Generating in-between images from multiple views of a scene is a crucial task for both computer vision and computer graphics. Photorealistic rendering, 3DTV and robot navigation are some of the many applications that benefit from arbitrary view synthesis, provided it is achieved in real time. Most modern commodity computer architectures include programmable processing chips, called Graphics Processing Units (GPUs), which are specialized in rendering computer-generated images. These devices achieve high computational power by processing arrays of data in parallel, which makes them ideal for real-time computer vision applications. This thesis focuses on an arbitrary view rendering algorithm that uses two high-resolution color cameras along with a single low-resolution time-of-flight depth camera, and matches the programming paradigms of GPUs to achieve real-time processing rates. The proposed method is divided into two stages: depth estimation through fusion of stereo vision and time-of-flight measurements forms the data acquisition stage, and the second stage is intermediate-view rendering from 3D representations of scenes. The ideas presented are examined in a common experimental framework and the practical results attained are put forward. Based on the experimental results, it can be concluded that it is possible to realize the content production and display stages of a free-viewpoint system in real time using only low-cost commodity computing devices.
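The sketch below illustrates the second stage in its simplest form: forward-warping a colour image with a fused depth map to a virtual camera between two rectified views, resolving overlaps with a z-buffer. The camera parameters, shift convention and names are assumptions for illustration; a real-time GPU version would replace the pixel loop with a parallel kernel.

```python
# A minimal sketch of intermediate-view synthesis for rectified cameras,
# assuming a fused per-pixel depth map: pixels are forward-warped by a
# disparity scaled with the virtual camera's position alpha in [0, 1]
# between the two physical views (names and z-buffer test are illustrative).
import numpy as np

def render_intermediate(color, depth, focal, baseline, alpha=0.5):
    """color: (H, W, 3), depth: (H, W) in metres; returns the warped view."""
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    disparity = focal * baseline / np.clip(depth, 1e-3, None)   # pixels
    shift = np.round(alpha * disparity).astype(int)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = xs - shift                                             # target columns
    valid = (xt >= 0) & (xt < w)
    for y, x, x2 in zip(ys[valid], xs[valid], xt[valid]):
        if depth[y, x] < zbuf[y, x2]:                           # keep the nearest surface
            zbuf[y, x2] = depth[y, x]
            out[y, x2] = color[y, x]
    return out
```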
25

A Prototype For An Interactive And Dynamic Image-Based Relief Rendering System / En prototyp för ett interaktivt och dynamiskt bildbaserat relief renderingssystem

Bakos, Niklas January 2002 (has links)
As part of research into generating arbitrary and unique virtual views of a real-world scene, a prototype of an interactive relief texture mapping system capable of processing video using dynamic image-based rendering is developed in this master's thesis. The process of deriving depth from recorded video using binocular stereopsis is presented, together with how the depth information is adjusted to allow manipulation of the orientation of the original scene. Once the scene depth is known, the recorded organic and dynamic objects can be seen from viewpoints not available in the original video.
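A minimal sketch of deriving depth by binocular stereopsis is shown below, using block matching on a rectified stereo pair. The window size, disparity range and function names are assumptions rather than the pipeline actually used in the thesis.

```python
# A minimal sketch of disparity estimation from a rectified stereo pair with
# block matching (sum of absolute differences); window size and disparity
# range are illustrative assumptions.
import numpy as np

def block_matching_disparity(left, right, max_disp=64, win=5):
    """left, right: (H, W) grayscale float arrays; returns per-pixel disparity."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1, x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # best-matching horizontal offset
    return disp
```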
26

Método dinâmico para troca de representação em sistemas híbridos de renderização de multidões / A Dynamic Representation-Switch Method for Hybrid Crowd Rendering Systems

Erasmo Artur da Silva Júnior 05 March 2013 (has links)
Environments populated with crowds are used in a variety of applications, such as games, simulators and editors. Many of these applications require not only realistic, detailed rendering of animated agents, but also smooth execution in real time, a task that easily exhausts system resources even on state-of-the-art hardware. Crowd rendering in real time therefore remains a challenge in computer graphics. Approaches exploiting levels of detail, visibility culling and image-based rendering have been proposed to make this task feasible. The first two increase rendering efficiency, but are sometimes not enough to maintain interactive frame rates. Much of the research on the subject concentrates on image-based rendering techniques, specifically the use of impostors. This work proposes a method that balances the computational demand of rendering by varying the threshold distance at which the representation switches between full-geometry models (meshes) and image-based ones (impostors), in accordance with the available resources.
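The sketch below illustrates the core idea: the mesh/impostor switch distance is adjusted each frame from the measured frame time, so rendering load tracks the available budget. The gains, bounds and target frame rate are illustrative assumptions, not values from the thesis.

```python
# A minimal sketch of adapting the mesh/impostor switch distance to the
# available resources: shrink the full-geometry radius when frames are late,
# grow it when there is slack. Gains, bounds and the target are assumptions.
def update_switch_distance(threshold, frame_time, target=1/30, gain=5.0,
                           lo=2.0, hi=200.0):
    """Proportional adjustment of the representation-switch distance."""
    error = target - frame_time            # positive = spare frame-time budget
    threshold += gain * error * threshold
    return min(max(threshold, lo), hi)

def pick_representation(agent_distance, threshold):
    """Agents nearer than the threshold are drawn as full meshes."""
    return "mesh" if agent_distance < threshold else "impostor"
```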
28

Image Vectorization

Price, Brian L. 31 May 2006 (has links) (PDF)
We present a new technique for creating an editable vector graphic from an object in a raster image. Object selection is performed interactively in subsecond time by calling graph cut with each mouse movement. A renderable mesh is then computed automatically for the selected object and each of its (sub)objects by (1) generating a coarse object mesh; (2) performing recursive graph cut segmentation and hierarchical ordering of subobjects; (3) applying error-driven mesh refinement to each (sub)object. The result is a fully layered object hierarchy that facilitates object-level editing without leaving holes. Object-based vectorization compares favorably with current approaches in representation and rendering quality. Object-based vectorization and complex editing tasks are performed in a few tens of seconds.
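The sketch below illustrates error-driven refinement in a simplified form: a region is recursively split while its mean colour approximates the underlying pixels too poorly. The quadtree cells, tolerance and names are assumptions standing in for the paper's actual mesh refinement.

```python
# A minimal sketch of error-driven refinement: split a cell whenever rendering
# it with its mean colour deviates too much from the raster pixels it covers.
# The quadtree structure and tolerance are illustrative assumptions.
import numpy as np

def refine(image, x, y, w, h, tol=10.0, min_size=4, cells=None):
    """image: (H, W, C) array; returns a list of (x, y, w, h, mean_colour) leaf cells."""
    if cells is None:
        cells = []
    patch = image[y:y+h, x:x+w].reshape(-1, image.shape[2]).astype(float)
    mean = patch.mean(axis=0)
    err = np.abs(patch - mean).mean()          # mean absolute colour error
    if err <= tol or w <= min_size or h <= min_size:
        cells.append((x, y, w, h, mean))
    else:
        hw, hh = w // 2, h // 2
        for dx, dy, cw, ch in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                               (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
            refine(image, x + dx, y + dy, cw, ch, tol, min_size, cells)
    return cells
```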
29

Learning Geometry-free Face Re-lighting

Moore, Thomas Brendan 01 January 2007 (has links)
The accurate modeling of the variability of illumination in a class of images is a fundamental problem that occurs in many areas of computer vision and graphics. For instance, in computer vision there is the problem of facial recognition: simply put, one would hope to be able to identify a known face under any illumination. On the other hand, in graphics one could imagine a system that, given an image, identifies the illumination model and then uses it to create new images. In this thesis we describe a method for learning the illumination model for a class of images. Once the model is learnt, it is used to render new images of the same class under new illumination. Results are shown for both synthetic and real images. The key contribution of this work is that images of known objects can be re-illuminated using small patches of image data and relatively simple kernel regression models. Additionally, our approach does not require any knowledge of the geometry of the class of objects under consideration, making it relatively straightforward to implement. As part of this work we will examine existing geometric and image-based re-lighting techniques; give a detailed description of our geometry-free face re-lighting process; present non-linear regression and basis selection with respect to image synthesis; discuss system limitations; and look at possible extensions and future work.
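A minimal sketch of patch-based relighting with kernel regression is shown below: a Gaussian-kernel ridge regressor maps an illumination descriptor to the pixels of a small image patch. The descriptor, training data and ridge strength are assumptions, not the thesis' exact model.

```python
# A minimal sketch of the patch-based kernel regression idea: learn a mapping
# from an illumination descriptor to the pixels of a small patch with
# Gaussian-kernel ridge regression. All inputs here are illustrative assumptions.
import numpy as np

def fit_kernel_regressor(X, Y, sigma=1.0, lam=1e-3):
    """X: (N, D) illumination descriptors, Y: (N, P) flattened training patches."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)   # (N, P) dual weights
    return X, alpha, sigma

def predict_patch(model, x_new):
    """Predict the re-lit patch for a new illumination descriptor x_new: (D,)."""
    X, alpha, sigma = model
    d2 = ((X - x_new) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma**2))
    return k @ alpha                                        # (P,) patch pixels
```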
30

Adapting Single-View View Synthesis with Multiplane Images for 3D Video Chat

Uppuluri, Anurag Venkata 01 December 2021 (has links) (PDF)
Activities like one-on-one video chatting and video conferencing with multiple participants are more prevalent than ever today as we continue to tackle the pandemic. Bringing a 3D feel to video chat has always been a hot topic in the Vision and Graphics communities. In this thesis, we have employed novel view synthesis in attempting to turn one-on-one video chatting into 3D. We have tuned the learning pipeline of Tucker and Snavely's single-view view synthesis paper, retraining it on the MannequinChallenge dataset, to better predict a layered representation of the scene viewed by either video chat participant at any given time. This intermediate representation of the local light field, called a Multiplane Image (MPI), may then be used to re-render the scene at an arbitrary viewpoint which, in our case, would match the head pose of the watcher in the opposite, concurrent video frame. We discuss that our pipeline, when implemented in real time, would allow both video chat participants to unravel occluded scene content and "peer into" each other's dynamic video scenes to a certain extent. It would enable full parallax up to the baselines of small head rotations and/or translations. It would be similar to a VR headset's ability to determine the position and orientation of the wearer's head in 3D space and render any scene in alignment with this estimated head pose. We have attempted to improve the performance of the retrained model by extending MannequinChallenge with the much larger RealEstate10K dataset. We present a quantitative and qualitative comparison of the model variants and describe our impactful dataset curation process, among other aspects.
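The sketch below shows how an MPI is rendered for a new viewpoint: RGBA planes at fixed depths are warped for the target view and composited back to front with the standard over operator. The per-plane homography is reduced here to a horizontal shift for a small sideways head translation, and the plane depths, focal length and names are assumptions for illustration.

```python
# A minimal sketch of rendering a multiplane image (MPI): RGBA planes at fixed
# depths are shifted for the target viewpoint and composited back to front with
# the "over" operator. The full homography warp is simplified to a per-plane
# horizontal shift for a small sideways head translation (an assumption).
import numpy as np

def render_mpi(planes, depths, head_dx, focal=500.0):
    """planes: list of (H, W, 4) RGBA in [0, 1], far-to-near order; depths in metres."""
    h, w, _ = planes[0].shape
    out = np.zeros((h, w, 3))
    for rgba, z in zip(planes, depths):                 # back to front
        shift = int(round(focal * head_dx / z))         # nearer planes move more
        warped = np.roll(rgba, shift, axis=1)
        rgb, a = warped[..., :3], warped[..., 3:4]
        out = rgb * a + out * (1 - a)                   # "over" compositing
    return out
```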
