  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Ambient Occlusion for Dynamic Objects and Procedural Environments

Jansson, Joel January 2013 (has links)
In computer graphics, lighting is an important area. To simulate shadows from area light sources, indirect lighting, and shadows from indirect light, a class of algorithms commonly known as global illumination algorithms can be used. Ambient occlusion is an approximation to global illumination that can emulate shadows from area light sources and from indirect light, giving very soft shadows. For real-time applications, ambient occlusion can be precomputed and stored in maps or per vertex. However, that only gives good results if the geometry is static. Therefore, a number of methods that can handle more or less dynamic scenes have been introduced in recent years. In this thesis, a collection of ambient occlusion methods for dynamic objects and procedural environments is described. The main contribution is a novel method that handles ambient occlusion for procedural environments. Another contribution is a description of an implementation of Screen Space Ambient Occlusion (SSAO), an algorithm that calculates approximate ambient occlusion in real time using the depths of surrounding pixels; it handles completely dynamic scenes with good performance. The method for procedural environments handles the scenario where a number of building blocks are procedurally assembled at run-time. The idea is to precompute an ambient occlusion map for each building block, storing its self-occlusion. In addition, an ambient occlusion grid is precomputed for each block to accommodate inter-block occlusion. At run-time, after the building blocks have been assembled, the ambient occlusion from the grids is blended with the ambient occlusion from the maps to generate new maps, valid for the procedural environment. The environment can then be rendered with high-quality ambient occlusion at almost no cost, in the same fashion as a static environment whose ambient occlusion maps can be completely precomputed.
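The SSAO idea the abstract describes — estimating occlusion per pixel from the depths of surrounding pixels — can be sketched in a few lines. This is a minimal illustration in Python/NumPy, not the thesis's implementation: the sampling pattern, `radius`, `bias`, and sample count are all hypothetical choices, and a real renderer would work in view space on the GPU.

```python
import numpy as np

def ssao(depth, radius=2, samples=8, bias=0.01, rng=None):
    """Toy screen-space ambient occlusion from a depth buffer.

    For each pixel, a few neighbouring depths are sampled; neighbours
    sufficiently nearer the camera than the centre pixel (by `bias`)
    count as occluders. Returns a factor in [0, 1] per pixel, where
    1.0 means fully unoccluded. Smaller depth = nearer the camera.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = depth.shape
    occlusion = np.zeros_like(depth, dtype=float)
    for _ in range(samples):
        # Random screen-space offset within the sampling radius.
        dy, dx = rng.integers(-radius, radius + 1, size=2)
        sy = np.clip(np.arange(h) + dy, 0, h - 1)
        sx = np.clip(np.arange(w) + dx, 0, w - 1)
        neighbour = depth[np.ix_(sy, sx)]
        # A neighbour clearly in front of the pixel occludes it.
        occlusion += (neighbour < depth - bias).astype(float)
    return 1.0 - occlusion / samples
```

On a flat depth buffer nothing occludes anything, while a pixel at the bottom of a depth "pit" is darkened by its nearer neighbours — the soft, contact-shadow look the abstract attributes to ambient occlusion.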

Keys to Effectively Create Realistic Fur in Autodesk Maya

Normann, Annica January 2012 (has links)
The tools for creating realistic fur using a computer have continued to develop since the first computer-generated fur was accomplished. Tools for creating fur can be found inside the software Autodesk Maya. Rendering fur is often a very time-consuming process, and this thesis therefore investigates how the trade-off between realistic fur and render time can be improved. When creating fur, there are several aspects to take into account, for example shadowing, length, color, and irregularities. The thesis assesses the question through a case study combined with experimental research, and evaluates the results through a survey. The qualitative research does not include animated fur, only still images of computer-generated fur. This research will hopefully improve the knowledge of, and act as a guide for, others who are creating fur.

Real Time Ray Tracing

Huss, Niklas January 2004 (has links)
Ray tracing has long been used to create photorealistic images, but due to complex per-pixel calculations and slow hardware, the time to render a frame has been counted in hours or even days, which is a drawback when a change to a scene cannot be seen instantly. When ray tracing a frame takes less than a second, we call it "real-time ray tracing" or "interactive ray tracing". Many solutions have been developed, some of which distribute the computation across computers interconnected by a very fast network (100 Mbit/s or higher). This approach has drawbacks, because most people do not have more than one computer, and if they do, the computers are most likely not connected to each other. Since today's hardware is fast enough to render a fairly complex image within minutes, it should be possible to achieve real-time ray tracing by combining many of the methods that have been developed to reduce render time. This work examines what has to be sacrificed in image quality and in the complexity of static scenes in order to achieve real-time frame rates with ray tracing on a single computer. Some of the methods covered in this work are frame optimizations, secondary-ray optimization, hierarchies, culling, shadow caching, and subsampling.
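The per-pixel work the abstract calls expensive boils down to intersection tests like the following ray-sphere test, shown here as a small Python sketch (the function name and interface are illustrative, not from the thesis). Every primary, secondary, and shadow ray pays this cost, which is why the listed optimizations — hierarchies, culling, shadow caching, subsampling — all aim to do fewer of these tests.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance t to the nearest ray-sphere hit, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic
    in t. `direction` is assumed to be normalised, so the quadratic's
    leading coefficient is 1.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                    # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0   # nearer of the two roots
    return t if t > 1e-6 else None     # reject hits behind the origin
```

Shadow caching, one of the techniques the abstract names, simply remembers the last occluder found by this test and tries it first for the next shadow ray, since adjacent pixels tend to be shadowed by the same object.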

Illustrative Visualization of Anatomical Structures

Jonsson, Erik January 2011 (has links)
Illustrative visualization is a term for visualization techniques inspired by traditional technical and medical illustration. These techniques are based on knowledge of human perception and provide effective visual abstraction to make visualizations more understandable. Within volume rendering, such expressive visualizations can be achieved using non-photorealistic rendering that combines different levels of abstraction to convey the most important information to the viewer. In this thesis I look at illustrative techniques and show how they can be used to visualize anatomical structures in medical volume data. The result of the thesis is a prototype of an anatomy education application that uses illustrative techniques to provide a focus+context visualization with feature enhancement, tone shading, and labels describing the anatomical structures. This yields an expressive visualization and interactive exploration of the human anatomy.
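Tone shading, one of the techniques the abstract lists, is commonly done in the Gooch cool-to-warm style; the abstract does not specify which variant the thesis uses, so the sketch below shows the standard Gooch interpolation with hypothetical cool/warm colors.

```python
def tone_shade(n_dot_l, cool=(0.0, 0.0, 0.55), warm=(0.3, 0.3, 0.0)):
    """Gooch-style cool-to-warm tone shading.

    Instead of darkening surfaces that face away from the light,
    the colour is interpolated from a cool tone (facing away) to a
    warm tone (facing the light). Shape cues stay visible everywhere,
    which is why the style is a staple of illustrative rendering.
    `n_dot_l` is the dot product of surface normal and light direction.
    """
    t = (n_dot_l + 1.0) / 2.0            # map [-1, 1] to [0, 1]
    return tuple(c + t * (w - c) for c, w in zip(cool, warm))
```

A surface fully facing the light gets the warm tone, one facing fully away gets the cool tone, and nothing is ever rendered pure black — useful when anatomical context must remain legible.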

3D Teleconferencing : The construction of a fully functional, novel 3D Teleconferencing system / 3D Telekonferens : Konstruktionen av ett nytt, operativt 3D Teleconferanssystem

Lång, Magnus January 2009 (has links)
This report summarizes the work done to develop a 3D teleconferencing system that enables remote participants anywhere in the world to be scanned in 3D, transmitted, and displayed on a purpose-built 3D display with correct vertical and horizontal parallax, correct eye contact, and correct eye gaze. The main focus of this report is the development of this system, and especially how to render to the novel 3D display in an efficient and general manner. The 3D display is built out of modified commodity hardware and shows a 3D scene to observers from up to 360 degrees around it and at all heights. The result is a fully working 3D teleconferencing system, resembling communication envisioned in movies, such as the holograms in Star Wars. The system transmits over the internet with bandwidth requirements similar to those of contemporary 2D videoconferencing systems. / Project done at USC Institute for Creative Technologies, LA, USA. Presented at SIGGRAPH09.

Improving rendering times of Autodesk Maya Fluids using the GPU

Andersson, Jonas, Karlsson, David January 2008 (has links)
Fluid simulation is today a hot topic in computer graphics. New, highly optimized algorithms have allowed complex systems to be simulated at high speed. This master thesis describes how the graphics processing unit, found in most computer workstations, can be used to optimize the rendering of volumetric fluids. The main aim of the work has been to develop software capable of rendering fluids in high quality and with high performance using OpenGL. The software was developed at Filmgate, a digital effects company in Göteborg, and much time was spent making the interface and the workflow easy to use for people familiar with Autodesk Maya. The project resulted in a standalone rendering application, together with a set of plugins to exchange data between Maya and our renderer. Most of the goals have been reached when it comes to rendering features. The performance bottleneck turned out to be reading data from disk, and this is an area suited to future development of the software.
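The abstract does not detail the rendering algorithm, but volumetric fluids are typically rendered by ray marching a density grid. The sketch below shows only the generic absorption term of such a march, reduced to a single ray through a 1-D density profile — an illustration of the idea, not the Filmgate renderer, and the `absorption` coefficient and step size are made-up parameters.

```python
import numpy as np

def march_ray(density, step=1.0, absorption=0.5):
    """Absorption-only ray march through a list of density samples.

    Accumulates transmittance T = exp(-absorption * sum(density * step))
    front to back -- the inner loop a volumetric renderer evaluates per
    pixel. Returns the fraction of background light that survives.
    """
    transmittance = 1.0
    for d in density:
        transmittance *= np.exp(-absorption * d * step)
        if transmittance < 1e-4:    # early ray termination
            break
    return transmittance
```

On a GPU this loop runs in a fragment shader, one ray per pixel, which is where the speedup over a CPU renderer comes from: thousands of such rays march in parallel.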

Occlusion Culling on the GPU : Inner Conservative Occluder Rasterization

Svensson, Marcus January 2016 (has links)
Context. Many occlusion culling algorithms have to balance performance against accuracy. While it is desirable to accurately identify all occluded scene objects, settling for a rough estimate is often more beneficial for overall performance. Algorithms that rely on a depth buffer can often gain a lot of performance by performing the occlusion culling at a lower resolution than that of the screen. This calls for more advanced methods to render the depth buffer, as the standard rasterizer will not guarantee inner coverage. Objectives. The goal of this thesis is to find a way to generate a depth buffer in which all rasterized pixels are fully covered by overlapping occluders. An algorithm is proposed that is based on previous work on inner conservative rasterization. The algorithm addresses some of the problems existing methods suffer from, but also has some flaws of its own. Methods. The proposed algorithm is tested by comparing it to two other methods that produce conservative results. A GPU-based occlusion culling system is developed to conduct an experiment, which measures performance and culling efficiency in two different views of a scene set up to represent an average setting in a game. Results. The results show that the proposed algorithm outperforms its competitors in many cases. In the first scene view, the total frame time is 5% faster at a full-screen resolution of 1366x768 pixels and 8% faster at 1920x1080 pixels. The depth buffer generated by the proposed algorithm culls at least as many occludees as its competitors and often surpasses them. In the second scene view, the total frame time is 2% faster at 1366x768 pixels and 3% faster at 1920x1080 pixels. The depth buffer generated by the proposed algorithm often culls more occludees than its competitors, but is up to 3% less efficient at lower resolutions. Conclusions. The goal has been reached: the proposed algorithm lacks flexibility, but provides good performance and accuracy. Future work to improve the algorithm is outlined.
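The occlusion test that consumes the conservative depth buffer can be illustrated with a small Python sketch (names and the bounding-box interface are hypothetical; the thesis's system runs this on the GPU). The point of the "inner coverage" guarantee in the abstract is visible in the logic: an object is culled only if every covered texel holds a nearer occluder, so any texel the rasterizer could not guarantee as covered must keep the object visible.

```python
import numpy as np

def is_occluded(depth_buffer, bbox, nearest_depth):
    """Conservative occlusion test against a low-resolution depth buffer.

    `bbox` is (x0, y0, x1, y1) in buffer pixels (exclusive upper bounds);
    `nearest_depth` is the closest depth of the occludee, with smaller
    values nearer the camera. The object is culled only if every covered
    texel holds an occluder nearer than the object, so a single 'hole'
    in the buffer keeps the object visible.
    """
    x0, y0, x1, y1 = bbox
    region = depth_buffer[y0:y1, x0:x1]
    return bool(np.all(region < nearest_depth))
```

Running this at, say, a quarter of the screen resolution is where the performance gains in the abstract come from: the per-object test touches far fewer texels, at the cost of the occasional occludee that survives culling.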

Efficient and Accurate Volume Rendering on Face-Centered and Body-Centered Cubic Grids

Smed, Karl-Oskar January 2015 (has links)
The body-centered cubic (BCC) grid and face-centered cubic (FCC) grid offer improved sampling properties compared to the Cartesian grid. Despite this, there is little software and hardware support for volume rendering of data stored in one of these grids. This project is a continuation of a project adding support for such grids to the volume rendering engine Voreen, and it has three aims. First, to implement new interpolation methods capable of rendering at interactive frame rates. Second, to improve the software by adding an alternative volume storage format offering higher frame rates for BCC methods. Third, because aliasing makes it difficult to compare image quality between different grid types, to implement a method that is unbiased in terms of post-aliasing. The existing methods are compared to the newly implemented ones in terms of frame rate and image quality. The results show that the new volume format improves the frame rate significantly, that the new BCC interpolation method offers similar image quality at better performance than existing methods, and that the unbiased method produces images of good quality at the expense of speed.
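The BCC grid the abstract builds on has a simple constructive description: two interleaved Cartesian grids, the second offset by half the grid spacing along all three axes. A minimal Python/NumPy sketch of the sample-point layout (the function is illustrative, not from the thesis, and uses unit grid spacing):

```python
import numpy as np

def bcc_points(n):
    """Sample positions of an n x n x n body-centred cubic (BCC) grid.

    The BCC lattice is a Cartesian grid plus a second copy shifted by
    (0.5, 0.5, 0.5) -- each shifted point sits at the body centre of a
    Cartesian cell. This denser, more isotropic packing is what gives
    BCC grids their improved sampling efficiency for volume data.
    """
    axes = np.arange(n, dtype=float)
    primary = np.stack(
        np.meshgrid(axes, axes, axes, indexing="ij"), axis=-1
    ).reshape(-1, 3)
    secondary = primary + 0.5          # body-centre offset
    return np.concatenate([primary, secondary])
```

Storing the two sub-grids as separate 3-D arrays is one plausible realisation of the "alternative volume storage format" idea: each half is then a plain Cartesian texture that GPU hardware can filter natively.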

Creating Digital Photorealistic Material Renders by Observing Physical Material Properties

Johansson, Simon January 2014 (has links)
When creating materials in computer graphics, the most common method is to estimate their properties based on intuition. This seems like a flawed approach, seeing as a large part of the industry has already moved to a physically based workflow. A better method would be to observe real material properties and use that data in the application. This research delves into the art of material creation by first explaining the theory behind the properties of materials through a literature review. The review also covers techniques that separate these properties and present them visually to artists, giving them a better understanding of how a material behaves. Through action research, an empirical study then presents a workflow for creating photorealistic renders using data collected with these techniques. While the techniques still require subjective decisions when recreating the materials, they do help artists create more accurate renderings with less guesswork.

GPU accelerated rendering of vector based maps on iOS

Qvick Faxå, Alexander, Bromö, Jonas January 2014 (has links)
Digital maps can be represented as either raster (bitmap images) or vector data. Vector maps are often preferable, as they can be stored more efficiently and rendered irrespective of screen resolution. Vector map rendering on demand can be a computationally intensive task and has to be implemented in an efficient manner to ensure good performance and a satisfied end user, especially on mobile devices with limited computational resources. This thesis discusses different ways of utilizing the on-chip GPU to improve the vector map rendering performance of an existing iOS app. It describes an implementation that uses OpenGL ES 2.0 to achieve the same end result as the old CPU-based implementation using the same underlying map infrastructure. By using the OpenGL-based map renderer as well as implementing other performance optimizations, the authors were able to achieve an almost fivefold increase in rendering performance on an iPad Air.
