51.
Multi-resolution representations and interactive visualization of huge unstructured volume meshes. Sondershaus, Ralf, January 2007.
Dissertation, University of Tübingen, 2007.
52.
Shape representations for image-based applications. Hornung, Alexander, January 2008.
Also: Dissertation, Technische Hochschule Aachen, 2008.
53.
Flow visualization using volume rendering (Strömungsvisualisierung mittels Volumenrendering). Bardili, Nicolas, January 2002.
Student research project (Studienarbeit), University of Stuttgart, 2002.
54.
GPU-based interactive visualization techniques (with 11 tables). Weiskopf, Daniel, January 2007.
Habilitation thesis (Habilitationsschrift), University of Stuttgart.
55.
Light Performance Comparison between Forward, Deferred and Tile-based Forward Rendering. Poliakov, Vladislav, January 2020.
Background. In this experiment, forward, deferred and tile-based forward rendering are implemented to compare their light-rendering performance. Most games and graphical applications today rely on rendering operations that graphics programmers continually develop and optimize for better performance. Forward rendering is the standard technique: it pushes the geometry through the whole rendering pipeline to build the final image. Deferred rendering is split into two passes, a geometry pass that rasterizes the scene into g-buffers and a lighting pass that reads the g-buffers and rasterizes the light sources to build the final image. Tile-based forward rendering is also split into two passes: the first builds a frustum grid and performs light culling, and the second rasterizes all geometry as in standard forward rendering.
Objectives. The objective is to implement the three rendering techniques in order to find the optimal technique for light rendering in different environments, and then to analyze the test results to answer the research questions and reach a conclusion.
Methods. The problem was addressed with the method "Implementation and Experimentation". A render engine with the three rendering techniques was implemented in C++ with the OpenGL API. The tests were run inside the engine, each lasting five minutes, and the collected data was used to create diagrams for evaluating the results.
Results. Standard forward rendering outperformed tile-based forward rendering and deferred rendering with few lights in the scene. With a large number of lights, deferred rendering showed the best light performance. Tile-based forward rendering was not as strong as expected, possibly because of the implementation, since the culling procedures were performed on the CPU side. The tile-based tests used 4 tiles in the frustum grid, since this configuration performed best among the tile configurations tested.
Conclusions. In environments with a limited number of light sources, the optimal rendering technique was standard forward rendering; in environments with many light sources, deferred rendering should be used. If tile-based forward rendering is used, it should be used with 4 tiles in the frustum grid. The hypothesis of the study was only partly confirmed: the prediction for a limited number of lights held, while the other parts were disproven. The tile-based forward renderer was not strong enough, most likely because its culling ran on the CPU side.
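The abstract does not include code, but as a hedged sketch of the kind of CPU-side light culling it describes, the C++ below (C++ and OpenGL are the tools named in the thesis) builds a frustum grid in view space and assigns point lights to tiles with a conservative sphere-plane test. The function names, data layout and the 2x2 example grid are assumptions made here for illustration, not code from the thesis; in the shading pass each fragment would then loop only over its tile's light list.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct PointLight { Vec3 posView; float radius; };   // position in view space

    // Conservative test of a light sphere against one tile frustum.
    // The camera looks down +z; the tile covers the NDC box [x0,x1] x [y0,y1].
    // S = tan(fovY/2) * aspect, T = tan(fovY/2).
    bool lightTouchesTile(const PointLight& l, float x0, float x1, float y0, float y1,
                          float S, float T, float zNear, float zFar)
    {
        const Vec3& c = l.posView;
        // Every side plane passes through the eye, so each is of the form x = k*z
        // or y = k*z; the sphere is culled if its centre lies farther than its
        // radius outside any of the four side planes or the near/far range.
        auto inside = [&](float value, float k) {
            return value / std::sqrt(1.0f + k * k) >= -l.radius;
        };
        if (!inside(c.x - x0 * S * c.z, x0 * S)) return false;  // left
        if (!inside(x1 * S * c.z - c.x, x1 * S)) return false;  // right
        if (!inside(c.y - y0 * T * c.z, y0 * T)) return false;  // bottom
        if (!inside(y1 * T * c.z - c.y, y1 * T)) return false;  // top
        return c.z + l.radius >= zNear && c.z - l.radius <= zFar;
    }

    // Build a per-tile light index list for a tilesX x tilesY frustum grid.
    std::vector<std::vector<int>> cullLights(const std::vector<PointLight>& lights,
                                             int tilesX, int tilesY,
                                             float fovY, float aspect,
                                             float zNear, float zFar)
    {
        const float T = std::tan(fovY * 0.5f), S = T * aspect;
        std::vector<std::vector<int>> tileLists(tilesX * tilesY);
        for (int ty = 0; ty < tilesY; ++ty)
            for (int tx = 0; tx < tilesX; ++tx) {
                float x0 = -1.0f + 2.0f * tx / tilesX, x1 = x0 + 2.0f / tilesX;
                float y0 = -1.0f + 2.0f * ty / tilesY, y1 = y0 + 2.0f / tilesY;
                for (int i = 0; i < (int)lights.size(); ++i)
                    if (lightTouchesTile(lights[i], x0, x1, y0, y1, S, T, zNear, zFar))
                        tileLists[ty * tilesX + tx].push_back(i);
            }
        return tileLists;  // the shading pass would read its own tile's list
    }

    int main() {
        std::vector<PointLight> lights = { {{ 2.0f, 0.0f, 10.0f}, 3.0f},
                                           {{-6.0f, 1.0f, 25.0f}, 2.0f} };
        // 2x2 grid = 4 tiles, the configuration the thesis found fastest.
        auto lists = cullLights(lights, 2, 2, 1.0f, 16.0f / 9.0f, 0.1f, 100.0f);
        for (size_t t = 0; t < lists.size(); ++t)
            std::printf("tile %zu: %zu lights\n", t, lists[t].size());
    }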
56.
Moment Based Painterly Rendering Using Connected Color Components. Obaid, Mohammad Hisham Rashid, January 2006.
Research and development of Non-Photorealistic Rendering algorithms has recently moved towards the use of computer vision algorithms to extract image features. The feature representation capabilities of image moments could be used effectively for the selection of brush-stroke characteristics for painterly-rendering applications. This technique is based on the estimation of local geometric features from the intensity distribution in small windowed images to obtain the brush size, color and direction. This thesis proposes an improvement of this method, by additionally extracting the connected components so that the adjacent regions of similar color are grouped for generating large and noticeable brush-stroke images. An iterative coarse-to-fine rendering algorithm is developed for painting regions of varying color frequencies. Improvements over the existing technique are discussed with several examples.
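As a hedged illustration of the general image-moment computation the abstract refers to (not the thesis's actual implementation; the names and the ellipse-based size convention are assumptions made here), the following C++ estimates a brush stroke's centre, orientation and extent from the intensity distribution of a small window.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct BrushStroke { float cx, cy, angle, length, width; };

    // Estimate brush parameters from the intensity distribution of a w x h window.
    // Second-order image moments give the centroid, the dominant orientation and
    // the extent of the local intensity "blob".
    BrushStroke strokeFromWindow(const std::vector<float>& I, int w, int h)
    {
        double m00 = 0, m10 = 0, m01 = 0;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                double v = I[y * w + x];
                m00 += v; m10 += x * v; m01 += y * v;
            }
        double cx = m10 / m00, cy = m01 / m00;
        double mu20 = 0, mu11 = 0, mu02 = 0;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                double v = I[y * w + x], dx = x - cx, dy = y - cy;
                mu20 += dx * dx * v; mu11 += dx * dy * v; mu02 += dy * dy * v;
            }
        mu20 /= m00; mu11 /= m00; mu02 /= m00;
        // Orientation and axis lengths of the equivalent ellipse.
        double angle  = 0.5 * std::atan2(2.0 * mu11, mu20 - mu02);
        double common = std::sqrt(4.0 * mu11 * mu11 + (mu20 - mu02) * (mu20 - mu02));
        double len = 2.0 * std::sqrt(std::max(0.0, 0.5 * (mu20 + mu02 + common)));
        double wid = 2.0 * std::sqrt(std::max(0.0, 0.5 * (mu20 + mu02 - common)));
        return { (float)cx, (float)cy, (float)angle, (float)len, (float)wid };
    }

    int main() {
        // A diagonal intensity ridge in an 8x8 window; the stroke should align with it.
        int w = 8, h = 8;
        std::vector<float> I(w * h, 0.0f);
        for (int i = 0; i < 8; ++i) I[i * w + i] = 1.0f;
        BrushStroke s = strokeFromWindow(I, w, h);
        std::printf("centre (%.1f, %.1f), angle %.2f rad, length %.2f, width %.2f\n",
                    s.cx, s.cy, s.angle, s.length, s.width);
    }

In this toy example the recovered angle is roughly 0.79 radians (45 degrees), matching the diagonal ridge, which is the kind of orientation cue a painterly renderer would use to steer the brush stroke.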
57.
Expressive communication of shape through lighting and line rendering (Communication expressive de la forme au travers de l'éclairement et du rendu au trait). Vergne, Romain, 10 December 2010.
Expressive rendering aims to design algorithms that let users create artistic images. It makes it possible not only to reproduce traditional styles, but above all to convey a specific message in a matching style. In this thesis, we propose new solutions for reintroducing shape cues that are often masked in realistic images. We first show how to extract relevant surface features from dynamic 3D objects, building on properties of the human visual system, so that the extracted information provides automatic, view-dependent levels of detail. In a second step, we use this surface information in real time to produce a variety of styles, ranging from minimalist black-and-white rendering and line drawing to realistic results.
58.
Non-photorealistic rendering with coherence for augmented reality. Chen, Jiajian, 16 July 2012.
A seamless blending of the real and virtual worlds is key to increased immersion and improved user experiences for augmented reality (AR). Photorealistic and non-photorealistic rendering (NPR) are two ways to achieve this goal. Non-photorealistic rendering creates an abstract and stylized version of both the real and virtual world, making them indistinguishable. This could be particularly useful in some applications (e.g., AR/VR aided machine repair, or for virtual medical surgery) or for certain AR games with artistic stylization.
Achieving temporal coherence is a key challenge for all NPR algorithms. Rendered results are temporally coherent when each frame transitions smoothly and seamlessly to the next, without visual flickering or artifacts that distract the eye from the perceived smoothness. NPR algorithms with coherence are of interest both in general computer graphics and in AR/VR. Rendering stylized AR without coherence processing makes the final results visually distracting. While various NPR algorithms with coherence support have been proposed in the general graphics community for video processing, many of these algorithms require thorough analysis of all frames of the input video and cannot be applied directly to real-time AR applications. We have investigated existing NPR algorithms with coherence in both general graphics and AR/VR. These algorithms fall into two categories: Model Space and Image Space. We present several NPR algorithms with coherence for AR: a watercolor-inspired NPR algorithm, a painterly rendering algorithm, and NPR algorithms in the model space that support several styling effects.
59.
Fast spectral multiplication for real-time rendering. Waddle, C. Allen, 02 May 2018.
In computer graphics, the complex phenomenon of color appearance, involving the interaction of light, matter and the human visual system, is modeled by the multiplication of RGB triplets assigned to lights and materials. This efficient heuristic produces plausible images because the triplets assigned to materials usually function as color specifications. To predict color, spectral rendering is required, but the O(n) cost of computing reflections with n-dimensional point-sampled spectra is prohibitive for real-time rendering.
Typical spectra are well approximated by m-dimensional linear models, where m << n, but computing reflections with this representation requires O(m^2) matrix-vector multiplication. A method by Drew and Finlayson [JOSA A 20, 7 (2003), 1181-1193] reduces this cost to O(m) by "sharpening" an n x m orthonormal basis with a linear transformation so that the new basis vectors are approximately disjoint. If successful, this transformation allows approximated reflections to be computed as the products of the coefficients of lights and materials. Finding the m x m change-of-basis matrix requires solving m eigenvector problems, each needing a choice of wavelengths in which to sharpen the corresponding basis vector. These choices, however, are themselves an optimization problem left unaddressed by the method's authors.
Instead, we pose a single problem, expressing the total approximation error incurred across all wavelengths as the sum of dm^2 squares for some number d, where, depending on the inherent dimensionality of the rendered reflectance spectra, m <= d << n, a number that is independent of the number of approximated reflections. This problem may be solved in real time, or nearly, using standard nonlinear optimization algorithms. Results using a variety of reflectance spectra and three standard illuminants yield errors at or close to the best lower bound attained by projection onto the leading m characteristic vectors of the approximated reflections. Measured as CIEDE2000 color differences, a heuristic proxy for image difference, these errors can be made small enough to be likely imperceptible using values of 4 <= m <= 9.
An examination of this problem reveals a hierarchy of simpler, more quickly solved subproblems whose solutions yield, in the typical case, increasingly inaccurate approximations. Analysis of this hierarchy explains why, in general, the lowest approximation error is not attained by simple spectral sharpening, the smallest of these subproblems, unless the spectral power distributions of all light sources in a scene are sufficiently close to constant functions. Using the methods described in this dissertation, spectra can be rendered in real time as the products of m-dimensional vectors of sharp basis coefficients at a cost that is, in a typical application, a negligible fraction above the cost of RGB rendering.
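As a minimal numerical sketch of the central idea, the C++ below shows why near-disjoint ("sharp") basis functions let a reflection be approximated by the componentwise product of light and material coefficients. It uses an idealized disjoint box basis and invented smooth spectra purely for illustration; the dissertation instead sharpens an orthonormal characteristic-vector basis by solving the optimization problem described above.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // n-sample point spectra over 400-700 nm; m disjoint "sharp" box basis functions.
    const int n = 60, m = 6;

    double wavelength(int i) { return 400.0 + 300.0 * i / (n - 1); }

    // Coefficients of a spectrum in the box basis: the mean over each band.
    std::vector<double> project(const std::vector<double>& s) {
        std::vector<double> c(m, 0.0);
        std::vector<int> count(m, 0);
        for (int i = 0; i < n; ++i) {
            int band = std::min(m - 1, i * m / n);
            c[band] += s[i]; ++count[band];
        }
        for (int k = 0; k < m; ++k) c[k] /= count[k];
        return c;
    }

    int main() {
        std::vector<double> light(n), refl(n), exact(n);
        for (int i = 0; i < n; ++i) {
            double lam = wavelength(i);
            light[i] = 1.0 + 0.5 * std::sin(lam / 40.0);   // smooth illuminant
            refl[i]  = 0.5 + 0.4 * std::cos(lam / 55.0);   // smooth reflectance
            exact[i] = light[i] * refl[i];                 // O(n) pointwise reference
        }
        std::vector<double> a = project(light), b = project(refl), cExact = project(exact);
        // With disjoint basis functions the coefficients of the reflected spectrum
        // are approximated by the componentwise product of coefficients: O(m).
        for (int k = 0; k < m; ++k)
            std::printf("band %d: fast %.4f  projected exact %.4f\n",
                        k, a[k] * b[k], cExact[k]);
    }

With these smooth spectra the componentwise products track the projected exact coefficients closely; per the analysis above, how well such an approximation holds depends on the basis and on the light sources, and simple spectral sharpening alone attains the lowest error only when the light sources' spectral power distributions are close to constant.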
60.
Impostor Rendering with Oculus Rift (Impostorrendering med Oculus Rift). Niemelä, Jimmy, January 2014.
This report studies impostor rendering for use with the Oculus Rift virtual reality head-mounted display. The technique replaces distant 3D models with flat 2D versions to speed up rendering in a 3D engine; if it is implemented correctly, the user should not notice that some models are flat, and the engine saves resources because it does not have to draw every model in full detail. The report documents the research needed for the assignment and the development of a prototype in C++ and DirectX 11, including the steps required to add Oculus Rift support to a custom 3D engine and to measure the impact of impostor rendering when drawing to the two screens of the head-mounted display. The goal was to find the maximum number of models the engine could draw while keeping the frame rate locked at 60 frames per second. Two testers at Nordicstation concluded that 40-50 meters from the camera was the optimal swap distance for impostors; any closer and the flatness was noticeable. The results showed a clear improvement in frame rate when rendering a graphically intensive scene: with impostors drawn at a distance of 40 meters, the goal was met with a maximum of 3000 trees carrying 1000 leaves each. Impostor rendering was deemed effective when drawing more than 500 trees at a time; with fewer trees, the technique was not needed to reach 60 frames per second.
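As a small illustration of the distance-based swap described above, the C++ sketch below chooses between the full mesh and a camera-facing impostor quad once an object lies beyond a swap distance. The names, the scene data and the 45 m threshold (picked from the 40-50 m range the testers preferred) are illustrative assumptions, not the prototype's DirectX 11 code.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    float distance(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Per-frame selection: beyond the swap distance a camera-facing quad with a
    // pre-rendered texture of the model is drawn instead of the full mesh.
    enum class DrawAs { FullModel, Impostor };

    DrawAs selectRepresentation(const Vec3& cameraPos, const Vec3& objectPos,
                                float swapDistance) {
        return distance(cameraPos, objectPos) > swapDistance ? DrawAs::Impostor
                                                             : DrawAs::FullModel;
    }

    // Yaw (in radians) that rotates an upright impostor quad to face the camera.
    float impostorYaw(const Vec3& cameraPos, const Vec3& objectPos) {
        return std::atan2(cameraPos.x - objectPos.x, cameraPos.z - objectPos.z);
    }

    int main() {
        Vec3 camera{0.0f, 1.8f, 0.0f};
        std::vector<Vec3> trees = {{10.0f, 0.0f, 5.0f},
                                   {30.0f, 0.0f, 40.0f},
                                   {80.0f, 0.0f, 60.0f}};
        const float swapDistance = 45.0f;  // inside the 40-50 m range the testers preferred
        for (size_t i = 0; i < trees.size(); ++i) {
            DrawAs mode = selectRepresentation(camera, trees[i], swapDistance);
            std::printf("tree %zu: %s (yaw %.2f rad)\n", i,
                        mode == DrawAs::Impostor ? "impostor" : "full model",
                        impostorYaw(camera, trees[i]));
        }
    }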