261.
A Comparative Study of Algorithms for Visualization of Volumetric Data in a Computer Game Environment. Fredell, Henrik, January 2002
This study examines two algorithms for rendering volumetric data, with the aim of determining whether they can be used in a computer game environment. The two algorithms, known as the shear-warp algorithm and the footprint algorithm, are compared against each other to find out which solves the task best. The algorithms are also tested for suitability in a computer game environment with respect to whether they can generate a sufficient number of frames per second. The results show that, although the shear-warp algorithm solves its task more efficiently than the footprint algorithm, neither of the two algorithms is suited to the domain in which it was examined.
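The shear-warp algorithm the abstract compares can be illustrated with a minimal sketch: the viewing transform is factored into a shear of the volume slices (so rays become axis-aligned) followed by a cheap 2D warp of the intermediate image. The toy Python sketch below shows only the shear-and-composite half and assumes scalar voxel values act as both color and opacity; all names are illustrative, and none of this is the thesis's implementation.

```python
import numpy as np

def shear_warp_composite(vol, shear=(0, 0)):
    """Composite a scalar volume front-to-back along the z axis.

    Each slice k is shifted by k*shear before compositing, which is the
    'shear' half of shear-warp; the final 2D 'warp' is omitted here.
    Voxel values in [0, 1] are treated as both color and opacity.
    """
    depth, h, w = vol.shape
    color = np.zeros((h, w))
    alpha = np.zeros((h, w))
    for k in range(depth):
        sl = np.roll(vol[k], (k * shear[0], k * shear[1]), axis=(0, 1))
        color += (1.0 - alpha) * sl * sl   # front-to-back: C += (1-A)*a*c
        alpha += (1.0 - alpha) * sl
    return color, alpha

vol = np.zeros((4, 8, 8))
vol[:, 4, 4] = 1.0                # an opaque column along z
c, a = shear_warp_composite(vol)
```

Because the shear keeps memory access slice-coherent, this loop is cache-friendly, which is the main reason shear-warp tends to outperform splatting-style approaches such as the footprint algorithm.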
262.
Complex Transformative Portal Interaction. Tillman, Markus, January 2015
Context. A portal in computer graphics is an opening that connects two spaces. Portals can be used for occlusion culling in indoor environments or for wormhole-like effects. This thesis addresses the latter and focuses on how objects interact with such portals. Objectives. The objectives are to provide a solution for how objects can interact with complex portals in real time, with a focus on visual (and physical) correctness, and to present a background on how simple and complex portals work. Methods. A hybrid of a geometry technique and an image technique is used to render portals. Intersection techniques and a technique related to constructive solid geometry are used to solve object-portal interactions. The research methodology is implementation, followed by a simple analysis of the results. Results. The results show that the implementation of the object-portal interaction scales poorly: in the worst case it has a complexity of O(n² * m²), where n and m are the number of triangles in the object and portal, respectively. Increasing the number of triangles in the object shape is more costly than increasing the number of triangles in the portal shape by the same amount. The results were not compared to previous work, as no results of other object-portal interaction methods have been published. The rendering of portals scales linearly with the number of triangles used to represent them. Conclusions. This thesis extends the state-of-the-art portal rendering system with a solution for object-portal interaction of complex shapes. It also provides a detailed background on the fundamentals of portals and their nature. The thesis is of interest to those who want object-portal interaction for both simple and complex portals in gameplay and special effects, without restrictions on portal placement and shape, except that a portal may not have holes in its shape in the direction an intersecting object is moving.
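The growth in cost with triangle counts comes from examining object/portal triangle pairs. As a hedged illustration (not the thesis's code), a brute-force broad phase that pairs every object triangle with every portal triangle by bounding-box overlap already visits O(n * m) pairs before any exact intersection or CSG clipping work is done:

```python
def aabb(tri):
    """Axis-aligned bounding box of a triangle given as 3 (x, y, z) tuples."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def overlaps(a, b):
    """True if two axis-aligned boxes overlap on all three axes."""
    (al, ah), (bl, bh) = a, b
    return all(al[i] <= bh[i] and bl[i] <= ah[i] for i in range(3))

def candidate_pairs(object_tris, portal_tris):
    """Brute-force broad phase: every object/portal triangle pair whose
    bounding boxes overlap. O(n*m) pairs are examined; exact triangle
    intersection and clipping would then run on the survivors."""
    boxes_o = [aabb(t) for t in object_tris]
    boxes_p = [aabb(t) for t in portal_tris]
    return [(i, j) for i, bo in enumerate(boxes_o)
                   for j, bp in enumerate(boxes_p) if overlaps(bo, bp)]

tri_a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tri_b = [(0.5, 0.5, 0), (1.5, 0.5, 0), (0.5, 1.5, 0)]  # overlaps tri_a's box
tri_c = [(5, 5, 5), (6, 5, 5), (5, 6, 5)]              # far away
pairs = candidate_pairs([tri_a], [tri_b, tri_c])
```

A spatial acceleration structure over either mesh would cut the number of candidate pairs, which is one way the worst-case behavior reported above could be mitigated.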
263.
Optimization of Sampling Structure Conversion Methods for Color Mosaic Displays. Zheng, Xiang, January 2014
Although many devices can capture images at high resolution, there is still a need to show these images on displays of low resolution. Existing methods of subpixel-based down-sampling are reviewed in this thesis and their limitations are described. A new approach to optimizing sampling structure conversion for color mosaic displays is developed. Full-color images are filtered by a set of optimal filters before down-sampling, resulting in better image quality according to the SCIELAB measure, a spatial extension of the CIELAB metric measuring perceptual color difference. The typical RGB stripe display pattern is tested to obtain the optimal filters using least-squares filter design. The new approach is also implemented on a widely used two-dimensional display pattern, the PenTile RGBG. Clear images are produced and color fringing artifacts are reduced. The quality of the down-sampled images is compared using SCIELAB and by visual inspection.
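The subpixel idea behind such down-sampling can be sketched: for an RGB-stripe display, each output pixel's three channels are sampled at slightly different horizontal positions after low-pass filtering. The sketch below is a simplified 1D illustration using a plain box filter as a stand-in for the optimized least-squares filters; it is not the method of the thesis.

```python
import numpy as np

def subpixel_downsample_1d(img, factor):
    """Down-sample a 1D RGB signal for an RGB-stripe display.

    img: (n, 3) array. Each output pixel's R, G, B are sampled at the
    subpixel centers -1/3, 0, +1/3 of an output pixel, after a box
    low-pass filter (a stand-in for optimal least-squares filters).
    """
    n = img.shape[0]
    kernel = np.ones(factor) / factor
    out_len = n // factor
    out = np.zeros((out_len, 3))
    for c, offset in enumerate((-1 / 3, 0.0, 1 / 3)):
        filtered = np.convolve(img[:, c], kernel, mode="same")
        centers = (np.arange(out_len) + 0.5 + offset) * factor
        idx = np.clip(np.round(centers).astype(int), 0, n - 1)
        out[:, c] = filtered[idx]
    return out

flat = np.full((12, 3), 0.5)          # a flat gray test signal
small = subpixel_downsample_1d(flat, 4)
```

Sampling each channel where its subpixel actually sits is what raises apparent luminance resolution; the filter design then trades that gain against the color fringing the abstract mentions.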
264.
NetLight: Cloud Baked Indirect Illumination. Zabriskie, Nathan Andrew, 1 November 2018
Indirect lighting drastically increases the realism of rendered scenes, but it has traditionally been very expensive to calculate. This has long precluded its use in real-time rendering applications such as video games, which have mere milliseconds to respond to user input and produce a final image. As hardware power continues to increase, however, some recently developed algorithms have started to bring real-time indirect lighting closer to reality. Of specific interest to this paper, cloud-based rendering systems add indirect lighting to real-time scenes by splitting the rendering pipeline between a server and one or more connected clients. However, thus far they have been limited to static scenes and/or require expensive precomputation steps, which limits their utility in game-like environments. In this paper we present a system capable of providing real-time indirect lighting to fully dynamic environments. This is accomplished by modifying the light-gathering step in previous systems to be more resilient to changes in scene geometry and by providing indirect light information in multiple forms, depending on the type of geometry being lit. We deploy it in several scenes to measure its performance, both in terms of speed and visual appeal, and show that it produces high-quality images with minimal impact on the client machine.
265.
WebGL2 Renderer in WebAssembly. Režňák, Pavel, January 2018
This thesis is focused on fast rendering of a 3D scene in a web browser using modern technologies, for instance WebGL and WebAssembly. In this thesis you will find out how to compile an application written in the C++ language into WebAssembly via the Emscripten compiler and how to insert this code into a web page. Furthermore, you will find out how to communicate between the C++ language and JavaScript, how to call functions, create instances, and share memory between them. During the design of a rendering core you will learn a few methods for improving rendering performance. Finally, the performance of these technologies is compared.
266.
Photorealistic Rendering Using "Photon Mapping" Method. Lysek, Tomáš, January 2015
This master's thesis focuses on the photon mapping rendering technique. A simple photon mapper was implemented as a baseline, and progressive photon mapping was then prepared for both CPU and GPU. After implementing progressive photon mapping on the GPU, further acceleration techniques were proposed. Finally, a genetic clustering algorithm for finding suitable clusters on the GPU was proposed.
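Progressive photon mapping refines an estimate over many photon passes by shrinking each hit point's gather radius while accumulating flux. Below is a minimal sketch of the per-pass update in the style of Hachisuka et al., where alpha controls the fraction of new photons kept; the thesis's GPU implementation and clustering are not reproduced here, and all names are illustrative.

```python
def ppm_update(radius, n_acc, flux, m_new, flux_new, alpha=0.7):
    """One progressive photon mapping pass (Hachisuka-style update):
    keep a fraction alpha of the M new photons, shrink the radius so
    photon density increases, and rescale flux for the smaller disc."""
    if m_new == 0:
        return radius, n_acc, flux
    n_next = n_acc + alpha * m_new
    ratio = n_next / (n_acc + m_new)          # always < 1 when m_new > 0
    radius_next = radius * ratio ** 0.5       # R' = R * sqrt(ratio)
    flux_next = (flux + flux_new) * ratio     # flux corrected for shrunk disc
    return radius_next, n_next, flux_next

r, n, phi = 1.0, 0.0, 0.0
for _ in range(100):                          # 100 photon passes
    r, n, phi = ppm_update(r, n, phi, m_new=10, flux_new=1.0)
```

Because the radius shrinks but never reaches zero, the estimate converges without the unbounded memory of storing every photon, which is what makes the method attractive on the GPU.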
267.
Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations. Sicat, Ronell Barrera, 25 November 2015
The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings.
Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution level in standard representations. In particular, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel and voxel footprints in input images and volumes. We show that the continuous pdfs encoded in the sparse pdf map representation enable accurate multi-resolution non-linear image operations on gigapixel images. Similarly, we show that sparse pdf volumes enable more consistent multi-resolution volume rendering compared to standard approaches, on both artificial and real-world large-scale volumes. The supplementary videos demonstrate our results.
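The non-commutation problem can be made concrete with a toy example: applying a non-linear color map to a pre-filtered (averaged) pixel gives a different answer than filtering the mapped full-resolution values, while a pdf of the pixel's footprint recovers the exact coarse result. The sketch below uses a tiny dense pdf for illustration only; the dissertation's contribution is the sparse encoding of such pdfs, which is not reproduced here.

```python
import numpy as np

def colormap(x):                        # a non-linear transfer function
    return x ** 2

fine = np.array([0.0, 1.0, 0.0, 1.0])  # a 4-pixel coarse-pixel footprint

naive = colormap(fine.mean())           # filter first, then map
exact = colormap(fine).mean()           # map at full res, then filter

# A (tiny, dense) pdf of the footprint yields the exact answer at
# coarse resolution without touching the full-resolution data:
values, counts = np.unique(fine, return_counts=True)
pdf = counts / counts.sum()
from_pdf = float((colormap(values) * pdf).sum())
```

Here `naive` is 0.25 while the correct coarse value is 0.5; the pdf-based evaluation matches the exact result because the expectation of a non-linear function is taken over the footprint's distribution rather than over its mean.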
In the standard approach, users rely heavily on panning and zooming interactions to navigate the data within the limits of their display devices. However, panning across the whole spatial domain and zooming across all resolution levels of large-scale images in search of interesting regions is not practical. Assisted exploration techniques allow users to quickly narrow down millions to billions of possible regions to a more manageable number for further inspection. However, existing approaches are not fully user-driven because they typically prescribe in advance what counts as interesting. To address this, we introduce the patch sets representation for large-scale images. Patches inside a patch set are grouped and encoded according to similarity via a permutohedral lattice (p-lattice) in a user-defined feature space. Fast set operations on p-lattices facilitate patch set queries that enable users to describe what is interesting. In addition, we introduce an exploration framework, GigaPatchExplorer, for patch set-based image exploration. We show that patch sets in our framework are useful for a variety of user-driven exploration tasks on gigapixel images and whole collections thereof.
268.
Hardware accelerated ray tracing of particle systems. Lindau, Ludvig, January 2020
Background. Particle systems are a staple feature of most modern renderers. There are several technical challenges when it comes to rendering transparent particles: particle sorting along the view direction is required for proper blending, and casting shadows from particles requires non-standard shadow algorithms. A recent technology that could be used to address these challenges is hardware-accelerated ray tracing. However, there is a lack of performance data gathered from this type of hardware. Objectives. The objective of this thesis is to measure the performance of a prototype that uses hardware-accelerated ray tracing to render particles that cast shadows. Methods. A prototype is created and measurements of the ray tracing time are made. The scene used for the benchmark is a densely packed volume of highly transparent particles, resulting in a scene that looks similar to smoke. Particles are sorted along a ray by repeatedly tracing rays against the scene and incrementing the ray origin past the previous intersection point until it has passed all the objects that lie along the ray. Results. Only a small number of particles can be rendered if real-time rendering speeds are desired. High-quality shadows can be produced in a way that is very simple compared to texture-based methods. Conclusions. Future hardware speed-ups can improve rendering speeds, but more sophisticated sorting methods are needed to render larger numbers of particles.
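The sorting scheme described in the methods, advancing the ray origin past each intersection and tracing again, amounts to front-to-back alpha compositing of hits discovered in depth order. A hedged 1D sketch follows, in which plain hit records stand in for real ray-primitive intersections; EPS and all names are illustrative, not the prototype's code.

```python
EPS = 1e-4

def closest_hit(origin_t, hits):
    """Stand-in for a ray-trace query: nearest hit strictly past origin_t."""
    ahead = [h for h in hits if h[0] > origin_t]
    return min(ahead, default=None)

def composite_along_ray(hits, background=1.0):
    """Front-to-back 'trace, blend, advance' loop: blend the nearest hit,
    move the ray origin just past it, and trace again until none remain."""
    color, trans, t = 0.0, 1.0, 0.0
    while (hit := closest_hit(t, hits)) is not None:
        depth, c, a = hit
        color += trans * a * c       # accumulate premultiplied contribution
        trans *= (1.0 - a)           # remaining transmittance
        t = depth + EPS              # advance origin past this hit
    return color + trans * background

# unsorted, semi-transparent 'particles': (depth, color, alpha)
particles = [(3.0, 0.2, 0.5), (1.0, 1.0, 0.5)]
out = composite_along_ray(particles)
```

Each advance costs a full traversal of the acceleration structure, which is why this approach becomes expensive for densely packed volumes: a ray through k overlapping particles triggers k + 1 traces.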
269.
Volumetric Terrain Generation on the GPU: A modern GPGPU approach to Marching Cubes. Pethrus Engström, Ludwig, January 2015
Volumetric visualization is something that has become more interesting in recent years. It was long not feasible in an interactive environment due to its complexity in 3D space. However, today's technology and access to the power of the graphics processing unit (GPU) have made it feasible to render volumetric data interactively. This thesis explores the possibilities of creating and rendering large volumetric terrain using an implementation of Marching Cubes on the GPU. With the advent of general-purpose computing on the GPU (GPGPU), it has become far easier to implement traditional CPU tasks on the GPU. By utilizing newly available functions in DirectX, it is possible to create a simpler implementation on the GPU using global buffers. Three implementations are created inside the Unity game engine using compute shaders. The implementations are then compared based on creation time, render times, and memory consumption. A deeper analysis of the time distribution is then presented, which suggests that Unity introduces some overhead, since copying buffers from GPU to CPU is time-consuming. It did, however, improve render times due to its culling and optimization techniques. The system could be used in applications such as games or medical visualization. Finally, some future improvements for culling and level-of-detail (LOD) techniques are discussed.
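The first step of Marching Cubes, shared by CPU and compute-shader variants alike, classifies each cell's eight corners against the iso-value into an 8-bit case index that then selects a triangulation from a lookup table. A minimal sketch of that classification follows; the sign convention, corner ordering, and names are assumptions, and the full edge/triangle tables are omitted.

```python
def cell_case_index(corner_values, iso=0.0):
    """Marching Cubes step 1: classify a cell's 8 corner densities against
    the iso-value into an 8-bit case index. Here bit i is set when corner i
    lies below the iso-value; indices 0 and 255 mean 'no surface crosses
    this cell', so such cells can be culled before triangulation."""
    index = 0
    for bit, v in enumerate(corner_values):
        if v < iso:
            index |= 1 << bit
    return index

# a cell the iso-surface passes through: one corner inside, seven outside
corners = [-1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
case = cell_case_index(corners)
```

On the GPU, each thread of a compute shader typically evaluates one cell this way and appends the resulting triangles to a global (append) buffer, which is the pattern the DirectX functions mentioned above enable.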
270.
Dynamic allocation of servers for large scale rendering application. Andersson, Samuel, January 2021
Cloud computing has been widely used for some time now, and its area of use grows larger year by year. It is very convenient for companies to use cloud computing when creating certain products, but it can come at a great cost. This thesis evaluates whether the expenses for a product can be optimized regardless of the platform used, and whether it is possible to anticipate how many resources a product will need and allocate those machines dynamically. Specifically, it evaluates predicting the need for rendering machines from the response times of user requests, and dynamically allocating rendering machines to a product based on this need. The solution is based on machine learning: different types of regression models try to predict future response times and evaluate whether or not they are acceptable. Both a simulation and a replica of the real architecture were implemented, the latter using AWS cloud services. The regression model that turned out best was the simplest possible: a linear regression model with response time as the independent variable and queue size per rendering machine as the dependent variable. The model performed very well in the region of realistic response times, but not necessarily at very high or very low response times. That is not considered a problem, since response times in those regions are not of concern for the purpose of the regression model. Using the regression model appears to work better than a purely reactive scaling method, although the effects are not entirely clear, since no user data was available. For the effects to be evaluated fairly, user patterns in terms of daily usage of the product are needed.

Because the requests in the simulation are based on pure randomness, there is no correlation between what happened 10 minutes back in the simulation and what will happen 10 minutes into the future. The effect is that it is very hard to estimate how the dependent variable will change over time; and if that cannot be estimated properly, the results with the regression model included cannot be tested in a realistic scenario either.