511 |
Deferred rendering using Compute shaders / Deferred rendering med Compute shaders
Golba, Benjamin, January 2010
Game developers today put a great deal of effort into their games. Consumers are hard to please and demand games that provide both fun and visual quality. Developers therefore aim to make the most of the hardware resources available to them to achieve the best possible quality. It is easy to use too many performance-demanding techniques and make a game unplayable; the hard part is making the game look good without sacrificing performance. This can be done by applying techniques in a smart way, so that the graphics are as smooth and efficient as possible without compromising visual quality. One such technique is deferred rendering. The latest version of Microsoft's graphics platform, DirectX 11, comes with several new features. One of these is the Compute shader, a feature that makes it easier to execute general-purpose computation on the graphics card. Developers do not need DirectX 11 cards to use this feature, however: Microsoft has made it available on graphics cards made for DirectX 10 as well, although there are a few differences between the two versions. The focus of this report is to investigate the possible performance differences between these versions when using deferred rendering. To investigate this, an application was made that supports both shader model 4 and shader model 5 of the compute shader.
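As a rough illustration of the technique the thesis builds on: deferred rendering splits shading into a geometry pass that fills a G-buffer with per-pixel attributes and a lighting pass that shades each stored pixel exactly once. The pure-Python sketch below only shows the principle on the CPU; the thesis implements these passes on the GPU with compute shaders, and the scene data and function names here are invented for illustration.

```python
# Illustrative sketch of the two passes in deferred rendering (CPU-side,
# pure Python; the actual work happens on the GPU in the thesis).

def geometry_pass(fragments):
    """Pass 1: store per-pixel attributes in a G-buffer instead of shading."""
    gbuffer = {}
    for (x, y), albedo, normal in fragments:
        gbuffer[(x, y)] = (albedo, normal)   # depth test omitted for brevity
    return gbuffer

def lighting_pass(gbuffer, light_dir):
    """Pass 2: shade each stored pixel once, regardless of scene overdraw."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    image = {}
    for pos, (albedo, normal) in gbuffer.items():
        intensity = max(0.0, dot(normal, light_dir))  # Lambertian term
        image[pos] = tuple(c * intensity for c in albedo)
    return image

# Two fragments: one facing the light, one facing away.
frags = [((0, 0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
         ((1, 0), (0.0, 1.0, 0.0), (0.0, 0.0, -1.0))]
img = lighting_pass(geometry_pass(frags), (0.0, 0.0, 1.0))
```

The key property is that the lighting cost depends on the number of G-buffer pixels, not on how many triangles overlapped each pixel.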
|
512 |
Visualization using 3D Monitor / Visualisering vid använding av 3D Monitor
Hagdahl, Stefan, January 2008
Over the years, many companies have worked on enhancing the visual effect of monitors and televisions with 3D glasses and similar technology. A new form of 3D viewing is now available; Spatial View's is the one I know most about. Their technology includes a barrier panel that presents separate images to the right and left eye simultaneously, giving the person looking at the monitor a 3D viewing experience. Spatial View has developed an API that can easily be included in games and rendering applications to enable this 3D visualization, and this thesis is about its computer performance cost. The API takes 5 images of the current scene the camera is looking at in the game or rendering application and interlaces them into 1 image to be displayed on screen. Combining this with the monitor technique gives the visual effect. Producing the 5 different camera angles can be a strain on performance, since the rendering API, in this case Direct3D 9.0c, has to render everything 5 times each frame. This can slow down the frame rate of the game, which is very important for the game to run smoothly. The main focus of this thesis is to understand the correlation between the number of camera angles and rendering time for Direct3D 9.0c: is it linear or exponential? With access to Spatial View's Direct3D 9.0c API, I was able to construct a test application that could answer the hypothesis. Six tests with different numbers of camera angles were used to investigate the impact on rendering time, using one, two and five camera angles, with large cubes (big enough to almost cover the screen) and small cubes (almost too small to see). After seeing the rendering times and understanding Spatial View's API, a theory about reducing the rendering time arose. This theory, which involves using Direct3D 10.0 with geometry instancing, is explained and discussed throughout the thesis.
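A hypothetical sketch of the interlacing step described above: several views of the same scene are combined into one output image by cycling through the views column by column. The real Spatial View API interlaces at the subpixel level to match the barrier panel, so this simplification only shows the principle, and all names here are invented.

```python
# Sketch: combine n rendered views into one interlaced image by
# taking each output column from a different view in round-robin order.

def interlace(views):
    """views: list of images, each a list of rows of pixel values."""
    n = len(views)
    height, width = len(views[0]), len(views[0][0])
    return [[views[x % n][y][x] for x in range(width)]
            for y in range(height)]

# Two 1x4 "images": all zeros and all ones.
a = [[0, 0, 0, 0]]
b = [[1, 1, 1, 1]]
print(interlace([a, b]))  # columns alternate between the two views
```

Note that every view must be fully rendered before interlacing, which is why rendering cost grows with the number of camera angles.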
|
513 |
User Study of Quantized MIP Level Data In Normal Mapping Techniques
Clementson, Martin; Augustsson, John, January 2017
The standard MIP mapping technique halves the resolution of textures for each level of the MIP chain. In this thesis, the bits per pixel (bpp) are reduced as well. Normal maps are generally used with MIP maps, and today's industry standard for these is usually 24 bpp. The reduction is simulated, as there is currently no support for the lower bpp in GPU hardware. Objectives: To render images of normal mapped objects with decreasing bpp for each level in a MIP chain, and to evaluate these against the standard MIP mapping technique using a subjective user study and an objective image comparison method. Methods: Custom software is implemented to render the images, with quantized normal maps manually placed in a MIP chain. For the subjective experiment a 2AFC test is used, and the objective part consists of a PDIFF test on the images. Results: The results indicate that as the MIP level is increased and the bpp is lowered, users can increasingly see a difference. Conclusions: The results show that participants can see a difference as the bpp is reduced, which indicates that normal mapping is not suitable for this method; however, further study is required before this technique can be dismissed as an applicable method.
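A minimal sketch of how a lower bpp can be simulated, as described above: each 8-bit normal-map channel is snapped to a coarser grid with fewer bits and expanded back, emulating a format the hardware does not natively support. The exact quantization scheme used in the thesis is not specified here, so this rounding-based version is an assumption.

```python
# Simulate an n-bit-per-channel normal map on 8-bit hardware:
# quantize each channel down and re-expand it to the 0-255 range.

def quantize_channel(value, bits):
    """Quantize an 8-bit channel value (0-255) to `bits` bits and back."""
    levels = (1 << bits) - 1
    q = round(value / 255 * levels)   # snap to the reduced grid
    return round(q / levels * 255)    # expand back to the 8-bit range

# A flat normal (0, 0, 1) encoded as RGB (128, 128, 255):
print([quantize_channel(c, 4) for c in (128, 128, 255)])
```

Applying this per MIP level, with fewer bits at each level, reproduces the decreasing-bpp chain that the user study evaluated.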
|
514 |
Graphical User Interfaces for Volume Rendering Applications in Medical Imaging
Lindfors, Lisa; Lindmark, Hanna, January 2002
Volume rendering applications are used in medical imaging to facilitate the analysis of three-dimensional image data. This study focuses on how to improve the usability of the graphical user interfaces of these systems by gathering user requirements. This is achieved through evaluations of existing systems, together with interviews and observations at clinics in Sweden that use volume rendering to some extent. According to the users participating in this study, the usability of today's applications is not sufficient, for a wide range of reasons. One reason is that the graphical user interfaces are not intuitive. Another is that users do not trust the technique to produce results sufficient for use in the diagnostic process. This issue of user confidence is mainly due to the problem of generating and controlling the transfer functions used in volume rendering. Based on the results of the evaluation, a graphical user interface including the most important and most frequently used functions is designed, together with a suggestion for how the transfer function can be generated.
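For readers unfamiliar with the transfer functions mentioned above: a transfer function maps voxel intensity to optical properties such as opacity, and the difficulty of choosing good control points is precisely what undermined user confidence. The sketch below shows a piecewise-linear opacity ramp; the control points are made up for illustration and are not taken from the study.

```python
# A 1D transfer function: piecewise-linear interpolation over
# (intensity, opacity) control points, as used in volume rendering.

def transfer(intensity, points):
    """Interpolate opacity from sorted (intensity, opacity) control points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= intensity <= x1:
            t = (intensity - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return 0.0

# Illustrative ramp: suppress low intensities, show high ones fully.
ramp = [(0, 0.0), (100, 0.0), (200, 1.0), (255, 1.0)]
print(transfer(150, ramp))  # halfway up the ramp
```

Small changes to these control points can radically change which tissue is visible, which is why the study proposes generating them automatically.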
|
515 |
Color Coded Depth Information in Medical Volume Rendering
Edsborg, Karin, January 2003
Contrast-enhanced magnetic resonance angiography (MRA) is used to obtain images showing the vascular system. To detect stenosis, the narrowing of, for example, blood vessels, maximum intensity projection (MIP) is typically used. This technique often fails to demonstrate the stenosis if the projection angle is not suitably chosen. To improve identification of such regions, a color-coding algorithm can be helpful, with the color carefully chosen depending on the vessel diameter. In this thesis, a segmentation producing a binary 3D volume is performed, followed by a distance transform that approximates the Euclidean distance from the centerline of the vessel to the background. The distance is used to calculate the smallest diameter of the vessel, and that value is mapped to a color. This way, the color information regarding the diameter is the same from all projection angles. Color-coded MIPs, where the color represents the maximum distance, are also implemented. The MIP results in images with contradictory information depending on the choice of angle: looking from one angle you would see the actual stenosis, while looking from another you would see a color representing the abnormal diameter.
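The final mapping step described above can be sketched as follows: the distance-transform value yields a vessel diameter, which is classified into a color so that a narrowed section stands out from every projection angle. The thresholds and the reference diameter below are illustrative assumptions, not clinical values from the thesis.

```python
# Map a vessel diameter (derived from the distance transform) to a
# color class, independent of the projection angle.

def diameter_to_color(diameter_mm, normal_mm=4.0):
    """Classify a vessel diameter relative to an assumed normal diameter."""
    ratio = diameter_mm / normal_mm
    if ratio < 0.5:
        return "red"      # severe narrowing (possible stenosis)
    if ratio < 0.8:
        return "yellow"   # mild narrowing
    return "green"        # normal caliber

print([diameter_to_color(d) for d in (4.0, 3.0, 1.5)])
```

Because the classification depends only on the measured diameter, the same vessel section receives the same color regardless of viewing direction, which is the point of the technique.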
|
516 |
Design of 3D Accelerator for Mobile Platform
Ramachandruni, Radha Krishna, January 2006
This thesis implements a high-level model of the computationally intensive part of the 3D graphics pipeline. With the increasing popularity of handheld devices and developments in hardware technology, 3D graphics on mobile devices is fast becoming a reality. Graphics processing is inherently complex and computationally demanding; in order to achieve scene realism and the perception of motion, identifying and accelerating bottlenecks is crucial. This thesis covers the OpenGL graphics pipeline in general. Software implementing the computationally intensive part of the pipeline is built: in essence, a rasterization unit that takes triangles with 2D screen coordinates, texture coordinates and color. Triangles go through scan conversion, texturing and a set of other per-fragment operations before being displayed on screen.
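The scan-conversion step mentioned above can be sketched with edge functions: a pixel center is inside the triangle when it lies on the inner side of all three edges. This pure-Python version is only a conceptual model (texturing and per-fragment operations are omitted), not the accelerator design from the thesis.

```python
# Scan conversion via edge functions: find which pixels of a small
# framebuffer are covered by a counter-clockwise 2D triangle.

def edge(a, b, p):
    """Signed area term; >= 0 means p lies on the inner side of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)   # sample at the pixel center
            if (edge(v0, v1, p) >= 0 and
                    edge(v1, v2, p) >= 0 and
                    edge(v2, v0, p) >= 0):
                covered.append((x, y))
    return covered

print(rasterize((0, 0), (4, 0), (0, 4), 4, 4))
```

Hardware rasterizers use the same inside test, but evaluate it incrementally over tiles rather than per pixel as here.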
|
517 |
Rendering av realistiska fågelfjädrar i realtid (Rendering of realistic bird feathers in real time)
Edin, Henrik, January 2007
This report shows how real-time rendering of bird feathers can be implemented. A Lindenmayer system (L-system) is used to create the geometry from a small number of Bézier curves. Natural variation among feathers is modeled by introducing external forces that are accumulated randomly as the L-system generates the geometry. Bidirectional texture functions (BTFs) are used for coloring and for efficient modeling of the feather's fine structure. A BTF is a six-dimensional function that can represent real-world materials by containing, in addition to the two usual texture coordinates, coordinates for the viewing and lighting angles. To make BTF textures usable on graphics hardware, the representation is compacted to fit in a three-dimensional texture, and adaptations are made to support texture filtering and mip-mapping. To obtain the information stored in the BTF texture, the fine structure is modeled in an external animation tool, where a light source and a camera are animated over the defined sampling points. Ray tracing is then used to generate the material's appearance at these angles.
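The L-system mentioned above is, at its core, parallel string rewriting: every symbol is replaced by its rule in each generation, and the resulting string is later interpreted as geometry. The rules below are Lindenmayer's classic algae example, not the feather grammar used in the thesis.

```python
# Parallel rewriting of an L-system string over a number of generations.

def lsystem(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        # Rewrite every symbol simultaneously; symbols without a rule stay.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A.
print(lsystem("A", {"A": "AB", "B": "A"}, 4))
```

In the feather case, each generated symbol would carry parameters (e.g. control points of a Bézier curve), and the random external forces perturb those parameters during generation.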
|
518 |
Potential of GPU Based Hybrid Ray Tracing For Real-Time Games
Poulsen, Henrik, January 2009
The development of graphics hardware is blazing fast, with new and improved models that outclass the previous generation before one has had time to digest the potential of its computing power. The computer games industry has always been quick to adopt this new power and the features that emerge as the graphics card industry learns what customers need from its products. The current generation of games uses extraordinary visual effects to heighten immersion, all thanks to the constant progress of graphics hardware; such effects would have been an impossibility just a couple of years ago. Ray tracing has been used for years in the movie industry to create stunning special effects and entire movies made completely in 3D. This technique for producing realistic imagery has been used exclusively for non-interactive entertainment, since rendering an image this way is extremely expensive in terms of computation. Generating a single ray-traced image can require several hundred million calculations, which so far has not been proven to work in real-time situations such as games. However, due to the continuous increase in processing power of Graphical Processing Units (GPUs), the limit of what can and cannot be done in real time is constantly shifting further into the realm of possibility. This thesis therefore focuses on finding out just how close we are to bringing ray tracing into real-time games. Two tests were performed to find out the potential a current (2009) high-end computer system has for handling a raster/ray-tracing hybrid implementation. The first test examines how well a modern GPU handles rendering of a very simple scene with Phong shading and ray-traced shadows, without any optimizations. The second test uses the same scenario but with a basic optimization, to illustrate the impact that optimizations can have on ray tracers. These tests were then compared with Intel's results from ray tracing Enemy Territory: Quake Wars.
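The hybrid approach above rasterizes the scene and then casts shadow rays only for visibility to the light. A minimal sketch of that shadow test follows; the sphere scene and function names are invented for illustration, and a real implementation would run this per pixel on the GPU with an acceleration structure.

```python
# Shadow-ray test for a hybrid renderer: from a shaded point, cast a ray
# toward the light and check whether any sphere blocks it.

import math

def ray_hits_sphere(origin, direction, center, radius):
    """Solve |origin + t*direction - center|^2 = radius^2 for t > 0."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c          # direction assumed normalized (a == 1)
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2
    return t > 1e-6               # ignore self-intersection at the origin

def in_shadow(point, light, spheres):
    d = [l - p for l, p in zip(light, point)]
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    return any(ray_hits_sphere(point, d, c, r) for c, r in spheres)

blocker = [((0.0, 0.0, 5.0), 1.0)]
print(in_shadow((0.0, 0.0, 0.0), (0.0, 0.0, 10.0), blocker))  # True: sphere blocks
```

The unoptimized test in the thesis corresponds to checking every primitive per ray, as here; the optimized test corresponds to culling most of those checks.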
|
519 |
Performance aspects of layered displacement blending in real time applications
Petersson, Tommy; Lindeberg, Marcus, January 2013
The purpose of this thesis is to investigate performance aspects of layered displacement blending, a technique used to render realistic, transformable objects in real-time rendering systems using the GPU. Layered displacement blending works by blending layers of color maps and displacement maps together based on values stored in an influence map. In this thesis we construct a theoretical and practical model for layered displacement blending. The model is implemented in a test-bed application to enable measurement of performance aspects. The implementation is fed input with variations in triangle count, number of subdivisions, texture size and number of layers, and the execution times for these different combinations are recorded and analyzed. The recorded execution times reveal that the number of layers associated with an object has no impact on performance. Further analysis reveals that layered displacement blending is heavily dependent on the triangle count of the input mesh. The results show that, with respect to performance, layered displacement blending is a viable option for representing transformable objects in real-time applications. This thesis provides: a theoretical model for layered displacement blending, an implementation of the model using the GPU, and measurements of that implementation.
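The blending described above amounts to a per-vertex weighted sum: each layer's displacement is scaled by the weight stored for that layer in the influence map. The sketch below shows this on the CPU with made-up layer names and weights; the thesis performs the equivalent computation in a GPU shader.

```python
# Layered displacement blending: combine several displacement layers
# per vertex, weighted by values from an influence map.

def blend_displacement(layers, influence):
    """layers: per-layer lists of per-vertex displacements;
    influence: per-vertex tuples of one weight per layer."""
    n_verts = len(influence)
    return [sum(layers[i][v] * influence[v][i] for i in range(len(layers)))
            for v in range(n_verts)]

rock = [0.0, 0.2, 0.4]                        # base layer, per vertex
moss = [0.5, 0.5, 0.5]                        # second layer
infl = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]   # blend weights per vertex
print(blend_displacement([rock, moss], infl))
```

Since the sum is evaluated once per vertex regardless of layer count, this structure is consistent with the thesis' finding that the number of layers has little performance impact compared to triangle count.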
|
520 |
GPGPU separation of opaque and transparent mesh polygons
Tännström, Ulf Nilsson, January 2014
Context: By doing a depth-prepass in a tiled forward renderer, pixels can be prevented from being shaded more than once, and more aggressive culling of lights that might contribute to tiles can be performed. In order to produce artifact-free rendering, only meshes containing fully opaque polygons can be included in the depth-prepass. This limits the benefit of the depth-prepass for scenes containing large, mostly opaque meshes that have some portions of transparency in them. Objectives: The objective of this thesis was to classify the polygons of a mesh as either opaque or transparent using the GPU, and then to separate the polygons into two different vertex buffers depending on the classification. This allows all opaque polygons in a scene to be used in the depth-prepass, potentially increasing rendering performance. Methods: An implementation was made in OpenCL, which was then used to measure the time it took to separate the polygons in meshes of different complexity. The polygon separation times were compared to the time it took to load the meshes into the game. The effect the polygon separation had on rendering times was also investigated. Results: The results showed that polygon separation times were highly dependent on the number of polygons and the texture resolution. It took roughly 350 ms to separate a mesh with 100k polygons and a 2048x2048 texture, while the same mesh with a 1024x1024 texture took a quarter of that time. In the test scene used, the rendering times differed only slightly. Conclusions: Whether the polygon separation should be performed when loading the mesh or when exporting it depends on the game. For games with a lower geometric and textural detail level it may be feasible to separate the polygons each time the mesh is loaded, but for most games it would be recommended to perform it once when exporting the mesh. / Using a depth-prepass with a tiled forward renderer can reduce the time it takes to render a scene. However, meshes that contain transparent parts cannot be included in this pre-pass without introducing rendering artifacts. This limits the usefulness of the depth-prepass in scenarios with large meshes containing small amounts of transparency. This thesis attempts to solve this by using the graphics card to split meshes into two parts: one containing only non-transparent polygons, and one containing only transparent polygons.
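The classification criterion described above can be sketched as follows: a triangle counts as opaque only if every texel its UV coordinates cover has full alpha. For brevity this sketch samples only the three vertices and the centroid of each UV triangle, which is a simplification of rasterizing the full UV footprint as the GPU implementation would; the texture and triangle data are invented for illustration.

```python
# Split triangles into opaque and transparent sets by sampling the
# texture's alpha channel over each triangle's UV coordinates.

def classify(triangles, alpha_at):
    """Return (opaque, transparent) lists; tri is three (u, v) pairs."""
    opaque, transparent = [], []
    for tri in triangles:
        cu = sum(u for u, _ in tri) / 3.0
        cv = sum(v for _, v in tri) / 3.0
        samples = list(tri) + [(cu, cv)]   # vertices plus centroid
        if all(alpha_at(u, v) == 255 for u, v in samples):
            opaque.append(tri)
        else:
            transparent.append(tri)
    return opaque, transparent

# Alpha is 255 on the left half of the texture, 0 on the right half:
alpha = lambda u, v: 255 if u < 0.5 else 0
tris = [((0.0, 0.0), (0.4, 0.0), (0.0, 0.4)),   # fully in the opaque region
        ((0.6, 0.0), (0.9, 0.0), (0.6, 0.4))]   # in the transparent region
o, t = classify(tris, alpha)
print(len(o), len(t))  # 1 1
```

The two resulting lists correspond to the two vertex buffers in the thesis: the opaque one feeds the depth-prepass, the transparent one is rendered afterwards.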
|