1

Volumetric Terrain Generation on the GPU : A modern GPGPU approach to Marching Cubes / Volumetrisk terränggenerering på grafikkortet : En modern GPGPU implementation av Marching Cubes

Pethrus Engström, Ludwig January 2015
Volumetric visualization has become increasingly interesting in recent years. It was long infeasible in interactive environments due to its complexity in 3D space, but today's technology and access to the power of the graphics processing unit (GPU) have made it feasible to render volumetric data interactively. This thesis explores the possibilities of creating and rendering large volumetric terrain using an implementation of Marching Cubes on the GPU. With the advent of general-purpose computing on the GPU (GPGPU), it has become far easier to implement traditional CPU tasks on the GPU. By utilizing newly available functions in DirectX, it is possible to create a simpler GPU implementation using global buffers. Three implementations are created inside the Unity game engine using compute shaders. The implementations are then compared based on creation time, render times and memory consumption. A deeper analysis of the time distribution is presented, which suggests that Unity introduces some overhead since copying buffers from the GPU to the CPU is time consuming. Unity did, however, improve render times due to its culling and optimization techniques. The system could be used in applications such as games or medical visualization. Finally, some future improvements for culling and level-of-detail (LOD) techniques are discussed.
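To make the per-cell work concrete: the thesis implements Marching Cubes with DirectX compute shaders inside Unity, and the sketch below is only a CPU-side C++ illustration of the classification each GPU thread would perform for one cell. The density() function is a hypothetical sphere field standing in for the generated terrain, and the standard 256-entry triangle table and the global append buffer are assumed to exist on the GPU side.

```cpp
// A minimal CPU-side C++ sketch of the per-cell classification step that each
// compute-shader thread performs in a GPU Marching Cubes pass. The density()
// field below is a hypothetical sphere used only to keep the sketch
// self-contained; it stands in for the generated terrain density.
#include <cmath>
#include <cstdio>

// Hypothetical density field: positive inside a sphere of radius 8 centred at (8,8,8).
float density(int x, int y, int z)
{
    float dx = x - 8.0f, dy = y - 8.0f, dz = z - 8.0f;
    return 8.0f - std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Corner offsets in the conventional Marching Cubes order.
static const int kCorner[8][3] = {
    {0,0,0},{1,0,0},{1,1,0},{0,1,0},{0,0,1},{1,0,1},{1,1,1},{0,1,1}};

// One "thread": build the 8-bit case index for a single cell. On the GPU this
// index selects triangles from the standard 256-entry lookup table, and the
// resulting vertices are appended to a global buffer that is later rendered.
int cellCaseIndex(int x, int y, int z, float isoLevel)
{
    int caseIndex = 0;
    for (int i = 0; i < 8; ++i)
        if (density(x + kCorner[i][0], y + kCorner[i][1], z + kCorner[i][2]) < isoLevel)
            caseIndex |= 1 << i;
    return caseIndex;   // 0 or 255 means the cell is entirely inside or outside
}

int main()
{
    // Loop over a 16^3 block of cells; on the GPU this loop is the thread dispatch.
    int surfaceCells = 0;
    for (int z = 0; z < 16; ++z)
        for (int y = 0; y < 16; ++y)
            for (int x = 0; x < 16; ++x) {
                int c = cellCaseIndex(x, y, z, 0.0f);
                if (c != 0 && c != 255) ++surfaceCells;
            }
    std::printf("cells crossing the isosurface: %d\n", surfaceCells);
    return 0;
}
```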
2

Improving rendering times of Autodesk Maya Fluids using the GPU

Andersson, Jonas, Karlsson, David January 2008
Fluid simulation is today a hot topic in computer graphics. New, highly optimized algorithms have allowed complex systems to be simulated at high speed. This master thesis describes how the graphics processing unit, found in most computer workstations, can be used to optimize the rendering of volumetric fluids. The main aim of the work has been to develop software capable of rendering fluids in high quality and with high performance using OpenGL. The software was developed at Filmgate, a digital effects company in Göteborg, and much time was spent making the interface and the workflow easy to use for people familiar with Autodesk Maya. The project resulted in a standalone rendering application, together with a set of plugins to exchange data between Maya and our renderer. Most of the goals have been reached when it comes to rendering features. The performance bottleneck turned out to be reading data from disk, and this is an area suitable for future development of the software.
3

Realistické zobrazování sněhu / Realistic Visualization of Snow

Chukir, Patrik January 2021
This diploma thesis deals with the visualization of snow formations called penitentes. The work also includes collecting the data needed to derive the optical properties of the penitente material, which consists of different phases between snow and ice. For the visualization, the Progressive Transient Photon Beams method is used, which this work implements with the help of SmallUPBP.
4

Efficient Realistic Cloud Rendering using the Volumetric Rendering Technique : Science, Digital Game Development

Bengtsson, Adam January 2022
With high-quality graphics in great demand in modern video games, realistic clouds are no exception. In many video games, the cloud rendering implementation is based on a collection of 2D cloud images rendered into the scene. Previously published work found that, while other techniques can be more appropriate depending on the project, volumetric rendering is the state of the art in cloud rendering. Its only weakness is the performance rate, as it is a very expensive technique. Two general problems regarding the performance rate are that either the high quality of the clouds is not applicable to real-time rendering, or the quality has been reduced to the point where the clouds lack accuracy or realism in shape. Three basic objectives were formulated so that the aim can be completed:
Aim: Create a cloud generator with the volumetric rendering technique.
Objective 1: Create a 3D engine in OpenGL that generates clouds with volumetric rendering in real time.
Objective 2: Create different scenes that increase the computational cost for the computer to render.
Objective 3: Arrange tests across different computers running the engine and document the results in terms of performance.
The project was created using the programming language C++ and the OpenGL library in Visual Studio. The code comes from a combination of previously made projects on rendering clouds in real time. To save time, two projects created by Federico Vaccaro and Sébastien Hillaire were used as references in order to quickly reach a solid foundation for experimenting with the performance rate of volumetric clouds. The resulting cloud implementation contains three of many cloud types and updates in real time. The clouds can be configured in real time, altering density, coverage, light absorption and more to generate the three different cloud types. When changing the settings of the box containing the clouds, as well as the coloring and the position of the clouds and the global light, the clouds update in real time. To conclude, rendering the clouds at the goal of above 60 FPS was somewhat successful when limiting the results to a high-end computer. The clouds looked realistic enough in the scene, and the efforts to improve the performance rate did not affect overall quality. The high-end computer was able to render the clouds, but the low-end computer struggled with the clouds on their own.
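The clouds described above are rendered volumetrically, which typically means marching rays through the cloud-containing box and attenuating light by the sampled density. Below is a minimal CPU-side C++ sketch of such an accumulation loop under a Beer-Lambert absorption model; the density function, step size and absorption coefficient are illustrative assumptions, not the project's OpenGL shader code.

```cpp
// Minimal C++ sketch of the core ray-marching loop behind volumetric cloud
// rendering: step along a ray through the cloud box, sample a density field,
// and accumulate transmittance with the Beer-Lambert law. The density function
// is a hypothetical placeholder; real implementations sample 3D noise textures.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical cloud density: a soft sphere of "cloud" centred at the origin.
float cloudDensity(const Vec3& p)
{
    float r = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return r < 1.0f ? (1.0f - r) : 0.0f;
}

// March a ray from 'origin' along 'dir' (assumed normalized) and return how
// much background light survives (1 = fully transparent, 0 = fully opaque).
float marchTransmittance(Vec3 origin, Vec3 dir,
                         float maxDistance, float stepSize, float absorption)
{
    float transmittance = 1.0f;
    for (float t = 0.0f; t < maxDistance; t += stepSize) {
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        float d = cloudDensity(p);
        if (d > 0.0f)
            transmittance *= std::exp(-d * absorption * stepSize);  // Beer-Lambert
        if (transmittance < 0.01f) break;  // early exit once the cloud is opaque
    }
    return transmittance;
}

int main()
{
    float tr = marchTransmittance({0.0f, 0.0f, -3.0f}, {0.0f, 0.0f, 1.0f},
                                  6.0f, 0.05f, 2.0f);
    std::printf("transmittance through the cloud: %.3f\n", tr);
    return 0;
}
```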
5

Interactive visualization of space weather data

Törnros, Martin January 2013
This work serves to present the background, approach, and selected results for the initial master thesis and prototyping phase of Open Space, a joint visualization software development project by the National Aeronautics and Space Administration (NASA), Linköping University (LiU) and the American Museum of Natural History (AMNH). The thesis report provides a theoretical introduction to heliophysics, modeling of space weather events, and volumetric rendering, and explains how these relate in the bigger scope of Open Space. A set of visualization tools currently used at NASA and AMNH are presented and discussed. These tools are used to visualize global heliosphere models, both for scientific studies and for public presentations, and mainly make use of geometric rendering techniques. The paper describes, in detail, a new approach to visualizing the science models with volumetric rendering to better represent the volumetric structure of the data. Custom processors have been developed for the open-source volumetric rendering engine Voreen to load and visualize science models provided by the Community Coordinated Modeling Center (CCMC) at NASA Goddard Space Flight Center (GSFC). Selected parts of the code are presented as C++ code examples. To best represent models that are defined in non-Cartesian space, a new approach to volumetric rendering is presented and discussed. Compared to the traditional approach of transforming such models to Cartesian space, this new approach performs no such model transformations, and thus minimizes the number of empty voxels and introduces fewer interpolation artifacts. Final results are presented as rendered images and are discussed from a scientific visualization perspective, taking into account the physics representation, potential rendering artifacts, and the rendering performance.
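The key idea in the abstract is to render volumes defined in non-Cartesian space without first resampling them to a Cartesian grid: during ray casting, each Cartesian sample position is converted to the model's native coordinates and used to index the original grid directly. The sketch below is a minimal C++ illustration of that lookup for a spherical grid; it is not code from the thesis's Voreen processors, and the grid layout and ranges are assumptions.

```cpp
// A sketch of sampling a heliosphere-style volume stored on a spherical
// (r, theta, phi) grid directly during ray casting, instead of resampling the
// model to a Cartesian grid first. The grid layout and ranges are assumptions
// made for this illustration, not the format of the CCMC models.
#include <cmath>
#include <cstdio>
#include <vector>

const float kPi = 3.14159265f;

struct SphericalGrid {
    int nr, ntheta, nphi;          // grid resolution
    float rMin, rMax;              // radial extent of the model
    std::vector<float> values;     // nr * ntheta * nphi samples, r-major

    float at(int ir, int it, int ip) const {
        return values[(ir * ntheta + it) * nphi + ip];
    }
};

// Nearest-neighbour lookup of a Cartesian sample position (x, y, z) in the
// spherical grid; a real ray caster would interpolate, but the coordinate
// conversion is the essential step that avoids any resampling of the model.
float sampleSpherical(const SphericalGrid& g, float x, float y, float z)
{
    float r = std::sqrt(x * x + y * y + z * z);
    if (r < g.rMin || r > g.rMax) return 0.0f;            // outside the model

    float theta = std::acos(z / (r + 1e-6f));             // [0, pi]
    float phi   = std::atan2(y, x) + kPi;                 // [0, 2*pi]

    int ir = int((r - g.rMin) / (g.rMax - g.rMin) * (g.nr - 1) + 0.5f);
    int it = int(theta / kPi * (g.ntheta - 1) + 0.5f);
    int ip = int(phi / (2.0f * kPi) * (g.nphi - 1) + 0.5f);
    return g.at(ir, it, ip);
}

int main()
{
    SphericalGrid g{32, 16, 32, 1.0f, 10.0f,
                    std::vector<float>(32 * 16 * 32, 1.0f)};
    std::printf("sample at (5, 0, 0): %.2f\n", sampleSpherical(g, 5.0f, 0.0f, 0.0f));
    return 0;
}
```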
6

FOLAR: A FOggy-LAser Rendering Method for Interaction in Virtual Reality / FOLAR: En FOggy-LAser Rendering Metod för Interaktion i Virtual Reality

Zhang, Tianli January 2020
Current commercial Virtual Reality (VR) headsets give viewers immersion in virtual space with stereoscopic graphics and positional tracking. Developers can create VR applications in a working pipeline similar to creating 3D games using game engines. However, the characteristics of VR headsets disadvantage the rendering technique of particle systems with billboard sprites. In our study, we propose a rendering technique called the FOggy-LAser Rendering method (FOLAR), which renders realistic laser beams in fog on billboard sprites. With this method, we can compensate for the disadvantages of using particle systems and still render the graphics at interactive performance for VR. We studied the characteristics of this method through performance benchmarks and by comparing the rendered result to a baseline ray-casting method. A user study and image similarity metrics are involved in the comparison study. As a result, we observed satisfying performance and a rendering result similar to ray-casting. However, the user study still shows a significant difference in the rendered result between the methods. These results imply that FOLAR is an acceptable method in terms of performance and correctness of the rendered result, but still has inevitable trade-offs in the graphics.
7

Volumetric Rendering of the Inner Coma of a Theoretically Modelled Comet for Comet Interceptor Mission

Vinod, Amal January 2023
The Comet Interceptor is a joint mission by the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) that seeks to perform a flyby of a Long Period Comet using a multi-element spacecraft. The Comet Interceptor comprises three spacecraft: A, B1 and B2. All three will observe and map the comet from three different points around its coma, making this mission the first multipoint mission dedicated to studying a Long Period Comet. Of the eleven instruments aboard the Comet Interceptor, the work done for this thesis aims to help the team designing the Optical Periscope Imager for Comets (OPIC). The OPIC team uses the imaging simulation software Space Imaging Simulator for Proximity Operations (SISPO) to render images of theoretically modelled dust and gas densities of the coma of a comet, in order to gain prior knowledge of the images that OPIC will take during its flyby. Using the theoretical model of the coma, a 3D model was created as part of the thesis, to be implemented later in SISPO. The structure of the coma was built with the sparse volumetric data manipulation tool OpenVDB, coded and run in Python. The generated data was imported into Blender to visualise the volumetric data with Blender's rendering engine, Cycles. To visualise the 3D model with as much physical realism as Blender allows, a study of the scattering properties of the dust and gas model was done. In addition, motion blur was implemented in Blender to simulate the high relative velocity between the instrument and the comet. Multiple approaches of varying complexity and time consumption were considered for importing and visualising the volumetric data. The final rendered images were brightness-matched with reference to images from previous cometary missions. Finally, a qualitative analysis was done by visually comparing the rendered images to the images from previous missions. With the help of this qualitative analysis, several features and characteristics were identified which were analogous to the real-life images, thus establishing the correctness of the renders produced.
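As an illustration of the OpenVDB step described above, the sketch below builds a sparse density grid and writes it to a .vdb file that Blender can load as a volume. It uses OpenVDB's C++ API (the thesis drove OpenVDB from Python), and the simple 1/r^2 falloff is an assumed placeholder for the theoretically modelled dust and gas densities.

```cpp
// Minimal sketch of building a sparse coma density volume with OpenVDB (C++ API;
// the thesis used the Python bindings) and writing it to a .vdb file that a
// renderer such as Blender/Cycles can load. The 1/r^2 falloff is an illustrative
// assumption standing in for the theoretically modelled dust and gas densities.
#include <openvdb/openvdb.h>
#include <cmath>

int main()
{
    openvdb::initialize();

    // Sparse float grid; voxels that are never touched cost no memory.
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(/*background=*/0.0f);
    grid->setName("density");
    grid->setTransform(openvdb::math::Transform::createLinearTransform(/*voxelSize=*/1.0));

    openvdb::FloatGrid::Accessor acc = grid->getAccessor();
    const int radius = 64;                               // coma extent, in voxels
    for (int z = -radius; z <= radius; ++z)
        for (int y = -radius; y <= radius; ++y)
            for (int x = -radius; x <= radius; ++x) {
                float r = std::sqrt(float(x * x + y * y + z * z));
                if (r < 1.0f || r > radius) continue;    // skip nucleus and far field
                acc.setValue(openvdb::Coord(x, y, z), 1.0f / (r * r));
            }

    // Write the grid; Blender imports the .vdb and renders it as a volume.
    openvdb::io::File file("coma.vdb");
    openvdb::GridPtrVec grids;
    grids.push_back(grid);
    file.write(grids);
    file.close();
    return 0;
}
```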
8

Modélisation par bruit procédural et rendu de détails volumiques de surfaces dans les scènes virtuelles / Procedural noise modeling and rendering of volumetric details over surfaces in virtual scenes

Pavie, Nicolas 03 November 2016
The growing power of graphics processing units (GPU) in mainstream computers creates a demand for higher quality and complexity of virtual scenes. Managing this complexity is particularly difficult for natural objects such as trees, grass fields or animals, whose surfaces are decorated by a large number of small, very similar objects. The diversity of these surface details, mandatory for realistic rendering of natural objects, translates into longer authoring times, higher memory requirements and a more complex evaluation. This thesis reviews the related work on data representations and on-the-fly generation methods used for the creation and real-time rendering of details over large surfaces. The study focuses on the particular case of grass fields and fur: the fuzzy visual appearance of those surfaces is obtained by distributing many quasi-similar blades or strands over the surface, creating a pattern closely related to a noise with structural features. We first present a procedural noise aimed at interactive spatial modeling of quasi-similar elements and their distribution. The use of elliptical Gaussian functions as the modeling primitive, together with a controlled non-uniform distribution of the created elements, allows various types of patterns to be modeled, from stochastic to near-regular, while including structural features. A by-example analysis process based on ellipse fitting allows a fast configuration of the noise for reproducing a given pattern. We then introduce an extension of this noise model for the procedural modeling of a volumetric shell of surface details such as strands or more complex volumetric objects. To keep the authoring of such volumetric patterns interactive, an image-order and an object-order rendering method are proposed, both optimized for evaluation of the noise on the GPU. Both methods allow an interactive and visually convincing visualization of the result.
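As a concrete reading of the model described above, one plausible formalization (an assumption for illustration, not necessarily the thesis's exact definition) sums elliptical Gaussian kernels placed at distributed positions p_i, with a per-element covariance Sigma_i that encodes each ellipse's axes and orientation:

```latex
% Sparse-convolution-style noise built from elliptical Gaussian primitives.
N(\mathbf{x}) = \sum_{i} a_i \,
    \exp\!\Big( -\tfrac{1}{2}\, (\mathbf{x}-\mathbf{p}_i)^{\top}
    \Sigma_i^{-1} (\mathbf{x}-\mathbf{p}_i) \Big),
\qquad
\Sigma_i = R(\theta_i)
    \begin{pmatrix} \sigma_{u,i}^2 & 0 \\ 0 & \sigma_{v,i}^2 \end{pmatrix}
    R(\theta_i)^{\top}
```

Controlling the distribution of the positions p_i, from stochastic to near-regular, is what lets a single formulation of this kind cover both random and structured patterns.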
9

A Physically Based Pipeline for Real-Time Simulation and Rendering of Realistic Fire and Smoke / En fysiskt baserad rörledning för realtidssimulering och rendering av realistisk eld och rök

He, Yiyang January 2018
With the rapidly growing computational power of modern computers, physically based rendering has found its way into real-world applications. Real-time simulation and rendering of fire and smoke has become a major research interest in the modern video game industry, and will continue to be an important research direction in computer graphics. Visually recreating realistic dynamic fire and smoke is a complicated problem, and solving it requires knowledge from various areas, ranging from computer graphics and image processing to computational physics and chemistry. Even though most of these areas are well studied separately, new challenges emerge when they are combined. This thesis focuses on three aspects of the problem, dynamics, real-time performance and realism, and proposes a solution in the form of a GPGPU pipeline, along with its implementation. Three main areas with application to the problem are discussed in detail: fluid simulation, volumetric radiance estimation and volumetric rendering, with the emphasis on the first two. The results are evaluated around the three aspects, with graphical demonstrations and performance measurements. Uniform grids are used with a finite difference (FD) discretization scheme to simplify the computation. FD schemes are easy to implement in parallel, especially with ComputeShader, which is well supported in the Unity engine. The whole implementation can easily be integrated into real-world applications in Unity or other game engines that support DirectX 11 or higher.
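To make the finite-difference part concrete: a standard building block of grid-based smoke solvers is a Jacobi iteration for the pressure Poisson equation, where each cell is updated from its neighbours. The C++ sketch below shows one such sweep on a uniform 2D grid; it is a CPU-side illustration of the per-cell update that maps onto one compute-shader thread per cell, not the thesis's ComputeShader implementation, and the grid size, divergence field and boundary handling are simplified assumptions.

```cpp
// CPU-side C++ sketch of a single Jacobi iteration for the pressure Poisson
// equation on a uniform grid with central finite differences; this is the kind
// of per-cell update that maps directly onto one compute-shader thread per cell.
// Grid size, boundary handling and the divergence field are simplified
// assumptions for illustration, not the thesis's actual solver.
#include <cstdio>
#include <utility>
#include <vector>

const int N = 64;                                   // grid resolution (N x N)
inline int idx(int x, int y) { return y * N + x; }  // row-major indexing

// One Jacobi sweep: p_new = (sum of 4 neighbours - h^2 * divergence) / 4.
void jacobiIteration(const std::vector<float>& p, std::vector<float>& pNew,
                     const std::vector<float>& divergence, float h)
{
    for (int y = 1; y < N - 1; ++y)
        for (int x = 1; x < N - 1; ++x) {
            float neighbours = p[idx(x - 1, y)] + p[idx(x + 1, y)]
                             + p[idx(x, y - 1)] + p[idx(x, y + 1)];
            pNew[idx(x, y)] = (neighbours - h * h * divergence[idx(x, y)]) * 0.25f;
        }
}

int main()
{
    std::vector<float> p(N * N, 0.0f), pNew(N * N, 0.0f), div(N * N, 0.0f);
    div[idx(N / 2, N / 2)] = 1.0f;                  // a single divergence source

    for (int iter = 0; iter < 100; ++iter) {        // on the GPU: 100 dispatches
        jacobiIteration(p, pNew, div, 1.0f);
        std::swap(p, pNew);
    }
    std::printf("pressure at centre after 100 iterations: %f\n", p[idx(N / 2, N / 2)]);
    return 0;
}
```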
