141 |
Forward plus rendering performance using the GPU vs CPU multi-threading : A comparative study of the culling process in Forward plus
Rahm, Marcus January 2017 (has links)
Context. Rendering techniques in games aim to shade the scene at the highest possible quality while remaining as efficient as possible. More advanced tools, such as the compute shader, have enabled further speed-ups of the shading process. One rendering technique that takes advantage of this is Forward plus rendering, which uses a compute shader to perform a culling pass over all the lights. However, not all computers support compute shaders. Objectives. This thesis investigates the performance of using the CPU to perform the light culling required by the Forward plus rendering technique, comparing it to the performance of a GPU implementation. It also explores whether the CPU can serve as an alternative solution for Forward plus light culling. Methods. The standard Forward plus is implemented using a compute shader, after which Forward plus is implemented with multi-threaded CPU light culling. Both versions are evaluated by sampling frames per second during tests with specific properties. Results. The results show a fairly significant performance difference between the CPU and GPU implementations of Forward plus: with 256 lights rendered, the GPU implementation achieves 126% more frames per second than the CPU implementation. However, the CPU implementation remains viable, staying above 30 frames per second with fewer than 2048 lights in the scene, and it outperforms basic Forward rendering. Conclusions. A multi-threaded CPU can be used for culling lights in Forward plus rendering, and it is a viable choice over basic Forward rendering: with 64 lights, the CPU implementation delivers 133% more frames per second than basic Forward rendering.
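The CPU-side light culling the thesis describes can be sketched as a tile-versus-light overlap test farmed out to a thread pool. The 2D view-space test, tile layout, and worker count below are simplified assumptions for illustration, not the thesis's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def cull_lights_for_tile(tile_min, tile_max, lights):
    """Return indices of lights whose bounding circle overlaps a screen tile.

    tile_min/tile_max: tile corners in 2D view space (x, y).
    lights: list of (center_x, center_y, radius); depth is ignored in this sketch.
    """
    visible = []
    for i, (cx, cy, r) in enumerate(lights):
        # Clamp the light center to the tile rectangle to find the closest point.
        nx = min(max(cx, tile_min[0]), tile_max[0])
        ny = min(max(cy, tile_min[1]), tile_max[1])
        if (cx - nx) ** 2 + (cy - ny) ** 2 <= r * r:
            visible.append(i)
    return visible

def cull_all_tiles(tiles, lights, workers=4):
    """Cull every tile in parallel, mirroring the multi-threaded CPU pass."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(
            lambda t: cull_lights_for_tile(t[0], t[1], lights), tiles))
```

Each tile ends up with its own short light list, which the forward shading pass then iterates over instead of looping over every light in the scene.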
|
142 |
Deferred Rendering : Jämförelse mellan traditionell deferred rendering och light pre-pass rendering / Deferred Rendering : A comparison between traditional deferred rendering and light pre-pass rendering
Bernhardsson, Johan January 1987 (has links)
As scene complexity and higher numbers of light sources become more common in games, a need has arisen for algorithms that can handle such scenes with good performance. An increasingly common algorithm for this is Deferred Shading. This report evaluates two methods for Deferred Shading: traditional Deferred Shading and light pre-pass rendering.
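The core trade-off between the two methods is G-buffer size: light pre-pass defers only depth and normals and then re-renders the geometry, while traditional deferred shading stores the full material description up front. A back-of-the-envelope comparison, where the render-target counts are illustrative assumptions rather than any particular engine's layout:

```python
def gbuffer_bytes(width, height, targets, bytes_per_target=4):
    """Memory for `targets` RGBA8 render targets at the given resolution."""
    return width * height * targets * bytes_per_target

# Hypothetical layouts at 1080p:
traditional = gbuffer_bytes(1920, 1080, targets=4)    # depth, normals, albedo, material
light_prepass = gbuffer_bytes(1920, 1080, targets=2)  # depth and normals only
```

Under these assumed layouts the light pre-pass G-buffer is half the size, which is exactly the kind of bandwidth saving such a comparison would measure against the cost of the extra geometry pass.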
|
143 |
Design and Implementation of an Application Programming Interface for Volume Rendering
Selldin, Håkan January 2002 (has links)
To efficiently examine volumetric data sets from CT or MRI scans, good volume rendering applications are needed. This thesis describes the design and implementation of an application programming interface (API) to be used when developing volume-rendering applications. A complete application programming interface has been designed. The interface is designed to make writing application programs containing volume rendering fast and easy, and it makes the resulting application programs hardware-independent. Volume rendering using 3D textures is implemented on Windows and Unix platforms, and rendering performance has been compared across different graphics hardware.
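A hardware-independent volume-rendering API of the kind described might expose a small facade like the following; the class names and the software maximum-intensity-projection fallback are illustrative assumptions, not the thesis's actual interface:

```python
from abc import ABC, abstractmethod

class Volume:
    """A scalar volume addressed as data[z][y][x]."""
    def __init__(self, data):
        self.data = data
        self.depth = len(data)
        self.height = len(data[0])
        self.width = len(data[0][0])

class VolumeRenderer(ABC):
    """Backend-agnostic renderer; a 3D-texture backend would subclass this."""
    @abstractmethod
    def render(self, volume):
        ...

class MIPRenderer(VolumeRenderer):
    """Pure-software fallback: maximum-intensity projection along z."""
    def render(self, volume):
        return [[max(volume.data[z][y][x] for z in range(volume.depth))
                 for x in range(volume.width)]
                for y in range(volume.height)]
```

Application code talks only to the abstract `VolumeRenderer`, so swapping in a hardware 3D-texture backend requires no changes on the application side.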
|
144 |
Técnica híbrida de visualização para exploração de dados volumétricos não estruturados / A hybrid visualization technique for exploring unstructured volumetric data
Patricia Shirley Herrera Cateriano 21 May 2003 (has links)
This work presents a new visualization technique that exploits the advantages of direct volume rendering and surface rendering in a hybrid environment. The method developed here makes use of a pre-visualization on the volume boundary to enable real-time interaction with unstructured volumetric meshes. Furthermore, this new visualization approach can be implemented on existing parallel architectures and sped up by conventional graphics hardware.
|
145 |
Efektivní simulace šíření světla v opticky aktivních médiích pro barevný 3D tisk / Efficient light transport simulation of participating media in color 3D printing.
Brečka, Bohuš January 2021 (has links)
A Monte Carlo light transport simulation is used in a scattering-aware color 3D printing pipeline (Elek et al. [2017], Sumin et al. [2019]) to drive an iterative optimization loop. Its purpose is to find a material arrangement that yields the closest match in surface appearance to a target. As the light transport prediction takes up about 90% of the running time, it poses a significant bottleneck to practical application of this technology. The dense volumetric textures also require a lot of memory. Explicitly simulating every light interaction is particularly challenging in the setting of 3D printouts due to the heterogeneity, high density, and high albedo of the media. In this thesis, we explore existing volumetric rendering techniques (Křivánek et al. [2014], Herholz et al. [2019]) and engineer a customized estimator for our setting, improving performance considerably. Additionally, we investigate various storage solutions for the volumetric data and successfully reduce the memory footprint. All the algorithms are available in the form of Mitsuba renderer plugins.
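To give a flavor of the volumetric estimators involved, the basic building block of such simulations is free-flight distance sampling, which doubles as a Monte Carlo transmittance estimator. The homogeneous medium below is a deliberate simplification of the heterogeneous, high-albedo print volumes the thesis targets:

```python
import math
import random

def sample_free_flight(sigma_t, rng):
    """Sample a collision distance in a homogeneous medium with extinction sigma_t."""
    return -math.log(1.0 - rng.random()) / sigma_t

def transmittance_mc(sigma_t, depth, n=100_000, seed=1):
    """Estimate transmittance through `depth` as the fraction of uncollided samples.

    Converges to the analytic value exp(-sigma_t * depth).
    """
    rng = random.Random(seed)
    survived = sum(1 for _ in range(n)
                   if sample_free_flight(sigma_t, rng) > depth)
    return survived / n
```

In heterogeneous media this closed-form sampling no longer applies, which is precisely why customized estimators such as the one engineered in the thesis become necessary.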
|
146 |
Remote Assistance for Repair Tasks Using Augmented Reality
Sun, Lu 15 September 2020 (has links)
In the past three decades, using Augmented Reality (AR) in repair tasks has received a growing amount of attention from researchers, because AR provides users with a more immersive experience than traditional methods, e.g., instructional booklets, audio, and video content. However, traditional methods are still mostly used today, because there are several key challenges to using AR in repair tasks. These challenges include device limitations, object pose tracking, human-computer interaction, and authoring. Fortunately, the research community is actively investigating these challenges.
The vision of this thesis is to move AR technology towards being widely used in this field. Under this vision, I propose an AR platform for repair tasks and address the challenges of device limitations and authoring. The platform contains a new authoring approach that tracks the real components on the expert's side to monitor his or her operations. The proposed approach gives experts a novel authoring tool to specify 6DoF movements of a component and apply geometrical and physical constraints in real time. To address the challenge of device limitations, I present a hybrid remote rendering framework for applications on mobile devices. In my remote rendering approach, I adopt a client-server model, where the server is responsible for rendering high-fidelity models, encoding the rendering results, and sending them to the client, while the client renders low-fidelity models and overlays the high-fidelity frames received from the server on its own rendering results. With this configuration, we are able to minimize bandwidth requirements and interaction latency, since only key models are rendered in high-fidelity mode.
I perform a quantitative analysis of the effectiveness of my proposed remote rendering method. Moreover, I conduct a user study on the subjective and objective effects of the remote rendering method on the user experience. The results show that key-model fidelity has a significant influence on objective task difficulty, while interaction latency plays an important role in subjective task difficulty. The results of the user study show how my method can benefit users while minimizing resource requirements. By conducting a user study for the AR remote assistance platform, I show that the proposed AR platform outperforms traditional instructional videos and sketching. Through questionnaires provided at the end of the experiment, I found that the proposed AR platform receives higher recommendations than sketching and, compared to traditional instructional videos, stands out in terms of instruction clarity, preference, recommendation, and confidence of task completion. Moreover, as to the overall user experience, the proposed method has an advantage over the video method.
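The client-side compositing step of such a hybrid framework can be sketched as a per-pixel selection between the locally rendered low-fidelity frame and the decoded high-fidelity frame from the server. The mask-based formulation below is a simplified assumption about how the overlay works, not the thesis's actual implementation:

```python
def composite(client_frame, server_frame, mask):
    """Overlay server pixels (where mask[y][x] is truthy) onto the client frame.

    Frames are same-sized 2D grids of pixel values; in the real system the
    mask would mark the key models rendered in high-fidelity mode on the server.
    """
    return [[server_frame[y][x] if mask[y][x] else client_frame[y][x]
             for x in range(len(client_frame[0]))]
            for y in range(len(client_frame))]
```

Because only the masked key-model regions need to be streamed at high fidelity, bandwidth scales with the size of those regions rather than the full frame.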
|
147 |
Hur kan Design Fiction användas för att skapa en visuellt informativ video om kolonier på Venus, för unga / How can Design Fiction be used to create a visually informative video about colonies on Venus, for young people
Wallhede, Johan, Franzén, Emma January 2022 (has links)
This article is about Design Fiction and how it can be used to create a visually informative video for younger audiences. It also describes the technical process of creating a 3D animation that can be shown in a dome theatre, as a VR 360° video, and on a flat screen. An analysis of the technical results, in which the 3D rendered images are shown on these different media, lends validity to the results. The article also provides a general "best practices" guide for dome-theatre productions and what a designer should consider when working on a dome-theatre project; readers should be aware that these practices may vary between dome theatres in ways not covered here. Finally, the article describes the technical challenges and opportunities of working with 3D graphics, along with their solutions and pitfalls, offering advice on what to do and what to avoid in such projects.
|
148 |
Real-Time Stylized Rendering for Large-Scale 3D Scenes
Pietrok, Jack 01 June 2021 (has links) (PDF)
While modern digital entertainment has seen a major shift toward photorealism in animation, there is still significant demand for stylized rendering tools. Stylized, or non-photorealistic rendering (NPR), applications generally sacrifice physical accuracy for artistic or functional visual output. Oftentimes, NPR applications focus on extracting specific features from a 3D environment and highlighting them in a unique manner. One application of interest involves recreating 2D hand-drawn art styles in a 3D-modeled environment. This task poses challenges in the form of spatial coherence, feature extraction, and stroke-line rendering. Previous research on this topic has also struggled to overcome specific performance bottlenecks, which have limited the use of this technology in real-time applications. Specifically, many stylized rendering techniques have difficulty operating on large-scale scenes, such as open-world terrain environments. In this paper, we describe several novel rendering techniques for mimicking hand-drawn art styles in a large-scale 3D environment, including modifications to existing methods for stroke rendering and hatch-line texturing. Our system focuses on providing various complex styles while maintaining real-time performance, to maximize user interactivity. Our results demonstrate improved performance over existing real-time methods and offer a few unique style options for users, though the system still suffers from some visual inconsistencies.
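Hatch-line texturing of the kind mentioned is often driven by surface tone: darker shading selects a denser hatch texture. A minimal level-selection sketch, where the four-level scheme is an assumption rather than the paper's actual method:

```python
def hatch_level(luminance, num_levels=4):
    """Map a luminance in [0, 1] to a hatch-texture index.

    Darker tones (lower luminance) map to higher indices, i.e. denser hatching.
    """
    luminance = min(max(luminance, 0.0), 1.0)
    return min(int((1.0 - luminance) * num_levels), num_levels - 1)
```

In a real pipeline this selection runs per fragment, and neighboring levels are typically blended to avoid popping as the lighting changes.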
|
149 |
Deep Learning Approaches for Automatic Colorization, Super-resolution, and Representation of Volumetric Data
Devkota, Sudarshan 01 January 2023 (links) (PDF)
This dissertation includes a collection of studies that aim to improve the way we represent and visualize volume data. The advancement of medical imaging has revolutionized healthcare, providing crucial anatomical insights for accurate diagnosis and treatment planning. Our first study introduces an innovative technique to enhance the utility of medical images, transitioning from monochromatic scans to vivid 3D representations. It presents a framework for reference-based automatic color transfer, establishing deep semantic correspondences between a colored reference image and grayscale medical scans. This methodology extends to volumetric rendering, eliminating the need for manual intervention in parameter tuning. Our second study delves into deep learning-based super-resolution for volume data. By leveraging color information and supplementary features, the proposed system efficiently upscales low-resolution renderings to achieve higher-fidelity results. Temporal reprojection further strengthens stability in volumetric rendering. The third contribution centers on the compression and representation of volumetric data, leveraging coordinate-based networks and multi-resolution hash encoding. This approach demonstrates superior compression quality and training efficiency compared to other state-of-the-art neural volume representation techniques. Furthermore, we introduce a meta-learning technique for weight initialization to expedite convergence during training. These findings collectively underscore the potential for transformative advancements in large-scale data visualization and related applications.
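Multi-resolution hash encoding stores learned features in a hash table indexed by spatially hashed grid coordinates and interpolates between surrounding cell corners. A single-level 2D sketch of the lookup (the dissertation's encoding would be multi-level and 3D; the hash prime follows a common spatial-hashing convention):

```python
import math

def hash_coords(ix, iy, table_size):
    """Spatially hash integer grid coordinates into a feature-table index."""
    return ((ix * 1) ^ (iy * 2654435761)) % table_size

def hash_encode(x, y, table, table_size):
    """Bilinearly interpolate hashed scalar features at a continuous 2D point."""
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def f(ix, iy):
        return table[hash_coords(ix, iy, table_size)]
    top = f(x0, y0) * (1 - fx) + f(x0 + 1, y0) * fx
    bottom = f(x0, y0 + 1) * (1 - fx) + f(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bottom * fy
```

During training, gradients flow back into `table`, so the stored features themselves are learned; hash collisions are tolerated because the subsequent network disambiguates them.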
|
150 |
MODELS AND ALGORITHMS FOR INTERACTIVE AUDIO RENDERING
Tsingos, Nicolas 14 April 2008 (links) (PDF)
Interactive virtual reality systems combine visual, auditory, and haptic representations to immersively simulate the exploration of a three-dimensional world represented from the viewpoint of an observer under real-time user control. Historically, most work in this field has focused on the visual aspects (for example, methods for interactive display of complex 3D models or for realistic and efficient lighting simulation), and comparatively little work has been devoted to the simulation of virtual sound sources, also called auralization. Yet sound simulation is clearly a key factor in producing synthetic environments, as auditory perception complements visual perception to produce a more natural interaction. In particular, spatialized sound effects, whose direction of arrival is faithfully reproduced at the listener's ears, are especially important for localizing objects, separating multiple simultaneous sound signals, and providing cues about the spatial characteristics of the environment (size, materials, etc.). Most immersive virtual reality systems, from the most complex simulators to consumer video games, now implement sound synthesis and spatialization algorithms that improve navigation and increase the realism and the user's sense of presence in the synthetic environment. Like image synthesis, of which it is the auditory counterpart, auralization, also called sound rendering, is a vast subject at the crossroads of multiple disciplines: computer science, acoustics and electroacoustics, signal processing, music, and geometric computing, as well as psychoacoustics and audio-visual perception.
It encompasses three main problems: synthesis and interactive control of sounds; simulation of sound propagation effects in the environment; and perception and spatial reproduction at the listener's ears. Historically, these three problems emerged from work in architectural acoustics, musical acoustics, and psychoacoustics. A fundamental difference between sound rendering for virtual reality and acoustics, however, lies in the multimodal interaction and in the efficiency required of algorithms intended for interactive applications. These important aspects make it a field in its own right, one of growing importance both in the acoustics community and in the image synthesis/virtual reality community.
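As a small taste of the spatial-reproduction problem described above, the simplest directional cue is amplitude panning: a constant-power split of a source between two speakers based on its azimuth. This is a drastic simplification of binaural rendering with HRTFs, included purely for illustration:

```python
import math

def pan_gains(azimuth):
    """Constant-power stereo gains for a source at `azimuth` radians.

    Azimuth ranges from -pi/2 (hard left) to +pi/2 (hard right);
    the left and right powers always sum to one.
    """
    theta = (azimuth + math.pi / 2) / 2.0    # map azimuth to [0, pi/2]
    return math.cos(theta), math.sin(theta)  # (left_gain, right_gain)
```

Because the squared gains sum to one, the perceived loudness stays constant as a virtual source sweeps across the stereo field, which is the minimal property any spatialization scheme must preserve.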
|