271

Remote Rendering for VR

Kelkkanen, Viktor January 2021 (has links)
The aim of this thesis is to study and advance technology relating to remote rendering of Virtual Reality (VR). In remote rendering, rendered content is commonly streamed as video images in network packets from a server to a client. Experiments are conducted with varying networks and configurations throughout this work, as well as with different technologies that enable or improve remote VR experiences. As an introduction to the field, the thesis begins with related studies on 360-video. Here, a statistic based on throughput alone is proposed for use in lightweight performance monitoring of encrypted HTTPS 360-video streams. The statistic gives an indication of the potential for stalls in the video stream, which may be of use for network providers wanting to allocate bandwidth optimally. Moving on from 360-video into real-time remote rendering, a wireless VR adapter, TPCAST, is studied, and a method for monitoring the input and video throughput of this device is proposed and implemented. With the monitoring tool, it is, for example, possible to identify video stalls that occur in TPCAST and thus determine a baseline of its robustness in terms of video delivery. Having determined the baseline, we move on to developing a prototype remote rendering system for VR. The prototype has so far been used to study the bitrate requirements of remote VR and to develop a novel method that reduces the image size from a codec perspective by utilizing the Hidden Area Mesh (HAM) that is unique to VR. By reducing the image size, codecs can run faster, and time is therefore saved each frame, potentially reducing the latency of the system.
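As a rough illustration of the kind of throughput-only statistic described above, the sketch below estimates playback-buffer fill for a video stream from observed download throughput and the stream's nominal bitrate; the segment bitrate, sampling interval, and starting buffer are hypothetical values, not figures from the thesis.

```python
# Hypothetical sketch: flag potential stalls in a video stream using only
# observed throughput (bits received per interval) and the nominal bitrate.
# All constants are illustrative assumptions, not values from the thesis.

NOMINAL_BITRATE = 20e6   # bits per second consumed during playback (assumed)
INTERVAL = 1.0           # seconds between throughput samples (assumed)

def stall_risk(throughput_samples_bps, start_buffer_s=2.0):
    """Return per-interval buffer estimates; values near zero suggest a stall."""
    buffer_s = start_buffer_s
    estimates = []
    for bps in throughput_samples_bps:
        buffer_s += (bps / NOMINAL_BITRATE) * INTERVAL  # video time downloaded
        buffer_s -= INTERVAL                            # video time played back
        buffer_s = max(buffer_s, 0.0)
        estimates.append(buffer_s)
    return estimates

if __name__ == "__main__":
    samples = [25e6, 22e6, 5e6, 4e6, 30e6]  # throughput dips mid-stream
    print(stall_risk(samples))
```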
272

Differentiable TEM Detector: Towards Differentiable Transmission Electron Microscopy Simulation

Liang, Feng 04 1900 (has links)
We propose to interpret Cryogenic Electron Microscopy (CryoEM) data as supervision for learning parameters of CryoEM microscopes. Following this formulation, we present a differentiable version of a Transmission Electron Microscopy (TEM) simulator that provides differentiability with respect to all continuous inputs of a simulation. We demonstrate the learning capability of our simulator with two examples, detector parameter estimation and denoising. With our differentiable simulator, detector parameters can be learned from real data without time-consuming handcrafting. In addition, our simulator enables a new way of denoising micrographs. We develop this simulator with a combination of Taichi and PyTorch, exploiting kernel-based and operator-based parallel differentiable programming, which results in good speed, a low memory footprint, and expressive code. We call our work Differentiable TEM Detector because there are still challenges in implementing a fully differentiable transmission electron microscope simulator that can further differentiate with respect to particle positions; this work presents first steps towards such a simulator. Finally, as an outgrowth of our work, we abstract out the fuser that connects Taichi and PyTorch as an open-source library, Stannum, facilitating neural rendering and differentiable rendering in a broader context. We publish our code on GitHub.
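To make the general idea of learning detector parameters through a differentiable forward model concrete, here is a minimal, hypothetical PyTorch sketch. The toy "detector" (a gain and offset applied to a clean signal) stands in for the actual simulator; it is not the thesis's TEM model or the Stannum API.

```python
import torch

# Hypothetical sketch: if a forward simulator is differentiable, detector
# parameters can be fitted to observed images by gradient descent. The
# forward model below is a stand-in, not the actual TEM simulator.

def forward_model(clean_signal, gain, offset):
    """Toy differentiable 'detector': scales and offsets the incoming signal."""
    return gain * clean_signal + offset

torch.manual_seed(0)
clean = torch.rand(64, 64)                        # pretend noise-free projection
observed = forward_model(clean, 3.2, 0.1) + 0.01 * torch.randn(64, 64)

gain = torch.tensor(1.0, requires_grad=True)      # parameters to learn
offset = torch.tensor(0.0, requires_grad=True)
optimizer = torch.optim.Adam([gain, offset], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.mean((forward_model(clean, gain, offset) - observed) ** 2)
    loss.backward()                               # gradients flow through the model
    optimizer.step()

print(float(gain), float(offset))                 # should approach 3.2 and 0.1
```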
273

3D emotional rendering and animation models

Huang, Jing 26 February 2013 (has links)
Animation and rendering are both important research domains in computer graphics. We present a fast, easy-to-implement, separable approximation to screen-space ambient occlusion (AO). We evaluate AO for each pixel by integrating angular values of samples around the pixel position that potentially block the ambient lighting, and we apply a separable scheme to reduce the complexity of the evaluation. Wrinkle simulation can also be approximated without changing geometry information. We built a wrinkle model using a modern graphics technique that performs computations only in screen space. With the help of wrinkles, facial animation becomes more realistic, and wrinkles were shown to help recognize action units at a higher rate. Inverse kinematics (IK) can be used to find hierarchical posture solutions. We present a fast, easy-to-implement, locally physics-based IK method. Our method builds upon a mass-spring model and relies on force interactions between masses, which can be viewed as an energy-minimization problem; it offers convincing visual quality with high runtime performance. Based on our IK method, we propose an expressive body-gesture animation synthesis model for our Embodied Conversational Agent (ECA) platform. Our implementation builds upon a full-body reach model using a hybrid kinematics solution, and generated animations can be enhanced with expressive qualities. This system offers more flexibility to configure expressive forward and inverse kinematics (FK and IK) and can be extended to other articulated figures. Overall, this thesis presents our work in 3D rendering and animation; several new approaches are proposed to improve both quality and speed.
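As a rough, hypothetical illustration of the force-based IK idea described above (not the thesis's actual solver), the sketch below relaxes a 2D chain of point masses toward a target end-effector position using simple spring forces; all constants are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a mass-spring style IK relaxation (2D): the chain is
# driven toward equilibrium by force interactions between masses rather than
# analytic joint angles. Constants and structure are illustrative only.

REST_LENGTH = 1.0    # rest length of each segment ("bone")
K_SPRING = 5.0       # stiffness of the structural springs
K_TARGET = 2.0       # pull of the end effector toward the IK target
DT = 0.02            # integration step
STEPS = 2000

def solve_chain(n_joints, target):
    pos = np.stack([np.arange(n_joints, dtype=float), np.zeros(n_joints)], axis=1)
    vel = np.zeros_like(pos)
    for _ in range(STEPS):
        forces = np.zeros_like(pos)
        for i in range(n_joints - 1):                 # structural springs
            d = pos[i + 1] - pos[i]
            length = np.linalg.norm(d) + 1e-9
            f = K_SPRING * (length - REST_LENGTH) * (d / length)
            forces[i] += f
            forces[i + 1] -= f
        forces[-1] += K_TARGET * (target - pos[-1])   # end-effector attraction
        forces[0] = 0.0                               # root is pinned
        vel = 0.9 * (vel + DT * forces)               # damped explicit integration
        pos += DT * vel
        pos[0] = (0.0, 0.0)
    return pos

print(solve_chain(4, target=np.array([1.5, 2.0])))
```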
274

Multigraph visualization for feature classification of brain network data

Wang, Jiachen 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A multigraph is a set of graphs with a common set of nodes but different sets of edges. Multigraph visualization has not received much attention so far. In this thesis, I introduce an interactive application in brain network data analysis that has a strong need for multigraph visualization. For this application, a multigraph was used to represent the brain connectome networks of multiple human subjects. A volumetric data set was constructed from the matrix representation of the multigraph. A volume visualization tool was then developed to assist the user in interactively and iteratively detecting network features that may contribute to certain neurological conditions. I applied this technique to a brain connectome dataset for feature detection in the classification of Alzheimer's Disease (AD) patients. Preliminary results showed significant improvements when interactively selected features are used.
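A minimal sketch of the matrix-to-volume construction described above, using random data: stacking one N×N connectivity matrix per subject into an N×N×S volume to which standard volume visualization and per-voxel statistics can be applied. The sizes and statistic shown are illustrative assumptions, not the thesis's actual dataset or pipeline.

```python
import numpy as np

# Hypothetical sketch: build a volumetric data set from the matrix
# representation of a multigraph (one connectivity matrix per subject).
# The data below is random and purely illustrative.

N_NODES, N_SUBJECTS = 90, 40
rng = np.random.default_rng(0)

# One symmetric connectivity matrix per subject.
subject_matrices = []
for _ in range(N_SUBJECTS):
    m = rng.random((N_NODES, N_NODES))
    subject_matrices.append((m + m.T) / 2)

volume = np.stack(subject_matrices, axis=2)   # shape (N_NODES, N_NODES, N_SUBJECTS)

# A per-edge statistic across subjects (here, variance) can highlight edges
# that differ between subjects and are candidate classification features.
edge_variance = volume.var(axis=2)
print(volume.shape, edge_variance.shape)
```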
275

Evaluation and reference implementation of simulation focused rendering frameworks

Henningsson, Filip January 2022 (has links)
When creating physics simulations, one common issue that arises is the need to visualize the simulation progress and results. With recent advancements in ray tracing, this problem has mostly been solved for non-real-time applications. However, there still exists a need to find and evaluate real-time rendering options for applications that need to run at interactive frame rates. In this thesis, we look at some rendering frameworks and propose a set of four performance-based metrics and one usability metric for evaluating them. In addition to the metrics, we propose a qualitative method for evaluating the realism of simulated scenes rendered using different frameworks. Finally, we select one of the frameworks and use the proposed metrics and qualitative method to evaluate the overall fit of the framework for the use case of visualizing physics simulations. The results show similar scores on the performance metrics for the new framework and a previously used reference framework, while showing a slight tendency toward increased realism in scenes rendered with the new framework.
276

Phong Grass: A dynamic LOD approach to grass rendering

Klingspor, Oliver January 2023 (has links)
Background: Recreating nature in a virtual environment comes with many aspects and challenges. Grass rendering is a part of this area, and both performance and visual results must be taken into consideration. A common optimization approach is static level-of-detail (LOD); however, a certain visual loss comes with its use. Dynamic LOD is an alternative technique and exists in various forms, such as Phong tessellation. This thesis investigates the performance and visual implications of utilizing Phong tessellation in grass rendering. Objectives: The main objective is to examine the implications of utilizing Phong tessellation in grass rendering in terms of performance and visual results. Different scenarios are examined, providing an overview while identifying performance and visual patterns of the technique. Methods: An experiment conducted via a DirectX 11 implementation was used to collect all data in scenarios that varied in the number of blades and in how the tessellation falloff rate was described. The data collected was the average frame time during a 30-second duration, as well as rendered images saved to disk from two points of view. Two reference scenes used only a single LOD of either low- or high-quality grass, with no tessellation applied. The data of each scenario and scene were compared to one another, and the visual differences were identified using the image difference evaluator FLIP. Results: The results show that different scenarios provide different benefits. Some scenarios contain smaller visual errors, while others perform efficiently. Overall, a linear detail-falloff rate both performs well at various blade counts and produces visual differences similar to or smaller than the other scenarios. However, the results also show that the technique does not fit every set of hardware, at least not in demanding scenarios. Conclusions: The findings of this thesis show that the method has the potential to be a valid option in terms of performance and visual quality. It is, however, important to consider the specific use case, as it is not applicable in every situation, and to consider what hardware the application will run on.
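For readers unfamiliar with the base technique, the sketch below shows the standard Phong tessellation interpolation (Boubekeur and Alexa) for one point on a triangle. It is a generic illustration of that prior technique under assumed inputs, not the thesis's grass-specific implementation or its falloff scheme.

```python
import numpy as np

# Generic illustration of Phong tessellation: a tessellated point is blended
# between flat barycentric interpolation and its projections onto the vertex
# tangent planes. The inputs below are illustrative only.

def project_to_plane(q, p, n):
    """Project point q onto the plane through vertex p with unit normal n."""
    return q - np.dot(q - p, n) * n

def phong_tessellate(verts, normals, bary, alpha=0.75):
    u, v, w = bary
    flat = u * verts[0] + v * verts[1] + w * verts[2]     # flat interpolation
    curved = (u * project_to_plane(flat, verts[0], normals[0])
              + v * project_to_plane(flat, verts[1], normals[1])
              + w * project_to_plane(flat, verts[2], normals[2]))
    return (1.0 - alpha) * flat + alpha * curved          # alpha controls roundness

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
normals = np.array([[0.0, -0.5, 1.0], [0.5, 0.0, 1.0], [0.0, 0.5, 1.0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(phong_tessellate(verts, normals, bary=(1 / 3, 1 / 3, 1 / 3)))
```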
277

Streamsurface Smoke Effect for Visualizing Dragon Fly CFD Data in Modern OpenGL with an Emphasis on High Performance

Sipes, Jordan 24 May 2013 (has links)
No description available.
278

Rendering Realistic Cloud Effects for Computer Generated Films

Reimschussel, Cory A. 24 June 2011 (has links) (PDF)
This work addresses the problem of rendering clouds. The task of rendering clouds is important to film and video game directors who want to use clouds to further the story or create a specific atmosphere for the audience. While there has been significant progress in this area, other solutions to this problem are inadequate because they focus on speed instead of accuracy, or focus only on a few specific properties of rendered clouds while ignoring others. Another common shortcoming of other methods is that they are not integrated into existing rendering pipelines. We propose a solution to this problem based on creating a point cloud to represent the cloud volume, then calculating light scattering events between the points. The key insight is blending isotropic and anisotropic scattering events to mimic realistic light scattering in anisotropic participating media. Rendered images are visually plausible representations of how light interacts with clouds.
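A minimal sketch of the blending idea mentioned above: mixing an isotropic phase function with an anisotropic Henyey-Greenstein lobe. The blend weight and asymmetry parameter g are illustrative assumptions, not values or the exact formulation from the thesis.

```python
import numpy as np

# Hypothetical sketch: blend an isotropic phase function with an anisotropic
# Henyey-Greenstein lobe to model scattering in participating media.

def henyey_greenstein(cos_theta, g):
    """Standard Henyey-Greenstein phase function (normalized over the sphere)."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * np.pi * denom)

def blended_phase(cos_theta, g=0.6, blend=0.7):
    isotropic = 1.0 / (4.0 * np.pi)
    return (1.0 - blend) * isotropic + blend * henyey_greenstein(cos_theta, g)

angles = np.linspace(0.0, np.pi, 5)
print(blended_phase(np.cos(angles)))   # forward-peaked but not purely anisotropic
```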
279

Wavelets In Real-time Rendering

Sun, Weifeng 01 January 2006 (has links)
Interactively simulating the visual appearance of natural objects under natural illumination is a fundamental problem in computer graphics. 3D computer games, geometry modeling, training and simulation, electronic commerce, visualization, lighting design, digital libraries, geographical information systems, and economic and medical image processing are typical candidate applications. Recent advances in graphics hardware have enabled real-time rasterization of complex scenes under artificial lighting environments. Meanwhile, pre-computation-based soft shadow algorithms have proven effective under low-frequency lighting environments. Under the practical and popular all-frequency natural lighting environment, however, real-time rendering of dynamic scenes still remains a challenging problem. In this dissertation, we propose a systematic approach to render dynamic glossy objects under general all-frequency lighting. In our framework, lighting integration is reduced to two rather basic mathematical operations: efficiently computing multi-function products and product integrals. The main contribution of our work is a novel mathematical representation and analysis of multi-function products and product integrals in the wavelet domain. We show that a multi-function product integral in the primal domain is equivalent to a summation of products of basis coefficients and integral coefficients. We give a novel Generalized Haar Integral Coefficient Theorem and present a set of efficient algorithms to compute multi-function products and product integrals. We demonstrate practical applications of these algorithms in the interactive rendering of dynamic glossy objects under distant, time-variant, all-frequency environment lighting with arbitrary view conditions. At each vertex, the shading integral is formulated as the product integral of multiple operand functions. By approximating operand functions in the wavelet domain, we demonstrate rendering dynamic glossy scenes interactively, which is orders of magnitude faster than previous work. As an important enhancement to the popular Pre-computation Based Radiance Transfer (PRT) approach, we present a novel Just-in-time Radiance Transfer (JRT) technique and demonstrate its application in real-time, realistic rendering of dynamic all-frequency shadows under general lighting. Our work is a significant step towards real-time rendering of arbitrary scenes under general lighting environments, and it is also of importance to general numerical analysis and signal processing.
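The sketch below gives a generic numerical illustration of the wavelet-domain product-integral idea: for functions represented in an orthonormal Haar basis, the integral of a product equals a sum of products of coefficients. This is a two-function toy example, not the dissertation's generalized multi-function theorem, and the test functions are arbitrary.

```python
import numpy as np

# Illustrative sketch: the integral of a product of piecewise-constant
# functions equals the dot product of their orthonormal Haar coefficients.

def haar_transform(x):
    """Orthonormal discrete Haar transform of a length-2^J vector."""
    x = np.asarray(x, dtype=float).copy()
    detail = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        detail.append(diff)
        x = avg
    detail.append(x)                      # final scaling coefficient
    return np.concatenate(detail[::-1])

N = 64                                    # piecewise-constant samples on [0, 1]
t = (np.arange(N) + 0.5) / N
f = np.sin(2 * np.pi * t) + 1.5           # e.g., an environment lighting factor
g = np.exp(-3 * t)                        # e.g., a visibility/BRDF factor

integral_primal = np.sum(f * g) / N       # Riemann sum of the product integral
integral_wavelet = np.dot(haar_transform(f), haar_transform(g)) / N
print(integral_primal, integral_wavelet)  # the two values agree
```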
280

Algorithms For Rendering Optimization

Johnson, Jared 01 January 2012 (has links)
This dissertation explores algorithms for rendering optimization realizable within a modern, complex rendering engine. The first part contains optimized rendering algorithms for ray tracing. Ray tracing algorithms typically provide properties of simplicity and robustness that are highly desirable in computer graphics. We offer several novel contributions to the problem of interactive ray tracing of complex lighting environments. We focus on the problem of maintaining interactivity as both geometric and lighting complexity grows, without affecting the simplicity or robustness of ray tracing. First, we present a new algorithm called occlusion caching for accelerating the calculation of direct lighting from many light sources. We cache light visibility information sparsely across a scene. When rendering direct lighting for all pixels in a frame, we combine cached lighting information to determine whether or not shadow rays are needed. Since light visibility and scene location are highly correlated, our approach precludes the need for most shadow rays. Second, we present improvements to the irradiance caching algorithm. Here we demonstrate a new elliptical cache point spacing heuristic that reduces the number of cache points required by taking into account the direction of irradiance gradients. We also accelerate irradiance caching by efficiently and intuitively coupling it with occlusion caching. In the second part of this dissertation, we present optimizations to rendering algorithms for participating media. Specifically, we explore the implementation and use of photon beams as an efficient, intuitive artistic primitive. We detail our implementation of the photon beams algorithm in PhotoRealistic RenderMan (PRMan). We show how our implementation maintains the benefits of the industry-standard Reyes rendering pipeline, with proper motion blur and depth of field. We detail an automatic photon beam generation algorithm utilizing PRMan shadow maps. We accelerate the rendering of camera-facing photon beams by utilizing Gaussian quadrature for path integrals in place of ray marching. Our optimized implementation allows for incredible versatility and intuitiveness in artistic control of volumetric lighting effects. Finally, we demonstrate the usefulness of photon beams as artistic primitives by detailing their use in a feature-length animated film.
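As a rough illustration of the substitution mentioned above (Gaussian quadrature in place of ray marching for a path integral along a ray), the sketch below integrates a made-up, smooth in-scattering profile both ways. The integrand and constants are hypothetical; this is not the thesis's beam radiance term.

```python
import numpy as np

# Hypothetical sketch: evaluate a line integral along a camera ray with
# Gauss-Legendre quadrature instead of uniform ray marching.

def inscatter(t):
    """Toy integrand: a smooth radiance contribution along the ray parameter t."""
    return np.exp(-0.8 * t) * (1.0 + np.cos(1.5 * t)) * 0.5

T_NEAR, T_FAR = 0.0, 4.0

# Ray marching: many uniform samples along the ray.
ts = np.linspace(T_NEAR, T_FAR, 256)
march = np.trapz(inscatter(ts), ts)

# Gauss-Legendre quadrature: a handful of well-placed samples.
nodes, weights = np.polynomial.legendre.leggauss(8)
mapped = 0.5 * (T_FAR - T_NEAR) * nodes + 0.5 * (T_FAR + T_NEAR)
quad = 0.5 * (T_FAR - T_NEAR) * np.dot(weights, inscatter(mapped))

print(march, quad)   # close agreement with far fewer integrand evaluations
```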
