  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Non-photorealistic rendering with coherence for augmented reality

Chen, Jiajian 16 July 2012 (has links)
A seamless blending of the real and virtual worlds is key to increased immersion and improved user experiences for augmented reality (AR). Photorealistic and non-photorealistic rendering (NPR) are two ways to achieve this goal. Non-photorealistic rendering creates an abstract and stylized version of both the real and virtual world, making them indistinguishable. This could be particularly useful in some applications (e.g., AR/VR-aided machine repair, or virtual medical surgery) or for certain AR games with artistic stylization. Achieving temporal coherence is a key challenge for all NPR algorithms. Rendered results are temporally coherent when each frame smoothly and seamlessly transitions to the next without visual flickering or artifacts that distract the eye from perceived smoothness. NPR algorithms with coherence are of interest both in general computer graphics and in AR/VR. Rendering stylized AR without coherence processing causes the final results to be visually distracting. While various NPR algorithms with coherence support have been proposed in the general graphics community for video processing, many of them require thorough analysis of all frames of the input video and cannot be directly applied to real-time AR applications. We have investigated existing NPR algorithms with coherence in both general graphics and AR/VR, dividing them into two categories: model space and image space. We present several NPR algorithms with coherence for AR: a watercolor-inspired NPR algorithm, a painterly rendering algorithm, and model-space NPR algorithms that can support several styling effects.
32

Delay sensitive delivery of rich images over WLAN in telemedicine applications

Sankara Krishnan, Shivaranjani. January 2009 (has links)
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Jayant, Nikil; Committee Member: Altunbasak, Yucel; Committee Member: Sivakumar, Raghupathy. Part of the SMARTech Electronic Thesis and Dissertation Collection.
33

Animated river models for interactive landscape exploration / Modèles de rivières animées pour l'exploration interactive de paysages

Yu, Qizhi 17 November 2008 (has links) (PDF)
In this thesis, we propose a multi-scale model for river animation, with a new model at each scale. At the macro scale, we propose a procedural method that generates a realistic river on the fly. At the meso scale, we improve a phenomenological model based on a vector representation of the shock waves near obstacles, and propose a method for adaptive reconstruction of the water surface. At the micro scale, we present an adaptive method for texturing very large surfaces with scene-independent performance, as well as a texture-advection method; both rely on our adaptive sampling scheme. By combining these models, we can animate world-scale rivers in real time while keeping them controllable. The performance of our system is independent of the scene: the procedural velocity and the screen-space sampling allow it to run on unbounded domains. Users can observe the river from very close or very far at any time, and highly detailed waves can be displayed. The different parts of the river remain continuous in space and time, even while a user explores or edits it. This means the user can edit river beds or add islands on the fly without interrupting the animation; the river's velocity changes as soon as the user edits its features, and the user can also modify its appearance with textures.
34

Level-Of-Details Rendering with Hardware Tessellation / Rendu de niveaux de détails avec la Tessellation Matérielle

Lambert, Thibaud 18 December 2017 (has links)
In the last two decades, real-time applications have exhibited colossal improvements in the generation of photo-realistic images. This is mainly due to the availability of 3D models with an increasing amount of details. Currently, the traditional approach to represent and visualize highly detailed 3D objects is to decompose them into a low-frequency mesh and a displacement map encoding the details. Hardware tessellation is the ideal support to implement an efficient rendering of this representation.
In this context, we propose a general framework for the generation and the rendering of multi-resolution feature-aware meshes compatible with hardware tessellation. First, we introduce a view-dependent metric capturing both geometric and parametric distortions, allowing the appropriate resolution level to be selected at render time. Second, we present a novel hierarchical representation enabling, on the one hand, smooth temporal and spatial transitions between levels and, on the other hand, non-uniform hardware tessellation. Last, we devise a simplification process to generate our hierarchical representation while minimizing our error metric. Our framework leads to huge improvements both in terms of triangle count and rendering time in comparison to alternative methods.
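The view-dependent level selection described above can be illustrated with a small sketch. The error model here (a finest level with known world-space error that roughly doubles per coarser level, projected to a screen-space pixel budget) and all parameter names are illustrative assumptions, not taken from the thesis:

```python
import math

def select_lod(distance, base_error, screen_height_px, fov_y, max_level, tol_px=1.0):
    """Pick the coarsest level whose projected geometric error stays under tol_px.

    Hypothetical model: level 0 is the finest mesh with world-space error
    base_error, and each coarser level roughly doubles that error.
    """
    # pixels covered by one world-space unit at the given view distance
    px_per_unit = screen_height_px / (2.0 * distance * math.tan(fov_y / 2.0))
    level = 0
    while level < max_level and base_error * 2 ** (level + 1) * px_per_unit <= tol_px:
        level += 1
    return level

# A nearby object needs a finer (lower) level than a distant one.
near_level = select_lod(2.0, 0.01, 1080, math.radians(60), max_level=8)
far_level = select_lod(200.0, 0.01, 1080, math.radians(60), max_level=8)
```

In a real tessellation pipeline this decision would additionally be made per patch edge, so that adjacent patches agree on their shared-edge tessellation factors.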
35

Fast spectral multiplication for real-time rendering

Waddle, C Allen 02 May 2018 (has links)
In computer graphics, the complex phenomenon of color appearance, involving the interaction of light, matter and the human visual system, is modeled by the multiplication of RGB triplets assigned to lights and materials. This efficient heuristic produces plausible images because the triplets assigned to materials usually function as color specifications. To predict color, spectral rendering is required, but the O(n) cost of computing reflections with n-dimensional point-sampled spectra is prohibitive for real-time rendering. Typical spectra are well approximated by m-dimensional linear models, where m << n, but computing reflections with this representation requires O(m^2) matrix-vector multiplication. A method by Drew and Finlayson [JOSA A 20, 7 (2003), 1181-1193], reduces this cost to O(m) by “sharpening” an n x m orthonormal basis with a linear transformation, so that the new basis vectors are approximately disjoint. If successful, this transformation allows approximated reflections to be computed as the products of coefficients of lights and materials. Finding the m x m change of basis matrix requires solving m eigenvector problems, each needing a choice of wavelengths in which to sharpen the corresponding basis vector. These choices, however, are themselves an optimization problem left unaddressed by the method's authors. Instead, we pose a single problem, expressing the total approximation error incurred across all wavelengths as the sum of dm^2 squares for some number d, where, depending on the inherent dimensionality of the rendered reflectance spectra, m <= d << n, a number that is independent of the number of approximated reflections. This problem may be solved in real time, or nearly, using standard nonlinear optimization algorithms. Results using a variety of reflectance spectra and three standard illuminants yield errors at or close to the best lower bound attained by projection onto the leading m characteristic vectors of the approximated reflections. 
Measured as CIEDE2000 color differences, a heuristic proxy for image difference, these errors can be made small enough to be likely imperceptible using values of 4 <= m <= 9. An examination of this problem reveals a hierarchy of simpler, more quickly solved subproblems whose solutions yield, in the typical case, increasingly inaccurate approximations. Analysis of this hierarchy explains why, in general, the lowest approximation error is not attained by simple spectral sharpening, the smallest of these subproblems, unless the spectral power distributions of all light sources in a scene are sufficiently close to constant functions. Using the methods described in this dissertation, spectra can be rendered in real time as the products of m-dimensional vectors of sharp basis coefficients at a cost that is, in a typical application, a negligible fraction above the cost of RGB rendering. / Graduate
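The cost argument above can be made concrete with a toy sketch. The three routines below contrast exact point-sampled multiplication (O(n)), multiplication in a general m-dimensional linear model via a product tensor (O(m^2) per coefficient), and the sharpened-basis shortcut (O(m)); the tensor values are hypothetical placeholders, not a real basis:

```python
def reflect_pointwise(light, refl):
    """Exact O(n) component-wise product of two n-sample spectra."""
    return [L * R for L, R in zip(light, refl)]

def reflect_linear_model(T, a, b):
    """O(m^2)-per-coefficient product in an m-dimensional linear model.

    T[k][i][j] is the projection of the product of basis vectors i and j
    onto basis vector k; real values would come from the chosen basis.
    """
    m = len(a)
    return [sum(T[k][i][j] * a[i] * b[j] for i in range(m) for j in range(m))
            for k in range(m)]

def reflect_sharpened(a, b):
    """O(m): in a sharpened (approximately disjoint) basis the product
    tensor is approximately diagonal, so coefficients multiply directly."""
    return [ai * bi for ai, bi in zip(a, b)]
```

When the product tensor is exactly diagonal (the idealized case sharpening aims for), the general and sharpened routines agree, which is the source of the O(m) saving.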
36

Recording Rendering API Commands for Instant Replay : A Runtime Overhead Comparison to Real-Time Video Encoding

Holmberg, Marcus January 2020 (has links)
Background. Instant replay allows an application to highlight events without exporting a video of the whole session. Hardware-accelerated video encoding allows replay footage to be encoded in real time with little to no impact on the runtime performance of the actual simulation in the application. Hardware-accelerated video encoding, however, is not supported on all devices, such as low-tier mobile devices, nor on all platforms, such as web browsers. When hardware acceleration is not supported, the replay has to be encoded using a software-implemented encoder instead. Objectives. To evaluate whether recording rendering API commands is a suitable replacement for real-time encoding when hardware-accelerated video encoding is not supported. Method. An experimental research method is used to make quantitative measurements of the proposed approach, Reincore, and a real-time encoder. The measured metrics are frame time and memory consumption. The Godot game engine is modified with modules for real-time video encoding (H.264, H.265 and VP9 codecs) and for rendering API command recording and replaying. The engine is also used to create test scenes to evaluate whether object count, image motion, object loading/unloading, replay video resolution and replay video duration have any impact on the runtime overhead in frame time and memory consumption. Results. The implemented rendering API command replayer, Reincore, appears to have minimal to no impact on frame time in all scenarios, except for a spike in frame time when the replayer initializes synchronization. Reincore proves to be overall inferior to real-time video encoding in terms of runtime memory overhead. Conclusions. Overall, real-time encoding using H.264 or H.265 shows a frame-time result similar to recording rendering commands. However, command recording implies a more significant memory-usage overhead than real-time encoding. The frame time when using the VP9 codec for real-time encoding is inferior to recording rendering API commands.
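The core idea of replacing real-time encoding with command recording can be sketched as a bounded per-frame command history that is re-executed on demand. This is a generic illustration of the pattern, not Reincore's actual design; the class and method names are hypothetical:

```python
from collections import deque

class CommandRecorder:
    """Bounded history of per-frame rendering commands (illustrative sketch)."""

    def __init__(self, max_frames):
        self.frames = deque(maxlen=max_frames)  # oldest frames drop off automatically
        self.current = []

    def record(self, name, *args):
        """Log one rendering command for the frame in progress."""
        self.current.append((name, args))

    def end_frame(self):
        """Commit the finished frame to the replay history."""
        self.frames.append(self.current)
        self.current = []

    def replay(self, execute):
        """Deferred work: re-execute every recorded command in order,
        e.g. feeding the re-rendered frames to a software encoder."""
        for frame in self.frames:
            for name, args in frame:
                execute(name, *args)
```

The memory trade-off the abstract reports is visible here: the history holds every command (and any resources they reference) for the whole replay window, whereas an encoder would only hold compressed frames.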
37

Real time rendering and modification of scenes with complex materials

Pugh, Christopher M. 01 January 2010 (has links)
Realistic rendering of 3D graphics scenes often requires large amounts of data and processing. High-resolution texture data, complex BRDFs, surface modification, and global illumination effects are often necessary to realistically render a synthetic scene, but achieving such effects with a reasonable balance between performance and quality in real time remains a challenge. Virtual texture techniques have been developed in order to manage extremely high resolution texture data. This thesis describes the implementation of a technique which allows writing of projected texture data to a virtual texture in real time, allowing unlimited numbers of permanent, highly detailed surface modifications without the performance or accuracy limitations of the decal techniques used in current games. It also describes an implementation of a real-time renderer which uses measured BRDF data, and discusses how applying virtual texturing to measured BRDF data may allow accurate, fast rendering with realistic materials. Finally, it discusses how the virtual decal system can be used to allow artists or game players to interactively alter the material composition of scenes with many distinct measured BRDFs.
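The virtual texturing that this abstract builds on boils down to a page-table indirection: a huge virtual texture is split into fixed-size pages, of which only a subset is resident in a physical cache. A minimal CPU-side sketch of the address translation, with an assumed page-table layout (not the thesis's implementation):

```python
def virtual_to_physical(u, v, page_table, page_size, virt_pages):
    """Resolve a virtual-texture coordinate (u, v in [0, 1)) to a physical-cache
    texel, given a hypothetical page table mapping virtual pages to cache slots."""
    px = int(u * virt_pages * page_size)  # virtual texel coordinates
    py = int(v * virt_pages * page_size)
    page = page_table[py // page_size][px // page_size]  # (cache_x, cache_y) or None
    if page is None:
        return None  # page not resident; a real system would queue a load request
    cx, cy = page
    return (cx * page_size + px % page_size, cy * page_size + py % page_size)
```

Writing a projected decal into the virtual texture then amounts to translating the touched virtual texels the same way and updating the resident pages in place, which is why the modifications persist at no per-frame cost.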
38

FOLAR: A FOggy-LAser Rendering Method for Interaction in Virtual Reality / FOLAR: En FOggy-LAser Rendering Metod för Interaktion i Virtual Reality

Zhang, Tianli January 2020 (has links)
Current commercial Virtual Reality (VR) headsets give viewers immersion in virtual space with stereoscopic graphics and positional tracking. Developers can create VR applications in a working pipeline similar to creating 3D games using game engines. However, the characteristics of VR headsets put the billboard-sprite particle system rendering technique at a disadvantage. In our study, we propose a rendering technique called the FOggy-LAser Rendering method (FOLAR), which renders realistic lasers in fog on billboard sprites. With this method, we can compensate for the disadvantages of using particle systems and still render the graphics at interactive performance for VR. We studied the characteristics of this method through performance benchmarks and by comparing the rendered result to a baseline ray-casting method. A user study and image similarity metrics are involved in the comparison study. As a result, we observed satisfying performance and a rendering result similar to ray-casting. However, the user study still shows a significant difference in the rendered results between the methods. These results imply that FOLAR is an acceptable method for its performance and correctness in the rendered result, but it still has inevitable trade-offs in the graphics.
39

Real-time rendering of heterogeneous semi-transparent materials / Rendu de matériaux semi-transparents hétérogènes en temps réel

Blanchard, Eric 06 1900 (has links)
We find in nature several semi-transparent materials such as marble, jade or skin, as well as liquids such as milk or juices. Whether it be for digital movies or video games, having an efficient method to render these materials is an important goal. Although a large body of previous academic work exists in this area, few of these works provide an interactive solution. This thesis presents a new method for simulating light scattering inside heterogeneous semi-transparent materials in real time. The core of our technique relies on a geometric mesh voxelization to simplify the diffusion domain. The diffusion process solves the diffusion equation in order to achieve a fast and efficient simulation.
Our method differs mainly from previous approaches by its completely dynamic execution, requiring no pre-computations and hence allowing complete deformations of the geometric mesh.
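The iterative solution of the diffusion equation on a voxel grid, as described above, can be illustrated in one dimension. This sketch uses plain Jacobi relaxation of the steady-state (Laplace) case on a line of voxels with fixed boundary values; the thesis works on a full 3-D voxelization with material coefficients, so this only shows the shape of the iteration:

```python
def jacobi_diffusion(u, iters):
    """Jacobi relaxation of the steady-state diffusion (Laplace) equation
    on a 1-D line of voxels with fixed (Dirichlet) end values."""
    for _ in range(iters):
        u = ([u[0]]
             + [(u[i - 1] + u[i + 1]) / 2.0 for i in range(1, len(u) - 1)]
             + [u[-1]])
    return u

# Light injected at the left boundary diffuses toward the dark right side,
# converging to a linear falloff across the interior voxels.
profile = jacobi_diffusion([1.0, 0.0, 0.0, 0.0, 0.0], iters=200)
```

Because each Jacobi sweep only reads the previous iterate, every voxel updates independently, which is what makes this family of solvers a good fit for GPU execution and for the fully dynamic, precomputation-free behavior the abstract emphasizes.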
40

[en] REAL-TIME SHADOW MAPPING TECHNIQUES FOR CAD MODELS / [pt] GERAÇÃO DE SOMBRAS EM TEMPO REAL PARA MODELOS CAD

VITOR BARATA RIBEIRO BLANCO BARROSO 21 May 2007 (has links)
[en] Shadow mapping is a widely used rendering technique for shadow generation on arbitrary surfaces. However, because of the limited resolution available for sampling the scene, the algorithm presents two difficult problems to be solved: the incorrect self-shadowing of objects and the jagged appearance of shadow borders, also known as aliasing. Generating shadows for CAD (Computer-Aided Design) models presents additional challenges, due to the existence of many thin complex-silhouette objects and the high depth complexity. In this work, we present a detailed analysis of self-shadowing and aliasing by reviewing and building on works from different authors.
We also propose some improvements to existing algorithms: sample alignment without vertex shaders, a generalized parameter for the LiSPSM (Light-Space Perspective Shadow Map) algorithm, and an adaptive z-partitioning scheme. Finally, we investigate the effectiveness of different algorithms when applied to CAD models, considering ease of implementation, visual quality and computational efficiency.
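The z-partitioning family of techniques this abstract extends splits the view depth range into sub-ranges, each with its own shadow map. A common non-adaptive baseline is the "practical" split scheme, which blends logarithmic and uniform split positions; the sketch below shows that baseline (the thesis's adaptive scheme would adjust these positions from the scene, which is not reproduced here):

```python
import math

def practical_splits(near, far, count, blend=0.75):
    """Depth-range split positions blending logarithmic and uniform schemes.

    blend=1.0 gives pure logarithmic splits (tight near the camera),
    blend=0.0 gives pure uniform splits.
    """
    splits = []
    for i in range(count + 1):
        t = i / count
        log_d = near * (far / near) ** t          # logarithmic split position
        uni_d = near + (far - near) * t           # uniform split position
        splits.append(blend * log_d + (1.0 - blend) * uni_d)
    return splits

# Four partitions over a [1, 100] depth range.
parts = practical_splits(1.0, 100.0, count=4)
```

The logarithmic bias concentrates shadow-map resolution near the viewer, which is where projective aliasing is most visible.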
