51

Cutaway Algorithm with Context Preservation for Reservoir Model Visualization

LUIZ FELIPE NETTO 11 January 2017 (has links)
Numerical simulation of black oil reservoirs is widely used in the oil and gas industry. The reservoir is represented by a model of hexahedral cells with associated properties, and numerical simulation is used to predict the behavior of fluids in the model. Specialists analyze such simulations by inspecting the three-dimensional model in an interactive graphical environment. In this work, we propose a new cutaway algorithm with context preservation to aid inspection of the model. The main goal is to allow the specialist to visualize the wells and their vicinity: the wells represent the object of interest that must remain visible, while the three-dimensional model (the context) is preserved around them as far as possible. In this way, the distribution of cell properties can be visualized together with the object of interest. The proposed algorithm makes use of graphics processing units and is valid for arbitrary objects of interest. We also propose an extension that decouples the cut section from the camera, allowing the cut model to be analyzed from different points of view. The effectiveness of the proposed algorithm is demonstrated by a set of results based on actual reservoir models.
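
The abstract does not spell out how the cut surface is built, but context-preserving cutaways are commonly formulated as a screen-space distance transform in the style of Burns and Finkelstein: each pixel stores the depth of a cone-shaped cut surface anchored at the object of interest, and context fragments in front of that surface are discarded. A minimal CPU sketch of that general formulation — the thesis's actual GPU algorithm and parameters are not reproduced here:

```python
import numpy as np

def cutaway_surface(interest_depth, slope=1.0):
    """Per-pixel depth of a cutaway surface around the object of interest.

    `interest_depth` holds the depth of the object of interest where it
    covers a pixel and np.inf elsewhere (depth grows away from the camera).
    The surface is the upper envelope of cones apexed at the interest
    pixels and opening toward the viewer; context fragments nearer to the
    camera than this surface are cut away.
    """
    h, w = interest_depth.shape
    py, px = np.mgrid[0:h, 0:w]
    cut = np.full((h, w), -np.inf)
    for y, x in zip(*np.nonzero(np.isfinite(interest_depth))):
        dist = np.hypot(py - y, px - x)   # screen-space distance to apex
        cut = np.maximum(cut, interest_depth[y, x] - slope * dist)
    return cut

# While rendering the context, a fragment at pixel p with depth z is kept
# only if z >= cut[p]. This brute-force loop is for illustration; real-time
# versions build the distance transform on the GPU.
```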
52

Rendering Using Deep Shadow Maps

Rejent, Tomáš January 2014 (has links)
Rendering shadows of transparent objects in real-time applications is difficult, and the number of usable methods is limited by the available computing power. This document describes the depth peeling and dual depth peeling methods, which allow transparent objects to be rendered without sorting them, and deep shadow maps as a method for rendering the shadows of transparent objects. These methods were used to create a demonstration application that renders transparent objects and their shadows, including colored ones. The application is built on OpenGL and the Qt framework. An evaluation of rendering speed under various parameters is also part of this work.
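
For orientation, a deep shadow map (Lokovic and Veach) stores per texel not a single depth but a visibility function: the fraction of light that penetrates to each depth along the light ray. A minimal sketch of evaluating such a function from the semi-transparent fragments collected for one texel; the thesis's exact storage and compression scheme is not reproduced:

```python
def transmittance(fragments, z):
    """Visibility of one deep-shadow-map texel at depth z.

    `fragments` is a list of (depth, alpha) pairs for the semi-transparent
    surfaces the light ray crosses; light reaching depth z is attenuated
    by every fragment in front of it.
    """
    t = 1.0
    for depth, alpha in sorted(fragments):
        if depth >= z:
            break
        t *= 1.0 - alpha                  # each layer absorbs its alpha
    return t

# Colored shadows follow the same pattern by storing an (r, g, b) alpha
# per fragment and keeping one transmittance per channel.
```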
53

Real-time Realistic Rendering And High Dynamic Range Image Display And Compression

Xu, Ruifeng 01 January 2005 (has links)
This dissertation focuses on the many issues that arise from the visual rendering problem. Of primary consideration is light transport simulation, which is known to be computationally expensive. Monte Carlo methods represent a simple and general class of algorithms often used for light transport computation. Unfortunately, the images resulting from Monte Carlo approaches generally suffer from visually unacceptable noise artifacts. The result of any light transport simulation is, by its very nature, an image of high dynamic range (HDR). This leads to the issues of displaying such images on conventional low dynamic range devices and of developing data compression algorithms to store and recover the large amounts of detail found in HDR images. This dissertation presents our contributions relevant to these issues.

Our contributions to high dynamic range image processing include tone mapping and data compression algorithms. This research proposes and shows the efficacy of a novel level-set-based tone mapping method that preserves visual details when displaying high dynamic range images on low dynamic range devices. The level set method is used to extract the high-frequency information from HDR images; the details are then added to the range-compressed low-frequency information to reconstruct a visually accurate low dynamic range version of the image. Additional challenges associated with high dynamic range images include excessively large storage and transmission requirements. To alleviate these problems, this research presents two methods for efficient high dynamic range image data compression. One is based on classical JPEG compression: it first converts the raw image into the RGBE representation, and then sends the color base and common exponent to classical discrete-cosine-transform-based compression and lossless compression, respectively. The other is based on the wavelet transform: it first transforms the raw image data into the logarithmic domain, then quantizes the logarithmic data into the integer domain, and finally applies the wavelet-based JPEG2000 encoder for entropy compression and bit stream truncation to meet the desired bit rate. We believe that these and similar contributions will make wide application of high dynamic range images possible.

The contributions to light transport simulation include Monte Carlo noise reduction, dynamic object rendering, and complex scene rendering. Monte Carlo noise is an inescapable artifact in synthetic images rendered using stochastic algorithms. This dissertation proposes two noise reduction algorithms to obtain high-quality synthetic images: the first models the distribution of noise in the wavelet domain using a Laplacian function and then suppresses the noise with a Bayesian method; the other extends bilateral filtering to reduce all types of Monte Carlo noise in a unified way. Both methods reduce Monte Carlo noise effectively. Rendering of dynamic objects adds another dimension to the already expensive light transport simulation problem. This dissertation presents a precomputation-based method: it precomputes the surface radiance for each basis lighting and animation key frame, and then renders the objects by synthesizing the precomputed data in real time. Realistic rendering of complex scenes is computationally expensive. This research proposes a novel 3D space subdivision method, which leads to a new rendering framework: the light is first distributed to each local region to form local light fields, which are then used to illuminate the local scenes. The method allows us to render complex scenes at interactive frame rates.

Rendering has important applications in mixed reality, where consistent lighting and shadows between real and virtual scenes are important features of visual integration. The dissertation proposes to render the virtual objects by irradiance rendering using live-captured environmental lighting, and introduces a virtual shadow generation method that computes the shadows cast by virtual objects onto the real background. We conclude the dissertation by discussing a number of future directions for rendering research and presenting our proposed approaches.
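
To make the first codec concrete: RGBE is Greg Ward's HDR pixel format, which keeps an 8-bit mantissa per color channel plus one shared 8-bit exponent. A minimal sketch of the classic packing — the dissertation may use a variant, and rounding details are simplified here:

```python
import numpy as np

def rgb_to_rgbe(rgb):
    """Pack non-negative float RGB radiance into 32-bit RGBE."""
    rgbe = np.zeros(rgb.shape[:-1] + (4,), dtype=np.uint8)
    maxc = rgb.max(axis=-1)
    valid = maxc > 1e-32
    mant = np.zeros_like(maxc)
    exp = np.zeros(maxc.shape, dtype=np.int32)
    mant[valid], exp[valid] = np.frexp(maxc[valid])   # maxc = mant * 2**exp
    scale = np.where(valid, mant * 256.0 / np.maximum(maxc, 1e-32), 0.0)
    rgbe[..., :3] = (rgb * scale[..., None]).astype(np.uint8)
    rgbe[..., 3] = (exp + 128) * valid   # shared exponent, biased by 128
    return rgbe

def rgbe_to_rgb(rgbe):
    """Inverse mapping back to float RGB radiance."""
    exp = rgbe[..., 3].astype(np.int32) - 128
    scale = np.ldexp(1.0, exp - 8)       # 2**(exp - 8)
    valid = rgbe[..., 3] > 0
    return rgbe[..., :3] * (scale * valid)[..., None]
```

After this conversion, the three mantissa planes can be fed to a DCT-based coder and the exponent plane to a lossless coder, as the abstract describes.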
54

The State of Live Facial Puppetry in Online Entertainment

Gren, Lisa, Lindberg, Denny January 2024 (has links)
Avatars are used more and more in online communication, in both games and social media. At the same time, the technology for facial puppetry, where the expressions of the user are transferred to the avatar, has developed rapidly. Why is it that facial puppetry, despite this, is conspicuous by its absence?

This thesis analyzes the available and upcoming solutions for facial puppetry, whether a common framework or library can exist, and what can be done to simplify the process for developers who want to implement facial puppetry.

A survey was conducted to get a better understanding of the technology. It showed that there is no standard yet for how to describe facial expressions, but part of the market is converging toward a common format. It also showed that there is no existing interface that can handle communication with tracking devices or translation between different expression formats.

Several prototypes for recording and streaming facial expression data from different sources were implemented as a practical test, to evaluate the complexity of implementing real-time facial puppetry. It showed that it is not always possible to integrate the available tracking solutions into an existing project, and when integration was possible it required a lot of work. The best way to get tracking right now seems to be to implement a standalone tracking program that streams the tracked data to the main application (see the sketch below).

In summary, it is the poor integrability of the solutions, together with the wide variety of facial expression formats, that makes things problematic for developers. A piece of software that acts as a bridge between the tracking solutions and the game could allow translation between different formats and simplify the implementation of support. Instead of working toward making all tracking solutions output standardized tracking data, future research should look further into how to build a framework that can handle different configurations.

The thesis work was carried out at the Department of Science and Technology (ITN) at the Faculty of Science and Engineering, Linköping University.
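
A minimal sketch of the standalone-tracker pattern the thesis arrives at: one process tracks the face and streams expression weights to the main application over a local socket. Everything concrete here is an assumption for illustration — the 52-weight ARKit-style layout and the little-endian float32 packet format are our own choices, not a standard the thesis defines:

```python
import socket
import struct

NUM_WEIGHTS = 52                 # ARKit-style blendshape count (assumed)
PACKET = struct.Struct("<%df" % NUM_WEIGHTS)  # little-endian float32 array

def send_weights(sock, addr, weights):
    """Tracker side: fire-and-forget one frame of blendshape weights."""
    sock.sendto(PACKET.pack(*weights), addr)

def recv_weights(sock):
    """Application side: block until the next frame of weights arrives."""
    data, _ = sock.recvfrom(PACKET.size)
    return PACKET.unpack(data)

# Tracker process:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_weights(sock, ("127.0.0.1", 9000), weights)
# Main application:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("127.0.0.1", 9000))
#   weights = recv_weights(sock)
```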
55

Importance Sampling of Realistic Light Sources

Lu, Heqi 27 February 2014 (has links)
Realistic images can be rendered by simulating light transport with Monte Carlo techniques. The possibility of using realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, those based on environment maps and light fields are attractive due to their ability to capture faithfully the far-field and near-field effects, as well as the possibility of acquiring them directly. Since acquired light sources have arbitrary frequencies and possibly high dimension (4D), using such light sources for realistic rendering leads to performance problems.

In this thesis, we focus on how to balance the accuracy of the representation and the efficiency of the simulation. Our work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation. We introduce three novel methods.

The first generates high-quality samples efficiently from dynamic environment maps that change over time. We achieve this with a GPU approach that generates light samples according to an approximation of the form factor and combines them with BRDF samples for each pixel of a frame. Our method is accurate and efficient: with only 256 samples per pixel, we achieve high-quality results in real time at 1024 × 768 resolution.

The second is an adaptive sampling strategy for light field light sources (4D): we generate high-quality samples efficiently by conservatively restricting the sampling area without reducing accuracy. With a GPU implementation and without any visibility computations, we achieve high-quality results with 200 samples per pixel in real time at 1024 × 768 resolution. The rendering remains interactive as long as visibility is computed using our new shadow map technique, and we also provide a fully unbiased approach by replacing the visibility test with an offline CPU approach.

Finally, since light-based importance sampling is not very effective when the underlying material is specular, we introduce a new balancing technique for multiple importance sampling (MIS), which allows us to combine other sampling techniques with our light-based importance sampling. By minimizing the variance based on a second-order approximation, we are able to find a good balance between the different sampling techniques without any prior knowledge. Our method is effective, since it reduces the variance on average for all of our test scenes with different light sources, visibility complexity, and materials. It is also efficient: the overhead of our "black-box" approach is constant and represents 1% of the whole rendering process.
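
For reference, the fixed-weight combination that the thesis's balancing technique improves on is Veach's balance heuristic. A minimal sketch of combining light sampling with BRDF sampling under it — the thesis's second-order, variance-minimizing weights replace this fixed rule:

```python
def balance_heuristic(pdf_a, pdf_b):
    """Veach's balance heuristic: MIS weight for a sample drawn from
    strategy a when strategy b could also have produced it."""
    return pdf_a / (pdf_a + pdf_b)

def mis_light_contribution(f_value, pdf_light, pdf_brdf):
    """Weighted contribution of one light-sampled direction: w * f / p.
    The symmetric term is added for each BRDF-sampled direction."""
    w = balance_heuristic(pdf_light, pdf_brdf)
    return w * f_value / pdf_light
```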
56

Adaptive rendering of celestial bodies in WebGL

Zeitler, Jonas January 2015 (has links)
This report covers theory and a comparison of techniques for rendering massive-scale 3D geospatial planet data in a web browser. It also presents implementation details of a few of these techniques in WebGL and JavaScript, using the Three.js [1] 3D library. The thesis project is part of the implementation of Unitea, a web-based education platform for interactive astronomy visualizations; Unitea is a derivative of Uniview, a fulldome interactive simulation of the universe. A major part of this thesis is dedicated to the implementation of Hierarchical Level of Detail (HLOD) modules for Three.js, based on the theory presented by T. Ulrich [2] and later generalized by Cozzi and Ring [3]. HLOD techniques are dynamic level-of-detail algorithms that represent the surface of objects as accurately as possible from a certain viewing angle: by using space-partitioning tree structures, view-based error metrics, and culling techniques, detailed representations of the objects (in this case planets) can be efficiently rendered in real time. The modules developed provide a general-purpose library for rendering planets (or other spherical objects) with dynamic level of detail in Three.js. The library also features connections to online web map services (WMS) and tile services.
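
The view-based error metric at the heart of HLOD refinement can be summarized in a few lines: a node's geometric error is projected onto the screen, and the node is split while the projected error exceeds a pixel threshold. A sketch following the formulation popularized by Cozzi and Ring [3]; `tile` and `camera` are hypothetical stand-ins for the library's own structures:

```python
import math

def screen_space_error(geometric_error, distance, fov_y, viewport_height):
    """Project a node's geometric error (world units) to pixels."""
    k = viewport_height / (2.0 * math.tan(fov_y / 2.0))
    return geometric_error * k / distance

def should_refine(tile, camera, tau=2.0):
    """Descend to the node's children while the projected error
    exceeds tau pixels on screen."""
    sse = screen_space_error(tile.geometric_error,
                             camera.distance_to(tile),
                             camera.fov_y, camera.viewport_height)
    return sse > tau
```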
57

Representation and Computation of Visibility Information for the Interactive Exploration of Three-dimensional Scenes

Haumont, Dominique 29 May 2006 (has links)
Image synthesis, which consists of developing algorithms to generate images with a computer, has become essential in many disciplines.

Interactive display methods let the user explore virtual environments by rendering images at a rate high enough to give an impression of continuity and immersion. Despite the progress made by hardware, new demands always outstrip processing capacity, and acceleration techniques are needed to maintain a sufficient display rate. This work falls precisely within that scope. It is devoted to the problem of efficiently culling occluded objects in order to accelerate the display of complex scenes. We are particularly interested in precomputation methods, which perform the costly visibility computations during a preprocessing phase and reuse them during the interactive navigation phase. Methods allowing a complete and exact precomputation are still out of reach at present, which is why approximate techniques are preferred in practice. We propose three methods of this kind.

The first, presented in Chapter 4, is an algorithm that determines exactly whether two convex polygons are mutually visible when occluders are placed between them. Our main contributions are to simplify this query, both theoretically and in its implementation, and to accelerate its average execution time with a set of optimization techniques. The result is an algorithm considerably simpler to implement than the exact algorithms in the literature; we show that it is also much more efficient in computation time.

The second method, presented in Chapter 5, is an original approach to encoding visibility information, which consists of storing the shadow that each object of the scene would generate if it were replaced by a light source. We present an analysis of the advantages and drawbacks of this new representation.

Finally, in Chapter 6 we propose a visibility computation method suited to indoor scenes. In this type of environment, cells-and-portals graphs are widespread for occlusion culling because of their low memory cost and high efficiency. We reformulate the problem of generating these graphs in terms of image segmentation, and adapt a classical algorithm, the watershed, to obtain them automatically. We show that the decomposition computed in this way is close to the classical one and that it can be used for occlusion culling.
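
For context, the cells-and-portals graphs of Chapter 6 are consumed at runtime by a recursive traversal: starting from the viewer's cell, a neighboring cell is visited only if the portal leading to it survives the current view frustum, which is then narrowed to that portal. A schematic sketch under assumed `cell` and `frustum` interfaces; real traversals also handle revisiting a cell through different portals:

```python
def collect_visible_cells(cell, frustum, visible=None):
    """Recursive cells-and-portals occlusion culling (simplified).

    `cell.portals` is assumed to be a list of (polygon, neighbor) pairs,
    and `frustum.narrow_to(polygon)` to return the frustum clipped to the
    portal, or None when the portal lies entirely outside of it.
    """
    if visible is None:
        visible = set()
    visible.add(cell)
    for polygon, neighbor in cell.portals:
        narrowed = frustum.narrow_to(polygon)
        if narrowed is not None and neighbor not in visible:
            collect_visible_cells(neighbor, narrowed, visible)
    return visible
```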
58

Multi-resolution Representation Models for Photorealistic Rendering of Complex Materials

Baril, Jérôme 11 January 2010 (has links)
The emergence of digital capture devices has enabled the development of 3D acquisition to scan the properties of a real object: its shape and its appearance. This process provides a dense and accurate representation of real objects and makes it possible to avoid a costly physical simulation process when modeling an object. The issues have thus evolved: they no longer concern only the modeling of a real object's characteristics, but also the processing of acquired data so as to integrate a copy of reality into an image synthesis process. In this thesis, we propose new representations for acquired appearance functions, with the goal of defining a set of multiscale models of low storage complexity that can be rendered in real time on today's graphics hardware.
59

Visualization of Complex Natural Black Oil Reservoir Models

26 January 2017 (has links)
Recent advances in parallel architectures for the numerical simulation of natural black oil reservoirs have allowed the simulation of highly discretized domains. As a consequence, these simulations produce an unprecedented volume of data, which must be visualized in 3D environments for careful analysis and inspection of the model. Conventional scientific visualization techniques are not viable for such very large models, creating a demand for scalable visualization solutions. The need to visualize such complex data introduces several computational issues that must be addressed in order to achieve interactive rendering rates, such as the impossibility of storing the entire data set in main memory. Two main research areas propose solutions for the visualization of models of such magnitude: distributed rendering and multi-resolution techniques. This work proposes solutions for the visualization of massively complex reservoir models in each of these research areas, and discusses the advantages and limitations of each solution.

In the first part of the work, we propose a distributed system based on a sort-last approach for rendering such models on PC clusters, where each PC is equipped with multiple GPUs. Through efficient use of the available GPUs, combined with a pipelined implementation and the use of partial image composition on the cluster nodes, our proposal tackles the scalability issues that arise when using mid-to-large GPU clusters (see the compositing sketch below). The second part of the work proposes a hierarchical multi-resolution structure for black oil reservoir meshes, with a new simplification algorithm designed specifically for such meshes. The hierarchical structure introduces a less conservative projected-error estimate than related work. We propose a strategy that guarantees a minimum refresh rate for our multi-resolution rendering, which is the main goal of such systems. We then introduce a proposal for rendering data associated with the original reservoir mesh mapped onto the simplified meshes, such as the original grid wireframe and reservoir properties. This makes the multi-resolution structure independent of the properties generated by a simulation, guaranteeing its reuse across several simulations of the same model. Experimental results demonstrate the effectiveness of the proposed solutions.
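
The defining step of a sort-last renderer is the composition of the partial images produced by the nodes, resolved per pixel by depth. A minimal sketch of that resolve for opaque geometry — the thesis's pipelined, partial composition across multiple GPUs and cluster nodes is more elaborate:

```python
import numpy as np

def depth_composite(partials):
    """Combine (color, depth) partial renderings from several nodes by
    keeping, per pixel, the fragment nearest to the camera."""
    color, depth = (a.copy() for a in partials[0])
    for c, d in partials[1:]:
        nearer = d < depth           # pixels where this node's fragment wins
        color[nearer] = c[nearer]
        depth[nearer] = d[nearer]
    return color, depth
```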
60

Creation of the Graphic Library for VST Plug-Ins

Dufka, Filip January 2019 (has links)
This master's thesis covers the use of graphical user interfaces in audio plug-ins. The first part describes the structure and rendering techniques of graphical libraries for audio plug-ins, and questions their efficiency and the way they use memory. The next part compares these techniques with state-of-the-art methods used in computer graphics and the gaming industry, and analyzes their possible use in audio graphical interfaces. The thesis then addresses inefficient methods in frequently used situations by bringing deferred shading to an audio parameter editor, with the goal of photorealistic rendering. The second introduced technique, "Knob Normal Maps", reduces the number of images needed to render a turning knob from hundreds to six, with comparable results. The goal of the thesis was to create a graphical library: a library named RealBox was created, with the introduced techniques as its core features. The library reduces the work needed to build graphical user interfaces for both 2D and 3D use cases. Full class and method documentation for the RealBox library was assembled, and the library was tested during the creation of three VST plug-ins, with different approaches and an emphasis on quick work and fine rendering. The graphical library offers a new, faster way of creating audio plug-in interfaces.
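
The abstract only names the "Knob Normal Maps" idea, but its premise — store normals instead of hundreds of pre-shaded frames, then relight at any rotation — can be sketched. This is our simplified single-map reading, with nearest-neighbor sampling and plain Lambert shading; the thesis's six-image scheme and full material model are not reproduced:

```python
import numpy as np

def shade_knob(normal_map, angle, light=(0.4, 0.4, 0.8)):
    """Render a knob at an arbitrary rotation from one normal map.

    Rotate the map about its center, rotate each normal's tangential
    (x, y) components by the same angle, then shade; lighting stays
    fixed while the knob turns, which pre-shaded frames cannot give
    without one image per angle.
    """
    h, w, _ = normal_map.shape
    c, s = np.cos(angle), np.sin(angle)
    yy, xx = np.mgrid[0:h, 0:w] - np.array([h / 2.0, w / 2.0])[:, None, None]
    # inverse-rotate output pixels into source coordinates (nearest neighbor)
    sx = np.clip((c * xx + s * yy + w / 2).round().astype(int), 0, w - 1)
    sy = np.clip((-s * xx + c * yy + h / 2).round().astype(int), 0, h - 1)
    n = normal_map[sy, sx]
    nx, ny, nz = n[..., 0], n[..., 1], n[..., 2]
    n_rot = np.stack([c * nx - s * ny, s * nx + c * ny, nz], axis=-1)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n_rot @ l, 0.0, 1.0)   # Lambert intensity image
```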
