About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Its metadata is collected from universities around the world; if you manage a university, consortium, or country archive and would like to be added, details can be found on the NDLTD website.

BioSpec: A Biophysically-Based Spectral Model of Light Interaction with Human Skin

Krishnaswamy, Aravind January 2005
Despite the notable progress in physically-based rendering, there is still a long way to go before we can automatically generate predictable images of biological materials. In this thesis, we address an open problem in this area, namely the spectral simulation of light interaction with human skin, and propose a novel biophysically-based model that accounts for all components of light propagation in skin tissues, namely surface reflectance, subsurface reflectance and transmittance, and the biological mechanisms of light absorption by pigments in these tissues. The model is controlled by biologically meaningful parameters, and its formulation, based on standard Monte Carlo techniques, enables its straightforward incorporation into realistic image synthesis frameworks. Besides its biophysically-based nature, the key difference between the proposed model and existing skin models is its comprehensiveness, i.e., it computes both spectral (reflectance and transmittance) and scattering (bidirectional surface-scattering distribution function) quantities for skin specimens. In order to assess the predictability of our simulations, we evaluate their accuracy by comparing results from the model with actual measured skin data. We also present computer-generated images to illustrate the flexibility of the proposed model with respect to variations in the biological input data, and its applicability not only in the predictive image synthesis of different skin tones, but also in the spectral simulation of medical conditions.
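As a rough illustration of the standard Monte Carlo techniques this abstract refers to, the sketch below estimates straight-through transmittance of a two-layer tissue stack using per-layer Beer-Lambert survival tests. The layer names, absorption coefficients, and thicknesses are invented for illustration; they are not BioSpec parameters, and scattering is ignored entirely.

```python
import math
import random

# Hypothetical absorption coefficients (1/cm) and thicknesses (cm) for two
# illustrative skin layers; real values depend on wavelength and pigmentation.
LAYERS = [
    {"name": "epidermis", "mu_a": 4.0, "thickness": 0.01},
    {"name": "dermis",    "mu_a": 1.5, "thickness": 0.20},
]

def transmitted_fraction(n_photons=100_000, rng=random.random):
    """Estimate the fraction of photon packets transmitted straight through
    the layer stack, using a Beer-Lambert survival test per layer."""
    survived = 0
    for _ in range(n_photons):
        alive = True
        for layer in LAYERS:
            # Survival probability across this layer: exp(-mu_a * d)
            p_survive = math.exp(-layer["mu_a"] * layer["thickness"])
            if rng() > p_survive:
                alive = False
                break
        if alive:
            survived += 1
    return survived / n_photons
```

The analytic answer is exp(-Σ mu_a·d); the Monte Carlo estimate converges to it as the photon count grows, which is a useful sanity check for any such simulation.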

Alternating Physically Based Rendering in Low-lit Areas

Kupersmidt, Itamar January 2018
Background: Screen resolution has increased from HD to Ultra-HD during the last decade. A modern game at Ultra-HD resolution has over eight million pixels that need to be shaded; combined with the expensive Physically Based Rendering shading method, the equations needed to calculate each pixel are numerous. Objectives: This study aims to remove complexity from the Physically Based Rendering shading method, in the form of roughness, in low-lit areas. The low-lit areas are instead rendered without the roughness attribute; by removing roughness, fewer calculations need to be performed. Methods: To remove roughness from low-lit areas, the light had to be approximated using a diffuse model. Each pixel was converted via Hue Saturation Perceived Brightness to calculate its brightness. If the pixel fell under a given threshold, it was shaded using a low-complexity Physically Based Rendering implementation without roughness. A user study was conducted in the Unity game engine, with eight participants asked to compare stimuli, each rendered with a different darkness threshold, against a reference picture. The aim of the study was to ascertain whether the stimuli without roughness had any perceivable difference from the reference. Results: The results show that the majority of the participants noticed a difference when comparing the stimuli with the reference. The areas affected were not only the low-lit areas but the whole scene: energy conservation without the roughness value made the whole scene appear darker. Conclusions: The roughness value is an integral part of energy conservation, and without it the scene appears much darker. While the majority of participants noticed a difference, the lowest threshold resembled the original the most.
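The thresholding step this abstract describes — converting a pixel to Hue Saturation Perceived Brightness (HSP) and falling back to a roughness-free shading path below a threshold — can be sketched as follows. The HSP weights are the standard published ones; the threshold value and function names are illustrative assumptions, not the thesis's actual implementation.

```python
import math

def hsp_brightness(r, g, b):
    """Perceived brightness from the HSP colour model (channels in [0, 1])."""
    return math.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b)

def use_simplified_shading(pixel_rgb, threshold=0.25):
    """Decide whether a pixel is 'low-lit' enough to be shaded with the
    cheaper, roughness-free PBR path. The threshold here is illustrative."""
    return hsp_brightness(*pixel_rgb) < threshold

# A dark pixel falls below the threshold; a bright one does not:
# use_simplified_shading((0.05, 0.05, 0.1)) -> True
# use_simplified_shading((0.9, 0.8, 0.7))   -> False
```

In a real renderer this decision would run per fragment in a shader, but the logic is the same: one brightness evaluation gates the expensive specular roughness terms.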

Modeling of the Visual Appearance of Materials: Physically Based Rendering

Dumazet, Sylvain 26 February 2010
Lying at the boundary between computer graphics and physics, physically based image rendering is a field that attempts to create images by simulating the optical behaviour of materials interacting with light. The applications are numerous: virtual restoration of cultural-heritage artefacts, simulation of optical effects, industrial rendering for design, and even computer-aided design of colour. This thesis presents the work carried out on the Virtuelium project, a physically based rendering software package whose fourth version was developed during this thesis. It presents the principles and methodologies used for the measurements and for validating the results. We also present several studies performed with this tool during the thesis, from virtual restoration to bio-photonics, including an overview of the rendering of "effect" materials such as iridescent pigments for industrial applications (paints, inks, cosmetics, etc.).

Polarising Versions of Glossy BRDF Models

Bártová, Kristina January 2014
The goal of computer graphics is to precisely model the appearance of real objects, which includes the interactions of light with various materials. Polarisation is one of the fundamental properties of light. Incorporating a polarisation parameter into an illumination model can significantly enhance the physical realism of rendered images for scenes that include multiple light bounces via specular surfaces. However, most rendering systems do not take polarisation into account because of the complexity of such a solution. The key components for obtaining physically correct images are realistic, polarisation-capable BRDF (Bidirectional Reflectance Distribution Function) models. Within this thesis, polarising versions of the following BRDF models were theoretically derived: Torrance-Sparrow, He-Torrance-Sillion-Greenberg, and Weidlich-Wilkie. For each of these models, Mueller matrices (the mathematical construct used to describe polarising surface reflectance) were systematically derived, and their behaviour was tested under various input parameters using Wolfram Mathematica. The derived polarising glossy BRDF models were further implemented in ART (Advanced Rendering Toolkit), a rendering research system. As far as we know, this is the very first use of these BRDF models in a polarisation renderer.
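As a minimal illustration of the Mueller-matrix machinery the thesis derives for glossy BRDFs, the sketch below applies the textbook Mueller matrix of an ideal horizontal linear polariser to an unpolarised Stokes vector. This is a far simpler optical element than a polarising BRDF; it is shown only to fix the notation, and is not code from the thesis or from ART.

```python
def apply_mueller(M, stokes):
    """Apply a 4x4 Mueller matrix to a Stokes vector (I, Q, U, V).

    The Stokes vector describes a polarisation state; the Mueller matrix
    maps the incoming state to the outgoing one on interaction.
    """
    return [sum(M[i][j] * stokes[j] for j in range(4)) for i in range(4)]

# Textbook Mueller matrix of an ideal horizontal linear polariser.
M_polariser = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

unpolarised = [1.0, 0.0, 0.0, 0.0]  # unit-intensity unpolarised light
out = apply_mueller(M_polariser, unpolarised)
# out == [0.5, 0.5, 0.0, 0.0]: half the intensity passes,
# and the transmitted light is fully horizontally polarised.
```

A polarising BRDF model replaces the scalar reflectance of a conventional renderer with such a matrix for every surface interaction, which is where the implementation complexity the abstract mentions comes from.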

Learning from 3D generated synthetic data for unsupervised anomaly detection

Fröjdholm, Hampus January 2021
Modern machine learning methods utilising neural networks require a lot of training data. Data gathering and preparation have thus become a major bottleneck in the machine learning pipeline, and researchers often use large public datasets, such as ImageNet [1] or MNIST [2], to conduct their research. As these methods begin to be used in industry, the challenges become apparent: in factories, the objects being produced are often unique and may even involve trade secrets and patents that need to be protected. Additionally, manufacturing may not have started yet, making real data collection impossible. In both cases a public dataset is unlikely to be applicable. One possible solution, investigated in this thesis, is synthetic data generation. Synthetic data generation using physically based rendering was tested for unsupervised anomaly detection on a 3D-printed block. A small image dataset of the block was gathered as a control, and a data generation model was created from its CAD model, a resource most often available in industrial settings. The data generation model used randomisation to reduce the domain shift between the real and synthetic data. To evaluate the data, autoencoder models were trained on the real and synthetic data, both separately and in combination. The material of the block, a white painted surface, proved challenging to reconstruct, and no significant difference between the synthetic and real data could be observed. The model trained on real data outperformed the models trained on synthetic and combined data. However, the synthetic data combined with the real data showed promise in reducing some of the bias intentionally introduced in the real dataset. Future research could focus on creating synthetic data for a problem where a good anomaly detection model already exists, with the goal of transferring parts of the synthetic data generation model (such as the materials) to a new problem. This would be of interest in industries that produce many different but similar objects, and could reduce the time needed when starting a new machine learning project.
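A common way to turn a trained autoencoder into an unsupervised anomaly detector, as in the setup above, is to score each image by its reconstruction error and set a threshold from the scores of known-normal data. The sketch below shows only that scoring-and-thresholding step on toy residuals; the data, quantile, and function names are assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def anomaly_scores(images, reconstructions):
    """Per-image mean squared reconstruction error; higher = more anomalous."""
    diff = images - reconstructions
    return np.mean(diff.reshape(len(images), -1) ** 2, axis=1)

def fit_threshold(normal_scores, quantile=0.95):
    """Pick a threshold from scores of known-normal (training) images."""
    return np.quantile(normal_scores, quantile)

# Toy example: pretend a trained autoencoder reproduces normal images well
# (small residuals) and anomalous ones poorly (large residuals).
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.01, size=(100, 8, 8))   # small residuals
anomalous = rng.normal(0.0, 0.5, size=(5, 8, 8))   # large residuals

thr = fit_threshold(anomaly_scores(normal, np.zeros_like(normal)))
flags = anomaly_scores(anomalous, np.zeros_like(anomalous)) > thr
# flags: every anomalous image exceeds the normal-data threshold
```

The same scoring works regardless of whether the autoencoder was trained on real, synthetic, or combined data, which is what makes the comparison in the thesis possible.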

An Analysis of Real-Time Ray Tracing Techniques Using the Vulkan® Explicit API

Souza, Elleis C 01 June 2021
In computer graphics applications, the choice and implementation of a rendering technique is crucial when targeting real-time performance. Traditionally, rasterization-based approaches have dominated the real-time sector; other algorithms were simply too slow to compete on consumer graphics hardware. With the addition of hardware support for ray-intersection calculations on modern GPUs, hybrid ray tracing/rasterization and purely ray-traced approaches have become possible in real time as well. Industry real-time graphics applications, namely games, have been exploring these different rendering techniques with great success. The addition of ray tracing to the graphics developer's toolkit has without a doubt raised the level of graphical fidelity achievable in real time. In this thesis, three rendering techniques are implemented in a custom rendering engine built on the Vulkan® Explicit API, each representing a different family of modern real-time rendering algorithms: a largely rasterization-based method, a hybrid ray tracing/rasterization method, and a method using ray tracing alone. Both the hybrid and the purely ray-traced approach rely on the ReSTIR algorithm for lighting calculations. Analysis of the performance and render quality of these approaches reveals the trade-offs incurred by each, alongside the performance viability of each in a real-time setting.
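The ReSTIR algorithm mentioned in this abstract builds on weighted reservoir sampling for resampled importance sampling of light candidates. The sketch below shows the core single-sample reservoir in isolation, in Python rather than shader code; the weights, candidate counts, and class layout are illustrative assumptions, not code from the thesis.

```python
import random

class Reservoir:
    """Single-sample weighted reservoir: a simplified sketch of the data
    structure at the heart of ReSTIR-style light sampling."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0
        self.count = 0

    def update(self, candidate, weight, rng=random.random):
        """Stream in one weighted candidate light sample."""
        self.w_sum += weight
        self.count += 1
        # Replace the kept sample with probability weight / w_sum; this
        # leaves each candidate selected with probability proportional
        # to its weight, using O(1) memory per pixel.
        if self.w_sum > 0 and rng() < weight / self.w_sum:
            self.sample = candidate

# Streaming 1000 candidate light samples with weight proportional to index:
random.seed(1)
r = Reservoir()
for i in range(1, 1001):
    r.update(candidate=i, weight=float(i))
# r.sample is distributed proportionally to the weights, so heavier
# (here: later) candidates are far more likely to be the one kept.
```

In full ReSTIR, reservoirs are additionally merged across neighbouring pixels and across frames (spatial and temporal reuse), but each merge is just another weighted `update` call.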

Importance Sampling of Realistic Light Sources

Lu, Heqi 27 February 2014
Realistic images can be rendered by simulating light transport with Monte Carlo techniques. The possibility of using realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, those based on environment maps and light fields are attractive due to their ability to capture the far-field and near-field effects faithfully, as well as the possibility of acquiring them directly. Since acquired light sources have arbitrary frequencies and possibly high dimension (4D), using them for realistic rendering leads to performance problems. In this thesis, we focus on how to balance the accuracy of the representation and the efficiency of the simulation. Our work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation. We introduce three novel methods. The first generates high-quality samples efficiently from dynamic environment maps that change over time. We achieve this with a GPU approach that generates light samples according to an approximation of the form factor and combines them with samples from BRDF sampling for each pixel of a frame. Our method is accurate and efficient: with only 256 samples per pixel, we achieve high-quality results in real time at 1024 × 768 resolution. The second is an adaptive sampling strategy for light-field light sources (4D): we generate high-quality samples efficiently by conservatively restricting the sampling area without reducing accuracy. With a GPU implementation and without any visibility computations, we achieve high-quality results with 200 samples per pixel in real time at 1024 × 768 resolution. The performance remains interactive as long as visibility is computed using our shadow-map technique. We also provide a fully unbiased approach by replacing the visibility test with an offline CPU approach. Since light-based importance sampling is not very effective when the underlying material of the geometry is specular, we introduce a new balancing technique for Multiple Importance Sampling, which allows us to combine other sampling techniques with our light-based importance sampling. By minimizing the variance based on a second-order approximation, we are able to find a good balance between the different sampling techniques without any prior knowledge. Our method is effective, since it reduces the variance on average for all of our test scenes with different light sources, visibility complexities, and materials. It is also efficient: the overhead of our "black-box" approach is constant and represents 1% of the whole rendering process.
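Light-based importance sampling of an environment map, as discussed above, boils down to building a cumulative distribution over texel luminance and inverting it with a uniform random number, so bright texels are sampled often. The toy 1D example below illustrates that inversion under assumed values; it is not the thesis's GPU implementation, and a real environment map would be 2D with a solid-angle correction per row.

```python
import bisect
import random

def build_cdf(luminance):
    """Cumulative distribution over texel luminance (flattened map)."""
    total = float(sum(luminance))
    cdf, acc = [], 0.0
    for lum in luminance:
        acc += lum / total
        cdf.append(acc)
    return cdf

def sample_texel(cdf, u):
    """Invert the CDF: map a uniform u in [0, 1) to a texel index, so
    texels are chosen with probability proportional to their luminance."""
    return bisect.bisect_right(cdf, u)

# Toy 1D 'environment map': texel 3 is a bright light source.
lum = [0.1, 0.1, 0.1, 9.0, 0.1, 0.1]
cdf = build_cdf(lum)

random.seed(0)
hits = sum(1 for _ in range(10_000) if sample_texel(cdf, random.random()) == 3)
# hits / 10_000 is close to 9.0 / 9.5: the bright texel dominates the samples.
```

An unbiased estimator then divides each sample's contribution by its selection probability (lum[i] / sum(lum)), which is exactly the weight that Multiple Importance Sampling later combines with BRDF-sampling weights.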
