1 |
Designmöjligheter vid visualisering av nybyggnationer : En studie kring gestaltandet av dagsljusets flöde i en interiör över tid. Johansson, Oscar January 2011 (has links)
This thesis concerns the visualization of newly built housing, with the lighting of an interior over time as its central focus. A problem has been identified among Swedish companies' visualizations: they provide insufficient information about daylight. Users who are interested in a home and view visualizations of it therefore miss information about how the home looks at different times of day and different times of year. Such daylight information can be of great interest to users who, for example, purchase a newly built home before they can see it physically. Among the measures taken to create better visualizations is the proposal to use animation to show the flow of daylight in an interior. A realistic visual style, produced in the software 3D Studio Max, was used for the design work, since it offers advantages such as the ability to illustrate lighting conditions realistically. In the design work, a newly built development in Eskilstuna was visualized. A total of twelve clips of the building's interior were produced, one for each month of the year. Each clip shows the flow of daylight in the interior over a 24-hour period: in roughly ten seconds it depicts the sun rising, the sun setting, and the interior's lighting conditions in between.
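The clips themselves were produced in 3D Studio Max; purely as a hedged illustration of how the sun's path could be keyframed for one roughly ten-second clip per month, the sketch below uses a textbook solar-elevation approximation. The latitude, frame rate, clip length, day-of-year values, and function names are assumptions, not taken from the thesis.

    import math

    def solar_elevation(latitude_deg, day_of_year, solar_hour):
        """Approximate solar elevation angle (degrees) for a given day and hour."""
        # Solar declination, simple cosine approximation (degrees).
        decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
        # Hour angle: 15 degrees per hour away from solar noon.
        hour_angle = 15.0 * (solar_hour - 12.0)
        lat, decl_r, ha = map(math.radians, (latitude_deg, decl, hour_angle))
        sin_alt = (math.sin(lat) * math.sin(decl_r)
                   + math.cos(lat) * math.cos(decl_r) * math.cos(ha))
        return math.degrees(math.asin(sin_alt))

    # One ~10-second clip per month at 25 fps: 250 frames spanning 24 hours.
    LATITUDE = 59.37          # Eskilstuna (assumed value)
    FPS, CLIP_SECONDS = 25, 10
    FRAMES = FPS * CLIP_SECONDS

    for month, day_of_year in enumerate((15, 46, 74, 105, 135, 166,
                                          196, 227, 258, 288, 319, 349), start=1):
        # These (frame, elevation) pairs would drive the sun light's animation keys.
        keyframes = [(frame, solar_elevation(LATITUDE, day_of_year, 24.0 * frame / FRAMES))
                     for frame in range(FRAMES + 1)]
        above_horizon = sum(1 for _, alt in keyframes if alt > 0)
        print(f"month {month:2d}: sun above horizon in {above_horizon}/{FRAMES + 1} keyframes")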
|
2 |
An expansion of the 3D rendering discussion. Widell, Alex January 2018 (has links)
My work consists of a written part that includes images of practical investigations and references. The thesis is best read in chronological order, images included.
|
3 |
Impostor Rendering with Oculus Rift / Impostorrendering med Oculus Rift. Niemelä, Jimmy January 2014 (has links)
This report studies impostor rendering for use with the Oculus Rift virtual reality head-mounted display. The technique replaces 3D models with 2D versions to speed up rendering in a 3D engine. The report documents the process of developing a prototype in C++ and DirectX 11 and the research required to complete the assignment. It also covers the steps involved in getting Oculus Rift support to work in a custom 3D engine and measures the impact of impostor rendering when rendering to the two screens of the head-mounted display. The goal was to find the maximum number of models the engine could draw while keeping the frame rate locked at 60 frames per second. Two testers at Nordicstation concluded that 40-50 meters was the optimal distance for switching to impostors; any closer and the flatness was noticeable. The results showed a clear improvement in frame rate when rendering a graphically intensive scene. The end result showed that the goal could be achieved with a maximum of 3000 trees with 1000 leaves each. Impostor rendering was deemed effective when drawing more than 500 trees at a time; below that, the technique was not needed to achieve 60 frames per second.
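The prototype itself was written in C++ with DirectX 11; purely as an illustration of the distance-based switch the report describes, the Python sketch below sorts scene objects into full meshes and impostors around the 40-50 m threshold. The class and function names are hypothetical, not taken from the thesis.

    from dataclasses import dataclass
    import math

    IMPOSTOR_DISTANCE = 45.0   # metres; the report's testers settled on roughly 40-50 m

    @dataclass
    class Tree:
        x: float
        y: float
        z: float

    def split_scene(trees, camera_pos):
        """Split objects into full 3D models (near) and flat impostors (far)."""
        full_models, impostors = [], []
        cx, cy, cz = camera_pos
        for tree in trees:
            dist = math.sqrt((tree.x - cx) ** 2 + (tree.y - cy) ** 2 + (tree.z - cz) ** 2)
            # Beyond the threshold, a pre-rendered camera-facing quad stands in for
            # the detailed mesh, so the expensive geometry is never submitted.
            (impostors if dist > IMPOSTOR_DISTANCE else full_models).append(tree)
        return full_models, impostors

    # Example: a line of trees receding from a camera at the origin.
    trees = [Tree(0.0, 0.0, float(z)) for z in range(0, 200, 10)]
    near, far = split_scene(trees, (0.0, 0.0, 0.0))
    print(f"{len(near)} full meshes, {len(far)} impostors")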
|
4 |
Post-production of holoscopic 3D image. Abdul Fatah, Obaidullah January 2015 (has links)
Holoscopic 3D imaging, also known as "integral imaging", was first proposed by Lippmann in 1908. It is a promising technique for creating a full colour spatial image that exists in space. It uses a single lens aperture for recording spatial images of a real scene, and thus offers omnidirectional motion parallax and true 3D depth, which is the fundamental feature enabling digital refocusing. While stereoscopic and multiview 3D imaging systems simulate the human-eye technique, a holoscopic 3D imaging system mimics the fly's-eye technique, in which viewpoints are orthographic projections. This system enables a true 3D representation of a real scene in space and thus offers richer spatial cues than stereoscopic and multiview 3D systems. Focus has been the greatest challenge since the beginning of photography, and it is becoming even more critical in film production, where focus pullers find it difficult to achieve the right focus as camera resolution becomes ever higher. Holoscopic 3D imaging enables the user to carry out refocusing in post-production. There have been three main types of digital refocusing methods, namely shift and integration, full resolution, and full resolution with blind. However, these methods suffer from artifacts and unsatisfactory resolution in the final image; for instance, the artifacts appear as blocky and blurry pictures due to unmatched boundaries. An upsampling method is proposed that improves the resolution of the image produced by the shift-and-integration approach. Sub-pixel adjustment of elemental images, including an upsampling technique with smart filters, is proposed to reduce the artifacts introduced by the full-resolution-with-blind method as well as to improve both the image quality and the resolution of the final rendered image. A novel 3D object extraction method is proposed that takes advantage of disparity, and it is also applied to generate stereoscopic 3D images from a holoscopic 3D image. A cross-correlation matching algorithm is used to obtain the disparity map from the disparity information, and the desired object is then extracted. In addition, a 3D image conversion algorithm is proposed for the generation of stereoscopic and multiview 3D images from both unidirectional and omnidirectional holoscopic 3D images, which facilitates 3D content reformation.
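As a rough, hedged illustration of the shift-and-integration refocusing the abstract refers to (not the thesis's upsampling or smart-filter pipeline), the sketch below integrates a 1D row of elemental images with an index-proportional shift. The array shapes, function name, and toy data are assumptions.

    import numpy as np

    def refocus_shift_integrate(elemental_images, shift_px):
        """Shift-and-integration refocusing over a 1D row of elemental images.

        elemental_images: array of shape (num_lenses, h, w) holding the image behind
        each micro-lens; shift_px selects the synthetic focal plane.
        """
        num, h, w = elemental_images.shape
        accum = np.zeros((h, w), dtype=np.float64)
        for i, ei in enumerate(elemental_images):
            # Shift each elemental image proportionally to its lens index, then
            # integrate: content at the matching depth aligns and appears sharp,
            # while everything else averages out into blur.
            offset = int(round((i - num // 2) * shift_px))
            accum += np.roll(ei, offset, axis=1)
        return accum / num

    # Toy example: random "elemental images" from a 10-lens array.
    rng = np.random.default_rng(0)
    stack = rng.random((10, 32, 32))
    refocused = refocus_shift_integrate(stack, shift_px=1.0)
    print(refocused.shape)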
|
5 |
Audio and Visual Rendering with Perceptual Foundations. Bonneel, Nicolas 15 October 2009 (has links) (PDF)
Realistic visual and audio rendering still remains a technical challenge. Typical computers do not cope with the increasing complexity of today's virtual environments, both for audio and visuals, and the graphic design of such scenes requires talented artists. In the first part of this thesis, we focus on audiovisual rendering algorithms for complex virtual environments, which we improve by exploiting human perception of combined audio and visual cues. In particular, we developed a full perceptual audiovisual rendering engine integrating an efficient impact-sound rendering improved by exploiting our perception of audiovisual simultaneity, a way to cluster sound sources using the spatial tolerance humans have between a sound and its visual representation, and a combined level-of-detail mechanism for both audio and visuals that varies the impact-sound quality and the visually rendered material quality of the objects. All our crossmodal effects are supported by prior work in neuroscience and were demonstrated in our own experiments in virtual environments. In the second part, we use information present in photographs to guide visual rendering. We thus provide two different tools to assist "casual artists" such as gamers or engineers. The first extracts the visual appearance of hair from a photograph, allowing the rapid customization of avatars in virtual environments. The second allows fast previewing of 3D scenes that reproduces the appearance of an input photograph following a user's 3D sketch. We thus propose a first step toward crossmodal audiovisual rendering algorithms and develop practical tools for non-expert users to create virtual worlds using a photograph's appearance.
|
6 |
Optimalizace pro stereoskopické zobrazení / Optimalization for Stereoscopic Visualization. Zelníček, Leoš January 2009 (has links)
This work surveys the basic properties of the human visual sense and describes the nature of depth perception. We analyze human binocular vision and the limits that must be respected when projecting stereoscopic pictures. We then present the most common methods of stereoscopic projection and acquaint the reader with their capabilities, advantages, and difficulties. The largest part of this work is dedicated to the optimizations themselves, which focus on rendering 3D scenes as effectively as possible with stereoscopic technology. The aim is to build the best possible stereoscopic engine and to inspect and improve the displayed scene so that the stereo effect is as convincing as possible. A Trivisio ARvision-3D HMD was used for testing purposes.
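The abstract does not give the projection details of the engine; as an assumed illustration of one standard way to set up comfortable stereo rendering, the sketch below computes off-axis (parallel-axis asymmetric) frusta for the two eyes. The function name and all parameter values are illustrative, not taken from the thesis.

    import math

    def stereo_frusta(fov_y_deg, aspect, near, far, eye_separation, convergence):
        """Left/right off-axis frustum bounds (l, r, b, t, near, far) for stereo rendering."""
        top = near * math.tan(math.radians(fov_y_deg) / 2.0)
        bottom = -top
        half_width = top * aspect
        # Shift each eye's frustum horizontally so both converge on the same plane,
        # avoiding the vertical parallax introduced by simply toeing-in the cameras.
        shift = 0.5 * eye_separation * near / convergence
        left_eye = (-half_width + shift, half_width + shift, bottom, top, near, far)
        right_eye = (-half_width - shift, half_width - shift, bottom, top, near, far)
        return left_eye, right_eye

    # Example: 60 degree vertical FOV, 16:9 aspect, eyes 6.5 cm apart, converging at 2 m.
    left, right = stereo_frusta(60.0, 16.0 / 9.0, 0.1, 100.0, 0.065, 2.0)
    print(left)
    print(right)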
|
7 |
Deep Learning Approaches for Automatic Colorization, Super-resolution, and Representation of Volumetric Data. Devkota, Sudarshan 01 January 2023 (has links) (PDF)
This dissertation includes a collection of studies that aim to improve the way we represent and visualize volume data. The advancement of medical imaging has revolutionized healthcare, providing crucial anatomical insights for accurate diagnosis and treatment planning. Our first study introduces an innovative technique to enhance the utility of medical images, transitioning from monochromatic scans to vivid 3D representations. It presents a framework for reference-based automatic color transfer, establishing deep semantic correspondences between a colored reference image and grayscale medical scans. This methodology extends to volumetric rendering, eliminating the need for manual intervention in parameter tuning. The second study delves into deep-learning-based super-resolution for volume data. By leveraging color information and supplementary features, the proposed system efficiently upscales low-resolution renderings to achieve higher-fidelity results. Temporal reprojection further strengthens stability in volumetric rendering. The third contribution centers on the compression and representation of volumetric data, leveraging coordinate-based networks and multi-resolution hash encoding. This approach demonstrates superior compression quality and training efficiency compared to other state-of-the-art neural volume representation techniques. Furthermore, we introduce a meta-learning technique for weight initialization to expedite convergence during training. These findings collectively underscore the potential for transformative advancements in large-scale data visualization and related applications.
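The hash-encoded volume representation is only summarised in the abstract; the sketch below is a generic, inference-only illustration of multi-resolution hash encoding in the spirit of Instant-NGP, not the dissertation's implementation. Table size, level count, growth factor, and the randomly initialised tables are all assumptions; in practice the tables would be trained jointly with a small decoder MLP.

    import numpy as np

    PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

    def hash_encode(points, num_levels=4, table_size=2**14, features=2,
                    base_res=16, growth=1.5, seed=0):
        """Inference-only multi-resolution hash encoding of 3D points in [0, 1]^3."""
        rng = np.random.default_rng(seed)
        # One randomly initialised feature table per resolution level.
        tables = [rng.normal(0.0, 1e-2, (table_size, features)) for _ in range(num_levels)]
        outputs = []
        for level, table in enumerate(tables):
            res = int(base_res * growth ** level)
            scaled = points * res
            cell = np.floor(scaled).astype(np.uint64)
            frac = scaled - cell
            feat = np.zeros((len(points), features))
            # Trilinearly interpolate the hashed features of the 8 cell corners.
            for corner in range(8):
                offset = np.array([(corner >> d) & 1 for d in range(3)], dtype=np.uint64)
                idx = (cell + offset) * PRIMES
                h = (idx[:, 0] ^ idx[:, 1] ^ idx[:, 2]) % table_size
                w = np.prod(np.where(offset == 1, frac, 1.0 - frac), axis=1)
                feat += w[:, None] * table[h]
            outputs.append(feat)
        return np.concatenate(outputs, axis=1)   # shape (N, num_levels * features)

    pts = np.random.default_rng(1).random((5, 3))
    print(hash_encode(pts).shape)   # (5, 8) with the default settings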
|
8 |
Machine Learning for 3D Visualisation Using Generative Models. Taif, Khasrouf M.M. January 2020 (has links)
One of the state-of-the-art highlights of deep learning in the past ten years is the introduction of generative adversarial networks (GANs), which have achieved great success in their ability to generate images comparable to real photographs with minimal human intervention. These networks can generalise to a multitude of desired outputs, especially in image-to-image problems and image synthesis. This thesis proposes a computer graphics pipeline for 3D rendering that utilises generative adversarial networks (GANs).
This thesis is motivated by regression models and convolutional neural networks (ConvNets) such as U-Net architectures, which can be directed to generate realistic global illumination effects. It uses a semi-supervised GAN model (Pix2pix) comprising a conditional GAN with a U-Net generator and a PatchGAN discriminator. Pix2pix was chosen for this thesis for its ease of training as well as the quality of its output images. It also differs from other forms of GANs by utilising colour labels, which enables further control and consistency of the geometries that comprise the output image.
A series of experiments was carried out with laboratory-created image sets to explore how deep learning and generative adversarial networks can enhance the pipeline and speed up the 3D rendering process. First, a ConvNet is applied in combination with a Support Vector Machine (SVM) to pair 3D objects with their corresponding shadows, which can be applied in Augmented Reality (AR) scenarios. Second, a GAN approach is presented to generate shadows for non-shadowed 3D models, which can also be beneficial in AR scenarios. Third, GANs are used to explore the possibility of generating high-quality renders of image sequences from low-polygon-density 3D models. Finally, the visual coherence of the GAN's output image sequences is enhanced by utilising multi-colour labels.
The adopted GAN models generated realistic outputs comparable to the lab-generated, 3D-rendered ground truth and the control-group output images, with plausible scores on the PSNR and SSIM similarity metrics.
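The abstract reports image quality with PSNR and SSIM; as a hedged illustration only, the snippet below shows how such scores could be computed with scikit-image. The function name and the synthetic placeholder images are assumptions, not material from the thesis.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def compare_render(ground_truth, generated):
        """PSNR and SSIM between a ground-truth render and a generated image (uint8 RGB)."""
        psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=255)
        ssim = structural_similarity(ground_truth, generated,
                                     channel_axis=-1, data_range=255)
        return psnr, ssim

    # Toy example with synthetic images standing in for rendered frames.
    rng = np.random.default_rng(0)
    truth = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
    noisy = np.clip(truth.astype(np.int16) + rng.integers(-10, 10, truth.shape),
                    0, 255).astype(np.uint8)
    print(compare_render(truth, noisy))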
|
9 |
Estudio y evolución estética de la animación tridimensional dentro del género de acción en la industria del cine y el videojuego. Sancán Lapo, Milton Elías 28 October 2024 (has links)
This document reviews, from a broad technical and historical bibliography, the evolution of three-dimensional simulation on flat surfaces and, subsequently, the use of animated imagery and the exploitation of electronics and computers to create three-dimensional digital imagery for both research and entertainment. Chapter I details the earliest artisanal and artistic attempts to represent movement and three-dimensional volume, initially explored by ancient cultures. It then explains why the observation and description of the behavior of light led European scientists from the fifteenth to the eighteenth centuries to combine optics and mechanics to simulate animated figures and, later, the movement of inert objects. Chapter III presents the technical context, delving into technologies for rendering, articulation of digital geometries, modeling, color, and texturing, along with the earliest milestones in computing, hardware, and software: algorithms that allowed lines, images, textures, and digital light to be reproduced from electronics and pixels. The final chapter outlines, step by step, the workflow, techniques, and production resources used in two 3D projects carried out by the author of this document. The first is the complete production (modeling, texturing, coloring, and interaction) of a realistically shaped human character in interactive 3D and its realistic simulation of a jungle environment, based on the Huaorani culture of the Amazon. The second project showcases the complete production of a robot character from Japanese anime in 3D, along with its setting. It is concluded that the desire to transform inert matter, combined with the amalgamation of various technical-artistic disciplines, converged into three main foundations of animation and digital 3D: the desire to simulate volume and movement, the action of altering the plane, and the activity of disseminating science and its results. / Sancán Lapo, ME. (2024). Estudio y evolución estética de la animación tridimensional dentro del género de acción en la industria del cine y el videojuego [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/211186
|
10 |
Etude en vue de la multirésolution de l'apparence. Hadim, Julien 11 May 2009 (has links)
In recent years, the Bidirectional Texture Function (BTF) has emerged as a flexible solution for realistic, real-time rendering of materials with complex appearance at low computational cost. One drawback of this approach, however, is the resulting huge amount of data, and several methods have been proposed to compress and manage it. In this document, we propose a new BTF representation that improves data coherency and thus allows better compression. In the first part, we study acquisition and digital generation methods for BTFs and, more particularly, compression methods suitable for GPU rendering. We then carry out a study with our software BTFInspect to determine which of the visual phenomena present in BTFs mainly drive per-texel data coherence. In the second part, we propose a new BTF representation, named Flat Bidirectional Texture Function (Flat-BTF), which improves data coherency and thus compression. The analysis of the results shows, both statistically and visually, the gain in coherency as well as the absence of a noticeable loss of quality compared to the original representation. In the third and last part, we demonstrate how our new representation can be used in real-time rendering applications on GPUs. We then introduce a compression of the appearance data based on a GPU quantization method, presented in the context of streaming 3D data between a server holding 3D models and a client that wants to visualize them.
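The abstract mentions a GPU quantization method for appearance compression without giving details; as a loosely related, hedged sketch, the snippet below vector-quantizes per-texel BTF reflectance vectors with k-means on the CPU. The BTF layout, codebook size, function name, and all array dimensions are assumptions, not the thesis's method.

    import numpy as np
    from sklearn.cluster import KMeans

    def quantize_btf(btf, num_codewords=256, seed=0):
        """Vector-quantize a BTF laid out as (texels, view x light samples).

        Each texel's reflectance vector is replaced by the index of its nearest
        codeword, giving a compact index map plus a small codebook.
        """
        kmeans = KMeans(n_clusters=num_codewords, n_init=4, random_state=seed).fit(btf)
        return kmeans.labels_.astype(np.uint8), kmeans.cluster_centers_

    # Toy BTF: 4096 texels, 81 view x light samples, single channel.
    rng = np.random.default_rng(0)
    btf = rng.random((4096, 81)).astype(np.float32)
    indices, codebook = quantize_btf(btf)
    ratio = btf.nbytes / (indices.nbytes + codebook.nbytes)
    print(f"compression ratio ~{ratio:.1f}x")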
|