1

Designmöjligheter vid visualisering av nybyggnationer : En studie kring gestaltandet av dagsljusets flöde i en interiör över tid

Johansson, Oscar January 2011
This thesis concerns the visualization of newly built homes, with the lighting of an interior over time as its central theme. A problem identified in Swedish companies' visualizations is that they provide insufficient information about daylight. Users who are interested in a home and view its visualizations therefore miss information about how the home looks at different times of day and different times of year. Such daylight information can be of great interest to users who, for example, buy a newly built home before they can see it physically. Among the measures taken to create better visualizations is the proposal to use animation to show the flow of daylight through an interior. A realistic rendering style produced in the software 3D Studio Max was used for the design work, since it has the advantage of illustrating lighting conditions realistically. In the design work, a new development in Eskilstuna was visualized. In total, twelve clips of the development's interior were produced, one for each month of the year. Each clip shows the flow of daylight in the interior over a 24-hour period: in roughly ten seconds, the sun rises, the sun sets, and the interior's lighting conditions in between are shown.
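To make the daylight computation concrete, here is a minimal Python sketch of the standard solar-declination and hour-angle formulas such an animation can be driven by. The latitude of Eskilstuna and the mapping of one day onto a roughly ten-second clip follow the abstract; the function names, frame rate, and the choice of these particular formulas are illustrative assumptions, not taken from the thesis.

```python
import math

def solar_elevation_deg(latitude_deg, day_of_year, solar_hour):
    """Approximate solar elevation using Cooper's declination formula and
    the standard hour-angle relation; accurate to about a degree, which
    is plenty for driving an animated sun light."""
    decl = 23.44 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # the sun moves 15 degrees/hour
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

# Map one 24-hour day onto a 10-second clip at 25 fps (250 frames),
# e.g. for a June day in Eskilstuna (latitude ~59.4 N).
LATITUDE, DAY_OF_YEAR = 59.4, 172
for frame in range(0, 250, 25):
    hour = 24.0 * frame / 250.0
    elev = solar_elevation_deg(LATITUDE, DAY_OF_YEAR, hour)
    print(f"hour {hour:5.2f}  sun elevation {elev:6.2f} deg")
```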
2

An expansion of the 3D rendering discussion

Widell, Alex January 2018
My work consists of a written part with images of practical investigations, together with references. The thesis is best read in chronological order, images included.
3

Impostor Rendering with Oculus Rift / Impostorrendering med Oculus Rift

Niemelä, Jimmy January 2014
This report studies impostor rendering for use with the Oculus Rift virtual-reality head-mounted display. The technique replaces 3D models with 2D versions beyond a certain distance from the camera, to speed up rendering in a 3D engine. The report documents the development of a prototype in C++ and DirectX 11 and the research required to complete the assignment, including the steps involved in adding Oculus Rift support to a custom 3D engine and measuring the impact of impostor rendering when rendering to the two screens of the head-mounted display. The goal was to find the maximum number of models the engine could draw while keeping the frame rate locked at 60 frames per second. Two testers at Nordicstation concluded that 40-50 meters was the optimal impostor distance; any closer and the flatness was noticeable. The results showed a clear improvement in frame rate when rendering a graphically intensive scene: the goal could be achieved with a maximum of 3000 trees with 1000 leaves each, at an impostor distance of 40 meters. Impostor rendering was deemed effective when drawing more than 500 trees at a time; below that, the technique was not needed to achieve 60 frames per second.
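The core of the technique is a simple per-object distance test. A minimal Python sketch of that test follows; the 40-50 m threshold and the tree/leaf counts come from the report, while the class and field names are invented for illustration (the actual prototype was written in C++ with DirectX 11).

```python
import math

IMPOSTOR_DISTANCE = 45.0  # metres; testers judged 40-50 m optimal

class Tree:
    def __init__(self, position, mesh, billboard):
        self.position = position    # (x, y, z) world-space position
        self.mesh = mesh            # full 3D model, e.g. 1000 leaves
        self.billboard = billboard  # pre-rendered flat 2D impostor

def select_renderable(tree, camera_pos):
    """Return the full mesh up close and the flat impostor beyond the
    threshold -- the distance test at the heart of impostor rendering."""
    dx, dy, dz = (tree.position[i] - camera_pos[i] for i in range(3))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    return tree.mesh if distance < IMPOSTOR_DISTANCE else tree.billboard
```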
4

Post-production of holoscopic 3D image

Abdul Fatah, Obaidullah January 2015
Holoscopic 3D imaging, also known as integral imaging, was first proposed by Lippmann in 1908. It is a promising technique for creating full-colour spatial images that exist in space. It uses a single lens aperture for recording spatial images of a real scene, and thus offers omnidirectional motion parallax and true 3D depth, the fundamental feature enabling digital refocusing. While stereoscopic and multiview 3D imaging systems simulate the human eye, the holoscopic 3D imaging system mimics the fly's eye, in which viewpoints are orthographic projections. The system enables true 3D representation of a real scene in space and thus offers richer spatial cues than stereoscopic and multiview 3D systems. Focus has been the greatest challenge since the beginning of photography, and it is becoming even more critical in film production, where focus pullers find it difficult to get the right focus as camera resolution grows ever higher. Holoscopic 3D imaging enables the user to carry out refocusing in post-production. There have been three main types of digital refocusing method: shift and integration, full resolution, and full resolution with blind. However, these methods suffer from artifacts and unsatisfactory resolution in the final image; for instance, blocky and blurry pictures caused by unmatched boundaries. An upsampling method is proposed that improves the resolution of the image resulting from the shift-and-integration approach. Sub-pixel adjustment of elemental images, including an upsampling technique with smart filters, is proposed to reduce the artifacts introduced by the full-resolution-with-blind method and to improve both the image quality and the resolution of the final rendered image. A novel 3D object-extraction method is proposed that takes advantage of disparity; it is also applied to generate stereoscopic 3D images from a holoscopic 3D image. A cross-correlation matching algorithm is used to obtain the disparity map from the disparity information, and the desired object is then extracted. In addition, a 3D image-conversion algorithm is proposed for generating stereoscopic and multiview 3D images from both unidirectional and omnidirectional holoscopic 3D images, which facilitates 3D content reformation.
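As an illustration of the baseline the thesis improves on, here is a minimal NumPy sketch of shift-and-integration refocusing for a single row of elemental images. The one-dimensional layout and the integer per-view shift are simplifying assumptions for the sketch, not the thesis's full method.

```python
import numpy as np

def shift_and_integrate(elemental_images, shift_px):
    """Basic shift-and-integration refocusing: each elemental image is
    shifted in proportion to its index and all are averaged, so objects
    whose disparity matches shift_px come into focus while the rest blur.

    elemental_images: array of shape (n, h, w), a 1D row of views
    shift_px: integer per-view shift selecting the focal plane
    """
    n, h, w = elemental_images.shape
    acc = np.zeros((h, w + abs(shift_px) * (n - 1)), dtype=np.float64)
    count = np.zeros_like(acc)
    for i, img in enumerate(elemental_images):
        offset = i * abs(shift_px)   # lateral shift grows with view index
        acc[:, offset:offset + w] += img
        count[:, offset:offset + w] += 1
    return acc / np.maximum(count, 1)  # average where views overlap
```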
5

Audio and Visual Rendering with Perceptual Foundations

Bonneel, Nicolas 15 October 2009
Realistic visual and audio rendering remains a technical challenge: typical computers cannot cope with the increasing complexity of today's virtual environments, for both audio and visuals, and the graphic design of such scenes requires talented artists. In the first part of this thesis, we focus on audiovisual rendering algorithms for complex virtual environments, which we improve using human perception of combined audio and visual cues. In particular, we developed a full perceptual audiovisual rendering engine integrating efficient impact-sound rendering improved by exploiting our perception of audiovisual simultaneity, a way to cluster sound sources using humans' spatial tolerance between a sound and its visual representation, and a combined level-of-detail mechanism for both audio and visuals that varies the quality of the impact sounds and of the visually rendered materials of the objects. All our crossmodal effects are supported by prior work in neuroscience and demonstrated in our own experiments in virtual environments. In the second part, we use information present in photographs to guide visual rendering. We provide two tools to assist "casual artists" such as gamers or engineers. The first extracts the visual appearance of hair from a photograph, allowing rapid customization of avatars in virtual environments. The second allows fast previewing of 3D scenes that reproduce the appearance of an input photograph following a user's 3D sketch. We thus propose a first step toward crossmodal audiovisual rendering algorithms and develop practical tools for non-expert users to create virtual worlds from photographs' appearance.
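The abstract does not spell out the clustering algorithm, but the idea of grouping sound sources that fall within the listener's audio-visual spatial tolerance can be sketched with a simple greedy pass in Python. The 15-degree tolerance and all names below are illustrative assumptions, not the thesis's method.

```python
import math

def angle_between(a, b):
    """Angle in degrees between two direction vectors (listener at origin)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

def cluster_sources(directions, tolerance_deg=15.0):
    """Greedy clustering: a source joins the first cluster whose
    representative direction lies within the angular tolerance, otherwise
    it starts a new cluster. One representative sound can then stand in
    for each group, saving audio-rendering cost."""
    clusters = []  # list of (representative_direction, member_indices)
    for i, d in enumerate(directions):
        for rep, members in clusters:
            if angle_between(rep, d) <= tolerance_deg:
                members.append(i)
                break
        else:
            clusters.append((d, [i]))
    return clusters
```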
6

Optimalizace pro stereoskopické zobrazení / Optimalization for Stereoscopic Visualization

Zelníček, Leoš January 2009
This work covers the basic characteristics of the human visual sense and describes the nature of depth perception. We analyze human binocular vision and the limits that must be respected when projecting stereoscopic pictures. We then survey the most common methods of stereoscopic projection, acquainting the reader with their options, advantages, and difficulties. The largest part of this work is dedicated to the optimizations themselves, focused on the most effective rendering of 3D scenes using stereoscopic technology. The aim is to build the best possible stereoscopic engine and to inspect and improve the displayed scene so that the stereo effect is as good as possible. A Trivisio ARvision-3D HMD was used for testing.
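One standard building block of such a stereoscopic engine is the off-axis (asymmetric-frustum) projection, which produces correct stereo pairs without the vertical parallax introduced by naive "toe-in" cameras. A minimal NumPy sketch follows; the thesis does not give its exact camera code, so the parameter values here are illustrative.

```python
import numpy as np

def off_axis_projection(eye_offset, near, far, screen_half_w, screen_half_h,
                        screen_dist):
    """Asymmetric-frustum projection matrix (OpenGL convention) for one
    eye. eye_offset is +/- half the interocular distance along x; the
    frustum is sheared so both eyes converge on the same screen plane."""
    left = (-screen_half_w - eye_offset) * near / screen_dist
    right = (screen_half_w - eye_offset) * near / screen_dist
    bottom = -screen_half_h * near / screen_dist
    top = screen_half_h * near / screen_dist
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Example: 6.5 cm interocular distance, screen plane 60 cm away.
left_eye = off_axis_projection(-0.0325, 0.1, 100.0, 0.26, 0.16, 0.6)
right_eye = off_axis_projection(+0.0325, 0.1, 100.0, 0.26, 0.16, 0.6)
```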
7

Deep Learning Approaches for Automatic Colorization, Super-resolution, and Representation of Volumetric Data

Devkota, Sudarshan 01 January 2023
This dissertation includes a collection of studies that aim to improve the way we represent and visualize volume data. The advancement of medical imaging has revolutionized healthcare, providing crucial anatomical insights for accurate diagnosis and treatment planning. Our first study introduces an innovative technique to enhance the utility of medical images, transitioning from monochromatic scans to vivid 3D representations. It presents a framework for reference-based automatic color transfer, establishing deep semantic correspondences between a colored reference image and grayscale medical scans. This methodology extends to volumetric rendering, eliminating the need for manual intervention in parameter tuning. Next, it delves into deep learning-based super-resolution for volume data. By leveraging color information and supplementary features, the proposed system efficiently upscales low-resolution renderings to achieve higher fidelity results. Temporal reprojection further strengthens stability in volumetric rendering. The third contribution centers on the compression and representation of volumetric data, leveraging coordinate-based networks and multi-resolution hash encoding. This approach demonstrates superior compression quality and training efficiency compared to other state-of-the-art neural volume representation techniques. Furthermore, we introduce a meta-learning technique for weight initialization to expedite convergence during training. These findings collectively underscore the potential for transformative advancements in large-scale data visualization and related applications.
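To give a flavour of the multi-resolution hash encoding mentioned above, here is a minimal Python sketch of the spatial-hash lookup used in Instant-NGP-style encodings. The hashing primes are the commonly used ones, but the table size, level count, and feature width are illustrative, and the untrained random tables merely stand in for parameters that would be learned jointly with a small MLP.

```python
import numpy as np

PRIMES = (1, 2654435761, 805459861)  # per-dimension hashing primes

def hash_grid_index(grid_coords, table_size):
    """Spatial hash of integer 3D grid coordinates into a feature table:
    XOR the coordinates multiplied by large primes, modulo table size."""
    h = 0
    for coord, prime in zip(grid_coords, PRIMES):
        h ^= (coord * prime) & 0xFFFFFFFF  # keep 32-bit wraparound
    return h % table_size

def encode(point, num_levels=4, base_res=16, growth=2.0, table_size=2**14):
    """Look up (here random, untrained) features for a point in [0,1)^3 at
    several grid resolutions; a real implementation would also trilinearly
    interpolate the 8 surrounding corners and train the tables."""
    rng = np.random.default_rng(0)
    tables = [rng.normal(size=(table_size, 2)) for _ in range(num_levels)]
    features = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        grid = tuple(int(c * res) for c in point)
        features.append(table[hash_grid_index(grid, table_size)])
    return np.concatenate(features)  # concatenated multi-level features
```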
8

Machine Learning for 3D Visualisation Using Generative Models

Taif, Khasrouf M.M. January 2020
One of the highlights of deep learning in the past ten years is the introduction of generative adversarial networks (GANs), which have achieved great success in generating images comparable to real photographs with minimal human intervention. These networks can generalise to a multitude of desired outputs, especially in image-to-image problems and image synthesis. This thesis proposes a computer-graphics pipeline for 3D rendering that utilises GANs. It is motivated by regression models and convolutional neural networks (ConvNets) such as U-Net architectures, which can be directed to generate realistic global-illumination effects using a semi-supervised GAN model (Pix2pix) that combines a PatchGAN discriminator and a conditional GAN built on a U-Net structure. Pix2pix was chosen for its trainability and the quality of its output images, and because, unlike other GAN variants, it uses colour labels, which enables further control and consistency of the geometry in the output image. A series of experiments was carried out on laboratory-created image sets to explore how deep learning and GANs can enhance the pipeline and speed up the 3D rendering process. First, a ConvNet is applied in combination with a Support Vector Machine (SVM) to pair 3D objects with their corresponding shadows, which can be applied in Augmented Reality (AR) scenarios. Second, a GAN approach is presented to generate shadows for unshadowed 3D models, which can also be beneficial in AR scenarios. Third, the possibility of generating high-quality renders of image sequences from low-polygon-density 3D models using GANs is investigated. Finally, the visual coherence of the GAN's output image sequences is enhanced by utilising multi-colour labels. The adopted GAN model generated realistic outputs comparable to the lab-generated 3D-rendered ground truth and control-group images, with plausible scores on the PSNR and SSIM similarity metrics.
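Of the two similarity metrics used for scoring, PSNR is compact enough to sketch; a minimal NumPy version follows. This is the textbook definition rather than code from the thesis, and SSIM is omitted for brevity.

```python
import numpy as np

def psnr(reference, rendered, max_value=255.0):
    """Peak signal-to-noise ratio between a ground-truth render and a
    GAN output, in decibels; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64)
                   - rendered.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```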
9

Etude en vue de la multirésolution de l’apparence

Hadim, Julien 11 May 2009
In recent years, the Bidirectional Texture Function (BTF) has emerged as a flexible solution for realistic, real-time rendering of materials with complex appearance at low computing cost. One drawback of this approach, however, is the resulting huge amount of data, and several methods have been proposed to compress and manage it. In this document, we propose a new BTF representation that improves data coherency and thus allows better compression. In the first part, we study acquisition and digital-generation methods for BTFs, and in particular compression methods suitable for GPU rendering. We then carry out a study with our software BTFInspect to determine which of the visual phenomena present in BTFs chiefly influence per-texel data coherence. In the second part, we propose a new BTF representation, named the Flat Bidirectional Texture Function (Flat-BTF), which improves data coherency and thus compression. The analysis of the results shows, statistically and visually, the gain in coherency as well as the absence of a noticeable loss of quality compared with the original representation. In the third and last part, we demonstrate how the new representation may be used in real-time rendering applications on GPUs. We then introduce a compression of appearance via a GPU quantization method, presented in the context of 3D data streaming between a server holding 3D models and a client wishing to visualize them.
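The GPU quantization method itself is not detailed in the abstract, but the general idea of quantization-based appearance compression can be sketched with plain k-means vector quantization on the CPU: per-texel appearance vectors are replaced by a small codebook plus one index each. All names and parameters below are illustrative, not the thesis's implementation.

```python
import numpy as np

def vector_quantize(vectors, k=256, iters=10, seed=0):
    """Plain k-means vector quantization: compress n d-dimensional
    per-texel appearance vectors into a k-entry codebook plus one index
    per texel, trading a small quality loss for a large size reduction."""
    vectors = np.asarray(vectors, dtype=np.float64)
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest codeword
        dist = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :],
                              axis=2)
        assign = dist.argmin(axis=1)
        # move each codeword to the mean of its assigned vectors
        for j in range(k):
            members = vectors[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, assign
```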
