31

Vizualizace a editace voxelů pro 3D tisk v real-time / Real-time voxel visualization and editing for 3D printing

Kužel, Vojtěch January 2021 (has links)
In this thesis, we explore detailed voxel scene compression methods and editing thereof, with the goal of designing an interactive voxel viewer/editor, e.g. for a 3D printing application. We present state-of-the-art GPU-compatible data structures and compare them. On top of the chosen data structure, we build standard editing tools known from 2D, capable of changing voxel color in real time even on lower-end machines.
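The "standard editing tools known from 2D" idea can be sketched in miniature. The snippet below is an illustrative toy, not the thesis's GPU data structure: a sparse dictionary-of-coordinates voxel store with a spherical paint brush that recolors voxels in a radius, the 3D analogue of a 2D brush tool. All names are assumptions for the sketch.

```python
def paint_sphere(voxels, center, radius, color):
    """Recolor every stored voxel within `radius` of `center` (a 3D brush stroke)."""
    cx, cy, cz = center
    r2 = radius * radius
    for (x, y, z) in voxels:
        if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r2:
            voxels[(x, y, z)] = color
    return voxels

# A 4x4x4 solid block of gray voxels, then a red brush stroke at one corner.
scene = {(x, y, z): (128, 128, 128)
         for x in range(4) for y in range(4) for z in range(4)}
paint_sphere(scene, center=(0, 0, 0), radius=1.5, color=(255, 0, 0))
```

A real-time editor would run the same containment test on the GPU against a compressed structure (e.g. a sparse voxel octree/DAG) rather than a Python dict, but the brush logic is the same.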
32

L'espace métaphorique du montage cinématographique : vers un nouveau rituel architectural / The metaphorical space of cinematic montage: toward a new architectural ritual

Pelletier, Louise, 1963- January 1990 (has links)
No description available.
33

Model-based and Learned, Inverse Rendering for 3D Scene Reconstruction and View Synthesis

Li, Rui 24 July 2023 (has links)
Recent advancements in inverse rendering have shown promising results for 3D representation, novel view synthesis, scene parameter reconstruction, and direct graphical asset generation and editing. Inverse rendering attempts to recover the scene parameters of interest from a set of camera observations by optimizing, with appropriate regularization, the photometric error between the rendering model's output and the true observation. The objective of this dissertation is to study inverse problems from several perspectives: (1) Software Framework: a general differentiable pipeline for solving physically-based or neural rendering problems; (2) Closed Form: efficient, closed-form solutions for inverse problems under specific conditions; (3) Representation Structure: hybrid 3D scene representation for efficient training and adaptive resource allocation; and (4) Robustness: enhanced robustness and accuracy through controlled lighting. We aim to solve the following tasks. 1. How to render and optimize scene parameters such as geometry, texture, and lighting, while considering multiple viewpoints, from physically-based or neural 3D representations. To this end, we present a comprehensive software toolkit that supports diverse ray-based sampling and tracing schemes, enabling the optimization of a wide range of target scene parameters. Our approach emphasizes maintaining differentiability throughout the entire pipeline to ensure efficient and effective optimization of the desired parameters. 2. Whether there is a 3D representation with fixed computational complexity or a closed-form solution for forward rendering when the target has specific geometry or simplified lighting, so as to reduce computational cost. We consider multi-bounce reflection inside a planar transparent medium and design a differentiable polarization simulation engine that jointly optimizes the medium's parameters as well as the polarization state of the reflected and transmitted light. 3. How our hybrid, learned 3D scene representation can solve inverse rendering problems for scene reconstruction and novel view synthesis, with particular interest in several scientific quantities, including density, radiance fields, signed distance functions, etc. 4. How to handle unknown lighting conditions, which significantly influence object appearance. To enhance the robustness of inverse rendering, we adopt invisible co-located lighting to control illumination and suppress unknown lighting by jointly optimizing separate RGB and near-infrared channels, enabling accurate reconstruction of all scene parameters in a wider range of application environments. We demonstrate visually and quantitatively improved results for the aforementioned tasks and compare against other state-of-the-art methods to show superior performance on representation and reconstruction tasks.
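The core loop the abstract describes — recover scene parameters by gradient descent on the photometric error between a differentiable renderer's output and an observation — fits in a few lines. This is a deliberately tiny stand-in, not the dissertation's pipeline: the "scene parameter" is a single scalar albedo and the "renderer" a Lambertian shading term; all constants are illustrative.

```python
def render(albedo, light=0.8, cos_theta=0.9):
    """Toy differentiable forward model: Lambertian shading of one pixel."""
    return albedo * light * cos_theta

def fit_albedo(observation, steps=200, lr=0.5):
    """Minimize the photometric error (render(a) - obs)^2 by gradient descent."""
    albedo = 0.0
    for _ in range(steps):
        residual = render(albedo) - observation
        grad = 2.0 * residual * 0.8 * 0.9  # analytic d/d_albedo of the squared error
        albedo -= lr * grad
    return albedo

true_albedo = 0.6
obs = render(true_albedo)        # "camera observation"
est = fit_albedo(obs)            # recovered parameter converges to true_albedo
```

Real inverse renderers differ in scale, not kind: the forward model becomes ray tracing or a neural field, the parameter vector becomes geometry, texture, and lighting, and automatic differentiation supplies the gradients.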
34

Creating physically accurate visual stimuli for free: Spectral rendering with RADIANCE.

Ruppertsberg, Alexa I., Bloj, Marina January 2008 (has links)
Visual psychophysicists, who study object, color, and light perception, have a demand for software that produces complex but, at the same time, physically accurate stimuli for their experiments. The number of computer graphic packages that simulate the physical interaction of light and surfaces is limited, and mostly they require the purchase of a license. RADIANCE (Ward, 1994), however, is freely available and popular in the visual perception community, making it a prime candidate. We have shown previously that RADIANCE's simulation accuracy is greatly improved when color is coded by spectra, rather than by the originally envisaged RGB triplets (Ruppertsberg & Bloj, 2006). Here, we present a method for spectral rendering with RADIANCE to generate hyperspectral images that can be converted to XYZ images (CIE 1931 system) and then to machine-dependent RGB images. Generating XYZ stimuli has the added advantage of making stimulus images independent of display devices and, thereby, facilitating the process of reproducing results across different labs. Materials associated with this article may be downloaded from www.psychonomic.org.
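The spectral-to-XYZ-to-RGB pipeline the abstract describes can be sketched directly. Note the heavy assumptions: the CIE 1931 color matching functions below are crude single-Gaussian approximations (adequate for a sketch, not for accurate stimuli — real work should use tabulated CMF data), while the XYZ-to-linear-sRGB matrix is the standard one.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def cmf(lam):
    """Crude Gaussian stand-ins for the CIE 1931 x-bar, y-bar, z-bar functions."""
    x = 1.06 * gauss(lam, 598, 38) + 0.36 * gauss(lam, 442, 20)
    y = 1.00 * gauss(lam, 556, 47)
    z = 1.78 * gauss(lam, 449, 23)
    return x, y, z

def spectrum_to_srgb(spd):
    """spd: (wavelength_nm, radiance) samples on a uniform grid -> linear sRGB."""
    X = Y = Z = 0.0
    for lam, power in spd:
        xb, yb, zb = cmf(lam)
        X += power * xb; Y += power * yb; Z += power * zb
    norm = sum(cmf(lam)[1] for lam, _ in spd)  # normalize so flat spectrum -> Y = 1
    X, Y, Z = X / norm, Y / norm, Z / norm
    # Standard XYZ -> linear sRGB conversion matrix.
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

flat = [(lam, 1.0) for lam in range(380, 781, 5)]  # equal-energy spectrum
r, g, b = spectrum_to_srgb(flat)
```

The point of the intermediate XYZ stage, as the abstract notes, is device independence: only the final matrix step depends on the display.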
35

01 Setting Vectorworks Preferences

Taylor, Jonathan 01 January 2022 (has links)
https://dc.etsu.edu/theatre-videos-oer/1001/thumbnail.jpg
36

Comparing Perception of Animated Imposters and 3D Models / Jämförelse av uppfattningsförmåga mellan animerade imposters och 3D modeller

Eriksson, Oliver, Lindblom, William January 2020 (has links)
In modern 3D games and movies, large character crowds are commonly rendered, which can be expensive with regard to rendering times. As character complexity increases, so does the need for optimizations. Level of Detail (LOD) techniques are used to optimize rendering by reducing geometric complexity in a scene. One such technique is reducing a complex character to a textured flat plane, a so-called imposter. Previous research has shown that imposters are a good way of optimizing 3D rendering, and can be used without decreasing visual fidelity compared to 3D models if rendered statically up to a one-to-one pixel-to-texel ratio. In this report we look further into using imposters as an LOD technique by investigating how animation, in particular rotation, of imposters at different distances affects human perception when observing character crowds. The results for static, non-rotating characters are in line with previous research, showing that imposters are indistinguishable from 3D models when standing still. When rotation is introduced, slow rotation speed is shown to be a dominant factor, compared to distance, in revealing crowds of imposters. On the other hand, the results suggest that fast movements could be used as a means of hiding flaws in pre-rendered imposters, even at near distances, where non-moving imposters could otherwise be distinguishable.
37

Rzsweep: A New Volume-Rendering Technique for Uniform Rectilinear Datasets

Chaudhary, Gautam 10 May 2003 (has links)
A great challenge in the volume-rendering field is to achieve high-quality images in an acceptable amount of time. In the area of volume rendering, there is always a trade-off between speed and quality. Applications where only high-quality images are acceptable often use the ray-casting algorithm, but this method is computationally expensive and typically achieves low frame rates. The work presented here is RZSweep, a new volume-rendering algorithm for uniform rectilinear datasets that gives high-quality images in a reasonable amount of time. In this algorithm a plane sweeps the vertices of the implicit grid of regular datasets in depth order, projecting all the implicit faces incident on each vertex. The algorithm uses the inherent properties of rectilinear datasets. RZSweep is an object-order, back-to-front, direct volume rendering, face-projection algorithm for rectilinear datasets using the cell approach. It is a single-processor serial algorithm. The simplicity of the algorithm allows the use of the graphics pipeline for hardware-assisted projection, and also, with minimal modification, a version of the algorithm that is graphics-hardware independent. Lighting, color, and various opacity transfer functions are implemented to give realism to the final resulting images. Finally, an image comparison is done between RZSweep and a 3D texture-based method for volume rendering, using standard image metrics such as Euclidean and geometric differences.
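The back-to-front compositing at the core of any object-order direct volume renderer, RZSweep included, is the repeated "over" blend below. This 1D sketch composites the samples along a single ray, farthest first; the real algorithm produces these samples by sweeping a plane through the grid and projecting faces, which this toy does not attempt.

```python
def composite_back_to_front(samples):
    """samples: (color, opacity) pairs ordered nearest to farthest along one ray."""
    color = 0.0                      # black background
    for c, a in reversed(samples):   # start at the farthest sample
        color = a * c + (1.0 - a) * color   # "over" blend toward the eye
    return color

# A half-transparent bright sample in front of an opaque dark one:
result = composite_back_to_front([(1.0, 0.5), (0.2, 1.0)])
```

Sweeping in depth order is exactly what makes this blend valid: each projected face arrives at the frame buffer only after everything behind it has already been composited.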
38

Volume Visualisation Via Variable-Detail Non-Photorealistic Illustration

McKinley, Joanne January 2002 (has links)
The rapid proliferation of 3D volume data, including MRI and CT scans, is prompting the search within computer graphics for more effective volume visualisation techniques. Partially because of the traditional association with medical subjects, concepts borrowed from the domain of scientific illustration show great promise for enriching volume visualisation. This thesis describes the first general system dedicated to creating user-directed, variable-detail, scientific illustrations directly from volume data. In particular, using volume segmentation for explicit abstraction in non-photorealistic volume renderings is a new concept. The unique challenges and opportunities of volume data require rethinking many non-photorealistic algorithms that traditionally operate on polygonal meshes. The resulting 2D images are qualitatively different from but complementary to those normally seen in computer graphics, and inspire an analysis of the various artistic implications of volume models for scientific illustration.
39

Fast Extraction of BRDFs and Material Maps from Images

Jaroszkiewicz, Rafal January 2003 (has links)
The bidirectional reflectance distribution function has a four-dimensional parameter space, and such high dimensionality makes it impractical to use directly in hardware rendering. When a BRDF has no analytical representation, common solutions to overcome this problem include expressing it as a sum of basis functions or factorizing it into several functions of smaller dimensions. This thesis describes factorization extensions that significantly improve factor computation speed and eliminate drawbacks of previous techniques that overemphasize low sample values. The improved algorithm is used to calculate factorizations and material maps from colored images. The technique presented in this thesis allows interactive definition of arbitrary materials, and although this method is based on physical parameters, it can also be used for achieving a variety of non-photorealistic effects.
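The factorization idea in brief: tabulate the BRDF as a matrix indexed by discretized incoming and outgoing directions, then split it into two lower-dimensional factors that the hardware can look up and multiply. The rank-1 SVD below is a generic sketch of that decomposition, not the thesis's method (which, per the abstract, weights samples differently precisely to avoid the drawbacks of plain least-squares factorization).

```python
import numpy as np

def factor_brdf(B, rank=1):
    """Approximate B (n_in x n_out) as U @ V with factors of the given rank."""
    u, s, vt = np.linalg.svd(B, full_matrices=False)
    U = u[:, :rank] * s[:rank]   # n_in x rank
    V = vt[:rank, :]             # rank x n_out
    return U, V

# A separable toy "BRDF" (outer product of two 1D lobes) factors exactly.
wi = np.linspace(0.1, 1.0, 8)
wo = np.linspace(0.1, 1.0, 8)
B = np.outer(wi, wo)
U, V = factor_brdf(B, rank=1)
err = np.abs(U @ V - B).max()
```

At render time each factor becomes a texture, and the shader reconstructs the BRDF value as a per-pixel product of two texture fetches.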
40

View Rendering for 3DTV

Muddala, Suryanarayana Murthy January 2013 (has links)
Advancements in three-dimensional (3D) technologies are rapidly increasing. Three-Dimensional Television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without requiring any special glasses and can be viewed from multiple locations. One of the key issues in the 3DTV processing chain is content generation from the available input data formats, video plus depth and multiview video plus depth. This data allows for the possibility of producing virtual views using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for the appearance of artifacts such as cracks, corona, and empty regions in rendered images. While several approaches have tackled the problem, reducing the artifacts in rendered images is still an active field of research. Two problems are addressed in this thesis in order to achieve a better 3D video quality in the context of view rendering: firstly, how to improve the quality of rendered views using a direct approach (i.e. without applying specific processing steps for each artifact), and secondly, how to fill the large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a depth-based texture inpainting method in order to address these two problems. The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The other problem is addressed by the depth-included curvature inpainting method, which uses texture details of the appropriate depth level around disocclusions. The proposed edge-aided rendering and depth-included curvature inpainting methods are evaluated and compared with the state-of-the-art methods. The results show an increase in objective quality and a visual gain over reference methods. The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps to remove the rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free viewpoint television.
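A 1D sketch of the forward-warping step at the heart of depth-image-based rendering: each source pixel shifts by a disparity proportional to 1/depth, a z-test keeps the nearest contributor, and unfilled target pixels are exactly the disocclusions that the inpainting stage must fill. The baseline/focal constant and names are illustrative assumptions, not the thesis's implementation.

```python
def forward_warp_1d(colors, depths, baseline_focal=8.0):
    """Warp a 1D source row to a virtual view; None marks a disocclusion hole."""
    n = len(colors)
    out = [None] * n
    zbuf = [float("inf")] * n
    for x in range(n):
        disparity = baseline_focal / depths[x]   # near pixels shift farther
        tx = x + int(round(disparity))
        if 0 <= tx < n and depths[x] < zbuf[tx]:  # z-test: keep nearest contributor
            zbuf[tx] = depths[x]
            out[tx] = colors[x]
    return out

# Foreground ("f", depth 2) shifts more than background ("b", depth 8),
# exposing holes behind the foreground edge.
colors = ["f", "f", "b", "b", "b", "b", "b", "b"]
depths = [2, 2, 8, 8, 8, 8, 8, 8]
warped = forward_warp_1d(colors, depths)
```

The rounding step is where the cracks the abstract mentions come from, and the `None` runs are the large disocclusions targeted by the depth-included curvature inpainting.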
