31 |
Creating Digital Photorealistic Material Renders by Observing Physical Material Properties
Johansson, Simon January 2014 (has links)
When creating materials in computer graphics, the most common method is to estimate their properties based on intuition. This is a flawed approach, given that a large part of the industry has already moved to a physically based workflow; a better method is to observe real material properties and use that data in the application. This research delves into the art of material creation by first explaining the theory behind material properties through a literature review. The review also covers techniques that separate these properties and present them visually to artists, giving them a better understanding of how a material behaves. Through action research, an empirical study then presents a workflow for creating photorealistic renders using data collected with these techniques. While the techniques still require subjective decisions when recreating the materials, they do help artists create more accurate renderings with less guesswork.
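The physically based workflow the abstract refers to can be illustrated as a reflection model whose inputs are measured rather than guessed. The sketch below is not the thesis's actual pipeline; it is a minimal example, under assumed parameter names, of a Lambertian diffuse term driven by a measured albedo plus a normalized Blinn-Phong specular lobe whose exponent is derived from a measured roughness value.

```python
import math

def shade(albedo, roughness, n_dot_l, n_dot_h):
    """Evaluate a simplified physically based model from measured values.

    albedo    -- measured diffuse reflectance in [0, 1]
    roughness -- measured surface roughness in (0, 1]
    n_dot_l   -- cosine between surface normal and light direction
    n_dot_h   -- cosine between surface normal and half-vector
    """
    if n_dot_l <= 0.0:
        return 0.0
    # Lambertian diffuse term driven by the measured albedo.
    diffuse = albedo / math.pi
    # Map roughness to a Blinn-Phong exponent (rougher -> broader lobe).
    shininess = 2.0 / (roughness * roughness) - 2.0
    # Normalized Blinn-Phong specular lobe.
    specular = (shininess + 8.0) / (8.0 * math.pi) * (n_dot_h ** shininess)
    return (diffuse + specular) * n_dot_l
```

Feeding measured values into such a model is what removes the guesswork: the artist adjusts only what was actually observed.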
|
32 |
Nefotorealistické zobrazování / Non-Photorealistic Rendering
Mágr, Martin January 2011 (has links)
The purpose of this diploma thesis is to analyze the possibilities of non-photorealistic rendering in the area of image transformation, and then to design an algorithm for post-processing video into its cartoon representation. The thesis describes the problem analysis, the design of the algorithm, and its implementation.
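As an illustration of the kind of cartoon transformation described (not the thesis's actual algorithm), the sketch below posterizes an RGB frame into flat color bands and darkens pixels with strong luminance gradients; the band count and edge threshold are arbitrary assumptions.

```python
import numpy as np

def cartoonize(img, levels=4, edge_threshold=0.2):
    """Posterize an RGB image and darken strong luminance edges.

    img -- float array of shape (H, W, 3) with values in [0, 1]
    """
    # Quantize each channel into a few flat color bands.
    quantized = np.floor(img * levels) / levels + 1.0 / (2 * levels)
    quantized = np.clip(quantized, 0.0, 1.0)
    # Detect edges with simple finite differences on luminance.
    lum = img.mean(axis=2)
    gy, gx = np.gradient(lum)
    magnitude = np.sqrt(gx * gx + gy * gy)
    # Draw black outlines where the gradient is strong.
    quantized[magnitude > edge_threshold] = 0.0
    return quantized
```

Applied frame by frame, this yields the flat-colored, outlined look associated with cartoon post-processing.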
|
33 |
Enhancing Autodesk Maya's rendering capabilities: Development and integration of a real-time render plug-in incorporating the extended feature of Toon-Shading
Karlsson, Zannie, Yan, Liye January 2023 (has links)
Background- By virtue of its long existence, Autodesk Maya is one of the most established 3D-modeling applications. It enables users to create meshes and handles most processes associated with graphic models, animation, and rendering, and various third-party plug-ins can be used to enhance its efficiency. However, Maya's own built-in rendering functions, especially its real-time rendering engine, feel less efficient than other available real-time rendering options, which commonly also provide different rendering techniques for giving a desired style to the modeled scene. Objectives- Maya's built-in rendering engines themselves do not offer much in terms of non-realistic rendering techniques; rendering in, for example, Toon-shading therefore requires more work and effort. The objective is to implement a prototype plug-in that can perform real-time rendering of a realistic as well as a non-photorealistic rendering technique inside Autodesk Maya 2023. Its future aim is to address the inefficient and time-consuming task of viewing the results of light adjustments and setting up the scene for stylized renders in Maya. Methods- Through the method of implementation, a basic plug-in for Autodesk Maya was constructed in Visual Studio using C++ and the DirectX 11 library. It employs a Qt window to render the Maya scene in real time and additionally provides Toon-shading. The prototype plug-in was then put through a simple test using manual assessment: its rendered output, rendering times, processor usage, and memory usage are presented and compared to the results of Maya 2023's built-in rendering options when rendering a constructed test scene, to find out where the plug-in requires further adjustments to its implementation. Results- The results show that a real-time plug-in with the additional function of Toon-shading was implemented using the defined method of implementation. From the subsequent test, the prototype's rendered results are presented and compared to the results of Autodesk Maya 2023's built-in rendering options when rendering the constructed test scene. Conclusion- By collecting information from the Maya scene and running the same data through the DirectX pipeline, the prototype allows different rendering styles to be developed and displayed through a user-friendly graphical user interface built with the Qt library. With the press of a button, implemented rendering styles such as Toon-shading can be applied to the prototype's window display of the Maya scene. Real-time rendering lets the user see the graphical attributes applied to the scene without delay, which makes finding the right angle for the intended render more efficient; the rendered scene can then be saved with the press of another button. The workflow no longer requires importing the 3D model into another rendering application, or applying different materials to all parts of the various Maya 3D models, to achieve a non-photorealistic rendering style. The implemented prototype is very basic, and more implementation is required before it can be used as an efficient rendering alternative for stylized rendering in Maya.
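Toon-shading of the kind the plug-in implements typically quantizes the diffuse lighting term into a few flat bands. A language-agnostic sketch of that quantization (in Python rather than the HLSL a DirectX 11 shader would use; the band count is an arbitrary assumption):

```python
import math

def toon_intensity(n_dot_l, bands=3):
    """Quantize the diffuse term into discrete bands for a cel-shaded look."""
    d = max(n_dot_l, 0.0)                 # clamp back-facing light
    band = math.floor(d * bands)          # pick a band index
    band = min(band, bands - 1)           # keep d == 1.0 in the top band
    return (band + 0.5) / bands           # flat intensity for the band
```

Because nearby cosine values collapse into the same band, lit regions render as flat patches separated by hard steps, which is the characteristic cartoon look.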
|
34 |
Algorithmen zur automatisierten Dokumentation und Klassifikation archäologischer Gefäße / Algorithms for the Automated Documentation and Classification of Archaeological Vessels
Hörr, Christian 30 September 2011 (has links) (PDF)
The topic of the dissertation at hand is the development of algorithms and methods aiming at supporting the daily scientific work of archaeologists.
Part I covers ideas for accelerating the extremely time-consuming and often tedious documentation of finds. It is argued that digitizing the objects with 3D laser or structured light scanners is economically reasonable and above all of high quality, even though those systems are still quite expensive. Using advanced non-photorealistic visualization techniques, meaningful but at the same time objective pictures can be generated from the virtual models. Moreover, specifically for vessels a fully-automatic and comprehensive feature extraction is possible.
In Part II, we deal with the problem of automated vessel classification. After a theoretical consideration of the type concept in archaeology we present a methodology, which employs approaches from the fields of both unsupervised and supervised machine learning. Particularly the latter have proven to be very valuable in order to assign unknown entities to an already existing typology, but also to challenge the typology structure itself. All the analyses have been exemplified by the Bronze Age cemeteries of Kötitz, Altlommatzsch (both district of Meißen), Niederkaina (district of Bautzen), and Tornow (district Oberspreewald-Lausitz). Finally, we were even able to discover archaeologically relevant relationships between these sites.
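As a toy illustration of supervised assignment to an existing typology (far simpler than the classifiers examined in the thesis), a nearest-centroid rule assigns an unknown vessel's feature vector to the type whose mean feature vector is closest; the feature encoding here is a placeholder.

```python
import math

def nearest_type(features, typology):
    """Assign a feature vector to the closest type centroid (Euclidean).

    typology -- mapping of type name to centroid feature vector
    """
    best_type, best_dist = None, float("inf")
    for type_name, centroid in typology.items():
        dist = math.dist(features, centroid)
        if dist < best_dist:
            best_type, best_dist = type_name, dist
    return best_type
```

A vessel whose features sit far from every centroid is also interesting: such outliers are exactly what prompts a critical re-examination of the typology itself.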
|
35 |
Synthesis and evaluation of geometric textures
AlMeraj, Zainab January 2013 (has links)
Two-dimensional geometric textures are the geometric analogues of raster (pixel-based) textures and consist of planar distributions of discrete shapes with an inherent structure.
These textures have many potential applications in art, computer graphics, and cartography.
Synthesizing large textures by hand is generally a tedious task. In raster-based synthesis, many algorithms have been developed to limit the amount of manual effort required. These algorithms take in a small example as a reference and produce larger similar textures using a wide range of approaches.
Recently, an increasing number of example-based geometric synthesis algorithms have been proposed. I refer to them in this dissertation as Geometric Texture Synthesis (GTS) algorithms. Analogous to their raster-based counterparts, GTS algorithms synthesize arrangements that ought to be judged by human viewers as “similar” to the example inputs.
However, an absence of conventional evaluation procedures in current attempts demands an inquiry into the visual significance of synthesized results.
In this dissertation, I present an investigation into GTS and report my findings from three projects. I start by taking initial steps toward grounding texture synthesis techniques more firmly in our understanding of visual perception through two psychophysical studies. My observations throughout these studies reveal important visual cues used by people when generating and/or comparing the similarity of geometric arrangements, as well as a set of strategies adopted by participants when generating arrangements.
Based on one of the generation strategies identified in these studies, I develop a new geometric synthesis algorithm that uses a tile-based approach to generate arrangements. Textures synthesized by this algorithm are comparable to the state of the art in GTS and provide an additional reference in subsequent evaluations.
To conduct effective evaluations of GTS, I start by collecting a set of representative examples, use them to acquire arrangements from multiple sources, and then gather them into a dataset that acts as a standard for the GTS research community. I then utilize this dataset in a second set of psychophysical studies that define an effective methodology for comparing current and future geometric synthesis algorithms.
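A tile-based generation strategy like the one the dissertation builds on can be reduced to its simplest form: replicate an example arrangement across a grid of tiles. The sketch below is only that trivial baseline; a real GTS algorithm would perturb or re-synthesize the elements rather than copy them verbatim.

```python
def tile_arrangement(example, tile_size, nx, ny):
    """Repeat an example point arrangement over an nx-by-ny grid of tiles.

    example   -- list of (x, y) element positions inside one square tile
    tile_size -- side length of the tile
    """
    result = []
    for i in range(nx):
        for j in range(ny):
            # Translate each element of the example into tile (i, j).
            for (x, y) in example:
                result.append((x + i * tile_size, y + j * tile_size))
    return result
```

Verbatim copying is also why evaluation matters: human viewers readily notice the repetition, which is one of the similarity cues the psychophysical studies probe.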
|
36 |
Communication expressive de la forme au travers de l'éclairement et du rendu au trait / Expressive Communication of Shape through Lighting and Line-based Rendering
Vergne, Romain 10 December 2010 (links)
Expressive rendering aims at designing algorithms that give users the possibility to create artistic images. It allows not only the reproduction of traditional styles, but also the communication of a specific message in a matching style. In this thesis, we propose new solutions for enhancing shape, which is often hidden in realistic images. We first show how to extract relevant surface features on dynamic 3D scenes, taking the human visual system into account, so as to obtain automatic, view-dependent levels of detail. In a second step, we integrate this information in real time into a variety of styles, ranging from minimalist black-and-white and line-based renderings to realistic results.
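One generic screen-space way to expose surface features for line-based styles (a sketch, not the feature extractor developed in this thesis) is to measure where the depth buffer bends sharply, for example with a discrete Laplacian:

```python
import numpy as np

def depth_laplacian(depth):
    """Approximate surface detail from a depth buffer via a discrete Laplacian.

    Strong responses mark creases and silhouettes that a line-based style
    can emphasize.  depth -- 2D float array, one value per pixel.
    """
    lap = (np.roll(depth, 1, 0) + np.roll(depth, -1, 0)
           + np.roll(depth, 1, 1) + np.roll(depth, -1, 1)
           - 4.0 * depth)
    return np.abs(lap)
```

Because the measure is computed per frame from the rendered depth, it is inherently view-dependent, which is the property the thesis exploits for automatic levels of detail.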
|
37 |
Non-photorealistic rendering with coherence for augmented reality
Chen, Jiajian 16 July 2012 (has links)
A seamless blending of the real and virtual worlds is key to increased immersion and improved user experiences for augmented reality (AR). Photorealistic and non-photorealistic rendering (NPR) are two ways to achieve this goal. Non-photorealistic rendering creates an abstract and stylized version of both the real and virtual world, making them indistinguishable. This could be particularly useful in some applications (e.g., AR/VR aided machine repair, or for virtual medical surgery) or for certain AR games with artistic stylization.
Achieving temporal coherence is a key challenge for all NPR algorithms. Rendered results are temporally coherent when each frame smoothly and seamlessly transitions to the next one without visual flickering or artifacts that distract the eye from perceived smoothness. NPR algorithms with coherence are interesting in both general computer graphics and AR/VR areas. Rendering stylized AR without coherence processing causes the final results to be visually distracting. While various NPR algorithms with coherence support have been proposed in general graphics community for video processing, many of these algorithms require thorough analysis of all frames of the input video and cannot be directly applied to real-time AR applications. We have investigated existing NPR algorithms with coherence in both general graphics and AR/VR areas. These algorithms are divided into two categories: Model Space and Image Space. We present several NPR algorithms with coherence for AR: a watercolor inspired NPR algorithm, a painterly rendering algorithm, and NPR algorithms in the model space that can support several styling effects.
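Temporal coherence concerns of the sort discussed here are often tackled by filtering per-frame style parameters over time so that stylization does not jump between frames. A minimal sketch (a plain exponential moving average, not one of the algorithms surveyed in the thesis):

```python
def smooth_frames(values, alpha=0.3):
    """Exponentially smooth a per-frame style parameter to suppress flicker.

    alpha -- blend weight of the new frame; smaller values are smoother
             but lag further behind the input.
    """
    smoothed = []
    state = None
    for v in values:
        # Blend each new frame's value with the running estimate.
        state = v if state is None else alpha * v + (1 - alpha) * state
        smoothed.append(state)
    return smoothed
```

The trade-off is visible in the `alpha` parameter: too little smoothing flickers, too much makes the stylization swim behind moving content, which is why real-time AR cannot simply reuse offline video algorithms that analyze all frames in advance.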
|
39 |
Interactive Image-space Point Cloud Rendering with Transparency and Shadows
Dobrev, Petar, Rosenthal, Paul, Linsen, Lars 24 June 2011 (links) (PDF)
Point-based rendering methods have proven to be effective for the display of large point cloud surface models. For a realistic visualization of the models, transparency and shadows are essential features. We propose a method for point cloud rendering with transparency and shadows at interactive rates. Our approach does not require any global or local surface reconstruction method, but operates directly on the point cloud. All passes are executed in image space and no pre-computation steps are required. The underlying technique for our approach is a depth peeling method for point cloud surface representations. Once a sorted sequence of surface layers has been detected, the layers can be blended front to back with given opacity values to obtain renderings with transparency. These computation steps achieve interactive frame rates. For renderings with shadows, we determine a point cloud shadow texture that stores, for each point of a point cloud, whether it is lit by a given light source. The extraction of the layer of lit points is again obtained using the depth peeling technique. For the shadow texture computation, we also apply a Monte-Carlo integration method to approximate light from an area light source, leading to soft shadows. Shadow computations for point light sources are executed at interactive frame rates. Shadow computations for area light sources are performed at interactive or near-interactive frame rates depending on the approximation quality.
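The front-to-back blending step the abstract describes corresponds to standard alpha compositing of the sorted layers. A scalar sketch of that accumulation (per-channel colors work the same way):

```python
def composite_front_to_back(layers):
    """Blend depth-peeled surface layers front to back.

    layers -- list of (color, opacity) pairs, nearest layer first
    """
    color, transmittance = 0.0, 1.0
    for layer_color, opacity in layers:
        # Each layer contributes in proportion to the light still passing
        # through all layers in front of it.
        color += transmittance * opacity * layer_color
        transmittance *= (1.0 - opacity)
    return color, 1.0 - transmittance
```

Front-to-back order also permits early termination: once `transmittance` is near zero, deeper layers cannot change the pixel, which helps keep the peeling passes interactive.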
|
40 |
Preenchimento e iluminação interativa de modelos 2.5D / Interactive Shading and Lighting of 2.5D Models
Marques, Bruno Augusto Dorta January 2014
Advisor: Prof. Dr. João Paulo Gois / Master's dissertation - Universidade Federal do ABC, Graduate Program in Computer Science, 2015. / Recent advances for designing and animating cartoons have incorporated depth and orientation cues such as shading and lighting effects, as well as the simulation of 3D geometrical transformations. These features improve the visual perception of cartoon models while increasing the artists' flexibility to achieve distinctive design styles. A recent advance that has gained attention in the last years is 2.5D modeling, which simulates 3D transformations from a set of 2D vector arts. It thereby creates not only the perception of animated 3D orientation, but also automates the in-betweening process. However, current 2.5D modeling techniques do not allow the use of interactive shading effects. In this work we approach the problem of delivering interactive 3D shading effects to 2.5D modeling. Our technique relies on the graphics pipeline to infer relief and to simulate the 3D transformations of the shading effects inside the 2D models in real time. We demonstrate the application on Phong, Gooch, and cel shading, as well as environment mapping, fur simulation, animated texture mapping, and (object-space and screen-space) texture hatching.
|