
Preenchimento e iluminação interativa de modelos 2.5D / Interactive Filling and Lighting of 2.5D Models

Marques, Bruno Augusto Dorta January 2014 (has links)
Advisor: Prof. Dr. João Paulo Gois / Master's dissertation - Universidade Federal do ABC, Graduate Program in Computer Science (Programa de Pós-Graduação em Ciência da Computação), 2015. / Recent advances in designing and animating cartoons have incorporated depth and orientation cues such as shading and lighting effects, as well as the simulation of 3D geometric transformations. These features improve the visual perception of cartoon models while increasing the artist's flexibility to achieve distinctive design styles. An advance that has gained attention in recent years is 2.5D modeling, which simulates 3D transformations from a set of 2D vector arts. It thereby creates not only the perception of animated 3D orientation, but also automates the in-betweening process. However, current 2.5D modeling techniques do not allow the use of interactive shading effects. In this work we approach the problem of delivering interactive 3D shading effects to 2.5D modeling. Our technique relies on the graphics pipeline (GPU) to infer relief and to simulate the 3D transformations of the shading effects inside the 2D models in real time. We demonstrate the application of Phong, Gooch, and cel shading, as well as environment mapping, fur simulation, animated texture mapping, and (object-space and screen-space) texture hatching.
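Of the shading effects listed above, Phong shading is the easiest to make concrete. As an illustrative sketch only - a single-point CPU evaluation in Python, not the thesis's GPU implementation - the classic model combines ambient, diffuse, and specular terms:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir, base_color,
          ka=0.1, kd=0.7, ks=0.2, shininess=32.0):
    """Classic Phong shading for a single surface point.

    normal, light_dir, view_dir: 3-vectors (normalized internally).
    base_color: RGB triple in [0, 1].
    """
    n = normalize(np.asarray(normal, dtype=float))
    l = normalize(np.asarray(light_dir, dtype=float))
    v = normalize(np.asarray(view_dir, dtype=float))
    diff = max(0.0, float(n @ l))            # Lambertian term
    r = 2.0 * diff * n - l                   # reflection of l about n
    spec = max(0.0, float(r @ v)) ** shininess if diff > 0 else 0.0
    color = np.asarray(base_color, dtype=float)
    return np.clip(ka * color + kd * diff * color + ks * spec, 0.0, 1.0)

# Light and viewer head-on: full diffuse plus a strong white highlight.
print(phong([0, 0, 1], [0, 0, 1], [0, 0, 1], [1.0, 0.0, 0.0]))
```

The other effects in the list (cel shading, environment mapping, hatching) replace or requantize these terms while keeping the same normal and light inputs.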

Considerations on Technical Sketch Generation from 3D Scanned Cultural Heritage

Hörr, Christian, Lindinger, Elisabeth, Brunnett, Guido 14 September 2009 (has links) (PDF)
Drawing sketches is certainly one of the most important, but at the same time most elaborate, parts of archaeological work. 3D scanning technology is currently affording a number of new applications, only one of which is using virtual copies instead of the originals as the basis for documentation. Our major contribution is a set of methods for automatically generating stylized images from 3D models. These are not only intuitive and easy to read, but also more objective and accurate than traditional drawings. Besides presenting some other useful tools, we show several examples from our daily work proving that the system accelerates the whole documentation process considerably.

Výuková hra na platformě XNA / Educational Game on XNA Platform

Vlková, Lenka January 2010 (has links)
This work deals with the design and implementation of a game based on the XNA platform. It describes the platform and its possibilities for game development on both the PC and the Xbox 360 console. The implemented game is called NanoHeal and is about the treatment of various health problems. The work also investigates graphics techniques such as non-photorealistic rendering, the depth-of-field effect, and particle systems, which are used to achieve the game's distinctive look. Key features of the game were evaluated in an online questionnaire.
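The particle systems mentioned above follow a standard pattern that is easy to sketch: a fixed pool of particles with position, velocity, and lifetime, integrated each frame and recycled at the emitter when they expire. This is a generic illustration, not the NanoHeal implementation; all names are hypothetical:

```python
import random

class ParticleSystem:
    """A minimal CPU particle system of the kind used for game effects."""

    def __init__(self, n, emitter=(0.0, 0.0), seed=42):
        self.emitter = emitter
        self.rng = random.Random(seed)
        self.particles = [self._spawn() for _ in range(n)]

    def _spawn(self):
        # New particle at the emitter with a random upward velocity.
        return {
            "pos": list(self.emitter),
            "vel": [self.rng.uniform(-1, 1), self.rng.uniform(1, 3)],
            "life": self.rng.uniform(0.5, 2.0),
        }

    def update(self, dt, gravity=-9.8):
        for i, p in enumerate(self.particles):
            p["life"] -= dt
            if p["life"] <= 0:
                self.particles[i] = self._spawn()   # recycle dead particle
                continue
            p["vel"][1] += gravity * dt             # Euler integration
            p["pos"][0] += p["vel"][0] * dt
            p["pos"][1] += p["vel"][1] * dt

ps = ParticleSystem(100)
for _ in range(60):            # simulate one second at 60 updates/s
    ps.update(1 / 60)
print(len(ps.particles))       # pool size stays constant
```

In an XNA game the same update would run per frame, with each particle drawn as a textured sprite.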

Realistické zobrazení mraků a kouře / Realistic Rendering of Smoke and Clouds

Kopidol, Jan January 2008 (has links)
This work discusses methods for rendering volumetric data such as clouds or smoke in computer graphics, and the implementation of this feature in an existing application. The first part summarizes the techniques and tricks used in computer graphics to display such objects in a scene, their pros and cons, and the most widely used techniques for displaying volumetric data. The next part focuses more closely on the chosen technique for rendering volumetric data, taking into account the behavior of light inside the volume (also called participating media) and the basic relationships used in the computation. The following part gives a short list of renderers used for realistic rendering of scenes that are suitable for implementing the selected volumetric rendering algorithm. The selected application, Blender, is described in more depth, including its inner structure, especially its rendering engine. The last part of the work is dedicated to the design, implementation, and integration of the rendering algorithm itself.
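At its simplest, the light behavior inside a participating medium mentioned above is governed by Beer-Lambert attenuation: transmittance falls off exponentially with the optical depth accumulated along the ray. A minimal sketch of that accumulation (an illustration of the principle, not Blender's renderer code):

```python
import math

def transmittance(density, step, sigma_t=1.0):
    """Beer-Lambert transmittance along a ray through sampled densities.

    density: density samples taken at equal intervals along the ray
    step:    distance between consecutive samples
    sigma_t: extinction coefficient (absorption + out-scattering)
    """
    optical_depth = sigma_t * step * sum(density)
    return math.exp(-optical_depth)

# A ray crossing a uniform smoke slab: 10 samples of density 0.5,
# spaced 0.2 units apart, give optical depth 1.0.
print(transmittance([0.5] * 10, 0.2))
```

A full volume renderer repeats this per step, also adding in-scattered light at each sample; the exponential falloff above is the core relationship.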

Image Vectorization

Price, Brian L. 31 May 2006 (has links) (PDF)
We present a new technique for creating an editable vector graphic from an object in a raster image. Object selection is performed interactively in subsecond time by calling graph cut with each mouse movement. A renderable mesh is then computed automatically for the selected object and each of its (sub)objects by (1) generating a coarse object mesh; (2) performing recursive graph cut segmentation and hierarchical ordering of subobjects; and (3) applying error-driven mesh refinement to each (sub)object. The result is a fully layered object hierarchy that facilitates object-level editing without leaving holes. Object-based vectorization compares favorably with current approaches in representation and rendering quality. Object-based vectorization and complex editing tasks are performed in a few tens of seconds.
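The graph-cut selection step above reduces to a minimum s-t cut: seeds connect pixels to virtual source ("object") and sink ("background") nodes, and the minimum cut labels every pixel. As an illustrative sketch - a textbook Edmonds-Karp max-flow on a toy graph, not the paper's optimized implementation:

```python
from collections import deque

def min_cut(capacity, s, t):
    """Edmonds-Karp max-flow; returns (max_flow, source_side_nodes).

    capacity: dict {u: {v: cap}} for a directed graph. The source-side
    set of the minimum cut is the set of nodes still reachable from s
    in the residual graph - in image terms, the "object" pixels.
    """
    # Residual capacities, with reverse edges initialized to 0.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        # Find the bottleneck along the path, then augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug
    # Nodes still reachable from s form the source side of the cut.
    reachable, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in reachable:
                reachable.add(v)
                q.append(v)
    return flow, reachable

# Toy "image" with two pixels a, b between object seed s and background seed t.
cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
flow, obj = min_cut(cap, 's', 't')
print(flow, sorted(obj - {'s'}))   # cut value 5; both pixels land on the background side
```

Real implementations use specialized max-flow solvers tuned for grid graphs to reach the subsecond interaction times quoted above.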

Photorealistic Rendering with V-ray

Rackwitz, Anja, Sterner, Markus January 2007 (has links)
What makes an image photorealistic, and how can we pinpoint and understand how our mind interprets the different elements of an image? It is proposed that the phrase "imperfect makes perfect" is the key to the photorealistic goal in today's 3D. We review the elements involved in the creation of one perfect image, such as global illumination and anti-aliasing, along with the basics of photography: how a scene is set up, color temperature, and the nature of real light. To put the different theories to the test, the common 3D software 3D Studio Max was used with the V-Ray renderer. On a field trip to IKEA communications, we were assigned a project of a room scene containing a kitchen, with a finished scene model. A kitchen was created and experimented with to reach a result in which there is no visible difference between a computer-generated image and the photograph. Our result was not what we had hoped for, due to many problems with our scene. We see this as a first step toward a scientific explanation of photorealism and of what makes something photorealistic.

Machine Learning Algorithms for Geometry Processing by Example

Kalogerakis, Evangelos 18 January 2012 (has links)
This thesis proposes machine learning algorithms for processing geometry by example. Each algorithm takes as input a collection of shapes along with exemplar values of target properties related to shape processing tasks. The goal of the algorithms is to output a function that maps from the shape data to the target properties. The learned functions can be applied to novel input shape data in order to synthesize the target properties in a style similar to the training examples. Learning such functions is particularly useful for two different types of geometry processing problems: learning functions that map to target properties required for shape interpretation and understanding, and learning functions that map to geometric attributes of animated shapes required for real-time rendering of dynamic scenes.

With respect to the first type of problem, involving shape interpretation and understanding, I demonstrate learning for shape segmentation and line illustration. For shape segmentation, the algorithms learn functions of shape data in order to perform segmentation and recognition of parts in 3D meshes simultaneously. This is in contrast to existing mesh segmentation methods, which attempt segmentation without recognition, based only on low-level geometric cues. The proposed method does not require any manual parameter tuning and achieves significant improvements over the state of the art. For line illustration, the algorithms learn functions from shape and shading data to hatching properties, given a single exemplar line illustration of a shape. Learning models of such artistic properties is extremely challenging, since hatching exhibits significant complexity as a network of overlapping curves of varying orientation, thickness, and density, as well as considerable stylistic variation. In contrast to existing algorithms that are hand-tuned or hand-designed from insight and intuition, the proposed technique offers a largely automated and potentially natural workflow for artists.

With respect to the second type of problem, involving fast computation of geometric attributes in dynamic scenes, I demonstrate algorithms for learning functions of shape animation parameters that specifically aim to take advantage of the spatial and temporal coherence in the attribute data. As a result, the learned mappings can be evaluated very efficiently at runtime. This is especially useful when traditional geometric computations are too expensive to re-estimate the shape attributes at each frame. I apply these algorithms to efficiently compute curvature and higher-order derivatives of animated surfaces. As a result, curvature-dependent tasks such as line drawing, which could previously be performed only offline for animated scenes, can now be executed in real time on modern CPU hardware.
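The learn-a-function-from-exemplars setup described above can be sketched with the simplest possible learner, ordinary least squares. The per-shape feature vectors and target values below are invented for illustration (hypothetical descriptors such as mean curvature, area, elongation) and are far simpler than the thesis's actual models:

```python
import numpy as np

# Training "shapes": one row of descriptors per shape, paired with an
# exemplar target property. Here the targets happen to equal
# X_train @ [2, 0.5, 0.1], so a linear map can fit them exactly.
X_train = np.array([
    [0.1, 2.0, 1.0],
    [0.4, 1.5, 2.0],
    [0.9, 0.5, 3.0],
    [0.2, 1.8, 1.2],
])
y_train = np.array([1.30, 1.75, 2.35, 1.42])

# Fit a linear map from features to the target property
# (least squares with an appended bias column).
A = np.hstack([X_train, np.ones((len(X_train), 1))])
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def predict(features):
    """Apply the learned function to a novel shape's features."""
    f = np.append(np.asarray(features, dtype=float), 1.0)
    return float(f @ w)

print(predict([0.3, 1.6, 1.6]))   # estimate for an unseen shape
```

The thesis's learners are far richer (e.g. classifiers over mesh faces, models exploiting temporal coherence), but the input/output contract - features of shapes in, target property values out - is the same.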

Shape depiction using Local Light Alignment : Evaluating the shape enhancement capabilities of the Local Light Alignment technique at multiple scales / Formskildring med hjälp av Local Light Alignment : Utvärdering av den formskildringsförbättrande förmågan hos Local Light Alignment över en skal-rymd

Ahlsén, Edvard January 2023 (has links)
Local Light Alignment is a new shading-based technique in the field of shape depiction, a field concerned with techniques for representing and enhancing the three-dimensional (3D) shape of objects in two-dimensional (2D) visual media. The main idea of Local Light Alignment is to locally adjust the incoming direction of light to create contrast in a way that enhances the perception of shape and surface detail. It is aimed at visual artists and uses a multiple-scale approach, in which the user can tune the enhancement of geometric features of different sizes. Local Light Alignment was published together with a new objective metric for measuring shape depiction, called the congruence score. This thesis strives to increase knowledge about both of these new techniques by investigating how varying several parameters of Local Light Alignment affects the congruence score at different scales and enhancement strengths. The parameters investigated are the range and spatial parameters of the bilateral filter used in the scale-space construction, as well as the enhancement-strength parameter of the main Local Light Alignment algorithm. The thesis also tries to identify common visual characteristics of images that score highly, to give a better intuition of what the congruence score represents. The tests are performed using four models, each representing a different field of application. For each model, several viewpoints are tested, and for every viewpoint, 244 differently parameterized renders are produced and scored. The results notably show that the spatial and range parameters both affect the controllability of the algorithm: higher values shift the scores toward the finer end of the scale space without affecting the total score. The enhancement-strength parameter is shown to affect the congruence score positively, but with diminishing returns as values approach 1.
It is also shown by example that the images showing the largest improvement in congruence score tend to exhibit such a high degree of exaggeration that they would probably not be considered good examples of shape depiction by human standards. This is identified as an interesting area for subjective evaluation in future research. These investigations will be valuable to those who create application software featuring Local Light Alignment, as well as to those wanting to employ the congruence score metric.
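The core idea described above - locally pulling the incoming light direction toward each surface normal - can be sketched in a few lines. This single-scale blend is only an illustration of the principle; the published algorithm operates per scale on a bilateral-filter scale space, with additional constraints:

```python
import numpy as np

def normalize(v):
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return v / np.where(n == 0, 1, n)

def locally_aligned_diffuse(normals, light_dir, strength):
    """Diffuse shading with the light blended toward each normal.

    Per point, the global light direction is pulled toward the surface
    normal by `strength` in [0, 1], which boosts contrast on surface
    detail; strength 0 is plain Lambertian shading, strength 1 lights
    every point head-on.
    """
    n = normalize(np.asarray(normals, dtype=float))
    l = normalize(np.asarray(light_dir, dtype=float))
    l_local = normalize((1.0 - strength) * l + strength * n)
    return np.clip(np.sum(n * l_local, axis=-1), 0.0, 1.0)

normals = np.array([[0.0, 0.0, 1.0],    # facing the light
                    [1.0, 0.0, 0.0]])   # perpendicular to the light
light = np.array([0.0, 0.0, 1.0])
print(locally_aligned_diffuse(normals, light, 0.0))  # plain Lambert
print(locally_aligned_diffuse(normals, light, 1.0))  # fully aligned
```

The enhancement-strength parameter studied in the thesis plays the role of `strength` here, which makes the reported diminishing returns near 1 intuitive: the shading saturates as every normal becomes fully lit.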
