  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Ray tracing on multiprocessor systems

McNeill, Michael D. J. January 1993
No description available.
2

A shader based approach to painterly rendering

Pal, Kaushik 15 November 2004
The purpose of this thesis is to develop a texture-based painterly shader that renders computer-generated objects or scenes with strokes visually similar to paint media such as watercolor and oil paint, or to dry media such as crayons and chalk. The method requires an input scene in the form of three-dimensional polygonal or NURBS meshes. While the structure of the meshes and the lighting in the scene both play a crucial role in the final appearance of the scene, the painterly look is imparted through a shader; the method is therefore essentially a rendering technique. Several modifiable parameters in the shader give the user artistic freedom while introducing some amount of automation into the painterly rendering process.
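One common ingredient of such texture-based painterly shaders is quantizing the computed shading intensity into a small set of stroke texture layers, so that darker regions receive denser strokes. The sketch below is only an illustration of that idea under assumed conventions (layer count, darker-means-denser ordering); it is not the shader developed in the thesis:

```python
def pick_stroke_layer(intensity, layers=4):
    """Quantize a shading intensity in [0, 1] into one of `layers` stroke
    texture layers. Higher return values denote denser stroke textures,
    so darker shading selects a denser layer."""
    i = min(layers - 1, int(intensity * layers))
    return layers - 1 - i  # darkest shading -> highest (densest) layer index
```

In a full shader, the returned index would select which stroke texture to sample at the current surface point; blending between adjacent layers avoids hard banding.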
4

Fact or fiction? : photography merging genres in children's picturebooks

McKelvey, Bridgette January 2008
This paper explores photography in children’s picturebooks and its ability to extend image-making and reading by creating a hybrid genre that merges real and non-real worlds. In analysing the use of photography in such a hybrid genre, the work of Lauren Child (2006, 2001a, 2001b, 2000), Polly Borland (2006), Shaun Tan (2007, 2000, 1998) and Dave McKean (2004a, 2004b, 1995) is deconstructed. These artists utilise photography in contemporary picturebooks that are fictional. In addition, David Doubilet’s images (1990, 1989, 1984, 1980) are discussed, which fuse underwater photojournalism with art, for factual outputs. This research uncovers a gap in picturebook literature and creates a new hybrid by merging genres to produce a work that is both factual and fictional. The research methodology in this study includes a brief overview of photography and notions of truth, contemporary picturebook trend theory, use of a student focus group, industry collaborations and workshops, and environmental education pedagogy. This thesis outlines summaries of research outcomes, not the least of which is the capacity for photography to enrich narrative accounts by providing multilayered information, character perspectives and/or a metafictive experience. These research outcomes are then applied to the process of creating such a hybrid children’s picturebook.
5

Sampling Methods in Ray-Based Global Illumination

Cline, David 28 July 2007
In computer graphics, algorithms that attempt to create photographic images by simulating light transport are collectively known as Global Illumination methods. The most versatile of these are based on ray tracing (following ray paths through a scene), and numerical integration using random or quasi-random sampling. While ray tracing and sampling methods in global illumination have progressed much in the last two decades, the goal of fast and accurate simulation of light transport remains elusive. This dissertation presents a number of new sampling methods that attempt to address some of the shortcomings of existing global illumination algorithms. The first part of the dissertation concentrates on memory issues related to ray tracing of large scenes. In this part, we present memory-efficient lightweight bounding volumes as a data structure that can substantially reduce the memory overhead of a ray tracer, allowing more complicated scenes to be ray traced without complicated caching schemes. Part two of the dissertation concerns itself with sampling algorithms related to direct lighting, an important subset of global illumination. In this part, we develop two-stage importance sampling to sample the product of the BRDF function and a large light source such as an environment map. We then extend this method to include all three terms of the direct lighting equation, sampling the triple product of the BRDF, lighting and visibility. We show that the new sampling methods have a number of advantages over existing direct lighting algorithms, including comparatively low memory overhead, little precomputation, and the ability to sample all three terms of the direct lighting equation. Finally, the third part of the dissertation discusses sampling algorithms in the context of general global illumination.
In this part, we develop two new algorithms that attempt to improve the sampling distribution over existing techniques by exploiting information gained during the course of sampling. The first of these methods, energy redistribution path tracing, works by using path mutation to spread energy, and thus share sampling information, between pixels. The second method, sample swarming, shares information gained during sampling by keeping importance maps for each pixel in the rendered image. Whenever a new pixel is to be rendered, the maps from neighboring pixels are averaged, propagating importance information through the scene. We demonstrate that both of these methods can perform substantially better than existing global illumination algorithms in a number of common rendering contexts.
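The two-stage product sampling the abstract describes (first a marginal over one variable of the tabulated product, then a conditional over the other) can be sketched with inverse-CDF sampling. This is an illustrative Python sketch, not the dissertation's implementation; the tiny tabulated environment map and the `brdf(row, col)` callback are assumptions for the example:

```python
def build_cdf(weights):
    """Cumulative distribution over a list of non-negative weights."""
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return cdf

def sample_cdf(cdf, u):
    """Invert the CDF: return the index of the bin containing u in [0, 1)."""
    for i, c in enumerate(cdf):
        if u < c:
            return i
    return len(cdf) - 1

def sample_product(env_rows, brdf, u1, u2):
    """Two-stage sampling of a tabulated product env * brdf:
    stage 1 picks a row from the product-weighted marginal,
    stage 2 picks a column from the conditional within that row."""
    row_weights = [sum(brdf(r, c) * v for c, v in enumerate(row))
                   for r, row in enumerate(env_rows)]
    r = sample_cdf(build_cdf(row_weights), u1)
    col_weights = [brdf(r, c) * v for c, v in enumerate(env_rows[r])]
    c = sample_cdf(build_cdf(col_weights), u2)
    return r, c
```

With a single bright texel dominating the environment map, almost all samples land on it, which is exactly the variance-reduction behaviour product sampling is after.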
6

The Paintings of Jeff Koons: 1994 - 2008

Zoller, Ian J. January 2010
"The Paintings of Jeff Koons: 1994 - 2008" is an in-depth look at the paintings of an artist who is still primarily known for his sculptural work of the 1980s. This thesis examines Koons' paintings in light of his previous work and looks at his studio practices, sources, and connections to Photorealism, Surrealism, and Duchamp, among others. The thesis contends that a greater understanding and appreciation of Koons' paintings is necessary in order to grasp the importance of his entire oeuvre. / Art History
7

Computer generated lighting techniques: the study of mood in an interior visualisation

Marshall, Bronwyn Gillian 21 September 2009
The report investigates computer generated (CG) lighting techniques with a focus on the rendering of interior architectural visualisations. With rapid advancements in CG technology, the demand and expectation for greater photorealism in visualisations are increasing. The tools to achieve this are widely available and fairly easy to apply; however, renderings produced on a local scale still display mere functionality and lack visual appeal. The research discusses how design principles and aesthetics can be used effectively to create visual interest and convey mood in a visualisation, with strong attention to the elements defined as fundamental to achieving photorealism. The focus is on a solid understanding of CG lighting techniques and principles in order to achieve high-quality, dynamic visualisations. Case studies examine the work of lighting artist James Turrell and 3D artist Jose Pedro Costa and apply the findings to a creative project encompassing the discussions in the report. The result is the completion of three photorealistic renderings of an interior visualisation, using different CG lighting techniques to convey mood. The research provides a platform for specialisation in the 3D environment and encourages a multidisciplinary approach to learning.
8

Photorealistic Surface Rendering with Microfacet Theory

Dupuy, Jonathan 26 November 2015
Photorealistic rendering involves the numeric resolution of physically accurate light/matter interactions which, despite the tremendous and continuously increasing computational power now at our disposal, is nowhere near being a quick and simple task for our computers. This is mainly due to the way we represent objects: in order to reproduce the subtle interactions that create detail, tremendous amounts of geometry need to be queried. Hence, at render time, this complexity leads to heavy input/output operations which, combined with numerically complex filtering operators, require unreasonable amounts of computation time to guarantee artifact-free images. In order to alleviate such issues under today's constraints, a multiscale representation for matter must be derived.
In this thesis, we derive such a representation for matter whose interface can be modelled as a displaced surface, a configuration that is typically simulated with displacement texture mapping in computer graphics. Our representation is derived within the realm of microfacet theory (a framework originally designed to model reflection from rough surfaces), which we review and augment in two respects. First, we make the theory applicable across multiple scales by extending it to support noncentral microfacet statistics. Second, we derive an inversion procedure that retrieves microfacet statistics from backscattering reflection evaluations. We show how this augmented framework may be applied to derive a general and efficient (although approximate) down-sampling operator for displacement texture maps that (a) preserves the anisotropy exhibited by light transport at any resolution, (b) can be applied prior to rendering and stored in MIP texture maps to drastically reduce the number of input/output operations, and (c) considerably simplifies per-pixel filtering operations, resulting overall in shorter rendering times. To validate and demonstrate the effectiveness of our operator, we render antialiased photorealistic images and compare them against ground truth. In addition, we provide C++ implementations throughout the dissertation to facilitate the reproduction of the presented results. We conclude with a discussion of the limitations of our approach, and of avenues toward a more general multiscale representation for matter.
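The core idea behind such a down-sampling operator — that surface detail filtered out of a displacement map must not vanish but reappear as microfacet roughness — can be illustrated with a toy MIP reduction over surface slopes. This is a hedged sketch of the general principle (in the spirit of moment-based normal-map filtering), not the thesis's actual operator:

```python
def mip_reduce(slopes):
    """One MIP reduction of an even-sized square grid of surface slopes.
    Each coarse texel stores (mean slope, slope variance) accumulated from
    its four children, so averaged-out detail survives as roughness."""
    n = len(slopes)
    out = []
    for i in range(0, n, 2):
        row = []
        for j in range(0, n, 2):
            kids = [slopes[i + di][j + dj] for di in (0, 1) for dj in (0, 1)]
            mean = sum(kids) / 4.0
            second = sum(k * k for k in kids) / 4.0
            row.append((mean, second - mean * mean))  # variance = E[x^2] - E[x]^2
        out.append(row)
    return out
```

A flat-but-bumpy patch (slopes alternating between +1 and -1) reduces to mean slope 0 with nonzero variance: the coarse texel is flat on average yet rough, which is what keeps highlights consistent across resolutions.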
9

Digital Compositing for Photorealism and Lighting in Chroma key film studio

Andrijasevic, Neda, Johansson, Mirjam January 2012
Photorealism is what visual effects are all about most of the time. This report covers digital compositing and studio lighting in relation to Chroma key film material, aimed at giving a photorealistic impression. One of the problems identified in this report is that compositors may receive Chroma key footage where the lighting is done poorly, which means a lot of extra work for the compositors and may even make it impossible to create the desired end result. Another problem recognized is that the knowledge these professions possess is often tacit, not available in texts or even functionally defined. Considering these problems, the purpose of this report is to articulate and test that tacit knowledge with respect to these research questions: Which factors can alter the photorealistic impression of filmed Chroma key material? To what extent can different factors be altered in the compositing process for a photorealistic result? How can a photorealistic result from composited Chroma key material be enabled and facilitated, with a focus on studio lighting? The methods used to answer these questions are interviews with compositors, a case study of a small video production, and the production of video clips, including studio lighting and compositing. While professionals often write about the importance of consistency in image characteristics between the different elements that are composited together, this report defines which specific features ought to be consistent for a photorealistic result. Further findings focus on the limitations of the compositor, i.e. the features that can be manipulated in compositing and the features that must be set correctly when filming in the studio to enable a photorealistic outcome. Nonetheless, the main focus is on the features of the lighting set in the Chroma key film studio. In fact, many features are crucial for enabling and facilitating the compositing of a photorealistic end product.
While some of the findings are new, others confirm what has already been presented.
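The keying step at the heart of Chroma key compositing can be sketched as a simple per-pixel difference matte: a pixel becomes transparent in proportion to how much green dominates the other two channels. This is an illustrative sketch of the standard technique, not an algorithm from the thesis; the `spill` parameter and the despill rule are assumptions for the example:

```python
def green_screen_alpha(r, g, b, spill=1.0):
    """Difference matte for green-screen footage, channels in [0, 1].
    Alpha shrinks as green dominates r and b; on partially keyed pixels
    the green channel is clamped (despilled) to suppress green fringing.
    Returns (alpha, despilled_green)."""
    dominance = g - max(r, b)                      # how "green-screen" the pixel is
    alpha = 1.0 - max(0.0, min(1.0, spill * dominance))
    g_out = min(g, max(r, b)) if alpha < 1.0 else g  # despill keyed pixels
    return alpha, g_out
```

Poorly lit green screens show up here directly: uneven illumination makes `dominance` vary across the backdrop, so no single `spill` setting yields a clean matte — which is why the report's focus on studio lighting matters to compositors.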
10

Integration of particle systems with collision detection in ray tracing environments

Steigleder, Mauro January 1997
Finding a way to create photorealistic images has been a goal of Computer Graphics for many years [GLA 89]. In this respect, the aspects of main importance are modeling and illumination. Considering modeling, realism is very hard to obtain when fuzzy objects are modeled with traditional techniques; examples of such objects include fire, smoke, clouds and water. With this in mind, Reeves [REE 83] introduced a technique named particle systems for the modeling of fire and explosions. A particle system can be seen as a set of particles that evolves over time.
The procedures involved in the animation of particle systems are very simple. Basically, at each time instant, new particles are generated, the attributes of the old ones are changed, and particles may be extinguished according to predefined rules. As the particles of a system are dynamic entities, particle systems are especially suitable for use in animation. Among the main advantages of particle systems over traditional techniques are the ease of obtaining effects such as motion blur over the particles, the small amount of data needed for the global modeling of a phenomenon, control by stochastic processes, an adjustable level of detail and great control over their deformations. However, particle systems present some limitations and restrictions that have caused little development of specific algorithms in this area. Chief among these limitations are the difficulty of obtaining realistic shadow and reflection effects, the high memory requirements, and the fact that particle systems need a specific animation process for each effect to be modeled. Few works have been developed specifically to solve these problems; most address the modeling of phenomena through the use of particle systems. With these deficiencies in mind, this work presents methods for solving them. A method is presented that makes the integration of particle systems and ray tracing practical through the use of a three-dimensional grid. Techniques are also presented to eliminate aliasing effects on the particles and to reduce the amount of memory required for the storage of particle systems. Considering particle system animation, a technique is also presented to accelerate collision detection between particle systems and the objects of a scene, based on the use of a five-dimensional grid. Aspects related to the implementation, processing time and acceleration factors are presented at the end of the work, as well as possible future extensions and ongoing work.
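The per-frame update loop the abstract describes (generate new particles, change the attributes of old ones, extinguish particles by rule) can be sketched directly. This is a minimal illustration of that loop, not the thesis's implementation; the emission distribution and age-based death rule are assumptions for the example:

```python
import random

def step_particles(particles, emit_rate, lifetime, dt, rng=random.Random(0)):
    """One frame of a particle system: age and advect the survivors,
    cull particles past their lifetime, then emit new ones with
    jittered initial velocities (stochastic control, as in Reeves)."""
    survivors = []
    for p in particles:
        age = p["age"] + dt
        if age < lifetime:  # extinction rule: die after `lifetime` seconds
            pos = tuple(x + v * dt for x, v in zip(p["pos"], p["vel"]))
            survivors.append({"pos": pos, "vel": p["vel"], "age": age})
    for _ in range(emit_rate):  # emission: spawn at the origin, upward jitter
        vel = (rng.uniform(-1, 1), rng.uniform(1, 2), rng.uniform(-1, 1))
        survivors.append({"pos": (0.0, 0.0, 0.0), "vel": vel, "age": 0.0})
    return survivors
```

After enough steps the population reaches a steady state (births balance deaths), which is why a uniform spatial grid, as proposed in the work, pays off for accelerating per-particle collision queries against scene objects.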
