  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

Free View Rendering for 3D Video: Edge-Aided Rendering and Depth-Based Image Inpainting

Muddala, Suryanarayana Murthy January 2015 (has links)
Three Dimensional Video (3DV) has become increasingly popular with the success of 3D cinema. Moreover, emerging display technology offers an immersive experience to the viewer without the need for visual aids such as 3D glasses. The 3DV applications Three Dimensional Television (3DTV) and Free Viewpoint Television (FTV) are promising technologies for living-room environments, providing an immersive experience and look-around capability. To deliver such an experience, these technologies require a number of camera views captured from different viewpoints. However, capturing and transmitting the required number of views is not feasible, and view rendering is therefore employed as an efficient way to produce the necessary views. Depth-image-based rendering (DIBR) is a commonly used rendering method. Although DIBR is a simple approach that can produce the desired number of views, its inherent artifacts are a major issue in view rendering. Despite much effort to tackle these artifacts over the years, rendered views still contain visible artifacts. This dissertation addresses three problems in order to improve 3DV quality: 1) how to improve rendered view quality with a direct approach, without addressing each artifact individually; 2) how to handle disocclusions (a.k.a. holes) in the rendered views in a visually plausible manner using inpainting; 3) how to reduce spatial inconsistencies in the rendered view. The first problem is tackled by an edge-aided rendering method that uses a direct approach with one-dimensional interpolation, which is applicable when the virtual camera distance is small. The second problem is addressed by a depth-based inpainting method in the virtual view, which reconstructs the missing texture at the disocclusions with background data.
The third problem is handled by a rendering method that first inpaints occlusions as a layered depth image (LDI) in the original view and then renders a spatially consistent virtual view. Objective assessments of the proposed methods show improvements over state-of-the-art rendering methods. Visual inspection shows slight improvements for intermediate views rendered from multiview video-plus-depth, and the proposed methods outperform other view rendering methods when rendering from single-view video-plus-depth. The results confirm that the proposed methods are capable of reducing rendering artifacts and producing spatially consistent virtual views. In conclusion, the view rendering methods proposed in this dissertation can support the production of high-quality virtual views from a limited number of input views. When used to create a multiscopic presentation, the outcome of this dissertation can help 3DV technologies improve the immersive experience.
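The core DIBR step the abstract refers to — warping a texture-plus-depth view to a nearby virtual camera, leaving disocclusion holes behind — can be sketched as follows. This is a minimal illustration under assumed conventions (horizontal camera shift, 8-bit depth where 255 is nearest, the usual near/far depth mapping), not the thesis's method:

```python
import numpy as np

def dibr_warp(texture, depth, baseline, focal, z_near, z_far):
    """Warp a texture+depth view to a horizontally shifted virtual camera.

    Minimal sketch of depth-image-based rendering (DIBR): each pixel is
    shifted by a disparity proportional to baseline * focal / Z. Holes
    (disocclusions) remain where background was hidden in the input view.
    """
    h, w = depth.shape
    # Convert 8-bit depth to metric Z (a common 3DV convention; assumed here).
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(baseline * focal / z).astype(int)

    virtual = np.zeros_like(texture)
    filled = np.zeros((h, w), dtype=bool)
    z_buf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]
            if 0 <= xv < w and z[y, x] < z_buf[y, xv]:  # nearest surface wins
                z_buf[y, xv] = z[y, x]
                virtual[y, xv] = texture[y, x]
                filled[y, xv] = True
    return virtual, ~filled  # ~filled marks the disocclusion holes

```

The returned hole mask is exactly what the inpainting methods in this dissertation operate on.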
332

Guiding a Path Tracer with Local Radiance Estimates

Berger, Martin January 2012 (has links)
Path tracing is a basic, statistically unbiased method for calculating the global illumination in 3D scenes. In practice the algorithm is too slow, so it is used mainly as a reference method or as a base for more advanced algorithms. This thesis explores the possibility of improving the algorithm by augmenting its sampling step, which computes outgoing directions during ray traversal through the scene. This optimization is accomplished by creating a special data structure in a preprocessing step, which describes the approximate light distribution in the scene and then aids the sampling process. The presented algorithm is implemented in the PBRT library.
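The guided-sampling idea above can be sketched with a toy data structure: a per-region histogram of cached radiance over a few direction bins, sampled in proportion to the stored values. The structure and names below are illustrative assumptions, not the thesis's actual implementation:

```python
import random

class RadianceGuide:
    """Toy sketch of guided direction sampling from local radiance estimates.

    A preprocess deposits radiance estimates into a few discrete direction
    bins; at render time, the sampler draws an outgoing direction bin in
    proportion to the cached radiance instead of uniformly, lowering
    variance where the light distribution is uneven.
    """
    def __init__(self, n_bins):
        self.n_bins = n_bins
        self.radiance = [1.0] * n_bins  # start uniform; filled in preprocess

    def deposit(self, bin_idx, value):
        self.radiance[bin_idx] += value  # accumulate preprocess estimates

    def sample(self, rng):
        # Draw a bin with probability proportional to its cached radiance.
        total = sum(self.radiance)
        u = rng.random() * total
        acc = 0.0
        for i, r in enumerate(self.radiance):
            acc += r
            if u <= acc:
                return i, r / total * self.n_bins  # bin and pdf vs. uniform
        return self.n_bins - 1, self.radiance[-1] / total * self.n_bins
```

The returned pdf factor is what an unbiased estimator would divide by, so the guiding changes variance but not the expected value.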
333

Argamassas com adição de fibras de polipropileno - estudo do comportamento reológico e mecânico. / Mortars with polypropylene fibre addition - a study of rheological and mechanical behavior.

Silva, Rosiany da Paixão 19 June 2006 (has links)
Buscando evitar as anomalias surgidas nos revestimentos de argamassa, muitos projetistas e construtores têm procurado soluções alternativas, dentre as quais aparece o emprego de argamassas com adição de fibras. Porém, o conhecimento sobre o comportamento deste compósito é empírico, principalmente no Brasil, onde inexistem pesquisas sistêmicas sobre o assunto. A partir deste contexto, o presente trabalho teve por objetivo investigar a influência da adição de fibras de polipropileno em argamassas para revestimento, no que se refere ao seu comportamento reológico e mecânico. Para avaliar o comportamento reológico, foram empregadas as técnicas de ensaio dropping ball e squeeze flow, e a avaliação da aplicabilidade quanto à execução do revestimento, utilizando o conhecimento de um operário experiente. Para avaliar o comportamento mecânico, foram utilizados ensaios relativos a resistência à tração na flexão, resistência à compressão e módulo de elasticidade dinâmico. Os compósitos de argamassa produzidos no trabalho apresentam variação na matriz e no teor de fibras. A variação na matriz compreendeu o tipo de argamassa e a dosagem de água. Foram utilizadas duas argamassas, largamente empregadas no mercado da Construção Civil, uma com baixo teor de ar incorporado (da ordem de 5%) e outra com alto teor (da ordem de 30%) e foram utilizados seis teores de água. Quanto à fibra, empregou-se um único tipo, com 6mm de comprimento, variando-se o seu teor na mistura, com cinco dosagens distintas. Como resultado, constatou-se que a adição de fibras influencia na reologia das argamassas devido (não somente) às suas características físicas e mecânicas, mas também porque ao serem introduzidas, modificam fortemente as características da matriz original, como por exemplo, as alterações do teor de ar incorporado. Assim, constatou-se que a adição de fibras na argamassa com baixo teor de ar elevou este teor; enquanto na com alto teor de ar, a adição de fibras reduziu o teor de ar. 
Estas alterações conferiram condições de aplicabilidade particulares a cada tipo de argamassa. O comportamento mecânico das argamassas também foi alterado. Dentre os “modificadores reológicos” estudados, o teor de fibras na argamassa com baixo teor de ar incorporado foi o que mais contribuiu com a alteração da propriedade mecânica; enquanto para a argamassa com alto teor de ar, o teor de água é o que mais afeta o comportamento mecânico. / Many designers and constructors have sought alternative solutions to avoid the anomalies that appear in mortar rendering; one such solution is the use of mortar with added fibres. However, knowledge of this composite's behavior is largely empirical, especially in Brazil, where no systematic research on the subject exists. In this context, the objective of this work was to investigate the influence of polypropylene fibre addition on the rheological and mechanical behavior of rendering mortar. The rheological behavior was evaluated in the laboratory using the dropping-ball and squeeze-flow methods, and applicability during rendering execution was assessed with the help of an experienced worker. The mechanical behavior was evaluated through flexural strength, compressive strength, and dynamic elastic modulus tests. The mortar composites produced in this work varied in matrix and in fibre content. The matrix variation covered the mortar type and the water dosage: two mortars widely used in the civil construction market were employed, one with low entrained-air content (about 5%) and one with high entrained-air content (about 30%), combined with six water dosages. A single polypropylene fibre type, 6 mm long, was used at five different contents in the mixes.
The results show that fibre addition alters mortar rheology not only through the physical and mechanical characteristics of the fibres themselves, but also because, once introduced, the fibres strongly modify the original matrix, for example its entrained-air content: in the low-air mortar, fibre addition increased the air content, while in the high-air mortar it decreased it. These alterations gave each mortar type its own particular application conditions. The mechanical behavior of the mortars was also altered. Among the “rheological modifiers” studied, fibre content contributed most to the change in mechanical properties for the low-air mortar, whereas for the high-air mortar the water content had the greatest effect on mechanical behavior.
334

Minimos-quadrados e aproximação de superfície de pontos: novas perspectivas e aplicações / Least squares and point-based surfaces: new perspectives and applications

Gois, João Paulo 08 May 2008 (has links)
Métodos de representação de superfícies a partir de pontos não-organizados se mantêm como uma das principais vertentes científicas que aquecem o estado-da-arte em Computação Gráfica e, significativamente, estão sendo reconhecidos como uma ferramenta interessante para definição de interfaces móveis no contexto de simulações numéricas de escoamento de fluidos. Não é difícil encontrar motivos para tais fatos: pelo lado da computação gráfica, por exemplo, a manipulação de conjuntos de pontos massivos com geometrias complexas e sujeitos a informações ruidosas ainda abre margem para novas metodologias. Já no âmbito da mecânica dos fluidos, onde os dados não são originados de scanners tridimensionais, mas sim de interfaces entre fluidos imiscíveis, mecanismos de representação de superfícies a partir de pontos não-organizados podem apresentar características computacionais e propriedades geométricas que os tornem atrativos para aplicações em simulação de fenômenos físicos. O objetivo principal dessa tese de doutorado foi, portanto, o desenvolvimento de técnicas de representação de superfícies a partir de pontos não-organizados, que sejam capazes de suprir restrições de importantes trabalhos prévios. Nesse sentido, primeiramente focalizamos a elaboração de técnicas baseadas em formulações de mínimos-quadrados-móveis e de uma técnica robusta de partição da unidade implícita adaptativa em duas vias. Além de mecanismos de representação de superfícies a partir de pontos não-organizados, também propusemos um método promissor para representação de interfaces em simulação numérica de escoamento de fluidos multifásicos. Para isso, embasamo-nos numa abordagem Lagrangeana (livre-de-malhas), fundamentada no método dos mínimos-quadrados-móveis algébricos e apresentamos diversos resultados numéricos, estudos de convergências e comparações que evidenciam o potencial dessa metodologia para simulações numéricas de fenômenos físicos.
Apesar de a contribuição principal deste trabalho ser o desenvolvimento de métodos para representação de superfícies a partir de pontos não-organizados, a experiência que adquirimos no desenvolvimento dessas técnicas nos conduziu à elaboração de mecanismos para representação de dados volumétricos não-organizados. Por conta disso, apresentamos dois mecanismos de representação a partir de dados volumétricos não-organizados com o intuito de serem aplicáveis a informações oriundas de malhas contendo células arbitrárias, isto é, propusemos a definição de um método de rendering unificado / Surface reconstruction from unorganized points has been one of the most promising research areas in computer graphics, and it has also been used successfully to define fluid interfaces in the numerical simulation of fluid flow. There are several reasons for this: in computer graphics, for instance, handling massive point sets with complicated geometries and noisy information still opens opportunities for new techniques; in numerical fluid mechanics, where the input data comes not from three-dimensional scanners but from fluid interfaces, schemes that define a surface from unorganized points can offer geometric and computational properties useful to numerical fluid flow simulation. The main goal of this project was the development of novel techniques for reconstructing surfaces from unorganized points that overcome the main drawbacks of important previous work. To that end, we first focused on techniques based on moving least squares and on a robust twofold partition-of-unity implicits method. In addition, we proposed a novel scheme for defining fluid flow interfaces, adopting a meshless Lagrangian approach based on algebraic moving-least-squares surfaces.
We also presented several numerical results, convergence tests, and comparisons, which demonstrate the potential of the method for numerical simulation of physical phenomena. Although our main contributions concern surface reconstruction from points, we also proposed methods for function reconstruction from unorganized volumetric data. Thus, we present two schemes to represent volumetric data from arbitrary meshes, i.e., a unified rendering scheme.
335

[en] DISTRIBUTED VISUALIZATION USING CLUSTERS OF PCS / [pt] VISUALIZAÇÃO DISTRIBUÍDA UTILIZANDO AGRUPAMENTOS DE PCS

FREDERICO RODRIGUES ABRAHAM 20 June 2005 (has links)
[pt] Este trabalho apresenta um novo sistema de renderização distribuída destinado ao uso em agrupamentos de PCs. É feita uma extensão à linha de produção gráfica convencional para uma linha de produção gráfica distribuída, que pelo uso de múltiplas linhas de execução permite paralelizar as operações feitas na CPU, na GPU e na rede que interliga os PCs do agrupamento. Este sistema serviu de base para a implementação e o teste de três arquiteturas para renderização distribuída: uma arquitetura com ordenação no início, uma arquitetura com ordenação no fim para renderização volumétrica e uma arquitetura híbrida que tenta combinar as vantagens da ordenação no início e da ordenação no fim. É apresentado um novo algoritmo de balanceamento de carga baseado nos tempos de renderização do quadro anterior. O algoritmo é de implementação muito simples e funciona bem tanto em aplicações com gargalo na geometria quanto em aplicações com gargalo na rasterização. Este trabalho também propõe uma estratégia de distribuição de trabalho entre os computadores de renderização do agrupamento que usa eficientemente os recursos gráficos disponíveis, melhorando assim o desempenho da renderização. Um novo algoritmo de partição paralela do modelo entre os computadores do agrupamento é proposto para a arquitetura híbrida. / [en] This work presents a new distributed rendering system designed for PC clusters. The conventional graphics pipeline is extended to a distributed pipeline that uses multiple threads to parallelize the operations performed on the CPU, the GPU, and the network. This system was the basis for the implementation of three distributed rendering architectures: a sort-first architecture, a sort-last architecture for volume rendering, and a hybrid architecture that seeks to combine the advantages of both sort-first and sort-last architectures. A new load-balancing algorithm based on the rendering times of the previous frame is proposed.
The algorithm is very simple to implement and works well for both geometry-bound and rasterization-bound models. A new strategy to assign tiles to rendering nodes is proposed that makes effective use of the available graphics resources, thus improving rendering performance. A new parallel model-partition algorithm is proposed for the hybrid architecture.
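The previous-frame load-balancing idea can be sketched in a few lines. The proportional-to-measured-speed formula below is an assumption for illustration, not the thesis's exact rule: each node's new sort-first tile width is proportional to how fast it rendered its old tile.

```python
def rebalance(tile_widths, render_times, screen_width):
    """Rebalance sort-first tile widths from last frame's render times.

    Sketch: a node's measured speed is old_width / render_time; new widths
    are proportional to speed, renormalized to the full screen width.
    """
    speeds = [w / t for w, t in zip(tile_widths, render_times)]
    total = sum(speeds)
    new_widths = [max(1, round(screen_width * s / total)) for s in speeds]
    # Fix rounding so the tiles still cover the whole screen exactly.
    new_widths[-1] += screen_width - sum(new_widths)
    return new_widths
```

A node that took twice as long last frame gets roughly half the screen share next frame, which is why the scheme adapts to both geometry and rasterization bottlenecks without profiling the scene.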
336

Legible Visualization of Semi-Transparent Objects using Light Transport / Visualisation d'objets semi-transparents basée sur le transport lumineux

Murray, David 10 December 2018 (has links)
Explorer et comprendre des données volumétriques ou surfaciques est un des nombreux enjeux du domaine de l'informatique graphique. L'apparence de telles données peut être modélisée et visualisée en utilisant la théorie du transport lumineux. Afin de rendre une telle visualisation compréhensible, le recours à des matériaux transparents est très répandu. Si des solutions existent pour simuler correctement la propagation de la lumière et ainsi afficher des objets semi-transparents, offrir une visualisation compréhensible reste un sujet de recherche ouvert. Le but de cette thèse est double. Tout d'abord, une analyse approfondie du modèle optique pour le transport de la lumière et ses implications sur la génération d'images par ordinateur doit être effectuée. Ensuite, cette connaissance pourra être utilisée pour proposer des solutions efficaces et fiables pour visualiser des milieux transparents et semi-transparents. Dans ce manuscrit, premièrement, nous présentons le modèle optique communément utilisé pour modéliser le transport de la lumière dans des milieux participatifs, sa simplification si l'on réduit la situation à des surfaces et la manière dont ces modèles sont utilisés en informatique graphique pour générer des images. Deuxièmement, nous présentons une solution pour améliorer la représentation des formes dans le cas particulier des surfaces. La technique proposée utilise le transport lumineux comme base pour modifier le processus d'éclairage et modifier l'apparence et l'opacité des matériaux. Troisièmement, nous nous concentrons sur la problématique de l’utilisation de données volumétriques au lieu du cas simplifié des surfaces. Dans ce cas, le fait de ne modifier que les propriétés du matériau a un impact limité. Nous étudions donc comment le transport lumineux peut être utilisé pour fournir des informations utiles à la compréhension de milieux participatifs. 
Enfin, nous présentons notre modèle de transport lumineux pour les milieux participatifs, qui vise à explorer une région d'intérêt d’un volume. / Exploring and understanding volumetric or surface data is one of the challenges of computer graphics. The appearance of these data can be modeled and visualized using light transport theory. To make such visualizations understandable, transparent materials are widely used. While solutions exist to correctly simulate light propagation and display semi-transparent objects, offering an understandable visualization remains an open research topic. The goal of this thesis is twofold. First, an in-depth analysis of the optical model for light transport and its implications for computer-generated images is performed. Second, this knowledge is used to tackle the problem of providing efficient and reliable solutions for visualizing transparent and semi-transparent media. In this manuscript, we first introduce the general optical model for light transport in participating media, its simplification to surfaces, and how it is used in computer graphics to generate images. Second, we present a solution to improve shape depiction in the special case of surfaces. The proposed technique uses light transport as a basis to change the lighting process and to modify material appearance and opacity. Third, we focus on the problem of using full volumetric data instead of the simplified case of surfaces. In this case, changing only the material properties has a limited impact, so we study how light transport can be used to provide information useful for understanding participating media. Last, we present our light transport model for participating media, which aims at exploring a region of interest within a volume.
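The absorption part of the optical model for participating media mentioned above reduces, along a single ray, to Beer-Lambert attenuation: transmittance T = exp(-Σ σ_t · density · Δs). A minimal ray-marching sketch (the standard textbook formulation, not this thesis's specific model):

```python
import math

def ray_march(densities, step, sigma_t):
    """Accumulate transmittance along a ray through a participating medium.

    Beer-Lambert absorption: T = exp(-sum(sigma_t * density * step)),
    accumulated front to back over discrete samples along the ray.
    """
    transmittance = 1.0
    for d in densities:
        transmittance *= math.exp(-sigma_t * d * step)
    return transmittance
```

A full emission-absorption renderer would additionally composite each sample's emitted radiance weighted by the transmittance accumulated so far; the attenuation loop above is the piece that makes objects behind dense regions fade out.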
337

Machine Learning Algorithms for Geometry Processing by Example

Kalogerakis, Evangelos 18 January 2012 (has links)
This thesis proposes machine learning algorithms for processing geometry by example. Each algorithm takes as input a collection of shapes along with exemplar values of target properties related to shape processing tasks. The goal of the algorithms is to output a function that maps from the shape data to the target properties. The learned functions can be applied to novel input shape data in order to synthesize the target properties with style similar to the training examples. Learning such functions is particularly useful for two different types of geometry processing problems. The first type of problems involves learning functions that map to target properties required for shape interpretation and understanding. The second type of problems involves learning functions that map to geometric attributes of animated shapes required for real-time rendering of dynamic scenes. With respect to the first type of problems involving shape interpretation and understanding, I demonstrate learning for shape segmentation and line illustration. For shape segmentation, the algorithms learn functions of shape data in order to perform segmentation and recognition of parts in 3D meshes simultaneously. This is in contrast to existing mesh segmentation methods that attempt segmentation without recognition based only on low-level geometric cues. The proposed method does not require any manual parameter tuning and achieves significant improvements in results over the state-of-the-art. For line illustration, the algorithms learn functions from shape and shading data to hatching properties, given a single exemplar line illustration of a shape. Learning models of such artistic properties is extremely challenging, since hatching exhibits significant complexity, as a network of overlapping curves of varying orientation, thickness, and density, with considerable stylistic variation.
In contrast to existing algorithms that are hand-tuned or hand-designed from insight and intuition, the proposed technique offers a largely automated and potentially natural workflow for artists. With respect to the second type of problems involving fast computations of geometric attributes in dynamic scenes, I demonstrate algorithms for learning functions of shape animation parameters that specifically aim at taking advantage of the spatial and temporal coherence in the attribute data. As a result, the learned mappings can be evaluated very efficiently during runtime. This is especially useful when traditional geometric computations are too expensive to re-estimate the shape attributes at each frame. I apply such algorithms to efficiently compute curvature and high-order derivatives of animated surfaces. As a result, curvature-dependent tasks, such as line drawing, which could be previously performed only offline for animated scenes, can now be executed in real-time on modern CPU hardware.
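The learned mapping from geometric features to part labels that the abstract describes can be illustrated with a deliberately simple stand-in classifier. The thesis uses far richer models; this sketch only shows the shape of the problem (exemplar feature vectors with labels vote on the label of a new face):

```python
def knn_label(features, labels, query, k=3):
    """Label a mesh face from labeled exemplar faces by k-NN vote.

    Toy stand-in for the learned features -> part-label mapping: exemplar
    faces nearest to the query in feature space vote on its label.
    """
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(zip(features, labels), key=lambda fl: dist(fl[0], query))
    votes = {}
    for _, lab in ranked[:k]:
        votes[lab] = votes.get(lab, 0) + 1
    return max(votes, key=votes.get)
```

The point of the example-based formulation is exactly this interface: any new shape's per-face features can be pushed through the learned function to get simultaneous segmentation and recognition.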
339

Example-based Rendering of Textural Phenomena

Kwatra, Vivek 19 July 2005 (has links)
This thesis explores synthesis by example as a paradigm for rendering real-world phenomena. In particular, phenomena that can be visually described as texture are considered. We exploit, for synthesis, the self-repeating nature of the visual elements constituting these texture exemplars. Techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures are presented. For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In one of these techniques, large patches of input texture are automatically aligned and seamlessly stitched with each other to generate realistic-looking images and videos. The second technique is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar. We also present a technique for controllable texture synthesis. In particular, it allows for the generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain the structural properties, such as local shape, size, and orientation, of the input texture even as they move according to the specified flow. We cast this problem into an optimization framework that tries to simultaneously satisfy the two (potentially competing) objectives of similarity to the input texture and consistency with the flow field. This optimization is a simple extension of the approach used for unconstrained texture synthesis. A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms. The goal of such algorithms would be to use texture exemplars to render animations for which certain behavioral characteristics need to be controlled.
Our motion-controlled texture synthesis technique is an instantiation of this framework where the characteristic being controlled is motion represented as a flow field.
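The patch-alignment-and-stitching idea above can be illustrated with a 1D toy: the output grows patch by patch, each new patch chosen from the exemplar so that its overlap region best matches the output's current tail. This greedy sketch omits the seam optimization and the global energy formulation of the actual techniques:

```python
def synthesize(exemplar, out_len, patch, overlap):
    """Greedy 1D patch-based texture synthesis (toy sketch).

    The output is seeded with the first exemplar patch; each subsequent
    patch is the exemplar window whose first `overlap` samples best match
    the output's last `overlap` samples (sum of squared differences).
    """
    out = list(exemplar[:patch])
    starts = range(len(exemplar) - patch + 1)
    while len(out) < out_len:
        tail = out[-overlap:]
        def cost(s):
            return sum((exemplar[s + i] - tail[i]) ** 2 for i in range(overlap))
        best = min(starts, key=cost)
        out.extend(exemplar[best + overlap: best + patch])
    return out[:out_len]
```

On a periodic exemplar the greedy rule locks onto the repetition, which is the self-repeating structure the thesis exploits in 2D and in space-time.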
340

Entwurf und Implementierung eines computergraphischen Systems zur Integration komplexer, echtzeitfähiger 3D-Renderingverfahren / Design and implementation of a graphics system to integrate complex, real-time capable 3D rendering algorithms

Kirsch, Florian January 2005 (has links)
Thema dieser Arbeit sind echtzeitfähige 3D-Renderingverfahren, die 3D-Geometrie mit über der Standarddarstellung hinausgehenden Qualitäts- und Gestaltungsmerkmalen rendern können. Beispiele sind Verfahren zur Darstellung von Schatten, Reflexionen oder Transparenz. Mit heutigen computergraphischen Software-Basissystemen ist ihre Integration in 3D-Anwendungssysteme sehr aufwändig: Dies liegt einerseits an der technischen, algorithmischen Komplexität der Einzelverfahren, andererseits an Ressourcenkonflikten und Seiteneffekten bei der Kombination mehrerer Verfahren. Szenengraphsysteme, intendiert als computergraphische Softwareschicht zur Abstraktion von der Graphikhardware, stellen derzeit keine Mechanismen zur Nutzung dieser Renderingverfahren zur Verfügung.<br><br> Ziel dieser Arbeit ist es, eine Software-Architektur für ein Szenengraphsystem zu konzipieren und umzusetzen, die echtzeitfähige 3D-Renderingverfahren als Komponenten modelliert und es damit erlaubt, diese Verfahren innerhalb des Szenengraphsystems für die Anwendungsentwicklung effektiv zu nutzen. Ein Entwickler, der ein solches Szenengraphsystem nutzt, steuert diese Komponenten durch Elemente in der Szenenbeschreibung an, die die sichtbare Wirkung eines Renderingverfahrens auf die Geometrie in der Szene angeben, aber keine Hinweise auf die algorithmische Implementierung des Verfahrens enthalten. Damit werden Renderingverfahren in 3D-Anwendungssystemen nutzbar, ohne dass ein Entwickler detaillierte Kenntnisse über sie benötigt, so dass der Aufwand für ihre Entwicklung drastisch reduziert wird.<br><br> Ein besonderer Augenmerk der Arbeit liegt darauf, auf diese Weise auch verschiedene Renderingverfahren in einer Szene kombiniert einsetzen zu können. Hierzu ist eine Unterteilung der Renderingverfahren in mehrere Kategorien erforderlich, die mit Hilfe unterschiedlicher Ansätze ausgewertet werden. 
Dies erlaubt die Abstimmung verschiedener Komponenten für Renderingverfahren und ihrer verwendeten Ressourcen.<br><br> Die Zusammenarbeit mehrerer Renderingverfahren hat dort ihre Grenzen, wo die Kombination von Renderingverfahren graphisch nicht sinnvoll ist oder fundamentale technische Beschränkungen der Verfahren eine gleichzeitige Verwendung unmöglich machen. Die in dieser Arbeit vorgestellte Software-Architektur kann diese Grenzen nicht verschieben, aber sie ermöglicht den gleichzeitigen Einsatz vieler Verfahren, bei denen eine Kombination aufgrund der hohen Komplexität der Implementierung bislang nicht erreicht wurde. Das Vermögen zur Zusammenarbeit ist dabei allerdings von der Art eines Einzelverfahrens abhängig: Verfahren zur Darstellung transparenter Geometrie beispielsweise erfordern bei der Kombination mit anderen Verfahren in der Regel vollständig neuentwickelte Renderingverfahren; entsprechende Komponenten für das Szenengraphsystem können daher nur eingeschränkt mit Komponenten für andere Renderingverfahren verwendet werden.<br><br> Das in dieser Arbeit entwickelte System integriert und kombiniert Verfahren zur Darstellung von Bumpmapping, verschiedene Schatten- und Reflexionsverfahren sowie bildbasiertes CSG-Rendering. Damit stehen wesentliche Renderingverfahren in einem Szenengraphsystem erstmalig komponentenbasiert und auf einem hohen Abstraktionsniveau zur Verfügung. Das System ist trotz des zusätzlichen Verwaltungsaufwandes in der Lage, die Renderingverfahren einzeln und in Kombination grundsätzlich in Echtzeit auszuführen. / This thesis is about real-time rendering algorithms that can render 3D-geometry with quality and design features beyond standard display. Examples include algorithms to render shadows, reflections, or transparency. 
Integrating these algorithms into 3D applications using today's rendering libraries for real-time computer graphics is exceedingly difficult: on the one hand, the rendering algorithms are technically and algorithmically complex in their own right; on the other hand, combining several algorithms causes resource conflicts and side effects that are very difficult to handle. Scene graph libraries, which are intended to provide a software layer that abstracts from computer graphics hardware, currently offer no mechanisms for using these rendering algorithms either.<br><br> The objective of this thesis is to design and implement a software architecture for a scene graph library that models real-time rendering algorithms as software components, allowing these algorithms to be used effectively for 3D application development within the scene graph library. An application developer using the scene graph library controls these components through elements in a scene description that describe the effect of a rendering algorithm on some geometry in the scene graph, but contain no hints about the actual implementation of the algorithm. This makes rendering algorithms usable in 3D applications even by developers without detailed knowledge of them, drastically reducing the effort of developing such applications.<br><br> In particular, the thesis focuses on the feasibility of combining several rendering algorithms within a scene at the same time. This requires classifying rendering algorithms into different categories, each evaluated using a different approach.
In this way, components for different rendering algorithms can collaborate and coordinate their usage of common graphics resources.<br><br> The possibility of combining different rendering algorithms is limited in several ways: the graphical result of the combination can be undefined, or fundamental technical restrictions can make it impossible to use two rendering algorithms at the same time. The software architecture described in this work cannot remove these limitations, but it allows many rendering algorithms to be combined that, until now, could not be combined due to the high complexity of the required implementation. The capability for collaboration, however, depends on the kind of rendering algorithm: for instance, algorithms for rendering transparent geometry can be combined with other algorithms only after a complete redesign of the algorithm. Components in the scene graph library for displaying transparency can therefore be combined with components for other rendering algorithms only in a limited way.<br><br> The system developed in this work integrates and combines algorithms for bump mapping, several variants of shadow and reflection algorithms, and image-based CSG rendering. Hence, major rendering algorithms are available for the first time in a scene graph library as components at a high level of abstraction. Despite the additional indirections and abstraction layers, the system is, in principle, able to execute the rendering algorithms individually and in combination in real time.
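The declarative component idea described above — scene elements that state *which* effect applies to a subtree while a registered component supplies *how* — can be sketched as follows. All names here are illustrative assumptions, not the thesis's actual API:

```python
class EffectNode:
    """Sketch of a scene-graph node that declares a rendering effect.

    The node records only the effect name and parameters for its subtree;
    at traversal time a registry supplies the component implementing the
    algorithm, so application code never touches the algorithm itself.
    """
    registry = {}  # effect name -> component callable

    def __init__(self, effect, **params):
        self.effect, self.params, self.children = effect, params, []

    def add(self, child):
        self.children.append(child)
        return child

    def render(self, out):
        component = EffectNode.registry.get(self.effect)
        if component:
            component(self.params, out)  # delegate the algorithm details
        for c in self.children:
            c.render(out)

# A hypothetical shadow component, registered once and used declaratively.
EffectNode.registry["shadow"] = lambda p, out: out.append(f"shadow:{p['softness']}")
```

Combining effects then amounts to nesting such nodes; the resource-coordination problem the thesis solves is precisely what happens when two registered components need the same GPU state at once.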