
Entwurf und Implementierung eines computergraphischen Systems zur Integration komplexer, echtzeitfähiger 3D-Renderingverfahren / Design and implementation of a graphics system to integrate complex, real-time capable 3D rendering algorithms

Kirsch, Florian January 2005
This thesis is about real-time 3D rendering algorithms that can render geometry with quality and design features beyond the standard display. Examples include algorithms for rendering shadows, reflections, or transparency. Integrating these algorithms into 3D applications with today's real-time rendering libraries is exceedingly difficult: on the one hand, the individual algorithms are technically and algorithmically complex on their own; on the other hand, combining several algorithms causes resource conflicts and side effects that are hard to handle. Scene graph libraries, intended as a software layer that abstracts from the graphics hardware, currently offer no mechanisms for using these rendering algorithms either.

The objective of this thesis is to design and implement a software architecture for a scene graph library that models real-time rendering algorithms as software components, allowing these algorithms to be used effectively for 3D application development within the scene graph library. An application developer controls these components through elements in the scene description that specify the visible effect of a rendering algorithm on geometry in the scene, but contain no hints about the algorithm's actual implementation. Rendering algorithms thus become usable in 3D applications without the developer needing detailed knowledge of them, drastically reducing development effort.

In particular, the thesis focuses on combining several rendering algorithms within a scene at the same time. This requires classifying the rendering algorithms into several categories, each evaluated with a different approach, so that components for different rendering algorithms can coordinate their use of shared graphics resources.

Combining rendering algorithms has its limits where a combination is graphically meaningless or where fundamental technical restrictions make simultaneous use impossible. The software architecture presented here cannot move these limits, but it enables the simultaneous use of many algorithms whose combination had previously not been achieved due to the high implementation complexity. The capability for collaboration, however, depends on the kind of algorithm: algorithms for rendering transparent geometry, for instance, generally require completely redesigned rendering algorithms when combined with others; the corresponding scene graph components can therefore be combined with components for other rendering algorithms only in a limited way.

The system developed in this work integrates and combines algorithms for bump mapping, several shadow and reflection techniques, and image-based CSG rendering. For the first time, these major rendering algorithms are available in a scene graph library as components at a high level of abstraction. Despite the additional management overhead, the system is able to execute the rendering algorithms, individually and in combination, in real time.
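The component model described in the abstract can be illustrated with a minimal sketch (all names here are invented for illustration, not the thesis's actual API): scene-description elements declare only the desired visible effect, and a traversal resolves which effects apply to which geometry, without any reference to the algorithms that implement them.

```python
class Node:
    """A scene-graph node; `effects` names desired visual results
    ("shadow", "reflection"), never rendering algorithms."""
    def __init__(self, name, effects=None, children=None):
        self.name = name
        self.effects = effects or []
        self.children = children or []

def collect_effects(node, inherited=()):
    """Depth-first traversal: each geometry leaf accumulates the effects
    declared on itself and on its ancestors."""
    active = tuple(inherited) + tuple(node.effects)
    if not node.children:
        return {node.name: sorted(set(active))}
    result = {}
    for child in node.children:
        result.update(collect_effects(child, active))
    return result

scene = Node("root", effects=["shadow"], children=[
    Node("teapot", effects=["reflection"]),
    Node("floor"),
])
print(collect_effects(scene))
# {'teapot': ['reflection', 'shadow'], 'floor': ['shadow']}
```

In a real system, a separate layer would map each collected effect name to a rendering component and arbitrate the resource conflicts the abstract describes.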

Design and implementation of an automated workflow to provide a zoomable web mapping application using artistic styles

Hartl, Maximilian 03 November 2015
Although proprietary and free web map applications have become an important part of daily life, individual map styling was neglected for a fairly long time. With the latest possibilities for custom adjustment provided by many services, and some interesting artistic experiments, this is about to change. In the context of artistic cartography and custom map styling, this work explores the possibilities of employing an automated process for generating WMTS-compatible map tiles with an artistic styling. Web mapping standards and techniques of non-photorealistic rendering (NPR) are considered, as well as traditional cartographic representations. Furthermore, existing vector- and raster-based processes are analyzed, including an interactive workflow with the open-source image editing software GIMP, which is examined with respect to its drawing capabilities. Based on this, a concept for an automated rendering process is developed, and influencing factors along with input parameters are discussed. An experimental automated process is implemented using GIMP and its Python scripting interface to create single maps and seamless map tiles for use in a WMTS application. Different drawing techniques of GIMP, such as brushes, dynamics, and masks, are applied during the rendering process. Geodata is taken from the freely available OpenStreetMap project and stored in a geodatabase. Furthermore, the GIS capabilities of the database are used to implement custom query procedures for the creation of seamless tiles, feature simplification, and generalization, making preprocessing of the data unnecessary. Additionally, randomization methods for the estrangement and abstraction of the SVG vector geometry, emulating a hand-drawn appearance, are created based on non-photorealistic rendering techniques. As a result, various rendering and abstraction processes are evaluated and discussed regarding their contribution to an artistic appearance. WMTS-compatible map tiles are created using these stylings and can be presented in a web mapping application.
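The tile pyramid such a WMTS application addresses can be sketched with the standard Web-Mercator (slippy-map) tile formula. This is generic tiling math, not the thesis's actual code:

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """Tile column/row for a WGS84 coordinate in the standard
    Web-Mercator tile pyramid (2**zoom tiles per axis)."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

# An example coordinate (Dresden city centre) at zoom level 12:
print(deg2tile(51.05, 13.74, 12))   # (2204, 1370)
```

A seamless artistic tiling has to render each such (x, y, zoom) cell so that brush strokes match across tile borders, which is what the custom query procedures above address.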

Interactive rendering techniques for focus+context visualization of 3D geovirtual environments

Trapp, Matthias January 2013
This thesis introduces a collection of new real-time rendering techniques and applications for focus+context visualization of interactive 3D geovirtual environments such as virtual 3D city and landscape models. These environments are generally characterized by a large number of objects and are of high complexity with respect to geometry and textures; their interactive 3D rendering therefore represents a major challenge. Their 3D depiction suffers from a number of weaknesses such as occlusions, cluttered image contents, and partial screen-space usage. To overcome these limitations, and thus to facilitate the effective communication of geo-information, principles of focus+context visualization can be used in the design of real-time 3D rendering techniques for 3D geovirtual environments: detailed views of an environment are combined seamlessly with abstracted views of the context within a single image. To perform the real-time image synthesis required for interactive visualization, dedicated parallel processors (GPUs) for rasterizing computer graphics primitives are used, which requires the design and implementation of appropriate data structures and rendering pipelines. The contribution of this work comprises the following five real-time rendering methods:

• The rendering technique for 3D generalization lenses enables the combination of different 3D city geometries (e.g., generalized versions of a 3D city model) in a single image in real time. The method is based on a generalized, fragment-precise clipping approach that uses a compressible, raster-based data structure. It combines detailed views in the focus area with abstracted variants in the context area.

• The rendering technique for the interactive visualization of dynamic raster data in 3D geovirtual environments facilitates the rendering of 2D surface lenses. It enables a flexible combination of different raster layers (e.g., aerial images or videos), using projective texturing to decouple image and geometry data. Various overlapping and nested 2D surface lenses with different contents can thus be visualized interactively.

• The interactive rendering technique for image-based deformation of 3D geovirtual environments enables the real-time synthesis of non-planar projections, such as cylindrical and spherical projections, as well as multi-focal 3D fisheye lenses and combinations of planar and non-planar projections.

• The rendering technique for view-dependent multi-perspective views of 3D geovirtual environments, based on applying global deformations to the 3D scene geometry, can be used to synthesize interactive panorama maps that combine detailed views close to the camera (focus) with abstract views in the background (context). This approach reduces occlusions, makes better use of the available screen space, and reduces the overload of image contents.

• The object-based and image-based rendering techniques for highlighting objects and focus areas inside and outside the view frustum facilitate preattentive perception.

The concepts and implementations of interactive image synthesis for focus+context visualization, together with their selected applications, enable a more effective communication of spatial information and provide building blocks for the design and development of new applications and systems in the field of 3D geovirtual environments.
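As a flavour of the non-planar projections mentioned for the image-based deformation technique, a cylindrical projection of a view direction can be sketched as follows. This is a generic textbook formulation under assumed camera conventions, not the thesis's GPU implementation:

```python
import math

def cylindrical_project(x, y, z):
    """Map a view-space direction (camera looks down -z, y up) to
    cylindrical image coordinates: azimuth angle and height/radius."""
    theta = math.atan2(x, -z)        # horizontal angle, 0 = straight ahead
    v = y / math.hypot(x, z)         # vertical coordinate on the cylinder
    return theta, v

print(cylindrical_project(0.0, 0.0, -1.0))   # (0.0, 0.0): straight ahead
```

Unlike a planar (perspective) projection, the horizontal image coordinate grows linearly with the viewing angle, which is what allows panoramas wider than 180 degrees.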

Direct volume illustration for cardiac applications

Mueller, Daniel C. January 2008
To aid diagnosis, treatment planning, and patient education, clinicians require tools to analyse and explore the increasingly large three-dimensional (3-D) datasets generated by modern medical scanners. Direct volume rendering is one such tool finding favour with radiologists and surgeons for its photorealistic representation. More recently, volume illustration, or non-photorealistic rendering (NPR), has begun to move beyond the mere depiction of data, borrowing concepts from illustrators to visually enhance desired information and suppress unwanted clutter. Direct volume rendering generates images by accumulating pixel values along rays cast into a 3-D image. Transfer functions allow users to interactively assign material properties such as colour and opacity (a process known as classification). To achieve real-time frame rates, the rendering must be accelerated using a technique such as 3-D texture mapping on commodity graphics processing units (GPUs). Unfortunately, current methods do not allow users to intuitively enhance regions of interest or suppress occluding structures. Furthermore, additional scalar images describing clinically relevant measures have not been integrated into the direct rendering method. These tasks are essential for the effective exploration, analysis, and presentation of 3-D images.

This body of work seeks to address the aforementioned limitations. First, to facilitate the research program, a flexible architecture for prototyping volume illustration methods is proposed. This architecture unifies a number of existing techniques into a single framework based on 3-D texture mapping, while also providing for the rapid experimentation of novel methods. Next, the prototyping environment is employed to improve an existing method, called tagged volume rendering, which restricts transfer functions to given spatial regions using a number of binary segmentations (tags). An efficient method for implementing binary tagged volume rendering is presented, along with various technical considerations for improving the classification. Finally, the concept of greyscale tags is proposed, leading to a number of novel volume visualisation techniques including position modulated classification and dynamic exploration.

The novel methods proposed in this work are generic and can be employed to solve a wide range of problems. However, to demonstrate their usefulness, they are applied to a specific case study. Ischaemic heart disease, caused by narrowed coronary arteries, is a leading health concern in many countries including Australia. Computed tomography angiography (CTA) is an imaging modality which has the potential to allow clinicians to visualise diseased coronary arteries in their natural 3-D environment. To apply tagged volume rendering to this case study, an active contour method and a minimal path extraction technique are proposed to segment the heart and arteries respectively. The resultant images provide new insight and possibilities for diagnosing and treating ischaemic heart disease.
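The ray accumulation with tag-restricted transfer functions described above can be sketched with the textbook front-to-back compositing scheme plus a hypothetical per-sample tag lookup; this is a CPU illustration of the idea, not the thesis's GPU implementation:

```python
def composite_ray(samples, transfer, tags=None, tag_transfers=None):
    """Front-to-back compositing of scalar samples along one ray.
    `transfer` maps a sample to (colour, opacity); an optional per-sample
    tag selects an alternative transfer function for that region."""
    colour, alpha = 0.0, 0.0
    for i, s in enumerate(samples):
        tf = transfer
        if tags is not None and tag_transfers and tags[i] in tag_transfers:
            tf = tag_transfers[tags[i]]      # tagged classification
        c, a = tf(s)
        colour += (1.0 - alpha) * a * c      # accumulate premultiplied colour
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:                    # early ray termination
            break
    return colour, alpha

# Two fully bright, half-opaque samples along one ray:
print(composite_ray([1.0, 1.0], lambda s: (s, 0.5)))  # (0.75, 0.75)
```

Greyscale tags would generalize the hard `tags[i] in tag_transfers` selection into a weighted blend of transfer functions per sample.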

Analyses d'images de tomographie X chez le petit animal : applications aux études de phénotypage ex vivo et in vivo / Analysis of small animal X-Ray tomographic imaging : application for phenotypical analysis in mice ex vivo and in vivo

Marchadier, Arnaud 13 December 2011
Small animal imaging is highly necessary for the development of biomedical research and pharmaceutical applications. Amongst the various available imaging methods, X-ray tomography is now considered a gold standard for anatomical and phenotypical analysis in mice. CT imaging allows non-invasive longitudinal studies in vivo and high-resolution analysis ex vivo, in 3D. Analysing these 3D images requires tools specific to each problem. In this context, this thesis contributes to the following research areas:

1. Development of 3D image tools for qualitative and quantitative analysis of mineralized and adipose tissues in murine models
2. Application of the developed methodologies to biomedical research questions
3. Comparative, multi-scale analysis of various X-ray tomography technologies for small animal imaging
4. Development of an original method using Electron Paramagnetic Resonance (EPR) for the dosimetry of an X-ray imaging procedure in small animals

In conclusion, the 3D imaging tools developed here represent a new contribution to the virtual dissection of laboratory animals, allowing the exploration of numerous tissues and organs and rivalling histology and electron microscopy. Applying these imaging methods to fundamental and pre-clinical research opens the perspective of a new alternative in animal experimentation.

Métodos de renderização não-fotorrealística. / Non-photorealistic rendering methods.

Brandão, Daniel Nicolau 29 February 2008
This dissertation discusses the main concepts involved in non-photorealistic rendering techniques and proposes a general scheme for implementing them. It also discusses a non-photorealistic line-drawing rendering style for 3D models presented by Stéphane Grabli et al., in which the line styles are programmable, and presents an implementation of part of Grabli's work. (Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.)
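The idea of programmable line styles can be sketched as a pipeline of small operators applied to a stroke; the operator and attribute names below are illustrative only, not Grabli et al.'s actual API:

```python
def taper_thickness(stroke):
    """Style operator: thickness fades toward both ends of the stroke."""
    n = len(stroke)
    for i, v in enumerate(stroke):
        t = min(i, n - 1 - i) / max(n - 1, 1)   # 0 at the ends, 0.5 mid-stroke
        v["thickness"] *= 0.2 + 0.8 * t
    return stroke

def apply_style(stroke, operators):
    """A line style is just a sequence of operators applied in order."""
    for op in operators:
        stroke = op(stroke)
    return stroke

# A stroke of five vertices, each carrying drawing attributes:
stroke = [{"thickness": 1.0} for _ in range(5)]
styled = apply_style(stroke, [taper_thickness])
print([round(v["thickness"], 2) for v in styled])  # [0.2, 0.4, 0.6, 0.4, 0.2]
```

Because styles are ordinary functions, users can compose, reorder, and parameterize them, which is the sense in which the line styles are "programmable".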

[en] MCAD SHAPE GRAMMAR: PROCEDURAL MODELING FOR INDUSTRIAL MASSIVE CAD MODELS / [pt] MCAD SHAPE GRAMMAR: MODELAGEM PROCEDIMENTAL EM MODELOS CAD MASSIVOS INDUSTRIAIS

WALLAS HENRIQUE SOUSA DOS SANTOS 31 July 2018
3D CAD models are tools used in industry for planning and simulation before construction or the execution of tasks. In many cases, such as in the oil and gas industry, these models can be massive: they carry large-scale, detailed information intended to serve as an accurate source of information. Interactive navigation in these models requires a combination of appropriate hardware and software. Even with modern GPUs, directly rendering these models is not efficient; classic approaches such as culling non-visible objects and level-of-detail (LOD) selection are required before sending data to the GPU. Real-time rendering of massive CAD models therefore needs scalable algorithms and data structures to process the scene efficiently. This thesis proposes the MCAD (Massive Computer-Aided Design) Shape Grammar, an expansive grammar that procedurally generates objects to create 3D scenes of massive models. In recent years, procedural modeling has drawn attention for quickly creating 3D scenes using a compact representation, which stores generation rules rather than an explicit representation of the scene. The MCAD Shape Grammar exploits the repetitions and patterns present in massive models, reducing the memory footprint and processing the scene procedurally and efficiently. We converted real refinery models into the MCAD Shape Grammar and implemented a renderer for them. Results show that the solution is scalable with high performance; it is also the first time procedural modeling has been used in this domain.
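The core idea of an expansive grammar, storing compact rules instead of an explicit scene, can be sketched like this. The rule names are invented for illustration; the actual MCAD Shape Grammar rules are not reproduced here:

```python
# A few compact rules stand in for an explicit scene: non-terminals expand
# until only terminal shapes remain.
RULES = {
    "pipe_rack": ["rack_bay"] * 3,                  # a rack repeats three bays
    "rack_bay": ["column", "column", "pipe_run"],
    "pipe_run": ["pipe"] * 4,
}

def derive(symbol):
    """Expand a start symbol to its terminal shapes, depth-first."""
    if symbol not in RULES:
        return [symbol]                             # terminal shape
    out = []
    for s in RULES[symbol]:
        out.extend(derive(s))
    return out

shapes = derive("pipe_rack")
print(len(shapes))  # 18 terminal shapes generated from 3 compact rules
```

A renderer over such a grammar can expand lazily and reuse geometry for every repeated symbol, which is where the memory savings on repetitive industrial models come from.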

Mínimos-quadrados e aproximação de superfície de pontos: novas perspectivas e aplicações / Least squares and point-based surfaces: new perspectives and applications

João Paulo Gois 08 May 2008
Surface reconstruction from unorganized points remains one of the most active research fronts in Computer Graphics and is increasingly recognized as a useful tool for defining moving interfaces in the numerical simulation of fluid flow. The reasons are not hard to find: on the Computer Graphics side, handling massive point sets with complicated geometry and noisy samples still leaves room for new methods; in fluid mechanics, where the data come not from three-dimensional scanners but from the interfaces between immiscible fluids, surface representations built from unorganized points can offer computational and geometric properties that make them attractive for simulating physical phenomena. The main goal of this doctoral thesis was therefore to develop techniques for reconstructing surfaces from unorganized points that overcome the limitations of important previous work. To that end, we first focused on techniques based on moving-least-squares formulations and on a robust, adaptive, twofold partition-of-unity implicits method. Beyond surface reconstruction from unorganized points, we also proposed a promising scheme for representing interfaces in the numerical simulation of multiphase fluid flow, based on a meshless Lagrangian approach built on algebraic moving-least-squares surfaces, and we present numerical results, convergence studies, and comparisons that demonstrate the method's potential for the numerical simulation of physical phenomena.
Although the main contribution of this work is the development of methods for reconstructing surfaces from unorganized points, the experience gained in developing these techniques led us to devise mechanisms for representing unorganized volumetric data as well. Accordingly, we present two schemes for representing volumetric data from meshes with arbitrary cells, i.e., we propose a unified rendering method.
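The thesis's algebraic moving-least-squares formulation is not reproduced in this abstract, so the following is only an illustrative sketch of the simplest MLS-style implicit in the spirit it describes: a weighted local plane fit whose zero set approximates the surface sampled by the point cloud. The function name `mls_implicit`, the Gaussian kernel, the bandwidth `h`, and the synthetic planar cloud are all assumptions for the sketch, not the thesis's method.

```python
import numpy as np

def mls_implicit(points, x, h=0.25):
    """MLS-style implicit at x: signed offset from a weighted local plane.

    points : (n, 3) unorganized point cloud
    x      : (3,) evaluation point
    h      : Gaussian kernel bandwidth (free parameter)
    """
    d = points - x
    w = np.exp(-np.sum(d * d, axis=1) / (h * h))      # Gaussian weights
    a = (w[:, None] * points).sum(axis=0) / w.sum()   # weighted centroid
    c = points - a
    cov = (w[:, None] * c).T @ c                      # weighted covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    n = eigvecs[:, 0]                                 # normal = smallest eigenvector
    return float(n @ (x - a))                         # zero on the local plane

# Points sampled on the plane z = 0: the implicit vanishes on the surface
# and grows with distance from it (up to the sign of the eigenvector).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       np.zeros(200)])
print(mls_implicit(pts, np.array([0.0, 0.0, 0.0])))  # ~0: on the surface
print(mls_implicit(pts, np.array([0.0, 0.0, 0.5])))  # magnitude ~0.5: off it
```

A full reconstruction would evaluate this implicit on a grid and extract its zero level set; robust variants (such as the twofold partition-of-unity implicits mentioned above) replace the single global fit with blended local ones.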
349

Argamassas com adição de fibras de polipropileno - estudo do comportamento reológico e mecânico. / Mortars with polypropylene fibres' addition - study of the rheological and mechanical behavior.

Rosiany da Paixão Silva 19 June 2006 (has links)
Seeking to avoid the defects that appear in mortar renderings, many designers and builders have looked for alternative solutions, among them mortars with fibre addition. However, knowledge of this composite's behavior is largely empirical, especially in Brazil, where systematic research on the subject is lacking. In this context, the present work investigated the influence of polypropylene fibre addition on the rheological and mechanical behavior of rendering mortars. Rheological behavior was assessed with the dropping-ball and squeeze-flow test methods, and workability during rendering application was evaluated with the help of an experienced worker. Mechanical behavior was assessed through flexural tensile strength, compressive strength, and dynamic elastic modulus tests. The mortar composites produced in this work vary in matrix and in fibre content. The matrix variation covered the mortar type and the water dosage: two mortars widely used in the civil construction market were employed, one with low entrained-air content (around 5%) and one with high entrained-air content (around 30%), combined with six water contents. A single fibre type, 6 mm long, was used at five different contents in the mix.
The results show that fibre addition influences mortar rheology not only through the fibres' physical and mechanical characteristics, but also because, once introduced, the fibres strongly modify the original matrix, for example by changing the entrained-air content: fibre addition raised the air content of the low-air mortar and reduced that of the high-air mortar. These changes gave each mortar type its own workability conditions. The mechanical behavior of the mortars was also altered. Among the "rheological modifiers" studied, fibre content contributed most to the change in mechanical properties for the low-air mortar, whereas for the high-air mortar the water content had the greatest effect on mechanical behavior.
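The abstract reports dynamic elastic modulus measurements without giving the test procedure. As a hedged aside, one standard way to obtain a dynamic modulus (not necessarily the one used in this thesis) is from the ultrasonic longitudinal pulse velocity; the function name and the numeric inputs below are illustrative, not values from the study.

```python
def dynamic_elastic_modulus(rho, v, nu):
    """Dynamic elastic modulus (Pa) from ultrasonic pulse velocity.

    rho : bulk density (kg/m^3)
    v   : longitudinal pulse velocity (m/s)
    nu  : dynamic Poisson's ratio (dimensionless)

    E_d = rho * v^2 * (1 + nu) * (1 - 2 nu) / (1 - nu)
    """
    return rho * v ** 2 * (1 + nu) * (1 - 2 * nu) / (1 - nu)

# Illustrative values only (not from the thesis):
E_d = dynamic_elastic_modulus(rho=1900, v=2500, nu=0.2)
print(f"E_d = {E_d / 1e9:.2f} GPa")  # E_d = 10.69 GPa
```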
350

Rendering with Marching Cubes, looking at Hybrid Solutions / Rendering med Marching Cubes, en närmare titt på hybrid lösningar.

Andersson, Patrik, Johansson, Sakarias January 2012 (has links)
Marching Cubes is a rendering technique with advantages in many areas. It represents scalar fields as a three-dimensional mesh and is used in geographical as well as scientific applications, mainly in the medical industry to visually render medical data of the human body. It is also an interesting technique to explore for computer games and other real-time applications, since it can produce visually rich renderings. The main focus of this paper is to present a novel hybrid solution that combines marching cubes with heightmaps to render terrain, and to determine whether it is suitable for real-time applications. The paper takes both a theoretical and an implementation-oriented approach to the hybrid solution. Results from several tests across different scenarios show that the hybrid solution works well for today's real-time applications on a modern graphics card and CPU (Central Processing Unit).
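The hybrid described above pairs heightmap terrain with the volumetric representation that marching cubes polygonizes. The paper's own code is not quoted here, so the sketch below shows only the field construction behind such a hybrid, under stated assumptions (function names, grid sizes, and the surface-cell count are illustrative; the triangle-table extraction of marching cubes itself is left to a library): a heightmap becomes a signed scalar field whose zero level set is the terrain, so volumetric edits such as caves reduce to local field modifications.

```python
import numpy as np

def height_to_field(height, nz, z_max=1.0):
    """Signed scalar field from a heightmap: f(x, y, z) = height(x, y) - z.

    Positive below the terrain and negative above it, so the zero level
    set is the terrain itself; a volumetric edit (a carved cave, an
    overhang) is just a local subtraction from this field.
    """
    z = np.linspace(0.0, z_max, nz)
    return height[None, :, :] - z[:, None, None]       # shape (nz, ny, nx)

def count_surface_cells(field):
    """Count cells whose corners straddle the zero level set -- exactly
    the cells a marching-cubes pass would polygonize."""
    s = field > 0.0
    nz, ny, nx = s.shape
    corners = [s[dz:nz - 1 + dz, dy:ny - 1 + dy, dx:nx - 1 + dx]
               for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
    inside = np.stack(corners).sum(axis=0)             # corners inside: 0..8
    return int(np.count_nonzero((inside > 0) & (inside < 8)))

# Flat terrain at height 0.5 on a 5x5 heightmap, 9 samples along z:
field = height_to_field(np.full((5, 5), 0.5), nz=9)
print(count_surface_cells(field))  # 16: one layer of 4x4 cells crosses the surface
```

The real-time appeal of the hybrid follows from this structure: only the counted surface-crossing cells need triangles, and only cells touched by a field edit need re-polygonizing.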
