51

High-dimensional Data in Scientific Visualization: Representation, Fusion and Difference

Mohammed, Ayat Mohammed Naguib 14 July 2017 (has links)
Visualization has proven to be an effective means for analyzing high-dimensional data, especially Multivariate Multidimensional (MVMD) scientific data. Scientific visualization deals with data that have a natural spatial mapping, such as maps, building interiors, or parts of the human body, while information visualization involves abstract, non-spatial data. Visual analytics uses either type of visualization to draw deep inferences from scientific data or information. In recent years, a variety of techniques have been developed that combine statistical and visual analysis tools to represent data of different types in one view and thereby enable data fusion. One vital feature of such visualization tools is support for comparison: showing the differences between two or more objects. This feature is called visual differencing, or discrimination. Visual differencing is a common requirement across research domains, helping analysts compare different objects in a data set or compare different attributes of the same object. From a visual analytics point of view, this research examines predictable human biases in interpreting visuospatial and spatiotemporal information and in making inferences in scientific visualization. Practically, I examined case studies from different domains: land suitability in agriculture, spectrum sensing in software-defined radio networks, raster images in remote sensing, pattern recognition in point clouds, airflow distribution in aerodynamics, galaxy catalogs in astrophysics, and protein-membrane interaction in molecular dynamics. Each case required different computing power, ranging from a personal computer to a high-performance cluster. Based on this experience across application domains, I propose a high-performance visualization paradigm for scientific visualization that supports three key features of scientific data analysis: representations, fusion, and visual discrimination. This paradigm is informed by practical work with multiple high-performance computing and visualization platforms, from desktop displays to immersive CAVE displays. To evaluate the applicability of the proposed paradigm, I carried out two user studies: the first addressed data fusion with multivariate maps, and the second addressed visual differencing with three multi-view management techniques. The high-performance visualization paradigm and the results of these studies contribute to our knowledge of efficient MVMD designs and provide scientific visualization developers with a framework to mitigate the trade-offs of scalable visualization design, such as data mappings, computing power, and output modality. / Ph. D.
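To make the idea of visual differencing concrete, a minimal sketch follows: two attributes sampled on the same grid are subtracted and shown with a symmetric diverging colormap so that zero difference stays neutral. The data and code are illustrative only and do not reflect the paradigm or tools developed in this thesis.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
field_a = rng.normal(size=(64, 64))                  # attribute A on a 2D grid (made up)
field_b = field_a + 0.3 * rng.normal(size=(64, 64))  # attribute B, a perturbed version of A

diff = field_b - field_a
limit = np.abs(diff).max()                           # symmetric range keeps zero centered

plt.imshow(diff, cmap="RdBu_r", vmin=-limit, vmax=limit)
plt.colorbar(label="B - A")
plt.title("Visual differencing: where do A and B disagree?")
plt.show()
```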
52

Visualização exploratória de dados volumétricos multivalorados variantes no tempo / Exploratory visualization of time-varying multivalued volumetric data

Santos, Thiago Silva Reis 08 October 2012 (has links)
Computer simulations of physical phenomena reduce costs and allow studying behavior that would be unfeasible to observe in real life, either due to environmental limitations (e.g., a nuclear explosion) or due to factors beyond human control (e.g., collisions between stars). Millions of primitives (voxels, vertices, or particles) may be required to accurately capture system behavior, generating very large data sets that are typically time-varying and multidimensional, as multiple simulation variables describe each primitive. Analyzing the hundreds of gigabytes or even terabytes resulting from these simulations therefore remains a challenge. Current solutions that handle this type of data usually rely on scientific or information visualization techniques, but these typically reveal data behavior only at a particular time instant. Providing visualizations capable of assisting analysts in inspecting and understanding behavior along the temporal domain remains a major challenge. This work is an attempt in this direction, introducing several strategies to handle these problems. They have in common the use of multidimensional projection techniques to support exploratory analysis of simulation data, both at specific time instants and along the simulation as a whole. The goal is to favor the perception of groups of elements showing similar behavior and to track their temporal evolution. One of the strategies introduced summarizes, in a single visual representation, the temporal behavior of the multidimensional data space, allowing analysts to identify and analyze entities with similar behavior along the simulation.
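The following is a minimal sketch of the multidimensional-projection idea these strategies rely on, using a plain PCA projection as a stand-in for the projection techniques actually employed; the array shapes and variable names are assumptions, not the thesis's implementation.

```python
import numpy as np

# Hypothetical data: 500 simulation elements, each described by 4 variables
# at 20 time instants (shape: entities x steps x vars).
rng = np.random.default_rng(1)
data = rng.normal(size=(500, 20, 4))

# Summarize each entity's temporal behavior by concatenating its time series,
# then project the resulting high-dimensional vectors to 2D with PCA.
features = data.reshape(data.shape[0], -1)        # 500 x (20 * 4)
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection_2d = centered @ vt[:2].T               # 500 x 2 layout

# Entities that behaved similarly over the whole simulation end up close
# together in projection_2d and can be brushed and tracked in a scatter plot.
print(projection_2d.shape)
```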
53

Desenvolvimento de um ambiente para visualização tridimensional da dinâmica de risers. / Development of an environment for tridimensional visualization of riser dynamics.

Bernardes Júnior, João Luiz 21 December 2004 (has links)
The importance of offshore oil exploration, especially to Brazil, is beyond question, and risers are crucial structures for this activity. A better understanding of the dynamics of these structures and of the loads to which they are subjected has resulted from constant research in the field, research that generates a large volume of data, often describing phenomena that are difficult to comprehend. This work describes the development of a software environment that combines virtual reality techniques (3D environments, navigation, stereoscopy) and scientific visualization techniques (such as color mapping, deformations, and glyphs) to improve the understanding and visualization of these data. The environment, christened RiserView, allows the composition of three-dimensional scenes including risers, the ocean floor and surface, ships, buoys, and other structures, each with its own dynamics. It also allows visualization of the flow in the neighborhood of the risers so that vortex shedding and the resulting fluid-mechanical interactions may be studied. The user may control parameters of the scene animation and of the visualization of each of its elements, as well as navigate freely within the scene. An algorithm of low computational cost (thanks to simplifications made possible by the nature of the problem) for real-time detection and display of collisions between risers was also developed. The Unified Process was adapted to guide the software's design and implementation. The use of VTK (a scientific visualization and graphics API) and IUP (a user interface development API) simplified the development, especially the effort required to build an application portable to MS-Windows and Linux. As design choices, scientific visualization and the speed of scene rendering were given higher priority than realism and agility in user interaction, respectively. The consequences of these choices, as well as some alternatives, are discussed. The use of VTK and, through it, OpenGL allows the application to exploit the features available in most commercial graphics cards to increase performance. In its current version, the most costly task for RiserView is the calculation required to update riser positions during animation, especially for risers described in the frequency domain, but the work discusses relatively simple improvements to minimize this problem. Despite these (and other) possible improvements discussed in the work, the application proves quite adequate for visualizing risers and their dynamics, as well as associated elements and phenomena.
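The specific simplifications behind the collision-detection algorithm are not spelled out in the abstract; as a generic sketch only, the code below treats each riser as a polyline of cylindrical segments and reports a collision whenever two segments from different risers come closer than the sum of their radii.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-12):
    """Closest distance between 3D segments [p1, q1] and [p2, q2]."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:                      # both segments degenerate to points
        return np.linalg.norm(r)
    if a <= eps:                                   # first segment is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= eps:                               # second segment is a point
            s, t = np.clip(-c / a, 0.0, 1.0), 0.0
        else:                                      # general case
            b = d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = np.clip((b * s + f) / e, 0.0, 1.0)
            s = np.clip((b * t - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def risers_collide(riser_a, riser_b, radius_a, radius_b):
    """Risers given as (n, 3) polylines; collide if any segment pair is too close."""
    threshold = radius_a + radius_b
    for i in range(len(riser_a) - 1):
        for j in range(len(riser_b) - 1):
            if segment_distance(riser_a[i], riser_a[i + 1],
                                riser_b[j], riser_b[j + 1]) < threshold:
                return True
    return False

# Made-up example: two short risers passing close to each other.
riser_1 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 50.0], [5.0, 0.0, 100.0]])
riser_2 = np.array([[1.0, 0.5, 0.0], [1.0, 0.5, 100.0]])
print(risers_collide(riser_1, riser_2, radius_a=0.3, radius_b=0.3))
```

A practical implementation would prune segment pairs with a broad-phase test (e.g., bounding volumes) before running the exact distance check.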
54

Animação de fluidos via autômatos celulares e sistemas de partículas / Fluid animation by cellular automata and particle systems

Xavier, Adilson Vicente 04 August 2006 (has links)
The past two decades have seen rapid growth in physically based modeling of fluids for computer graphics applications. Techniques from Computational Fluid Dynamics (CFD) have been applied to realistic fluid animation for virtual surgery simulators, computer games, and visual effects. In this approach, once the governing equations are solved numerically, the next step is rendering. Most fluid animation methods in computer graphics rely on a top-down viewpoint that uses 2D/3D mesh-based approaches motivated by the Eulerian Finite Element (FE) and Finite Difference (FD) methods, in conjunction with the Navier-Stokes equations. More recently, mesh-free methods such as Smoothed Particle Hydrodynamics (SPH) have been applied. Cellular automata (CA), on the other hand, are discrete models based on point particles that move on a lattice according to simple, suitable rules in order to mimic a full molecular dynamics. Such a bottom-up framework requires few computational resources, both for memory allocation and for the computation itself. In this work, we study the theoretical and practical aspects of computational fluid animation for computer graphics using cellular automata and SPH. We propose two models for animating two-phase systems (e.g., gas-liquid), one based on SPH and CA and another based only on CA. Finally, we describe a software application developed in the context of this thesis for fluid animation by CA. / Funding: Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro.
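As a rough illustration of the SPH side of this approach (not the two-phase models proposed in the thesis), the sketch below estimates particle densities with the standard poly6 smoothing kernel; the positions, masses, and smoothing radius are made up.

```python
import numpy as np

def sph_density(positions, masses, h):
    """SPH density estimate with the poly6 kernel.

    positions: (n, 3) particle positions, masses: (n,), h: smoothing radius.
    """
    norm = 315.0 / (64.0 * np.pi * h**9)
    density = np.zeros(len(positions))
    for i in range(len(positions)):
        r2 = np.sum((positions - positions[i])**2, axis=1)  # squared distances to all particles
        inside = r2 < h * h                                  # neighbours within the kernel support
        density[i] = np.sum(masses[inside] * norm * (h * h - r2[inside])**3)
    return density

# Made-up example: 1000 particles in a unit box.
rng = np.random.default_rng(2)
pos = rng.random((1000, 3))
rho = sph_density(pos, masses=np.full(1000, 0.001), h=0.1)
print(rho.mean())
```

A practical implementation would replace the O(n²) neighbour search with a spatial hash or uniform grid.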
55

Reconstruction Volumique de Résultats de Simulation à Base Chimère / Volumetric Reconstruction of Chimera Simulation Results

Huynh, Minh Duc 09 July 2012 (has links)
Computational fluid dynamics is an essential step in the design of the gas turbines that power helicopters. The continuous pursuit of performance has led to very complex turbine geometries, and it is becoming increasingly difficult to build simulation grids that exactly match the CAD model of the engines. The chimera technique relaxes the constraint of perfectly matching grids by allowing them to overlap. However, it raises new issues in the post-processing phase, when simulation results must be exploited for further computation or visualization, because the usual tools are not designed for these particular geometric configurations. In the framework of the first two projects of the MOSART programme of the Aerospace Valley competitiveness cluster, MACAO and OSMOSES, we worked in collaboration with Turbomeca to design a volumetric reconstruction method for chimera simulation results. We propose an innovative method that builds a partition of the simulation domain free of overlap between grids. The new partition preserves as many properties of the original grids as possible and remains boundary-conforming everywhere. The theoretical complexity of our algorithms is linear in the size of the original grids, leading to processing times on the order of a second for grids of several hundred thousand cells. The main benefit of this work is that chimera simulation results become usable by standard post-processing tools, whether in-house tools or the many commercial and open-source packages available, which is a necessary condition for the adoption of the chimera method by design offices.
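The reconstruction algorithm itself is not described in this abstract. Purely to illustrate the overlap-removal idea, the sketch below masks out cells of a background grid whose centres fall inside a higher-priority overlapping patch; real chimera grids are curvilinear, and the actual method additionally restores boundary conformity, which this toy example does not attempt.

```python
import numpy as np

def mask_overlapped_cells(bg_centers, patch_min, patch_max):
    """Return a boolean mask of background cells NOT covered by the patch grid.

    bg_centers: (n, 3) cell centres of the background grid.
    patch_min / patch_max: axis-aligned bounds of the higher-priority patch.
    """
    inside = np.all((bg_centers >= patch_min) & (bg_centers <= patch_max), axis=1)
    return ~inside                      # keep cells whose centres lie outside the patch

# Made-up example: a 10x10x10 background grid overlapped by a small patch.
coords = (np.arange(10) + 0.5) / 10.0
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
centers = np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)
keep = mask_overlapped_cells(centers,
                             np.array([0.3, 0.3, 0.3]),
                             np.array([0.7, 0.7, 0.7]))
print(keep.sum(), "of", keep.size, "background cells kept")
```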
57

Développement d'un système in situ à base de tâches pour un code de dynamique moléculaire classique adapté aux machines exaflopiques / Integration of High-Performance Task-Based In Situ for Molecular Dynamics on Exascale Computers

Dirand, Estelle 06 November 2018 (has links)
The exascale era will widen the gap between the rate at which simulations generate data and the speed at which those data can be written and read back for post-processing analysis, dramatically increasing the end-to-end time to scientific discovery and calling for a shift toward new data processing methods. The in situ paradigm proposes to analyze data while they are still resident in the supercomputer's memory, reducing the need for data storage. Several techniques already exist: executing simulation and analytics on the same compute nodes (in situ), using dedicated nodes (in transit), or combining the two approaches (hybrid). Most traditional in situ techniques target simulations that cannot fully exploit the ever-growing number of cores per processor, and they were not designed for the emerging manycore processors. Task-based programming models, on the other hand, are expected to become a standard for these architectures, but few task-based in situ techniques have been developed so far. This thesis studies the design and integration of a novel task-based in situ framework inside a task-based molecular dynamics code designed for exascale supercomputers. We take advantage of the composability properties of the task-based programming model to implement the TINS hybrid framework. Analytics workflows are expressed as graphs of tasks that can in turn generate child tasks to be executed in transit or interleaved with simulation tasks in situ. The in situ execution relies on an innovative dynamic helper core strategy that uses work stealing to finely interleave simulation and analytics tasks inside a compute node with low overhead on the simulation execution time. TINS uses the Intel® TBB work-stealing scheduler and is integrated into ExaStamp, a task-based molecular dynamics code. Experiments have shown that TINS is up to 40% faster than state-of-the-art in situ libraries. Molecular dynamics simulations of up to 2 billion particles on up to 14,336 cores have shown that TINS can execute complex analytics workflows at high frequency with an overhead below 10%.
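TINS itself is built in C++ on Intel TBB; the sketch below only illustrates, in generic terms, the idea of interleaving asynchronous analytics with simulation steps inside one process, using a small thread pool as a stand-in for the dynamic helper core. The names and structure are assumptions, not the TINS API.

```python
from concurrent.futures import ThreadPoolExecutor

def simulation_step(step):
    """Stand-in for one molecular dynamics step (hypothetical)."""
    return {"step": step, "positions": [float(step)] * 8}

def analytics(snapshot):
    """Stand-in for an in situ analysis task (e.g. a simple reduction)."""
    return sum(snapshot["positions"]) / len(snapshot["positions"])

ANALYSIS_PERIOD = 5          # run analytics every N simulation steps
pending = []

# One worker thread plays the role of the helper core: analytics tasks run
# concurrently with subsequent simulation steps instead of stalling the
# simulation while data would otherwise be written to disk.
with ThreadPoolExecutor(max_workers=1) as helper:
    for step in range(20):
        snapshot = simulation_step(step)
        if step % ANALYSIS_PERIOD == 0:
            pending.append(helper.submit(analytics, dict(snapshot)))  # copy, then hand off
    results = [f.result() for f in pending]

print(results)
```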
58

Visualization of Particle In Cell Simulations

Ljung, Patric January 2000 (has links)
A numerical simulation case involving space plasma and the evolution of instabilities that generate very fast electrons, i.e., at approximately half the speed of light, is used as a test bed for scientific visualisation techniques. A visualisation system was developed to provide interactive real-time animation and visualisation of the simulation results. The work focuses on two themes and their integration. The first theme is the storage and management of the large data sets produced. The second theme deals with how the visualisation system and visual objects are tailored to efficiently visualise the data at hand. The integration of these themes has resulted in an interactive real-time animation and visualisation system that constitutes a very powerful tool for analysis and understanding of plasma physics processes. The visualisations contained in this work have spawned many new possible research projects and provided insight into plasma physics phenomena that were previously not fully understood.
59

Efficient Methods for Direct Volume Rendering of Large Data Sets

Ljung, Patric January 2006 (has links)
Direct Volume Rendering (DVR) is a technique for creating images directly from a representation of a function defined over a three-dimensional domain. The technique has many application fields, such as scientific visualization and medical imaging. A striking property of the data sets produced within these fields is their ever-increasing size and complexity. Despite advances in computing resources, these data sets seem to grow at even faster rates, causing severe bottlenecks in terms of data transfer bandwidth, memory capacity, and processing requirements in the rendering pipeline. This thesis focuses on efficient methods for DVR of large data sets. At the core of the work lies a level-of-detail scheme that reduces the amount of data to process and handle, while optimizing the level-of-detail selection so that high visual quality is maintained. A set of techniques for domain knowledge encoding is introduced that significantly improves the assessment and prediction of visual significance for blocks in a volume. A complete DVR pipeline is presented that uses the data reduction achieved by the level-of-detail selection to minimize data requirements in all stages, reducing disk I/O as well as host and graphics memory. The data reduction is also exploited to improve rendering performance in graphics hardware, employing adaptive sampling both within the volume and within the rendered image. The developed techniques have been applied in particular to medical visualization of large data sets on commodity desktop computers using consumer graphics processors. The specific application of virtual autopsies has received much interest, and several of the developed data classification schemes and rendering techniques have been motivated by this application. The results are, however, general and applicable in many fields, and significant performance and quality improvements over previous techniques are shown. / On the defence date the status of article IX was Accepted.
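As a toy illustration of level-of-detail selection under a memory budget (not the optimization or significance measures developed in the thesis), the sketch below greedily refines the blocks predicted to be most visually significant until the budget is exhausted; the significance scores, level costs, and budget are made up.

```python
def select_lod(significance, budget, level_costs=(1, 8, 64, 512)):
    """Assign a resolution level to each volume block under a total memory budget.

    significance: per-block importance scores (higher = more visually important).
    budget: total allowed cost, in the same units as level_costs.
    level_costs: cost of storing a block at increasing levels of detail.
    """
    order = sorted(range(len(significance)), key=lambda i: -significance[i])
    levels = [0] * len(significance)            # every block starts at the coarsest level
    spent = len(significance) * level_costs[0]
    for i in order:                             # refine the most significant blocks first
        for lvl in range(1, len(level_costs)):
            extra = level_costs[lvl] - level_costs[lvl - 1]
            if spent + extra > budget:
                break
            levels[i] = lvl
            spent += extra
    return levels, spent

levels, used = select_lod([0.9, 0.1, 0.5, 0.05, 0.7], budget=700)
print(levels, used)
```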
60

Concept and Workflow for 3D Visualization of Multifaceted Meteorological Data

Helbig, Carolin 10 August 2015 (has links) (PDF)
The analysis of heterogeneous, complex data sets has become important in many scientific domains. With the help of scientific visualization, researchers can be supported in exploring their research results. One domain where researchers have to deal with spatio-temporal data from different sources, including simulation, observation, and time-independent data, is meteorology. In this thesis, a concept and workflow for the 3D visualization of meteorological data were developed in cooperation with domain experts. Three case studies were conducted based on the developed concept, and the concept was enhanced based on the experience gained from them. In contrast to existing all-in-one software applications, the proposed workflow employs a combination of existing software applications and their extensions to make a variety of already implemented visualization algorithms available. The workflow provides methods for data integration, for abstraction of the data, and for generating representations of the variables of interest. Solutions are presented for visualizing sets of variables, comparing results of multiple simulation runs, and comparing results of simulations based on different models. The concept includes presenting the visualization scenes in virtual reality environments for a more comprehensible display of multifaceted data; the stereoscopic projection conveys an improved spatial impression of these complex data. To enable users to navigate within the scenes, interaction functionality was provided to control time, camera, and the display of objects. The proposed methods were selected with respect to requirements defined in cooperation with the domain experts and were verified with user tests at several stages of development. The developed visualization methods are used to analyze and present recent research results as well as for educational purposes. Since the proposed approach uses generally applicable concepts, it can also be applied to the analysis of scientific data from other disciplines.
