21

Super Spider: uma ferramenta versátil para exploração de dados multi-dimensionais representados por malhas de triângulos / Super Spider: a versatile tool for multi-dimensional data represented as triangle meshes

Watanabe, Lionis de Souza 11 April 2007 (has links)
Este trabalho apresenta o Super Spider: um sistema de exploração visual baseado no Spider Cursor, que abrange várias técnicas interativas da área de Visualização Computacional e conta com novos recursos de auxílio à investigação visual, além de ser uma ferramenta portável e flexível. / This work presents Super Spider: a visual exploration system based on the Spider Cursor that embraces several interactive techniques from the field of computer visualization and adds new features to aid visual investigation, besides being a portable and flexible tool.
22

VisualMet : um sistema para visualização e exploração de dados meteorológicos / VisualMet: a system for visualizing and exploring meteorological data

Manssour, Isabel Harb January 1996 (has links)
Os centros operacionais e de pesquisa em previsão numérica do tempo geralmente trabalham com uma grande quantidade de dados complexos multivariados, tendo que interpretá-los num curto espaço de tempo. Técnicas de visualização científica podem ser utilizadas para ajudar a entender o comportamento atmosférico. Este trabalho descreve a arquitetura e as facilidades apresentadas pelo sistema VisualMet, que foi implementado com base em um estudo das tarefas desenvolvidas pelos meteorologistas responsáveis pelo 8º Distrito de Meteorologia, em Porto Alegre. Este centro coleta dados meteorológicos três vezes ao dia, de 32 estações locais, e recebe dados similares do Instituto Nacional de Meteorologia, localizado em Brasília, e do National Meteorological Center, localizado nos Estados Unidos. Tais dados são resultados de observações de variáveis tais como temperatura, pressão, velocidade do vento e tipos de nuvens. As tarefas dos meteorologistas e as classes de dados foram observadas e analisadas para definir as características do sistema. A arquitetura e a implementação do VisualMet seguem, respectivamente, uma abordagem orientada a ferramentas e o paradigma de programação orientada a objetos. Dados obtidos das estações meteorológicas são instâncias de uma classe chamada "Entidade". Três outras classes de objetos, representando ferramentas que suportam as tarefas dos meteorologistas, foram modeladas. Os objetos no sistema são apresentados ao usuário através de duas janelas, "Base de Entidades" e "Base de Ferramentas". A implementação da "Base de Ferramentas" inclui ferramentas de mapeamento (para produzir mapas de contorno, mapas de ícones e gráficos), ferramentas de armazenamento (para guardar e recuperar imagens geradas pelo sistema) e uma ferramenta de consulta (para ler valores de variáveis de estações selecionadas). É dada especial atenção à ferramenta de mapa de contorno, onde foi utilizado o método Multiquádrico para interpolação de dados.
O trabalho apresenta ainda um estudo sobre métodos de interpolação de dados esparsos, antes de descrever detalhadamente os resultados obtidos com a ferramenta de mapa de contorno. Estes resultados (imagens) são discutidos e comparados com mapas gerados manualmente por meteorologistas do 8º Distrito de Meteorologia. Possíveis extensões do presente trabalho são também abordadas. / Weather forecast centers deal with a great volume of complex multivariate data, which usually has to be understood within a short time. Scientific visualization techniques can be used to support both daily forecasting and meteorological research. This work reports the architecture and facilities of a system, named VisualMet, that was implemented based on a case study of the tasks accomplished by the meteorologists responsible for the 8th Meteorological District, in the south of Brazil. This center collects meteorological data three times a day from 32 local stations and receives similar data from both the National Institute of Meteorology, located in Brasília, and the National Meteorological Center, located in the United States of America. Such data result from observations of variables such as temperature, pressure, wind velocity, and cloud type. The tasks of the meteorologists and the classes of application data were observed to define the system requirements. The architecture and implementation of VisualMet follow the tool-oriented approach and the object-oriented paradigm, respectively. Data taken from the meteorological stations are instances of a class named Entity. Three other classes of tools, which support the meteorologists' tasks, are modeled. Objects in the system are presented to the user through two windows, "Entities Base" and "Tools Base". The current implementation of the "Tools Base" contains mapping tools (to produce contour maps, icon maps, and graphs), recording tools (to save and load images generated by the system), and a query tool (to read variable values from selected stations).
Before describing in detail the results obtained with the contour map tool, which applies the multiquadric method to interpolate data for the construction of contour maps, this work also presents a study of interpolation methods for scattered data. The results (images) are discussed and compared with the maps drawn manually by the meteorologists of the 8th Meteorological District. Possible extensions to this work are also presented.
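The multiquadric method highlighted in this abstract is simple enough to sketch. The following is a minimal, illustrative implementation of Hardy's multiquadric interpolation of scattered station values, not the thesis's actual code; the station coordinates, values, and shape parameter `c` are invented for the example.

```python
import numpy as np

# Hypothetical scattered stations: (x, y) positions and an observed value
# (say, temperature). Real input would come from the 32 local stations.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([10.0, 12.0, 11.0, 14.0, 13.0])

def multiquadric(r, c=0.5):
    # Hardy's multiquadric basis: phi(r) = sqrt(r^2 + c^2); c is a shape parameter.
    return np.sqrt(r**2 + c**2)

# Interpolation weights w solve A w = vals, with A[i, j] = phi(|p_i - p_j|).
# The multiquadric system is nonsingular for distinct station locations.
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
weights = np.linalg.solve(multiquadric(dists), vals)

def interpolate(x, y):
    # Evaluate the interpolant anywhere; sampling it on a regular grid
    # yields the field from which contour lines are drawn.
    r = np.linalg.norm(pts - np.array([x, y]), axis=-1)
    return float(multiquadric(r) @ weights)
```

By construction the interpolant reproduces every station value exactly, which is why the method suits contour maps built from sparse observations.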
23

Desenvolvimento de um pós-processador para visualização de resultados de simulação numérica em aqüíferos / Post-processor development for numerical simulation result visualization in aquifers.

Quaresma, José Eduardo 10 September 2004 (has links)
Este trabalho apresenta o desenvolvimento de um programa visualizador de processos em aqüíferos intitulado VPA, como módulo integrante de um pacote de programas computacionais aplicados ao gerenciamento de recursos hídricos subterrâneos denominado SPA (Simulação de Processos em Aqüíferos). Esse visualizador foi desenvolvido para o sistema operacional SuSE Linux, utilizando como ambientes gráficos o KDE e o GNOME. O ambiente de programação Glade 2 foi escolhido para programação e desenvolvimento do visualizador. Este ambiente de programação em C é distribuído nos âmbitos de open source development (GNU), e serve como complemento da interface gráfica com o usuário GUI (Graphic User Interface). Em conjunto com esta linguagem de programação foi usado o Data Explorer IBM OpenDX, distribuído também nos termos de open source development, como ferramenta para visualização de simulações em duas e três dimensões. Assim, é possível visualizar os resultados gerados pela simulação de processos de escoamento de fluidos e transporte de poluentes no subsolo por meio dos módulos de pré e pós-processamento do pacote SPA, na forma de dados binários. O módulo desenvolvido representa uma alternativa robusta e econômica para a visualização científica de processos em aqüíferos. / In this work, the development of an aquifer process viewer program called VPA is presented. The program is integrated as a module of a computational package devoted to groundwater resources management named SPA (Simulation of Processes in Aquifers). The program was developed on the SuSE Linux operating system using the KDE and GNOME graphical environments. The graphic templates for the VPA graphical user interface (GUI) were taken from Glade 2. That graphical library is written in the C programming language and is distributed under the open source development license (GNU).
Scientific visualization routines from Data Explorer IBM OpenDX, also distributed under the GNU license, were added to VPA, giving it the capability to generate two- and three-dimensional graphics from numerical results. Thus, VPA constitutes the post-processing tool for reading numerical results from binary files and constructing graphics for fluid flow and pollutant transport simulations in groundwater within the SPA environment. Such a module represents an economical and robust alternative for the scientific visualization of aquifer processes.
24

Real-Time Visualization of Finite Element Models Using Surrogate Modeling Methods

Heap, Ryan C. 01 August 2013 (has links)
Finite element analysis (FEA) software is used to obtain linear and non-linear solutions to one-, two-, and three-dimensional (3-D) geometric problems that will see a particular load and constraint case when put into service. Parametric FEA models are commonly used in iterative design processes in order to obtain an optimum model given a set of loads, constraints, objectives, and design parameters to vary. In some instances it is desirable for a designer to obtain some intuition about how changes in design parameters affect the FEA solution of interest before simply sending the model through the optimization loop. This could be accomplished by running the FEA on the parametric model for a set of part family members, but this can be very time-consuming and only gives snapshots of the model's real behavior. The purpose of this thesis is to investigate a method of visualizing the FEA solution of a parametric model as design parameters are changed in real time, by approximating the FEA solution using surrogate modeling methods. The tools this research utilizes are parametric FEA modeling, surrogate modeling methods, and visualization methods. A parametric FEA model can be developed that includes mesh morphing algorithms, which allow the mesh to change parametrically along with the model geometry. This allows the surrogate model assigned to each individual node to use the nodal solutions of multiple finite element analyses as regression points to approximate the FEA solution. The surrogate models can then be mapped to their respective geometric locations in real time. Solution contours display the results of the FEA calculations and are updated in real time as the parameters of the design model change.
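The per-node surrogate idea described above can be sketched in a few lines. This is a hedged illustration, assuming simple polynomial regression as the surrogate form and using a fabricated stand-in for the FEA solver; the thesis's actual surrogate type and models are not reproduced here.

```python
import numpy as np

# Hypothetical stand-in for an FEA solve: nodal stress as a smooth function of
# one design parameter p. In practice these values come from running the
# parametric FEA model (with mesh morphing) at each sampled parameter value.
def fea_nodal_stress(p, n_nodes=4):
    nodes = np.arange(n_nodes)
    return 100.0 / (p + 1.0) + 5.0 * nodes * p

# Sample the design space: these FEA runs provide the regression points.
samples = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
solutions = np.stack([fea_nodal_stress(p) for p in samples])  # (n_samples, n_nodes)

# Fit one quadratic surrogate per node: stress(p) ~ a*p**2 + b*p + c.
coeffs = [np.polyfit(samples, solutions[:, i], deg=2)
          for i in range(solutions.shape[1])]

def surrogate_nodal_stress(p):
    # Real-time evaluation: polynomial evaluation is cheap compared to an FEA
    # solve, so solution contours can update as a parameter slider is dragged.
    return np.array([np.polyval(c, p) for c in coeffs])
```

Evaluating the surrogates between sample points approximates what a full FEA run would return, at a small fraction of the cost.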
25

Shark Sim: A Procedural Method of Animating Leopard Sharks Based on Raw Location Data

Blizard, Katherine S 01 June 2013 (has links)
Fish such as the leopard shark (Triakis semifasciata) can be tagged on the fin, released back into the wild, and have their location tracked through technologies such as autonomous robots, which store timestamped location data about their target. We present a way to procedurally generate an animated simulation of T. semifasciata using only these timestamped location points. This simulation utilizes several components. Input timestamps dictate a monotonic time-space curve mapping the simulation clock to the space curve. The space curve connects all the location points as a spline without any sharp folds that would be too implausible for shark traversal. We create a model leopard shark with convincing kinematics that respond to the space curve. This is achieved by acquiring a skinned model and applying T. semifasciata motion kinematics that respond to velocity and turn commands. These kinematics affect the spine and all fins that control locomotion and direction. Kinematic-based procedural keyframes added onto a queue are interpolated while the shark model traverses the path. This simulation tool generates animation sequences that can be viewed in real time. A user study of 27 individuals was deployed to measure the perceived realism of the sequences, as judged by users contrasting 5 different film sequences. Results of the study show that, on average, viewers perceive our simulation as more realistic than not.
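The monotonic time-space curve described above can be sketched with a uniform Catmull-Rom spline through the timestamped fixes. This is an assumption-laden illustration (the abstract does not name the spline family actually used), and the fix data below are invented.

```python
import bisect

# Hypothetical timestamped location fixes (t_seconds, x, y), as a fin tag
# tracked by an autonomous robot might report them.
fixes = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0), (20.0, 9.0, 8.0), (30.0, 15.0, 9.0)]
times = [f[0] for f in fixes]
points = [f[1:] for f in fixes]

def catmull_rom(p0, p1, p2, p3, s):
    # Uniform Catmull-Rom basis: interpolates between p1 and p2 for s in [0, 1],
    # passing through every control point without sharp folds.
    return tuple(
        0.5 * (2*b + (-a + c)*s + (2*a - 5*b + 4*c - d)*s**2
               + (-a + 3*b - 3*c + d)*s**3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def position_at(t):
    """Monotonic time-space mapping: the simulation clock selects a spline
    segment and a parameter s, so the model always moves forward in time."""
    t = min(max(t, times[0]), times[-1])                      # clamp to the track
    i = min(max(bisect.bisect_right(times, t) - 1, 0), len(fixes) - 2)
    s = (t - times[i]) / (times[i + 1] - times[i])
    p0 = points[max(i - 1, 0)]                                # duplicate endpoints
    p3 = points[min(i + 2, len(points) - 1)]                  # to cover the track
    return catmull_rom(p0, points[i], points[i + 1], p3, s)
```

Because the segment index and `s` are both non-decreasing in `t`, the shark never backtracks along the path, matching the monotonicity requirement.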
26

Pathway Pioneer: Heterogenous Server Architecture for Scientific Visualization and Pathway Search in Metabolic Network Using Informed Search

Oswal, Vipul Kantilal 01 August 2014 (has links)
There is a huge demand for the analysis and visualization of biological models. PathwayPioneer is a web-based tool to analyze and visually represent complex biological models. PathwayPioneer generates the initial layout of a model and allows users to customize it. It is developed using .NET technologies (C#) and hosted on an Internet Information Services (IIS) server. At the back end it interacts with the Python-based COBRApy library for biological calculations such as flux balance analysis (FBA). We have developed a parallel processing architecture to accommodate the processing of large models and enable message-based communication between the .NET web server and the Python engine. We evaluated the performance of our online system by loading the website with multiple concurrent dummy users and performing different time-intensive operations in parallel. Given two metabolites of interest, millions of pathways can be found between them even in a small metabolic network. Depth-first search or breadth-first search algorithms retrieve all the possible pathways, thereby requiring huge computational time and resources. For pathway search using informed methods, we implemented, compared, and analyzed three different informed search techniques (Selected Subsystem, Selected Compartment, and Dynamic Search) against the traditional exhaustive search technique. We found that the Dynamic approach performs exceedingly well with respect to time and the total number of pathways searched. During our implementation we developed an SBML parser which outperforms the libSBML parser in C#.
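The exhaustive search the abstract contrasts with informed strategies can be sketched as a depth-first enumeration of simple paths, with an optional node filter standing in for the Selected Subsystem pruning idea. The toy graph and metabolite names below are invented for illustration, not taken from any real model.

```python
# Toy metabolite graph (adjacency list); names are illustrative only.
graph = {
    "glucose": ["g6p"],
    "g6p": ["f6p", "6pg"],
    "f6p": ["fbp"],
    "6pg": ["ru5p"],
    "ru5p": ["f6p"],
    "fbp": ["pyruvate"],
    "pyruvate": [],
}

def find_pathways(start, goal, allowed=None):
    """Enumerate all simple paths from start to goal with iterative DFS.
    `allowed` restricts intermediate metabolites to a chosen subsystem,
    mimicking how an informed search shrinks the exhaustive search space."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt in path:                                   # avoid cycles
                continue
            if allowed is not None and nxt != goal and nxt not in allowed:
                continue                                      # outside subsystem
            stack.append((nxt, path + [nxt]))
    return paths

all_paths = find_pathways("glucose", "pyruvate")
pruned = find_pathways("glucose", "pyruvate",
                       allowed={"glucose", "g6p", "f6p", "fbp"})
```

Even on this toy graph the filter halves the number of enumerated pathways; on a genome-scale network, pruning of this kind is what keeps the enumeration tractable.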
27

[en] VISUALIZATION OF ENGINEERING MODELS THROUGH THE WEB USING VRML / [pt] VISUALIZAÇÃO DE MODELOS DE ENGENHARIA VIA WEB UTILIZANDO VRML

RAUL ADEMAR VALDIVIA PACHECO 31 May 2004 (has links)
[pt] O crescente aumento da disponibilidade de recursos computacionais para a simulação numérica permite que cientistas e engenheiros produzam enormes quantidades de dados. A melhor compreensão destes dados mediante o uso de técnicas de computação gráfica é conhecida como visualização científica. Este trabalho propõe a visualização científica de problemas de engenharia usando uma arquitetura distribuída via WEB. Os dados simulados são lidos diretamente de um banco de dados e são gerados arquivos com as informações necessárias para sua visualização. Geram-se arquivos com dados de pré-processamento (como nós, elementos, linhas, áreas e elementos diferenciados por índices) e pós-processamento (como deformação, deslocamento e tensões, resultados mais importantes na análise utilizando o método de elementos finitos). Considerando uma arquitetura distribuída, a simulação numérica pode ser feita em um computador (servidor) e a visualização pode ser feita em um outro computador (cliente), utilizando uma interface simples, porém robusta, como é o caso da WEB. A utilização do formato VRML facilita a distribuição e o compartilhamento nesta visualização, tornando assim a plataforma do servidor, que contém o software de simulação numérica, independente da plataforma do cliente. Usando como estudo de caso o software de Análise de Elementos Finitos ANSYS, os resultados obtidos mostraram-se satisfatórios e melhor manipuláveis ao se comparar com os resultados visualizados por aquele software. O estudo de caso pode ser estendido para outros softwares de simulação da área de mecânica dos sólidos. / [en] The growing availability of computational resources for numerical simulation allows researchers and engineers to generate huge amounts of data. The understanding of those data through the use of computer graphics techniques is known as scientific visualization.
This work proposes the scientific visualization of engineering problems through a distributed architecture via the Web. The simulated data are read directly from a database, and files with the information needed for their visualization are generated. Pre-processing data (such as nodes, elements, lines, and areas) and post-processing data (such as deformation, displacement, and stress, the most important results of a finite element analysis) are generated. A distributed architecture is adopted, which allows the numerical simulation to be done on one computer (server) and the visualization to take place on another computer (client), using a simple but robust interface, namely the Web. The VRML format is used, which provides natural data distribution and sharing and makes the server platform, which hosts the numerical simulation software, independent of the client platform. Results obtained with the ANSYS finite element analysis software, used as a case study, proved satisfactory and easier to manipulate when compared with the results visualized through that software. The case study can be extended to other simulation software in the solid mechanics field.
28

TerraVis: A Stereoscopic Viewer for Interactive Seismic Data Visualization

Stoecker, Justin W 27 April 2011 (has links)
Accurate earthquake prediction is a difficult, unsolved problem that is central to the ambitions of many geoscientists. Understanding why earthquakes occur requires a profound understanding of many interrelated processes; our planet functions as a massive, complex system. Scientific visualization can be applied to such problems to improve understanding and reveal relationships between data. There are several challenges inherent to visualizing seismic data: working with large, high-resolution 3D and 4D data sets in a myriad of formats, integrating and rendering multiple models in the same space, and the need for real-time interactivity and intuitive interfaces. This work describes a product of the collaboration between computer science and geophysics. TerraVis is a real-time system that incorporates advanced visualization techniques for seismic data. The software can process and efficiently render digital elevation models, earthquake catalogs, fault slip distributions, moment tensor solutions, and scalar fields in the same space. In addition, the software takes advantage of stereoscopic viewing and head tracking for immersion and improved depth perception. During reconstruction efforts after the devastating 2010 earthquake in Haiti, TerraVis was demonstrated as a tool for assessing the risk of future earthquakes.
29

Interactive Visualization Of Large Scale Time-Varying Datasets

Frishert, Willem Jan January 2008 (has links)
Visualization of large scale time-varying volumetric datasets is an active topic of research. Technical limitations in terms of bandwidth and memory usage become a problem when visualizing these datasets on commodity computers at interactive frame rates. The overall objective is to overcome these limitations by adapting the methods of an existing Direct Volume Rendering pipeline. The objective is considered to be a proof of concept to assess the feasibility of visualizing large scale time-varying datasets using this pipeline. The pipeline consists of components from previous research, which make extensive use of graphics hardware to visualize large scale static data on commodity computers. This report presents a diploma work, which adapts the pipeline to visualize flow features concealed inside the large scale Computational Fluid Dynamics dataset. The work provides a foundation to address the technical limitations of the commodity computer to visualize time-varying datasets. The report describes the components making up the Direct Volume Rendering pipeline together with the adaptations. It also briefly describes the Computational Fluid Dynamics simulation, the flow features and an earlier visualization approach to show the system's limitations when exploring the dataset.
