1. An Extensible Graphical User Interface
Tejwani, Kamal Ram, 17 October 2008
No description available.

2. A Data Controller in a Language and Platform Independent Steering System and its interaction with Regular and Agent Based Models
Jayakumar, Adithya, 06 August 2013
No description available.

3. The Language And Platform Independent Steering (LAPIS) System
Smith, Harrison B., 25 June 2012
No description available.

4. Conception et mise en oeuvre d'une plate-forme de pilotage de simulations numériques parallèles et distribuées [Design and implementation of a platform for steering parallel and distributed numerical simulations]
Richart, Nicolas, 20 January 2010
Numerical simulation is evolving toward simulations of ever more complex physical phenomena, typically through the coupling of several simulation codes, where each code handles one physics (multi-physics simulations) or one particular scale (multi-scale simulations). In this context, analyzing simulation results is a key issue, whether during the development phase, to validate the codes and detect errors, or during the production phase, to compare results with experimental reality. In all cases, computational steering can help during this result-analysis process. The goal of this thesis is to design and build a software platform for steering such simulations: from a remote steering client, the user can access or modify the simulation data coherently, for example to visualize intermediate results online. To make this possible, we propose a steering model that represents coupled simulations and allows efficient, coherent interaction with them. This work has been validated on a legacy multi-scale simulation in materials physics.
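
As a rough sketch of the central idea, coherent interaction with coupled codes, the fragment below queues steering requests from a remote client and applies them only at a synchronization point shared by all coupled codes. This is a minimal Python illustration under assumed names (SteeringServer, SimulationCode, run_coupled); the thesis's actual platform and API are not described in the abstract.

```python
import queue

class SteeringServer:
    """Hypothetical steering endpoint: a remote client queues parameter updates."""

    def __init__(self):
        self._requests = queue.Queue()

    def submit(self, name, value):
        """Called asynchronously by the steering client; nothing is applied yet."""
        self._requests.put((name, value))

    def drain(self):
        """Called by the simulation only at coherent synchronization points."""
        pending = []
        while not self._requests.empty():
            pending.append(self._requests.get())
        return pending


class SimulationCode:
    """Stand-in for one code of a coupled (multi-physics / multi-scale) simulation."""

    def __init__(self, name):
        self.name = name
        self.data = {"dt": 0.01, "temperature": 300.0}

    def step(self):
        self.data["temperature"] += self.data["dt"]  # dummy physics


def run_coupled(codes, steering, n_steps):
    for step in range(n_steps):
        for code in codes:
            code.step()
        # Global synchronization point shared by all coupled codes: the only
        # place where queued steering requests are applied, so every code
        # sees the same, coherent view of the modified data.
        for name, value in steering.drain():
            for code in codes:
                code.data[name] = value
        print(f"step {step}: " +
              ", ".join(f"{c.name}={c.data['temperature']:.2f}" for c in codes))


steering = SteeringServer()
steering.submit("dt", 0.05)  # e.g. a remote client raising the time step
run_coupled([SimulationCode("fine"), SimulationCode("coarse")], steering, n_steps=3)
```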

5. Experiment Management for the Problem Solving Environment WBCSim
Shu, Jiang, 31 August 2009
A problem solving environment (PSE) is a computational system that provides a complete and convenient set of high-level tools for solving problems from a specific domain. This thesis takes an in-depth look at the experiment management aspect of PSEs, which can be divided into three levels: 1) data management, 2) change management, and 3) execution management.

At the data management level, anything related to an experiment (computer simulation) should be stored and documented. A database management system can be used to store the simulation runs for a PSE, and various high-level interfaces can then be provided to allow users to save, retrieve, search, and compare these simulation runs.

At the change management level, a scientist should only have to focus on how to solve a problem in the experiment domain. Aside from running experiments, a scientist may only need to consider how to define a new model, how to modify an existing model, and how to interpret an experiment result. By using XML to describe a simulation model and unify the various implementation layers, changing an existing model in a PSE can be intuitive and fast.

At the execution management level, how an experiment is executed is the main concern. By providing a computational steering capability, a scientist can pause, examine, and compare the intermediate results from a simulation. Contrasted with the traditional way of running a lengthy simulation and only seeing the result at the end, computational steering can leverage the user's expert knowledge on the fly (during the simulation run) and provide new insights and new product design opportunities.

This thesis illustrates these concepts and their implementation by using WBCSim as an example. WBCSim is a PSE that increases the productivity of wood scientists conducting research on wood-based composite materials and manufacturing processes. It integrates Fortran 90 simulation codes with a Web-based graphical front end, an optimization tool, and various visualization tools. The WBCSim project was begun in 1997 with support from the United States Department of Agriculture, the Department of Energy, and Virginia Tech. It has since been used by students in several wood science classes, by graduate students and faculty, and by researchers at several forest products companies. WBCSim also serves as a test bed for the design, construction, and evaluation of useful, production-quality PSEs.
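
The change-management idea, a single XML description of a simulation model driving the various implementation layers, can be illustrated roughly as follows. The schema, element names, and the hot-pressing parameters below are invented for this sketch; WBCSim's actual model descriptions are not given in the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML description of a simulation model; the real WBCSim schema
# is not shown in the abstract, so the elements and attributes are invented.
MODEL_XML = """
<model name="hot_pressing" simulator="hotpress.f90">
  <parameter name="press_temperature" unit="C" default="180" min="120" max="220"/>
  <parameter name="press_time"        unit="s" default="300" min="60"  max="900"/>
  <output    name="internal_steam_pressure" unit="kPa"/>
</model>
"""

def load_model(xml_text):
    """Turn the XML description into a dict usable by several layers
    (web form generation, input-deck writing, result labeling)."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "simulator": root.get("simulator"),
        "parameters": [p.attrib for p in root.findall("parameter")],
        "outputs": [o.attrib for o in root.findall("output")],
    }

def write_input_deck(model, values):
    """One layer driven by the same description: a plain-text input file
    for the (Fortran) simulation code, with defaults filled in."""
    lines = [f"# input for {model['simulator']}"]
    for p in model["parameters"]:
        lines.append(f"{p['name']} = {values.get(p['name'], p['default'])}")
    return "\n".join(lines)

model = load_model(MODEL_XML)
print(write_input_deck(model, {"press_temperature": 200}))
```

Adding or modifying a model then amounts to editing the XML description rather than touching each implementation layer separately, which is the intuition behind the abstract's claim that model changes become intuitive and fast.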

6. Integrated Parallel Simulations and Visualization for Large-Scale Weather Applications
Malakar, Preeti, January 2013
The emergence of the exascale era necessitates the development of new techniques to efficiently perform high-performance scientific simulations, online data analysis, and on-the-fly visualization. Critical applications like cyclone tracking and earthquake modeling require high-fidelity, high-performance simulations involving large-scale computations and generate huge amounts of data. Faster simulations and simultaneous online data analysis and visualization enable scientists to provide real-time guidance to policy makers. In this thesis, we present a set of techniques for efficient high-fidelity simulations, online data analysis, and visualization in environments with varying resource configurations.

First, we present a strategy for improving the throughput of weather simulations with multiple regions of interest. We propose parallel execution of these nested simulations based on partitioning the 2D process grid into disjoint rectangular regions associated with each subdomain. The process grid partitioning is obtained from a Huffman tree constructed from the relative execution times of the subdomains. We propose a novel combination of performance prediction, processor allocation methods, and topology-aware mapping of the regions on torus interconnects. We observe up to 33% gain over the default strategy in weather models.

Second, we propose a processor reallocation heuristic that minimizes data redistribution cost while reallocating processors in the case of dynamic regions of interest. This algorithm is based on a hierarchical diffusion approach that uses a novel tree reorganization strategy. We have also developed a parallel data analysis algorithm to detect regions of interest within a domain. This helps improve the performance of detailed simulations of multiple weather phenomena like depressions and clouds, thereby increasing the lead time to severe weather phenomena like tornadoes and storm surges. Our method is able to reduce the redistribution time by 25% over a simple partition-from-scratch method.

We also show that it is important to consider resource constraints like I/O bandwidth, disk space, and network bandwidth for continuous simulation and smooth visualization. High simulation rates on modern-day processors combined with high I/O bandwidth can lead to rapid accumulation of data at the simulation site and eventual stalling of the simulations. We show that formulating the problem as an optimization problem can determine optimal execution parameters for enabling smooth simulation and visualization. This approach proves beneficial for resource-constrained environments, whereas a naive greedy strategy leads to stalling and disk overflow. Our optimization method provides about 30% higher simulation rate and consumes about 25-50% less storage space than a naive greedy approach.

We have then developed an integrated adaptive steering framework, InSt, that analyzes the combined effect of user-driven steering and automatic tuning of application parameters, based on resource constraints and the criticality needs of the application, to determine the final parameters for the simulations. It is important to allow climate scientists to steer the ongoing simulation, especially in the case of critical applications. InSt takes into account both the steering inputs of the scientists and the criticality needs of the application.

Finally, we have developed algorithms to minimize the lag between the time when the simulation produces an output frame and the time when the frame is visualized. It is important to reduce the lag so that scientists can get an on-the-fly view of the simulation and concurrently visualize important events in it. We present most-recent, auto-clustering, and adaptive algorithms for reducing lag. The lag-reduction algorithms adapt to the available resource parameters and the number of pending frames to be sent to the visualization site by transferring a representative subset of frames. Our adaptive algorithm reduces lag by 72% and provides 37% larger representativeness than the most-recent algorithm for slow networks.
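
The Huffman-tree-based partitioning can be sketched roughly as follows: subdomains are merged pairwise by smallest relative execution time, and processors are then divided between the two subtrees of each node in proportion to their weights. This is only an illustration of the idea under made-up numbers, and it assumes a simple 1D split of the processor count rather than the thesis's 2D rectangular grid decomposition and topology-aware mapping.

```python
import heapq
import itertools

def build_huffman_tree(exec_times):
    """exec_times: {subdomain: relative execution time}.
    Repeatedly merges the two lightest nodes, as in Huffman coding.
    A node is (weight, payload); payload is a subdomain name for a leaf
    or a (left_node, right_node) pair for an internal node."""
    counter = itertools.count()  # tie-breaker so heap entries stay comparable
    heap = [(t, next(counter), (t, name)) for name, t in exec_times.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(counter), (w1 + w2, (left, right))))
    return heap[0][2]

def allocate_processors(node, nprocs, out):
    """Split processors between the subtrees of each node in proportion to
    their weights (simplified 1D split; the thesis assigns rectangular
    regions of a 2D process grid and maps them onto torus interconnects)."""
    weight, payload = node
    if isinstance(payload, str):       # leaf: a single subdomain
        out[payload] = max(nprocs, 1)
        return out
    left, right = payload
    left_share = round(nprocs * left[0] / weight)
    left_share = min(max(left_share, 1), nprocs - 1)
    allocate_processors(left, left_share, out)
    allocate_processors(right, nprocs - left_share, out)
    return out

# Relative execution times of nested subdomains (made-up numbers).
times = {"domain_A": 5.0, "domain_B": 2.0, "domain_C": 1.0, "domain_D": 1.0}
tree = build_huffman_tree(times)
print(allocate_processors(tree, nprocs=64, out={}))
# e.g. {'domain_A': 36, 'domain_B': 14, 'domain_C': 7, 'domain_D': 7}
```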
