151.
A faster technique for rendering meshes in multiple display systems / Hand, Randall Eugene. January 2002 (has links)
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
152.
ProLAS: a novel dynamic load balancing library for advanced scientific computing / Krishnan, Manoj Kumar. January 2003 (has links)
Thesis (M.S.)--Mississippi State University. Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
153.
Overlapping of communication and computation and early binding: fundamental mechanisms for improving parallel performance on clusters of workstations / Dimitrov, Rossen Petkov. January 2001 (has links)
Thesis (Ph. D.)--Mississippi State University. Department of Computer Science. / Title from title screen. Includes bibliographical references.
154.
A web-based high performance simulation system for transport and retention of dissolved contaminants in soils / Zeng, Honghai. January 2002 (has links)
Thesis (Ph. D.)--Mississippi State University. Department of Engineering. / Title from title screen. Includes bibliographical references.
155.
Best effort MPI/RT as an alternative to MPI: design and performance comparison / Angadi, Raghavendra. January 2002 (has links)
Thesis (M.S.)--Mississippi State University. Department of Computer Science. / Title from title screen. Includes bibliographical references.
156.
Performance evaluation of high performance parallel I/O / Dhandapani, Mangayarkarasi. January 2003 (has links) (PDF)
Thesis (M.S.)--Mississippi State University. Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
157.
High-performance algorithms and software for large-scale molecular simulation / Liu, Xing. 08 June 2015 (has links)
Molecular simulation is an indispensable tool in many disciplines, including physics, biology, chemical engineering, materials science, and drug design. Large-scale molecular simulation is of great interest to biologists and chemists because many important biological and pharmaceutical phenomena can only be observed in very large molecular systems and over sufficiently long dynamics. Molecular simulation methods, however, usually have very steep computational costs, which limits current studies to relatively small systems. The gap between the scale that existing techniques can handle and the scale of interest has become a major barrier to applying molecular simulation to real-world problems.
Studying large-scale molecular systems by simulation requires highly parallel algorithms that are continually adapted to rapidly changing high performance computing architectures. However, many existing molecular simulation algorithms and codes date from more than a decade ago and were designed for sequential computers or early parallel architectures; they may not scale efficiently and do not fully exploit features of today's hardware. Given the rapid evolution of computer architectures, the time has come to revisit these algorithms and codes.
This thesis addresses the computational challenges of large-scale molecular simulation by presenting high-performance algorithms and software for two important applications on highly parallel computer architectures: Hartree-Fock (HF) calculations and hydrodynamics simulations. The algorithms and software presented here have been used by biologists and chemists to study problems that could not be solved with existing codes, and the parallel techniques developed in this work can also be applied to other molecular simulation applications.
158.
Petrophysical modeling and simulation study of geological CO₂ sequestration / Kong, Xianhui. 24 June 2014 (has links)
Global warming and greenhouse gas (GHG) emissions have recently become a significant focus of engineering research. Geological sequestration of greenhouse gases such as carbon dioxide (CO₂) is one approach proposed to reduce greenhouse gas emissions and slow global warming. It involves injecting produced CO₂ into subsurface formations and trapping the gas through geological mechanisms such as structural trapping, capillary trapping, dissolution, and mineralization. While some progress has been made in our understanding of fluid flow in porous media, many petrophysical phenomena that occur during geological CO₂ sequestration, such as multi-phase flow, capillarity, geochemical reactions, and geomechanical effects, remain inadequately studied, and continued research on these issues is critical. Numerical simulators are essential tools for developing a better understanding of the geologic characteristics of brine reservoirs and for building support for future CO₂ storage projects. Modeling CO₂ injection requires a multiphase flow model and an Equation of State (EOS) module to compute the dissolution of CO₂ in brine and vice versa. In this study, we used the Integrated Parallel Accurate Reservoir Simulator (IPARS), developed at the Center for Subsurface Modeling at The University of Texas at Austin, to model the injection and storage of CO₂ in saline aquifers. We developed and implemented new petrophysical models in IPARS and applied them to study the CO₂ sequestration process. The research presented in this dissertation is divided into three parts.
The first part discusses petrophysical and computational models for the mechanical, geological, and petrophysical phenomena occurring during CO₂ injection and sequestration. The effectiveness of CO₂ storage in saline aquifers is governed by the interplay of capillary, viscous, and buoyancy forces. Recent experimental data reveal the impact of pressure, temperature, and salinity on the interfacial tension (IFT) between CO₂ and brine, and the dependence of CO₂-brine relative permeability and capillary pressure on IFT is clearly evident in published experimental results. Improved understanding of the mechanisms that control the migration and trapping of CO₂ in the subsurface is crucial for designing future storage projects for long-term, safe containment. We have developed numerical models for CO₂ trapping and migration in aquifers, including a compositional flow model, a relative permeability model, a capillary pressure model, and an interfacial tension model, with heterogeneities in porosity and permeability coupled to the petrophysical models. We have also developed and implemented a general relative permeability model that combines the effects of pressure gradient, buoyancy, and capillary pressure in a compositional, parallel simulator; assessed the significance of IFT variations on CO₂ migration and trapping; modeled the variation of residual saturation based on interfacial tension and trapping number; and presented a hysteretic trapping model.
The second part is a model validation and sensitivity study based on coreflood simulations of a laboratory experiment. Its motivation is to gain confidence in the simulator by validating the models and numerical accuracy against laboratory and field pilot-scale results. Published steady-state, core-scale CO₂/brine displacement results were selected as the reference for the numerical study, and high-resolution compositional simulations of brine displacement by supercritical CO₂ are presented using IPARS. A three-dimensional (3D) numerical model of the Berea sandstone core was constructed using heterogeneous permeability and porosity distributions based on geostatistical data, and the measured capillary pressure curve was scaled with the Leverett J-function to include local heterogeneity at the sub-core scale (see the sketch following this abstract). Simulation results indicate that accurate representation of capillary pressure at sub-core scales is critical, and that water drying and the shift in relative permeability have a significant impact on the final CO₂ distribution along the core. The study provides insight into the role of heterogeneity in the final CO₂ distribution, where a slight variation in porosity gives rise to a large variation in the CO₂ saturation distribution.
The third part is a simulation study using IPARS of the Cranfield pilot CO₂ sequestration field test conducted by the Bureau of Economic Geology (BEG) at The University of Texas at Austin. In this project, approximately 2.5 million tons of supercritical CO₂ were injected over two years, beginning December 1, 2009, into a saline aquifer roughly 10,000 ft deep. We use IPARS to numerically model the CO₂ injection process at Cranfield, and a corresponding history-matching study gives good agreement with field observations. Extensive sensitivity studies were conducted for CO₂ trapping, fluid phase behavior, relative permeability, wettability, gravity and buoyancy, and capillary effects on sequestration. Simulation results are consistent with the observed CO₂ breakthrough time at the first observation well and with the bottomhole injection flowing pressure for the first 350 days, before the rate increase. The abnormal pressure response accompanying the rate increase on day 350 indicates possible geomechanical issues, which can be represented in the simulation by an induced fracture near the injection well; the recorded injection-well bottomhole pressure data were successfully matched after the fracture was added to the model. Results also illustrate the importance of accurate trapping models for predicting CO₂ immobilization, and the impact of CO₂/brine relative permeability curves and the trapping model on bottomhole injection pressure is demonstrated. / text
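The second part of this abstract relies on Leverett J-function scaling to map a single measured capillary pressure curve onto heterogeneous sub-core grid cells. The sketch below shows the standard form of that scaling; it is not the IPARS implementation, and the function name, values, and units are purely illustrative.

```python
import numpy as np

def leverett_scale_pc(pc_ref, k_ref, phi_ref, k_local, phi_local):
    """Scale a reference capillary pressure curve to local rock properties.

    Assumes the dimensionless Leverett J-function,
        J(Sw) = Pc(Sw) * sqrt(k / phi) / (sigma * cos(theta)),
    is the same in every cell, so with a common fluid pair and wettability
        Pc_local = Pc_ref * sqrt((k_ref / phi_ref) / (k_local / phi_local)).
    """
    return pc_ref * np.sqrt((k_ref / phi_ref) * (phi_local / k_local))

# Hypothetical use: a measured Pc curve (Pa) from a 200 mD, 20% porosity core
# plug rescaled to a tighter 50 mD, 18% porosity grid cell.
pc_measured = np.array([2.0e3, 5.0e3, 1.2e4, 3.0e4])
pc_cell = leverett_scale_pc(pc_measured, k_ref=200.0, phi_ref=0.20,
                            k_local=50.0, phi_local=0.18)
print(pc_cell)  # higher capillary pressure in the lower-permeability cell
```

Because Pc scales with sqrt(phi/k), small porosity and permeability variations translate directly into local differences in capillary pressure, consistent with the abstract's observation that slight porosity variations produce large variations in the CO₂ saturation distribution.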
159.
Modeling Cardiovascular Hemodynamics Using the Lattice Boltzmann Method on Massively Parallel Supercomputers / Randles, Amanda Elizabeth. 24 September 2013 (has links)
Accurate and reliable modeling of cardiovascular hemodynamics has the potential to improve understanding of the localization and progression of heart disease, currently the most common cause of death in Western countries. However, building a detailed, realistic model of human blood flow is a formidable mathematical and computational challenge. The simulation must combine the motion of the fluid, the intricate geometry of the blood vessels, continual changes in flow and pressure driven by the heartbeat, and the behavior of suspended bodies such as red blood cells. Such simulations can provide insight into factors like endothelial shear stress that act as triggers for the complex biomechanical events that can lead to atherosclerotic pathologies. Because endothelial shear stress cannot currently be measured in vivo, these simulations are a crucial component of understanding and potentially predicting the progression of cardiovascular disease. This thesis presents and examines an approach for efficiently modeling fluid movement coupled to cell dynamics in real-patient geometries while accounting for the additional force from the expansion and contraction of the heart.
First, a novel method to couple a mesoscopic lattice Boltzmann fluid model to a microscopic molecular dynamics model of cell movement is elucidated. A treatment of red blood cells as extended structures, a method to handle highly irregular geometries through topology-driven graph partitioning, and an efficient molecular dynamics load balancing scheme are introduced. These result in a large-scale simulation of the cardiovascular system with a realistic description of the complex human arterial geometry, from centimeters down to the spatial resolution of red blood cells. The computational methods developed to scale the application to 294,912 processors, enabling the simulation of a full heartbeat, are discussed.
Second, further extensions are presented to enable the modeling of fluids in vessels with smaller diameters and to introduce the deformational forces exerted on arterial flows by the movement of the heart, borrowing concepts from cosmodynamics. These additional forces have a great impact on the endothelial shear stress.
Third, the fluid model is extended to recover not only Navier-Stokes hydrodynamics but also a wider range of Knudsen numbers, which is especially important in micro- and nano-scale flows. The tradeoffs of several optimization methods are discussed, such as deep halo-level ghost cells that, together with hybrid programming models, reduce the cost of such higher-order models and enable efficient modeling of extreme regimes of computational fluid dynamics.
Fourth, these models are extended to other research questions, such as clogging in microfluidic devices and determining the severity of coarctation of the aorta. The methods are validated by taking real patient data and the measured pressure before the narrowing of the aorta and predicting the pressure drop across the coarctation; comparison with the pressure drop measured in vivo highlights the accuracy and potential impact of such patient-specific simulations.
Finally, a method is presented to enable the simulation of longer trajectories in time by discretizing both spatially and temporally. A serial coarse iterator initializes data at discrete time steps for a fine model that runs in parallel. The coarse solver uses a larger time step and typically a coarser spatial discretization, and iterative refinement allows the compute-intensive fine iterator to be run with temporal parallelization. The algorithm consists of a series of predictor-corrector iterations that complete when the results have converged within a given tolerance. Combined, these developments allow large fluid models to be simulated for longer durations than previously possible. / Engineering and Applied Sciences
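The closing paragraph of this abstract describes a coarse/fine predictor-corrector iteration for parallelizing the simulation in time, in the spirit of the parareal algorithm. A generic, minimal sketch of that idea follows; the propagators, the toy ODE, and all names are illustrative assumptions, not the thesis code.

```python
import numpy as np

def parareal(u0, t0, t1, n_slices, coarse, fine, n_iter=10, tol=1e-8):
    """Coarse/fine predictor-corrector time parallelization (parareal-style).

    coarse(u, ta, tb): cheap serial propagator over one time slice
    fine(u, ta, tb):   expensive propagator; in a real code each slice's
                       fine solve would run concurrently on its own ranks.
    """
    ts = np.linspace(t0, t1, n_slices + 1)
    u = [u0]                                  # serial coarse prediction
    for n in range(n_slices):
        u.append(coarse(u[n], ts[n], ts[n + 1]))

    for _ in range(n_iter):
        fine_vals = [fine(u[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        coarse_old = [coarse(u[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        u_new = [u0]
        for n in range(n_slices):             # corrector sweep (serial, cheap)
            g_new = coarse(u_new[n], ts[n], ts[n + 1])
            u_new.append(g_new + fine_vals[n] - coarse_old[n])
        converged = max(abs(a - b) for a, b in zip(u_new, u)) < tol
        u = u_new
        if converged:
            break
    return u

# Toy check on du/dt = -u with explicit Euler at two step sizes.
def euler(dt):
    def step(u, ta, tb):
        n = max(1, int(round((tb - ta) / dt)))
        h = (tb - ta) / n
        for _ in range(n):
            u += h * (-u)
        return u
    return step

u = parareal(1.0, 0.0, 2.0, n_slices=8, coarse=euler(0.25), fine=euler(0.01))
print(u[-1], np.exp(-2.0))  # fine-accuracy endpoint vs. exact solution
```

Each outer iteration replaces one long serial fine solve with independent fine solves over the slices plus a cheap serial coarse correction, which is what allows longer trajectories to be computed with temporal parallelism.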
160.
The Case For Hardware Overprovisioned Supercomputers / Patki, Tapasya. January 2015 (has links)
Power management is one of the most critical challenges on the path to exascale supercomputing. High Performance Computing (HPC) centers today are designed to be worst-case power provisioned, leading to two main problems: limited application performance and under-utilization of procured power. In this dissertation we introduce hardware overprovisioning: a novel, flexible design methodology for future HPC systems that addresses the aforementioned problems and leads to significant improvements in application and system performance under a power constraint. We first establish that choosing the right configuration based on application characteristics when using hardware overprovisioning can improve application performance under a power constraint by up to 62%. We conduct a detailed analysis of the infrastructure costs associated with hardware overprovisioning and show that it is an economically viable supercomputing design approach. We then develop RMAP (Resource MAnager for Power), a power-aware, low-overhead, scalable resource manager for future hardware overprovisioned HPC systems. RMAP addresses the issue of under-utilized power by using power-aware backfilling and improves job turnaround times by up to 31%. This dissertation opens up several new avenues for research in power-constrained supercomputing as we venture toward exascale, and we conclude by enumerating these.
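The abstract attributes RMAP's improvement in job turnaround time to power-aware backfilling: jobs are started out of order only if they fit within both the idle nodes and the unspent system power budget. A deliberately simplified, hypothetical sketch of that admission test follows; it does not reflect RMAP's actual interfaces or policies (for example, the reservation for the highest-priority blocked job is omitted).

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int
    watts_per_node: float  # requested (or bounded) per-node power

def power_aware_backfill(queue, free_nodes, power_budget, power_in_use):
    """Greedy sketch: start any waiting job that fits in BOTH the free nodes
    and the remaining system power budget."""
    started = []
    for job in list(queue):
        needed = job.nodes * job.watts_per_node
        if job.nodes <= free_nodes and power_in_use + needed <= power_budget:
            free_nodes -= job.nodes
            power_in_use += needed
            queue.remove(job)
            started.append(job)
    return started, free_nodes, power_in_use

# Hypothetical overprovisioned system: more nodes than a 3 MW budget can power
# at peak, so power (not node count) is often the binding constraint.
waiting = [Job("A", nodes=512, watts_per_node=300.0),
           Job("B", nodes=128, watts_per_node=250.0),
           Job("C", nodes=64, watts_per_node=200.0)]
started, nodes_left, power_now = power_aware_backfill(
    waiting, free_nodes=600, power_budget=3.0e6, power_in_use=2.8e6)
print([j.name for j in started], nodes_left, power_now)  # ['A', 'C'] 24 2966400.0
```

Treating power as a schedulable resource alongside nodes in this way is what lets a hardware overprovisioned machine keep otherwise idle nodes busy under a fixed site-wide power bound.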