1

Towards Improving Drought Forecasts Across Different Spatial and Temporal Scales

Madadgar, Shahrbanou 03 January 2014 (has links)
Recent water scarcity across the southwestern U.S., with severe effects on the living environment, has inspired the development of new methodologies for reliable drought forecasting at the seasonal scale. Reliable forecasts of hydrologic variables are, in general, a prerequisite for appropriate water-resources planning and effective allocation policies. This study develops new techniques with specific probabilistic features to improve the reliability of hydrologic forecasts, particularly drought forecasts. Future drought status is determined by hydrologic variables estimated by hydrologic models ranging from simple to complex in structure. Since hydrologic model predictions are subject to several sources of uncertainty, various techniques have been examined in recent years that combine the predictions of one or more hydrologic models into an ensemble of forecasts addressing these inherent uncertainties. However, the imperfect structure of hydrologic models usually leads to systematic bias in the predictions, which carries over into the forecast ensembles. This study proposes a post-processing method that, applied to a raw hydrologic forecast, develops the entire forecast distribution around the initial single-value prediction. To establish the probability density function (PDF) of the forecast, a class of multivariate distribution functions known as copulas is incorporated into the post-processing procedure. The performance of the new post-processing technique is tested on 2,500 hypothetical case studies and on streamflow forecasts for the Sprague River Basin in southern Oregon.
Judged by several deterministic and probabilistic verification measures, quantile mapping, a traditional post-processing technique, does not produce forecasts of comparable quality to the copula-based method. The post-processing technique is then extended to study drought forecasts across different spatial and temporal scales. In the proposed drought forecasting model, future drought status is evaluated from the drought status of past seasons, with the correlations between drought variables of consecutive seasons preserved by copula functions. The main benefit of the new model is its probabilistic treatment of future droughts: it develops the conditional probability of drought status in the forecast season and generates the PDF and cumulative distribution function (CDF) of future droughts given past status. The conditional PDF returns the most probable drought in the future along with an assessment of the uncertainty around that value. Using the conditional CDF for the forecast season, the model can generate maps of drought status across the basin for a given chance of occurrence. Alternatively, the conditional CDF can be used to approximate the chance of a particular drought in the forecast period given the drought status of earlier seasons. The forecast methodology developed in this study shows promising results in hydrologic forecasting, and its probabilistic features offer useful directions for future studies.
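The conditional-CDF idea described in this abstract can be sketched in a few lines. The sketch below assumes a Gaussian copula and standard-normal margins (as for a standardized drought index); the abstract does not specify the copula family, and the correlation value and index thresholds are invented for illustration.

```python
import math

def std_normal_cdf(x):
    """Phi(x) for the standard normal, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_drought_cdf(next_q, current_d, rho):
    """P(next-season index <= next_q | current-season index = current_d)
    under a Gaussian copula with correlation rho and standard-normal
    margins. With these margins the conditional distribution is again
    normal, with mean rho*current_d and variance 1 - rho**2."""
    return std_normal_cdf((next_q - rho * current_d) / math.sqrt(1.0 - rho ** 2))

# Illustrative query: chance of at least moderate drought (index <= -1.0)
# next season, given a current index of -1.5 and a season-to-season
# dependence of rho = 0.6 (both values hypothetical):
p = conditional_drought_cdf(-1.0, -1.5, 0.6)
```

A drier current season shifts the conditional distribution downward, raising the forecast probability of drought; with `rho = 0`, the conditional CDF reduces to the unconditional one, as expected.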
2

Integrated Parallel Simulations and Visualization for Large-Scale Weather Applications

Malakar, Preeti January 2013 (has links) (PDF)
The emergence of the exascale era necessitates the development of new techniques to efficiently perform high-performance scientific simulations, online data analysis, and on-the-fly visualization. Critical applications like cyclone tracking and earthquake modeling require high-fidelity, high-performance simulations involving large-scale computations, and they generate huge amounts of data. Faster simulations, together with simultaneous online data analysis and visualization, enable scientists to provide real-time guidance to policy makers. In this thesis, we present a set of techniques for efficient high-fidelity simulations, online data analysis, and visualization in environments with varying resource configurations. First, we present a strategy for improving the throughput of weather simulations with multiple regions of interest. We propose parallel execution of these nested simulations based on partitioning the 2D process grid into disjoint rectangular regions associated with each subdomain. The process-grid partitioning is derived from a Huffman tree constructed from the relative execution times of the subdomains. We propose a novel combination of performance prediction, processor allocation methods, and topology-aware mapping of the regions onto torus interconnects, and we observe up to 33% gain over the default strategy in weather models. Second, we propose a processor reallocation heuristic that minimizes data redistribution cost when reallocating processors for dynamic regions of interest. This algorithm is based on a hierarchical diffusion approach that uses a novel tree reorganization strategy. We have also developed a parallel data analysis algorithm to detect regions of interest within a domain. This improves the performance of detailed simulations of multiple weather phenomena such as depressions and clouds, thereby increasing the lead time to severe weather events like tornadoes and storm surges.
Our method reduces the redistribution time by 25% compared to a simple partition-from-scratch method. We also show that it is important to consider resource constraints such as I/O bandwidth, disk space, and network bandwidth for continuous simulation and smooth visualization. High simulation rates on modern processors, combined with high I/O bandwidth, can lead to rapid accumulation of data at the simulation site and eventual stalling of the simulation. We show that formulating the problem as an optimization problem can determine execution parameters that enable smooth simulation and visualization. This approach proves beneficial in resource-constrained environments, where a naive greedy strategy leads to stalling and disk overflow. Our optimization method provides about 30% higher simulation rates and consumes about 25-50% less storage space than the naive greedy approach. We have also developed an integrated adaptive steering framework, InSt, that analyzes the combined effect of user-driven steering and automatic tuning of application parameters, based on resource constraints and the criticality needs of the application, to determine the final simulation parameters. It is important to allow climate scientists to steer an ongoing simulation, especially for critical applications; InSt takes into account both the steering inputs of the scientists and the criticality needs of the application. Finally, we have developed algorithms to minimize the lag between the time when the simulation produces an output frame and the time when that frame is visualized. Reducing this lag lets scientists obtain an on-the-fly view of the simulation and concurrently visualize important events in it. We present most-recent, auto-clustering, and adaptive algorithms for reducing lag.
The lag-reduction algorithms adapt to the available resource parameters and to the number of pending frames to be sent to the visualization site by transferring a representative subset of frames. Our adaptive algorithm reduces lag by 72% and provides 37% greater representativeness than the most-recent algorithm on slow networks.
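The frame-thinning idea behind these lag-reduction algorithms can be sketched simply: when the network can ship only a budget of the pending frames, send an evenly spaced subset that always includes the newest frame, rather than only the newest frames (the "most-recent" strategy). This is a minimal illustration, not the thesis' auto-clustering or adaptive algorithm; the frame ids and budget are invented.

```python
def representative_subset(pending, budget):
    """Pick up to `budget` frames spread evenly across `pending`
    (ordered oldest to newest), always keeping the newest frame."""
    if budget >= len(pending):
        return list(pending)
    if budget == 1:
        return [pending[-1]]
    step = (len(pending) - 1) / (budget - 1)
    idx = {round(i * step) for i in range(budget)}  # endpoints 0 and len-1 included
    return [pending[i] for i in sorted(idx)]

# Ten pending frames but bandwidth for four: most-recent would send
# frames 6..9 and drop the backlog's history; the representative subset
# spans the whole backlog while still delivering the newest frame.
frames = representative_subset(list(range(10)), 4)
```

Rounding can occasionally collapse two indices into one, in which case slightly fewer than `budget` frames are sent; an adaptive variant would also resize `budget` from the measured network rate.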
