1

Incorporating geologic information into hydraulic tomography: A general framework based on geostatistical approach

Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Onoe, Hironori; Mok, Chin Man W.; Wen, Jet-Chau; Huang, Shao-Yang; Wang, Wenke
Hydraulic tomography (HT) has matured into a practical aquifer test technology over the last two decades. It collects nonredundant information about aquifer heterogeneity by sequentially stressing the aquifer at different wells and recording aquifer responses at other wells during each stress. The collected information is then interpreted by inverse models. Among these models, the geostatistical approaches, built upon the Bayesian framework, first conceptualize the hydraulic properties to be estimated as random fields characterized by means and covariance functions. They then use these spatial statistics as prior information, together with the aquifer response data, to estimate the spatial distribution of the hydraulic properties at a site. Since the spatial statistics describe the generic spatial structures of the geologic media at the site rather than site-specific ones (e.g., known spatial distributions of facies, faults, or paleochannels), the estimates are often not optimal. To improve the estimates, we introduce a general statistical framework that allows the inclusion of site-specific spatial patterns of geologic features. We then test this approach with synthetic numerical experiments. Results show that this approach, using a conditional mean and covariance that reflect site-specific large-scale geologic features, indeed improves the HT estimates. Afterward, the approach is applied to HT surveys at a kilometer-scale fractured granite field site with a distinct fault zone. We find that including fault information from outcrops and boreholes in the HT analysis improves the estimated hydraulic properties. The improved estimates in turn lead to better prediction of flow during a different pumping test at the site.
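For readers unfamiliar with the geostatistical approach the abstract refers to, the sketch below illustrates the linear-Gaussian update at its core: a random field with prior mean and covariance is conditioned on noisy, linearly related observations. The grid size, the sensitivity matrix, and all parameter values are illustrative assumptions, not the paper's implementation; incorporating site-specific geology corresponds to replacing the unconditional mean and covariance with conditional ones.

```python
# A minimal sketch (not the paper's algorithm) of the linear-Gaussian update that
# underlies geostatistical inversion: a random field with prior mean m and
# covariance Q is conditioned on noisy observations d = H y + e.
import numpy as np

rng = np.random.default_rng(0)

n = 100                                   # grid cells of a 1-D log-conductivity field
x = np.linspace(0.0, 1.0, n)
m = np.zeros(n)                           # prior (unconditional) mean
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential prior covariance

H = rng.standard_normal((5, n)) / n       # hypothetical linearized sensitivities (5 obs)
R = 1e-4 * np.eye(5)                      # observation-error covariance
y_true = rng.multivariate_normal(m, Q)
d = H @ y_true + rng.multivariate_normal(np.zeros(5), R)

# Conditional (posterior) mean and covariance -- the "estimate" the inversion returns.
K = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)   # gain matrix
m_post = m + K @ (d - H @ m)
Q_post = Q - K @ H @ Q
```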
2

Thresholds for peak-over-threshold theory

Amankonah, Frank O., January 2005
Thesis (M.S.)--University of Nevada, Reno, 2005. "August, 2005." Includes bibliographical references (leaf 43). Online version available on the World Wide Web.
3

Copula Based Hierarchical Bayesian Models

Ghosh, Souparno, August 2009
The main objective of our study is to employ copula methodology to develop Bayesian hierarchical models for the dependencies exhibited by temporal, spatial, and spatio-temporal processes. We develop hierarchical models for both discrete and continuous outcomes, addressing the dearth of copula-based Bayesian hierarchical models for hydro-meteorological events and other physical processes yielding discrete responses. First, we present Bayesian methods of analysis for longitudinal binary outcomes using generalized linear mixed models (GLMMs). We allow flexible marginal association among the repeated outcomes from different time points. A unique property of this copula-based GLMM is that if the marginal link function is integrated over the distribution of the random effects, its form remains the same as that of the conditional link function. This property enables us to retain the physical interpretation of the fixed effects under both the conditional and marginal models and yields a proper posterior distribution. We illustrate the performance of the posited model using real AIDS data and demonstrate its superiority over the traditional Gaussian random-effects model. We then develop a semiparametric extension of our GLMM and re-analyze the data from the AIDS study. Next, we propose a general class of models for non-Gaussian spatial data. The proposed model can handle geostatistical data exhibiting skewness, tail-heaviness, and multimodality. We fix the distribution of the marginal processes and induce dependence via copulas. We illustrate the superior predictive performance of our approach in modeling precipitation data, as compared to other kriging variants. Thereafter, we employ mixture kernels as the copula function to accommodate non-stationary data, and we demonstrate the adequacy of this non-stationary model by analyzing permeability data. In both cases we perform extensive simulation studies to investigate the performance of the posited models under misspecification. Finally, we take up the problem of modeling multivariate extreme values with copulas. We describe in detail how dependence can be induced in the block-maxima and peaks-over-threshold approaches by an extreme value copula. We prove the ability of the posited model to handle both strong and weak extremal dependence and derive the conditions for posterior propriety. We analyze the extreme precipitation events in the continental United States for the past 98 years and produce a suite of predictive maps.
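As a concrete, much-simplified illustration of the construction at the heart of these models — fix the marginal distributions, then induce dependence through a copula — the sketch below samples from a bivariate Gaussian copula with an assumed correlation and plugs in two arbitrary marginals. The thesis's hierarchical Bayesian machinery is not reproduced.

```python
# A minimal sketch of the copula construction: correlated normals -> uniforms
# (Gaussian copula) -> arbitrary fixed marginals via inverse-CDF transforms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho = 0.7                                         # copula correlation (assumed)
L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])

z = rng.standard_normal((10_000, 2)) @ L.T        # correlated standard normals
u = stats.norm.cdf(z)                             # uniforms with Gaussian-copula dependence

# Plug in any fixed marginals, e.g. a skewed gamma and a heavy-tailed Student-t:
x1 = stats.gamma.ppf(u[:, 0], a=2.0)
x2 = stats.t.ppf(u[:, 1], df=3)

print(np.corrcoef(x1, x2)[0, 1])                  # dependence survives the transforms
```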
4

Statistical inference of a threshold model in extreme value analysis

Lee, David (李大為), January 2012
In many data sets, a mixture distribution formulation applies when each observation is known to come from one of several underlying categories. Even when there are no apparent categories, an implicit categorical structure may justify a mixture distribution. This thesis concerns the modeling of extreme values in such a setting within the peaks-over-threshold (POT) approach. Specifically, traditional POT modeling using the generalized Pareto distribution is augmented so that, in addition to the threshold exceedances, data below the threshold are also modeled, by means of a mixture exponential distribution. In the first part of the thesis, the conventional frequentist approach is applied. In view of the mixture nature of the problem, the EM algorithm is employed for parameter estimation, with closed-form expressions obtained for the iterates. A simulation study confirms the suitability of this method, and the observed increase in standard error due to the variability of the threshold is addressed. The model is applied to two real data sets, and it is demonstrated how computation time can be reduced through a multi-level modeling procedure. From the fitted density, many useful quantities can be derived, such as return periods and levels, value-at-risk, expected tail loss, and bounds for ruin probabilities. A likelihood ratio test is then used to justify the model choice against a simpler model in which the thin-tailed distribution is a homogeneous exponential. The second part of the thesis develops a fully Bayesian treatment of the same model. It starts, as an introduction, with a special case of the model for which a closed-form posterior density for the threshold parameter can be computed. This is extended to the threshold mixture model by using the Metropolis-Hastings algorithm to simulate samples from a posterior distribution known only up to a normalizing constant. The concept of depth functions is proposed for multidimensional inference, where a natural ordering does not exist. These methods are then applied to real data sets. Finally, model choice is considered through the posterior Bayes factor, a criterion that stems from the posterior density. (M.Phil. thesis, Statistics and Actuarial Science)
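A minimal sketch of the composite density described above, under an assumed parametrization: a two-component mixture exponential (renormalized) below the threshold u, a generalized Pareto density for exceedances above it, and an exceedance probability phi gluing the two pieces together. The EM iterates and the Bayesian machinery of the thesis are not reproduced.

```python
# A minimal sketch (assumed parametrization) of a threshold mixture density:
# mixture exponential bulk below u, generalized Pareto tail above u.
import numpy as np
from scipy import stats

def threshold_mixture_pdf(x, u, phi, w, rate1, rate2, xi, sigma):
    x = np.asarray(x, dtype=float)
    # Mixture-exponential density, and its CDF at u for renormalizing the bulk.
    mix_pdf = w * stats.expon.pdf(x, scale=1/rate1) + (1-w) * stats.expon.pdf(x, scale=1/rate2)
    mix_cdf_u = w * stats.expon.cdf(u, scale=1/rate1) + (1-w) * stats.expon.cdf(u, scale=1/rate2)
    below = (1 - phi) * mix_pdf / mix_cdf_u                       # bulk, x <= u
    above = phi * stats.genpareto.pdf(x - u, c=xi, scale=sigma)   # GPD tail, x > u
    return np.where(x <= u, below, above)

xs = np.linspace(0.01, 10, 5)
print(threshold_mixture_pdf(xs, u=2.0, phi=0.1, w=0.6, rate1=1.0, rate2=0.3, xi=0.2, sigma=1.0))
```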
5

On tail behaviour and extremal values of some non-negative time series models

Zhang, Zhiqiang (張志強), January 2002
Ph.D. thesis, Statistics and Actuarial Science.
6

Efficient estimation of parameters of the extreme value distribution

Saha, Sathi Rani, January 2014
The problem of efficient estimation of the parameters of the extreme value distribution has not been addressed in the literature. We obtain efficient estimators of the parameters of the type I (maximum) extreme value distribution without solving the likelihood equations. For the full-sample case, we construct efficient estimators using linear combinations of the order statistics of a random sample drawn from the population. For type II censoring, this research provides, for the first time, simple explicit expressions for the elements of the information matrix, and we construct efficient estimators using linear combinations of the available order statistics, with additional weights on the smallest and largest of them. We present numerical examples to illustrate the application of the estimators and perform an extensive Monte Carlo simulation study to examine their performance for different sample sizes.
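For context, the sketch below shows one classical estimator of this type — the probability-weighted-moment estimator for the type I (Gumbel) distribution, itself a linear combination of order statistics. It is not the thesis's efficiency-optimized estimator, and the censored-sample case is omitted.

```python
# A minimal sketch of Gumbel parameter estimation via probability-weighted moments,
# a classical order-statistics method (not the thesis's optimized estimator).
import numpy as np

def gumbel_pwm(sample):
    x = np.sort(np.asarray(sample, dtype=float))      # ascending order statistics
    n = x.size
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n     # first probability-weighted moment
    scale = (2 * b1 - b0) / np.log(2)
    loc = b0 - np.euler_gamma * scale                 # Euler-Mascheroni constant
    return loc, scale

rng = np.random.default_rng(2)
data = rng.gumbel(loc=3.0, scale=2.0, size=5000)
print(gumbel_pwm(data))                               # should be close to (3.0, 2.0)
```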
7

Contributions to multivariate L-moments: L-comoment mathematics

Xiao, Peng, January 2006
Thesis (Ph.D.)--University of Texas at Dallas, 2006. Includes vita and bibliographical references (leaves 92-93).
8

Multivariate Regular Variation and its Applications

Mariko, Dioulde Habibatou, January 2015
In this thesis, we review the basic notions related to univariate regular variation and study some fundamental properties of regularly varying random variables. We then consider the notion of regular variation in the multivariate case. After collecting results on multivariate regular variation for random vectors with values in $\mathbb{R}_{+}^{d}$, we discuss its properties and examine several examples of multivariate regularly varying random vectors, such as independent and identically distributed random vectors, fully dependent random vectors, and other models. We also present the elements of univariate and multivariate extreme value theory and emphasize the connection with multivariate regular variation. Some measures of extremal dependence, such as the stable tail dependence function and the Pickands dependence function, are presented. We end the study with an analysis of financial data. In the univariate case, graphical tools such as quantile-quantile plots, mean excess plots, and Hill plots are used to identify the underlying distribution of the data; in the multivariate case, non-parametric estimators of the stable tail dependence function and the Pickands dependence function are used to describe the dependence structure.
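As an illustration of one of the univariate graphical tools mentioned, the sketch below computes the Hill estimator of the tail index for a synthetic regularly varying sample across several choices of k; plotting the reciprocal estimate against k yields the usual Hill plot.

```python
# A minimal sketch of the Hill estimator: for a regularly varying right tail,
# the average log-excess of the k largest order statistics estimates 1/alpha.
import numpy as np

def hill_estimator(sample, k):
    """Hill estimate of 1/alpha from the k largest order statistics."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]   # descending
    return np.mean(np.log(x[:k] / x[k]))                 # log-excess over the (k+1)-th largest

rng = np.random.default_rng(3)
data = rng.pareto(a=2.5, size=10_000) + 1.0              # Pareto tail with alpha = 2.5
for k in (50, 200, 1000):                                # a crude "Hill plot" in print form
    print(k, 1.0 / hill_estimator(data, k))              # should hover near alpha = 2.5
```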
9

Bivariate extreme value analysis of commodity prices

Joyce, Matthew, 21 April 2017
The crude oil, natural gas, and electricity markets are among the most widely traded and discussed commodity markets in the world. Over the past two decades each commodity has seen price volatility driven by political, economic, social, and technological factors. With that volatility comes a significant amount of risk that both corporations and governments must account for to ensure expected cash flows and minimize losses. This thesis analyzes the portfolio risk of the major US commodity hubs for crude oil, natural gas, and electricity by applying extreme value theory to historical daily price returns between 2003 and 2013. The risk measures used are Value-at-Risk and Expected Shortfall, estimated by fitting the generalized Pareto distribution to the data using the peaks-over-threshold method. We consider both the univariate and bivariate cases in order to determine the effects that price shocks within and across commodities have in a mixed portfolio. The results show that electricity is the most volatile, and therefore riskiest, of the three commodities for both positive and negative returns. In addition, we find that the univariate and bivariate results are statistically indistinguishable, leading to the conclusion that, for the three markets analyzed during this period, price shocks in one commodity do not directly impact the volatility of another commodity's price.
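A minimal sketch of the peaks-over-threshold risk calculation the thesis applies, using synthetic heavy-tailed data in place of commodity returns: fit a generalized Pareto distribution to exceedances over a high (assumed) threshold, then read off Value-at-Risk and Expected Shortfall from the standard POT formulas.

```python
# A minimal sketch of POT-based risk measures: GPD fit to threshold exceedances,
# then the standard Value-at-Risk and Expected Shortfall formulas (valid for xi < 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
losses = stats.t.rvs(df=4, size=5000, random_state=rng)    # heavy-tailed stand-in for returns

u = np.quantile(losses, 0.95)                              # threshold choice is assumed
exc = losses[losses > u] - u
xi, _, sigma = stats.genpareto.fit(exc, floc=0)            # fit GPD to exceedances

q = 0.99
zeta = exc.size / losses.size                              # exceedance rate N_u / n
var_q = u + (sigma / xi) * (((1 - q) / zeta) ** (-xi) - 1) # POT Value-at-Risk
es_q = var_q / (1 - xi) + (sigma - xi * u) / (1 - xi)      # Expected Shortfall
print(var_q, es_q)
```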
10

New statistical models for extreme values

Eljabri, Sumaya Saleh M., January 2013
Extreme value theory (EVT) has wide applicability in areas such as hydrology, engineering, science, and finance. Across the world, we can see the disruptive effects of flooding due to heavy rains or storms, and many countries suffer from natural disasters such as storms, floods, and higher temperatures leading to desertification. One of the best-known natural disasters is the 1931 Huang He flood, a series of floods along the Huang He river between July and November 1931 that led to around 4 million deaths in China. Several publications focus on finding the best model for such events and predicting their behaviour. The normal, log-normal, Gumbel, Weibull, Pearson-type, 4-parameter Kappa, Wakeby, and GEV distributions have all been presented as statistical models for extreme events, but the GEV and generalized Pareto (GP) distributions are the most widely used; even so, these models have often been misused as models for extreme values. The aim of this dissertation is to create new modifications of univariate extreme value models. The modifications are divided into two parts. In the first part, we generalise the GEV and GP distributions, obtaining the Kumaraswamy GEV and Kumaraswamy GP distributions; the major benefit of these models is their ability to fit skewed data better than other models. The second idea comes from a distribution proposed by Chen in the Proceedings of the International Conference on Computational Intelligence and Software Engineering (pp. 1-4); however, the cumulative distribution and probability density functions given there do not appear to be valid, and a correction is presented in chapter 6. The major problem in extreme event modelling is a model's ability to fit the tails of the data. In chapter 7, the corrected Chen model is combined with the GEV distribution to introduce a new model for extreme values, referred to as the new extreme value (NEV) distribution, which appears to be more flexible than the GEV distribution.
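The Kumaraswamy generalisation mentioned above follows the standard Kumaraswamy-G construction: given a baseline CDF G, set F(x) = 1 - (1 - G(x)^a)^b. The sketch below applies it to a GEV baseline; the parameter names and values are illustrative, and the thesis's exact parametrization may differ.

```python
# A minimal sketch of the Kumaraswamy-G construction with a GEV baseline:
# F(x) = 1 - (1 - G(x)^a)^b, with extra shape parameters a, b for flexibility.
import numpy as np
from scipy import stats

def kum_gev_cdf(x, a, b, xi, loc, scale):
    g = stats.genextreme.cdf(x, c=-xi, loc=loc, scale=scale)  # scipy's shape is c = -xi
    return 1.0 - (1.0 - g**a) ** b

def kum_gev_pdf(x, a, b, xi, loc, scale):
    g = stats.genextreme.cdf(x, c=-xi, loc=loc, scale=scale)
    dg = stats.genextreme.pdf(x, c=-xi, loc=loc, scale=scale)
    return a * b * dg * g ** (a - 1) * (1.0 - g**a) ** (b - 1)

xs = np.linspace(-2, 6, 5)
print(kum_gev_cdf(xs, a=2.0, b=1.5, xi=0.1, loc=0.0, scale=1.0))
print(kum_gev_pdf(xs, a=2.0, b=1.5, xi=0.1, loc=0.0, scale=1.0))
```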
