431

Finite Rank Perturbations of Random Matrices and their Continuum Limits

Bloemendal, Alexander 05 January 2012 (has links)
We study Gaussian sample covariance matrices with population covariance a bounded-rank perturbation of the identity, as well as Wigner matrices with bounded-rank additive perturbations. The top eigenvalues are known to exhibit a phase transition in the large size limit: with weak perturbations they follow Tracy-Widom statistics as in the unperturbed case, while above a threshold there are outliers with independent Gaussian fluctuations. Baik, Ben Arous and Péché (2005) described the transition in the complex case and conjectured a similar picture in the real case, the latter of most relevance to high-dimensional data analysis. Resolving the conjecture, we prove that in all cases the top eigenvalues have a limit near the phase transition. Our starting point is the work of Ramírez, Rider and Virág (2006) on the general beta random matrix soft edge. For rank one perturbations, a modified tridiagonal form converges to the known random Schrödinger operator on the half-line but with a boundary condition that depends on the perturbation. For general finite-rank perturbations we develop a new band form; it converges to a limiting operator with matrix-valued potential. The low-lying eigenvalues describe the limit, jointly as the perturbation varies in a fixed subspace. Their laws are also characterized in terms of a diffusion related to Dyson's Brownian motion and in terms of a linear parabolic PDE. We offer a related heuristic for the supercritical behaviour and rigorously treat the supercritical asymptotics of the ground state of the limiting operator. In a further development, we use the PDE to make the first explicit connection between a general beta characterization and the celebrated Painlevé representations of Tracy and Widom (1993, 1996). In particular, for beta = 2, 4 we give novel proofs of the latter. Finally, we report briefly on evidence suggesting that the PDE provides a stable, even efficient method for numerical evaluation of the Tracy-Widom distributions, their general beta analogues and the deformations discussed and introduced here. This thesis is based in part on work to be published jointly with Bálint Virág.
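As an illustration of the phase transition this abstract describes, the following minimal simulation (a sketch, not code from the thesis) adds a rank-one spike to a GOE matrix and compares the top eigenvalue with the classical BBP-type prediction: 2 (the bulk edge) below the threshold, theta + 1/theta above it.

```python
import numpy as np

def top_eig_spiked_goe(n, theta, rng):
    a = rng.standard_normal((n, n))
    w = (a + a.T) / np.sqrt(2 * n)   # GOE scaled so the bulk edge sits at 2
    w[0, 0] += theta                 # rank-one perturbation theta * v v^T with v = e_1
    return np.linalg.eigvalsh(w)[-1]

rng = np.random.default_rng(0)
n, reps = 400, 50
for theta in (0.5, 1.0, 2.0):
    lam = np.mean([top_eig_spiked_goe(n, theta, rng) for _ in range(reps)])
    pred = 2.0 if theta <= 1 else theta + 1.0 / theta
    print(f"theta={theta}: simulated {lam:.3f} vs large-n prediction {pred:.3f}")
```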
432

Some questions in risk management and high-dimensional data analysis

Wang, Ruodu 04 May 2012 (has links)
This thesis addresses three topics in the area of statistics and probability, with applications in risk management. First, for testing problems in high-dimensional (HD) data analysis, we present a novel method to formulate empirical likelihood tests and jackknife empirical likelihood tests by splitting the sample into subgroups. New tests are constructed to test the equality of two HD means, the coefficients in HD linear models and HD covariance matrices. Second, we propose jackknife empirical likelihood methods to formulate interval estimations for important quantities in actuarial science and risk management, such as risk-distortion measures, Spearman's rho and parametric copulas. Lastly, we introduce the theory of completely mixable (CM) distributions. We give properties of the CM distributions, show that a few classes of distributions are CM and use the new technique to find bounds for the sum of individual risks with given marginal distributions but unspecified dependence structure. The result partially solves a problem that had been a challenge for decades, and directly leads to bounds on quantities of interest in risk management, such as the variance, the stop-loss premium, the price of European options and the Value-at-Risk associated with a joint portfolio.
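The dependence-bound problem in the last part of this abstract has a well-known numerical companion: the rearrangement algorithm of Puccetti and Rüschendorf. The sketch below implements that generic algorithm, not the thesis's analytical CM results; driving the variance of the sum to zero is exactly what complete mixability permits.

```python
import numpy as np

def min_variance_rearrangement(cols, n_iter=100, seed=0):
    """Heuristically minimize Var(X1+...+Xd) over dependence structures.

    cols: (n, d) array; column j lists n equiprobable values of X_j.
    Each column is repeatedly reordered oppositely to the sum of the
    others; complete mixability means the sum can be made constant.
    """
    rng = np.random.default_rng(seed)
    M = np.column_stack([rng.permutation(c) for c in cols.T])
    for _ in range(n_iter):
        for j in range(M.shape[1]):
            rest = M.sum(axis=1) - M[:, j]
            # oppositely order column j against the partial sums
            M[np.argsort(rest), j] = np.sort(M[:, j])[::-1]
    return M.sum(axis=1).var()

# The uniform distribution is completely mixable: the minimal variance
# of a sum of three uniform marginals is (numerically) zero.
n = 1000
u = (np.arange(n) + 0.5) / n
print(min_variance_rearrangement(np.column_stack([u, u, u])))
```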
433

Likelihood ratio tests of separable or double separable covariance structure, and the empirical null distribution

Gottfridsson, Anneli January 2011 (has links)
The focus in this thesis is on the calculation of an empirical null distribution for likelihood ratio tests testing either a separable or a double separable covariance matrix structure versus an unstructured covariance matrix. These calculations have been performed for various dimensions and sample sizes, and are compared with the asymptotic χ²-distribution that is commonly used as an approximate distribution. Tests of separable structures are of particular interest when data are collected such that more than one relation between the components of the observation is suspected. For instance, if there are both a spatial and a temporal aspect, a hypothesis of two covariance matrices, one for each aspect, is reasonable.
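A minimal simulation sketch of this comparison, assuming a zero-mean matrix-normal model; the flip-flop MLE and the degrees-of-freedom count follow standard likelihood theory rather than code from the thesis.

```python
import numpy as np
from scipy.stats import chi2

def kronecker_mle(X, iters=50):
    """Flip-flop MLE of Sigma = A kron B for matrix-variate data X (n, p, q)."""
    n, p, q = X.shape
    A, B = np.eye(p), np.eye(q)
    for _ in range(iters):
        Binv = np.linalg.inv(B)
        A = np.einsum('nij,jk,nlk->il', X, Binv, X) / (n * q)  # X_i Binv X_i^T
        Ainv = np.linalg.inv(A)
        B = np.einsum('nji,jk,nkl->il', X, Ainv, X) / (n * p)  # X_i^T Ainv X_i
    return A, B

def lrt_separable(X):
    """-2 log LR for H0: separable covariance vs an unstructured alternative."""
    n, p, q = X.shape
    Xf = X.reshape(n, p * q)      # row-major vec; covariance is A kron B under H0
    S = Xf.T @ Xf / n             # unstructured MLE (mean assumed zero)
    A, B = kronecker_mle(X)
    _, ld_sep = np.linalg.slogdet(np.kron(A, B))
    _, ld_uns = np.linalg.slogdet(S)
    return n * (ld_sep - ld_uns)

# Empirical null distribution vs the asymptotic chi-square approximation
n, p, q = 50, 3, 4
rng = np.random.default_rng(1)
stats = [lrt_separable(rng.standard_normal((n, p, q))) for _ in range(500)]
df = p*q*(p*q+1)//2 - (p*(p+1)//2 + q*(q+1)//2 - 1)
print("empirical 95% point :", np.quantile(stats, 0.95))
print("chi2(df=%d) 95%% point:" % df, chi2.ppf(0.95, df))
```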
434

Modelling and Analyzing the Uncertainty Propagation in Vector-Based Network Structures in GIS

Yarkinoglu Gucuk, Oya 01 September 2007 (has links) (PDF)
Uncertainty is a quantitative attribute that represents the difference between reality and a representation of reality. Uncertainty analysis and error propagation modeling reveal how input error propagates through to the output. The main objective of this thesis is to model uncertainty and its propagation for dependent line segments, considering positional correlation. The model is implemented as a plug-in, called the Propagated Band Model (PBM) Plug-in, to a commercial desktop application, GeoKIT Explorer. Implementation of the model is divided into two parts. In the first, the model is applied to each line segment of the selected network separately. In the second, the error in each segment is transmitted through the line segments from the start node to the end node of the network. Outcomes are then compared with the results of the G-Band model, which is the latest uncertainty model for vector features. To comment on similarities and differences of the outcomes, the implementation is handled for two different cases. In the first case, users digitize the selected road network. In the second case, recently developed software called Interactive Drawer (ID) is used to let the user define a new network and simulate it with the Monte Carlo simulation method. The PBM Plug-in is designed to accept the outputs of these implementation cases as input, as well as to generate and visualize the uncertainty bands of the given line network. The developed implementations and functionality express the importance and effectiveness of uncertainty handling in vector-based geometric features, especially line segments that form a network.
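As a generic illustration of Monte Carlo error propagation along a line network (a sketch only, not the PBM or G-Band model), the following simulates positionally correlated errors at polyline nodes and summarizes the resulting displacement band.

```python
import numpy as np

def simulate_polyline_error(nodes, sigma, rho, n_sim=2000, seed=0):
    """Monte Carlo propagation of positional error along a polyline.

    nodes : (k, 2) digitized coordinates
    sigma : per-coordinate positional standard deviation
    rho   : correlation between errors of consecutive nodes
    Returns the 95th-percentile radial displacement at each node.
    """
    rng = np.random.default_rng(seed)
    k = len(nodes)
    # AR(1)-style covariance between node errors models positional correlation
    idx = np.arange(k)
    cov = sigma**2 * rho ** np.abs(idx[:, None] - idx[None, :])
    L = np.linalg.cholesky(cov)
    disp = np.empty((n_sim, k))
    for s in range(n_sim):
        ex = L @ rng.standard_normal(k)   # correlated x-errors
        ey = L @ rng.standard_normal(k)   # correlated y-errors
        disp[s] = np.hypot(ex, ey)        # radial displacement per node
    return np.percentile(disp, 95, axis=0)

nodes = np.array([[0, 0], [100, 20], [180, 90], [260, 100]], dtype=float)
print(simulate_polyline_error(nodes, sigma=2.0, rho=0.6))
```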
435

Representation of Covariance Matrices in Track Fusion Problems

Gunay, Melih 01 November 2007 (has links) (PDF)
The covariance matrix plays a critical role in target tracking algorithms for multi-sensor track fusion systems: it reveals the uncertainty of the state estimates obtained from different sensors, so many subproblems of track fusion utilize this matrix to get more accurate results. That is why this matrix must be interchanged between the nodes of the multi-sensor tracking system. This thesis mainly deals with the analysis of approximations of the covariance matrix that can best represent it, in order to transmit it effectively to the demanding site. The Kullback-Leibler (KL) distance is exploited to derive some of the representations for the Gaussian case. Comparison of these representations is another objective of this work; it is based on the fusion performance of the representations, measured on a 2-radar track fusion system.
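A sketch of the KL-based comparison this abstract describes: the function below computes the KL distance between zero-mean Gaussians and scores two cheap covariance representations. The representations shown (diagonal, scaled identity) are illustrative choices, not necessarily those studied in the thesis.

```python
import numpy as np

def kl_gauss(P, Q):
    """KL( N(0, P) || N(0, Q) ) for covariance matrices P, Q."""
    k = P.shape[0]
    Qinv = np.linalg.inv(Q)
    _, ldP = np.linalg.slogdet(P)
    _, ldQ = np.linalg.slogdet(Q)
    return 0.5 * (np.trace(Qinv @ P) - k + ldQ - ldP)

# The smaller the KL distance, the better the transmitted approximation
# preserves the uncertainty of the track for the fusion site.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)               # "true" track covariance
diag_rep = np.diag(np.diag(P))            # transmit only the diagonal
iso_rep = (np.trace(P) / 4) * np.eye(4)   # transmit a single scalar
print("KL to diagonal :", kl_gauss(P, diag_rep))
print("KL to isotropic:", kl_gauss(P, iso_rep))
```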
436

Automatic history matching in Bayesian framework for field-scale applications

Mohamed Ibrahim Daoud, Ahmed 12 April 2006 (has links)
Conditioning geologic models to production data and assessment of uncertainty is generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications. These are: first, the CPU time of the Bayesian inverse problem using the modified Gauss-Newton algorithm with full covariance as regularization scales quadratically with increasing model size; second, the sensitivity calculation using finite difference as the forward model depends upon the number of model parameters or the number of data points; and third, the high CPU time and memory required for covariance matrix calculation. Previous attempts to alleviate the third limitation used analytically derived stencils, but these are limited to exponential covariance models only. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of the current limitations. First, we use a commercial finite difference simulator, ECLIPSE, as a forward model, which is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel time misfit per well, thus effectively reducing the number of data points to one per well and ensuring the matching of the entire production history. Third, we use both the adjoint method and a streamline-based sensitivity method for sensitivity calculations. The adjoint method depends on the number of wells integrated, which is generally an order of magnitude less than the number of data points or model parameters. The streamline method is more efficient and faster, as it requires only one simulation run per iteration regardless of the number of model parameters or data points. Fourth, for solving the inverse problem, we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically derived stencil, which is broadly applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, the CPU time scales linearly with respect to model size. This makes automatic history matching and uncertainty assessment using a Bayesian framework more feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example. The field example is from Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.
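The LSQR component mentioned above has a direct counterpart in SciPy; the sketch below shows only the generic damped LSQR call on stand-in data, rather than the thesis's square-root inverse-covariance regularization.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Damped least squares: min ||G m - d||^2 + damp^2 ||m||^2, solved by LSQR.
rng = np.random.default_rng(0)
G = rng.standard_normal((200, 50))        # sensitivity matrix (stand-in)
m_true = rng.standard_normal(50)
d = G @ m_true + 0.05 * rng.standard_normal(200)
m_est, istop, itn = lsqr(G, d, damp=0.1)[:3]
print(f"stopped with code {istop} after {itn} iterations")
print("relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```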
437

VaR Methodology Analysis and Practical Application of the Methods

Rauktytė, Aidana 08 November 2010 (has links)
This master's thesis examines one of the most modern risk measures, Value-at-Risk (VaR). Three main VaR calculation methods are analyzed — variance/covariance, historical simulation and Monte Carlo simulation — in terms of their underlying assumptions, complexity and adequacy. Empirical studies were carried out with all three methods under current market conditions to estimate risk values in the currency and stock markets; a comparative analysis of the resulting risk values was performed and the accuracy of the methods was verified. The author's hypothesis — that VaR calculation methods are not suitable for use during a transitional period, when the economic environment and situation are not stable — was partially confirmed, since the test results rejected only the suitability of the variance/covariance and historical simulation methods.
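The three calculation methods compared in this thesis can be sketched in a few lines. In this sketch the Monte Carlo step simulates from a fitted normal model for brevity; in practice a richer model would be used.

```python
import numpy as np
from scipy.stats import norm

def var_three_ways(returns, alpha=0.99, n_mc=100_000, seed=0):
    """One-day VaR of a return series by the three methods compared above.

    Returned values are losses (positive numbers) at confidence alpha.
    """
    mu, sigma = returns.mean(), returns.std(ddof=1)
    # 1. Variance/covariance: assumes normally distributed returns
    var_param = -(mu + sigma * norm.ppf(1 - alpha))
    # 2. Historical simulation: empirical quantile of past returns
    var_hist = -np.quantile(returns, 1 - alpha)
    # 3. Monte Carlo: simulate from a fitted model (here, a normal fit)
    rng = np.random.default_rng(seed)
    sims = rng.normal(mu, sigma, n_mc)
    var_mc = -np.quantile(sims, 1 - alpha)
    return var_param, var_hist, var_mc

# Heavy-tailed stand-in data, where the normal assumption visibly understates risk
rets = np.random.default_rng(1).standard_t(df=4, size=1000) * 0.01
print(var_three_ways(rets))
```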
438

Variable Selection and Function Estimation Using Penalized Methods

Xu, Ganggang December 2011 (has links)
Penalized methods are becoming more and more popular in statistical research. This dissertation covers two major applications of penalized methods: variable selection and nonparametric function estimation. The following two paragraphs give brief introductions to each of the two topics. Infinite variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law indexed by alpha in (0, 2). We show that by combining the least absolute deviation loss function and the adaptive lasso penalty, we can consistently identify the true model. At the same time, the resulting coefficient estimator converges at a rate of n^(-1/alpha). The proposed approach gives a unified variable selection procedure for both finite and infinite variance autoregressive models. While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, it is much less so for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical properties are unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. By focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in that its minimization is asymptotically equivalent to the minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive a simplification of the leave-subject-out CV, which leads to a more efficient algorithm for selecting the smoothing parameters. We show that the simplified CV criterion is asymptotically equivalent to the unsimplified one and thus enjoys the same optimality property. This CV criterion also provides a completely data-driven approach to selecting the working covariance structure using generalized estimating equations in longitudinal data analysis. Our results are applicable to additive models, linear varying-coefficient models, and nonlinear models with data from exponential families.
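A sketch of the first topic's method under stated assumptions: L1-penalized median regression (scikit-learn's QuantileRegressor at quantile 0.5) serves as the LAD-plus-lasso solver, with adaptive weights from an initial unpenalized LAD fit. The lag order, penalty level and data-generating process are illustrative, not values from the dissertation.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

def adaptive_lad_lasso_ar(x, max_lag=6, lam=0.05):
    """LAD loss + adaptive lasso for AR coefficient/order selection."""
    n = len(x)
    y = x[max_lag:]
    # Lagged design: row t holds (x[t-1], ..., x[t-max_lag])
    X = np.column_stack([x[max_lag - 1 - j : n - 1 - j] for j in range(max_lag)])
    # Stage 1: unpenalized LAD fit (median regression) gives adaptive weights
    init = QuantileRegressor(quantile=0.5, alpha=0.0).fit(X, y)
    w = 1.0 / (np.abs(init.coef_) + 1e-8)
    # Stage 2: adaptive lasso = plain lasso on columns rescaled by 1/w
    fit = QuantileRegressor(quantile=0.5, alpha=lam).fit(X / w, y)
    return fit.coef_ / w

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
for t in range(2, n):                     # true AR(2), heavy-tailed innovations
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_t(df=1.5)
print(np.round(adaptive_lad_lasso_ar(x), 3))  # lags beyond 2 should shrink to ~0
```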
439

Optimal Linear Combinations of Portfolios Subject to Estimation Risk

Jonsson, Robin January 2015 (has links)
The combination of two or more portfolio rules is theoretically convex in return-risk space, which gives rise to a new class of portfolio rules that makes the Mean-Variance framework useful out-of-sample. The author investigates the performance loss from estimation risk between the unconstrained Mean-Variance portfolio and the out-of-sample Global Minimum Variance portfolio. A new two-fund rule is developed within a specific class of combined rules, mixing the equally weighted portfolio with a mean-variance portfolio whose covariance matrix is estimated by linear shrinkage. The study shows that this rule performs well out-of-sample when covariance estimation error and bias are balanced, and at least as well as its peer group in this class of combined rules.
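A sketch of a combined rule of this general type, assuming an unconstrained mean-variance step with Ledoit-Wolf linear shrinkage; the mixing weight and risk-aversion parameter are illustrative placeholders rather than the optimal combination derived in the thesis.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def combined_rule(returns, kappa=0.5, gamma=5.0):
    """Combine the 1/N portfolio with a shrinkage mean-variance portfolio.

    kappa: mixing weight on 1/N; gamma: risk-aversion parameter.
    """
    n_assets = returns.shape[1]
    mu = returns.mean(axis=0)
    sigma = LedoitWolf().fit(returns).covariance_   # linear shrinkage estimate
    w_mv = np.linalg.solve(gamma * sigma, mu)       # unconstrained MV weights
    w_eq = np.full(n_assets, 1.0 / n_assets)        # equally weighted portfolio
    return kappa * w_eq + (1 - kappa) * w_mv

rng = np.random.default_rng(0)
rets = rng.multivariate_normal(0.001 * np.arange(1, 6), 0.01 * np.eye(5), size=250)
print(np.round(combined_rule(rets), 3))
```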
440

Gaussian Multiplicative Chaos, Random Matrices and Applications

Allez, Romain 23 November 2012 (has links) (PDF)
In this work, we are interested on the one hand in the theory of Gaussian multiplicative chaos introduced by Kahane in 1985, and on the other hand in the theory of random matrices, whose pioneers are Wigner, Wishart and Dyson. The first part of this manuscript contains a brief introduction to these two theories, together with a quick account of the personal contributions of this manuscript. The following parts contain the texts of the published articles [1], [2], [3], [4], [5] and the preprints [6], [7], [8] on these results, in which the reader will find more detailed developments.
