231. Quantitative analysis of extreme risks in insurance and finance. Yuan, Zhongyi, 01 May 2013.
In this thesis, we aim at a quantitative understanding of extreme risks. We use heavy-tailed distribution functions to model extreme risks, and various tools, such as copulas and multivariate regular variation (MRV), to model dependence structures. We focus on modeling, as well as quantitatively estimating, certain measures of extreme risks.
We start with a credit risk management problem. More specifically, we consider a credit portfolio of multiple obligors subject to possible default. We propose a new structural model for the loss given default, which takes into account the severity of default. Then we study the tail behavior of the loss given default under the assumption that the losses of the obligors jointly follow an MRV structure. This structure provides an ideal framework for modeling both heavy tails and asymptotic dependence. Using hidden regular variation (HRV), we also accommodate the asymptotically independent case. Multivariate models involving Archimedean copulas, mixtures and linear transforms are revisited.
We then derive asymptotic estimates for the Value at Risk (VaR) and Conditional Tail Expectation (CTE) of the loss given default and compare them with the traditional empirical estimates.
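As an illustration of the traditional empirical estimates mentioned above (not the thesis's asymptotic estimators), the following sketch computes the empirical VaR and CTE from a loss sample; the Pareto(2) sample and the 99% level are assumptions chosen purely for illustration.

```python
import random

def empirical_var_cte(losses, p):
    """Empirical Value at Risk (the p-quantile of the loss distribution)
    and Conditional Tail Expectation (mean loss beyond that quantile)."""
    xs = sorted(losses)
    k = int(p * len(xs))              # index of the empirical p-quantile
    var = xs[k]
    tail = xs[k:]                     # losses at or beyond the VaR level
    cte = sum(tail) / len(tail)
    return var, cte

# Stand-in heavy-tailed sample: Pareto losses with tail index alpha = 2,
# for which the true 99% VaR is (1 - 0.99) ** (-1 / 2) = 10.
random.seed(0)
sample = [random.paretovariate(2) for _ in range(100_000)]
var99, cte99 = empirical_var_cte(sample, 0.99)
```

For this sample the empirical CTE comes out roughly twice the VaR, consistent with the Pareto(2) tail property E[X | X > x] = 2x.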
Next, we consider an investor who invests in multiple lines of business and study a capital allocation problem. A randomly weighted sum structure is proposed, which can capture both the heavy-tailedness of the losses and the dependence among them, while at the same time separating their magnitudes from the dependence. To pursue as much generality as possible, we do not impose any requirement on the dependence structure of the random weights. We first study the tail behavior of the total loss and obtain asymptotic formulas under various sets of conditions. We then derive asymptotic formulas for capital allocation and further refine them into explicit forms in some cases.
Finally, we conduct extreme risk analysis for an insurer who makes investments. We consider a discrete-time risk model in which the insurer is allowed to invest a proportion of its wealth in a risky stock and keep the rest in a risk-free bond. We assume that the claim amounts within individual periods follow an autoregressive process with heavy-tailed innovations and that the log-returns of the stock follow another autoregressive process, independent of the former. We derive an asymptotic formula for the finite-time ruin probability and propose a hybrid method, combining simulation with asymptotics, to compute this ruin probability more efficiently. As an application, we consider a portfolio optimization problem in which we determine the proportion invested in the risky stock that maximizes the expected terminal wealth subject to a constraint on the ruin probability.
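The crude Monte Carlo half of such a hybrid method can be sketched as follows; the AR coefficients, the Pareto claim innovations, the return parameters and the initial wealth below are all hypothetical placeholders, not values from the thesis.

```python
import math, random

def ruin_probability(n_paths=20_000, horizon=10, w0=50.0, pi=0.5,
                     a=0.4, b=0.2, r=0.02, seed=1):
    """Crude Monte Carlo estimate of the finite-time ruin probability.
    A fraction pi of wealth is held in a risky stock whose log-returns
    follow an AR(1) process; claims follow an independent AR(1) process
    with heavy-tailed (Pareto) innovations.  All parameter values here
    are hypothetical placeholders."""
    random.seed(seed)
    ruins = 0
    for _ in range(n_paths):
        wealth, claim, logret = w0, 0.0, 0.0
        for _ in range(horizon):
            claim = a * claim + random.paretovariate(1.5)    # heavy-tailed claims
            logret = b * logret + random.gauss(0.04, 0.2)    # stock log-return
            growth = (1.0 - pi) * (1.0 + r) + pi * math.exp(logret)
            wealth = wealth * growth - claim
            if wealth < 0.0:
                ruins += 1
                break
    return ruins / n_paths

p_ruin = ruin_probability()
```

The asymptotic half of a hybrid method would replace the expensive deep-tail paths with a closed-form approximation; here only the plain simulation step is shown.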
232. Interval Graphs. Yang, Joyce C, 01 January 2016.
We examine the problem of counting interval graphs. We answer a question posed by Hanlon: whether the formal power series generating function for the number of interval graphs on n vertices has a positive radius of convergence. We show that the radius of convergence is zero, and we obtain a lower bound and an upper bound on the number of interval graphs on n vertices. We also study the application of interval graphs to the dynamic storage allocation problem, which Stockmeyer has shown to be NP-complete; coloring interval graphs on-line has applications to dynamic storage allocation. Kierstead's on-line algorithm uses at most 3ω − 2 colors, where ω is the size of the largest clique in the graph. We determine a lower bound on the number of colors required; one such lower bound is 2ω − 1.
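For contrast with the on-line setting: interval graphs are perfect, so an off-line greedy sweep by left endpoint colors an interval graph with exactly ω colors. This pure-Python sketch (the intervals are made up) illustrates the structure that on-line algorithms such as Kierstead's cannot fully exploit, since they must commit to a color before seeing later intervals.

```python
def greedy_interval_coloring(intervals):
    """Color closed intervals [l, r] so that overlapping intervals get
    different colors.  Sweeping by left endpoint and reusing the smallest
    free color is optimal for interval graphs: it uses exactly omega
    colors, where omega is the largest set of pairwise-overlapping
    intervals (the maximum clique)."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colors = [None] * len(intervals)
    for i in order:
        li, ri = intervals[i]
        # Colors already taken by intervals overlapping interval i.
        used = {colors[j] for j in range(len(intervals))
                if colors[j] is not None
                and intervals[j][1] >= li and intervals[j][0] <= ri}
        c = 0
        while c in used:
            c += 1
        colors[i] = c
    return colors

# Hypothetical instance: three mutually overlapping intervals (omega = 3)
# plus one more that can reuse a color.
cols = greedy_interval_coloring([(0, 5), (1, 6), (2, 7), (6, 9)])
```

On this instance the sweep uses exactly three colors, matching ω = 3 for the clique {(0, 5), (1, 6), (2, 7)}.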
233. A Numerical Study in Prediction of Pressure on High-Speed Planing Craft during Slamming Events. Srivastava, Shivank, 18 May 2018.
This thesis is an attempt to create a computer-based tool that can be used academically, and later industrially by naval architects, in the analysis and development of efficient planing hull forms. The work contained here is based on the theory of Vorus (1996), which falls between empirical asymptotic solutions and the intractable non-linear boundary value problem in the time domain. The computer code developed predicts pressures on the bottom of high-speed planing craft during slamming events. The code is validated against available numerical data as a benchmark case. An aluminum wedge is dropped from various heights, resulting in unsteady pressure distributions with high peaks over the bottom plate. These pressure distributions are compared to the pressures predicted by the code and presented in this thesis. The predicted flow velocities are within 8% of the experimental data, and the graphs show similar trends in the experimental and numerical results. The predicted peak pressures deviate from the experimental data by 4% to 20%. The analysis and comparison illustrate the efficacy of the code.
234. Reaction Diffusion Equations On Domains With Thin Layers. Unknown author, unknown date.
235. Statistical inference in high dimensional linear and AFT models. Chai, Hao, 01 July 2014.
Variable selection procedures for high-dimensional data have been proposed and studied in a large body of literature in recent years. Most of the previous research focuses on selection properties as well as point estimation properties. In this thesis, our goal is to construct confidence intervals for some low-dimensional parameters in the high-dimensional setting. The models we study are partially penalized linear and accelerated failure time (AFT) models in the high-dimensional setting. In our model setup, all variables are split into two groups. The first group consists of a relatively small number of variables that are of primary interest. The second group consists of a large number of variables that may be correlated with the response variable. We propose an approach that selects the variables from the second group and produces confidence intervals for the parameters in the first group. We show the sign consistency of the selection procedure and give a bound on the estimation error. Based on this result, we provide sufficient conditions for the asymptotic normality of the low-dimensional parameters. The high-dimensional selection consistency and the low-dimensional asymptotic normality are developed for both linear and AFT models with high-dimensional data.
236. Semiparametric regression analysis of zero-inflated data. Liu, Hai, 01 July 2009.
Zero-inflated data abound in ecological studies as well as in other scientific and quantitative fields. Nonparametric regression with zero-inflated response may be studied via the zero-inflated generalized additive model (ZIGAM). ZIGAM assumes that the conditional distribution of the response variable belongs to the zero-inflated 1-parameter exponential family which is a probabilistic mixture of the zero atom and the 1-parameter exponential family, where the zero atom accounts for an excess of zeroes in the data. We propose the constrained zero-inflated generalized additive model (COZIGAM) for analyzing zero-inflated data, with the further assumption that the probability of non-zero-inflation is some monotone function of the (non-zero-inflated) exponential family distribution mean. When the latter assumption obtains, the new approach provides a unified framework for modeling zero-inflated data, which is more parsimonious and efficient than the unconstrained ZIGAM. We develop an iterative algorithm for model estimation based on the penalized likelihood approach, and derive formulas for constructing confidence intervals of the maximum penalized likelihood estimator. Some asymptotic properties including the consistency of the regression function estimator and the limiting distribution of the parametric estimator are derived. We also propose a Bayesian model selection criterion for choosing between the unconstrained and the constrained ZIGAMs. We consider several useful extensions of the COZIGAM, including imposing additive-component-specific proportional and partial constraints, and incorporating threshold effects to account for regime shift phenomena. The new methods are illustrated with both simulated data and real applications. An R package COZIGAM has been developed for model fitting and model selection with zero-inflated data.
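A minimal parametric illustration of the zero-inflation idea (a zero-inflated Poisson fitted by EM, far simpler than the semiparametric ZIGAM/COZIGAM and penalized-likelihood machinery above); the true parameters π = 0.3 and λ = 2 are made up for the simulation.

```python
import math, random

def fit_zip(data, n_iter=200):
    """EM for a zero-inflated Poisson mixture: with probability pi an
    observation is a structural zero, otherwise it is Poisson(lam)."""
    pi = 0.5
    lam = sum(data) / len(data)          # crude initial values
    for _ in range(n_iter):
        # E-step: posterior probability that an observed zero is structural.
        p0 = pi / (pi + (1.0 - pi) * math.exp(-lam))
        z = [p0 if x == 0 else 0.0 for x in data]
        # M-step: update the mixing proportion and the Poisson mean.
        pi = sum(z) / len(data)
        lam = sum(data) / (len(data) - sum(z))
    return pi, lam

def rpois(lam):
    """Poisson draw via Knuth's multiplication algorithm."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Simulate zero-inflated Poisson data (true pi = 0.3, lam = 2.0).
random.seed(42)
data = [0 if random.random() < 0.3 else rpois(2.0) for _ in range(20_000)]
pi_hat, lam_hat = fit_zip(data)
```

The E-step only touches the zeros because a positive count cannot be a structural zero; this is what separates the zero atom from the Poisson zeros.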
237. Identification of the Parameters When the Density of the Minimum is Given. Davis, John C, 30 May 2007.
Let (X1, X2, X3) be a trivariate normal vector with a non-singular covariance matrix Σ, where Σij &lt; 0 for i ≠ j. It is shown here that it is then possible to determine the three means, the three variances and the three correlation coefficients from knowledge of the probability density function of the minimum variate Y = min{X1, X2, X3} alone. We present a method for identifying the nine parameters that consists of a careful determination of the asymptotic orders of various bivariate tail probabilities.
238. Asymptotic giant branch stars: their influence on binary systems and the interstellar medium. Karakas, Amanda I. (Amanda Irene), January 2003.
Abstract not available
239. Asymptotic methods for tests of homogeneity for finite mixture models. Stewart, Michael Ian, January 2002.
We present limit theory for tests of homogeneity for finite mixture models. More specifically, we derive the asymptotic distribution of certain random quantities used for testing that a mixture of two distributions is in fact just a single distribution. Our methods apply to cases where the mixture component distributions come from a wide class of one-parameter exponential families, both continuous and discrete. We consider two random quantities, one related to testing simple hypotheses, the other composite hypotheses. For simple hypotheses we consider the maximum of the standardised score process, which is itself a test statistic. For composite hypotheses we consider the maximum of the efficient score process, which is not itself a statistic (it depends on the unknown true distribution) but is asymptotically equivalent to certain common test statistics. We show that both quantities can be approximated by the maximum of a certain Gaussian process depending on the sample size and the true distribution of the observations, which, when suitably normalised, has a limiting distribution of the Gumbel extreme-value type. Although the limit theory is not practically useful for computing approximate p-values, we use Monte Carlo simulations to show that another method suggested by the theory, which uses a Studentised version of the maximum-score statistic and simulates a Gaussian process to compute approximate p-values, is remarkably accurate and uses a fraction of the computing resources that a straight Monte Carlo approximation would.
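The simulation-based method described last can be sketched generically: draw paths of a Gaussian process with a given covariance via a Cholesky factor, and report the fraction of path maxima reaching the observed maximum as an approximate p-value. The Ornstein-Uhlenbeck-type covariance and the observed value 2.5 below are placeholders, not the efficient-score covariance from the thesis.

```python
import math, random

def cholesky(cov):
    """Lower-triangular Cholesky factor of a positive-definite matrix."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def max_process_pvalue(observed_max, cov, n_sim=5000, seed=7):
    """Empirical p-value: fraction of simulated Gaussian-process maxima
    that reach the observed maximum of the (Studentised) score process."""
    L = cholesky(cov)
    n = len(cov)
    random.seed(seed)
    hits = 0
    for _ in range(n_sim):
        z = [random.gauss(0.0, 1.0) for _ in range(n)]
        # Correlated path: L @ z has the target covariance.
        path = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
        if max(path) >= observed_max:
            hits += 1
    return hits / n_sim

# Placeholder covariance on a coarse grid: cov(s, t) = exp(-|s - t|).
grid = [i / 4 for i in range(10)]
cov = [[math.exp(-abs(s - t)) for t in grid] for s in grid]
p_value = max_process_pvalue(2.5, cov)
```

In practice the grid, the covariance and the Studentisation would all come from the fitted null model; only the simulate-and-count step is generic.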
240. Homogenization of singular convection-diffusion equations and of spectral problems with indefinite weight (L'homogénéisation d'équations de convection-diffusion singulières et de problèmes spectraux à poids indéfini). Pankratova, Iryna, 17 January 2011.
The aim of this thesis is to study the homogenization of singular convection-diffusion equations and of spectral problems with indefinite weight. The thesis consists of two parts. The first part contains qualitative and asymptotic results for solutions of stationary and nonstationary convection-diffusion equations defined in bounded or unbounded domains. The problems examined include qualitative studies of an elliptic equation with first-order terms in a semi-infinite cylinder, the homogenization of convection-diffusion models in thin cylinders, and an asymptotic analysis of nonstationary convection-diffusion equations with a large first-order term, posed in a bounded domain. The second part of the thesis concerns the homogenization of spectral problems with an indefinite weight that may change sign. We show that the asymptotic behavior depends essentially on the mean of the weight, in particular on whether the mean is zero or nonzero. We then construct the asymptotic expansion of the spectrum in both cases.