11

Robust pricing and hedging beyond one marginal

Spoida, Peter January 2014 (has links)
The robust pricing and hedging approach in Mathematical Finance, pioneered by Hobson (1998), makes statements about non-traded derivative contracts by imposing very few assumptions on the underlying financial model and instead directly using information contained in traded options, typically call or put option prices. These prices are informative about the marginal distributions of the asset. Mathematically, the theory of Skorokhod embeddings provides one way to approach robust problems. In this thesis we consider mostly robust pricing and hedging problems for Lookback options (options written on the terminal maximum of an asset) and convex Vanilla options (options written on the terminal value of an asset) and extend the analysis predominantly found in the literature on robust problems by two features: firstly, options with multiple maturities are available for trading (mathematically, this corresponds to multiple marginal constraints), and secondly, restrictions on the total realized variance of asset trajectories are imposed. Probabilistically, in both cases, we develop new optimal solutions to the Skorokhod embedding problem.

More precisely, in Part I we start by constructing an iterated Azéma-Yor type embedding (a solution to the n-marginal Skorokhod embedding problem, see Chapter 2). Subsequently, its implications are presented in Chapter 3. From a Mathematical Finance perspective we obtain explicitly the optimal superhedging strategy for Barrier/Lookback options. From a probability theory perspective, we find the maximum maximum of a martingale which is constrained by finitely many intermediate marginal laws. Further, as a by-product, we discover a new class of martingale inequalities for the terminal maximum of a càdlàg submartingale, see Chapter 4. These inequalities enable us to re-derive the sharp versions of Doob's inequalities. In Chapter 5 a different problem is solved. Motivated by the fact that in some markets both Vanilla and Barrier options with multiple maturities are traded, we characterize the set of market models in this case.

In Part II we incorporate the restriction that the total realized variance of every asset trajectory is bounded by a constant. This has been previously suggested by Mykland (2000). We further assume that finitely many put options with one fixed maturity are traded. After introducing the general framework in Chapter 6, we analyse the associated robust pricing and hedging problem for convex Vanilla and Lookback options in Chapters 7 and 8. Robust pricing is achieved through construction of appropriate Root solutions to the Skorokhod embedding problem. Robust hedging and pathwise duality are obtained by a careful development of dynamic pathwise superhedging strategies. Further, we characterize existence of market models with a suitable notion of arbitrage.
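For readers unfamiliar with Skorokhod embeddings, the sketch below simulates the classical one-marginal Azéma-Yor embedding of a standard normal law into Brownian motion: stop when the running maximum reaches the barycentre function of the target. This is only the textbook one-marginal construction, not the iterated n-marginal embedding developed in the thesis; the target law, grid size and function names are illustrative choices, and time discretisation introduces a small bias.

```python
import numpy as np
from scipy.stats import norm

def barycentre(x):
    # b(x) = E[X | X >= x] for X ~ N(0,1): phi(x) / (1 - Phi(x))
    return norm.pdf(x) / norm.sf(x)

def azema_yor_normal(n_paths=5000, dt=1e-3, t_max=20.0, seed=0):
    """Monte Carlo draws of B_tau with tau = inf{t : M_t >= b(B_t)}."""
    rng = np.random.default_rng(seed)
    b = np.zeros(n_paths)           # Brownian paths
    m = np.zeros(n_paths)           # running maxima
    out = np.full(n_paths, np.nan)  # embedded values
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(int(t_max / dt)):
        if not alive.any():
            break
        b[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        m[alive] = np.maximum(m[alive], b[alive])
        hit = alive.copy()
        hit[alive] = m[alive] >= barycentre(b[alive])
        out[hit] = b[hit]
        alive &= ~hit
    return out[~np.isnan(out)]

if __name__ == "__main__":
    draws = azema_yor_normal()
    # The embedded law should be close to N(0,1), up to discretisation error.
    print("mean %.3f  std %.3f" % (draws.mean(), draws.std()))
```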
12

Accounting for Model Uncertainty in Linear Mixed-Effects Models

Sima, Adam 01 February 2013 (has links)
Standard statistical decision-making tools, such as inference, confidence intervals and forecasting, are contingent on the assumption that the statistical model used in the analysis is the true model. In linear mixed-effects models, ignoring model uncertainty results in an underestimation of the residual variance, contributing to hypothesis tests with larger than nominal Type-I error rates and confidence intervals with smaller than nominal coverage probabilities. The generalized degrees of freedom developed by Zhang et al. (2012) are used in a novel way to adjust the estimate of the residual variance for model uncertainty. Additionally, the general global linear approximation is extended to linear mixed-effects models to adjust the standard errors of the parameter estimates for model uncertainty. Both of these methods rely on a perturbation approach to estimation, in which random noise is added to the response variable and, conditional on the observed responses, the corresponding estimate is calculated. A simulation study demonstrates that when the proposed methodologies are utilized, both the residual variance and the standard errors are inflated to account for model uncertainty. However, when a data-driven strategy is employed, the proposed methodologies show limited usefulness. These methods are evaluated using data from a trial assessing the performance of cervical traction in the treatment of cervical radiculopathy.
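For orientation, the perturbation idea can be sketched for ordinary least squares with a naive variable-selection step: Gaussian noise is added to the response, the selection-plus-fit procedure is rerun, and the covariance between each fitted value and its perturbation estimates that observation's contribution to the generalized degrees of freedom (in the spirit of Ye's generalized degrees of freedom, on which Zhang et al., 2012, build). The selection rule, noise scale and data below are illustrative assumptions, not the mixed-model procedure of the thesis.

```python
import numpy as np

def select_and_fit(X, y, t_cut=2.0):
    """Toy modelling procedure: keep columns whose marginal |t| exceeds t_cut,
    then return the OLS fitted values from the selected columns."""
    keep = []
    for j in range(X.shape[1]):
        xj = X[:, [j]]
        beta = np.linalg.lstsq(xj, y, rcond=None)[0]
        resid = y - xj @ beta
        se = np.sqrt(resid @ resid / (len(y) - 1) / (xj[:, 0] @ xj[:, 0]))
        if abs(beta[0] / se) > t_cut:
            keep.append(j)
    if not keep:
        return np.full_like(y, y.mean())
    Xs = X[:, keep]
    return Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]

def generalized_df(X, y, fit, tau=0.5, n_rep=200, seed=0):
    """Perturbation estimate of GDF = sum_i d E[yhat_i] / d y_i."""
    rng = np.random.default_rng(seed)
    deltas = tau * rng.standard_normal((n_rep, len(y)))
    fits = np.array([fit(X, y + d) for d in deltas])
    # Covariance over replications between fitted value i and perturbation i, / tau^2
    h = ((fits - fits.mean(0)) * (deltas - deltas.mean(0))).mean(0) / tau**2
    return h.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p = 100, 10
    X = rng.standard_normal((n, p))
    y = 1.5 * X[:, 0] + rng.standard_normal(n)   # only one truly active column
    print("perturbation GDF estimate: %.1f" % generalized_df(X, y, select_and_fit))
```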
13

Large-Scale Portfolio Allocation Under Transaction Costs and Model Uncertainty

Hautsch, Nikolaus, Voigt, Stefan 09 1900 (has links) (PDF)
We theoretically and empirically study portfolio optimization under transaction costs and establish a link between turnover penalization and covariance shrinkage, with the penalization governed by transaction costs. We show how the ex ante incorporation of transaction costs shifts optimal portfolios towards regularized versions of efficient allocations. The regularizing effect of transaction costs is studied in an econometric setting that incorporates parameter uncertainty and optimally combines predictive distributions resulting from high-frequency and low-frequency data. In an extensive empirical study, we illustrate that turnover penalization is more effective than commonly employed shrinkage methods and is crucial for constructing empirically well-performing portfolios.
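A stylized illustration of the link, assuming for simplicity a quadratic turnover penalty, a minimum-variance objective and synthetic inputs (none of which are taken from the paper): penalizing the deviation from the current allocation by beta * ||w - w0||^2 yields the same first-order conditions as minimum-variance optimization under the ridge-shrunk covariance matrix Sigma + beta*I, plus a tilt toward w0.

```python
import numpy as np

def min_var_with_turnover(Sigma, w0, beta):
    """argmin_w  w' Sigma w + beta * ||w - w0||^2   s.t.  1'w = 1.
    Closed form from the Lagrangian: (Sigma + beta*I) w = beta*w0 - 0.5*lam*1."""
    n = len(w0)
    A_inv = np.linalg.inv(Sigma + beta * np.eye(n))
    ones = np.ones(n)
    a = A_inv @ ones
    b = A_inv @ (beta * w0)
    lam_half = (ones @ b - 1.0) / (ones @ a)   # enforces the budget constraint
    return b - lam_half * a

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    G = rng.standard_normal((n, n))
    Sigma = G @ G.T / n + 0.05 * np.eye(n)     # synthetic covariance matrix
    w0 = np.full(n, 1.0 / n)                   # current (naively diversified) holdings
    for beta in [0.0, 0.1, 1.0, 10.0]:
        w = min_var_with_turnover(Sigma, w0, beta)
        print(f"beta={beta:5.1f}  turnover={np.abs(w - w0).sum():.3f}  w={np.round(w, 3)}")
    # As beta grows, the solution is pulled toward w0, mimicking covariance shrinkage.
```

With beta = 0 the closed form reduces to the usual global-minimum-variance portfolio; increasing beta shrinks the effective covariance toward the identity and the weights toward the existing allocation.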
14

Block stability analysis using deterministic and probabilistic methods

Bagheri, Mehdi January 2011 (has links)
This thesis presents a discussion of design tools for analysing block stability around a tunnel. First, it was determined that joint length and field stress have a significant influence on estimates of block stability. The results of calculations using methods based on kinematic limit equilibrium (KLE) were compared with the results of filtered DFN-DEM, which are closer to reality. The comparison shows that none of the KLE approaches (conventional, limited joint length, limited joint length with stress, and probabilistic KLE) could provide results similar to DFN-DEM. This is due to KLE's unrealistic assumptions in estimating either volume or clamping forces. A simple mechanism for estimating clamping forces, such as continuum mechanics or the solution proposed by Crawford-Bray, leads to an overestimation of the clamping forces and thus to unsafe design. The results of such approaches were compared to those of DEM, and it was determined that these simple mechanisms ignore a key stage of relaxation of the clamping forces due to the existence of the joints. The amount of relaxation is a function of many parameters, such as the stiffness of the joint and the surrounding rock, the joint friction angle and the block half-apical angle.

Based on a conceptual model, this key stage was incorporated into a new analytical solution for symmetric blocks, and the amount of joint relaxation was quantified. The results of the new analytical solution were compared to those of DEM, and the model uncertainty of the new solution was quantified. Further numerical investigations based on local and regional stress models were performed to study the initial clamping forces. The numerical analyses reveal that local stresses, which are a product of regional stress and joint stiffness, govern block stability. Models with a block assembly show that the clamping forces in a block assembly are equal to the clamping forces in a regional stress model. Therefore, considering a single block in massive rock results in lower clamping forces, and thus a safer design, compared to a block assembly under the same in-situ stress and properties.

Furthermore, a sensitivity analysis was conducted to determine the most important parameter by assessing sensitivity factors, and the applicability of the partial coefficient method for designing block stability was studied. It was determined that the governing parameter is the dispersion of the half-apical angle. For a dip angle with high dispersion, the partial factors become very large and the design value for the clamping forces is close to zero. This suggests that in cases with a high dispersion of the half-apical angle the clamping forces could be ignored in a stability analysis, unlike in cases with lower dispersion. The costs of gathering more information about the joint dip angle could be compared to the costs of overdesign. The use of partial factors is uncertain, at least without dividing the problem into sub-classes. The application of partial factors is possible in some circumstances but not always, and a FORM analysis is preferable. / QC 20111201
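For readers unfamiliar with the reliability terminology, the sketch below computes the reliability index, sensitivity (alpha) factors and design values of a generic linear limit state g = R - S with independent normal resistance and load. It is a textbook FORM calculation with invented numbers, not the block-stability limit state analysed in the thesis, but it shows how a dominant sensitivity factor drives the design values that the partial-coefficient discussion refers to.

```python
import math

def form_linear(mu_R, sig_R, mu_S, sig_S):
    """FORM for g = R - S with independent normal R (resistance) and S (load)."""
    denom = math.hypot(sig_R, sig_S)
    beta = (mu_R - mu_S) / denom           # reliability index
    alpha_R = sig_R / denom                # sensitivity factor of the resistance
    alpha_S = -sig_S / denom               # sensitivity factor of the load
    # Design values at the most probable failure point: x_d = mu - alpha * beta * sigma
    R_d = mu_R - alpha_R * beta * sig_R
    S_d = mu_S - alpha_S * beta * sig_S
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Pf = Phi(-beta)
    return beta, alpha_R, alpha_S, R_d, S_d, pf

if __name__ == "__main__":
    # Illustrative numbers only: resistance side dominated by a widely scattered parameter.
    beta, aR, aS, R_d, S_d, pf = form_linear(mu_R=150.0, sig_R=20.0, mu_S=60.0, sig_S=10.0)
    print(f"beta={beta:.2f}  alpha_R={aR:.2f}  alpha_S={aS:.2f}  "
          f"R_d={R_d:.1f}  S_d={S_d:.1f}  Pf={pf:.1e}")
    # A sensitivity factor close to 1 on one variable means its scatter governs the design,
    # which is the situation described for the half-apical angle in the thesis.
```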
15

On the Convective-Scale Predictability of the Atmosphere

Bengtsson, Lisa January 2012 (has links)
A well-represented description of convection in weather and climate models is essential, since convective clouds strongly influence the climate system. Convective processes interact with radiation, redistribute sensible and latent heat and momentum, and impact hydrological processes through precipitation. Depending on a model's horizontal resolution, the representation of convection may look very different. The convective scales not resolved by the model are traditionally parameterized by an ensemble of non-interacting convective plumes within some area of uniform forcing representing the “large scale”. A bulk representation of the mass flux associated with the individual plumes in the defined area provides the statistical effect of moist convection on the atmosphere. Studying the characteristics of the ECMWF ensemble prediction system, it is found that the control forecast of the ensemble system is not variable enough to yield sufficient spread using an initial perturbation technique alone. Such insufficient variability may be addressed in the parameterizations of, for instance, cumulus convection, where the sub-grid variability in space and time is traditionally neglected. Furthermore, horizontal transport due to gravity waves can act to organize deep convection into larger-scale structures, which can contribute to an upscale energy cascade. However, horizontal advection and numerical diffusion are the only ways through which adjacent model grid boxes interact in the models. The impact of flow-dependent horizontal diffusion on resolved deep convection is studied, and the organization of convective clusters is found to be very sensitive to the method of imposing horizontal diffusion. However, using numerical diffusion to represent lateral effects is undesirable. To address the above issues, a scheme using cellular automata to introduce lateral communication, memory and a stochastic representation of the statistical effects of cumulus convection is implemented in two numerical weather models. The behaviour of the scheme is studied in cases of organized convective squall lines, and initial model runs show promising improvements. / At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 4: Submitted.
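To give a flavour of the cellular-automaton idea (the rules, the coupling to the convection scheme and the memory treatment in the thesis differ and are not reproduced here), the toy automaton below evolves a field of active cells with a finite lifetime: cells are seeded stochastically where a prescribed forcing field is strong, spread to neighbouring grid boxes, and decay otherwise. This is one simple way to obtain lateral communication and memory at the sub-grid level; all rule parameters are invented for the sketch.

```python
import numpy as np

def neighbour_count(active):
    """Number of active cells in the 8-cell Moore neighbourhood (periodic domain)."""
    counts = np.zeros_like(active)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                counts += np.roll(np.roll(active, di, axis=0), dj, axis=1)
    return counts

def step(lifetime, forcing, rng, seed_prob=0.02, max_life=5):
    """One update: seed cells where forcing is strong, let clusters spread laterally,
    and age existing cells so that activity has a finite memory."""
    active = (lifetime > 0).astype(int)
    nbrs = neighbour_count(active)
    born = (nbrs >= 2) & (active == 0)                          # lateral spreading
    seeded = rng.random(active.shape) < seed_prob * forcing     # stochastic seeding
    lifetime = np.maximum(lifetime - 1, 0)                      # memory decays
    lifetime[born | seeded] = max_life
    return lifetime

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nx = 64
    x = np.linspace(0.0, 1.0, nx)
    forcing = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))    # idealized forcing field
    lifetime = np.zeros((nx, nx), dtype=int)
    for _ in range(200):
        lifetime = step(lifetime, forcing, rng)
    print(f"active fraction after 200 steps: {(lifetime > 0).mean():.2f}")
```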
16

Utility Indifference Pricing of Credit Instruments

Sigloch, Georg 03 March 2010 (has links)
While the market for credit instruments grew continuously in the decade before 2008, its liquidity has dried up significantly in the current crisis, and investors have become aware of the possible consequences of being exposed to credit risk. In this thesis we address these issues by pricing credit instruments using utility indifference pricing, a method that takes into account the investor's personal risk aversion and is not affected by the lack of liquidity. Through stochastic optimal control methods, we use indifference pricing with exponential utility to determine corporate bond prices and CDS spreads. In the first part we examine how these quantities are affected by risk aversion under different models of default. The emphasis lies on a hybrid model, in which a regime switch of the reference entity is triggered by a creditworthiness index correlated with its stock price. The second part generalizes this setup by introducing uncertainty in the model parameters. Robust optimal control has been used independently in the literature to address model uncertainty in portfolio selection problems. Here, we combine this approach with utility indifference pricing and derive some analytical and numerical results on how model uncertainty affects credit spreads.
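As a minimal illustration of the indifference-pricing principle (deliberately ignoring the dynamic trading and stochastic-control layer the thesis builds in), consider a static buyer with exponential utility and a defaultable bond: the price that leaves the buyer indifferent is the exponential certainty equivalent of the payoff, and it falls as risk aversion grows. The default probability, recovery rate and risk-aversion values below are made up for the example.

```python
import numpy as np

def exp_indifference_price(payoffs, probs, gamma):
    """Buyer's indifference price under U(x) = -exp(-gamma x) with no trading:
    E[U(x - p + C)] = E[U(x)]  =>  p = -(1/gamma) * log E[exp(-gamma C)]."""
    payoffs, probs = np.asarray(payoffs, float), np.asarray(probs, float)
    return -np.log(np.dot(probs, np.exp(-gamma * payoffs))) / gamma

if __name__ == "__main__":
    # Defaultable zero-coupon bond: pays 1 if no default, recovery 0.4 otherwise.
    q = 0.05                                   # assumed default probability over the horizon
    payoffs, probs = [1.0, 0.4], [1.0 - q, q]
    print(f"expected payoff: {np.dot(probs, payoffs):.4f}")
    for gamma in [0.1, 1.0, 5.0, 20.0]:
        p = exp_indifference_price(payoffs, probs, gamma)
        print(f"gamma={gamma:5.1f}  indifference price={p:.4f}")
    # The price moves from the expected payoff toward the worst case as gamma increases.
```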
18

Bayesian Model Uncertainty and Prior Choice with Applications to Genetic Association Studies

Wilson, Melanie Ann January 2010 (has links)
The Bayesian approach to model selection allows for uncertainty in both model-specific parameters and in the models themselves. Much of the recent Bayesian model uncertainty literature has focused on defining these prior distributions in an objective manner, providing conditions under which Bayes factors lead to the correct model selection, particularly in the situation where the number of variables, p, increases with the sample size, n. This is certainly the case in our area of motivation: the biological application of genetic association studies involving single nucleotide polymorphisms (SNPs). While the most common approach to this problem has been to apply a marginal test to all genetic markers, we employ analytical strategies that improve upon these marginal methods by modeling the outcome variable as a function of a multivariate genetic profile using Bayesian variable selection. In doing so, we perform variable selection on a large number of correlated covariates within studies involving modest sample sizes.

In particular, we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations (MISA) allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level. We use simulated data sets to characterize MISA's statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally 'validated' in independent studies.

In the context of Bayesian model uncertainty for problems involving a large number of correlated covariates, we characterize commonly used prior distributions on the model space and investigate their implicit multiplicity-correction properties, first in the extreme case where the model includes an increasing number of redundant covariates and then in the case of full-rank design matrices. We provide conditions on the asymptotic (in n and p) behavior of the model space prior required to achieve consistent selection of the global hypothesis of at least one associated variable using global posterior probabilities (i.e. under 0-1 loss). In particular, under the assumption that the null model is true, we show that the commonly used uniform prior on the model space leads to inconsistent selection of the global hypothesis via global posterior probabilities (the posterior probability of at least one association goes to 1) when the rank of the design matrix is finite. In the full-rank case, we also show inconsistency when p goes to infinity faster than the square root of n. Alternatively, we show that any model space prior such that the global prior odds of association increase at a rate slower than the square root of n results in consistent selection of the global hypothesis in terms of posterior probabilities. / Dissertation
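A back-of-the-envelope illustration of why the model-space prior matters for the global hypothesis (the alternative prior used here for contrast is a common default, not necessarily the one recommended in the dissertation): under a uniform prior over all 2^p models the prior odds of "at least one association" grow exponentially in p, whereas a Beta-Binomial(1,1) prior on model size keeps them linear in p.

```python
# Global prior odds of at least one association, P(H1)/P(H0), as a function of p.
for p in [5, 10, 20, 50]:
    uniform_odds = 2.0**p - 1.0   # uniform prior over the 2^p models: P(H0) = 2^{-p}
    betabin_odds = float(p)       # Beta-Binomial(1,1) on model size: P(H0) = 1/(p+1)
    print(f"p={p:3d}  uniform prior odds={uniform_odds:.3g}  "
          f"Beta-Binomial(1,1) prior odds={betabin_odds:.3g}")
```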
19

A Robust Design Method for Model and Propagated Uncertainty

Choi, Hae-Jin 04 November 2005 (has links)
One of the important factors to be considered in designing an engineering system is uncertainty, which emanates from natural randomness, limited data, or limited knowledge of systems. In this study, a robust design methodology is established for designing multifunctional materials, employing multi-time and length scale analyses. The Robust Concept Exploration Method with Error Margin Index (RCEM-EMI) is proposed for design incorporating non-deterministic system behavior. The Inductive Design Exploration Method (IDEM) is proposed to facilitate distributed, robust decision-making under propagated uncertainty in a series of multiscale analyses or simulations. These methods are verified in the context of Design of Multifunctional Energetic Structural Materials (MESM). The MESM is being developed to replace the large amount of steel reinforcement in a missile penetrator, offering light weight, high energy release, and sound structural integrity. In this example, the methods provide the following state-of-the-art design capabilities: robust MESM design under (a) random microstructure changes and (b) propagated uncertainty in a multiscale analysis chain. The methods are designed to facilitate effective and efficient materials design; however, they are general enough to apply to the design of any complex engineering system that involves computationally intensive simulations or expensive experiments, non-deterministic models, accumulated uncertainty in multidisciplinary analyses, and distributed, collaborative decision-making.
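As a generic illustration of the kind of problem IDEM targets (the functions, ranges and noise levels below are invented for the sketch and are not taken from the thesis or the MESM application), one can propagate input uncertainty through a two-stage analysis chain by Monte Carlo and keep only those design settings whose final output stays inside the required range with high probability.

```python
import numpy as np

def stage1(x, eps):
    """First (e.g. fine-scale) analysis: design variable x -> intermediate response."""
    return 2.0 * x + 0.5 * np.sin(3.0 * x) + eps

def stage2(y, eps):
    """Second (e.g. coarse-scale) analysis consuming the first stage's output."""
    return 0.8 * y**2 - y + eps

def robust_feasible(designs, target=(2.0, 6.0), n_mc=5000, reliability=0.95, seed=0):
    """Return design values whose final output lies in `target` with the required probability."""
    rng = np.random.default_rng(seed)
    lo, hi = target
    feasible = []
    for x in designs:
        e1 = 0.2 * rng.standard_normal(n_mc)   # model/parameter uncertainty, stage 1
        e2 = 0.3 * rng.standard_normal(n_mc)   # propagated plus added uncertainty, stage 2
        z = stage2(stage1(x, e1), e2)
        if np.mean((z >= lo) & (z <= hi)) >= reliability:
            feasible.append(x)
    return feasible

if __name__ == "__main__":
    candidates = np.linspace(0.0, 2.5, 26)
    good = robust_feasible(candidates)
    print("design values meeting the ranged requirement:", np.round(good, 2))
```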
20

Hierarchical Bayesian Benchmark Dose Analysis

Fang, Qijun January 2014 (has links)
An important objective in statistical risk assessment is the estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to hierarchical Bayesian modeling and credible limits for building BMDLs is far less developed, however; indeed, for the few existing forms of Bayesian BMDs, informative prior information is seldom incorporated. Here, a new method is developed that uses reparameterized quantal-response models which explicitly describe the BMD as a target parameter. This potentially improves BMD/BMDL estimation by combining elicited prior belief with the observed data in the Bayesian hierarchy. The large variety of candidate quantal-response models available for applying these methods, however, leads to questions of model adequacy and uncertainty. To address this issue, the Bayesian estimation technique is further enhanced by applying Bayesian model averaging to produce point estimates and (lower) credible bounds. Implementation is facilitated via a Monte Carlo-based adaptive Metropolis (AM) algorithm to approximate the posterior distribution. Performance of the method is evaluated via a simulation study. An example from carcinogenicity testing illustrates the calculations.
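To make the quantities concrete (using quantal-response models with assumed coefficients and assumed model weights as a stand-in for the hierarchical posterior model probabilities developed in the dissertation), the benchmark dose for an extra-risk BMR solves (pi(d) - pi(0)) / (1 - pi(0)) = BMR, and a model-averaged estimate weights the model-specific BMDs.

```python
import numpy as np
from scipy.optimize import brentq

def bmd_extra_risk(pi, bmr=0.10, d_max=100.0):
    """Benchmark dose: smallest d with extra risk (pi(d) - pi(0)) / (1 - pi(0)) = bmr."""
    p0 = pi(0.0)
    f = lambda d: (pi(d) - p0) / (1.0 - p0) - bmr
    return brentq(f, 1e-9, d_max)

# Two candidate quantal-response models with assumed (not estimated) parameters.
logistic = lambda d: 1.0 / (1.0 + np.exp(-(-3.0 + 0.08 * d)))
log_logistic = lambda d: 0.05 + 0.95 / (1.0 + np.exp(-(-4.0 + 1.5 * np.log(d + 1e-12))))

if __name__ == "__main__":
    bmds = np.array([bmd_extra_risk(logistic), bmd_extra_risk(log_logistic)])
    # Stand-in model weights (e.g. from posterior model probabilities); assumed values here.
    weights = np.array([0.6, 0.4])
    print("model-specific BMDs:", np.round(bmds, 2))
    print("model-averaged BMD point estimate: %.2f" % float(weights @ bmds))
```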
