  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Latent State and Parameter Estimation of Stochastic Volatility/Jump Models via Particle Filtering

Soane, Andrew 04 February 2019 (has links)
Particle filtering in stochastic volatility/jump models has gained significant attention in the last decade, with many distinguished researchers contributing to this new field. Golightly (2009), Carvalho et al. (2010), Johannes et al. (2009) and Aihara et al. (2008) all attempt to extend the work of Pitt and Shephard (1999) and Liu and Chen (1998) to adapt particle filtering to latent state and parameter estimation in stochastic volatility/jump models. This dissertation reviews their extensions and compares their accuracy at filtering the Bates stochastic volatility model. Additionally, it provides an overview of particle filtering and the various contributions made over the last three decades. Finally, recommendations are made on how to improve the results of this dissertation and on further research opportunities.
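The bootstrap (sampling-importance-resampling) filter underlying the methods surveyed above can be sketched for a toy stochastic-volatility model. The AR(1) log-variance specification and all parameter values below are illustrative assumptions, not the Bates model treated in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model:
#   h_t = mu + phi*(h_{t-1} - mu) + sigma_h * eta_t   (latent log-variance)
#   y_t = exp(h_t / 2) * eps_t                        (observed return)
mu, phi, sigma_h, T = -1.0, 0.95, 0.2, 200

# Simulate a data set from the model
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_h * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)

def bootstrap_filter(y, n_particles=1000):
    """SIR filter: propagate particles through the state transition,
    reweight by the observation density, then resample."""
    # Initialise from the stationary distribution of the AR(1) state
    particles = mu + sigma_h / np.sqrt(1 - phi**2) * rng.standard_normal(n_particles)
    means = np.empty(len(y))
    for t, obs in enumerate(y):
        # Bootstrap proposal: the state transition itself
        particles = mu + phi * (particles - mu) + sigma_h * rng.standard_normal(n_particles)
        # Importance weights from the N(0, exp(h)) observation density
        var = np.exp(particles)
        w = np.exp(-0.5 * obs**2 / var) / np.sqrt(var)
        w /= w.sum()
        means[t] = w @ particles
        # Multinomial resampling to combat weight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return means

est = bootstrap_filter(y)
print(np.corrcoef(est, h)[0, 1])
```

The filtered means track the true latent log-variance path; the schemes compared in the dissertation differ mainly in the proposal, resampling, and parameter-learning steps layered on top of this skeleton.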
72

Constructing volatility surfaces for managed funds

Brinkman, Trevor Joseph January 2014 (has links)
Includes bibliographical references / In this dissertation, a methodology is developed for constructing a volatility surface for a managed fund by extending the work of Bakshi et al. (2003) and Taylor (2014). A power utility assumption (with constant relative risk aversion for a specific maturity) and historical return series are used both for the factors identified as influencing the fund's return and for the fund itself. The coefficient of relative risk aversion for a specific maturity and market is estimated from quoted option prices on a market index. This estimate is combined with the identified factors and the fund's return series to estimate the risk-neutral skewness of the fund. An optimisation procedure is then used to determine the volatility smile of the fund for a specific maturity. Thereafter, the volatility surface of the fund is constructed by repeating each step for different maturities. Although this methodology produces sensible results, the optimisation routine used is sensitive to initial values and constraints.
73

Mean-variance hedging in an illiquid market

Mavuso, Melusi Manqoba January 2015 (has links)
Consider a market consisting of two correlated assets: one liquidly traded asset and one illiquid asset that can only be traded at time 0. For a European derivative written on the illiquid asset, we find a hedging strategy consisting of a constant (time 0) holding in the illiquid asset and dynamic trading strategies in the liquid asset and a riskless bank account that minimizes the expected square replication error at maturity. This mean-variance optimal strategy is first found when the liquidly traded asset is a local martingale under the real world probability measure through an application of the Kunita-Watanabe projection onto the space of attainable claims. The result is then extended to the case where the liquidly traded asset is a continuous square integrable semimartingale, and we again use the Kunita-Watanabe decomposition, now under the variance optimal martingale measure, to find the mean-variance optimal strategy in feedback form. In an example, we consider the case where the two assets are driven by correlated Brownian motions and the derivative is a call option on the illiquid asset. We use this example to compare the terminal hedging profit and loss of the optimal strategy to a corresponding strategy that does not use the static hedge in the illiquid asset and conclude that the use of the static hedge reduces the expected square replication error significantly (by up to 90% in some cases). We also give closed form expressions for the expected square replication error in terms of integrals of well-known special functions.
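A one-period toy version of this comparison can be sketched numerically. The assets, claim, and parameter values below are assumptions for illustration; the least-squares projection plays the role of the mean-variance-optimal hedge, and the residual variance is the expected squared replication error, with and without the static position in the illiquid asset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two correlated lognormal assets (A_0 = B_0 = 1, assumed parameters)
n = 100_000
rho, sA, sB, K = 0.8, 0.2, 0.3, 1.0
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
A_T = np.exp(-0.5 * sA**2 + sA * z1)   # liquid asset
B_T = np.exp(-0.5 * sB**2 + sB * z2)   # illiquid asset, tradable only at time 0
H = np.maximum(B_T - K, 0.0)           # call written on the illiquid asset

def hedge_error(regressors):
    """Project the claim onto the hedge instruments by least squares;
    the mean squared residual is the expected squared replication error."""
    X = np.column_stack([np.ones(n)] + regressors)
    coef, *_ = np.linalg.lstsq(X, H, rcond=None)
    resid = H - X @ coef
    return np.mean(resid**2)

err_dynamic_only = hedge_error([A_T - 1.0])            # liquid asset only
err_with_static = hedge_error([A_T - 1.0, B_T - 1.0])  # add the static hedge in B
print(err_dynamic_only, err_with_static)
```

Even in this one-period simplification, adding the static position in the illiquid asset reduces the expected squared replication error substantially, mirroring the dissertation's continuous-time finding.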
74

A risk-budgeting framework for the combination of factor equity portfolios

Wegener, Fergus January 2016 (has links)
This dissertation examines a risk-budgeting approach to the construction of factor equity portfolios, proposed by de Carvalho et al. (2014). The approach begins with the construction of active-weighted portfolios with exposure to factors that have historically been linked to excess returns in the market. These factor portfolios are then combined using a risk-budgeting approach. Implied stock-level returns are then estimated from this combined active allocation, and a further optimisation allows for the incorporation of specific investor constraints. The framework constitutes a risk-based approach to portfolio construction in the sense that no direct estimation of expected stock returns is required, but it depends on a robust estimate of the covariance structure of stock returns. The framework is first evaluated in a simulation study, which confirms the risk-model estimation methodology used and gives insight into the intricacies of the framework in an environment where the underlying structure of the data is known. The framework is useful for investors who wish to combine a set of active portfolios by controlling the allocation of risk and understanding the exposure of the final portfolio to each of the factor portfolio components. Based on the findings of the simulation study and a back-test of the framework on JSE data, it was found that at the risk-budgeting step, the level of prior information imposed (with regard to the performance of factor portfolios) has a significant impact on the performance of the final portfolios. In addition, the application of investor constraints, such as long-only and absolute weight limits, ultimately hinders the investor's ability to retain the views taken on in the factor portfolio components. Furthermore, owing to significant discrepancies between ex-ante and ex-post tracking-error risk measurement, the use of alternative, or adjusted, risk measures is recommended.
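The risk-budgeting step can be sketched as a small convex program: minimising 0.5 w'Σw − Σᵢ bᵢ log wᵢ yields weights whose risk contributions wᵢ(Σw)ᵢ are proportional to the target budgets bᵢ. The covariance matrix and budgets below are assumed toy values, not the dissertation's factor portfolios:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed covariance of three factor portfolios and target risk budgets
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
budgets = np.array([0.5, 0.3, 0.2])   # desired share of total risk per factor

def risk_budget_weights(Sigma, b):
    """Solve min 0.5 w'Sigma w - sum(b*log w); at the optimum the
    risk contributions w_i*(Sigma w)_i are proportional to b_i."""
    n = len(b)
    obj = lambda w: 0.5 * w @ Sigma @ w - b @ np.log(w)
    grad = lambda w: Sigma @ w - b / w
    res = minimize(obj, np.ones(n) / n, jac=grad, method="L-BFGS-B",
                   bounds=[(1e-9, None)] * n)
    w = res.x
    return w / w.sum()                # normalise to a fully invested portfolio

w = risk_budget_weights(Sigma, budgets)
rc = w * (Sigma @ w)                  # risk contributions
print(w, rc / rc.sum())               # risk shares match the budgets
```

The log-barrier formulation is a standard way to solve the risk-budgeting problem without explicitly imposing the nonlinear contribution constraints.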
75

Robustness of bond portfolio optimisation

Pillay, Divanisha January 2016 (has links)
Korn and Koziol (2006) apply the Markowitz (1952) mean-variance framework to bond portfolio selection by proposing the use of term structure models to estimate the time-varying moments of bond returns. Duffee (2002) introduces a distinction between completely affine and essentially affine term structure models. A completely affine model uses a market price of risk specification that is proportional to the volatility of the risk factors. However, this assumption of proportionality of the market price of risk contradicts the observed behaviour of bond returns. In response, Duffee (2002) introduces a more flexible essentially affine market price of risk specification by breaking the strict proportionality of the completely affine specification. Essentially affine models better represent the empirical features of bond returns whilst preserving the tractability of completely affine models. However, Duffee and Stanton (2012) find that the increased flexibility of the essentially affine model comes at the expense of real-world parameter estimation. Given these parameter estimation issues, this dissertation investigates whether the difficulty in estimating an essentially affine specification is outweighed by the empirical preferability, and whether, all these issues considered, the Markowitz (1952) approach to bond portfolio optimisation is robust. The results indicate that the superior capability of an essentially affine model to forecast expected returns outweighs real-world parameter estimation issues; and that the estimation and mean-variance optimisation procedures are worthwhile.
76

A survey of some regression-based and duality methods to value American and Bermudan options

Joseph, Bernard January 2013 (has links)
Includes abstract. / Includes bibliographical references. / We give a review of regression-based Monte Carlo methods for pricing high-dimensional American and Bermudan options, for which backward methods such as lattice and PDE methods do not work. The continuous-time pricing problem is approximated in discrete time and formulated as an optimal stopping problem. The optimal stopping time can be expressed through continuation values (the value of the option at time j, given that it is exercised after time j, conditional on the state process at time j). Regression-based Monte Carlo methods apply regression estimates to data generated by artificial samples of the state process in order to approximate continuation values. The resulting estimate of the option price is a lower bound. We then look at a dual formulation of the optimal stopping problem, which is used to generate an upper bound for the option price. The upper bound can be constructed from any approximation to the option price. By using an approximation that arises from a lower-bound method, we obtain a general method for generating valid confidence intervals for the price of the option. In this way, the upper bound allows a better estimate of the price to be computed, and it provides a way of investigating the tightness of the lower bound by indicating whether more effort is needed to improve it.
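The regression-based lower-bound estimator described above is essentially the Longstaff-Schwartz algorithm. A minimal sketch for a Bermudan put under Black-Scholes dynamics follows; all parameter values and the polynomial basis are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed contract and model parameters
S0, K, r, sigma = 100.0, 100.0, 0.05, 0.2
T, n_steps, n_paths = 1.0, 50, 100_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths at the exercise dates
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

cash = np.maximum(K - S[:, -1], 0.0)       # payoff at maturity
for t in range(n_steps - 2, -1, -1):
    cash *= disc                           # discount one step back
    itm = K - S[:, t] > 0                  # regress on in-the-money paths only
    if itm.sum() == 0:
        continue
    x = S[itm, t]
    basis = np.column_stack([np.ones(x.size), x, x**2])
    coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
    continuation = basis @ coef            # estimated continuation value
    exercise = K - x
    ex = exercise > continuation           # exercise when immediate value wins
    cash[np.flatnonzero(itm)[ex]] = exercise[ex]

price = disc * cash.mean()                 # lower-bound price estimate
print(price)
```

Because the regression-implied stopping rule is suboptimal, the estimate is biased low, which is exactly why the dual upper bound discussed in the abstract is valuable as a companion estimate.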
77

Investigation of factor rotation routines in principal component analysis of stock returns

Weimar, Nicole January 2014 (has links)
Includes bibliographical references. / This paper investigates rotation routines that will produce uncorrelated rotated principal components for a dataset of stock returns, in an attempt to identify the macroeconomic factors that best explain the variability among risk-adjusted stock returns on the Johannesburg Stock Exchange. An alternative to the more traditional rotation approaches is used, which creates subsets of principal components with similar variances that are rotated in turn. It is found that only one of the three normalisation constraints examined can retain uncorrelated principal components after rotation. The results also show that when subspaces of components are rotated that have close eigenvalues, the different rotation criteria used to rotate principal components will produce similar results. After rotating the suitable subsets using varimax rotation, it is found that the first rotated component can be explained by the African Industrials sector, the second rotated component is related to the African Consumer Services sector while the third rotated component shows a significant relationship to the African Finance factor.
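A standard SVD-based varimax iteration can be sketched as follows; the toy factor structure of the simulated returns is an assumption for illustration, not the JSE data set used in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(3)

def varimax(Phi, gamma=1.0, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a p x k loadings matrix:
    iterate the SVD-based update until the criterion stops improving."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        Lam = Phi @ R
        u, s, vh = np.linalg.svd(
            Phi.T @ (Lam**3 - (gamma / p) * Lam @ np.diag(np.sum(Lam**2, axis=0))))
        R = u @ vh                      # closest orthogonal rotation
        d_old, d = d, np.sum(s)
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return Phi @ R, R

# Simulated returns with a common factor driving the first four stocks
X = rng.standard_normal((500, 8))
X[:, :4] += rng.standard_normal((500, 1))
C = np.corrcoef(X, rowvar=False)

vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
L = vecs[:, -3:] * np.sqrt(vals[-3:])   # loadings on the top 3 components
L_rot, R = varimax(L)
print(np.round(L_rot, 2))
```

Since R is orthogonal, the rotation leaves the reproduced correlation matrix L Lᵀ unchanged; the normalisation-constraint and subset-rotation issues studied in the dissertation arise precisely because an arbitrary rotation of principal components need not preserve their uncorrelatedness.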
78

Historically implied swaption skews using non-parametric methods

Jackson, Evan January 2016 (has links)
This dissertation aims to derive historically realised volatilities for swaptions of a long-term nature within the South African market, which is illiquid and over-the-counter. To achieve this, the dissertation adopts and constructs non-parametric methods which use only the historical realised data of the underlying variable, rather than any implied pricing history of the derivative itself. Stutzer's (1996) method of canonical valuation is adapted for use with long-term interest rate derivatives. However, under a simulation of swaption prices, canonical valuation is found to exhibit monotonically increasing pricing error across swaption maturities from 2 to 15 years. A new method, named the relative entropy approach, is constructed based on the work of Buchen and Kelly (1996). It can price long-term interest rate derivatives using only a smoothed continuous distribution of the historical realised data of the underlying variable, while market-implied pricing data can also be incorporated to calibrate the derivative to current market prices. Under simulation this method maintains consistent and bounded pricing error across swaption maturities of up to 15 years. It is then used to obtain historically realised volatilities for long-term swaptions. The derived ten-year-tenor swaption skews under the relative entropy approach exhibit smile characteristics similar to those of the market-implied skew over short-term maturities, and maintain a volatility smile, albeit a diminishing one, across moneyness for maturities up to 20 years. The skews are further tested for sensitivity to the input historical data, as well as for the precision of the skew under implementation of the relative entropy approach. Results show the derived swaption skews to be robust when using a historical data set of more than 1200 observations.
The swaption skew is sensitive to the nature of the historical data used, which reflects particular market characteristics over certain historical periods. The relative entropy approach is concluded to be capable of pricing long-term swaptions in a market where little or no option pricing data exists, and could be considered for practical applications.
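The core of a Buchen-Kelly-style relative-entropy calibration can be sketched in a deliberately reduced form (my simplification, not the dissertation's full method): tilt an empirical distribution of the underlying at expiry to the minimum-relative-entropy measure satisfying a single forward-price constraint. The sample distribution and forward value below are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)

# Assumed "historical" draws of the underlying at expiry, and a forward price
sample = np.exp(0.1 + 0.25 * rng.standard_normal(10_000))
F = 1.0                               # constraint: E_Q[S_T] = F

def entropy_weights(sample, F):
    """Minimum-relative-entropy reweighting under one moment constraint:
    the solution is exponential tilting, w_i proportional to exp(lam*S_i)."""
    centred = sample - sample.mean()  # centre the exponent for stability
    def mean_gap(lam):
        w = np.exp(lam * centred)
        w /= w.sum()
        return w @ sample - F
    lam = brentq(mean_gap, -50.0, 50.0)   # mean_gap is monotone in lam
    w = np.exp(lam * centred)
    return w / w.sum()

w = entropy_weights(sample, F)
call = w @ np.maximum(sample - 1.0, 0.0)  # price any payoff under the tilted measure
print(w @ sample, call)
```

With more constraints (e.g. matching quoted option prices, as the dissertation allows), the exponent becomes a linear combination of the constraint payoffs and the single root-find becomes a multivariate optimisation.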
79

Modelling Equities with a Stochastic Volatility Jump Diffusion

Gorven, Matthew 07 February 2019 (has links)
The Bates model provides a parsimonious fit to implied volatility surfaces, and its usefulness in developed markets is well documented. However, there is a lack of research assessing its applicability to developing markets. Additionally, research on its usefulness for hedging long-term liabilities is limited, despite its frequent use for this purpose. This dissertation dissects the dynamics of the Bates model into the Heston and Merton models in order to examine the effects of stochastic volatility and jumps separately. Challenges surrounding the application of this model are investigated through an evaluation of risk-neutral calibration and simulation methods. The model's ability to fit implied volatility surfaces from the JSE Top 40 equity index is analysed. Lastly, the model's delta- and vega-hedging performance is evaluated by comparison with that of other commonly used models.
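An Euler-type simulation of the Bates dynamics (Heston stochastic variance plus compensated lognormal jumps) can be sketched as below. All parameter values are assumptions for illustration; the discounted terminal mean serves as a martingale sanity check:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed risk-neutral Bates parameters:
#   dS/S = (r - lam*kbar) dt + sqrt(v) dW1 + (e^J - 1) dN
#   dv   = kappa*(theta - v) dt + xi*sqrt(v) dW2,  corr(dW1, dW2) = rho
S0, v0, r = 100.0, 0.04, 0.05
kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7
lam, mu_j, sig_j = 0.5, -0.1, 0.15            # jump intensity and jump-size law
T, n_steps, n_paths = 1.0, 252, 50_000
dt = T / n_steps
kbar = np.exp(mu_j + 0.5 * sig_j**2) - 1.0    # mean relative jump size (compensator)

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    n_jumps = rng.poisson(lam * dt, n_paths)
    # Sum of n_jumps iid N(mu_j, sig_j^2) jump sizes, drawn in one shot
    J = mu_j * n_jumps + sig_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
    vp = np.maximum(v, 0.0)                    # full truncation for the CIR variance
    S *= np.exp((r - lam * kbar - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1 + J)
    v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2

disc_mean = np.exp(-r * T) * S.mean()
print(disc_mean)                               # close to S0 up to Monte Carlo error
```

The full-truncation scheme is a common fix for the square-root variance going negative under Euler discretisation; calibration then amounts to choosing the parameters so that model prices match the observed implied volatility surface.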
80

Portfolio selection using Random Matrix theory and L-Moments

Ushan, Wardah January 2015 (has links)
Includes bibliographical references / Markowitz's (1952) seminal work on Modern Portfolio Theory (MPT) describes a methodology to construct an optimal portfolio of risky stocks. The constructed portfolio is based on a trade-off between risk and reward, and will depend on the risk-return preferences of the investor. Implementation of MPT requires estimation of the expected returns and variances of each of the stocks, and the associated covariances between them. Historically, the sample mean vector and variance-covariance matrix have been used for this purpose. However, estimation errors result in the optimised portfolios performing poorly out-of-sample. This dissertation considers two approaches to obtaining a more robust estimate of the variance-covariance matrix. The first is Random Matrix Theory (RMT), which compares the eigenvalues of an empirical correlation matrix to those generated from a correlation matrix of purely random returns. Eigenvalues of the random correlation matrix follow the Marcenko-Pastur density, and lie within an upper and lower bound. This range is referred to as the "noise band". Eigenvalues of the empirical correlation matrix falling within the "noise band" are considered to provide no useful information. Thus, RMT proposes that they be filtered out to obtain a cleaned, robust estimate of the correlation and covariance matrices. The second approach uses L-moments, rather than conventional sample moments, to estimate the covariance and correlation matrices. L-moment estimates are more robust to outliers than conventional sample moments, in particular, when sample sizes are small. We use L-moments in conjunction with Random Matrix Theory to construct the minimum variance portfolio. In particular, we consider four strategies corresponding to the four different estimates of the covariance matrix: the L-moments estimate and sample moments estimate, each with and without the incorporation of RMT.
We then analyse the performance of each of these strategies in terms of their risk-return characteristics and their diversification.
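The Marchenko-Pastur "noise band" filtering step can be sketched as follows; the single-factor return structure and all parameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Dimensions: n_obs return observations on n_stocks stocks
n_obs, n_stocks = 500, 50
q = n_stocks / n_obs
lam_max = (1 + np.sqrt(q))**2          # upper edge of the Marchenko-Pastur band

# Simulated returns: one common "market" factor plus idiosyncratic noise
market = rng.standard_normal((n_obs, 1))
R = 0.4 * market + rng.standard_normal((n_obs, n_stocks))
R = (R - R.mean(0)) / R.std(0)
C = np.corrcoef(R, rowvar=False)

vals, vecs = np.linalg.eigh(C)
noise = vals < lam_max                 # eigenvalues inside the noise band
vals_clean = vals.copy()
vals_clean[noise] = vals[noise].mean() # flatten the band, preserving the trace
C_clean = vecs @ np.diag(vals_clean) @ vecs.T
np.fill_diagonal(C_clean, 1.0)         # restore unit diagonal of a correlation matrix

print(noise.sum(), vals.max())         # most eigenvalues fall in the noise band
```

The cleaned matrix C_clean (rescaled by stock volatilities into a covariance matrix) then feeds the minimum-variance optimisation; replacing the sample correlations with L-moment-based estimates before this step gives the combined strategy the dissertation studies.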
