1

Topics in risk-sensitive stochastic control

Deshpande, Amogh January 2014 (has links)
This thesis consists of three topics whose over-arching theme is risk-sensitive stochastic control. In the first topic (chapter 2), we study a problem of benchmark out-performance. We model this as a zero-sum risk-sensitive stochastic game between an investor, who as one player wants to maximize the risk-sensitive criterion, and the other player (a stochastic benchmark), who tries to minimize this maximum risk-sensitive criterion. We obtain explicit expressions for the strategies of both players. In the second topic (chapter 3), we consider a finite-horizon risk-sensitive asset management problem. We study it in the context of a zero-sum stochastic game between an investor and a second player, called the "market world", which provides a probability measure. Via this game, we connect two (somewhat) disparate areas in stochastics, namely stochastic stability and risk-sensitive stochastic control in mathematical finance. The connection is through the Föllmer-Schweizer minimal martingale measure. We discuss the impact of this measure on the investor's optimal strategy. In the third topic (chapter 4), we study the sufficient stochastic maximum principle for a semi-Markov modulated jump diffusion. We study its application in the context of a quadratic loss minimization problem. We also study finite-horizon risk-sensitive optimization in relation to the underlying sufficient stochastic maximum principle of a semi-Markov modulated diffusion.
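
For readers unfamiliar with the criterion, the risk-sensitive objective commonly used in this literature (stated here as standard background rather than the thesis's exact formulation) is

\[ J_\theta(h) \;=\; -\frac{2}{\theta}\,\log \mathbb{E}\!\left[\exp\!\Bigl(-\tfrac{\theta}{2}\,\ln V_T^{h}\Bigr)\right], \qquad \theta > 0, \]

where $V_T^{h}$ is terminal wealth under strategy $h$. A formal expansion in $\theta$ gives $\mathbb{E}[\ln V_T^{h}] - \tfrac{\theta}{4}\,\mathrm{Var}(\ln V_T^{h}) + O(\theta^{2})$, so the criterion trades expected log-wealth against its variance, which is what makes the zero-sum game formulation natural.
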
2

The valuation of exotic barrier options and American options using Monte Carlo simulation

Chirayukool, Pokpong January 2011 (has links)
Monte Carlo simulation is a widely used numerical method for valuing financial derivatives. It can be used to value high-dimensional options or complex path-dependent options. Part one of the thesis is concerned with the valuation of barrier options with complex time-varying barriers. In Part one, a novel simulation method, the contour bridge method, is proposed to value exotic time-varying barrier options. The new method is applied to value several exotic barrier options, including those with quadratic and trigonometric barriers. Part two of the thesis is concerned with the valuation of American options by Monte Carlo simulation. Since Monte Carlo simulation can be computationally expensive, variance reduction methods must be used to implement it efficiently. Chapter 5 proposes a new control variate method, based on the use of Bermudan put options, to value standard American options. It is shown that this new control variate method achieves significant gains over previous methods. Chapter 6 focuses on the extension and generalisation of the standard regression method for valuing American options. The proposed method, the sequential contour Monte Carlo (SCMC) method, is based on hitting time simulation to a fixed set of contours. The SCMC method values American put options without bias and achieves marginal gains over the standard method. Lastly, in Part three, the SCMC method is combined with the contour bridge method to value American knock-in options with a linear barrier. The method values American barrier options very well and efficiency gains are observed.
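
The control variate idea in Chapter 5 can be illustrated compactly. The sketch below is a minimal, generic example in a plain Black-Scholes setting (it uses the discounted terminal asset price, whose risk-neutral expectation is known exactly, as the control; it is not the Bermudan-put control variate developed in the thesis):

    import numpy as np

    def mc_call_with_control_variate(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                                     T=1.0, n_paths=100_000, seed=0):
        """European call by Monte Carlo, with the discounted terminal asset
        price (risk-neutral expectation = S0) used as a control variate."""
        rng = np.random.default_rng(seed)
        Z = rng.standard_normal(n_paths)
        ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
        payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
        control = np.exp(-r * T) * ST                  # known mean: S0
        cov = np.cov(payoff, control)
        beta = cov[0, 1] / cov[1, 1]                   # variance-minimising coefficient
        adjusted = payoff - beta * (control - S0)
        return adjusted.mean(), adjusted.std(ddof=1) / np.sqrt(n_paths)

    price, stderr = mc_call_with_control_variate()
    print(f"price ~ {price:.4f} +/- {stderr:.4f}")

The same mechanism applies when the control is itself an option with a cheaply computable value, which is the direction the thesis takes.
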
3

On inverse problems in mathematical finance

Klimmek, Martin January 2012 (has links)
We consider two inverse problems motivated by questions in mathematical finance. In the first two chapters (Part 1) we recover processes consistent with given perpetual American option prices. In the third and fourth chapters (Part 2) we construct model-independent bounds for prices of contracts based on the realized variance of an asset price process. The two parts are linked by the question of how to recover information about asset price dynamics from option prices: in Part 1 we assume knowledge of perpetual American option prices, while in Part 2 we assume knowledge of European call and put option prices. Mathematically, the first part of the thesis presents a framework for constructing generalised diffusions consistent with optimal stopping values. The second part aims at constructing bounds for path-dependent functionals of martingales given their terminal distribution.
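
As standard background for Part 1 (a textbook benchmark, not a result of the thesis): under geometric Brownian motion with interest rate $r$ and volatility $\sigma$, the perpetual American put with strike $K$ has the closed form

\[ P(S) = (K - S^{*})\left(\frac{S}{S^{*}}\right)^{-\gamma}, \qquad S \ge S^{*}, \qquad \gamma = \frac{2r}{\sigma^{2}}, \qquad S^{*} = \frac{\gamma K}{\gamma + 1}, \]

so a family of such prices across strikes encodes information about the underlying dynamics; the inverse problem studied here runs this relationship backwards, recovering a (generalised) diffusion consistent with the observed prices.
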
4

Alternative portfolio methods

Cao, Ruanmin January 2015 (has links)
Portfolio optimization in an uncertain environment has great practical value in the investment decision process, but the area is highly fragmented owing to the fast evolution of market structure and changing investor behavior. In this dissertation, four methods are investigated or designed to explore their efficiency under different circumstances. The parametric portfolio decomposes weights by a set of factors whose coefficients are uniquely determined by maximizing a utility function; a robust bootstrap method is proposed to assist factor selection. If investors exhibit asymmetric aversion to tail risk, pessimistic models based on Choquet utility maximization and coherent risk measures acquire superiority, and a new hybrid method that inherits the advantages of parameterization and tail-risk minimization is designed. Mean-variance optimization, which is optimal under elliptical return distributions, should be employed in the case of capital allocation to trading strategies; nonparametric classifiers may enhance the homogeneity of inputs before feeding the optimizer. The traditional factor portfolio can be extended to a functional setting by applying FPCA to return curves sorted by factors, and diversification is always achieved by mixing in detected nonlinear components. This research contributes to the existing literature on portfolio choice in three ways: the strengths and weaknesses of each method are clarified; new models that outperform traditional approaches are developed; and empirical studies are used to facilitate comparison.
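
To make the parametric portfolio idea concrete, here is a minimal sketch; the synthetic factor data, CRRA utility, and optimiser settings are illustrative assumptions rather than the dissertation's specification. Weights are a benchmark weight plus a linear tilt in firm characteristics, and the tilt coefficients are chosen to maximise average realised utility:

    import numpy as np
    from scipy.optimize import minimize

    def parametric_weights(theta, benchmark_w, characteristics):
        # w_i = benchmark weight + theta' x_i / N  (parametric portfolio policy)
        n_assets = characteristics.shape[0]
        return benchmark_w + characteristics @ theta / n_assets

    def negative_avg_utility(theta, benchmark_w, X, R, gamma=5.0):
        # Average realised CRRA utility over T periods.
        # X: (T, N, K) characteristics, R: (T, N) gross asset returns.
        utils = []
        for x_t, r_t, wb_t in zip(X, R, benchmark_w):
            w = parametric_weights(theta, wb_t, x_t)
            rp = np.maximum(w @ r_t, 1e-6)     # clip so the power utility stays defined
            utils.append(rp ** (1.0 - gamma) / (1.0 - gamma))
        return -np.mean(utils)                  # negate: the minimiser maximises utility

    # Toy data, purely illustrative
    T, N, K = 120, 50, 3
    rng = np.random.default_rng(1)
    X = rng.standard_normal((T, N, K))            # standardised characteristics
    R = 1.0 + 0.01 * rng.standard_normal((T, N))  # gross returns around 1
    wb = np.full((T, N), 1.0 / N)                 # equal-weight benchmark
    res = minimize(negative_avg_utility, x0=np.zeros(K), args=(wb, X, R),
                   method="Nelder-Mead")
    print("estimated tilt coefficients:", res.x)

A bootstrap over resampled periods, as proposed above, would then be used to judge which characteristics earn a stable, nonzero coefficient.
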
5

Multi-asset option pricing problems : a variational approach

Chuang, Chienmin January 2012 (has links)
Options are important and frequently traded products in the modern financial market. How to price them fairly and reasonably is always an interesting issue for academia and industry. This research is performed under the classical multi-asset Black-Scholes-Merton (BSM) model and can be extended to other exotic models. We show how to reformulate the multi-asset Black-Scholes-Merton partial differential equation/inequality (BSM PDE/PDI) and provide theorems to justify the uniqueness of the solution of the reformulations. In terms of discretization, we adopt the finite element method (FEM) in space and the finite difference method (FDM) in time. Moreover, we develop closed-form formulas for the elemental matrices used in the finite element assembly process in a general high-dimensional framework. The discrete systems of option pricing problems are presented in the form of linear systems of equations (LSE) and linear complementarity problems (LCP) for European and American/perpetual options respectively. Up to six different algorithms for the LCP are introduced and compared on the basis of computational efficiency and errors. The option values of European, American and perpetual types are calculated for various payoffs and up to three assets. In particular, their numerical free boundaries are identified and presented in the form of a (d - 1)-dimensional manifold in a d-asset framework. In the last chapter, we conclude our research with our contributions and potential extensions.
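
For reference, the PDE being reformulated is, in its standard strong form (ignoring dividends),

\[ \frac{\partial V}{\partial t} + \frac{1}{2}\sum_{i,j=1}^{d}\rho_{ij}\sigma_{i}\sigma_{j}S_{i}S_{j}\frac{\partial^{2} V}{\partial S_{i}\partial S_{j}} + \sum_{i=1}^{d} r S_{i}\frac{\partial V}{\partial S_{i}} - rV = 0, \]

for a European contract on $d$ assets; the American/perpetual case replaces the equality by the complementarity conditions $V \ge$ payoff, left-hand side $\le 0$, and their product equal to zero, which after FEM/FDM discretisation produce the LSE and LCP mentioned above.
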
6

Modeling from a trader's perspective

Maeda, Jun January 2018 (has links)
I was trading professionally in the years 2006-2014 in the equity derivatives market. This thesis deals with two of the ideas inspired by my experience as a professional trader. The first topic deals with the pricing of a derivatives product in a market with a specific risk concentration. We call the product that causes the concentration a market driver. When a market driver exists, neither the market driver itself nor any other derivatives product will be priced fairly. We introduce a new model, based on the Heston model, that accounts for the concentration. The model leads to a pair of partial differential equations (PDEs): one semilinear parabolic PDE to price the market driver and one linear parabolic PDE to price all the other products. In solving the semilinear PDE, we use the policy improvement algorithm (PIA) to approximate the solution with those of linear PDEs. We show that the approximated solutions satisfy quadratic local convergence (QLC), which explains the efficiency of the algorithm. This efficiency of the algorithm is proved in a more general setup. The other idea sparked by my experience, explored in the last chapter of the thesis, concerns modeling technical analysis. Technical analysis is a family of methods that traders use to make decisions to purchase or sell assets. As far as I am aware, there is no mathematical proof that these methods are correct. We focus on one of them, the method of support and resistance levels, and use an optimal stopping argument to show the validity of the method. As far as I know, this is one of the first results to mathematically prove the effectiveness of a method in technical analysis.
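
For orientation, the baseline Heston dynamics that the new model builds on are, in standard form (the thesis's modification for the market driver is not reproduced here),

\[ dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW^{1}_t, \qquad dv_t = \kappa(\bar{v} - v_t)\,dt + \xi\sqrt{v_t}\,dW^{2}_t, \qquad d\langle W^{1},W^{2}\rangle_t = \rho\,dt. \]

The policy improvement algorithm then treats the semilinear pricing PDE iteratively: each step freezes the nonlinearity at the current candidate solution, solves the resulting linear parabolic PDE, and updates the candidate, with the quadratic local convergence mentioned above explaining why a handful of iterations typically suffices.
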
7

Comparing statistical methods and artificial neural networks in bankruptcy prediction

Chu, Jung January 1997 (has links)
The use of multivariate discriminant analysis (MDA) and the logistic regression procedure (Logit) in predicting business failure has been explored in numerous studies since the 1960s. Recently, a newly developed technique, artificial neural networks (ANNs), has attracted much attention and has been applied to the area of bankruptcy prediction. At the same time, many papers have attempted to compare the predictive ability of these two distinct classes of discriminators in order to find the best failure prediction method. However, most of their results, despite showing the superiority of ANNs, have been sharply criticised either for unfair comparison or for their specific data selection. There is a need to undertake theory-based research to identify problem characteristics that predict when ANNs will forecast better than statistical models; to identify which input variable characteristics predict when ANNs will improve model estimation; and to identify when this advantage would give substantially improved forecasting performance. Motivated by the limited amount of research investigating the relative effectiveness of traditional methods compared with ANNs under a wide variety of modelling assumptions, one of the objectives of this study is to compare their classification capacities on a theoretical basis, and to evaluate their robustness in certain situations through a simulation study. The investigation is conducted on two popular statistical techniques—the MDA and the Logit—as well as two different learning algorithms for ANNs—the standard generalised delta rule (GDR) and the Projection approach (Proj). This can be regarded as the horizontal assessment of bankruptcy prediction. The other aim of this thesis is to evaluate the impact of variations in failure prediction models through an empirical study. These variations involve issues we often encounter in the real world, such as different sample sizes, a choice-based sampling bias, the sensitivity of optimal cutoff points to the misclassification costs of Type I and Type II errors, and the imbalance in the composition of failed to non-failed firms between training and testing data sets. This can be viewed as the vertical assessment of bankruptcy prediction. The simulation results indicate that neural networks are indeed competitive approaches to bankruptcy prediction. In particular, the Projection network, which was developed to overcome the drawbacks that the commonly used GDR backpropagation algorithm often experiences, proves its remarkable superiority not only quantitatively (i.e., a lower overall error rate) but also qualitatively (lower Type I and Type II errors). The Projection network holds promise for future elaboration. Moreover, the outcomes of the empirical experiments enhance our knowledge of some factors in constructing a failure forecasting model. This knowledge relates to both traditional statistical tools and modern neural networks and is essential for decision making.
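
A present-day reader can reproduce the flavour of the horizontal comparison with off-the-shelf tools. The sketch below uses synthetic data and a generic multilayer perceptron rather than the GDR and Projection networks studied in the thesis, and contrasts MDA, Logit and an ANN on overall accuracy and on Type I / Type II error rates:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    # Synthetic "financial ratio" data: class 1 = failed firm, class 0 = non-failed.
    X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                               weights=[0.8, 0.2], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    models = {
        "MDA (LDA)": LinearDiscriminantAnalysis(),
        "Logit": LogisticRegression(max_iter=1000),
        "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
        type1 = fn / (fn + tp)   # failed firm classified as healthy
        type2 = fp / (fp + tn)   # healthy firm classified as failed
        print(f"{name}: accuracy={model.score(X_te, y_te):.3f}, "
              f"TypeI={type1:.3f}, TypeII={type2:.3f}")
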
8

Models for investment capacity expansion

Al-Motairi, Hessah January 2011 (has links)
The objective of this thesis is to develop and analyse two stochastic control problems arising in the context of investment capacity expansion. In both problems the underlying market fluctuations are modelled by a geometric Brownian motion. The decision maker’s aim is to determine admissible capacity expansion strategies that maximise appropriate expected present-value performance criteria. In the first model, capacity expansion has price/demand impact and involves proportional costs. The resulting optimisation problem takes the form of a singular stochastic control problem. In the second model, capacity expansion has no impact on price/demand but is associated with fixed as well as proportional costs, thus resulting in an impulse control problem. Both problems are completely solved and the optimal strategies are fully characterised. In particular, the value functions are constructed explicitly as suitable classical solutions to the associated Hamilton-Jacobi-Bellman equations.
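
For context, in models of this kind the market state follows a geometric Brownian motion, $dX_t = \mu X_t\,dt + \sigma X_t\,dW_t$, and the singular-control value function is characterised by a variational inequality whose generic form (the thesis's exact payoff and cost structure differ) is

\[ \max\Bigl\{ \tfrac{1}{2}\sigma^{2}x^{2}v_{xx} + \mu x v_{x} - \rho v + \pi(x,y),\; v_{y} - k \Bigr\} = 0, \]

where $y$ is installed capacity, $\pi$ the running payoff, $\rho$ the discount rate and $k$ the proportional cost of expansion; adding a fixed cost per expansion turns this into the quasi-variational inequality associated with the impulse control problem of the second model.
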
9

A chaos theory and nonlinear dynamics approach to the analysis of financial series : a comparative study of Athens and London stock markets

Karytinos, Aristotle D. January 1999 (has links)
This dissertation presents an effort to implement nonlinear dynamic tools adapted from chaos theory in financial applications. Chaos theory might be useful in explaining the dynamics of financial markets, since chaotic models are capable of exhibiting behaviour similar to that observed in empirical financial data. In this context, the scope of this research is to provide an insight into the role that nonlinearities and, in particular, chaos theory may play in explaining the dynamics of financial markets. From a theoretical point of view, the basic features of chaos theory, as well as the rationales for bringing chaos theory to the attention of financial researchers, are discussed. Empirically, the fundamental issue of determining whether chaos can be observed in financial time series is addressed. Regarding the latter, the empirical literature has been controversial. A quite exhaustive analysis of the existing literature is provided, revealing the inadequacies, in terms of methodology and testing framework, of the work adopted so far. A new "multiple testing" methodology is developed, combining methods and techniques from the fields of both the natural sciences and economics, most of which have not been applied to financial data before. A serious effort has been made to fill, as much as possible, the gap which results from the lack of a proper statistical framework for the chaotic methods. To achieve this, the bootstrap methodology is adopted. The empirical part of this work focuses on the comparison of two markets with different levels of maturity: the Athens Stock Exchange (ASE), an emerging market, and the London Stock Exchange (LSE). Our aim is to determine whether structural differences exist between these markets in terms of chaotic dynamics. At the empirical level we find nonlinearities in both markets through the use of the BDS test. R/S analysis reveals fractality and long-term memory for the ASE series only. Chaotic methods, such as the correlation dimension (and related methods and techniques) and the estimation of the largest Lyapunov exponent, cannot rule out a chaotic explanation for the ASE market, but no such indication could be found for the LSE market. Noise filtering by the SVD method does not alter these findings. Alternative techniques based on nonlinear nearest-neighbour forecasting methods, such as the "piecewise polynomial approximation" and the "simplex" methods, support our aforementioned conclusion concerning the ASE series. In all, our results suggest that, although nonlinearities are present, chaos is not a widespread phenomenon in financial markets and it is more likely to exist in less developed markets such as the ASE. Even then, chaos is strongly mixed with noise and the existence of low-dimensional chaos is highly unlikely. Finally, short-term forecasts trying to exploit the dependencies found in both markets seem to be of no economic importance after accounting for transaction costs, a result which further supports our conclusions about the limited scope and practical implications of chaos in Finance.
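
The R/S (rescaled range) analysis mentioned above can be sketched compactly. The code below is a simplified illustration (classical R/S without the small-sample corrections or bootstrap confidence intervals a full study would require) that estimates the Hurst exponent as the slope of log(R/S) against log(window size):

    import numpy as np

    def hurst_rs(series, min_chunk=8):
        """Estimate the Hurst exponent by rescaled-range (R/S) analysis.
        H ~ 0.5: no memory; H > 0.5: persistence / long-term memory."""
        series = np.asarray(series, dtype=float)
        n = len(series)
        # dyadic window sizes from min_chunk up to the full series length
        sizes = np.unique((n / 2 ** np.arange(0, int(np.log2(n / min_chunk)) + 1)).astype(int))
        log_size, log_rs = [], []
        for size in sizes:
            rs_vals = []
            for start in range(0, n - size + 1, size):
                chunk = series[start:start + size]
                dev = np.cumsum(chunk - chunk.mean())
                r = dev.max() - dev.min()            # range of cumulative deviations
                s = chunk.std(ddof=1)                # standard deviation of the chunk
                if s > 0:
                    rs_vals.append(r / s)
            if rs_vals:
                log_size.append(np.log(size))
                log_rs.append(np.log(np.mean(rs_vals)))
        slope, _ = np.polyfit(log_size, log_rs, 1)   # slope of log(R/S) vs log(n) is H
        return slope

    returns = np.random.default_rng(0).standard_normal(4096)
    print("Hurst exponent of white noise:", round(hurst_rs(returns), 3))

For an i.i.d. series the estimate should sit near 0.5, which is the benchmark against which the long-term memory reported for the ASE series is judged.
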
10

A non-parametric procedure to estimate a linear discriminant function with an application to credit scoring

Voorduin, Raquel January 2004 (has links)
The present work studies the application of two-group discriminant analysis in the field of credit scoring. The view given here provides a completely different approach from the way this problem is usually tackled. Credit scoring is widely used among financial institutions and is performed in a number of ways, depending on a wide range of factors, which include available information, supporting databases, and informatics resources. Since each financial institution has its own methods of measuring risk, the ways in which an applicant is evaluated for the concession of credit for a particular product are at least as numerous as the credit concessioners. However, there exist certain standard procedures for different products. For example, in the credit card business, when databases containing applicant information are available, credit score cards are usually constructed. These score cards provide an aid to qualify the applicant and decide whether he or she represents a high risk for the institution or, on the contrary, a good investment. Score cards are generally used in conjunction with other criteria, such as the institution's own policies. In building score cards, parametric regression-based procedures are generally used, where the assumption of an underlying model generating the data has to be made. Another aspect is that, in general, score cards are built taking into consideration only the probability that a particular applicant will not default. In this thesis, the objective is to present a method of calculating a risk score that does not depend on the actual process generating the data and that takes into account the costs and profits related to accepting a particular applicant. The ultimate objective of the financial institution should be to maximise profit, and this view is a fundamental part of the procedure presented here.
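
To illustrate the two ingredients the abstract combines (a two-group linear discriminant and a cutoff driven by costs and profits rather than error counts alone), here is a minimal sketch; the data, cost figures and the classical Fisher-style estimator are illustrative assumptions, not the non-parametric procedure developed in the thesis:

    import numpy as np

    def fisher_discriminant(X_good, X_bad):
        """Fisher's two-group linear discriminant: direction w maximising
        between-group separation relative to pooled within-group scatter."""
        mu_g, mu_b = X_good.mean(axis=0), X_bad.mean(axis=0)
        Sw = (np.cov(X_good, rowvar=False) * (len(X_good) - 1)
              + np.cov(X_bad, rowvar=False) * (len(X_bad) - 1))
        return np.linalg.solve(Sw, mu_g - mu_b)

    def cost_adjusted_cutoff(scores_good, scores_bad,
                             cost_bad_accepted=5.0, profit_good_accepted=1.0):
        """Choose the score cutoff that maximises expected profit rather than
        raw accuracy: accepting a defaulter costs more than a good account earns."""
        candidates = np.sort(np.concatenate([scores_good, scores_bad]))
        def profit(c):
            return (profit_good_accepted * (scores_good >= c).sum()
                    - cost_bad_accepted * (scores_bad >= c).sum())
        return max(candidates, key=profit)

    rng = np.random.default_rng(0)
    X_good = rng.normal([2.0, 1.0], 1.0, size=(500, 2))   # repaying applicants
    X_bad = rng.normal([0.0, 0.0], 1.0, size=(150, 2))    # defaulting applicants
    w = fisher_discriminant(X_good, X_bad)
    cut = cost_adjusted_cutoff(X_good @ w, X_bad @ w)
    print("discriminant direction:", w, " profit-maximising cutoff:", round(cut, 3))
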
