  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An empirical analysis of factor seasonalities

Li, Ya 22 August 2017 (has links)
I establish the existence of seasonality in 42 popular risk factors from the asset pricing literature. I document extensive empirical evidence for the Keloharju et al. (2016) hypothesis that seasonalities in individual asset returns stem from their exposures to risk factors: it is the seasonal patterns in risk factors that lead to the seasonalities in individual asset portfolios. The empirical findings show that seasonalities are widely present among individual asset portfolios. However, both the all-factor model and the Fama-French (2014) five-factor model demonstrate that these patterns largely disappear after I eliminate the portfolios' exposures to the corresponding risk factors. Overall, 76.17% of the returns on the 235 equal-weighted test portfolios I examine contain seasonality. My key finding is that 48.68% of the equal-weighted portfolio returns with seasonalities no longer contain seasonality after I control for their exposures to all risk factors. Only 52.08% of the equal-weighted portfolios' Fama-French five-factor model residuals still exhibit substantial seasonal patterns under the Wald test. Regarding seasonalities in risk factors, specific seasonal patterns include the January effect, higher returns during February, March, and July, and autocorrelations at irregular lags. The Wald test, a stable seasonality test, the Kruskal-Wallis chi-square test, a combined seasonality test, Fisher's Kappa test, and Bartlett's Kolmogorov-Smirnov test are used to identify the seasonal patterns in individual risk factors. Fama-French SMB (the size factor) and HML (the value factor) in the three-factor model, Fama-French RMW (the operating profitability factor) in the five-factor model, earnings/price, cash flow/price, momentum, short-term reversal, long-term reversal, daily variance, daily residual variance, growth rate of industrial production (value-weighted), term premium (equal-weighted and value-weighted), and profitability display robust seasonalities.
Therefore, the first part of the research confirms that risk factors possess substantial seasonal patterns.
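The Kruskal-Wallis test named in the abstract can be illustrated with a short sketch: monthly returns are grouped by calendar month, and the test asks whether any month's return distribution differs from the others. The data below are simulated for illustration (with an injected January premium), not drawn from the thesis sample.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
n_years = 50
months = np.tile(np.arange(12), n_years)          # month label (0 = January)
returns = rng.normal(0.005, 0.04, 12 * n_years)   # baseline monthly returns
returns[months == 0] += 0.04                      # inject a January effect

# Group returns by calendar month and test whether the distributions differ.
groups = [returns[months == m] for m in range(12)]
stat, p_value = kruskal(*groups)
print(f"H = {stat:.2f}, p = {p_value:.4f}")       # small p suggests seasonality
```

A small p-value rejects the hypothesis that all twelve monthly distributions are the same; in the thesis this kind of test is applied to each risk factor's return series.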
2

Calibration and Model Uncertainty of a Two-Factor Mean-Reverting Diffusion Model for Commodity Prices

Chuah, Jue Jun January 2013 (has links)
With the development of various derivative instruments and index products, commodities have become a distinct asset class that can offer enhanced diversification benefits to the traditional asset allocation of stocks and bonds. In this thesis, we begin by discussing some of the key properties of commodity markets which distinguish them from bond and stock markets. Then, we consider the informational role of commodity futures markets. Since commodity prices exhibit mean-reverting behaviour, we also review several mean-reversion models commonly used to capture and describe the dynamics of commodity prices. In Chapter 4, we focus on a two-factor mean-reverting model proposed by Hikspoors and Jaimungal, which provides an additional degree of randomness to the long-run mean level. They have also suggested a method to extract the implied market prices of risk, after estimating both the risk-neutral and real-world parameters from the calibration procedure. Given the usefulness of this model, we are motivated to investigate the robustness of this calibration process by applying the methodology to simulated data. The capability to produce stable and accurate parameter estimates is assessed by selecting various initial guesses for the optimization process. Our results show that the calibration method has considerable difficulty estimating the volatility and correlation parameters of the model. Moreover, we demonstrate that the multiple solutions obtained from the calibration process lead to model uncertainty in extracting the implied market prices of risk. Finally, using historical crude oil data from the same time period, we compare our calibration results with those obtained by Hikspoors and Jaimungal.
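An illustrative sketch (not the thesis code) of the two-factor structure described above: the log-price X reverts to a stochastic long-run level Y, which itself reverts to a constant mean. All parameter values below are assumptions chosen for illustration, not estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
T, n = 1.0, 252
dt = T / n
kappa_x, kappa_y = 3.0, 0.5      # mean-reversion speeds of the two factors
sigma_x, sigma_y = 0.3, 0.1      # volatilities
rho = 0.2                        # correlation between the Brownian drivers
theta = 4.0                      # long-run mean of the Y factor

x = np.empty(n + 1); y = np.empty(n + 1)
x[0], y[0] = 4.0, 4.0            # start at log-price 4, i.e. spot ~ 54.6
for t in range(n):
    dw_y = rng.normal(0.0, np.sqrt(dt))
    dw_x = rho * dw_y + np.sqrt(1 - rho**2) * rng.normal(0.0, np.sqrt(dt))
    y[t + 1] = y[t] + kappa_y * (theta - y[t]) * dt + sigma_y * dw_y
    x[t + 1] = x[t] + kappa_x * (y[t] - x[t]) * dt + sigma_x * dw_x

spot = np.exp(x)                 # simulated commodity spot price path
```

Calibration in the thesis then works in the opposite direction: given observed (or, here, simulated) paths and futures prices, recover the parameters above, which is where the instability in the volatility and correlation estimates appears.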
4

Connections between no-arbitrage and the continuous time mean-variance framework

Cheng, Enoch, January 2009 (has links)
Thesis (Ph. D.)--UCLA, 2009. / Vita. Description based on print version record. Includes bibliographical references (leaves 118-119).
5

Risk and return in financial markets: a study of the Hong Kong stock market

Tsang, Yat-ming. January 1900 (has links)
Thesis (M. Soc. Sc.)--University of Hong Kong, 1991.
7

Leveraging Security Data for a Quantitative Evaluation of Security Mitigation Strategies

Di Tizio, Giorgio 26 April 2023 (has links)
Keeping users’ and organizations’ data secure is a challenging task. The situation is made more complicated by the ever-increasing complexity of dependencies among IT systems. In this scenario, current approaches for risk assessment and mitigation rely on industry best practices based on qualitative assessments that do not provide any measure of their effectiveness. In this thesis, we argue that the rich availability of data about IT infrastructures and adversaries must be employed to quantitatively measure the risk and the effectiveness of security mitigation strategies. Our goal is to show that quantitative measures of effectiveness and cost using security data are not only possible but also beneficial for both individual users and organizations in identifying the most appropriate security plan. To this aim, we employ a heterogeneous set of security data spanning from blacklist feeds and software vulnerability repositories to web third-party dynamics, criminal forums, and threat intelligence reports. We use this data to model attackers and security mitigation strategies and to evaluate their effectiveness in mitigating attacks. We start with an evaluation of the filter lists of privacy extensions that protect individuals’ privacy when browsing the Web. We then consider the security of billions of users accessing the Top 5K Alexa domains and evaluate the effectiveness and cost of security mitigations at different levels of the Internet infrastructure. We then evaluate the accuracy of SOC analysts in investigating alerts related to cyber attacks targeting a network. Finally, we develop methodologies for analysing the effectiveness of ML models in detecting criminal discussions in forums, and of software updates in protecting against targeted attacks performed by nation-state groups.
8

Event-based risk management of large scale information technology projects

Alem, Mohammad January 2013 (has links)
Globalisation has come as a double-edged blade for information technology (IT) companies, providing growth opportunities and yet posing many challenges. Software development is moving from a monolithic model to a distributed approach, where many entities and organisations are involved in the development process. Risk management is an important area for dealing with the technical and social issues that arise within companies' planning and programming schedules, and this new way of working requires more attention to be paid to the temporal, socio-cultural and control aspects than before. Multinational companies like IBM have begun to consider how to address the distributed nature of their projects across the globe. With outlets across the globe, such a company finds people of different cultures, languages and ethics working on single, large IT projects from different locations. Other IT companies face the same problems, despite the many approaches available to handle risk management in large-scale IT companies. IBM commissioned the Distributed Risk Management Process (DRiMaP) model as a suitable solution. This model focuses on collaborative and ongoing control aspects, and pays attention to the need for risk managers, project managers and management to include risk management in all phases of projects and the business cycle. The authors of the DRiMaP model did not subject it to extensive testing. This research sets out to evaluate, improve and extend the model process and thereby develop a new and dynamic approach to distributed information systems development. To do this, this research compares and contrasts the model with other risk management approaches. An Evolutionary Model is developed, and this is subjected to empirical testing through a hybrid constructive research approach.
A survey was used to draw out the observations of project participants, a structured interview gathered the opinions of project experts, a software tool was developed to implement the model, and SysML and Monte Carlo methods were applied to simulate the functioning of the model. The Evolutionary Model was found to partially address the shortcomings of the DRiMaP model, and to provide a valuable platform for the development of an enterprise risk management solution.
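The Monte Carlo step mentioned above can be sketched in miniature: sample project risk events with assumed occurrence probabilities and cost impacts, and estimate the resulting cost-overrun distribution. The events and all numbers here are invented for illustration and are not taken from the DRiMaP or Evolutionary models.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical risk register: (probability of occurrence, mean cost impact
# in person-days if the risk fires).
risks = [(0.30, 40.0), (0.15, 120.0), (0.50, 10.0), (0.05, 300.0)]

n_trials = 100_000
total = np.zeros(n_trials)
for p, impact in risks:
    occurs = rng.random(n_trials) < p                    # does the risk fire?
    total += occurs * rng.exponential(impact, n_trials)  # random severity

print(f"mean overrun: {total.mean():.1f} person-days")
print(f"95th percentile: {np.percentile(total, 95):.1f} person-days")
```

The expected overrun here is the sum of probability-weighted impacts (50 person-days); the simulated tail percentile is the kind of quantity a risk manager would compare across mitigation strategies.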
9

A Bayesian approach to financial model calibration, uncertainty measures and optimal hedging

Gupta, Alok January 2010 (has links)
In this thesis we address problems associated with financial modelling from a Bayesian point of view. Specifically, we look at the problems of calibrating financial models, measuring the model uncertainty of a claim, and choosing an optimal hedging strategy. Throughout the study, the local volatility model is used as a working example to clarify the proposed methods. This thesis assumes a prior probability density for the unknown parameter in the model we try to calibrate. The prior probability density regularises the ill-posedness of the calibration problem. Further observations of market prices are used to update this prior, using Bayes' law, to give a posterior probability density for the unknown model parameter. The resulting Bayes estimators are shown to be consistent for finite-dimensional model parameters. The posterior density is then used to compute the Bayesian model average price. In tests on local volatility models it is shown that this price is closer than the prices of comparable calibration methods to the price given by the true model. The second part of the thesis focuses on quantifying model uncertainty. Using the framework for market risk measures, we propose axioms for new classes of model uncertainty measures. Similar to the market risk case, we prove representation theorems for coherent and convex model uncertainty measures. Example measures from the latter class are provided using the Bayesian posterior. These are used to value the model uncertainty for a range of financial contracts priced in the local volatility model. In the final part of the thesis we propose a method for selecting, from a set of candidate models, the model that optimises the hedging of a specified financial contract. In particular, we choose the model whose corresponding price and hedge optimise some hedging performance indicator.
The selection problem is solved using Bayesian loss functions to encapsulate the loss from using one model to price and hedge when the true model is a different model. Linkages are made with convex model uncertainty measures and traditional utility functions. Numerical experiments on a stochastic volatility model and the local volatility model show that the Bayesian strategy can outperform traditional strategies, especially for exotic options.
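A minimal sketch of the Bayesian calibration idea in this abstract: a prior on an unknown model parameter (here a single volatility) is updated with a noisy observed market price, and the posterior yields a model-average price. The linear `model_price` map, the grid posterior, and all numbers are toy stand-ins for the thesis's local volatility setting.

```python
import numpy as np

def model_price(sigma):
    # Hypothetical pricing map: stands in for a full option-pricing model.
    return 10.0 * sigma

sigma_grid = np.linspace(0.05, 0.60, 200)                # candidate parameters
prior = np.exp(-0.5 * ((sigma_grid - 0.3) / 0.1) ** 2)   # Gaussian prior belief
prior /= prior.sum()

observed, noise_sd = 2.4, 0.1            # market quote and assumed noise level
likelihood = np.exp(-0.5 * ((observed - model_price(sigma_grid)) / noise_sd) ** 2)

posterior = prior * likelihood           # Bayes' law, then normalise
posterior /= posterior.sum()

# Bayesian model average price: posterior-weighted average of model prices.
bma_price = np.sum(posterior * model_price(sigma_grid))
print(f"posterior mean sigma: {np.sum(posterior * sigma_grid):.3f}")
print(f"Bayesian model-average price: {bma_price:.3f}")
```

The same posterior weights are what the abstract's convex model uncertainty measures draw on: a claim whose price varies a lot across high-posterior models carries more model uncertainty.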
10

Technical reserves of non-life insurance in internal solvency models

Thomayer, Jiří January 2011 (has links)
Title: Technical reserves of non-life insurance in the internal solvency model Author: Bc. Jiří Thomayer Department: Department of Probability and Mathematical Statistics Supervisor: Mgr. Ing. Jakub Mertl Abstract: In this work we study and describe the calculation of solvency capital using the standard formula contained in the Directive of the European Union (Solvency II), which should be put into practice in Europe on 1 January 2013. This calculation is described in quantitative impact study 5. We describe a general approach to risk measurement and show some particular practical measures used for risk measurement. We explain under what conditions the standard formula or its parts can be replaced by an internal model. Next, we show disadvantages of using the standard formula and propose a possible internal model to calculate risk premiums and risk reserves in non-life insurance. Finally, we apply the proposed model to the calculation of risk reserves in non-life insurance in practice. Keywords: Standard formula, Risk measurement, Solvency II, Internal model
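The aggregation step of the standard formula mentioned in the abstract combines the stand-alone capital charges of the risk modules through a correlation matrix, SCR = sqrt(sum_ij Corr_ij * SCR_i * SCR_j). The sketch below shows that step with three hypothetical modules; the charges and correlations are illustrative, not the QIS5 calibration.

```python
import numpy as np

scr = np.array([100.0, 60.0, 30.0])   # stand-alone charges per risk module
corr = np.array([                     # assumed inter-module correlations
    [1.00, 0.25, 0.25],
    [0.25, 1.00, 0.50],
    [0.25, 0.50, 1.00],
])

# Square-root-of-quadratic-form aggregation: diversification makes the
# combined charge smaller than the simple sum of the module charges (190).
scr_total = np.sqrt(scr @ corr @ scr)
print(f"aggregated SCR: {scr_total:.1f}")
```

An internal model like the one the thesis proposes replaces these fixed charges and correlations with a company-specific stochastic model of the underlying risks.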
