1

Risk management of the financial markets.

January 1996 (has links)
by Chan Pui Man. / Thesis (M.B.A.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 108-111). / ABSTRACT --- p.II / TABLE OF CONTENTS --- p.III / ACKNOWLEDGEMENT --- p.VI / Chapter I. --- INTRODUCTION --- p.1 / Chapter II. --- LITERATURE REVIEW --- p.4 / Impact due to Deregulation --- p.5 / Impact due to Globalization --- p.5 / Impact due to Securitization --- p.6 / Impact due to Institutionalisation --- p.6 / Impact due to Computerisation --- p.7 / Chapter III. --- CONCEPT: MANAGEMENT OF RISK --- p.8 / Definition of Risk --- p.9 / Risk Analysis --- p.10 / Risk Assessment --- p.10 / Risk Measurement --- p.10 / Risk Management --- p.11 / Chapter IV. --- TYPE OF RISK --- p.13 / Market/Capital Risk --- p.14 / Reinvestment Risk --- p.15 / Interest Rate Risk --- p.16 / Credit Risk --- p.17 / Liquidity or Funding Risk --- p.18 / Currency and Foreign Exchange Risk --- p.19 / Inflation Risk --- p.19 / Operations Risk --- p.20 / Legal Risk --- p.20 / Political Risk --- p.21 / Systemic Risk --- p.22 / Portfolio Risk --- p.22 / Control Risk --- p.23 / Settlement Risk --- p.23 / Country Risk --- p.24 / Underwriting Risk --- p.24 / Residual or Moral Risk --- p.24 / Strategy Risk and Environment Risk --- p.25 / Chapter V. --- MEASURING CHANGING RISK --- p.26 / Historical Estimates --- p.28 / Non-parametric Methods --- p.29 / Parametric Methods --- p.30 / Chapter VI. --- EVOLUTION OF RISK ESTIMATION --- p.35 / Chapter VII. --- APPLYING PORTFOLIO THEORY INTO RISK ANALYSIS --- p.41 / Modelling Bank Risk --- p.43 / Identification of linkages between an individual loan and bank's overall risk profile --- p.43 / Distribution of expected values --- p.44 / Portfolio expected value --- p.44 / Scenario Analysis and Formation of Loan Risk Measurement --- p.45 / Subsystem --- p.45 / Formation of an Integrated Risk Measurement --- p.45 / Active Management of Portfolio Risk --- p.49 / Chapter VIII. 
--- RISK ANALYSIS OF INTERNATIONAL INVESTMENT --- p.51 / Discounted-Cash-Flow Analysis --- p.51 / Net Present Value Approach --- p.51 / Internal Rate of Return Approach --- p.54 / Break-even Probability Analysis --- p.55 / Certainty-Equivalent Method --- p.56 / Chapter IX. --- CONSTRUCTING A MODEL FOR RISK ASSESSMENT --- p.58 / Set up a Model to Estimate "Capital at Risk" --- p.58 / Obey the Minimum Standards --- p.60 / Audit and Verify the Model --- p.62 / Chapter X. --- METHODOLOGIES OF RISK MEASUREMENT / Measuring Market Risk : J P Morgan Risk Management Methodology - RiskMetrics™ --- p.64 / Statistical Analysis of Returns and Risk --- p.66 / Market Moves and Locally Gaussian Processes --- p.72 / Stochastic Volatility --- p.72 / Risk and Optionality --- p.73 / Mapping and Term Structure of Interest Rates --- p.73 / Measuring Position Risk --- p.75 / The Simplified Portfolio Approach --- p.77 / The Comprehensive Approach --- p.81 / The Building-Block Approach --- p.83 / Chapter XI. --- ITEMS INVOLVED IN RISK MANAGEMENT --- p.85 / Management Control --- p.85 / Constructing Valuation Methodology --- p.90 / Contents of Reporting --- p.92 / Evaluation of Risk --- p.93 / Counterparty Relationships --- p.93 / Chapter XII. --- AFTERTHOUGHT --- p.95 / APPENDIX --- p.98 / BIBLIOGRAPHY --- p.108
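The Discounted-Cash-Flow entries in the table of contents above (Net Present Value and Internal Rate of Return) can be sketched in a few lines. The cash flows and the 8% hurdle rate below are invented for illustration:

```python
# Hypothetical illustration of the NPV and IRR approaches listed above.
# The project cash flows are invented for the example.

def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return via bisection; assumes one sign change in NPV."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid          # NPV still positive: discount harder
        else:
            hi = mid
    return (lo + hi) / 2.0

flows = [-1000.0, 400.0, 400.0, 400.0]   # initial outlay, then 3 inflows
print(round(npv(0.08, flows), 2))        # NPV at an 8% hurdle rate: 30.84
print(round(irr(flows), 4))              # break-even discount rate, about 9.7%
```

Bisection works here because the NPV of a conventional project (one outlay followed by inflows) decreases monotonically in the discount rate.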
2

Robust approach to risk management and statistical analysis.

January 2012 (has links)
In this thesis we study some structural results in polynomial optimization, with an emphasis on applications to risk management problems and estimation in statistical analysis. The key underlying method being studied is related to the so-called S-lemma in control theory and robust optimization. The original S-lemma was developed by Yakubovich; it gives an equivalent condition for a quadratic polynomial to be non-negative over the region where another quadratic polynomial is non-negative. In this thesis, we extend the S-lemma to univariate polynomials of any degree. Since robust optimization has a strong connection to the S-lemma, our results lead to many applications in risk management and statistical analysis, including estimating certain nonlinear risk measures under moment bound constraints, and an SDP formulation for simultaneous confidence bands. Numerical experiments are conducted and presented to illustrate the effectiveness of the methods. / Wong, Man Hong. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 134-147). / Abstract also in Chinese. 
/ Abstract --- p.i / 摘要 --- p.ii / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Meeting the S-Lemma --- p.5 / Chapter 3 --- A strongly robust formulation --- p.13 / Chapter 3.1 --- A more practical extension for robust optimization --- p.13 / Chapter 3.1.1 --- Motivation from modeling aspect --- p.13 / Chapter 3.1.2 --- Discussion of a more robust condition --- p.15 / Chapter 4 --- Theoretical developments --- p.19 / Chapter 4.1 --- Definition of several order relations --- p.19 / Chapter 4.2 --- S-Lemma with a single condition g(x)≥0 --- p.20 / Chapter 5 --- Confidence bands in polynomial regression --- p.47 / Chapter 5.1 --- An introduction --- p.47 / Chapter 5.1.1 --- A review on robust optimization, nonnegative polynomials and SDP --- p.49 / Chapter 5.1.2 --- A review on the confidence bands --- p.50 / Chapter 5.1.3 --- Our contribution --- p.51 / Chapter 5.2 --- Some preliminaries on optimization --- p.52 / Chapter 5.2.1 --- Robust optimization --- p.52 / Chapter 5.2.2 --- Semidefinite programming and LMIs --- p.53 / Chapter 5.2.3 --- Nonnegative polynomials with SDP --- p.55 / Chapter 5.3 --- Some preliminaries on linear regression and confidence region --- p.59 / Chapter 5.4 --- Optimization approach to the confidence bands construction --- p.63 / Chapter 5.5 --- Numerical experiments --- p.66 / Chapter 5.5.1 --- Linear regression example --- p.66 / Chapter 5.5.2 --- Polynomial regression example --- p.67 / Chapter 5.6 --- Conclusion --- p.70 / Chapter 6 --- Moment bound of nonlinear risk measures --- p.72 / Chapter 6.1 --- Introduction --- p.72 / Chapter 6.1.1 --- Motivation --- p.72 / Chapter 6.1.2 --- Robustness and moment bounds --- p.74 / Chapter 6.1.3 --- Literature review in general --- p.76 / Chapter 6.1.4 --- More literature review in actuarial science --- p.78 / Chapter 6.1.5 --- Our contribution --- p.79 / Chapter 6.2 --- Methodological fundamentals behind the moment bounds --- p.81 / Chapter 6.2.1 --- Dual 
formulations, duality and tight bounds --- p.82 / Chapter 6.2.2 --- SDP and LMIs for some dual problems --- p.84 / Chapter 6.3 --- Worst expectation and worst risk measures on annuity payments --- p.87 / Chapter 6.3.1 --- The worst mortgage payments --- p.88 / Chapter 6.3.2 --- The worst probability of repayment failure --- p.89 / Chapter 6.3.3 --- The worst expected downside risk of exceeding the threshold --- p.90 / Chapter 6.4 --- Numerical examples for risk management --- p.94 / Chapter 6.4.1 --- A mortgage example --- p.94 / Chapter 6.4.2 --- An annuity example --- p.97 / Chapter 6.5 --- Conclusion --- p.100 / Chapter 7 --- Computing distributional robust probability functions --- p.101 / Chapter 7.1 --- Distributional robust function with a single random variable --- p.105 / Chapter 7.2 --- Moment bound of joint probability --- p.108 / Chapter 7.2.1 --- Constraint (7.5) in LMIs --- p.112 / Chapter 7.2.2 --- Constraint (7.6) in LMIs --- p.112 / Chapter 7.2.3 --- Constraint (7.7) in LMIs --- p.116 / Chapter 7.3 --- Several model extensions --- p.119 / Chapter 7.3.1 --- Moment bound of probability of union events --- p.119 / Chapter 7.3.2 --- The variety of domain of x --- p.120 / Chapter 7.3.3 --- Higher moments incorporated --- p.123 / Chapter 7.4 --- Applications of the moment bound --- p.124 / Chapter 7.4.1 --- The Riemann integrable set approximation --- p.124 / Chapter 7.4.2 --- Worst-case simultaneous VaR --- p.124 / Chapter 7.5 --- Conclusion --- p.126 / Chapter 8 --- Concluding Remarks and Future Directions --- p.127 / Chapter A --- Nonnegative univariate polynomials --- p.129 / Chapter B --- First and second moment of (7.2) --- p.131 / Bibliography --- p.134
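The classical quadratic S-lemma that Chapter 2 starts from can be illustrated numerically. Below is a minimal sketch with invented coefficients and a naive grid search for the multiplier λ; the thesis itself extends the lemma to univariate polynomials of any degree, which this toy does not attempt:

```python
# Toy illustration of the classical (quadratic) S-lemma. A quadratic is
# written as a coefficient triple (a, b, c) meaning a*x^2 + b*x + c.

def quad_nonneg(a, b, c):
    """Is a*x^2 + b*x + c >= 0 for every real x?"""
    if abs(a) < 1e-12:                       # degenerate: (near-)linear case
        return abs(b) < 1e-12 and c >= 0
    return a > 0 and b * b - 4.0 * a * c <= 1e-12

def s_lemma_certificate(f, g, lams):
    """Grid-search a lam >= 0 with f - lam*g globally nonnegative.

    By the S-lemma (given a Slater point for g), such a lam certifies that
    f(x) >= 0 holds on the whole set {x : g(x) >= 0}.
    """
    fa, fb, fc = f
    ga, gb, gc = g
    for lam in lams:
        if quad_nonneg(fa - lam * ga, fb - lam * gb, fc - lam * gc):
            return lam
    return None

# f(x) = x + 1 is nonnegative wherever g(x) = 1 - x^2 >= 0, i.e. on [-1, 1],
# but not globally; the certificate multiplier turns out to be lam = 0.5.
lam = s_lemma_certificate((0.0, 1.0, 1.0), (-1.0, 0.0, 1.0),
                          [k / 10.0 for k in range(51)])
print(lam)   # prints 0.5
```

Here f - 0.5*g = 0.5*x^2 + x + 0.5 = 0.5*(x + 1)^2, which is visibly nonnegative everywhere, so the grid search recovers the exact certificate.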
3

Claim dependence in credibility models

Yeo, Keng Leong, Actuarial Studies, Australian School of Business, UNSW January 2006 (has links)
Existing credibility models have mostly allowed for only one source of claim dependence: that across time for an individual insured risk or a group of homogeneous insured risks. Numerous circumstances demonstrate that this may be inadequate. In this dissertation, we develop a two-level common-effects model, based loosely on the Bayesian model, which allows for two possible sources of dependence: that across time for the same individual risk and that between risks. For the case of Normal common effects, we are able to derive explicit formulas for the credibility premium. This takes the intuitive form of a weighted average between the individual risk's claims experience, the group's claims experience and the prior mean. We also consider the use of copulas, a tool widely used in other areas of work involving dependence, in constructing credibility premiums. Specifically, we utilise copulas to model the dependence across time for an individual risk or group of homogeneous risks. We develop the construction with several well-known families of copulas and are able to derive explicit formulas for their respective conditional expectations. Whilst some recent work has been done on constructing credibility models with copulas, explicit formulas for the conditional expectations have rarely been made available. Finally, we calibrate these copula credibility models using a real data set. This data set relates to the claims experience of workers' compensation insurance by occupation over a 7-year period for a particular state in the United States. Our results show that for each occupation, claims dependence across time is indeed present. Amongst the copulas considered in our empirical analysis, the Cook-Johnson copula model is found to be the best fit for the data set used. The calibrated copula models are then used for prediction of the next period's claims. We found that the Cook-Johnson copula model gives superior predictions. 
Furthermore, the calibration exercise highlighted the importance of examining the nature of the data and comparing it against the characteristics of the copulas being fitted.
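The "weighted average" credibility form mentioned in the abstract can be sketched with the classical one-level Bühlmann premium. The claims history and the structure parameter k below are invented, and this omits the dissertation's second (between-risk) dependence level and its copula machinery entirely:

```python
# A minimal sketch of the classical credibility weighted average that the
# two-level common-effects model above generalises. Purely illustrative.

def credibility_premium(claims, prior_mean, k):
    """Z * sample mean + (1 - Z) * prior mean, with credibility Z = n/(n+k)."""
    n = len(claims)
    z = n / (n + k)                          # weight on own claims experience
    sample_mean = sum(claims) / n
    return z * sample_mean + (1.0 - z) * prior_mean

history = [120.0, 95.0, 130.0, 110.0]        # one risk's past annual claims
print(credibility_premium(history, prior_mean=100.0, k=2.0))   # about 109.17
```

As the history lengthens, Z approaches 1 and the premium leans ever more on the risk's own experience; a large k (noisy individual experience relative to between-risk variation) pulls it back toward the prior mean.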
4

Asymmetric heavy-tailed distributions : theory and applications to finance and risk management

Zhu, Dongming, 1963- January 2007 (has links)
This thesis focuses on the construction, properties and estimation of asymmetric heavy-tailed distributions, as well as on their applications to financial modeling and risk measurement. First, we suggest a general procedure to construct a fully asymmetric distribution based on a symmetrically parametric distribution, and establish some natural relationships between the symmetric and asymmetric distributions. Then, three new classes of asymmetric distributions are proposed using this procedure: the Asymmetric Exponential Power Distributions (AEPD), the Asymmetric Student-t Distributions (ASTD) and the Asymmetric Generalized t Distribution (AGTD). For the first two distributions, we interpret their parameters and explore their basic properties, including moments, expected shortfall, characterization by the maximum entropy property, and the stochastic representation. Although neither distribution satisfies the regularity conditions under which the ML estimators have the usual asymptotics, owing to a non-differentiable likelihood function, we nonetheless establish asymptotics for the full MLE of the parameters. A closed-form expression for the Fisher information matrix is derived, and Monte Carlo studies are provided. We also illustrate the usefulness of GARCH-type models with AEPD and ASTD innovations in the context of predicting downside market risk of financial assets, and demonstrate their superiority over skew-normal and skew-Student's t GARCH models. Finally, two new classes of generalized extreme value distributions, which include Jenkinson's GEV (Generalized Extreme Value) distribution (Jenkinson, 1955) as a special case, are proposed using the maximum entropy principle, and their properties are investigated in detail.
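The construction step described above, building an asymmetric distribution from a symmetric base, can be illustrated with a simple two-piece normal: stretch the two sides of a symmetric density by different scales. This is only a hedged sketch of the general idea; the AEPD/ASTD constructions in the thesis are richer:

```python
import math

# Illustrative two-piece ("split") normal: left tail stretched by alpha,
# right tail by beta. Not the thesis's AEPD/ASTD construction, just the
# simplest instance of symmetric-base -> asymmetric-density.

def std_normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def two_piece_pdf(x, alpha, beta):
    """Asymmetric density built from the standard normal; integrates to 1."""
    scale = 2.0 / (alpha + beta)     # renormalisation after rescaling sides
    if x < 0:
        return scale * std_normal_pdf(x / alpha)
    return scale * std_normal_pdf(x / beta)

# alpha > beta gives a heavier left (downside) tail than right tail:
print(two_piece_pdf(-1.0, 2.0, 1.0) > two_piece_pdf(1.0, 2.0, 1.0))   # True
```

With alpha = beta the construction collapses back to the symmetric base, which is the kind of natural symmetric-asymmetric relationship the abstract refers to.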
5

Asymmetric heavy-tailed distributions : theory and applications to finance and risk management

Zhu, Dongming, 1963- January 2007 (has links)
No description available.
6

Modeling and Analyzing Systemic Risk in Complex Sociotechnical Systems: The Role of Teleology, Feedback, and Emergence

Zhang, Zhizun January 2018 (has links)
Recent systemic failures such as the BP Deepwater Horizon Oil Spill, Global Financial Crisis, and Northeast Blackout have reminded us, once again, of the fragility of complex sociotechnical systems. Although the failures occurred in very different domains and were triggered by different events, certain common underlying mechanisms of abnormality drive these systemic failures. Understanding these mechanisms is essential to avoid such disasters in the future. Moreover, these disasters happened in sociotechnical systems, where both social and technical elements interact with each other and with the environment. The nonlinear interactions among these components can lead to an "emergent" behavior, i.e., the behavior of the whole is more than the sum of its parts, that can be difficult to anticipate and control. Abnormalities can propagate through the systems to cause systemic failures. To ensure the safe operation and production of such complex systems, we need to understand and model the associated systemic risk. The traditional emphasis of chemical engineering risk modeling is on the technical components of a chemical plant, such as equipment and processes. However, a chemical plant is more than a set of equipment and processes, with the human elements playing a critical role in decision-making. Industrial statistics show that about 70% of accidents are caused by human errors. So, new modeling techniques that go beyond the classical equipment/process-oriented approaches to include the human elements (i.e., the "socio" part of the sociotechnical systems) are needed for analyzing systemic risk of complex sociotechnical systems. This thesis presents such an approach: a new knowledge modeling paradigm for systemic risk analysis that goes beyond chemical plants by unifying different perspectives. First, we develop a unifying teleological, control-theoretic framework to model decision-making knowledge in a complex system. 
The framework allows us to identify systematically the common failure mechanisms behind systemic failures in different domains. We show how cause-and-effect knowledge can be incorporated into this framework by using signed directed graphs. We also develop an ontology-driven knowledge modeling component and show how this can support decision-making by using a case study in public health emergency. This is the first such attempt to develop an ontology for public health documents. Lastly, from a control-theoretic perspective, we address the question, “how do simple individual components of a system interact to produce a system behavior that cannot be explained by the behavior of just the individual components alone?” Through this effort, we attempt to bridge the knowledge gap between control theory and complexity science.
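The signed-directed-graph representation of cause-and-effect knowledge mentioned above can be sketched as a toy propagation routine: each edge carries a sign saying whether an increase in the source pushes the target up or down, and a disturbance is propagated along the edges. The miniature plant graph below is invented for illustration:

```python
# Toy signed directed graph (SDG) sketch: propagate a +1/-1 deviation from
# a disturbed node through signed cause-effect edges. The tiny "plant" is
# invented; real SDG models are far larger and handle loops and ambiguity.

def propagate(edges, start, direction):
    """Breadth-first propagation of a +1/-1 deviation from a start node."""
    effect = {start: direction}
    frontier = [start]
    while frontier:
        node = frontier.pop(0)
        for (src, dst), sign in edges.items():
            if src == node and dst not in effect:   # first explanation wins
                effect[dst] = effect[node] * sign
                frontier.append(dst)
    return effect

# Edge (A, B): +1 means "A up drives B up", -1 means "A up drives B down".
edges = {
    ("feed_rate", "level"): +1,
    ("level", "outflow"): +1,
    ("coolant", "temperature"): -1,
    ("temperature", "pressure"): +1,
}
# Coolant flow up -> temperature down -> pressure down; level path untouched.
print(propagate(edges, "coolant", +1))
```

Running the same routine backwards (from an observed symptom toward candidate root causes) is the essence of SDG-based fault diagnosis.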
7

Optimal dynamic portfolio selection under downside risk measure.

January 2014 (has links)
Instead of controlling "symmetric" risks measured by central moments of terminal wealth, more and more portfolio models have shifted their focus to managing "asymmetric" downside risks, where the investment return falls below a certain threshold. Among the existing downside risk measures, the safety-first principle, the value-at-risk (VaR), the conditional value-at-risk (CVaR) and the lower-partial moments (LPM) are probably the most promising representatives. / In this dissertation, we investigate a general class of dynamic mean-downside risk portfolio selection formulations, including the mean-exceeding probability portfolio selection formulation, the dynamic mean-VaR portfolio selection formulation, the dynamic mean-LPM portfolio selection formulation and the dynamic mean-CVaR portfolio selection formulation in continuous time, whereas the current literature has only witnessed their static versions. Our contributions are two-fold: building up tractable formulations and deriving the corresponding optimal policies. By imposing a limit funding level on the terminal wealth, we conquer the ill-posedness exhibited in the class of mean-downside risk portfolio models. The limit funding level not only enables us to solve dynamic mean-downside risk portfolio optimization problems, but also offers the flexibility to tame the aggressiveness of the portfolio policies generated from the mean-downside risk optimization models. 
Using the quantile method and the martingale approach, we derive optimal solutions for all the above-mentioned mean-downside risk models. More specifically, for a general market setting, we prove the existence and uniqueness of the Lagrange multipliers, which is a key step in applying the martingale approach, and establish a theoretical foundation for developing efficient numerical solution approaches. Furthermore, for situations where the opportunity set of the market setting is deterministic, we derive analytical portfolio policies. / Zhou, Ke. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves i-vi). / Abstracts also in Chinese.
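The downside risk measures named above can be illustrated with their simplest empirical, static, historical estimators. The loss sample and the 80% level are invented; the thesis itself treats the much harder dynamic, distribution-level problem:

```python
# Hedged empirical sketch of VaR and CVaR from a historical loss sample.
# Invented data; static order-statistic estimators only.

def historical_var(losses, level):
    """Empirical `level`-quantile of the losses (order-statistic convention)."""
    ordered = sorted(losses)
    idx = min(len(ordered) - 1, int(level * len(ordered)))
    return ordered[idx]

def historical_cvar(losses, level):
    """Average loss at or beyond the empirical VaR threshold."""
    var = historical_var(losses, level)
    tail = [loss for loss in losses if loss >= var]
    return sum(tail) / len(tail)

losses = [-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 9.0]
print(historical_var(losses, 0.80))    # prints 4.0
print(historical_cvar(losses, 0.80))   # prints 6.5
```

Note how CVaR exceeds VaR at the same level: it averages over the whole tail, which is exactly the "asymmetric" tail behaviour these models are designed to control.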
8

Estimation of Extra Risk and Benchmark Dose in Dose Response Models

Wang, Na January 2008 (has links) (PDF)
No description available.
9

Efficient portfolio optimisation by hybridised machine learning

26 March 2015 (has links)
D.Ing. / The task of managing an investment portfolio is one that continues to challenge both professionals and private individuals on a daily basis. Contrary to popular belief, the desire of these actors is not in all (or even most) instances to generate the highest profits imaginable, but rather to achieve an acceptable return for a given level of risk. In other words, the investor desires to have his funds generate money for him, while not feeling that he is gambling away his (or his clients') funds. The reasons for a given risk tolerance (or risk appetite) are as varied as the clients themselves: in some instances, clients will simply have their own arbitrary risk appetites, while others may need to maintain certain values to satisfy their mandates, and still others must meet regulatory requirements. In order to accomplish this task, many measures and representations of performance data are employed to both communicate and understand the risk-reward trade-offs involved in the investment process. In light of the recent economic crisis, greater understanding and control of investment is being clamoured for around the globe, along with the concomitant finger-pointing and blame-assignation that inevitably follows such turmoil, and such heavy costs. The reputation of the industry, always dubious in the best of times, has also taken a significant knock after the events, and while this author would not like to point fingers, clearly the managers of funds, custodians of other people's money, are in no small measure responsible for the loss of the funds under their care. It is with these concerns in mind that this thesis explores the potential for utilising the powerful tools found within the disciplines of artificial intelligence and machine learning in order to aid fund managers in the balancing of portfolios, tailored specifically to their clients' individual needs. 
These fields hold particular promise due to their focus on generalised pattern recognition, multivariable optimisation and continuous learning. With these tools in hand, a fund manager is able to continuously rebalance a portfolio for a client, given the client's specific needs, and achieve optimal results while staying within the client's risk parameters (in other words, keeping within the client's comfort zone in terms of price/value fluctuations). This thesis will first explore the drivers and constraints behind the investment process, as well as the process undertaken by the fund manager as recommended by the CFA (Chartered Financial Analyst) Institute. The thesis will then elaborate on modern investment theory, and the mathematics and statistics that underlie the process. Some common tools from the field of Technical Analysis will be examined, and their implicit assumptions and limitations will be shown, both for understanding and to show how they can still be utilised once their limitations are explicitly known. Thereafter the thesis will present the various tools from within the fields of machine learning and artificial intelligence that form the heart of this work. Particular attention is paid to data structuring, and the inherent dangers to be aware of when structuring data representations for computational use. The thesis will then illustrate how to create an optimiser using a genetic algorithm for the purpose of balancing a portfolio. Lastly, it will be shown how to create a learning system that continues to update its own understanding, and create a hybrid learning optimiser to enable fund managers to do their job effectively and safely.
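The genetic-algorithm portfolio balancer outlined above can be sketched compactly: chromosomes are portfolio weight vectors, and fitness is expected return penalised for breaching a client's risk cap. All returns, variances, the risk cap and the GA settings below are invented for illustration:

```python
import random

# Toy GA portfolio balancer: invented numbers, simplified (uncorrelated)
# risk model, and a penalty enforcing the client's risk comfort zone.

random.seed(7)

MU = [0.08, 0.12, 0.05]                  # assumed expected asset returns
VAR = [0.04, 0.10, 0.01]                 # simplified: uncorrelated variances
RISK_CAP = 0.05                          # client's variance comfort zone

def normalise(w):
    s = sum(w)
    return [x / s for x in w]

def fitness(w):
    ret = sum(wi * mi for wi, mi in zip(w, MU))
    risk = sum(wi * wi * vi for wi, vi in zip(w, VAR))
    return ret - 10.0 * max(0.0, risk - RISK_CAP)   # penalise cap breaches

def evolve(pop_size=40, generations=60):
    pop = [normalise([random.random() for _ in MU]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]        # crossover
            i = random.randrange(len(child))
            child[i] = max(1e-6, child[i] + random.gauss(0, 0.05))  # mutation
            children.append(normalise(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(w, 2) for w in best])
```

Because the best half of each generation is carried over unchanged, the best fitness never decreases; changing RISK_CAP is how the sketch would be "tailored" to a different client's comfort zone.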
10

Shrinkage method for estimating optimal expected return of self-financing portfolio. / CUHK electronic theses & dissertations collection

January 2011 (has links)
A new estimator for calculating the optimal expected return of a self-financing portfolio is proposed, by considering the joint impact of the sample mean vector and the sample covariance matrix. A shrinkage covariance matrix is designed to substitute the sample covariance matrix in the optimization procedure, which leads to an estimate of the optimal expected return smaller than the plug-in estimate. The new estimator is also applicable for both p < n and p ≥ n. Simulation studies are conducted for two empirical data sets. The simulation results show that the new estimator is superior to the previous methods. / Following the seminal work of Markowitz in 1952, modern portfolio theory studies how to maximize the portfolio expected return for a given risk, or minimize the risk for a given expected return. Since these two issues are equivalent, this thesis focuses only on the optimal expected return of a self-financing portfolio for a given risk. / Finally, under certain assumptions, we extend our research in the framework of random matrix theory. / The mean-variance portfolio optimization procedure requires two crucial inputs: the theoretical mean vector and the theoretical covariance matrix of the portfolio in one period. Since the traditional plug-in method using the sample mean vector and the sample covariance matrix of the historical data incurs substantial estimation errors, this thesis explores how the sample mean vector and the sample covariance matrix behave in the optimization procedure based on the idea of conditional expectation, and finds that the effect of the sample mean vector is additive while the effect of the sample covariance matrix is multiplicative. / Liu, Yan. / Adviser: Ngai Hang Chan. / Source: Dissertation Abstracts International, Volume: 73-06, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 76-80). / Electronic reproduction. 
Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
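The shrinkage idea in the abstract, replacing the sample covariance matrix with a shrunk version before it enters the optimisation, can be sketched as follows. The shrinkage target (the diagonal), the weight delta and the return data are all invented; the thesis derives its own estimator rather than this simple convex combination:

```python
# Hedged sketch of covariance shrinkage: pull the sample covariance toward
# a structured target (here its own diagonal) before optimisation. Invented
# data and weight; not the thesis's estimator.

def sample_cov(rows):
    """Unbiased sample covariance of row-wise observations."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    return [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / (n - 1)
             for j in range(p)] for i in range(p)]

def shrink(cov, delta):
    """(1 - delta) * S + delta * diag(S): off-diagonals move toward zero."""
    p = len(cov)
    return [[cov[i][j] if i == j else (1.0 - delta) * cov[i][j]
             for j in range(p)] for i in range(p)]

returns = [[0.01, 0.02], [-0.02, -0.01], [0.03, 0.01], [0.00, -0.02]]
s = sample_cov(returns)
print(shrink(s, 0.5))   # variances kept, covariances halved
```

Damping the noisy off-diagonal entries is what tames the estimation error that the plug-in optimiser would otherwise amplify, which is why the shrunk matrix yields the smaller, more honest estimate of the optimal expected return.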
