351

Cornish-Fisher Expansion and Value-at-Risk method in application to risk management of large portfolios

Sjöstrand, Maria, Aktaş, Özlem January 2011 (has links)
One of the major problems faced by banks is how to manage the risk exposure in large portfolios. According to the Basel II regulation, banks have to measure risk using Value-at-Risk at the 99% confidence level. However, the regulation does not specify how Value-at-Risk is to be calculated. The easiest way to calculate Value-at-Risk is to assume that portfolio returns are normally distributed. Although this is the most common way to calculate Value-at-Risk, other methods exist as well. The recent crisis showed that the standard methods are unfortunately not always enough to prevent bankruptcy. This paper compares the classical methods of estimating risk with other methods, such as the Cornish-Fisher expansion (CFVaR) and the assumption of a generalized hyperbolic distribution. To carry out this study, we estimate the risk in a large portfolio consisting of ten stocks. These stocks are chosen from the NASDAQ-100 list in order to have highly liquid stocks (blue chips). The stocks are chosen from different sectors to make the portfolio well diversified. To investigate the impact of dependence between the stocks in the portfolio, we remove the two most correlated stocks and consider the resulting eight-stock portfolio as well. In both portfolios we assign equal weights to the included stocks. The results show that for a well-diversified large portfolio none of the risk measures is violated. However, for a portfolio consisting of only one highly volatile stock we show that the classical methods are violated, while the modern methods mentioned above are not.
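As a rough illustration of the comparison described above, the following sketch computes the classical (normal) 99% VaR and the Cornish-Fisher modified VaR for a simulated return series; the skewed sample, the function names and the parameter choices are assumptions for illustration and do not reproduce the thesis's ten-stock portfolio.

```python
# A minimal sketch of 99% Value-at-Risk under (a) a normal assumption and
# (b) the Cornish-Fisher (modified) expansion, for a vector of portfolio
# returns.  The return series below is simulated for illustration only.
import numpy as np
from scipy import stats

def normal_var(returns, alpha=0.01):
    """Parametric VaR assuming normally distributed returns (positive number = loss)."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    z = stats.norm.ppf(alpha)               # about -2.326 for alpha = 1%
    return -(mu + z * sigma)

def cornish_fisher_var(returns, alpha=0.01):
    """Modified VaR: adjust the normal quantile for sample skewness and excess kurtosis."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    s = stats.skew(returns)                  # sample skewness
    k = stats.kurtosis(returns)              # excess kurtosis (0 for a normal)
    z = stats.norm.ppf(alpha)
    z_cf = (z
            + (z**2 - 1) * s / 6
            + (z**3 - 3 * z) * k / 24
            - (2 * z**3 - 5 * z) * s**2 / 36)
    return -(mu + z_cf * sigma)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical daily returns of an equally weighted portfolio, drawn from a
    # skewed, fat-tailed distribution purely for illustration.
    r = stats.skewnorm.rvs(a=-4, loc=0.0005, scale=0.012, size=2500, random_state=rng)
    print("99% normal VaR        :", round(normal_var(r), 5))
    print("99% Cornish-Fisher VaR:", round(cornish_fisher_var(r), 5))
```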
352

Optimal Reinsurance Designs: from an Insurer’s Perspective

Weng, Chengguo 09 1900 (has links)
The research on optimal reinsurance design dates back to the 1960s. For nearly half a century, the quest for optimal reinsurance designs has remained a fascinating subject, drawing significant interest from both academics and practitioners. Its fascination lies in its potential as an effective risk management tool for insurers. There are many ways of formulating the optimal design of reinsurance, depending on the chosen objective and constraints. In this thesis, we address the problem of optimal reinsurance design from an insurer's perspective. For an insurer, an appropriate use of reinsurance helps to reduce adverse risk exposure and improve the overall viability of the underlying business. On the other hand, reinsurance incurs an additional cost to the insurer in the form of the reinsurance premium. This implies a classical risk-and-reward tradeoff faced by the insurer. The primary objective of the thesis is to develop theoretically sound and yet practical solutions in the quest for optimal reinsurance designs. In order to achieve this objective, the thesis is divided into two parts. In the first part, a number of reinsurance models are developed and their optimal reinsurance treaties are derived explicitly. This part focuses on risk-measure-minimization reinsurance models and discusses the optimal reinsurance treaties by exploiting two of the most common risk measures, the Value-at-Risk (VaR) and the Conditional Tail Expectation (CTE). Some additional important economic factors, such as the reinsurance premium budget and the insurer's profitability, are also considered. The second part proposes an innovative method of formulating reinsurance models, which we refer to as the empirical approach since it exploits the insurer's empirical loss data explicitly. The empirical approach has the advantage of being practical and intuitively appealing. It is motivated by the difficulty that reinsurance models are often infinite-dimensional optimization problems, so that explicit solutions are achievable only in some special cases. The empirical approach effectively reformulates the optimal reinsurance problem as a finite-dimensional optimization problem. Furthermore, we demonstrate that second-order cone programming can be used to obtain the optimal solutions for a wide range of reinsurance models formulated by the empirical approach.
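The empirical approach itself is formulated in the thesis as a second-order cone program; the toy sketch below only illustrates the general idea of working directly with empirical loss data, by searching a grid of stop-loss retentions for the one that minimises the empirical CTE of the insurer's retained position under an assumed premium budget. The loss model, the expected-value premium principle and all parameters are illustrative assumptions, not the thesis's models.

```python
# A toy sketch, not the thesis's actual model: given empirical loss data, it
# evaluates the insurer's total retained risk (retained loss + reinsurance
# premium) under a stop-loss treaty and picks the retention d that minimises
# the empirical CTE, subject to a premium budget.  All parameters are assumed.
import numpy as np

def empirical_cte(losses, alpha=0.95):
    """Conditional Tail Expectation: mean of the worst (1 - alpha) share of losses."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def retained_risk(losses, d, loading=0.2):
    """Insurer keeps min(X, d); reinsurer covers (X - d)+ for an expected-value premium."""
    ceded = np.maximum(losses - d, 0.0)
    premium = (1.0 + loading) * ceded.mean()
    return np.minimum(losses, d) + premium, premium

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.lognormal(mean=8.0, sigma=1.2, size=20_000)    # hypothetical loss sample
    budget = 0.5 * x.mean()                                 # premium budget assumption
    best = None
    for d in np.quantile(x, np.linspace(0.5, 0.999, 200)):  # candidate retentions
        total, premium = retained_risk(x, d)
        if premium <= budget:
            cte = empirical_cte(total)
            if best is None or cte < best[1]:
                best = (d, cte)
    print(f"best retention d = {best[0]:,.0f}, CTE_95 of retained position = {best[1]:,.0f}")
```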
353

Sample Average Approximation of Risk-Averse Stochastic Programs

Wang, Wei 17 August 2007 (has links)
Sample average approximation (SAA) is a well-known solution methodology for traditional stochastic programs which are risk neutral in the sense that they consider optimization of expectation functionals. In this thesis we establish sample average approximation methods for two classes of non-traditional stochastic programs. The first class is that of stochastic min-max programs, i.e., min-max problems with expected value objectives, and the second class is that of expected value constrained stochastic programs. We specialize these SAA methods for risk-averse stochastic problems with a bi-criteria objective involving mean and mean absolute deviation, and those with constraints on conditional value-at-risk. For the proposed SAA methods, we prove that the results of the SAA problem converge exponentially fast to their counterparts for the true problem as the sample size increases. We also propose implementation schemes which return not only candidate solutions but also statistical upper and lower bound estimates on the optimal value of the true problem. We apply the proposed methods to solve portfolio selection and supply chain network design problems. Our computational results reflect good performance of the proposed SAA schemes. We also investigate the effect of various types of risk-averse stochastic programming models in controlling risk in these problems.
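A minimal SAA sketch in the spirit of the bi-criteria mean/mean-absolute-deviation objective mentioned above is given below; the three-asset scenario model, the risk-aversion weight and the use of a general-purpose nonlinear solver are assumptions made only to keep the example short (in practice the MAD term is usually linearised and the SAA problem solved as a linear program).

```python
# A minimal sample-average-approximation (SAA) sketch for a risk-averse
# portfolio problem with the bi-criteria objective
#   -E[r_p] + lam * E|r_p - E[r_p]|   (mean / mean absolute deviation),
# solved on a simulated scenario sample.  Asset model, lam and solver are
# illustrative assumptions, not the thesis's formulations.
import numpy as np
from scipy.optimize import minimize

def saa_objective(w, returns, lam=2.0):
    """Sample average of the mean-MAD criterion for portfolio weights w."""
    port = returns @ w
    return -port.mean() + lam * np.abs(port - port.mean()).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    mu = np.array([0.06, 0.09, 0.12])
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])
    sample = rng.multivariate_normal(mu, cov, size=5000)      # SAA scenarios

    n = len(mu)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # fully invested
    res = minimize(saa_objective, x0=np.full(n, 1.0 / n), args=(sample,),
                   method="SLSQP", bounds=[(0.0, 1.0)] * n, constraints=cons)
    print("SAA-optimal weights:", np.round(res.x, 3))
    print("SAA objective value:", round(res.fun, 4))
```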
354

Empirical likelihood and extremes

Gong, Yun 17 January 2012 (has links)
In 1988, Owen introduced empirical likelihood as a nonparametric method for constructing confidence intervals and regions. Since then, empirical likelihood has been studied extensively in the literature due to its generality and effectiveness. It is well known that empirical likelihood has several attractive advantages compared to competitors such as the bootstrap: it determines the shape of confidence regions automatically using only the data; it straightforwardly incorporates side information expressed through constraints; and it is Bartlett correctable. The main part of this thesis extends the empirical likelihood method to several interesting and important statistical inference situations. The thesis has four components. The first component (Chapter II) proposes a smoothed jackknife empirical likelihood method to construct confidence intervals for the receiver operating characteristic (ROC) curve, in order to overcome the computational difficulty arising from nonlinear constraints in the maximization problem. The second component (Chapters III and IV) proposes smoothed empirical likelihood methods to obtain interval estimates for the conditional Value-at-Risk with the volatility model being an ARCH/GARCH model and a nonparametric regression respectively, which have applications in financial risk management. The third component (Chapter V) derives the empirical likelihood for intermediate quantiles, which play an important role in the statistics of extremes. Finally, the fourth component (Chapters VI and VII) presents two additional results: in Chapter VI, we show that, when the third moment is infinite, we may prefer the Student's t-statistic to the sample mean standardized by the true standard deviation; in Chapter VII, we present a method for testing a subset of parameters for a given parametric model of stationary processes.
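For readers unfamiliar with empirical likelihood, the following sketch shows the basic (unsmoothed) construction for a mean, profiling the log-likelihood ratio over candidate values and calibrating with the chi-square limit; the simulated data and the grid search are illustrative assumptions and do not implement the smoothed or jackknife variants developed in the thesis.

```python
# A small sketch of Owen-style empirical likelihood for a mean: for a candidate
# value mu, the profile empirical log-likelihood ratio is computed by solving
# the one-dimensional Lagrange-multiplier equation, and a 95% confidence
# interval is read off the chi-square(1) calibration.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu."""
    a = x - mu
    if a.min() >= 0 or a.max() <= 0:        # mu outside the convex hull of the data
        return np.inf
    eps = 1e-10
    lo, hi = -1.0 / a.max() + eps, -1.0 / a.min() - eps
    # The estimating equation sum a_i / (1 + lam * a_i) = 0 is monotone in lam.
    lam = brentq(lambda l: np.sum(a / (1.0 + l * a)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * a))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.exponential(scale=2.0, size=200)          # skewed sample, true mean 2
    cutoff = chi2.ppf(0.95, df=1)
    grid = np.linspace(x.mean() - 0.8, x.mean() + 0.8, 400)
    inside = [m for m in grid if el_log_ratio(x, m) <= cutoff]
    print(f"95% EL confidence interval for the mean: [{min(inside):.3f}, {max(inside):.3f}]")
```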
355

Value at Risk of Portfolio with Realized Volatility

李承儒 Unknown Date (has links)
When using Value at Risk as a risk management tool for a portfolio, one must account for the facts that financial asset returns typically exhibit fat tails, excess kurtosis and volatility clustering, and that information and volatility shocks interact across assets; empirically, multivariate GARCH models are therefore commonly used to estimate the portfolio covariance matrix. However, multivariate GARCH models suffer from the curse of dimensionality: as the number of assets in the portfolio increases, parameter estimation becomes increasingly difficult. An alternative approach to volatility estimation, known as realized volatility, handles the high-dimensionality of portfolios more easily than multivariate GARCH models. This thesis uses realized volatility, the BEKK multivariate GARCH model and the CCC model, with China Steel, TSMC and Cathay Financial Holdings as the studied assets, and compares the Value-at-Risk estimation performance of the three methods. The empirical results show that realized volatility is indeed well suited to Value-at-Risk estimation, and that its performance is slightly superior.
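A rough sketch of the realized-volatility route to VaR discussed in the abstract is given below: daily realized variances are formed from squared intraday portfolio returns and a 99% one-day VaR is obtained from a normal quantile; the simulated intraday data and the simple averaging of recent realized variances are assumptions for illustration only.

```python
# A rough sketch of VaR based on realized volatility: intraday portfolio
# returns are squared and summed within each day to form the daily realized
# variance, and a 99% one-day VaR is obtained by scaling a normal quantile
# with the square root of an averaged realized variance.
import numpy as np
from scipy.stats import norm

def realized_variance(intraday_returns):
    """Daily realized variance: sum of squared intraday returns, one row per day."""
    return np.sum(intraday_returns ** 2, axis=1)

def rv_var(intraday_returns, alpha=0.01, window=20):
    """99% one-day VaR from the average realized variance of the last `window` days."""
    rv = realized_variance(intraday_returns)
    sigma = np.sqrt(rv[-window:].mean())
    return -norm.ppf(alpha) * sigma            # positive number = loss quantile

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    days, bars = 250, 48                        # e.g. 48 five-minute bars per day
    # Hypothetical intraday returns of an equally weighted three-stock portfolio.
    r_intraday = rng.normal(0.0, 0.0015, size=(days, bars))
    print("99% one-day realized-volatility VaR:", round(rv_var(r_intraday), 5))
```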
356

Measuring financial risk and extremal dependence between financial markets in Taiwan

劉宜芳 Unknown Date (has links)
This paper links two applications of Extreme Value Theory (EVT) to analyze Taiwanese financial markets: (1) the computation of Value at Risk (VaR) and Expected Shortfall (ES), and (2) the estimation of cross-market dependence under extreme events. Daily data from the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the USD/NTD foreign exchange rate are employed to analyze the behavior of each return series and the dependence structure between the foreign exchange market and the equity market. In the univariate case, EVT provides a more accurate way to estimate VaR when computing risk measures. In the bivariate case, when measuring extremal dependence, the full-sample results show that the two markets are asymptotically independent in their extremes, while the subperiod analyses indicate slight dependence in specific periods. Therefore, there is no significant evidence that extreme events in one market (the equity market or the foreign exchange market) affect the other in Taiwan.
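The following peaks-over-threshold sketch illustrates the univariate EVT computation of VaR and ES referred to above; the simulated Student-t returns, the 95% threshold choice and the function names are assumptions and do not reproduce the thesis's TAIEX and USD/NTD analysis.

```python
# A minimal peaks-over-threshold sketch: a generalized Pareto distribution is
# fitted to losses above a high threshold and used to estimate 99% VaR and ES.
import numpy as np
from scipy.stats import genpareto, t as student_t

def pot_var_es(losses, p=0.99, threshold_q=0.95):
    """EVT (POT) estimates of VaR_p and ES_p for a 1-D array of losses."""
    u = np.quantile(losses, threshold_q)
    exceed = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0)     # shape xi, scale beta
    n, n_u = len(losses), len(exceed)
    var_p = u + (beta / xi) * (((1 - p) * n / n_u) ** (-xi) - 1.0)
    es_p = (var_p + beta - xi * u) / (1.0 - xi)     # requires xi < 1
    return var_p, es_p

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    returns = student_t.rvs(df=4, size=5000, random_state=rng) * 0.01   # fat-tailed returns
    losses = -returns
    var99, es99 = pot_var_es(losses)
    print(f"EVT 99% VaR: {var99:.4f}   EVT 99% ES: {es99:.4f}")
```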
357

Comparing Approximations for Risk Measures Related to Sums of Correlated Lognormal Random Variables

Karniychuk, Maryna 09 January 2007 (has links) (PDF)
In this thesis the performances of different approximations are compared for a standard actuarial and financial problem: the estimation of quantiles and Conditional Tail Expectations of the final value of a series of discrete cash flows. To calculate risk measures such as quantiles and Conditional Tail Expectations, one needs the distribution function of the final wealth. The final value of a series of discrete payments in the considered model is a sum of dependent lognormal random variables. Unfortunately, its distribution function cannot be determined analytically, so one usually has to resort to time-consuming Monte Carlo simulations. Since computational time remains a serious drawback of Monte Carlo simulation, several analytical techniques for approximating the distribution function of final wealth are proposed in the framework of this thesis. These are the widely used moment-matching approximations and innovative comonotonic approximations. Moment-matching methods approximate the unknown distribution function by a given one in such a way that some characteristics (in the present case the first two moments) coincide. The ideas of two well-known approximations are described briefly, and analytical formulas for valuing quantiles and Conditional Tail Expectations are derived for both of them. Recently, a group of researchers at the Catholic University of Leuven in Belgium derived comonotonic upper and lower bounds for sums of dependent lognormal random variables; these are bounds in terms of "convex order". In order to provide the theoretical background for the comonotonic approximations, several fundamental ordering concepts, such as stochastic dominance, stop-loss order and convex order, and some important relations between them are introduced. The last two concepts are closely related: both stochastic orders express which of two random variables is the "less dangerous/more attractive" one. The central idea of the comonotonic upper bound approximation is to replace the original sum, representing final wealth, by a new sum whose components have the same marginal distributions as the components of the original sum, but a "more dangerous/less attractive" dependence structure. The upper bound, or mathematically speaking the convex-largest sum, is obtained when the components of the sum are the components of a comonotonic random vector. Therefore, fundamental concepts of comonotonicity theory which are important for the derivation of the convex bounds are introduced, and the most widespread examples of comonotonicity that emerge in a financial context are described. In addition to the upper bound, a lower bound can be derived as well; this provides a measure of the reliability of the upper bound. The lower bound approach is based on the technique of conditioning: it is obtained by applying Jensen's inequality for conditional expectations to the original sum of dependent random variables. Two slightly different versions of the conditioning random variable are considered in this thesis; they give rise to two different approaches, referred to as the comonotonic lower bound and the comonotonic "maximal variance" lower bound approaches. Special attention is given to the class of distortion risk measures. It is shown that the quantile risk measure as well as the Conditional Tail Expectation (under some additional conditions) belong to this class. It is proved that both risk measures under consideration are additive for a sum of comonotonic random variables, i.e. the quantile and the Conditional Tail Expectation of the comonotonic upper and lower bounds can easily be obtained by summing the corresponding risk measures of the marginals involved. A special subclass of distortion risk measures, referred to as the class of concave distortion risk measures, is also considered. It is shown that the quantile risk measure is not a concave distortion risk measure, while the Conditional Tail Expectation (under some additional conditions) is. A theoretical justification is given for the fact that the "concave" Conditional Tail Expectation preserves the convex order relation between random variables, and it is shown that this property does not necessarily hold for the quantile risk measure, as it is not a concave risk measure. Finally, the accuracy and efficiency of the two moment-matching approximations and of the comonotonic upper bound, comonotonic lower bound and "maximal variance" lower bound approximations are examined for a wide range of parameters by comparing them with results obtained by Monte Carlo simulation. The numerical results indicate that, in the current setting, the lower bound approach generally outperforms the other methods. Moreover, the preservation of the convex order relation between the convex bounds for the final wealth by the Conditional Tail Expectation is demonstrated numerically, and it is confirmed that this property does not necessarily hold for the quantile.
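The additivity property for comonotonic sums mentioned in the abstract can be illustrated numerically: for the comonotonic upper bound, a quantile of the sum is simply the sum of the marginal quantiles. The sketch below compares this bound with a Monte Carlo estimate for a correlated lognormal sum; all parameters are chosen arbitrarily for illustration.

```python
# Numerical illustration of quantile additivity for comonotonic sums: the
# quantile of the comonotonic upper bound is the sum of the marginal quantiles,
# compared here with a Monte Carlo quantile of a correlated lognormal sum.
import numpy as np
from scipy.stats import lognorm

if __name__ == "__main__":
    p = 0.995
    mu = np.array([0.0, 0.2, 0.4])          # lognormal log-means (assumed)
    sigma = np.array([0.5, 0.4, 0.3])       # lognormal log-std devs (assumed)
    rho = 0.6                                # common pairwise correlation of the normals
    corr = np.full((3, 3), rho) + (1 - rho) * np.eye(3)

    # Comonotonic upper bound: quantile of the sum = sum of marginal quantiles.
    q_upper = sum(lognorm.ppf(p, s=s, scale=np.exp(m)) for m, s in zip(mu, sigma))

    # Monte Carlo quantile of the correlated (but not comonotonic) sum.
    rng = np.random.default_rng(5)
    z = rng.multivariate_normal(np.zeros(3), corr, size=200_000)
    s_mc = np.exp(mu + sigma * z).sum(axis=1)
    q_mc = np.quantile(s_mc, p)

    print(f"99.5% quantile, comonotonic upper bound: {q_upper:.3f}")
    print(f"99.5% quantile, Monte Carlo (rho = 0.6): {q_mc:.3f}")
```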
358

Estimation of the maximum potential loss (VaR) of portfolios

Δημητράντζου, Χριστίνα 05 February 2015 (has links)
The complex form that the financial markets have acquired over the last two decades has resulted in the loss of very large amounts of capital by businesses and banks. The need for a systematic measurement of financial risk led to the introduction of the Value-at-Risk (VaR) measure. This method provides the interested party with a single number that expresses the maximum expected loss of an investment over a given time period and at a given confidence level. Despite the fact that VaR has some limitations that call for the use of stress tests and scenario tests, overall VaR is the best stand-alone risk measurement technique available. The aim of this thesis is to measure the VaR of a portfolio. In addition, the thesis explains what VaR is, how it can be computed, what its main characteristics are, and what its advantages and disadvantages are. Finally, particular emphasis is given to the presentation of the methods for computing VaR.
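Two standard ways of computing VaR, historical simulation and the parametric variance-covariance method, can be sketched in a few lines; whether these are exactly the methods the thesis presents is not stated in the abstract, and the simulated return sample and the 99% one-day setting below are assumptions used only to make the comparison concrete.

```python
# A brief sketch of two common VaR computation methods: historical simulation
# (an empirical quantile of past portfolio returns) and the parametric
# variance-covariance method (a normal quantile).
import numpy as np
from scipy.stats import norm

def historical_var(returns, alpha=0.01):
    """Historical-simulation VaR: the empirical alpha-quantile of returns, sign-flipped."""
    return -np.quantile(returns, alpha)

def parametric_var(returns, alpha=0.01):
    """Variance-covariance VaR under a normal assumption for the returns."""
    return -(returns.mean() + norm.ppf(alpha) * returns.std(ddof=1))

if __name__ == "__main__":
    rng = np.random.default_rng(13)
    r = rng.standard_t(df=5, size=1500) * 0.01          # fat-tailed daily returns
    print("99% historical VaR:", round(historical_var(r), 5))
    print("99% parametric VaR:", round(parametric_var(r), 5))
```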
359

Trading portfolio risk management in banking

Dzikevičius, Audrius 04 April 2006 (has links)
As the operating conditions of financial institutions change rapidly, as the volatility and scale of financial markets grow, and as ever new financial instruments appear and, with them, new types of risks for financial institutions, the need for trading portfolio risk management has increased markedly. This is evidenced by the fact that, after a series of bankruptcies or enormous losses suffered in the last decade of the previous century by solid and fairly conservative financial institutions such as Barings, Daiwa, Sumitomo, Metallgesellschaft, Orange County and Long Term Capital Management, the world's major financial institutions and national central banks began to tighten their market risk management procedures substantially. As the subsequent analysis of these companies' histories showed, the main cause of their losses or bankruptcies was an inability to manage properly the market risk affecting the trading portfolio. Trading portfolio risk management is also becoming increasingly relevant to Lithuanian financial institutions, especially to managers of investment and pension funds who invest in foreign securities denominated in currencies other than the litas or the euro. / The scientific problem of the dissertation is the search for adequacy of trading portfolio risk management methods and models to the current economic, technological and informational circumstances of financial institutions. The main features of scientific novelty of this research are the following: (i) the comparative study of Value at Risk estimation methods led to the important theoretical conclusion that the selection of a Value at Risk estimation method depends mostly on the characteristics of the portfolio under investigation, and theoretical recommendations regarding the selection of Value at Risk estimation methods were suggested; (ii) the comparative study of the performance of volatility forecasting models led to the important theoretical conclusion that the selection of a Value at Risk estimation method depends on the characteristics of the data under investigation and on the criteria selected for assessing forecasting accuracy; in the context of risk management, priority was given to operational rather than statistical accuracy assessment techniques; (iii) the comparative study of risk adjustment measures led to the important theoretical conclusion that the selection of risk adjustment measures depends mostly on the characteristics of the portfolio under investigation, and theoretical recommendations regarding the selection of risk adjustment measures were suggested as well.
360

Trading portfolio risk management in banking

Dzikevičius, Audrius 04 April 2006 (has links)
The scientific problem of the dissertation is the search for adequacy of trading portfolio risk management methods and models to the current economic, technological and informational circumstances of financial institutions. The main features of scientific novelty of this research are the following: (i) the comparative study of Value at Risk estimation methods led to the important theoretical conclusion that the selection of a Value at Risk estimation method depends mostly on the characteristics of the portfolio under investigation, and theoretical recommendations regarding the selection of Value at Risk estimation methods were suggested; (ii) the comparative study of the performance of volatility forecasting models led to the important theoretical conclusion that the selection of a Value at Risk estimation method depends on the characteristics of the data under investigation and on the criteria selected for assessing forecasting accuracy; in the context of risk management, priority was given to operational rather than statistical accuracy assessment techniques; (iii) the comparative study of risk adjustment measures led to the important theoretical conclusion that the selection of risk adjustment measures depends mostly on the characteristics of the portfolio under investigation, and theoretical recommendations regarding the selection of risk adjustment measures were suggested as well. / As the operating conditions of financial institutions change rapidly, as the volatility and volume of financial markets grow, and as ever new financial instruments appear and, with them, new types of risks for financial institutions, the need for trading portfolio risk management has increased markedly. Trading portfolio risk management is also becoming increasingly relevant to Lithuanian financial institutions, especially to managers of investment and pension funds who invest in foreign securities denominated in currencies other than the litas or the euro. Trading portfolio risk management is also a highly relevant topic for national central banks and for international institutions supervising the financial system.
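The "operational" accuracy assessment mentioned above is commonly implemented as a VaR backtest; the sketch below is one such illustration, in which a RiskMetrics-style EWMA volatility forecast drives a 99% VaR that is then backtested with Kupiec's proportion-of-failures test. The simulated returns, the decay factor of 0.94 and the choice of test are assumptions and are not taken from the dissertation.

```python
# A sketch of an operational (backtesting-style) accuracy check: an EWMA
# volatility forecast drives a 99% VaR, violations are counted, and Kupiec's
# proportion-of-failures likelihood-ratio statistic is computed.
import numpy as np
from scipy.stats import norm, chi2

def ewma_var_series(returns, lam=0.94, alpha=0.01):
    """One-step-ahead 99% VaR forecasts from an EWMA variance recursion."""
    var_t = returns[:30].var()                      # initialise with an early-sample variance
    forecasts = []
    for r in returns[30:]:
        forecasts.append(-norm.ppf(alpha) * np.sqrt(var_t))   # VaR for the coming day
        var_t = lam * var_t + (1.0 - lam) * r ** 2            # update after observing r
    return np.array(forecasts), returns[30:]

def kupiec_pof(violations, n, p=0.01):
    """Kupiec's likelihood-ratio test of the violation frequency against p."""
    x, phat = violations, violations / n            # assumes at least one violation
    lr = -2.0 * (x * np.log(p) + (n - x) * np.log(1 - p)
                 - x * np.log(phat) - (n - x) * np.log(1 - phat))
    return lr, chi2.sf(lr, df=1)                    # statistic and p-value

if __name__ == "__main__":
    rng = np.random.default_rng(21)
    r = rng.standard_t(df=6, size=1030) * 0.012     # fat-tailed daily returns
    var_fc, realised = ewma_var_series(r)
    hits = int(np.sum(-realised > var_fc))          # days when the loss exceeded the VaR
    lr, pval = kupiec_pof(hits, len(realised))
    print(f"violations: {hits}/{len(realised)}, Kupiec LR = {lr:.2f}, p-value = {pval:.3f}")
```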
