11 |
A generalized Neyman-Pearson lemma for hedge problems in incomplete markets
Rudloff, Birgit, 07 October 2005 (has links) (PDF)
Some financial problems, such as minimizing the shortfall risk when hedging in incomplete markets, lead to problems belonging to test theory. This paper considers
a generalization of the Neyman-Pearson lemma. With methods of convex duality
we deduce the structure of an optimal randomized test when testing a compound
hypothesis against a simple alternative. We give necessary and sufficient optimality
conditions for the problem.
|
12 |
Finanční optimalizace / Financial optimization
Štolc, Zdeněk, January 2009 (has links)
This thesis focuses on the theoretical exposition of several models for the optimization of stock portfolios under different risk measures. The theory of nonlinear programming is developed in detail, along with the basic Markowitz model and further models based on alternative risk measures: the Konno--Yamazaki model, Roy's model, the semivariance approach and the Value at Risk approach. For all models, the assumptions underlying their application are highlighted, and the models are compared with one another. The analytical part is devoted to the construction of efficient portfolios according to the described models, based on the historical market prices of 13 companies traded in the SPAD segment of the Prague Stock Exchange.
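The basic Markowitz model named above reduces to a small quadratic program: minimize portfolio variance subject to a return target and a budget constraint. A minimal numerical sketch (not taken from the thesis; the expected returns, covariance matrix and return target below are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: expected returns and covariance matrix for 3 assets.
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])
target = 0.10  # required expected portfolio return

def portfolio_variance(w):
    return w @ cov @ w

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},    # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target},  # hit the return target
]
bounds = [(0.0, 1.0)] * 3                              # long-only positions

res = minimize(portfolio_variance, x0=np.full(3, 1 / 3),
               bounds=bounds, constraints=constraints)
w_opt = res.x
print("weights:", w_opt.round(4), "variance:", portfolio_variance(w_opt))
```

The alternative-risk-measure models compared in the thesis (Konno--Yamazaki, semivariance, VaR) keep this finite-dimensional structure but replace the variance objective or the constraints.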
|
13 |
Metody stochastického programováni pro investiční rozhodování / Stochastic Programming Methods for Investment Decisions
Kubelka, Lukáš, January 2014 (has links)
This thesis deals with methods of stochastic programming and their application to financial investment. The theoretical part is devoted to basic notions of mathematical optimization, stochastic programming and decision making under uncertainty. Further, basic principles of modern portfolio theory are introduced; a substantial part is devoted to risk measurement techniques in the context of investment, above all the Value at Risk and Expected Shortfall methods. The practical part is aimed at the creation of optimization models with an emphasis on minimizing investment risk. The models are built on real data and solved in the optimization software GAMS.
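Both risk measures named above can be estimated directly from a loss sample. A minimal sketch (not from the thesis), assuming a hypothetical lognormal loss distribution and a 95% level:

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # hypothetical losses

alpha = 0.95
var = np.quantile(losses, alpha)    # Value at Risk: the alpha-quantile of losses
es = losses[losses >= var].mean()   # Expected Shortfall: mean loss beyond VaR
print(f"VaR_{alpha}: {var:.3f}, ES_{alpha}: {es:.3f}")
```

Expected Shortfall averages the tail beyond the quantile, so it is always at least as large as VaR at the same level, which is why it is the more conservative of the two measures.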
|
14 |
Arbitrer coût et flexibilité dans la Supply Chain / Balancing cost and flexibility in Supply Chain
Gaillard de Saint Germain, Etienne, 17 December 2018 (has links)
This thesis develops optimization methods for Supply Chain Management and is focused on flexibility, defined as the ability to deliver a service or a product to a customer in an uncertain environment. The research was conducted within a partnership between Argon Consulting, an independent consulting firm in Supply Chain Operations, and the École des Ponts ParisTech. In this thesis, we explore three topics that are encountered by Argon Consulting and its clients and that correspond to three different levels of decision (long-term, mid-term and short-term).
When companies expand their product portfolio, they must decide in which plants to produce each item. This is a long-term decision: once it is made, it cannot easily be changed. More than an assignment problem in which each item is produced by a single plant, this problem consists in deciding whether some items should be produced in several plants, and by which plants. It is motivated by highly uncertain demand: to satisfy the demand, the assignment must be able to balance the workload between plants. We call this problem the multi-sourcing of production. Since it is not a repeated problem, it is essential to take risk into account when making the multi-sourcing decision. We propose a generic model that includes the technical constraints of the assignment and a risk-aversion constraint based on risk measures from financial theory. We develop an algorithm and a heuristic, based on standard tools from Operations Research and Stochastic Optimization, to solve the multi-sourcing problem, and we test their efficiency on real datasets.
Before planning the production, some macroscopic indicators must be decided at the mid-term level, such as the quantity of raw materials to order or the size of produced lots. Some companies use continuous-time inventory models, but these models often rely on a trade-off between holding costs and setup costs. The latter are fixed costs paid when production is launched and are hard to estimate in practice. On the other hand, at the mid-term level, the flexibility of the means of production is already fixed, and companies can easily estimate the maximal number of setups. Motivated by this observation, we propose extensions of some classical continuous-time inventory models with no setup costs and with a bound on the number of setups. We use standard tools from Continuous Optimization to compute the optimal macroscopic indicators.
Finally, planning the production is a short-term decision consisting in deciding which items must be produced by the assembly line during the current period. This problem belongs to the well-studied class of Lot-Sizing Problems. As for mid-term decisions, these problems often rely on a trade-off between holding and setup costs. Basing our model on industrial considerations, we keep the same point of view (no setup cost and a bound on the number of setups) and propose a new model. Although these are short-term decisions, production decisions must take future demand into account, which remains uncertain. We solve our production planning problem using standard tools from Operations Research and Stochastic Optimization, test its efficiency on real datasets, and compare it with the heuristics used by Argon Consulting's clients.
|
15 |
Optimal Reinsurance Designs: from an Insurer's Perspective
Weng, Chengguo, 09 1900 (has links)
Research on optimal reinsurance design dates back to the 1960s. For nearly half a century, the quest for optimal reinsurance designs has remained a fascinating subject, drawing significant interest from both academics and practitioners. Its fascination lies in its potential as an effective risk management tool for insurers. There are many ways of formulating the optimal design of reinsurance, depending on the chosen objective and constraints. In this thesis, we address the problem of optimal reinsurance design from an insurer's perspective. For an insurer, an appropriate use of reinsurance helps to reduce adverse risk exposure and improve the overall viability of the underlying business. On the other hand, reinsurance incurs an additional cost to the insurer in the form of the reinsurance premium. This implies a classical risk-reward trade-off faced by the insurer.
The primary objective of the thesis is to develop theoretically sound yet practical solutions in the quest for optimal reinsurance designs. To achieve this objective, the thesis is divided into two parts. In the first part, a number of reinsurance models are developed and their optimal reinsurance treaties are derived explicitly. This part focuses on risk-measure-minimization reinsurance models and discusses the optimal reinsurance treaties obtained under two of the most common risk measures, Value-at-Risk (VaR) and Conditional Tail Expectation (CTE). Some additional important economic factors, such as the reinsurance premium budget and the insurer's profitability, are also considered. The second part proposes an innovative method of formulating reinsurance models, which we refer to as the empirical approach since it explicitly exploits the insurer's empirical loss data. The empirical approach has the advantage of being practical and intuitively appealing. It is motivated by the difficulty that reinsurance models are often infinite-dimensional optimization problems, so explicit solutions are achievable only in some special cases. The empirical approach effectively reformulates the optimal reinsurance problem as a finite-dimensional optimization problem. Furthermore, we demonstrate that second-order cone programming can be used to obtain the optimal solutions for a wide range of reinsurance models formulated by the empirical approach.
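The flavour of the empirical approach can be conveyed with a much cruder sketch than the thesis's cone programs (everything below, including the loss data, loading factor and grid, is hypothetical): parameterize a stop-loss treaty by its retention and pick, over a grid, the retention minimizing the CTE of the retained loss plus the reinsurance premium, both computed from the empirical sample.

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.pareto(3.0, size=50_000) * 10.0  # hypothetical empirical loss data
theta = 0.2                                   # premium loading factor
alpha = 0.95                                  # CTE confidence level

def cte(sample, a):
    """Conditional Tail Expectation: mean of the worst (1 - a) fraction."""
    q = np.quantile(sample, a)
    return sample[sample >= q].mean()

def total_risk(d):
    retained = np.minimum(losses, d)          # insurer keeps losses up to d
    premium = (1 + theta) * np.maximum(losses - d, 0).mean()  # expected-value principle
    return cte(retained, alpha) + premium

grid = np.linspace(0.25, 60.0, 240)
best_d = min(grid, key=total_risk)
print(f"best retention d = {best_d:.2f}, total risk = {total_risk(best_d):.3f}")
```

The grid search stands in for the finite-dimensional optimization; the thesis's formulation optimizes over general ceded-loss functions rather than a single retention parameter.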
|
17 |
Comparing Approximations for Risk Measures Related to Sums of Correlated Lognormal Random Variables
Karniychuk, Maryna, 09 January 2007 (has links) (PDF)
In this thesis the performance of different approximations is
compared for a standard actuarial and financial problem: the
estimation of quantiles and conditional tail expectations of the
final value of a series of discrete cash flows.
To calculate risk measures such as quantiles and Conditional
Tail Expectations, one needs the distribution function of the
final wealth. The final value of a series of discrete payments in
the considered model is the sum of dependent lognormal random
variables. Unfortunately, its distribution function cannot be
determined analytically, so one usually has to resort to
time-consuming Monte Carlo simulations. Since computational time
remains a serious drawback of Monte Carlo simulation, several
analytical techniques for approximating the distribution function
of final wealth are proposed within the framework of this thesis.
These are the widely used moment-matching approximations and
innovative comonotonic approximations.
Moment-matching methods approximate the unknown distribution
function by a given one in such a way that some characteristics
(in the present case the first two moments) coincide. The ideas of
two well-known approximations are described briefly. Analytical
formulas for valuing quantiles and Conditional Tail Expectations
are derived for both approximations.
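The moment-matching idea can be illustrated with a small sketch (not code from the thesis; the choice of a lognormal approximant and the target moments are hypothetical): fit a lognormal to a prescribed mean and variance, then evaluate the quantile and the Conditional Tail Expectation in closed form.

```python
import numpy as np
from scipy.stats import lognorm, norm

# Hypothetical target moments for the approximating distribution.
m, v = 5.0, 4.0

# Solve E[X] = exp(mu + s^2/2) and Var[X] = E[X]^2 * (exp(s^2) - 1) for mu, s.
s2 = np.log(1.0 + v / m**2)
mu = np.log(m) - 0.5 * s2
s = np.sqrt(s2)

approx = lognorm(s=s, scale=np.exp(mu))
assert abs(approx.mean() - m) < 1e-9 and abs(approx.var() - v) < 1e-9

p = 0.95
q_p = approx.ppf(p)                              # analytical quantile
cte_p = m * norm.cdf(s - norm.ppf(p)) / (1 - p)  # lognormal CTE in closed form
print(f"quantile: {q_p:.3f}, CTE: {cte_p:.3f}")
```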
Recently, a group of researchers at the Catholic University of
Leuven in Belgium derived comonotonic upper and comonotonic
lower bounds for sums of dependent lognormal random variables.
These are bounds in terms of the "convex order". In order
to provide the theoretical background for comonotonic
approximations several fundamental ordering concepts such as
stochastic dominance, stop-loss and convex order and some
important relations between them are introduced. The last two
concepts are closely related. Both stochastic orders express which
of two random variables is the "less dangerous/more attractive"
one.
The central idea of the comonotonic upper bound approximation is to
replace the original sum, representing final wealth, by a new sum
whose components have the same marginal distributions as the
components of the original sum, but a "more dangerous/less
attractive" dependence structure. The upper bound, that is, the
largest sum in the convex order, is obtained when the components
of the sum form a comonotonic random vector.
Therefore, fundamental concepts of comonotonicity theory which are
important for the derivation of convex bounds are introduced. The
most widespread examples of comonotonicity that emerge in a
financial context are described.
In addition to the upper bound a lower bound can be derived as
well. This provides one with a measure of the reliability of the
upper bound. The lower bound approach is based on the technique of
conditioning. It is obtained by applying Jensen's inequality for
conditional expectations to the original sum of dependent random
variables. Two slightly different versions of the conditioning
random variable are considered in this thesis. They give
rise to two different approaches which are referred to as
comonotonic lower bound and comonotonic "maximal variance" lower
bound approaches.
Special attention is given to the class of distortion risk
measures. It is shown that the quantile risk measure as well as
Conditional Tail Expectation (under some additional conditions)
belong to this class. It is proved that both risk measures under
consideration are additive for sums of comonotonic random
variables, i.e. the quantile and Conditional Tail Expectation of
the comonotonic upper and lower bounds can easily be obtained by
summing the corresponding risk measures of the marginals involved.
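The additivity property just described is easy to check numerically: for a comonotonic vector, every component is an increasing transform of one common uniform variable, so the quantile of the sum equals the sum of the marginal quantiles. A sketch with hypothetical lognormal marginals (not from the thesis):

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(2)
# A comonotonic vector: every component is an increasing transform of
# the same underlying uniform random variable U.
u = rng.uniform(size=200_000)
margins = [lognorm(s=s) for s in (0.3, 0.5, 0.8)]  # hypothetical marginals
comonotonic_sum = sum(m.ppf(u) for m in margins)

p = 0.99
q_of_sum = np.quantile(comonotonic_sum, p)   # quantile of the comonotonic sum
sum_of_q = sum(m.ppf(p) for m in margins)    # sum of the marginal quantiles
print(q_of_sum, sum_of_q)                    # equal up to sampling error
```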
A special subclass of distortion risk measures, referred to as
the class of concave distortion risk measures, is also
considered. It is shown that the quantile risk measure is not a
concave distortion risk measure while Conditional Tail Expectation
(under some additional conditions) is a concave distortion risk
measure. A theoretical justification is given for the fact that the
"concave" Conditional Tail Expectation preserves the convex order
relation between random variables. It is shown that this property
does not necessarily hold for the quantile risk measure, as it is
not a concave risk measure.
Finally, the accuracy and efficiency of the two moment-matching
approximations and of the comonotonic upper bound, comonotonic
lower bound and "maximal variance" lower bound approximations are
examined for a wide range of parameters by comparison with the
results obtained by Monte Carlo simulation. The numerical results
show that, in the setting considered, the lower bound approaches
generally outperform the other methods. Moreover, the numerical
results confirm that Conditional Tail Expectation preserves the
convex order relation between the convex bounds for the final
wealth, and that this property does not necessarily hold for the
quantile.
|
18 |
Essays in mathematical finance
Murgoci, Agatha, January 2009 (has links)
Diss. Stockholm : Handelshögskolan, 2009
|