11 |
Risk Minimization in Power System Expansion and Power Pool Electricity Markets. Alvarez Lopez, Juan, January 2007.
Centralized power system planning covers time windows that range
from ten to thirty years. Consequently, it is the longest and most
uncertain part of power system economics. One of the challenges that
power system planning faces is the inability to accurately predict
random events; these random events introduce risk in the planning
process. Another challenge stems from the fact that, despite having
a centralized planning scheme, generation plans are set first and
then transmission expansion plans are carried out. This thesis
addresses these problems. A joint model for generation and
transmission expansion for the vertically integrated industry is
proposed. Randomness is considered in demand, equivalent
availability factors of the generators, and transmission capacity
factors of the transmission lines. The system expansion model is
formulated as a two-stage stochastic program with fixed recourse and
probabilistic constraints. The transmission network is included via
a DC approximation. Markowitz mean-variance theory is used as a
risk minimization technique to minimize the variance of the
annualized estimated generating cost. The expansion model can
decide where new generation and transmission are located and
choose the right mix of generating technologies.
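As an aside, the two-stage structure can be made concrete with a toy
deterministic-equivalent linear program: capacity is built in the first
stage, and dispatch or load shedding is decided once the demand scenario
is revealed (the recourse). This is only a minimal sketch under assumed
costs and scenarios, not the thesis model, which additionally carries
probabilistic constraints, a DC network representation, and the
variance term.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (assumptions, not from the thesis):
# one candidate generator, two demand scenarios.
c_cap = 120.0      # annualized capacity cost per MW
c_gen = 35.0       # generating cost per MWh (recourse)
c_pen = 500.0      # penalty per MWh of unserved energy
scenarios = [      # (probability, demand in MWh)
    (0.6, 80.0),
    (0.4, 140.0),
]

# Decision vector: [x, y1, u1, y2, u2]
#   x   = first-stage capacity to build
#   y_s = energy served in scenario s (recourse)
#   u_s = unserved energy in scenario s (recourse)
obj = [c_cap]
A_ub, b_ub = [], []
n_vars = 1 + 2 * len(scenarios)
for k, (p, d) in enumerate(scenarios):
    obj += [p * c_gen, p * c_pen]
    y_col, u_col = 1 + 2 * k, 2 + 2 * k
    # y_s <= x  (cannot dispatch more than the built capacity)
    row = [0.0] * n_vars
    row[0], row[y_col] = -1.0, 1.0
    A_ub.append(row); b_ub.append(0.0)
    # y_s + u_s >= d_s  (demand is either served or counted as unserved)
    row = [0.0] * n_vars
    row[y_col], row[u_col] = -1.0, -1.0
    A_ub.append(row); b_ub.append(-d)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_vars)
# With these assumed numbers the optimum builds for the high-demand
# scenario (140 MW), since the shedding penalty dwarfs the capacity cost.
print("capacity to build:", res.x[0])
print("expected cost    :", res.fun)
```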
The global trend is a move from regulated to deregulated power
systems. Power pool electricity markets, assuming
that the independent system operator is concerned with the social
cost minimization, face great uncertainties from supply and demand
bids submitted by market participants. In power pool electricity
markets, randomness in the cost and benefit functions, arising
from random demand and supply functions, has not been considered
before. This thesis treats all coefficients of the quadratic cost
and benefit functions as random and uses Markowitz mean-variance
theory to minimize the variance of the social cost. The impacts that this
risk minimization technique has on nodal prices and on the
elasticities of the supply and demand curves are studied.
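As a hedged sketch of the kind of objective involved (the exact
formulation in the thesis may differ): with quadratic cost and benefit
functions whose coefficients are random, the social cost is itself a
random variable, and the Markowitz idea is to control its variance
alongside its expectation,

```latex
\[
  SC(\mathbf{g},\mathbf{d})
    \;=\; \sum_{i}\bigl(a_i + b_i g_i + c_i g_i^{2}\bigr)
    \;-\; \sum_{j}\bigl(\alpha_j + \beta_j d_j + \gamma_j d_j^{2}\bigr),
  \qquad
  \min_{\mathbf{g},\mathbf{d}}\;\; \mathbb{E}\,[SC] + \lambda\,\mathrm{Var}\,[SC],
\]
```

where $g_i$ and $d_j$ are the cleared supply and demand quantities, the
coefficients $(a_i, b_i, c_i, \alpha_j, \beta_j, \gamma_j)$ are random,
and $\lambda \ge 0$ weights risk aversion.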
All the mathematical models in this thesis are exemplified by the
six-node network proposed by Garver in 1970, by the 21-node network
proposed by the IEEE Reliability Test System Task Force in 1979, and
by the IEEE 57- and 118-node systems.
|
13 |
Data sampling strategies in stochastic algorithms for empirical risk minimization. Csiba, Dominik, January 2018.
Gradient descent methods, and especially their stochastic variants, have become highly popular in the last decade due to their efficiency on big data optimization problems. In this thesis we present the development of data sampling strategies for these methods. In the first four chapters we focus on four views of sampling for convex problems, developing and analyzing new state-of-the-art methods that use non-standard data sampling strategies. In the last chapter we present a more flexible framework, which generalizes to more problems as well as more sampling rules.
In the first chapter we propose an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving the regularized empirical risk minimization (ERM) problem. Our modification allows the method to adaptively change the probability distribution over the dual variables throughout the iterative process. AdaSDCA achieves a provably better complexity bound than SDCA with the best fixed probability distribution, known as importance sampling. However, it is of a theoretical character, as it is expensive to implement. We also propose AdaSDCA+, a practical variant which in our experiments outperforms existing non-adaptive methods.
In the second chapter we extend the dual-free analysis of SDCA to arbitrary mini-batching schemes. Our method is able to better utilize the information in the data defining the ERM problem. For convex loss functions, our complexity results match those of QUARTZ, a primal-dual method that also allows arbitrary mini-batching schemes. The advantage of a dual-free analysis is that it guarantees convergence even for non-convex loss functions, as long as the average loss is convex. We illustrate through experiments the utility of being able to design arbitrary mini-batching schemes.
In the third chapter we study importance sampling of minibatches. Minibatching is a well-studied and highly popular technique in supervised learning, used by practitioners for its ability to accelerate training through better utilization of parallel processing power and reduction of stochastic variance. Another popular technique is importance sampling, a strategy for preferential sampling of more important examples that is also capable of accelerating the training process. However, despite considerable effort by the community in both areas, and due to the inherent technical difficulty of the problem, no existing work combines the power of importance sampling with the strength of minibatching. In this chapter we propose the first importance sampling for minibatches and give a simple and rigorous complexity analysis of its performance. We illustrate on synthetic problems that, for training data with certain properties, our sampling can lead to several orders of magnitude of improvement in training time. We then test the new sampling on several popular datasets and show that the improvement can reach an order of magnitude.
In the fourth chapter we ask whether randomized coordinate descent (RCD) methods should be applied to the ERM problem or rather to its dual. When the number of examples (n) is much larger than the number of features (d), a common strategy is to apply RCD to the dual problem. On the other hand, when the number of features is much larger than the number of examples, it makes sense to apply RCD directly to the primal problem. This chapter provides the first joint study of these two approaches when applied to L2-regularized ERM. First, we show through a rigorous analysis that for dense data the above intuition is precisely correct. However, we find that for sparse and structured data, primal RCD can significantly outperform dual RCD even if d ≪ n, and vice versa, dual RCD can be much faster than primal RCD even if n ≫ d. Moreover, we show that, surprisingly, a single sampling strategy minimizes both the (bound on the) number of iterations and the overall expected complexity of RCD. Note that the latter complexity measure also takes into account the average cost of the iterations, which depends on the structure and sparsity of the data and on the sampling strategy employed. We confirm our theoretical predictions with extensive experiments on both synthetic and real data sets.
In the last chapter we introduce two novel generalizations of the theory of gradient descent type methods in the proximal setting. First, we introduce the proportion function, which we use to analyze all known block-selection rules for coordinate descent methods under a single framework. This framework includes randomized methods with uniform, non-uniform or even adaptive sampling strategies, as well as deterministic methods with batch, greedy or cyclic selection rules. We additionally introduce a novel block selection technique called greedy minibatches, for which we provide competitive convergence guarantees. Second, the theory of strongly convex optimization was recently generalized to a specific class of non-convex functions satisfying the so-called Polyak-Łojasiewicz condition. To mirror this generalization in the weakly convex case, we introduce the weak Polyak-Łojasiewicz condition, under which we give global convergence guarantees for a class of non-convex functions previously not covered by theory. Additionally, we give local convergence guarantees for an even larger class of non-convex functions satisfying only a certain smoothness assumption. By combining these two generalizations we recover the state-of-the-art convergence guarantees for a large class of previously known methods and setups as special cases of our framework. We also provide new guarantees for many previously unconsidered combinations of methods and setups, as well as a broad class of novel non-convex objectives. The flexibility of our approach offers much potential for future research, as any new block selection procedure will have a convergence guarantee for all objectives considered in our framework, while any new objective analyzed under our approach will have a whole fleet of block selection rules with convergence guarantees readily available.
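A minimal illustration of the importance-sampling idea, written for plain SGD on least squares rather than for the minibatch SDCA setting of the thesis: sample example i with probability proportional to its smoothness constant L_i = ||a_i||^2 and reweight the sampled gradient by 1/(n p_i) so that it stays unbiased. The data, step size and iteration count below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d)) * rng.exponential(1.0, size=(n, 1))  # uneven row norms
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Importance sampling: p_i proportional to the per-example smoothness L_i = ||a_i||^2.
L = np.sum(A**2, axis=1)
p = L / L.sum()

# With the 1/(n p_i) reweighting every sampled gradient has the same effective
# smoothness mean(L), which permits the larger step 1/mean(L) instead of 1/max(L).
x = np.zeros(d)
step = 1.0 / L.mean()
for t in range(20000):
    i = rng.choice(n, p=p)
    # Unbiased estimate of the gradient of f(x) = (1/(2n)) * ||Ax - b||^2:
    # gradient of the i-th term, reweighted by 1/(n * p_i).
    g = (A[i] @ x - b[i]) * A[i] / (n * p[i])
    x -= step * g

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```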
|
14 |
Automatizace dokumentů jako nástroj minimalizace rizik / Document Automation as a Risk Minimization Tool. Roch, Eduard, January 2020.
This diploma thesis deals with the risks in document creation processes and the possibilities of minimizing them by implementing document automation tools. The thesis identifies the motives and goals of three major companies in the Czech Republic that decided to implement the document automation system Legito. Using the FMEA method, the thesis demonstrates the influence of document automation in minimizing the identified risks associated with document creation. Furthermore, the thesis analyzes the expected risks associated with implementing the document automation system and its subsequent adoption by end users. From the information obtained, the author formulates a general process for implementing document automation systems, defining the aspects necessary for their successful deployment in companies.
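For readers unfamiliar with FMEA scoring, a toy sketch of the kind of comparison the thesis performs: each failure mode gets a risk priority number RPN = severity x occurrence x detection (each rated 1-10), and automation is expected to lower occurrence and improve detection. The failure modes and ratings below are illustrative assumptions, not figures from the thesis.

```python
# Toy FMEA comparison for document creation before and after automation.
failure_modes = {
    # name: (severity, occurrence_manual, detection_manual, occurrence_auto, detection_auto)
    "wrong party identification": (8, 6, 5, 2, 3),
    "outdated clause version":    (7, 7, 6, 2, 2),
    "missing mandatory field":    (6, 5, 4, 1, 2),
}

for name, (sev, occ_m, det_m, occ_a, det_a) in failure_modes.items():
    rpn_manual = sev * occ_m * det_m      # RPN = severity x occurrence x detection
    rpn_auto = sev * occ_a * det_a
    print(f"{name:30s} manual RPN {rpn_manual:4d} -> automated RPN {rpn_auto:4d}")
```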
|
15 |
Harm reduction strategie užívání konopných drog z pohledu jejich uživatelů / Harm reduction strategies of cannabis drug use from the point of view of their users. Scherberová, Jana, January 2021.
Background: Cannabis is the most widely used illicit drug in the Czech Republic. About 1.78 million people use cannabis, most of them young people aged 15-34 years (Mravčík et al., 2020). Use at a young age and regular, intensive use of large amounts of cannabis are associated with negative impacts on the health and lives of users. Previous studies have described harm reduction strategies, but less is known about the relative occurrence of these strategies, especially in the Czech environment. Aims: The aim of the study was to investigate which harm reduction strategies are used by cannabis users. Methods: The research was conducted as a quantitative study. This mapping study aimed to describe the behaviour of cannabis users in relation to the use of harm reduction strategies and to explore the relative occurrence of these strategies. A questionnaire survey was used for data collection. Results: The harm reduction strategies that emerge most frequently among cannabis users relate to the effects of use on physical health. Most of these strategies focus on minimising the harms associated with smoking cannabis, particularly marijuana cigarettes. Mental health strategies are based on the concept of set, where users often do not use if they observe negative feelings...
|
16 |
Robust Post-donation Blood Screening under Limited Information. El-Amine, Hadi, 10 June 2016.
Blood products are essential components of any healthcare system, and their safety, in terms of being free of transfusion-transmittable infections, is crucial. While the Food and Drug Administration (FDA) in the United States requires all blood donations to be tested for a set of infections, it does not dictate which particular tests should be used by blood collection centers. Multiple FDA-licensed blood screening tests are available for each infection, but all screening tests are imperfectly reliable and have different costs. In addition, infection prevalence rates and several donor characteristics are uncertain, while surveillance methods are highly resource- and time-intensive. Therefore, only limited information is available to budget-constrained blood collection centers that need to devise a post-donation blood screening scheme so as to minimize the risk of an infectious donation being released into the blood supply. Our focus is on "robust" screening schemes under limited information. Toward this goal, we consider various objectives, and characterize structural properties of the optimal solutions under each objective. This allows us to gain insight and to develop efficient algorithms. Our research shows that using the proposed optimization-based approaches provides robust solutions with significantly lower expected infection risk compared to other testing schemes that satisfy the FDA requirements. Our findings have important public policy implications. / Ph. D.
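A heavily simplified sketch of the underlying selection problem, not the robust formulation of the dissertation: choose a subset of imperfect tests that minimizes the probability an infected donation passes every selected test, subject to a per-donation budget. The test names, sensitivities, costs, prevalence and the independence assumption are all illustrative.

```python
from itertools import combinations

# Illustrative candidate tests for a single infection: (name, sensitivity, cost per donation).
tests = [("ELISA-A", 0.990, 4.0), ("ELISA-B", 0.995, 6.0), ("NAT", 0.9995, 12.0)]
prevalence = 1e-4      # assumed probability that a donation is infected
budget = 16.0          # assumed screening budget per donation

def release_risk(subset):
    """P(an infected donation is released), assuming independent test errors."""
    p_miss = 1.0
    for _, sens, _ in subset:
        p_miss *= (1.0 - sens)
    return prevalence * p_miss

best = None
for r in range(len(tests) + 1):
    for subset in combinations(tests, r):
        cost = sum(c for _, _, c in subset)
        if cost <= budget:
            risk = release_risk(subset)
            if best is None or risk < best[0]:
                best = (risk, cost, [name for name, _, _ in subset])

print("chosen tests:", best[2], "cost:", best[1], "release risk:", best[0])
```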
|
17 |
Valuation, hedging and the risk management of insurance contracts. Barbarin, Jérôme, 03 June 2008.
This thesis aims at contributing to the study of the valuation of insurance liabilities and the management of the assets backing these liabilities. It consists of four parts, each devoted to a specific topic.
In the first part, we study the pricing of a classical single premium life insurance contract with profit, in terms of a guaranteed rate on the premium and a participation rate on the (terminal) financial surplus. We argue that, given the asset allocation of the insurer, these technical parameters should be determined by taking explicitly into account the risk management policy of the insurance company, in terms of a risk measure such as the value-at-risk or the conditional value-at-risk. We then design a methodology that allows us to fix both parameters in such a way that the contract is fairly priced and simultaneously exhibits a risk consistent with the risk management policy.
In the second part, we focus on the management of the surrender option embedded in most life insurance contracts. In Chapter 2, we argue that we should model the surrender time as a random time not adapted to the filtration generated by the financial asset prices, instead of assuming that the surrender time is an optimal stopping time, as is usual in the actuarial literature. We then study the valuation of insurance contracts with a surrender option in such a model. Here we follow the financial literature on default risk, in particular the reduced-form models.
In Chapters 3 and 4, we study the hedging strategies of such insurance contracts. In Chapter 3, we study their risk-minimizing strategies and in Chapter 4, we focus on their "locally risk-minimizing" strategies. As a by-product, we study the impact of a progressive enlargement of filtration on the so-called "minimal martingale measure".
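As background, a hedged sketch of the standard decomposition on which risk-minimizing strategies rest (Föllmer-Sondermann, via the Galtchouk-Kunita-Watanabe decomposition), stated in the simplest square-integrable martingale setting rather than in the enlarged-filtration setting of these chapters:

```latex
\[
  H \;=\; \mathbb{E}^{Q}[H]
    \;+\; \int_{0}^{T} \xi^{H}_{t}\,\mathrm{d}X_{t}
    \;+\; L_{T},
  \qquad L \ \text{a martingale strongly orthogonal to } X,
\]
```

the risk-minimizing strategy holds $\xi^{H}_t$ units of the traded asset $X$, and the residual, unhedgeable part of the claim is carried by the orthogonal term $L$.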
The third part is devoted to the systematic mortality risk. Due to its systematic nature, this risk cannot be diversified through increasing the size of the portfolio. It is thus also important to study the hedging strategies an insurer should follow to mitigate its exposure to this risk.
In Chapter 5, we study the risk-minimizing strategies for a life insurance contract when no mortality-linked financial assets are traded on the financial market. Here we extend Dahl and Møller's results and show that the risk-minimizing strategy of a life insurance contract is given by a weighted average of risk-minimizing strategies of purely financial claims, where the weights are given by the (stochastic) survival probabilities.
In Chapter 6, we first study the application of the HJM methodology to the modelling of a longevity bonds market and describe a coherent theoretical setting in which we can properly define the longevity bond prices. Then, we study the risk-minimizing strategies for pure endowments and annuities portfolios when these longevity bonds are traded.
Finally, the fourth part deals with the design of ALM strategies for a non-life insurance portfolio. In particular, this chapter aims at studying the risk-minimizing strategies for a non-life insurance company when inflation risk and interest rate risk are taken into account. We derive the general form of these strategies when the cumulative payments of the insurer are described by an arbitrary increasing process adapted to the natural filtration of a general marked point process and when the inflation and the term structure of interest rates are simultaneously described by the HJM model of Jarrow and Yildirim. We then systematically apply this result to four specific models of insurance claims. We first study two "collective" models. We then study two "individual" models where the claims are notified at a random time and settled through time.
|
18 |
Data-Dependent Analysis of Learning Algorithms. Philips, Petra Camilla (petra.philips@gmail.com), January 2005.
This thesis studies the generalization ability of machine learning algorithms in a statistical setting. It focuses on the data-dependent analysis of the generalization performance of learning algorithms in order to make full use of the potential of the actual training sample from which these algorithms learn.
First, we propose an extension of the standard framework for the derivation of
generalization bounds for algorithms taking their hypotheses from random classes of functions. This approach is motivated by the fact that the function produced by a learning algorithm based on a random sample of data depends on this sample and is therefore a random function. Such an approach avoids the detour of the worst-case uniform bounds as done in the standard approach. We show that the mechanism which allows one to obtain generalization bounds for random classes in our framework is based on a “small complexity” of certain random coordinate
projections. We demonstrate how this notion of complexity relates to learnability
and how one can explore geometric properties of these projections in order to derive estimates of rates of convergence and good confidence interval estimates for the expected risk. We then demonstrate the generality of our new approach by presenting a range of examples, among them the algorithm-dependent compression schemes and the data-dependent luckiness
frameworks, which fall into our random subclass framework.
Second, we study in more detail generalization bounds for a specific algorithm which is of central importance in learning theory, namely the Empirical Risk Minimization algorithm (ERM). Recent results show that one can significantly improve the high-probability estimates for the convergence rates for empirical minimizers by a direct analysis of the ERM algorithm.
These results are based on a new localized notion of complexity of subsets of hypothesis functions with identical expected errors and are therefore dependent on the underlying unknown distribution. We investigate the extent to which one can estimate these high-probability convergence rates in a data-dependent manner. We provide an algorithm which computes a data-dependent upper bound for the expected error of empirical minimizers in terms of the “complexity” of data-dependent local subsets. These subsets are sets of functions of empirical errors of a given range and can be
determined based solely on empirical data.
We then show that recent direct estimates, which are essentially sharp estimates on the high-probability convergence rate for the ERM algorithm, cannot be recovered universally from empirical data.
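For reference, the standard objects involved, definitions only rather than results of the thesis: the empirical and expected risks and the empirical risk minimizer,

```latex
\[
  \hat{R}_n(f) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(X_i),\,Y_i\bigr),
  \qquad
  R(f) \;=\; \mathbb{E}\,\ell\bigl(f(X),\,Y\bigr),
  \qquad
  \hat{f}_n \;\in\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \hat{R}_n(f),
\]
```

and the quantity whose high-probability convergence rate is at stake is the excess risk $R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f)$.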
|
19 |
Marchés financiers avec une infinité d'actifs, couverture quadratique et délits d'initiés / Financial markets with infinitely many assets, quadratic hedging and insider trading. Campi, Luciano, 18 December 2003.
This thesis consists of a series of applications of stochastic calculus to financial mathematics. It is composed of four chapters. In the first, we study the relationship between market completeness and the extremality of equivalent martingale measures in the case of infinitely many assets. In the second, we find conditions equivalent to the existence and uniqueness of an equivalent martingale measure under which the price process follows given n-dimensional laws for fixed n. In the third, we extend to a market with countably many assets a characterization of the optimal hedging strategy (under the mean-variance criterion) based on a change-of-numeraire and artificial-extension technique. Finally, in the fourth, we address the problem of hedging a contingent claim in a market with asymmetric information.
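A hedged note on the quadratic (mean-variance) hedging criterion behind the third chapter, in its standard form: one minimizes the expected squared hedging error

```latex
\[
  \min_{c,\,\varphi}\;
  \mathbb{E}\!\left[\Bigl(H \;-\; c \;-\; \int_{0}^{T}\varphi_{t}\,\mathrm{d}S_{t}\Bigr)^{2}\right],
\]
```

over an initial capital $c$ and admissible strategies $\varphi$, where $H$ is the contingent claim and $S$ the price process (here with countably many assets); the change-of-numeraire and artificial-extension technique is one route to characterizing the optimizer.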
|
20 |
Contingent Hedging: Applying Financial Portfolio Theory on Product Portfolios. Karlsson, Victor; Svensson, Rikard; Eklöf, Viktor, January 2012.
In an ever-changing global environment, the ability to adapt to the current economic climate is essential for a company to prosper and survive. Numerous previous studies state that better risk management and low overall risk lead to a higher firm value. The purpose of this study is to examine whether portfolio theory, developed for financial portfolios, can be used to compose product portfolios in order to minimize risk and optimize returns. The term contingent hedge is defined as an optimal portfolio that can be identified today and that in the future will yield a stable stream of returns at a low level of risk. For companies that might otherwise engage in costly hedging activities on the futures market, the benefits of creating a contingent hedge are several: it provides an optimized portfolio that minimizes risk and avoids trading futures contracts that would incur hefty transaction costs and risks.
Using quantitative financial models, product portfolio compositions are generated and compared with the return and risk profiles of individual commodities, as well as with the actual product portfolio compositions of publicly traded mining companies. Using Modern Portfolio Theory, an efficient frontier is generated, yielding two independent portfolios: the minimum risk portfolio and the tangency portfolio. The Black-Litterman model is also used to generate yet another portfolio, using a Bayesian approach. The portfolios are generated from historic time-series data and compared with the actual subsequent development of the commodities; the portfolios are then analyzed and compared.
The results indicate that the minimum risk portfolio provides a significantly lower risk than the compositions of all mining companies in the study, as well as the risks of the individual commodities. This in turn leads to several benefits for company management and the firm's shareholders, which are discussed throughout the study. However, for a return-optimizing portfolio, no significant results can be found. Furthermore, the analysis suggests a series of improvements that could potentially yield an even greater result. The recommendation is that mining companies can use the methods discussed in this study to generate a costless contingent hedge, rather than engage in hedging activities on futures markets.
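A minimal sketch of the two efficient-frontier portfolios used in the study, computed in closed form from an estimated mean vector and covariance matrix. The commodity names, return estimates, covariances and risk-free rate below are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# Assumed annualized expected returns and return covariance for three commodities.
assets = ["gold", "copper", "iron ore"]
mu = np.array([0.04, 0.08, 0.10])
cov = np.array([[0.030, 0.010, 0.008],
                [0.010, 0.060, 0.025],
                [0.008, 0.025, 0.090]])
rf = 0.02  # assumed risk-free rate
ones = np.ones(len(mu))
cov_inv = np.linalg.inv(cov)

# Minimum-variance portfolio: weights proportional to cov_inv @ 1.
w_min = cov_inv @ ones
w_min /= w_min.sum()

# Tangency (maximum Sharpe ratio) portfolio: weights proportional to cov_inv @ (mu - rf).
w_tan = cov_inv @ (mu - rf * ones)
w_tan /= w_tan.sum()

for name, a, b in zip(assets, w_min, w_tan):
    print(f"{name:10s} min-risk weight {a:6.3f}   tangency weight {b:6.3f}")
print("min-risk portfolio std :", np.sqrt(w_min @ cov @ w_min))
print("tangency portfolio std :", np.sqrt(w_tan @ cov @ w_tan))
```

In the study itself the inputs would come from historic commodity time series, and the Black-Litterman portfolio would additionally blend such estimates with investor views in a Bayesian fashion.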
|