41
Optimalizace parametrů zajištění v pojišťovnictví / Optimization of reinsurance parameters in insurance
Dlouhá, Veronika, January 2017 (has links)
This thesis is dedicated to searching for optimal parameters of reinsurance, with a focus on quota-share and stop-loss reinsurance. The optimization is based on minimization of the value at risk and conditional value at risk of the total costs of the insurer for the received risk. It also presents the compound random variable and shows various methods of obtaining its probability distribution, for example approximation by lognormal or gamma mixture distributions, or by the Panjer recursive method for continuous severity distributions together with a numerical method for its solution. At the end of the thesis we present the calculation of the optimal reinsurance parameters for a compound random variable based on real data, using various methods to determine the probability distribution and premiums.
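The core calculation the abstract describes can be sketched in a few lines. The snippet below is a minimal Monte Carlo illustration, not the thesis's model: it simulates a compound Poisson-lognormal claim distribution, applies quota-share and stop-loss treaties with assumed premium loadings, and compares the empirical VaR and CVaR of the insurer's retained cost. All frequency, severity, loading and retention parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 20_000

# Compound claim amount: Poisson frequency, lognormal severity (illustrative parameters).
freq = rng.poisson(lam=50, size=n_sims)
total_claims = np.array([rng.lognormal(mean=8.0, sigma=1.0, size=n).sum() for n in freq])

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR (expected loss beyond VaR) at level alpha."""
    var = np.quantile(losses, alpha)
    return var, losses[losses >= var].mean()

# Quota-share: cede a share q of every claim; reinsurance premium with loading theta_q.
q, theta_q = 0.3, 0.15
qs_premium = (1 + theta_q) * q * total_claims.mean()
qs_cost = (1 - q) * total_claims + qs_premium

# Stop-loss: reinsurer pays the excess of total claims over retention d, loading theta_s.
d, theta_s = 3.0e5, 0.30
sl_premium = (1 + theta_s) * np.maximum(total_claims - d, 0).mean()
sl_cost = np.minimum(total_claims, d) + sl_premium

for name, cost in [("no reinsurance", total_claims),
                   ("quota-share", qs_cost),
                   ("stop-loss", sl_cost)]:
    v, c = var_cvar(cost)
    print(f"{name:15s}  VaR95={v:12.0f}  CVaR95={c:12.0f}")
```

Searching over the treaty parameters (q, d) for the smallest CVaR of retained cost is then a direct extension of this evaluation loop.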
42
Stochastic Optimization for Integrated Energy System with Reliability Improvement Using Decomposition Algorithm
Huang, Yuping, 01 January 2014 (has links)
As energy demands increase and energy resources change, the traditional energy system has been upgraded and reconstructed to support societal development and sustainability. Considerable studies have been conducted on energy expansion planning and electricity generation operations, mainly considering the integration of traditional fossil-fuel generation with renewable generation. Because the energy market is full of uncertainty, these uncertainties have continuously challenged market design and operations, and even national energy policy. In fact, only a few studies have optimized energy expansion and generation while taking into account the variability and uncertainty of energy supply and demand in energy markets. This often leaves an energy system unable to cope reliably with unexpected changes, such as a surge in fuel price, a sudden drop in demand, or a large fluctuation in renewable supply. Thus, for an overall energy system, optimizing long-term expansion planning and market operations in a stochastic environment is crucial to improving the system's reliability and robustness. As little consideration has been paid to imposing risk measures on the power management system, this dissertation applies risk-constrained stochastic programming to improve the efficiency, reliability and economics of energy expansion and electric power generation, respectively. Considering the supply-demand uncertainties affecting energy system stability, three optimization strategies are proposed to enhance the overall reliability and sustainability of an energy system. The first strategy optimizes regional energy expansion planning, focusing on capacity expansion of the natural gas system, the power generation system and the renewable energy system, in addition to the transmission network. With strong support from natural gas and electric facilities, the second strategy provides an optimal day-ahead schedule for the electric power generation system, incorporating non-generation resources, i.e., demand response and energy storage. Because of risk aversion, this generation schedule gives the power system higher reliability and promotes non-generation resources in the smart grid. To take full advantage of power generation sources, the third strategy replaces the traditional energy reserve requirements with risk constraints while ensuring the same level of system reliability. In this way we can maximize the use of existing resources to accommodate internal and/or external changes in a power system. All problems are formulated as stochastic mixed-integer programs, particularly considering the uncertainties in fuel price, renewable energy output and electricity demand over time. Taking advantage of the model structure, new decomposition strategies are proposed to decompose the stochastic unit commitment problems, which are then solved by an enhanced Benders decomposition algorithm. Compared to classic Benders decomposition, the proposed solution approach increases convergence speed and thus reduces computation times by 25% on the same cases.
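As a minimal illustration of the two-stage stochastic structure underlying such expansion models (and nothing like the dissertation's risk-constrained MIPs or Benders scheme), the sketch below solves a toy capacity-expansion LP in extensive form with scipy: first-stage gas and wind capacities, second-stage dispatch and load shedding per scenario. All costs, demands, capacity factors and probabilities are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Scenarios: (probability, demand [MW], wind capacity factor, gas fuel cost [$/MWh])
scenarios = [(0.3,  900, 0.40, 35.0),
             (0.5, 1100, 0.25, 45.0),
             (0.2, 1400, 0.10, 60.0)]
c_gas_cap, c_wind_cap, c_shed = 25.0, 8.0, 500.0   # toy investment and shedding costs

S = len(scenarios)
# Variable order: [x_gas, x_wind, (y_gas_s, y_wind_s, shed_s) for each scenario]
n = 2 + 3 * S
c = np.zeros(n)
c[0], c[1] = c_gas_cap, c_wind_cap
for s, (p, d, cf, fuel) in enumerate(scenarios):
    c[2 + 3*s: 2 + 3*s + 3] = [p * fuel, p * 1.0, p * c_shed]   # expected operating cost

A_ub, b_ub = [], []
for s, (p, d, cf, fuel) in enumerate(scenarios):
    yg, yw, sh = 2 + 3*s, 3 + 3*s, 4 + 3*s
    row = np.zeros(n); row[yg] = 1; row[0] = -1            # gas dispatch <= gas capacity
    A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(n); row[yw] = 1; row[1] = -cf           # wind dispatch <= cf * wind capacity
    A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(n); row[yg] = row[yw] = row[sh] = -1    # serve the scenario demand
    A_ub.append(row); b_ub.append(-d)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), method="highs")
print("gas capacity :", round(res.x[0], 1), "MW")
print("wind capacity:", round(res.x[1], 1), "MW")
print("expected cost:", round(res.fun, 0))
```

In an extensive form like this the scenario blocks share only the capacity variables, which is exactly the structure Benders-type decomposition exploits when the scenario set becomes large.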
43
How to Get Rich by Fund of Funds Investment - An Optimization Method for Decision Making
Colakovic, Sabina, January 2022 (has links)
Optimal portfolios have historically been computed using standard deviation as a risk measure. However, extreme market events have become the rule rather than the exception. To capture tail risk, investors have started to look for alternative risk measures such as Value-at-Risk and Conditional Value-at-Risk. This research analyzes the financial model referred to as Markowitz 2.0, provides historical context and perspective on the model, and gives a mathematical formulation. Moreover, a practical implementation is presented and an optimizer that captures the risk of non-extreme events is constructed, meeting the need for more customized investment decisions based on investment preferences. Optimal portfolios are generated and an efficient frontier is constructed. The results obtained are then compared with those obtained through the mean-variance optimization framework. As concluded from the data, the optimal portfolio with the generated optimal weights performs better in terms of expected portfolio return relative to the risk level of the investment.
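A minimal sketch of the linear-programming machinery such CVaR optimizers typically rest on is given below, using the Rockafellar-Uryasev scenario formulation on simulated returns; it is not the thesis's Markowitz 2.0 implementation, and the return model, scenario count and constraints are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_assets, n_scen, alpha = 4, 1000, 0.95

# Simulated daily return scenarios (placeholder for real scenario data).
mu = np.array([0.06, 0.08, 0.10, 0.04]) / 252
cov = np.diag([0.10, 0.15, 0.25, 0.05]) ** 2 / 252
R = rng.multivariate_normal(mu, cov, size=n_scen)

# Variables: [w (n_assets), zeta, u (n_scen)]; minimize zeta + mean excess / (1 - alpha).
n = n_assets + 1 + n_scen
c = np.zeros(n)
c[n_assets] = 1.0
c[n_assets + 1:] = 1.0 / ((1 - alpha) * n_scen)

# Loss_s = -R_s @ w; enforce u_s >= loss_s - zeta  <=>  -R_s @ w - zeta - u_s <= 0.
A_ub = np.zeros((n_scen, n))
A_ub[:, :n_assets] = -R
A_ub[:, n_assets] = -1.0
A_ub[:, n_assets + 1:] = -np.eye(n_scen)
b_ub = np.zeros(n_scen)

# Fully invested, long-only; zeta is free, u is nonnegative.
A_eq = np.zeros((1, n)); A_eq[0, :n_assets] = 1.0
b_eq = [1.0]
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("weights:", np.round(res.x[:n_assets], 3), " minimized CVaR:", round(res.fun, 5))
```

Adding a minimum expected-return constraint as one more inequality row and sweeping its level is one way to trace out the efficient frontier mentioned in the abstract.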
44
Novel Approaches for Some Stochastic and Deterministic Scheduling Problems
Liao, Lingrui, 01 July 2011 (has links)
In this dissertation, we develop novel approaches to independently address two issues that are commonly encountered in machine scheduling problems: uncertainty of problem parameters (in particular, due to job processing times), and batching of jobs for processing on capacitated machines.
Our approach to address the uncertainty issue regards the indeterminate parameters as random variables and explicitly considers the resulting variability of a performance measure. To incorporate variability into the schedule selection process, we develop a method to evaluate both the expectation and the variance of various performance measures for a given schedule. Our method is based on the use of mixture models to approximate a variety of distribution types. The Expectation-Maximization algorithm of Dempster et al. (1977) is applied to derive mixture models of processing time distributions. Our method then utilizes these mixture models to calculate the distributions of other random variables in order to derive the expectation and variance of various scheduling performance measures, assuming that the job sequencing decisions are known a priori. To make our method more computationally efficient, we adapt a mixture reduction method to control the number of mixture components used in the intermediate steps. We apply our method to two different scheduling problems: the job shop makespan scheduling problem and the single machine total weighted tardiness scheduling problem, and compare its performance with that of the Monte Carlo method. The results show the efficacy of our mixture approximation method: it generates fairly accurate results while requiring significantly less CPU time. The proposed method offers a good compromise between the Monte Carlo method, which requires extensive effort, and the use of a simple normal approximation, which produces lower-quality results.
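A rough sketch of the mixture-model idea is shown below: processing-time samples are fitted with Gaussian mixtures via the EM algorithm (scikit-learn), and the expectation and variance of total weighted tardiness for a fixed sequence are then estimated from the fitted mixtures. The thesis propagates these moments analytically through the mixtures; the sketch instead resamples from them, and all job data are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Historical processing-time samples for 3 jobs (illustrative; job 2 is bimodal).
samples = [rng.normal(10, 1.5, 200),
           np.concatenate([rng.normal(6, 0.5, 120), rng.normal(12, 1.0, 80)]),
           rng.normal(8, 1.0, 200)]

# Fit a two-component Gaussian mixture per job with the EM algorithm.
mixtures = [GaussianMixture(n_components=2, random_state=0).fit(s.reshape(-1, 1))
            for s in samples]

due, weight = np.array([15.0, 20.0, 30.0]), np.array([3.0, 1.0, 2.0])

def twt(proc_times):
    """Total weighted tardiness of the fixed sequence 0 -> 1 -> 2 on one machine."""
    completion = np.cumsum(proc_times, axis=1)
    return (weight * np.maximum(completion - due, 0.0)).sum(axis=1)

# Estimate moments of the measure by drawing from the fitted mixtures.
n_draws = 50_000
cols = []
for gm in mixtures:
    col = gm.sample(n_draws)[0].ravel()
    rng.shuffle(col)            # sample() groups draws by component; shuffle to mix them
    cols.append(col)
values = twt(np.column_stack(cols))
print("E[TWT] ~", round(values.mean(), 2), "  Var[TWT] ~", round(values.var(), 2))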
Next, we introduce and demonstrate, for the first time in the literature, the use of conditional value-at-risk (CVaR) as a criterion for stochastic scheduling problems in order to obtain risk-averse solutions. This criterion tends to minimize both the expectation and the variance of a performance measure simultaneously, which is an attractive feature in the scheduling area, as most of the literature considers the expectation and variance of a performance measure separately. The CVaR also has the added advantage of maintaining a linear objective function. We develop a scenario-based mixed integer programming formulation to minimize CVaR for the general scheduling problem involving various performance measures, and employ a decomposition-based approach for its solution. Furthermore, a set of valid inequalities is incorporated to strengthen the relaxed master problem of this decomposition scheme. The proposed approach is demonstrated on the single machine total weighted tardiness scheduling problem. Our computational investigation reveals the efficacy of the proposed decomposition approach and the effectiveness of using CVaR as an optimization criterion for scheduling problems. Besides providing an exact approach to solve our stochastic scheduling problem, we also develop an efficient heuristic method to enable the use of CVaR for large-sized problems. To that end, we modify the Dynasearch method of Grosso et al. (2004) to minimize CVaR for a stochastic scheduling problem. Furthermore, we extend the application of CVaR to a parallel-machine total weighted tardiness problem. The use of CVaR appears to be quite promising for simultaneously controlling both the expected value and the variability of a performance measure in a stochastic scheduling environment.
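The criterion itself is easy to illustrate outside the MIP: for a fixed job sequence, total weighted tardiness is evaluated over processing-time scenarios and its CVaR is the average of the worst tail of outcomes. The sketch below compares two candidate sequences on invented data; the decomposition-based exact approach and the Dynasearch heuristic are not attempted here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_jobs, n_scen, alpha = 5, 2000, 0.9

# Scenario processing times (lognormal, illustrative), due dates and weights.
mean_p = np.array([4.0, 6.0, 3.0, 8.0, 5.0])
proc = rng.lognormal(np.log(mean_p), 0.3, size=(n_scen, n_jobs))
due = np.array([10.0, 14.0, 8.0, 25.0, 18.0])
w = np.array([2.0, 1.0, 3.0, 1.0, 2.0])

def cvar(losses, a):
    """Average of the worst (1 - a) share of losses."""
    var = np.quantile(losses, a)
    return losses[losses >= var].mean()

def eval_sequence(seq):
    completion = np.cumsum(proc[:, seq], axis=1)
    twt = (w[seq] * np.maximum(completion - due[seq], 0.0)).sum(axis=1)
    return twt.mean(), twt.std(), cvar(twt, alpha)

for seq in ([0, 1, 2, 3, 4], [2, 0, 4, 1, 3]):   # two candidate sequences
    m, s, c = eval_sequence(seq)
    print(f"sequence {seq}: mean={m:6.2f}  std={s:6.2f}  CVaR{int(alpha*100)}={c:6.2f}")
```

Ranking sequences by the CVaR column rather than the mean is the risk-averse selection the abstract argues for.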
Scenario-based formulations have frequently been used for stochastic scheduling problems. However, determining a lower bound can be a time-consuming task with this approach. Next, we develop a new method for scenario generation that is computationally competitive and that assures attainment of an exact lower bound. Our approach is based on discretization of the random parameter distributions of job processing times. We use the idea of recursive stratified sampling to partition the probability space, so that the conditional expectations in each region yield scenario-wise parameter values. These scenarios are then used to formulate a two-stage stochastic program, which yields a lower bound for the original stochastic problem. We provide the theoretical basis of our bounding approach for both the expectation and CVaR objectives. Our discrete bounding method generates exact lower bounds, as opposed to the probabilistic bounds generated by Sample Average Approximation. We also present results of our numerical experimentation comparing the performance of these two approaches in terms of the bound value obtained and the CPU time required.
The problem of integrated batching and scheduling of jobs on capacitated parallel machines that we consider arises in the primary manufacturing sector of a pharmaceutical supply chain. We first develop a comprehensive mathematical programming model that can accommodate various realistic features of this problem, including batch production, sequence-dependent setup time/cost, and inter-period carryover of setup status. We further derive several valid inequalities based on the embedded subproblem structure. We also consider an alternative formulation (termed the Plant Location model) based on the lot-sizing perspective of the problem. Noting the resemblance of the campaign sequencing subproblem to the high multiplicity asymmetric traveling salesman problem (HMATSP), we adapt various ideas from the HMATSP to enforce the connectivity of the sequencing graph. Due to the complexity of this problem, we also explore the possibility of applying a column generation technique for its solution. Various schemes of problem decomposition are considered, along with the use of a dual stabilization technique to improve the convergence of the column generation procedure. We also develop heuristic methods to generate initial feasible solutions that further enhance the performance of the column generation method. Computational experimentation has been conducted on a data set that mimics real-life problem instances and illustrates the effectiveness of the proposed column generation method. / Ph. D.
45
[en] RISK ANALYSIS IN A PORTFOLIO OF COMMODITIES: A CASE STUDY / [pt] ANÁLISE DE RISCOS NUM PORTFÓLIO DE COMMODITIES: UM ESTUDO DE CASO
LUCIANA SCHMID BLATTER MOREIRA, 23 March 2015 (has links)
[en] One of the main challenges in the financial market is to simulate prices while keeping the correlation structure among the numerous assets in a portfolio. Principal Component Analysis emerges as a solution to this problem. Moreover, given the uncertainty present in oil-derivative commodity markets, an investor wants to protect his or her assets from potential losses; the optimization of risk measures such as Value-at-Risk, Conditional Value-at-Risk and the Omega ratio provides important financial tools for this purpose. Additionally, backtesting is widely used to validate and analyze the performance of the proposed methodology. In this dissertation, we work with a portfolio of oil commodities. We put together different techniques and propose a new methodology that consists in (potentially) decreasing the dimension of the proposed portfolio. The following step is to simulate the prices of the assets in the portfolio and then optimize the allocation of the portfolio of oil commodities. Finally, we use backtesting techniques in order to validate our method.
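A condensed sketch of the simulation-and-risk part of this pipeline is given below on synthetic data: principal components of historical returns drive the scenario simulation, preserving the correlation structure, and the portfolio's VaR and CVaR are then estimated empirically. The asset set, factor count and equal-weight portfolio are assumptions, and the optimization and backtesting steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_assets, n_scen, alpha = 1000, 5, 10_000, 0.95

# Synthetic "historical" returns standing in for oil-commodity return series.
true_cov = 0.0001 * (np.full((n_assets, n_assets), 0.6) + 0.4 * np.eye(n_assets))
hist = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_obs)

# PCA via eigendecomposition of the sample covariance; keep the k leading components.
mu = hist.mean(axis=0)
cov = np.cov(hist, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
k = 2
V, lam = eigvec[:, order[:k]], eigval[order[:k]]

# Simulate independent factor scores and map back, preserving the correlation structure.
scores = rng.normal(size=(n_scen, k)) * np.sqrt(lam)
sim_returns = mu + scores @ V.T

weights = np.full(n_assets, 1.0 / n_assets)      # equal-weight portfolio for illustration
losses = -sim_returns @ weights
var = np.quantile(losses, alpha)
cvar = losses[losses >= var].mean()
print(f"VaR{int(alpha*100)} = {var:.4%}   CVaR{int(alpha*100)} = {cvar:.4%}")
```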
46
Portfolio selection and hedge funds: linearity, heteroscedasticity, autocorrelation and tail-risk
Bianchi, Robert John, January 2007 (has links)
Portfolio selection has a long tradition in financial economics and plays an integral role in investment management. Portfolio selection provides the framework to determine optimal portfolio choice from a universe of available investments. However, the asset weightings from portfolio selection are optimal only if the empirical characteristics of asset returns do not violate the portfolio selection model assumptions. This thesis explores the empirical characteristics of traditional assets and hedge fund returns and examines their effects on the assumptions of linearity-in-the-mean testing and portfolio selection. The encompassing theme of this thesis is the empirical interplay between traditional assets and hedge fund returns. Despite the paucity of hedge fund research, pension funds continue to increase their portfolio allocations to global hedge funds in an effort to pursue higher risk-adjusted returns. This thesis presents three empirical studies which provide positive insights into the relationships between traditional assets and hedge fund returns. The first two empirical studies examine an emerging body of literature which suggests that the relationship between traditional assets and hedge fund returns is non-linear. For mean-variance investors, non-linear asset returns are problematic as they do not satisfy the assumption of linearity required for the covariance matrix in portfolio selection. To examine the linearity assumption as it relates to a mean-variance investor, a hypothesis test approach is employed which investigates the linearity-in-the-mean of traditional assets and hedge funds. The findings from the first two empirical studies reveal that conventional linearity-in-the-mean tests incorrectly conclude that asset returns are nonlinear. We demonstrate that the empirical characteristics of heteroscedasticity and autocorrelation in asset returns are the primary sources of test mis-specification in these linearity-in-the-mean hypothesis tests. To address this problem, an innovative approach is proposed to control heteroscedasticity and autocorrelation in the underlying tests and it is shown that traditional assets and hedge funds are indeed linear-in-the-mean. The third and final study of this thesis explores traditional assets and hedge funds in a portfolio selection framework. Following the theme of the previous two studies, the effects of heteroscedasticity and autocorrelation are examined in the portfolio selection context. The characteristics of serial correlation in bond and hedge fund returns are shown to cause a downward bias in the second sample moment. This thesis proposes two methods to control for this effect and it is shown that autocorrelation induces an overallocation to bonds and hedge funds. Whilst heteroscedasticity cannot be directly examined in portfolio selection, empirical evidence suggests that heteroscedastic events (such as those that occurred in August 1998) translate into the empirical feature known as tail-risk. The effects of tail-risk are examined by comparing the portfolio decisions of mean-variance analysis (MVA) versus mean-conditional value at risk (M-CVaR) investors. The findings reveal that the volatility of returns in a MVA portfolio decreases when hedge funds are included in the investment opportunity set. However, the reduction in the volatility of portfolio returns comes at a cost of undesirable third and fourth moments. 
Furthermore, it is shown that investors with M-CVaR preferences exhibit a decreasing demand for hedge funds as their aversion for tail-risk increases. The results of the thesis highlight the sensitivities of linearity tests and portfolio selection to the empirical features of heteroscedasticity, autocorrelation and tail-risk. This thesis contributes to the literature by providing refinements to these frameworks which allow improved inferences to be made when hedge funds are examined in linearity and portfolio selection settings.
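The downward bias that return smoothing induces in the second sample moment, discussed above, can be reproduced in a few lines. The sketch below uses a standard Geltner-type unsmoothing correction as one illustration; it is not necessarily either of the two correction methods proposed in the thesis, and the smoothing parameter is invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n, true_sigma, theta = 5000, 0.02, 0.4

# "True" i.i.d. returns and a smoothed (reported) series
# r_obs_t = (1 - theta) * r_t + theta * r_obs_{t-1}, mimicking appraisal smoothing
# often seen in hedge fund return series.
r_true = rng.normal(0.005, true_sigma, n)
r_obs = np.empty(n)
r_obs[0] = r_true[0]
for t in range(1, n):
    r_obs[t] = (1 - theta) * r_true[t] + theta * r_obs[t - 1]

# Geltner-type unsmoothing: r_hat_t = (r_obs_t - rho * r_obs_{t-1}) / (1 - rho),
# with rho the lag-1 autocorrelation of the observed series.
rho = np.corrcoef(r_obs[1:], r_obs[:-1])[0, 1]
r_unsmoothed = (r_obs[1:] - rho * r_obs[:-1]) / (1 - rho)

print("std true      :", round(r_true.std(), 5))
print("std observed  :", round(r_obs.std(), 5), "(biased down)")
print("std unsmoothed:", round(r_unsmoothed.std(), 5))
```

The understated volatility of the smoothed series is what drives the over-allocation to bonds and hedge funds reported above when the raw series are fed into a mean-variance optimizer.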
47
[en] HEDGING RENEWABLE ENERGY SALES IN THE BRAZILIAN CONTRACT MARKET VIA ROBUST OPTIMIZATION / [pt] MODELO DE CONTRATAÇÃO PARA FONTES RENOVÁVEIS COM ROBUSTEZ AO PREÇO DE CURTO-PRAZO
BRUNO FANZERES DOS SANTOS, 26 March 2018 (links)
[en] The energy spot price is characterized by high volatility and difficult prediction, representing a major risk for energy companies, especially those that rely on renewable generation. The typical approach employed by such companies to address their mid- and long-term optimal contracting strategy is to simulate a large set of paths for the uncertainty factors, characterize the probability distribution of the future income, and then optimize the company's portfolio to maximize its certainty equivalent. In practice, however, spot price modeling and simulation is a big challenge for agents due to its high dependence on parameters that are difficult to predict, e.g., GDP growth, demand variation, the entrance of new market players, and regulatory changes, just to name a few. In this sense, in this dissertation, we make use of robust optimization to treat the uncertainty in the spot price distribution, while renewable production remains accounted for by exogenously simulated scenarios, as is customary in stochastic programming. We show that this approach can be interpreted from two different points of view: stress testing and aversion to ambiguity. Regarding the latter, we provide a link between robust optimization and ambiguity theory, which was an open gap in decision theory. Moreover, we include in the optimal portfolio model the possibility of considering an energy call option contract to hedge the agent's portfolio against price spikes. A case study with realistic data from the Brazilian system is presented to illustrate the applicability of the proposed methodology.
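A toy version of the worst-case reasoning is sketched below: renewable production is represented by exogenous scenarios, the spot price is only assumed to lie in an interval (a box uncertainty set), and the contracted quantity is chosen to maximize the expected revenue under the adversarial price. This is far simpler than the dissertation's model and its link to ambiguity theory; all prices and production parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Renewable production scenarios (MWh in the month), exogenously simulated
# as in stochastic programming.
gen = rng.gamma(shape=8.0, scale=12.0, size=2000)

contract_price = 55.0          # $/MWh of the forward contract (assumed)
spot_box = (10.0, 300.0)       # spot price only assumed to lie in this interval

def worst_case_expected_revenue(q):
    """Expected (over production scenarios) revenue under the worst spot price in the box."""
    imbalance = gen - q                                   # >0: sell surplus at spot; <0: buy shortfall
    worst_spot = np.where(imbalance >= 0, spot_box[0], spot_box[1])
    return (q * contract_price + imbalance * worst_spot).mean()

candidates = np.linspace(0, gen.max(), 200)
revenues = [worst_case_expected_revenue(q) for q in candidates]
best = candidates[int(np.argmax(revenues))]
print(f"robust contracted quantity ~ {best:.1f} MWh "
      f"(mean production {gen.mean():.1f} MWh), worst-case expected revenue {max(revenues):.0f}")
```

The robust solution contracts well below the mean production, reflecting the asymmetry between selling surplus at a low adversarial price and covering shortfalls at a high one.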
48
[en] STOCHASTIC ANALYSIS OF ECONOMIC VIABILITY OF PHOTOVOLTAIC PANELS INSTALLATION IN LARGE CONSUMERS / [pt] ANÁLISE ESTOCÁSTICA DA VIABILIDADE ECONÔMICA DA INSTALAÇÃO DE PAINÉIS FOTOVOLTAICOS EM GRANDES CONSUMIDORES
ANDRES MAURICIO CESPEDES GARAVITO, 25 May 2018 (has links)
[en] Distributed generation (DG) has been growing in recent years in Brazil, particularly photovoltaic generation, allowing small and large consumers to play an active role in the electric system by investing in their own generation systems. For regulated consumers, besides the reduction in energy cost, there may also be a reduction in demand cost, which is computed from the peak demand contract with the supplying utility. Therefore, when considering the installation of photovoltaic panels, the challenge for consumers is to estimate, as accurately as possible, their energy consumption, the energy generated by the panels, and the future peak demand, in order to determine the optimal quantity of panels as well as the peak demand contract with the utility. One way to solve this problem is to simulate future scenarios of energy consumption and peak demand and correlate them with future scenarios of energy generation. After that, a mixed integer linear stochastic optimization model computes the optimal quantity of panels and the peak demand to be contracted. In the first part of the dissertation, Box-Jenkins modelling is used to estimate the parameters of the statistical models of energy consumption and peak demand, combined with the energy generation of the panels. In the second part, a stochastic optimization model is applied, using a convex combination of the Expected Value (EV) and Conditional Value-at-Risk (CVaR) as risk metrics to determine the optimal number of panels and the best peak demand contract. To illustrate the proposed approach, a real case study of a large consumer under the Green Tariff group A4 in the Regulated Contracting Environment is presented. The results show that using photovoltaic panels can reduce a large consumer's annual energy cost by up to 20 percent compared with the actual billed amount.
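The risk functional described above, a convex combination of expected value and CVaR of the annual cost, can be illustrated by simple enumeration over the number of panels instead of the mixed integer model. In the sketch below, consumption, per-panel generation, tariff and panel costs are all invented, and the demand-contract dimension is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n_scen, alpha, lam = 3000, 0.95, 0.5        # lam weights expected value vs CVaR

# Annual consumption and per-panel generation scenarios (MWh), and a flat energy tariff.
consumption = rng.normal(1200, 100, n_scen).clip(min=0)
gen_per_panel = rng.normal(0.55, 0.10, n_scen).clip(min=0)
tariff, panel_annualized_cost = 180.0, 60.0    # $/MWh and $/panel/year (assumed)

def cvar(x, a):
    v = np.quantile(x, a)
    return x[x >= v].mean()

def objective(n_panels):
    net = np.maximum(consumption - n_panels * gen_per_panel, 0.0)   # energy still bought from the grid
    cost = tariff * net + panel_annualized_cost * n_panels
    return lam * cost.mean() + (1 - lam) * cvar(cost, alpha)

candidates = np.arange(0, 3001, 50)
values = [objective(n) for n in candidates]
best = candidates[int(np.argmin(values))]
print(f"optimal number of panels ~ {best}, risk-adjusted annual cost ~ {min(values):,.0f} $")
```

Varying lam between 0 and 1 reproduces the trade-off between a risk-neutral and a strongly risk-averse sizing decision.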
49
[pt] ANÁLISE ESTOCÁSTICA DA CONTRATAÇÃO DE ENERGIA ELÉTRICA DE GRANDES CONSUMIDORES NO AMBIENTE DE CONTRATAÇÃO LIVRE CONSIDERANDO CENÁRIOS CORRELACIONADOS DE PREÇOS DE CURTO PRAZO, ENERGIA E DEMANDA / [en] STOCHASTIC ANALYSIS OF ENERGY CONTRACTING IN THE FREE CONTRACT ENVIRONMENT FOR BIG CONSUMERS CONSIDERING CORRELATED SCENARIOS OF SPOT PRICES, ENERGY AND POWER DEMAND
DANIEL NIEMEYER TEIXEIRA PAULA, 27 October 2020 (has links)
[en] In Brazil, big consumers can choose their energy contracts between two different environments: the Regulated Contract Environment and the Free Contract Environment. Big consumers are characterized by an installed load capacity equal to or greater than 2 MW and can establish an energy contract in either of these environments. Consumers with an installed load lower than 2 MW and higher than 500 kW can have their energy contracts established in the Free Contract Environment, using renewable energy generation, or in the Regulated Contract Environment through local distribution companies. The main advantage of the Free Contract Environment is the possibility of negotiating contracts with different parameters such as, for example, price, energy quantity and contract duration. Possible differences between contracted energy and consumed energy are settled at the spot price, which can be rather volatile.
In this case, the challenge is to establish a contracting strategy that minimizes the risks associated with this environment. This thesis proposes a methodology that involves statistical simulation of correlated energy, peak demand and spot price scenarios, to be used in a stochastic optimization model that defines the optimal energy and demand contract parameters. In the statistical part, a Box-Jenkins model is used to estimate parameters for energy and peak demand in order to simulate scenarios correlated with the spot price. In the optimization part, a convex combination of Expected Value (EV) and Conditional Value-at-Risk (CVaR) is used as the risk measure to find the optimal contract parameters, such as the contracted peak demand and the seasonal energy contracted volumes, in addition to the upper and lower bounds on contracted energy. To illustrate this approach, the methodology is
applied to a real case study of a big consumer with an active Free Contract Environment contract. The results indicate that the proposed methodology can be an efficient tool for consumers in the Free Contract Environment and, given the nature of the model, it can be generalized to different energy contracts and markets.
50
Simulation-Based Portfolio Optimization with Coherent Distortion Risk Measures / Simuleringsbaserad portföljoptimering med koherenta distortionsriskmått
Prastorfer, Andreas, January 2020 (has links)
This master's thesis studies portfolio optimization using linear programming algorithms. The contribution of the thesis is an extension of the convex framework for portfolio optimization with Conditional Value-at-Risk introduced by Rockafellar and Uryasev. The extended framework considers risk measures belonging to the intersection of the classes of coherent risk measures and distortion risk measures, which are known as coherent distortion risk measures. The measures in this class considered here are the Conditional Value-at-Risk, the Wang transform, the Block Maxima and the Dual Block Maxima measures. The extended portfolio optimization framework is applied to a reference portfolio consisting of stocks, options and a bond index, all from the Swedish market. The returns of the assets in the reference portfolio are modelled both with elliptical distributions and with normal copulas combined with asymmetric marginal return distributions. The portfolio optimization framework is simulation based and measures risk using scenarios simulated from the assumed portfolio distribution model. To model the return data with asymmetric distributions, the tails of the marginal distributions are fitted with generalized Pareto distributions, and the dependence structure between the assets is captured using a normal copula. The results obtained from the optimizations are compared across the different distributional return assumptions and the four risk measures. A Markowitz-type solution, computed using the mean absolute deviation as the risk measure, serves as the benchmark against which the optimal solutions under the coherent distortion risk measures are compared. Coherent distortion risk measures have the attractive property of being able to assign user-defined weights to different parts of the loss distribution and hence to value increasing loss severities as greater risks. This user-defined loss weighting property and the asymmetric return distribution models are used to find optimal portfolios that account for extreme losses. An important finding of this project is that optimal solutions for asset returns simulated from asymmetric distributions are associated with greater risks, which is a consequence of more accurate modelling of the distribution tails. Furthermore, weighting larger losses with increasingly larger weights shows that the portfolio risk is greater and a safer position is taken.
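A compact sketch of how a coherent distortion risk measure is evaluated from simulated losses is given below: a distortion function g reweights the empirical tail probabilities, and the risk measure is the g-weighted sum of the sorted losses. CVaR and the Wang transform are shown on an invented heavy-tailed loss sample; the Block Maxima measures, the copula/GPD return model and the optimization itself are omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
losses = 0.02 * rng.standard_t(df=4, size=100_000)    # heavy-tailed loss sample (illustrative)

def distortion_measure(losses, g):
    """rho_g(L) = sum_i L_(i) * [g(S_(i-1)) - g(S_(i))], S_i = empirical tail probability."""
    x = np.sort(losses)                        # ascending order
    n = len(x)
    tail = np.arange(n, -1, -1) / n            # survival probabilities 1, (n-1)/n, ..., 1/n, 0
    weights = g(tail[:-1]) - g(tail[1:])       # weight assigned to the i-th smallest loss
    return float(np.sum(weights * x))

alpha, lam = 0.95, 0.75
g_expect = lambda u: u                                   # identity distortion -> expected loss
g_cvar = lambda u: np.minimum(u / (1 - alpha), 1.0)      # CVaR at level alpha
g_wang = lambda u: norm.cdf(norm.ppf(u) + lam)           # Wang transform with parameter lam

for name, g in [("expected loss", g_expect),
                ("CVaR 95%", g_cvar),
                (f"Wang (lam={lam})", g_wang)]:
    print(f"{name:15s}: {distortion_measure(losses, g):.4%}")
```

Because the weights depend only on the distortion function and the empirical tail probabilities, the same routine can be dropped into a simulation-based optimizer and different user-defined loss weightings compared on identical scenarios.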