  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Medidas de risco e seleção de portfolios / Risk measures and portfolio selection

Magro, Rogerio Correa, 15 February 2008
Advisor: Roberto Andreani / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Given a capital C and n investment options (assets), the portfolio selection problem consists of allocating C in the best possible way for a given investor profile. Since, in general, the future values of these assets are unknown, the fundamental question to be answered is: how should uncertainty be measured? This work presents three risk measures: the Markowitz model, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). We argue that, from a theoretical point of view, VaR is the best of the three measures, because under VaR we can control the influence that catastrophic scenarios have on our decisions. On the other hand, the computational process involved in choosing an optimal portfolio under the VaR methodology is notably more expensive than those involved in computing the other measures considered. Our objective is therefore to exploit the computational advantage of the Markowitz model and of CVaR and try to bring their decisions close to those indicated by the chosen measure. To that end, we consider VaR solutions in their original sense (using only the confidence-level parameter when searching for optimal portfolios) and solutions with loss control (imposing an upper bound on the expected loss). / Master's degree in Applied Mathematics (Optimization)
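The abstract above compares three risk measures. As a minimal illustration of the two quantile-based ones (a sketch, not code from the dissertation; the scenario sample and parameters are assumptions), empirical VaR and CVaR over a set of loss scenarios can be computed as:

```python
import random

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR of a loss sample at confidence level alpha:
    VaR is the alpha-quantile of the losses, and CVaR is the average of
    the losses at or beyond that quantile."""
    srt = sorted(losses)
    idx = min(int(alpha * len(srt)), len(srt) - 1)
    tail = srt[idx:]
    return srt[idx], sum(tail) / len(tail)

# Toy scenario set: simulated losses of some portfolio (illustrative only).
random.seed(42)
scenario_losses = [random.gauss(0.0, 1.0) for _ in range(10000)]
var95, cvar95 = var_cvar(scenario_losses, alpha=0.95)
```

By construction CVaR is at least as large as VaR at the same level, which is one reason it penalizes catastrophic scenarios more than VaR does.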
252

La programmation DC et la méthode Cross-Entropy pour certaines classes de problèmes en finance, affectation et recherche d’informations : codes et simulations numériques / The DC programming and the cross-entropy method for some classes of problems in finance, assignment and search theory

Nguyen, Duc Manh, 24 February 2012
This thesis develops deterministic and heuristic approaches for solving some classes of large-scale nonconvex optimization problems in finance, assignment and search theory. Our approaches are based on DC programming & DCA and on the Cross-Entropy (CE) method. Using formulation/reformulation techniques, we give DC formulations of the problems considered so that DCA can be used to obtain their solutions. In addition, depending on the structure of the feasible sets of the problems considered, we design appropriate families of distributions so that the Cross-Entropy method can be applied efficiently. All the proposed methods have been implemented in MATLAB and C/C++ to confirm their practical value and to enrich our research work.
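The Cross-Entropy method mentioned above iteratively samples candidate solutions from a parametric family of distributions and refits the family to the best ("elite") samples. A minimal one-dimensional sketch with a Gaussian family (illustrative only; the thesis designs problem-specific families, and the objective here is a stand-in):

```python
import random
import statistics

def cross_entropy_minimize(f, mu=0.0, sigma=5.0, n=200, elite_frac=0.1, iters=30):
    """Minimize f over the reals with a basic cross-entropy loop:
    sample from N(mu, sigma), keep the elite fraction with the lowest
    objective values, and refit mu and sigma to the elites."""
    rng = random.Random(0)
    n_elite = max(2, int(n * elite_frac))
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        elites = sorted(xs, key=f)[:n_elite]
        mu = statistics.mean(elites)
        sigma = statistics.stdev(elites) + 1e-12  # avoid premature collapse
    return mu

# Minimize (x - 3)^2: the sampling distribution should concentrate near 3.
x_star = cross_entropy_minimize(lambda x: (x - 3.0) ** 2)
```

The same loop carries over to combinatorial problems by swapping the Gaussian for a distribution over the feasible set, which is precisely the design choice the thesis focuses on.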
253

Modelování kybernetického rizika pomocí kopula funkcí / Cyber risk modelling using copulas

Spišiak, Michal, January 2020
Cyber risk, or data breach risk, can be estimated similarly to other types of operational risk. First we identify problems of cyber risk models in the existing literature. A large dataset consisting of 5,713 loss events enables us to apply extreme value theory. We adopt goodness-of-fit tests adjusted for distribution functions with estimated parameters. These tests are often overlooked in the literature even though they are essential for correct results. We model aggregate losses in three different industries separately and then combine them using a copula. A t-test reveals that potential one-year global losses due to data breach risk are larger than the GDP of the Czech Republic. Moreover, one-year global cyber risk measured with a 99% CVaR amounts to 2.5% of global GDP. Unlike other studies, we compare risk measures with other quantities, which allows a wider audience to grasp the magnitude of cyber risk. An estimate of global data breach risk is a useful indicator not only for insurers but for any organization processing sensitive data.
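The aggregation step described above (industry losses combined through a copula) can be sketched with a one-factor Gaussian copula. This is a hedged illustration only: the thesis fits the marginals with extreme value theory, which is not reproduced here, and the lognormal marginals, equicorrelation parameter and function names below are assumptions.

```python
import random
from statistics import NormalDist

def gaussian_copula_aggregate(marginals, rho=0.5, n=20000, seed=1):
    """Aggregate loss samples from several industries with a one-factor
    Gaussian copula (equicorrelation rho): draw correlated normals, map
    them to uniforms, and read off each marginal's empirical quantile."""
    rng = random.Random(seed)
    nd = NormalDist()
    sorted_m = [sorted(m) for m in marginals]
    totals = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)  # common factor shared by all industries
        total = 0.0
        for srt in sorted_m:
            x = rho ** 0.5 * z + (1.0 - rho) ** 0.5 * rng.gauss(0.0, 1.0)
            u = nd.cdf(x)  # uniform marginal of the correlated normal
            total += srt[min(int(u * len(srt)), len(srt) - 1)]
        totals.append(total)
    return totals

def cvar(sample, alpha=0.99):
    srt = sorted(sample)
    tail = srt[int(alpha * len(srt)):]
    return sum(tail) / len(tail)

# Three illustrative heavy-tailed industry loss samples.
random.seed(7)
industries = [[random.lognormvariate(0.0, 1.0) for _ in range(2000)]
              for _ in range(3)]
agg = gaussian_copula_aggregate(industries, rho=0.5)
cvar99 = cvar(agg, alpha=0.99)
```

Positive dependence through the common factor fattens the tail of the aggregate, which is what a 99% CVaR is sensitive to.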
254

Solvency Capital Requirement (SCR) for Market Risks : A quantitative assessment of the Standard formula and its adequacy for a Swedish insurance company / Kapitalbaskrav för marknadsrisker under Solvens II : En kvantitativ utvärdering av Standardformeln och dess lämplighet för ett svenskt försäkringsbolag

Widing, Björn, January 2016
The purpose of this project is to validate the adequacy of the Standard formula, used to calculate the Solvency Capital Requirement (SCR), with respect to a Swedish insurance company. The sub-modules evaluated are Equity risk (type 1) and Interest rate risk. The validation uses a quantitative assessment based on the concept of Value at Risk (VaR). Additionally, investment strategies for risk-free assets are evaluated through a scenario-based analysis. The findings support that the equity shock of 39%, as prescribed in the Standard formula, is appropriate for a diversified portfolio of global equities and, to some extent, also sufficient for a diversified global portfolio with an overweight of Swedish equities. The findings also show that the Standard formula for Interest rate risk occasionally underestimates the true interest rate risk. Finally, the scenario-based analysis shows that there is some advantage in selecting an investment strategy that stabilizes the Own funds of an insurance company rather than one that minimizes the SCR.
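The comparison at the heart of this thesis, a prescribed stress versus an empirical VaR, can be sketched as follows. The 39% type 1 equity shock is from the Standard formula (the symmetric adjustment is omitted here), while the return distribution parameters below are illustrative assumptions, not a calibration from the thesis.

```python
import random

EQUITY_SHOCK_TYPE1 = 0.39  # type 1 equity stress in the Standard formula

def scr_equity(market_value, shock=EQUITY_SHOCK_TYPE1):
    """Capital charge of the equity sub-module: the instantaneous loss
    of value under the prescribed shock (symmetric adjustment omitted)."""
    return market_value * shock

def empirical_var(returns, alpha=0.995):
    """One-year VaR at level alpha, as a fraction of portfolio value."""
    losses = sorted(-r for r in returns)
    return losses[min(int(alpha * len(losses)), len(losses) - 1)]

# Illustrative annual equity returns (mean and volatility are assumptions).
random.seed(0)
annual_returns = [random.gauss(0.08, 0.20) for _ in range(50000)]
var995 = empirical_var(annual_returns)  # compare against the 39% shock
```

Whether `var995` lands above or below 0.39 is exactly the kind of adequacy question the thesis investigates with real data.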
255

Alternative Methods for Value-at-Risk Estimation : A Study from a Regulatory Perspective Focused on the Swedish Market / Alternativa metoder för beräkning av Value-at-Risk : En studie från ett regelverksperspektiv med fokus på den svenska marknaden

Sjöwall, Fredrik, January 2014
The importance of sound financial risk management has become increasingly emphasised in recent years, especially since the financial crisis of 2007-08. The Basel Committee sets the international standards and regulations for banks and financial institutions, and under market risk in particular it prescribes the internal application of the measure Value-at-Risk. However, the most established non-parametric Value-at-Risk model, historical simulation, has been criticised for some of its unrealistic assumptions. This thesis investigates alternative approaches to estimating non-parametric Value-at-Risk by examining and comparing three counterbalancing weighting methodologies for historical simulation: an exponentially decreasing time-weighting approach, a volatility-updating method and, lastly, a more general weighting approach that enables the specification of central moments of a return distribution. With real financial data, the models are evaluated from a performance-based perspective, in terms of accuracy and capital efficiency, but also in terms of their regulatory suitability, with a particular focus on the Swedish market. The empirical study shows that the capability of historical simulation is improved significantly, from both performance perspectives, by the implementation of a weighting methodology. Furthermore, the results predominantly indicate that the volatility-updating model with a 500-day historical observation window is the most adequate weighting methodology in all aspects considered. The findings of this thesis offer significant input both to existing research on Value-at-Risk and to the quality of the internal market risk management of banks and financial institutions.
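The first of the three weighting methodologies above, exponentially decreasing time weighting (the Boudoukh-Richardson-Whitelaw scheme), can be sketched in a few lines. The decay factor and the toy return series are illustrative assumptions, not the thesis's calibration.

```python
def age_weighted_var(returns, alpha=0.99, lam=0.98):
    """Age-weighted historical simulation: observation weights decay by
    lam per day, with returns[-1] the most recent. VaR is the loss at
    which the cumulative weight of the worst losses first reaches
    1 - alpha."""
    n = len(returns)
    raw = [lam ** (n - 1 - i) for i in range(n)]  # i = 0 is the oldest
    total = sum(raw)
    pairs = sorted(zip((-r for r in returns), (w / total for w in raw)),
                   reverse=True)  # losses, worst first
    cum = 0.0
    for loss, weight in pairs:
        cum += weight
        if cum >= 1.0 - alpha:
            return loss
    return pairs[-1][0]

# A large loss observed yesterday moves the VaR; the same loss observed
# 500 days ago has almost no weight left and does not.
recent = age_weighted_var([0.0] * 499 + [-0.10])
stale = age_weighted_var([-0.10] + [0.0] * 499)
```

This is the mechanism that lets weighted historical simulation react to recent market conditions where the equally weighted version cannot.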
256

Value at Risk Estimation with Neural Networks: A Recurrent Mixture Density Approach / Value at Risk Estimering med Neurala Nätverk: En Recurrent Mixture Density Approach

Karlsson Lille, William, Saphir, Daniel, January 2021
In response to financial crises and opaque practices, governmental entities and financial regulatory bodies have implemented several pieces of legislation and directives meant to protect investors and increase transparency. Such regulations often impose strict liquidity requirements and robust estimation of the risk borne by a financial firm at any given time. Value at Risk (VaR) measures how much an investment stands to lose with a certain probability over a specified period of time and is ubiquitous in its use by institutional investors and banks alike. In practice, VaR estimates are often computed from simulations of historical data or from parameterized distributions. Inspired by the recent success of Arimond et al. (2020) in using a neural network for VaR estimation, we apply a combination of recurrent neural networks and a mixture density output layer to generate mixture density distributions of future portfolio returns, from which VaR estimates are made. As in Arimond et al., we suppose the existence of two regimes stylized as bull and bear markets and employ Monte Carlo simulation to generate predictions of future returns. Rather than use a swappable architecture for the parameters of the mixture density distribution, we let all parameters be generated endogenously in the neural network. The model's success is then validated through Christoffersen tests and by comparison to the benchmark VaR estimation models, i.e., the mean-variance approach and historical simulation. We conclude that recurrent mixture density networks, used as is, show limited promise for predicting effective VaR estimates, because the model consistently overestimates the true portfolio loss. For practical use, however, encouraging results were achieved when manually shifting the predictions by an average of the overestimation observed in the validation set. Several theories are presented as to why the overestimation occurs, though no definitive conclusion could be drawn. As neural networks are black-box models, their use for conforming to regulatory requirements is deemed questionable, as is the assumption that financial data carries an inherent pattern that can be accurately approximated. Still, the reactivity of the neural network's VaR estimates is significantly more pronounced than that of the benchmark models, motivating continued experimentation with machine learning methods for risk management purposes. Future research is encouraged to identify the source of the overestimation and to explore different machine learning techniques to attain more accurate VaR predictions.
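Once a mixture density layer has produced weights, means and standard deviations for the return distribution, extracting a VaR amounts to inverting the mixture CDF. A stdlib-only sketch (the two-regime parameters below merely stand in for a network's output; they are illustrative, not fitted):

```python
from statistics import NormalDist

def mixture_var(weights, means, sigmas, alpha=0.05):
    """VaR implied by a Gaussian mixture of portfolio returns: bisect
    for the alpha-quantile of the mixture CDF and report the loss -q."""
    comps = [NormalDist(m, s) for m, s in zip(means, sigmas)]

    def cdf(x):
        return sum(w * c.cdf(x) for w, c in zip(weights, comps))

    # The quantile lies well inside [min - 10*sigma, max + 10*sigma].
    lo = min(means) - 10.0 * max(sigmas)
    hi = max(means) + 10.0 * max(sigmas)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return -0.5 * (lo + hi)

# Two regimes stylised as a bull and a bear market.
v = mixture_var([0.8, 0.2], [0.001, -0.002], [0.01, 0.03], alpha=0.05)
```

With a single component the function reduces to the usual Gaussian quantile, which gives a convenient sanity check.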
257

Imputation and Generation of Multidimensional Market Data

Wall, Tobias, Titus, Jacob, January 2021
Market risk is one of the most prevalent risks to which financial institutions are exposed. The most popular approach to quantifying market risk is Value at Risk. Organisations and regulators often require a long historical horizon of the relevant financial variables to estimate the risk exposures. A long horizon strains the completeness of the available data, something risk applications need to handle. The goal of this thesis is to evaluate and propose methods to impute financial time series. The performance of the methods is measured with respect to both price and risk-metric replication. Two use cases are evaluated: missing values placed randomly in the time series, and consecutively missing values at the end-point of a time series. Five models are applied to each use case. For the first use case, the results show that all models perform better than the naive approach; the Lasso model lowered the price replication error by 35% compared to the naive model. The result for the second use case is ambiguous, but we can conclude that all models performed better than the naive model with respect to risk-metric replication. In general, all models systematically underestimated the downstream risk metrics, implying that they failed to replicate the fat-tailed property of the price movements.
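The first use case above, randomly placed missing values, can be illustrated with the simplest regression-based imputer: fill each gap from a complete, correlated series. This is a stdlib-only stand-in for the idea; the thesis itself evaluates Lasso and other models, and the series below are toy data.

```python
def ols_impute(y, x):
    """Fill None entries of y by regressing y on a complete, correlated
    series x with simple least squares over the observed pairs."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    mx = sum(xi for xi, _ in obs) / len(obs)
    my = sum(yi for _, yi in obs) / len(obs)
    sxx = sum((xi - mx) ** 2 for xi, _ in obs)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in obs)
    slope = sxy / sxx
    intercept = my - slope * mx
    # Observed values are kept; gaps get the fitted value.
    return [intercept + slope * xi if yi is None else yi
            for xi, yi in zip(x, y)]

# Missing values placed randomly in a series that tracks a complete one.
filled = ols_impute([3.0, None, 7.0, None, 11.0], [1.0, 2.0, 3.0, 4.0, 5.0])
```

On exactly linear toy data the gaps are recovered perfectly; on real market data the residual error is what the price- and risk-metric replication measures in the thesis quantify.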
258

[pt] FORMAÇÃO DE PORTFÓLIO SOB INCERTEZA DE UMA EMPRESA DE PRODUÇÃO E REFINO DE PETRÓLEO / [en] PORTFOLIO SELECTION OF AN OIL AND GAS COMPANY UNDER UNCERTAINTY

17 September 2020
[en] The portfolio allocation of an oil and gas company involves complex decisions within an uncertain environment and is extremely important in defining the firm's future economic and financial behavior. Recently, the portfolio selection problem for oil exploration and production (E&P) projects has been widely treated in the literature; however, few studies consider the optimization of the combined upstream and downstream portfolio. The purpose of this work is to propose a portfolio selection model for oil and gas companies that operate both in exploration and production (upstream) and in refining (downstream), considering the integration between them. As in the traditional models, crude oil prices and field productivity are treated as the main uncertainties of the problem. The proposed model uses risk-averse stochastic programming techniques, with risk measured by CVaR (Conditional Value-at-Risk). To validate the proposed methodology, a case study based on an oil company is presented. The numerical application indicates that the model considering the joint upstream and downstream portfolio yields objective function values up to 28 percent higher than the model usually found in the literature, which optimizes only the upstream portfolio.
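The risk-averse allocation idea above can be reduced to a toy scenario problem: trade off expected profit against the CVaR of losses across joint upstream/downstream scenarios. This grid-search sketch is only an illustration; the real model in the thesis is a stochastic program with many assets and constraints, and the scenario distributions, single decision variable and function names here are assumptions.

```python
import random

def cvar(losses, alpha=0.95):
    srt = sorted(losses)
    tail = srt[int(alpha * len(srt)):]
    return sum(tail) / len(tail)

def best_mix(up, down, alpha=0.95, risk_aversion=1.0, steps=100):
    """Choose the upstream share w in [0, 1] maximizing expected profit
    minus risk_aversion times the CVaR of the loss, over joint profit
    scenarios (up[i], down[i])."""
    n = len(up)
    best_w, best_obj = 0.0, float("-inf")
    for k in range(steps + 1):
        w = k / steps
        profits = [w * up[i] + (1.0 - w) * down[i] for i in range(n)]
        obj = (sum(profits) / n
               - risk_aversion * cvar([-p for p in profits], alpha))
        if obj > best_obj:
            best_w, best_obj = w, obj
    return best_w

# Upstream: higher expected profit, much riskier; downstream: stable.
rng = random.Random(3)
up = [rng.gauss(10.0, 8.0) for _ in range(2000)]
down = [rng.gauss(6.0, 1.0) for _ in range(2000)]
w_neutral = best_mix(up, down, risk_aversion=0.0)  # mean only
w_averse = best_mix(up, down, risk_aversion=2.0)   # penalize tail losses
```

A risk-neutral objective piles everything into the higher-mean project, while the CVaR penalty pulls the allocation toward the stable one, which mirrors the role risk aversion plays in the thesis's model.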
259

Multi-factor approximation: An analysis and comparison of Michael Pykhtin's paper “Multifactor adjustment”

Zanetti, Michael, Güzel, Philip, January 2023
The need to account for potential losses in rare events is of utmost importance for corporations operating in the financial sector. Common measurements of potential losses are Value at Risk and Expected Shortfall, measures whose computation typically requires extensive Monte Carlo simulation. Another measurement is the Advanced Internal Ratings-Based model, which estimates the capital requirement but accounts for only a single risk factor. As an alternative to the commonly used, time-consuming credit risk methods and measurements, Michael Pykhtin presents methods to approximate Value at Risk and Expected Shortfall in his 2004 paper Multi-factor adjustment. The thesis's main focus is an elucidation and investigation of the approximation methods Pykhtin presents. Pykhtin's approximations are implemented alongside the Monte Carlo methods that are used as a benchmark. The results Pykhtin presents are recreated with strongly matching values, a confident verification that the methods have been implemented in correspondence with the article. The methods are also applied to a small and a large synthetic Nordea data set to test them on alternative data. Due to its size, the large data set cannot be computed in its original form, so a clustering algorithm is used to eliminate this limitation while still keeping the characteristics of the original data set. When the methods are executed on the synthetic Nordea data sets, the Value at Risk and Expected Shortfall results show a larger discrepancy between the approximated and Monte Carlo simulated values. The noted differences are probably due to increased borrower exposures and portfolio structures not being compatible with Pykhtin's approximation. The purpose of clustering the small data set is to test the effect on accuracy and to understand the clustering algorithm's impact before implementing it on the large data set. Clustering the small data set caused deviant results compared with the original small data set, which is expected. The clustered large data set's approximation results had a lower discrepancy from the benchmark Monte Carlo results than the small data set did. The increased portfolio size creates a granularity that decreases the outcome's variance for both the Monte Carlo and the approximation methods, hence the low discrepancy. Overall, the accuracy and execution time of Pykhtin's approximations are relatively good in the experiments. It is, however, very challenging for the approximate methods to handle large portfolios, considering the issues that arise at just a couple of thousand borrowers. Lastly, a comparison is made between the Advanced Internal Ratings-Based model and modified Value at Risk and Expected Shortfall measures. When the capital requirement is calculated with the Advanced Internal Ratings-Based model, its lack of concentration-risk treatment is clearly illustrated by the significantly lower results compared with either of the other methods. In addition, an increasing difference can be identified between the capital requirements obtained from Pykhtin's approximation and the Monte Carlo method. This emphasizes the importance of utilizing complex methods to fully grasp the inherent portfolio risks.
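The single-risk-factor baseline that Pykhtin's multi-factor adjustment improves upon is the one-factor Vasicek/ASRF model underlying the Advanced Internal Ratings-Based approach. A sketch of its core capital formula (maturity adjustment omitted; the PD, LGD and correlation values below are illustrative, not regulatory calibrations):

```python
from statistics import NormalDist

def vasicek_capital(pd, lgd, rho, alpha=0.999):
    """Capital charge in the one-factor Vasicek/ASRF model:
    K = LGD * (Phi((Phi^-1(PD) + sqrt(rho) * Phi^-1(alpha)) / sqrt(1 - rho)) - PD),
    i.e. the worst-case default rate at confidence alpha minus the
    expected default rate, scaled by loss given default."""
    nd = NormalDist()
    wcdr = nd.cdf((nd.inv_cdf(pd) + rho ** 0.5 * nd.inv_cdf(alpha))
                  / (1.0 - rho) ** 0.5)
    return lgd * (wcdr - pd)

# Illustrative borrower: 1% PD, 45% LGD, 12% asset correlation.
k = vasicek_capital(pd=0.01, lgd=0.45, rho=0.12, alpha=0.999)
```

Because this formula sees only one systematic factor, it cannot express sector concentration, which is exactly the gap the multi-factor adjustment (and the thesis's Monte Carlo benchmark) addresses.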
260

[pt] ESTIMANDO A CURVA FORWARD DE ENERGIA ELÉTRICA NO BRASIL COM UM MODELO DE DOIS AGENTES UTILIZANDO CONTRATOS POR DIFERENÇA E FUNÇÃO ECP-G / [en] OBTAINING THE FORWARD CURVE FOR THE BRAZILIAN POWER MARKET IN A DUAL AGENT MODEL WITH CONTRACTS FOR DIFFERENCE AND ECP-G FUNCTIONAL

FELIPE VAN DE SANDE ARAUJO, 25 May 2020
[en] The development of simple and effective mechanisms to estimate the forward curve of power could enable market participants to better price hedging or speculative positions. This could in turn provide transparency in future price definition to all market participants and lead to more safety and liquidity in the market for electricity futures and power derivatives. This work presents a model with two representative agents, a buyer and a seller of a contract for difference between the expected future spot price of electricity in southeast Brazil and a reference price. It is shown that this mechanism can encompass all market participants with exposure to the future price of power, whether speculators or agents involved in trading. Each participant's utility function is modelled using a Generalized Extended CVaR Preference (ECP-G), and the market equilibrium is obtained through the minimization of the quadratic difference between the certainty equivalents of the two agents. The results are compared with predictions of the future spot price of power made by market specialists for the same period and show adherence both in and out of sample.
