251 |
Towards a Stochastic Operation of Switzerland’s Power Grid
Maury, Alban, January 2023
As Europe’s power production becomes increasingly reliant on intermittent renewable energy sources, uncertainties are likely to arise in power generation plans. Similarly, with the growing prevalence of electric vehicles, electric demand is also becoming more uncertain. These uncertainties in both production and demand can lead to challenges for European power systems. This thesis proposes the use of Monte-Carlo simulations to translate uncertainties in power generation and demand into uncertainties in the power grid. To integrate stochasticity into the forecasts, this thesis decomposes the multivariate probabilistic forecasting problem by first forecasting the marginal loads individually and probabilistically. Copula theory is then used to integrate spatial correlations and create realistic scenarios. These scenarios serve as inputs for Monte-Carlo simulations to estimate uncertainties in the power system. The methodology is tested using power injection data and the power system model of Switzerland. The results demonstrate that integrating stochasticity into forecasts improves the reliability of the power system. The proposed approach effectively models the uncertainty in both production and demand and provides valuable information for decision-making.
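A minimal sketch of the pipeline this abstract describes (marginal probabilistic forecasts first, a copula to restore spatial correlation, then Monte-Carlo simulation) might look as follows. The three grid nodes, the correlation matrix, and the normal marginal forecasts are placeholder assumptions, not the thesis's data or model, and a Gaussian copula is used for brevity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed spatial correlation between three grid nodes (placeholder values).
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Step 1: marginal probabilistic load forecasts per node, here plain normals
# (in the thesis these would come from per-node probabilistic forecasting models).
marginals = [stats.norm(100, 10), stats.norm(80, 15), stats.norm(120, 8)]

# Step 2: Gaussian copula, i.e. correlated uniforms from correlated normals.
n_scenarios = 10_000
z = rng.multivariate_normal(np.zeros(3), R, size=n_scenarios)
u = stats.norm.cdf(z)

# Step 3: map the correlated uniforms through each marginal's inverse CDF,
# which yields joint load scenarios with realistic spatial dependence.
loads = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

# Step 4: Monte-Carlo estimate of a grid-level uncertainty measure, e.g. the
# 99% quantile of total system load across scenarios.
total = loads.sum(axis=1)
print("99% quantile of total load:", np.quantile(total, 0.99))
```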
|
252 |
Correction of estimators of the Pickands function and a Bayesian estimator
Chalifoux, Kevin, 01 1900
Estimating a bivariate extreme-value copula is equivalent to estimating A, its associated Pickands function. This function A: [0,1] \( \rightarrow \) [0,1] must satisfy the following constraints:
$$\max\{1-t, t \} \leq A(t) \leq 1, \hspace{3mm} t\in[0,1]$$
$$\text{A is convex.}$$
Many estimators have been proposed to estimate A, but few satisfy the imposed constraints. The main contribution of this thesis is the introduction of a simple correction technique for Pickands function estimators, so that the corrected estimators respect the required constraints. The proposed correction uses a new property of the bivariate extreme-value random vector, combined with the convex hull of the obtained estimator, to guarantee that the constraints on the Pickands function are respected.
The second contribution of this thesis is a nonparametric Bayesian estimator of the Pickands function based on the form introduced by Capéraà, Fougères and Genest (1997). The estimator uses Dirichlet processes to estimate the cumulative distribution function of a transformation of the bivariate extreme-value random vector.
Simulation studies on a set of 18 bivariate extreme-value distributions measure the performance of the proposed correction and Bayesian estimator against popular estimators. The correction reduces the mean squared error on all distributions, and the proposed Bayesian estimator attains the lowest mean squared error among the estimators considered.
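The constraint-projection part of such a correction can be sketched directly from the two displayed constraints: clip the raw estimate to the pointwise bounds, then replace it by its greatest convex minorant (the lower envelope of the convex hull mentioned above). This sketch omits the thesis's additional ingredient, the new property of the extreme-value random vector; the Gumbel-type true function and the noise level in the example are assumptions for illustration:

```python
import numpy as np

def greatest_convex_minorant(t, a):
    """Lower convex hull of the points (t_i, a_i), t strictly increasing
    (Andrew's monotone chain, lower chain only)."""
    hull = []
    for p in zip(t, a):
        while len(hull) >= 2:
            o, q = hull[-2], hull[-1]
            # drop q if it lies on or above the chord from o to p
            if (q[0] - o[0]) * (p[1] - o[1]) - (q[1] - o[1]) * (p[0] - o[0]) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    return np.interp(t, hx, hy)

def correct_pickands(t, a_hat):
    """Project a raw Pickands estimate onto the constraint set:
    max(1-t, t) <= A(t) <= 1 and A convex, with A(0) = A(1) = 1."""
    a = np.clip(a_hat, np.maximum(1 - t, t), 1.0)  # pointwise bounds
    a[0] = a[-1] = 1.0                             # boundary values
    # Chords between points lying above a convex lower bound stay above it,
    # so taking the convex minorant cannot violate the bounds again.
    return greatest_convex_minorant(t, a)

t = np.linspace(0.0, 1.0, 101)
theta = 2.0                                        # assumed Gumbel dependence
true_A = (t**theta + (1 - t)**theta) ** (1 / theta)
raw = true_A + 0.03 * np.random.default_rng(1).normal(size=t.size)
corrected = correct_pickands(t, raw)
print("max lower-bound violation:", (np.maximum(1 - t, t) - corrected).max())
```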
|
253 |
STATISTICAL MODELS WITH PARAMETERS CHANGING THROUGH AN ADAPTIVE MECHANISM
HENRIQUE HELFER HOELTGEBAUM, 23 October 2019
This thesis is composed of three papers whose common ground is statistical models with time-varying parameters. All of them adopt a framework that uses a data-driven mechanism to update the model coefficients. The first paper explores the application of a new class of non-Gaussian time series models named Generalized Autoregressive Score (GAS) models. In this class of models, the parameters are updated using the score of the predictive density. We motivate the use of GAS models by simulating joint scenarios of wind power generation. In the last two papers, Stochastic Gradient Descent (SGD) is adopted to update the time-varying parameters. This methodology uses the derivative of a user-specified cost function to drive the optimization. The developed framework is designed to be applied in a streaming data context, so adaptive filtering techniques are explored to account for concept drift. We explore this framework on cyber-security and instrumented infrastructure applications.
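A one-parameter illustration of the score-driven update described above: for y_t ~ N(f_t, sigma2), the score of the predictive density with respect to f_t is (y_t - f_t)/sigma2, and the time-varying parameter follows f_{t+1} = omega + a*s_t + b*f_t. The coefficients below are illustrative assumptions (in practice they are estimated by maximum likelihood), and a random-walk (integrated) specification with unit score scaling is used for simplicity:

```python
import numpy as np

def gas_location_filter(y, omega, a, b, sigma2=1.0):
    """Score-driven (GAS) filter for a time-varying mean:
    y_t ~ N(f_t, sigma2) with f_{t+1} = omega + a * s_t + b * f_t,
    where s_t is the score of the predictive density at y_t."""
    f = np.empty(len(y) + 1)
    f[0] = y[0]                       # simple initialization
    for t, yt in enumerate(y):
        s = (yt - f[t]) / sigma2      # d/df log N(y_t; f_t, sigma2)
        f[t + 1] = omega + a * s + b * f[t]
    return f

rng = np.random.default_rng(0)
true_mean = np.concatenate([np.zeros(100), np.ones(100)])   # a level shift
y = true_mean + 0.5 * rng.normal(size=200)
f = gas_location_filter(y, omega=0.0, a=0.1, b=1.0, sigma2=0.25)
print("filtered level before / after the shift:", f[100], f[-1])
```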
|
254 |
The Probabilistic Characterization of Severe Rainstorm Events: Applications of Threshold Analysis
Palynchuk, Barry A., 04 1900
Hourly archived rainfall records are separated into individual rainfall events with an Inter-Event Time Definition. Individual storms are characterized by their depth, duration, and peak intensity. Severe events are selected from among the events for a given station: a lower limit, or threshold depth, is used to make this selection, and an upper duration limit is established. This leaves a small number of events per year, with relatively high depth and average intensity appropriate to small- and medium-catchment responses. The Generalized Pareto Distribution is fitted to the storm depth data, and a bounded probability distribution is fitted to storm duration. Peak storm intensity is bounded by the continuity imposed by storm depth and duration. These physical limits are used to develop an index measure of peak storm intensity, called the intensity peak factor, bounded on (0, 1) and fitted to the Beta distribution. The joint probability relationship among the storm variables is established, combining increasing storm depth and increasing intensity peak factor with decreasing storm duration as the best description of increasing rainstorm severity. The joint probability of all three variables can be modelled with a bivariate copula of the marginal distributions of duration and intensity peak factor, combined simply with the marginal distribution of storm depth. The parameters of the marginal distributions of the storm variables, and the frequency of occurrence of threshold-excess events, are used to assess possible shifts in their values as a function of time and temperature, in order to evaluate potential climate-change effects for several stations. Example applications of the joint probability of storm variables are provided that illustrate the need for the methods developed.
The overall contributions of this research combine applications of existing probabilistic tools with unique characterizations of rainstorm variables. Relationships between these variables are examined to produce a new description of storm severity, and to begin the assessment of the effects of climate change upon severe rainstorm events. / Doctor of Philosophy (PhD)
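The two univariate fitting steps described above (a Generalized Pareto distribution for threshold-excess depths, a Beta distribution for the intensity peak factor) can be sketched as follows. The synthetic data, the 95% depth threshold, and the three-events-per-year rate are assumptions for illustration; the joint copula model is not shown:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for the storm variables described above.
depth = rng.gamma(2.0, 15.0, size=2000)        # event depth (mm)
peak_factor = rng.beta(2.5, 4.0, size=2000)    # intensity peak factor in (0, 1)

# Threshold selection: keep only severe (threshold-excess) events.
threshold = np.quantile(depth, 0.95)
excess = depth[depth > threshold] - threshold

# GPD fit to the threshold excesses (location fixed at 0).
xi, _, scale = stats.genpareto.fit(excess, floc=0)
print(f"GPD shape={xi:.3f}, scale={scale:.3f}")

# Beta fit to the intensity peak factor on (0, 1).
a, b, _, _ = stats.beta.fit(peak_factor, floc=0, fscale=1)
print(f"Beta a={a:.2f}, b={b:.2f}")

# T-year depth from the GPD, given an assumed threshold-excess rate per year.
events_per_year = 3
T = 100
p = 1 - 1 / (T * events_per_year)
print("100-year depth:", threshold + stats.genpareto.ppf(p, xi, scale=scale))
```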
|
255 |
Asian Financial Market Integration and Its Effects on Portfolio Strategy — Mainland China's Impacts
黃聖仁 (Huang, Sheng-Jen), Unknown Date
The objective of this research is to examine how Mainland China's influence on the financial markets of other Asian countries has changed over time. Based on past relationships between cross-country correlations of daily stock returns and policy changes, the study asks whether more open cross-strait policies will increase China's influence on Taiwan and thereby reduce the diversification benefits of international portfolios. Daily stock-index data for Taiwan, Hong Kong, Mainland China, Thailand, Indonesia, Singapore, Malaysia, the Philippines, Japan, and the United States are gathered from DataStream, covering 1991/07/15 (when the Shanghai SE Composite index went public) to 2008/12/31; annualized daily returns are computed as natural logarithms of consecutive daily index prices. The research uses Value at Risk (VaR) in place of the traditional standard deviation to measure the risk of portfolios formed from these ten countries, and extends this to the Diversification Benefit and Incremental VaR derived from VaR. The results show that the diversification effect of portfolios containing only Asian countries is decreasing and is inferior to that of portfolios that also include countries outside the region, such as the United States. The second part combines a Gaussian Copula model with VaR to better capture extreme values; empirical results indicate that the correlations obtained with the Exponentially Weighted Moving Average method already reflect the dependence among the markets, so the added value of the copula is limited. The copula-based correlations also show that the dependence of Taiwan and Hong Kong on Mainland China has been rising, beginning to exceed their dependence on the United States, with 2005 as the starting point of the upward trend. Finally, a Vector Autoregression model (VAR) is used to test whether there is a structural change in Mainland China's influence on other Asian markets around 2005, supplemented by variance decomposition to observe how the impact of countries' shocks on one another changed before and after 2005. The results show that China's influence on Asian countries becomes significant after 2005, with the United States being the only exception; the variance decomposition likewise shows a clear post-2005 rise in cross-country dependence, including China's influence on each country. Given the forthcoming cross-strait memorandum of understanding on financial supervision and the closer integration it will bring, investors should be reminded that rising integration will reduce the diversification benefits of portfolios built on these markets, and should take this into account in their investment strategies.
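A sketch of the EWMA correlation and Gaussian VaR machinery referred to above. The two synthetic return series, the decay factor lambda = 0.94 (the classic RiskMetrics daily value), and the equal portfolio weights are assumptions for illustration:

```python
import numpy as np

def ewma_cov(returns, lam=0.94):
    """RiskMetrics-style exponentially weighted covariance of daily returns.
    returns: (T, n) array; lam: decay factor (0.94 is the classic daily value)."""
    T, _ = returns.shape
    cov = np.cov(returns[:30].T)          # warm-up estimate on the first month
    for t in range(30, T):
        r = returns[t][:, None]
        cov = lam * cov + (1 - lam) * (r @ r.T)
    return cov

rng = np.random.default_rng(0)
# Two synthetic markets with true correlation 0.6 (placeholder for index returns).
R = np.array([[1.0, 0.6], [0.6, 1.0]])
ret = rng.multivariate_normal([0, 0], R, size=1000) * 0.01

cov = ewma_cov(ret)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(f"EWMA correlation: {corr:.3f}")

# One-day 99% Gaussian VaR of an equally weighted two-market portfolio.
w = np.array([0.5, 0.5])
var99 = 2.326 * np.sqrt(w @ cov @ w)      # 2.326 = 99% normal quantile
print(f"99% one-day VaR (fraction of value): {var99:.4f}")
```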
|
256 |
Development and application of new statistical methods for the analysis of multiple phenotypes to investigate genetic associations with cardiometabolic traits
Konigorski, Stefan, 27 April 2018
In recent years, biotechnological advances have made it possible to investigate associations of genetic and molecular markers with multiple complex phenotypes in much greater depth. However, for the analysis of such complex datasets, available statistical methods often do not yield valid inference.
The first aim of this thesis is to develop two novel statistical methods for association analyses of genetic markers with multiple phenotypes, to implement them in a computationally efficient and robust manner so that they can be used for large-scale analyses, and to evaluate them against existing statistical approaches under realistic scenarios. The first approach, the copula-based joint analysis of multiple phenotypes (C-JAMP) method, investigates genetic associations with multiple traits in a joint copula model and is evaluated here for association analyses of rare genetic variants with quantitative traits. The second approach, the causal inference using estimating equations (CIEE) method, estimates and tests direct genetic effects in directed acyclic graphs, and is evaluated for association analyses of common genetic variants with quantitative and time-to-event traits.
The results of extensive simulation studies show that both approaches yield unbiased and efficient parameter estimators and can improve the power of association tests in comparison to existing approaches, which yield invalid inference in many scenarios.
For the second goal of this thesis, to identify novel genetic and transcriptomic markers associated with cardiometabolic traits, C-JAMP and CIEE are applied in two large-scale studies including genome- and transcriptome-wide data. In the analyses, several novel candidate markers and genes are identified, which highlights the merit of developing, evaluating, and implementing novel statistical approaches. R packages are available for both methods and enable their application in future studies.
|
257 |
MODEL FOR CALCULATING THE NEED FOR CAPITAL TO COVER THE UNDERWRITING RISKS OF NON-LIFE OPERATIONS
EDUARDO HENRIQUE ALTIERI, 03 May 2019
An important question today is how to measure the amount of capital an insurance company needs in order to cope with the various types of risk it bears in the course of its activities. This volume of capital must be sufficient for the company to withstand variability in its business. The motivations for developing mathematical models to determine this capital need are both the companies' own concern with risk management and the capital requirements imposed on insurers by the insurance regulator to face the risks borne. Among these risks is the category of underwriting risks, directly related to the core operations of an insurer (product design, pricing, underwriting, loss settlement, and provisioning). This dissertation proposes a model for determining the amount of capital needed to cover underwriting risks, in which this risk category is split into reserving risks (relating to incurred claims) and pricing risks (relating to claims occurring over a one-year horizon, including new business). In particular, the proposed model uses simulation procedures that take into account the dependence structure of the variables involved and of the lines of business, making use of the concept of conditional copulas.
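A sketch of the simulation idea: dependent aggregate losses for two lines of business are generated through a copula, and capital is read off the tail of the simulated aggregate. For brevity, an unconditional survival-Clayton copula stands in for the conditional copulas the dissertation uses, and the lognormal marginals, dependence strength, and 99.5% level are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

# Survival-Clayton copula between two lines of business via a gamma frailty;
# flipping the uniforms moves the Clayton tail dependence to joint LARGE losses.
theta = 1.5                                   # illustrative dependence strength
v = rng.gamma(1 / theta, 1.0, size=n)
e = rng.exponential(size=(n, 2))
u = 1 - (1 + e / v[:, None]) ** (-1 / theta)  # upper-tail dependent uniforms

# Marginal aggregate-loss models per line (placeholder lognormals).
loss1 = stats.lognorm(s=0.8, scale=np.exp(15)).ppf(u[:, 0])
loss2 = stats.lognorm(s=1.0, scale=np.exp(14)).ppf(u[:, 1])
total = loss1 + loss2

# Capital read off the simulated aggregate: 99.5% VaR above the expectation.
capital = np.quantile(total, 0.995) - total.mean()
print(f"underwriting capital: {capital:,.0f}")
```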
|
258 |
On the Estimation of Marginal Distributions under Dependent Competing Risks
張簡嘉詠, Unknown Date
The estimation of marginal distributions in a competing-risks study is a problem often met in many scientific fields. Because the main event and the secondary event compete with each other, the first event to occur prevents the other from being observed; when the probability that both lifetimes coincide is 0, not a single complete observation can be collected. Unless the lifetimes are independent, or further conditions are imposed, the marginal distributions are not identifiable. Since the independence assumption is not always reasonable, one way to resolve the non-identifiability under dependent competing risks is to assume a specific form for the relation between the two event times.
Because a copula defines the association between two variables, it can be employed to express the relation between the event times. Assuming that the dependence parameter in the copula framework is known, and adopting the concept of the probability integral transformation, this thesis examines whether the estimating ability of the copula-graphic estimator proposed by Zheng and Klein varies with the censoring rate, the strength of dependence, and the form of the copula.
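The identifiability problem described above can be made concrete by simulation: generate event times that are dependent through a Clayton copula with known parameter, observe only the minimum and the cause, and compare the naive Kaplan-Meier estimate (which implicitly assumes independence) with the true marginal. This illustrates the problem, not Zheng and Klein's copula-graphic estimator itself; the exponential margins and theta = 2 are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Dependent competing risks: (X, Y) joined by a Clayton copula with known
# theta; only T = min(X, Y) and the cause indicator are ever observed.
theta = 2.0
v = rng.gamma(1 / theta, 1.0, size=n)
e = rng.exponential(size=(n, 2))
u = (1 + e / v[:, None]) ** (-1 / theta)   # Clayton-dependent uniforms
x = -np.log(1 - u[:, 0])                   # main event time, Exp(1)
y = -np.log(1 - u[:, 1]) / 0.7             # competing event time, Exp(0.7)

t_obs = np.minimum(x, y)
delta = (x <= y).astype(float)             # 1 if the main event is observed

def kaplan_meier(times, events, grid):
    """Naive Kaplan-Meier for X, treating Y as independent censoring."""
    order = np.argsort(times)
    t_sorted, d_sorted = times[order], events[order]
    at_risk = np.arange(len(t_sorted), 0, -1)
    surv = np.cumprod(1 - d_sorted / at_risk)
    return np.interp(grid, t_sorted, surv)

grid = np.array([0.5, 1.0, 1.5])
print("true marginal S_X:", np.exp(-grid))             # Exp(1) survival
print("naive Kaplan-Meier:", kaplan_meier(t_obs, delta, grid))  # biased here
```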
|
259 |
Quantitative Portfolio Construction Using Stochastic Programming
Ashant, Aidin; Hakim, Elisabeth, January 2018
In this study within quantitative portfolio optimization, stochastic programming is investigated as an investment decision tool. This research takes the direction of scenario-based Mean-Absolute Deviation and is compared with the traditional Mean-Variance model and the widely used Risk Parity portfolio. Furthermore, this thesis is done in collaboration with the First Swedish National Pension Fund, AP1, and the implemented multi-asset portfolios are thus tailored to match their investment style. The models are evaluated at two different fund management levels, in order to study whether portfolio performance benefits from a more restricted feasible domain. This research concludes that stochastic programming over the investigated time period is inferior to Risk Parity, but outperforms the Mean-Variance model. The biggest flaw of the model is its poor performance during periods of market stress. However, the model showed superior results during normal market conditions.
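The scenario-based Mean-Absolute Deviation model investigated here becomes a linear program once the absolute deviations are split into auxiliary variables: minimize the average absolute deviation of the portfolio return around its mean, subject to a target expected return and a budget constraint. The sketch below uses synthetic return scenarios and an illustrative return target, not AP1's actual portfolio data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_assets, n_scen = 4, 500

# Placeholder return scenarios; in practice these come from a scenario generator.
mu_true = np.array([0.06, 0.05, 0.08, 0.03]) / 252
r = mu_true + rng.normal(0, 0.01, size=(n_scen, n_assets))
mu = r.mean(axis=0)
target = np.median(mu)               # required expected return (illustrative)

# Variables [w_1..w_n, d_1..d_S]; minimize the mean absolute deviation (1/S) sum d_s.
c = np.concatenate([np.zeros(n_assets), np.full(n_scen, 1 / n_scen)])

dev = r - mu                          # (r_s - mu) per scenario
A_ub = np.vstack([
    np.hstack([dev,  -np.eye(n_scen)]),               #  (r_s-mu)'w - d_s <= 0
    np.hstack([-dev, -np.eye(n_scen)]),               # -(r_s-mu)'w - d_s <= 0
    np.hstack([-mu[None, :], np.zeros((1, n_scen))]), #  mu'w >= target
])
b_ub = np.concatenate([np.zeros(2 * n_scen), [-target]])
A_eq = np.hstack([np.ones((1, n_assets)), np.zeros((1, n_scen))])  # sum w = 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * n_assets + [(0, None)] * n_scen)
print("weights:", np.round(res.x[:n_assets], 3), " MAD:", res.fun)
```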
|
260 |
Evaluating Markov Chain Monte Carlo Methods for Estimating Systemic Risk Measures Using Vine Copulas
Guterstam, Rasmus; Trojenborg, Vidar, January 2021
This thesis evaluates the Markov Chain Monte Carlo (MCMC) methods Metropolis-Hastings (MH) and the No-U-Turn Sampler (NUTS) for estimating systemic risk measures. The subject of analysis is an equity portfolio provided by a Nordic asset management firm, which is modelled using a vine copula. The evaluation considers three different crisis outcomes on a portfolio level, and the results are compared with a Monte Carlo (MC) benchmark. The MCMC samplers attempt to increase sampling efficiency by sampling from these crisis events directly, which is impossible for an MC sampler. The resulting systemic risk measures are evaluated both on the portfolio level and on the marginal level. The results are mixed. On the one hand, the MCMC samplers proved efficient in terms of accepted samples, with NUTS outperforming MH. However, owing to the practical implementation of the MCMC samplers and the vine copula model, the computational time required outweighed the gains in sampler efficiency, causing the MC sampler to outperform both MCMC samplers in certain settings. NUTS nonetheless shows great potential in the context of estimating systemic risk measures, as it explores high-dimensional and multimodal joint distributions efficiently with low autocorrelation. It is concluded that asset management companies can benefit both from using vine copulas to model portfolio risk and from using MC or MCMC methods to evaluate systemic risk. However, for the MCMC samplers to be of practical relevance, further investigation of efficient vine copula implementations in the context of MCMC sampling is recommended.
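A minimal sketch of the crisis-conditioned sampling idea: a random-walk Metropolis-Hastings chain targets the joint return density restricted to a crisis set, so every retained sample is a crisis scenario, something a plain MC sampler cannot do directly. A bivariate Gaussian stands in for the vine copula model, and the crisis threshold, step size, and starting point are assumptions; NUTS is not shown:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder joint model of two asset returns (the thesis uses a vine copula;
# a bivariate Gaussian keeps the sketch short).
R = np.array([[1.0, 0.7], [0.7, 1.0]])
target = stats.multivariate_normal([0, 0], R)
crisis = lambda x: x.sum() < -4.0          # crisis event: large joint loss

def metropolis_hastings(n_samples, step=0.5):
    """Random-walk MH targeting the joint density restricted to the crisis set."""
    x = np.array([-2.5, -2.5])             # start inside the crisis region
    out, accepted = [], 0
    for _ in range(n_samples):
        prop = x + rng.normal(0, step, size=2)
        if crisis(prop):                   # zero density outside the crisis set
            log_ratio = target.logpdf(prop) - target.logpdf(x)
            if np.log(rng.uniform()) < log_ratio:
                x, accepted = prop, accepted + 1
        out.append(x.copy())
    print(f"acceptance rate: {accepted / n_samples:.2f}")
    return np.array(out)

samples = metropolis_hastings(20_000)[5_000:]   # drop burn-in
print("mean crisis-conditional loss:", samples.sum(axis=1).mean())
```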
|