
Performance analysis of spectrum sensing techniques for cognitive radio systems

Gismalla Yousif, Ebtihal January 2013 (has links)
Cognitive radio is a technology that aims to maximize the usage of the licensed frequency spectrum. It aims to provide services to license-exempt users by making use of dynamic spectrum access (DSA) and opportunistic spectrum sharing (OSS) strategies. Cognitive radios are defined as intelligent wireless devices capable of adapting their communication parameters in order to operate within underutilized bands while avoiding interference to licensed users. An underused band of frequencies in a specific location or time is known as a spectrum hole. Reliable spectrum sensing algorithms for locating spectrum holes are therefore crucial to the evolution of cognitive radio networks. Since a large and growing body of literature has focused mainly on the conventional time-domain (TD) energy detector, throughout this thesis the problem of spectrum sensing is investigated within the context of a frequency-domain (FD) approach. The purpose of this study is to investigate detection based on methods of nonparametric power spectrum estimation. The methods considered are the periodogram, Bartlett's method, Welch's overlapped segment averaging (WOSA) and the multitaper estimator (MTE). A further motivation is that the MTE has been strongly recommended for cognitive radio applications. This study aims to derive the detector performance measures for each case, and to investigate and highlight the main differences between the TD and FD approaches. The performance is addressed for independent and identically distributed (i.i.d.) Rayleigh channels and the more general Rician and Nakagami fading channels. For each of the investigated detectors, the analytical models are obtained by studying the characteristics of the Hermitian quadratic form representation of the decision statistic, for which the matrix of the Hermitian form is identified. The results of the study reveal the high accuracy of the derived mathematical models. Moreover, the TD detector is found to differ from the FD detectors in a number of aspects. One principal and general conclusion is that all the investigated FD methods provide a reduced probability of false alarm when compared with the TD detector. Also, for the periodogram, the probability of sensing errors is independent of the length of observations, whereas in the time domain the probability of false alarm increases with the sample size. The probability of false alarm is further reduced when diversity reception is employed. Furthermore, compared with the periodogram, both Bartlett's method and Welch's method provide better performance in terms of a lower probability of false alarm and an increased probability of detection for a given probability of false alarm. The performance of both Bartlett's method and WOSA is sensitive to the number of segments, and WOSA is also sensitive to the overlapping factor. Finally, the performance of the MTE depends on the number of employed discrete prolate spheroidal (Slepian) sequences; the MTE outperforms the periodogram, Bartlett's method and WOSA, as it provides the minimal probability of false alarm.
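The abstract contrasts time-domain and frequency-domain energy detection. The sketch below illustrates the basic mechanics in Python: a time-domain energy statistic thresholded via its chi-square null distribution, alongside a Welch (WOSA) frequency-domain statistic. It is only a schematic illustration of the detection problem, not the thesis's Hermitian-quadratic-form analysis, and the SNR, sample size and false-alarm target are assumed values.

import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
n_samples, n_trials, snr_db, noise_var = 1024, 2000, -8.0, 1.0
sig_amp = np.sqrt(noise_var * 10 ** (snr_db / 10))

def td_statistic(x):
    # Conventional time-domain energy detector statistic.
    return np.sum(np.abs(x) ** 2)

def fd_statistic(x, nperseg=128):
    # Frequency-domain statistic: total power from Welch's (WOSA) PSD estimate.
    _, pxx = signal.welch(x, nperseg=nperseg, noverlap=nperseg // 2)
    return pxx.sum()

# TD threshold for a target false-alarm probability: under H0 the statistic is
# noise_var times a chi-square variable with n_samples degrees of freedom.
p_fa_target = 0.05
td_threshold = noise_var * stats.chi2.ppf(1 - p_fa_target, df=n_samples)

# H0: noise only.  H1: weak sinusoidal primary-user signal plus noise.
h0 = rng.normal(0.0, np.sqrt(noise_var), size=(n_trials, n_samples))
h1 = sig_amp * np.cos(2 * np.pi * 0.1 * np.arange(n_samples)) + h0

p_fa = np.mean([td_statistic(x) > td_threshold for x in h0])
p_d = np.mean([td_statistic(x) > td_threshold for x in h1])
print(f"TD detector: empirical P_fa = {p_fa:.3f}, P_d = {p_d:.3f}")

# The Welch statistic also separates H0 from H1, but its threshold requires its
# own null distribution -- precisely what the thesis derives analytically.
print("mean FD statistic:", np.mean([fd_statistic(x) for x in h0]),
      "(H0) vs", np.mean([fd_statistic(x) for x in h1]), "(H1)")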

Non-Parametric Clustering of Multivariate Count Data

Tekumalla, Lavanya Sita January 2017 (has links) (PDF)
The focus of this thesis is models for non-parametric clustering of multivariate count data. While there has been significant work in Bayesian non-parametric modelling in the last decade, in the context of mixture models for real-valued data and some forms of discrete data such as multinomial mixtures, there has been much less work on non-parametric clustering of multivariate count data. The main challenges in clustering multivariate counts include choosing a suitable multivariate distribution that adequately captures the properties of the data, for instance handling over-dispersed or sparse multivariate data, while at the same time leveraging the inherent dependency structure between dimensions and across instances to get meaningful clusters. As the first contribution, this thesis explores extensions to the Multivariate Poisson distribution, proposing efficient algorithms for non-parametric clustering of multivariate count data. While the Poisson is the most popular distribution for count modelling, the Multivariate Poisson often leads to intractable inference and a suboptimal fit of the data. To address this, we introduce a family of models based on the Sparse Multivariate Poisson that exploit the inherent sparsity in multivariate data, reducing the number of latent variables in the formulation of the Multivariate Poisson and leading to a better fit and more efficient inference. We explore Dirichlet process mixture model extensions and temporal non-parametric extensions to models based on the Sparse Multivariate Poisson for practical use of Poisson-based models for non-parametric clustering of multivariate counts in real-world applications. As a second contribution, this thesis addresses moving beyond the limitations of Poisson-based models for non-parametric clustering, for instance in handling over-dispersed data or data with negative correlations. We explore, for the first time, marginal-independent inference techniques based on the Gaussian copula for multivariate count data in the Dirichlet process mixture model setting. This enables non-parametric clustering of multivariate counts without limiting assumptions that usually restrict the marginals to belong to a particular family, such as the Poisson or the negative binomial. This inference technique can also work for mixed data (combinations of count, binary and continuous data), enabling Bayesian non-parametric modelling to be used for a wide variety of data types. As the third contribution, this thesis addresses modelling a wider range of more complex dependencies, such as asymmetric and tail dependencies, during non-parametric clustering of multivariate count data with vine copula based Dirichlet process mixtures. While vine copula inference has been well explored for continuous data, it is still a topic of active research for multivariate counts and mixed multivariate data. Inference for multivariate counts and mixed data is a hard problem owing to the ties that arise with discrete marginals. An efficient marginal-independent inference approach based on the extended rank likelihood, building on recent work in the statistics literature, is proposed in this thesis, extending the use of vines for multivariate counts and mixed data in practical clustering scenarios. This thesis also explores a novel systems application, Bulk Cache Preloading, by analysing I/O traces through predictive models for temporal non-parametric clustering of multivariate count data.
State-of-the-art techniques in the caching domain are limited to exploiting short-range correlations in memory accesses at millisecond or smaller granularity and cannot leverage long-range correlations in traces. We explore, for the first time, Bulk Cache Preloading: the process of proactively predicting data to load into cache, minutes or hours before the actual request from the application, by leveraging longer-range correlations at the granularity of minutes or hours. This enables the development of machine learning techniques tailored for caching, owing to the relaxed timing constraints. Our approach involves a data aggregation process, converting I/O traces into a temporal sequence of multivariate counts, which we analyse with the temporal non-parametric clustering models proposed in this thesis. While the focus of our thesis is models for non-parametric clustering of discrete data, particularly multivariate counts, we also hope our work on bulk cache preloading paves the way to more interdisciplinary research on using data mining techniques in the systems domain. As an additional contribution, this thesis addresses multi-level non-parametric admixture modelling for discrete data in the form of grouped categorical data, such as document collections. Non-parametric clustering for topic modelling in document collections, where a document is associated with an unknown number of semantic themes or topics, is well explored with admixture models such as the Hierarchical Dirichlet Process. However, there exist scenarios where a document requires being associated with themes at multiple levels, where each theme is itself an admixture over themes at the previous level, motivating the need for multi-level admixtures. Consider the example of non-parametric entity-topic modelling, which simultaneously learns entities and topics from document collections. This can be realized by modelling a document as an admixture over entities, while entities could themselves be modelled as admixtures over topics. We propose the nested Hierarchical Dirichlet Process to address this gap and apply a two-level version of our model to automatically learn author entities and topics from research corpora.
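As a concrete baseline for the kind of model the thesis builds on, the following sketch fits a Dirichlet process mixture of independent Poisson likelihoods to multivariate counts with collapsed Gibbs sampling (Chinese restaurant process representation, conjugate Gamma priors). It is not the Sparse Multivariate Poisson, copula or temporal models proposed in the thesis, and the concentration alpha, the Gamma hyperparameters and the toy data are all assumed.

import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

def log_predictive(x, stat_sum, stat_n, a=1.0, b=1.0):
    # Log posterior predictive of a count vector under independent Poisson-Gamma
    # dimensions (a negative binomial per dimension), given a cluster's
    # per-dimension count sums and its size.
    a_post, b_post = a + stat_sum, b + stat_n
    return np.sum(gammaln(x + a_post) - gammaln(a_post) - gammaln(x + 1.0)
                  + a_post * np.log(b_post / (b_post + 1.0))
                  - x * np.log(b_post + 1.0))

def dpmm_gibbs(X, alpha=1.0, n_iter=50):
    n, d = X.shape
    z = np.zeros(n, dtype=int)                    # start with one big cluster
    sums, sizes = [X.sum(axis=0).astype(float)], [float(n)]
    for _ in range(n_iter):
        for i in range(n):
            k = z[i]
            sums[k] -= X[i]; sizes[k] -= 1        # remove point i from its cluster
            if sizes[k] == 0:                     # drop the emptied cluster
                sums.pop(k); sizes.pop(k); z[z > k] -= 1
            logp = [np.log(sizes[j]) + log_predictive(X[i], sums[j], sizes[j])
                    for j in range(len(sums))]
            logp.append(np.log(alpha) + log_predictive(X[i], np.zeros(d), 0.0))
            logp = np.array(logp)
            p = np.exp(logp - logp.max()); p /= p.sum()
            k_new = rng.choice(len(p), p=p)
            if k_new == len(sums):                # CRP: open a new cluster
                sums.append(np.zeros(d)); sizes.append(0.0)
            z[i] = k_new
            sums[k_new] += X[i]; sizes[k_new] += 1
    return z

# Toy data: two latent groups of 3-dimensional counts with different rates.
X = np.vstack([rng.poisson([2, 15, 1], size=(60, 3)),
               rng.poisson([12, 2, 8], size=(60, 3))])
labels = dpmm_gibbs(X)
print("number of clusters found:", len(np.unique(labels)))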

Modelagem de séries temporais financeiras multidimensionais via processos estocásticos e cópulas de Lévy / Multidimensional Financial Time Series Modelling via Lévy Stochastic Processes and Copulas

Edson Bastos e Santos 16 December 2005 (has links)
O principal objetivo deste estudo é descrever modelos para séries temporais de ativos financeiros que sejam robustos às tradicionais hipóteses: distribuição gaussiana e continuidade. O primeiro capítulo está preocupado em apresentar, de uma maneira geral, os conceitos matemáticos mais importantes relacionados a processos estocásticos e difusões. O segundo capítulo trata de processos de incrementos independentes e estacionários, i.e., processos de Lévy, suas trajetórias estocásticas, propriedades distribucionais e a relação entre processos markovianos e martingales. Alguns dos resultados apresentados neste capítulo são: a estrutura e as propriedades dos processos compostos de Poisson, medida de Lévy, decomposição de Lévy-Itô e representação de Lévy-Khinchin. O terceiro capítulo mostra como construir processos de Lévy por meio de transformações lineares, inclinação da medida de Lévy e subordinação. Uma atenção especial é dada aos processos subordinados, tais como os modelos variância gama, normal gaussiana invertida e hiperbólico generalizado. Neste capítulo também é apresentado um exemplo pragmático com dados brasileiros de estimação de parâmetros por meio do método de máxima verossimilhança. O quarto capítulo é devotado aos modelos multidimensionais e introduz os conceitos de cópula ordinária e de Lévy. Mostra-se que é possível caracterizar a dependência entre os componentes de um processo de Lévy multidimensional por meio da cópula de Lévy. Entre os resultados apresentados estão as generalizações do teorema de Sklar e da família de cópulas de Arquimedes aos processos de Lévy. Este capítulo também apresenta alguns exemplos que utilizam métodos de Monte Carlo para simular processos de Lévy bidimensionais. / The main objective of this study is to describe models for financial asset time series that are robust to the traditional hypotheses: Gaussian distribution and continuity. The first chapter is devoted to introducing the most important mathematical tools related to diffusions and stochastic processes in general. The second chapter is concerned with the study of processes with independent and stationary increments, i.e., Lévy processes, their sample path behavior, their distributional properties, and their relation to Markov processes and martingales. Some of the results presented are the structure and properties of compound Poisson processes, the Lévy measure, the Lévy-Itô decomposition and the Lévy-Khinchin representation. The third chapter demonstrates how to construct Lévy processes via linear transformation, tempering of the Lévy measure and subordination. Special attention is given to several types of subordinated processes, comprising the variance gamma, the normal inverse Gaussian and the generalized hyperbolic models. A pragmatic example of parameter estimation for Brazilian data using the method of maximum likelihood is also given. Chapter four is devoted to multidimensional models and introduces the notions of ordinary and Lévy copulas. It is shown that, by modelling with a Lévy copula, it is possible to characterize the dependence among the components of multidimensional Lévy processes. Some of the results presented are generalizations of Sklar's theorem and of the Archimedean family of copulas to Lévy processes. This chapter also presents some examples using Monte Carlo methods for simulating bidimensional Lévy processes.
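To make the subordination construction of the third chapter concrete, the sketch below simulates a variance gamma path as Brownian motion with drift evaluated at a gamma subordinator. It is a generic illustration rather than the thesis's estimation exercise, and the parameter values (theta, sigma, nu) are assumed.

import numpy as np

rng = np.random.default_rng(42)

def variance_gamma_path(T=1.0, n_steps=1000, theta=-0.1, sigma=0.2, nu=0.3):
    # X_t = theta * G_t + sigma * W(G_t), where G is a gamma subordinator
    # with unit mean rate and variance rate nu.
    dt = T / n_steps
    # Gamma subordinator increments: mean dt, variance nu * dt.
    dG = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)
    # Brownian motion with drift evaluated on the random time increments.
    dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n_steps)
    return np.concatenate([[0.0], np.cumsum(dX)])

path = variance_gamma_path()
print("terminal value of the simulated VG path:", path[-1])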

Nonparametric estimation of the dependence function for multivariate extreme value distributions / Estimation non paramétrique de la fonction de dépendance des distributions multivariées à valeurs extrêmes

Ayari, Samia 01 December 2016 (has links)
Dans cette thèse, nous abordons l'estimation non paramétrique de la fonction de dépendance des distributions multivariées à valeurs extrêmes. Dans une première partie, on adopte l'hypothèse classique stipulant que les variables aléatoires sont indépendantes et identiquement distribuées (i.i.d.). Plusieurs estimateurs non paramétriques sont comparés pour une fonction de dépendance trivariée de type logistique dans deux différents cas. Dans le premier cas, on suppose que les fonctions marginales sont des distributions généralisées à valeurs extrêmes. La distribution marginale est remplacée par la fonction de répartition empirique dans le deuxième cas. Les résultats des simulations Monte Carlo montrent que l'estimateur de Gudendorf-Segers (Gudendorf et Segers, 2011) est plus efficient que les autres estimateurs pour différentes tailles de l'échantillon. Dans une deuxième partie, on abandonne l'hypothèse i.i.d. vu qu'elle n'est pas vérifiée dans l'analyse des séries temporelles. Dans le cadre univarié, on examine le comportement extrémal d'un modèle autorégressif gaussien stationnaire. Dans le cadre multivarié, on développe un nouveau théorème qui porte sur la convergence asymptotique de l'estimateur de Pickands vers la fonction de dépendance théorique. Ce fondement théorique est vérifié empiriquement dans les cas d'indépendance et de dépendance asymptotique. Dans la dernière partie de la thèse, l'estimateur de Gudendorf-Segers est utilisé pour modéliser la structure de dépendance des concentrations extrêmes d'ozone observées dans les stations qui enregistrent des dépassements de la valeur guide et de la valeur limite de la norme tunisienne de la qualité de l'air NT.106.04. / In this thesis, we investigate the nonparametric estimation of the dependence function for multivariate extreme value distributions. Firstly, we assume independent and identically distributed (i.i.d.) random variables. Several nonparametric estimators are compared for a trivariate dependence function of logistic type in two different cases. In a first analysis, we suppose that the marginal functions are generalized extreme value distributions. In a second investigation, we replace the marginal function with the empirical distribution function. Monte Carlo simulations show that the Gudendorf-Segers estimator (Gudendorf and Segers, 2011) outperforms the other estimators for different sample sizes. Secondly, we drop the i.i.d. assumption, as it is not verified in time series analysis. In the univariate framework, we examine the extremal behavior of a stationary Gaussian autoregressive process. In the multivariate setting, we prove the asymptotic consistency of the Pickands dependence function estimator. This theoretical finding is confirmed by empirical investigations in the asymptotic independence case as well as the asymptotic dependence case. Finally, the Gudendorf-Segers estimator is used to model the dependence structure of extreme ozone concentrations at locations that record several exceedances of both the guideline and limit values of the Tunisian air quality standard NT.106.04.
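To make the object being estimated concrete, the sketch below computes the classical rank-based Pickands (1981) estimator of a bivariate extreme value dependence function A(t) from pseudo-observations. The thesis works with trivariate logistic models and the Gudendorf-Segers estimator; this simpler, endpoint-uncorrected bivariate estimator and the two simulated cases (independence, with A(t) = 1, and comonotonicity, with A(t) = max(t, 1 - t)) are used here only because their true dependence functions are known.

import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(7)

def pickands_estimate(u, v, t):
    # Rank-based Pickands estimator: 1 / A_hat(t) = mean(min(S / (1 - t), T / t)),
    # with S = -log(U), T = -log(V) computed from pseudo-observations.
    s, w = -np.log(u), -np.log(v)
    t = np.atleast_1d(t)
    xi = np.minimum(s[None, :] / (1.0 - t[:, None]), w[None, :] / t[:, None])
    return 1.0 / xi.mean(axis=1)

def pseudo_obs(x):
    # Empirical-margin transform to (0, 1), avoiding exact 0 and 1.
    return rankdata(x) / (len(x) + 1.0)

n = 5000
t_grid = np.array([0.2, 0.5, 0.8])

x_ind, y_ind = rng.random(n), rng.random(n)   # independence: A(t) = 1
x_dep = rng.random(n); y_dep = x_dep          # comonotonicity: A(t) = max(t, 1 - t)

print("independence:", pickands_estimate(pseudo_obs(x_ind), pseudo_obs(y_ind), t_grid))
print("comonotone:  ", pickands_estimate(pseudo_obs(x_dep), pseudo_obs(y_dep), t_grid))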

Frequency Analysis of Floods - A Nonparametric Approach

Santhosh, D January 2013 (has links) (PDF)
Floods cause widespread damage to property and life in different parts of the world. Hence there is a paramount need to develop effective methods for design flood estimation to alleviate the risk associated with these extreme hydrologic events. Methods that are conventionally considered for analysis of floods focus on estimation of a continuous frequency relationship between the peak flow observed at a location and its corresponding exceedance probability, depicting the plausible conditions in the planning horizon. These methods are commonly known as at-site flood frequency analysis (FFA) procedures. The available FFA procedures can be classified as parametric and nonparametric. Parametric methods are based on the assumption that the sample (at-site data) is drawn from a population with a known probability density function (PDF). Those procedures have uncertainty associated with the choice of PDF and the method for estimation of its parameters. Moreover, parametric methods are ineffective in modeling flood data if multimodality is evident in their PDF. To overcome those drawbacks, a few studies have attempted kernel-based nonparametric (NP) methods as an alternative to parametric methods. The NP methods are data driven and can characterize the uncertainty in data without prior assumptions as to the form of the PDF. Conventional kernel methods have shortcomings associated with the boundary leakage problem and the normal reference rule (considered for estimation of bandwidth), which have implications for flood quantile estimates. To alleviate this problem, the focus of NP flood frequency analysis has been on development of new kernel density estimators (kdes). Another issue in FFA is that information on the whole hydrograph (e.g., time to the peak flow, volume of the flood flow and duration of the flood event), in addition to the peak flow, is needed for certain applications. An option is to perform frequency analysis on each of the variables independently. However, these variables are not independent, and hence there is a need to perform multivariate analysis to construct multivariate PDFs and use the corresponding cumulative distribution functions (CDFs) to arrive at estimates of the characteristics of the design flood hydrograph. In this perspective, the recent focus of flood frequency analysis studies has been on development of methods to derive joint distributions of flood hydrograph related variables in a nonparametric setting. Further, in real-world scenarios, it is often necessary to estimate design flood quantiles at target locations that have limited or no data. Regional flood frequency analysis (RFFA) procedures have been developed for use in such situations. These procedures involve a regionalization procedure for identification of a homogeneous group of watersheds that are similar to the watershed of the target site in terms of flood response. Subsequently, regional frequency analysis (RFA) is performed, wherein the information pooled from the group (region) forms the basis for frequency analysis to construct a CDF (growth curve) that is subsequently used to arrive at quantile estimates at the target site. Though there are various procedures for RFFA, they are largely confined to a univariate framework, considering a parametric approach as the basis to arrive at the required quantile estimates.
Motivated by these findings, this thesis concerns the development of linear diffusion process based adaptive kernel density estimator (D-kde) methodologies for at-site as well as regional FFA in univariate as well as bivariate settings. The D-kde alleviates the boundary leakage problem and also avoids the normal reference rule while estimating the optimal bandwidth, by using the Botev-Grotowski-Kroese estimator (BGKE). The potential of the proposed methodologies in both univariate and bivariate settings is demonstrated by application to synthetic data sets of various sizes drawn from known unimodal and bimodal parametric populations, and to real-world data sets from India, the USA, the United Kingdom and Canada. In the context of at-site univariate FFA (considering peak flows), the performance of D-kde was found to be better when compared to four parametric distribution based methods (generalized extreme value, generalized logistic, generalized Pareto, generalized normal), thirty-two 'kde and bandwidth estimator' combinations that resulted from application of four commonly used kernels in conjunction with eight bandwidth estimators, and a local polynomial based estimator. In the context of at-site bivariate FFA considering 'peak flow-flood volume' and 'flood duration-flood volume' bivariate combinations, the proposed D-kde based methodology was shown to be effective when compared to seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t copulas) and the Gaussian kernel in conjunction with conventional as well as BGKE bandwidth estimators. Sensitivity analysis indicated that selection of the optimum number of bins is critical in implementing D-kde in the bivariate setting. In the context of univariate regional flood frequency analysis (RFFA) considering peak flows, a methodology based on D-kde and index-flood methods is proposed and its performance is shown to be better than that of the widely used L-moment and index-flood based method (the 'regional L-moment algorithm') through Monte Carlo simulation experiments on homogeneous as well as heterogeneous synthetic regions, and through a leave-one-out cross-validation experiment performed on data sets pertaining to 54 watersheds in the Godavari river basin, India. In this context, four homogeneous groups of watersheds are delineated in the Godavari river basin using kernel principal component analysis (KPCA) in conjunction with fuzzy c-means cluster analysis in the L-moment framework, as an improvement over the heterogeneous regions in the river basin that are currently being considered by the Central Water Commission, India. In the context of bivariate RFFA, two methods are proposed. They involve forming site-specific pooling groups (regions) based on either an L-moment based bivariate homogeneity test (R-BHT) or a bivariate Kolmogorov-Smirnov test (R-BKS), and RFA based on D-kde. Their performance is assessed by application to data sets pertaining to stations in the conterminous United States. Results indicate that the R-BKS method is better than R-BHT in predicting quantiles of bivariate flood characteristics at ungauged sites, although the size of the pooling groups formed using R-BKS is, in general, smaller than that of those formed using R-BHT. In general, the performance of the methods is found to improve with increase in the size of the pooling groups.
Overall, the results indicate that the D-kde always yields a bona fide PDF (and CDF) in the context of univariate as well as bivariate flood frequency analysis, as the probability density is non-negative for all data points and integrates to unity over the valid range of the data. The performance of D-kde based at-site as well as regional FFA methodologies is found to be effective in univariate as well as bivariate settings, irrespective of the nature of the population and the sample size. A primary assumption underlying conventional FFA procedures has been that the time series of peak flow is stationary (temporally homogeneous). However, recent studies carried out in various parts of the world question the assumption of flood stationarity. In this perspective, a Time Varying Gaussian Copula (TVGC) based methodology is proposed in the thesis for flood frequency analysis in the bivariate setting, which allows relaxing the assumption of stationarity in flood related variables. It is shown to be more effective than seven commonly used stationary copulas through Monte Carlo simulation experiments and by application to data sets pertaining to stations in the conterminous United States for which the null hypothesis that the peak flow data were non-stationary cannot be rejected.
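As a small illustration of the nonparametric at-site idea (though with a different estimator), the sketch below builds a kernel density estimate of annual peak flows, integrates it to a CDF and inverts it for T-year return levels. The thesis uses the diffusion-based D-kde with the Botev-Grotowski-Kroese bandwidth; scipy's Gaussian KDE with its default bandwidth is only a stand-in here, and the synthetic peak-flow sample is assumed.

import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

rng = np.random.default_rng(3)

# Synthetic annual maximum flows (m^3/s) from a lognormal-like population.
peak_flows = rng.lognormal(mean=6.0, sigma=0.5, size=60)

kde = gaussian_kde(peak_flows)          # kernel density estimate of peak flows

def cdf(q):
    # Nonparametric CDF: integral of the KDE up to q.
    return kde.integrate_box_1d(-np.inf, q)

def return_level(T_years):
    # Flow quantile with non-exceedance probability 1 - 1/T.
    p = 1.0 - 1.0 / T_years
    lo, hi = peak_flows.min() * 0.5, peak_flows.max() * 3.0
    return brentq(lambda q: cdf(q) - p, lo, hi)

for T in (10, 50, 100):
    print(f"{T:>3}-year flood estimate: {return_level(T):8.1f} m^3/s")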

Modeling sea-level rise uncertainties for coastal defence adaptation using belief functions / Utilisation des fonctions de croyance pour la modélisation des incertitudes dans les projections de l'élévation du niveau marin pour l'adaptation côtière

Ben Abdallah, Nadia 12 March 2014 (has links)
L’adaptation côtière est un impératif pour faire face à l’élévation du niveau marin, conséquence directe du réchauffement climatique. Cependant, la mise en place d’actions et de stratégies est souvent entravée par la présence de diverses et importantes incertitudes lors de l’estimation des aléas et risques futurs. Ces incertitudes peuvent être dues à une connaissance limitée (de l’élévation du niveau marin futur par exemple) ou à la variabilité naturelle de certaines variables (les conditions de mer extrêmes). La prise en compte des incertitudes dans la chaîne d’évaluation des risques est essentielle pour une adaptation efficace. L’objectif de ce travail est de proposer une méthodologie pour la quantification des incertitudes basée sur les fonctions de croyance – un formalisme de l’incertain plus flexible que les probabilités. Les fonctions de croyance nous permettent de décrire plus fidèlement l’information incomplète fournie par des experts (quantiles, intervalles, etc.), et de combiner différentes sources d’information. L’information statistique peut quant à elle être décrite par des fonctions de croyance définies à partir de la fonction de vraisemblance. Pour la propagation d’incertitudes, nous exploitons l’équivalence mathématique entre fonctions de croyance et intervalles aléatoires, et procédons par échantillonnage Monte Carlo. La méthodologie est appliquée à l’estimation des projections de la remontée du niveau marin global à la fin du siècle issues de la modélisation physique, d’élicitation d’avis d’experts, et de modèle semi-empirique. Ensuite, dans une étude de cas, nous évaluons l’impact du changement climatique sur les conditions de mer extrêmes et évaluons le renforcement nécessaire d’une structure afin de maintenir son niveau de performance fonctionnelle. / Coastal adaptation is an imperative to deal with the elevation of the global sea level caused by the ongoing global warming. However, when defining adaptation actions, coastal engineers encounter substantial uncertainties in the assessment of future hazards and risks. These uncertainties may stem from limited knowledge (e.g., about the magnitude of the future sea-level rise) or from the natural variability of some quantities (e.g., extreme sea conditions). A proper consideration of these uncertainties is of principal concern for efficient design and adaptation. The objective of this work is to propose a methodology for uncertainty analysis based on the theory of belief functions – an uncertainty formalism that offers greater flexibility than probabilities for handling both aleatory and epistemic uncertainties. In particular, it allows representing experts’ incomplete knowledge (quantiles, intervals, etc.) more faithfully and combining multi-source evidence while taking into account their dependences and reliabilities. Statistical evidence can be modeled by likelihood-based belief functions, which are simply the translation of some inference principles into evidential terms. By exploiting the mathematical equivalence between belief functions and random intervals, uncertainty can be propagated through models by Monte Carlo simulations. We use this method to quantify uncertainty in future projections of the elevation of the global sea level by 2100 and evaluate its impact on some coastal risk indicators used in coastal design. Sea-level rise projections are derived from physical modelling, expert elicitation, and historical sea-level measurements.
Then, within a methodologically oriented case study, we assess the impact of climate change on extreme sea conditions and evaluate the reinforcement of a typical coastal defence asset so that its functional performance is maintained.
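A minimal sketch of the propagation step described above: expert evidence on sea-level rise is encoded as a mass function over intervals, the intervals are sampled and pushed through a monotone impact model, and the belief and plausibility of a design event are read off the resulting random intervals. The focal intervals, masses, the toy overtopping allowance and the 0.2 m event threshold are all invented for illustration; only the mechanics mirror the described belief-function methodology.

import numpy as np

rng = np.random.default_rng(11)

# Expert evidence on global sea-level rise by 2100 (metres), encoded as a
# mass function whose focal elements are intervals (all values assumed).
focal_intervals = np.array([[0.2, 0.6],    # "likely" range
                            [0.0, 1.0],    # vaguer, more conservative range
                            [0.5, 1.2]])   # high-end scenario
masses = np.array([0.5, 0.3, 0.2])

def required_raise(slr):
    # Toy, monotone impact model: crest raising (m) needed to keep a 1.0 m
    # freeboard given sea-level rise plus a 0.3 m wave allowance.
    return np.maximum(0.0, slr + 0.3 - 1.0)

n_mc = 100_000
idx = rng.choice(len(masses), size=n_mc, p=masses)      # sample focal intervals
lo, hi = focal_intervals[idx, 0], focal_intervals[idx, 1]

# Propagate interval bounds through the monotone model.
out_lo, out_hi = required_raise(lo), required_raise(hi)

# Belief / plausibility that a raise of more than 0.2 m is needed.
event = 0.2
belief = np.mean(out_lo > event)         # interval entirely inside the event
plausibility = np.mean(out_hi > event)   # interval intersects the event
print(f"Bel(raise > {event} m) = {belief:.3f}, Pl = {plausibility:.3f}")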

Simulation-Based Portfolio Optimization with Coherent Distortion Risk Measures / Simuleringsbaserad portföljoptimering med koherenta distortionsriskmått

Prastorfer, Andreas January 2020 (has links)
This master's thesis studies portfolio optimization using linear programming algorithms. The contribution of this thesis is an extension of the convex framework for portfolio optimization with Conditional Value-at-Risk introduced by Rockafellar and Uryasev. The extended framework considers risk measures belonging to the intersection of the classes of coherent risk measures and distortion risk measures, which are known as coherent distortion risk measures. The risk measures from this class considered in the thesis are the Conditional Value-at-Risk, the Wang Transform, the Block Maxima and the Dual Block Maxima measures. The extended portfolio optimization framework is applied to a reference portfolio consisting of stocks, options and a bond index. All assets are from the Swedish market. The returns of the assets in the reference portfolio are modelled with elliptical distributions and normal copulas with asymmetric marginal return distributions. The portfolio optimization framework is a simulation-based framework that measures risk using scenarios simulated from the assumed portfolio distribution model. To model the return data with asymmetric distributions, the tails of the marginal distributions are fitted with generalized Pareto distributions, and the dependence structure between the assets is captured using a normal copula. The results obtained from the optimizations are compared across the different distributional return assumptions of the portfolio and the four risk measures. A Markowitz solution to the problem is computed using the mean average deviation as the risk measure. This solution is the benchmark against which the optimal solutions obtained using the coherent distortion risk measures are compared. The coherent distortion risk measures have the tractable property of being able to assign user-defined weights to different parts of the loss distribution and hence to value increasing loss severities as greater risks. The user-defined loss-weighting property and the asymmetric return distribution models are used to find optimal portfolios that account for extreme losses. An important finding of this project is that optimal solutions for asset returns simulated from asymmetric distributions are associated with greater risks, which is a consequence of more accurate modelling of the distribution tails. Furthermore, weighting larger losses with increasingly larger weights shows that the portfolio risk is greater, and a safer position is taken. / Denna masteruppsats behandlar portföljoptimering med linjära programmeringsalgoritmer. Bidraget av uppsatsen är en utvidgning av det konvexa ramverket för portföljoptimering med Conditional Value-at-Risk, som introducerades av Rockafellar och Uryasev. Det utvidgade ramverket behandlar riskmått som tillhör en sammansättning av den koherenta riskmåttklassen och distortionsriskmåttklassen. Denna klass benämns som koherenta distortionsriskmått. De riskmått som tillhör denna klass och behandlas i uppsatsen är Conditional Value-at-Risk, Wang Transformen, Block Maxima och Dual Block Maxima måtten. Det utvidgade portföljoptimeringsramverket appliceras på en referensportfölj bestående av aktier, optioner och ett obligationsindex från den svenska aktiemarknaden. Tillgångarnas avkastningar i referensportföljen modelleras med både elliptiska fördelningar och normal-copula med asymmetriska marginalfördelningar.
Portföljoptimeringsramverket är ett simuleringsbaserat ramverk som mäter risk baserat på scenarion simulerade från fördelningsmodellen som antagits för portföljen. För att modellera tillgångarnas avkastningar med asymmetriska fördelningar modelleras marginalfördelningarnas svansar med generaliserade Paretofördelningar, och en normal-copula modellerar det ömsesidiga beroendet mellan tillgångarna. Resultaten av portföljoptimeringarna jämförs sinsemellan för de olika portföljernas avkastningsantaganden och de fyra riskmåtten. Problemet löses även med Markowitz-optimering där "mean average deviation" används som riskmått. Denna lösning är den "benchmarklösning" som de optimala lösningarna, beräknade med de koherenta distortionsriskmåtten, jämförs mot. De koherenta distortionsriskmåtten har den speciella egenskapen att användarspecificerade vikter kan anges för olika delar av förlustfördelningen, och de kan därför värdera mer extrema förluster som större risker. Den användardefinierade viktningsegenskapen hos riskmåtten studeras i kombination med den asymmetriska fördelningsmodellen för att utforska portföljer som tar extrema förluster i beaktande. En viktig upptäckt är att optimala lösningar för avkastningar som är modellerade med asymmetriska fördelningar är associerade med ökad risk, vilket är en konsekvens av mer exakt modellering av tillgångarnas fördelningssvansar. En annan upptäckt är att om större vikter läggs på högre förluster så ökar portföljrisken och en säkrare portföljstrategi antas.
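For reference, the sketch below sets up the Rockafellar-Uryasev linear program for simulation-based minimum-CVaR portfolio selection, the convex framework that the thesis extends to general coherent distortion risk measures. The four assets, their multivariate normal scenario model and the return target are assumed for illustration; the thesis instead simulates scenarios from copula models with generalized Pareto tails and also optimizes the other distortion measures.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)

n_assets, n_scen, alpha = 4, 1000, 0.95
mu = np.array([0.08, 0.06, 0.10, 0.03]) / 252          # daily mean returns (assumed)
cov = np.diag([0.20, 0.15, 0.30, 0.05]) ** 2 / 252      # daily covariance (diagonal here)
scenarios = rng.multivariate_normal(mu, cov, size=n_scen)   # simulated return scenarios

# Decision vector: [w_1..w_n, zeta, u_1..u_N].
c = np.concatenate([np.zeros(n_assets), [1.0],
                    np.full(n_scen, 1.0 / ((1.0 - alpha) * n_scen))])

# Scenario constraints: -r_i @ w - zeta - u_i <= 0  (i.e. u_i >= loss_i - zeta).
A_ub = np.hstack([-scenarios, -np.ones((n_scen, 1)), -np.eye(n_scen)])
b_ub = np.zeros(n_scen)

# Target expected return: -mu @ w <= -target  (i.e. mu @ w >= target).
target = 0.07 / 252
A_ub = np.vstack([A_ub, np.concatenate([-mu, [0.0], np.zeros(n_scen)])])
b_ub = np.append(b_ub, -target)

# Fully invested, long-only portfolio.
A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * n_assets + [(None, None)] + [(0.0, None)] * n_scen

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds,
              method="highs")
weights, cvar = res.x[:n_assets], res.fun
print("optimal weights:", np.round(weights, 3), " daily CVaR_95:", round(cvar, 5))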
