Étude empirique de distributions associées à la Fonction de Pénalité Escomptée (Empirical study of distributions associated with the Expected Discounted Penalty Function). Ibrahim, Rabï, 03 1900.
We present a new simulation approach for the joint density function of the surplus prior to ruin and the deficit at ruin, for risk models driven by Lévy subordinators. The approach is inspired by the ladder height decomposition of the probability of ruin. The Classical Risk Model, driven by a compound Poisson process, is a particular case of the more general subordinator-driven model, and the ladder height decomposition of the ruin probability applies to it as well.

The Expected Discounted Penalty Function, also referred to as the Gerber-Shiu Function (GS Function), was introduced as a unifying approach to the study of quantities related to the event of ruin. The probability of ruin and the joint density function of the surplus prior to ruin and the deficit at ruin are particular cases of the GS Function. Expressions for both quantities appear in the literature, but they are hard to evaluate and to handle, being infinite series of convolutions with no closed analytical form. Because both are derived from the GS Function, however, the two expressions share a similar structure, which lets us use the ladder height decomposition of the probability of ruin as a guide for simulating values from the joint density function.

We give a detailed introduction to the risk models studied in this thesis and describe those for which the simulation can be carried out. To motivate this work, we also briefly introduce the vast field of risk measures and compute a selection of them for these risk models.

This work contributes to a better understanding of how subordinator-driven risk models behave in the face of ruin, as it offers a numerical point of view that is absent from the literature.
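To make the idea concrete in the compound Poisson special case, the following sketch (ours, not the thesis's subordinator algorithm; all parameter values are illustrative assumptions) simulates Cramér-Lundberg surplus paths and collects the surplus just before ruin and the deficit at ruin, from which the joint density can be estimated by a two-dimensional histogram.

```python
import numpy as np

def simulate_prior_and_deficit(u, c, lam, mean_claim, t_max=2_000.0,
                               n_paths=50_000, seed=0):
    """Crude Monte Carlo for the Cramer-Lundberg model: on each path, run the
    surplus until ruin (or until t_max, treated as survival) and record the
    pair (surplus just before ruin, deficit at ruin)."""
    rng = np.random.default_rng(seed)
    prior, deficit = [], []
    for _ in range(n_paths):
        t, paid = 0.0, 0.0
        while t < t_max:
            t += rng.exponential(1.0 / lam)      # next claim epoch
            before = u + c * t - paid            # surplus just before the claim
            x = rng.exponential(mean_claim)      # exponential claim size
            if x > before:                       # this claim causes ruin
                prior.append(before)
                deficit.append(x - before)
                break
            paid += x
    return np.array(prior), np.array(deficit)

# Illustrative parameters: unit claim intensity and mean, 10% safety loading.
n_paths = 50_000
prior, deficit = simulate_prior_and_deficit(u=5.0, c=1.1, lam=1.0,
                                            mean_claim=1.0, n_paths=n_paths)
print("estimated ruin probability:", len(prior) / n_paths)
density, x_edges, y_edges = np.histogram2d(prior, deficit, bins=40, density=True)
```

For exponential claims the ruin-probability estimate can be sanity-checked against the closed form psi(u) = (lam*mu/c) * exp(-(c - lam*mu) * u / (c*mu)).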
Essays on Macro-Financial Linkages. de Rezende, Rafael B., January 2014.
This doctoral thesis is a collection of four papers on the analysis of the term structure of interest rates, focused on the intersection of macroeconomics and finance.

"Risk in Macroeconomic Fundamentals and Bond Return Predictability" documents that factors related to risks underlying the macroeconomy, such as expectations, uncertainty, and downside (upside) macroeconomic risks, can explain variation in bond risk premia. The information they provide is, to a large extent, unrelated to that contained in forward rates and in current macroeconomic conditions.

"Out-of-sample bond excess returns predictability" provides evidence that macroeconomic variables, risks in macroeconomic outcomes, and combinations of these sources of information generate statistically and economically significant out-of-sample predictability of bond excess returns. The results suggest that this finding is not driven by revisions in macroeconomic data.

The term spread (yield curve slope) is widely used as an indicator of future economic activity. "Re-examining the predictive power of the yield curve with quantile regression" provides new evidence on the predictive ability of the term spread by studying the whole conditional distribution of GDP growth.

"Modeling and forecasting the yield curve by extended Nelson-Siegel class of models: a quantile regression approach" deals with yield curve prediction. More flexible Nelson-Siegel models are found to fit the data better, even after penalizing for the additional model complexity, and in the forecasting exercise the quantile-based models outperform all competitors.

Diss. Stockholm: Stockholm School of Economics, 2014. Introduction together with 4 papers.
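As a hedged illustration of the third paper's approach (synthetic data; the variable names and coefficients below are our inventions), a quantile regression of GDP growth on the lagged term spread reveals tail effects that an OLS mean regression would average away:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 240                                             # sixty years of quarters
spread = rng.normal(1.0, 1.2, n)                    # lagged term spread
downside = np.clip(1.5 - 0.4 * spread, 0.3, None)   # tail risk shrinks as the curve steepens
growth = 2.0 + 0.4 * spread + downside * rng.standard_t(5, n)
df = pd.DataFrame({"growth": growth, "spread": spread})

# The slope at the 5% quantile differs from the median slope, which is the
# kind of distributional information the paper reads off the term spread.
for q in (0.05, 0.50, 0.95):
    fit = smf.quantreg("growth ~ spread", df).fit(q=q)
    print(f"quantile {q:.2f}: spread coefficient = {fit.params['spread']:.3f}")
```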
A Comparative Study of the Forecast Capability of Volatility Models / Estudo Comparativo da Capacidade Preditiva de Modelos de Estimação de Volatilidade. Luis Antonio Guimaraes Benegas, 15 January 2002.
Risk is defined as the distribution of unexpected outcomes caused by changes in the values of the variables that describe the market. Risk is not directly observable, however, and its measurement depends on the model used to evaluate it; different models can therefore produce significantly different risk forecasts.

The main goal of this dissertation is a comparative study of the most widely used volatility models: the sample variance over the last k observations, exponential smoothing models, and Bollerslev's GARCH(1,1). The models are compared on their ability to forecast the volatility of portfolios of stocks traded on the Brazilian market, with the forecasts evaluated against out-of-sample realized volatility. Because realized volatility is not observable, we follow the procedure RiskMetrics adopts to calibrate its optimal decay factor: we assume that the mean return of each stock portfolio is zero, so that the one-step-ahead forecast of the return variance made at date t equals the expected squared return at date t.

The final objective is to determine, through backtesting techniques, which volatility forecasting model performs best on the comparison criteria relative to the computational effort it requires, and thus which model offers the best cost-benefit trade-off for the Brazilian equity market.
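On zero-mean returns the three competitors reduce to simple recursions, and under the premise above each date's squared return serves as the proxy for the unobservable variance. A minimal sketch (ours; the GARCH parameters are placeholders that would in practice be estimated by maximum likelihood):

```python
import numpy as np

def rolling_variance(r, k):
    """Equally weighted sample variance over the last k returns (zero mean)."""
    return np.array([np.mean(r[t - k:t] ** 2) for t in range(k, len(r))])

def ewma_variance(r, lam=0.94):
    """RiskMetrics exponential smoothing: h_t = lam*h_{t-1} + (1-lam)*r_{t-1}^2."""
    h = np.empty(len(r))
    h[0] = r[0] ** 2
    for t in range(1, len(r)):
        h[t] = lam * h[t - 1] + (1.0 - lam) * r[t - 1] ** 2
    return h

def garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90):
    """GARCH(1,1): h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1} (placeholder params)."""
    h = np.empty(len(r))
    h[0] = np.var(r)
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_t(6, 2_000)     # stand-in for portfolio returns
proxy = r ** 2                          # squared return = realized-variance proxy
forecasts = [("rolling", np.r_[np.full(60, np.nan), rolling_variance(r, 60)]),
             ("ewma", ewma_variance(r)),
             ("garch", garch11_variance(r))]
for name, h in forecasts:
    print(f"{name}: MSE vs squared returns = {np.nanmean((h - proxy) ** 2):.3e}")
```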
Contributions à l'estimation de quantiles extrêmes. Applications à des données environnementales / Some contributions to the estimation of extreme quantiles. Applications to environmental data. Methni, Jonathan El, 07 October 2013.
This thesis is set in the context of extreme value statistics and makes two main contributions. In the recent literature, a model of distribution tails has been introduced that encompasses Pareto-type distributions as well as Weibull tail-distributions, thereby covering the two main types of decay of the survival function. An estimator of extreme quantiles has been derived from this model, but it depends on two unknown parameters, making it useless in practical situations. The first contribution of this thesis is to propose estimators of these parameters. Plugging our estimators into the extreme quantile estimator allows us to estimate extreme quantiles of Pareto-type and Weibull tail-distributions in a unified way. The asymptotic distributions of our three new estimators are established, and their efficiency is illustrated in a simulation study and on a real data set of exceedances of the river Nidd in Yorkshire, England.

The second contribution is the introduction and estimation of a new risk measure, the Conditional Tail Moment, defined as the moment of order a>0 of the loss distribution above the quantile of order p in (0,1) of the survival function. Estimating the Conditional Tail Moment makes it possible to estimate all risk measures based on conditional moments, such as the Value-at-Risk, the Conditional Tail Expectation, the Conditional Value-at-Risk, the Conditional Tail Variance, and the Conditional Tail Skewness. We focus on estimating these risk measures for extreme losses, i.e., when p tends to 0 as the sample size increases. The loss distribution is moreover assumed to be heavy-tailed and to depend on a covariate, so the estimation method combines nonparametric kernel methods with extreme value statistics. The asymptotic distribution of the estimators is established, and their finite sample behavior is illustrated both on simulated data and on a real data set of daily rainfall in the Cévennes-Vivarais region of France.
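Away from the extreme regime (p tending to 0) and without covariates, the Conditional Tail Moment has a plain empirical counterpart that already delivers the risk measures listed above. The sketch below is our illustration, not the kernel/extreme-value estimator of the thesis:

```python
import numpy as np

def conditional_tail_moment(y, a, p):
    """Empirical CTM_a(p): the a-th moment of losses beyond the level exceeded
    with probability p, i.e. E[Y^a | Y > q] with P(Y > q) = p."""
    q = np.quantile(y, 1.0 - p)
    tail = y[y > q]
    return np.mean(tail ** a)

rng = np.random.default_rng(0)
y = rng.pareto(3.0, 200_000) + 1.0                  # heavy-tailed (Pareto) losses
p = 0.01
var_p = np.quantile(y, 1.0 - p)                     # Value-at-Risk at level p
cte = conditional_tail_moment(y, 1, p)              # Conditional Tail Expectation
ctv = conditional_tail_moment(y, 2, p) - cte ** 2   # Conditional Tail Variance
print(f"VaR = {var_p:.2f}, CTE = {cte:.2f}, CTV = {ctv:.2f}")
```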
Outils et modèles pour l'étude de quelques risques spatiaux et en réseaux : application aux extrêmes climatiques et à la contagion en finance / Tools and models for the study of some spatial and network risks: application to climate extremes and contagion in finance. Koch, Erwan, 02 July 2014.
This thesis develops tools and models for studying certain spatial risks and risks in networks. It is divided into five chapters. The first is a general introduction containing the state of the art that each study builds on, as well as the main results. Chapter 2 develops a new multi-site precipitation generator. It is crucial to have models capable of producing statistically realistic precipitation series. Whereas the models previously introduced in the literature mostly concern daily precipitation, we develop an hourly model. It involves a single equation and thus introduces dependence between occurrence and intensity, two processes that the existing literature usually treats as independent. The model contains a common factor accounting for large-scale atmospheric conditions and a multivariate autoregressive contagion term representing the local propagation of rainfall. Despite its relative simplicity, the model reproduces observed intensities, lengths of dry periods, and the spatial dependence structure very well in the case of northern Brittany.

In Chapter 3, we propose an estimation method for max-stable processes based on simulated likelihood techniques. Max-stable processes are ideally suited to the statistical modeling of spatial extremes, but their inference is difficult: the multivariate density function has no explicit form, so standard likelihood-based estimation methods cannot be applied. Under appropriate assumptions, our estimator is efficient as both the number of temporal observations and the number of simulation draws tend to infinity. This simulation-based approach can be used for many classes of max-stable processes and can outperform the composite likelihood methods currently in use, especially when only a few temporal observations are available and the spatial dependence is strong.
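To show how a single truncated equation can drive both occurrence and intensity, here is a toy reading of the generator under our own assumptions (the contagion weights, factor loading, and noise law are invented, not the fitted model): a multivariate autoregression with a common factor, truncated at zero so that zeros are dry hours and positive values are intensities.

```python
import numpy as np

rng = np.random.default_rng(0)
S, T = 4, 5_000                                    # sites, hours
dist = np.abs(np.subtract.outer(np.arange(S), np.arange(S)))
A = 0.30 * np.exp(-dist / 2.0)                     # contagion weights decay with distance
b = np.full(S, 0.6)                                # loading on the common factor

rain = np.zeros((T, S))
for t in range(1, T):
    w = max(rng.normal(), 0.0)                     # large-scale atmospheric factor
    eps = rng.normal(0.0, 0.4, S)                  # local noise
    rain[t] = np.maximum(A @ rain[t - 1] + b * w + eps, 0.0)

print("fraction of dry hours per site:", (rain == 0).mean(axis=0))
print("correlation of hourly amounts, sites 0 and 1:",
      np.corrcoef(rain.T)[0, 1].round(2))
```

The truncation at zero is the point of the single-equation design: occurrence and intensity come out of the same recursion, so they are dependent by construction.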
Matematické modelování v neživotním pojištění / Mathematical modelling in general insurance. Zajíček, Jakub, January 2015.
This diploma thesis deals with mathematical models in general insurance. Its aim is to analyse selected models that are widely used in general insurance for estimating insurance portfolio statistics, for pricing, and for calculating the regulatory capital requirement. Claim frequency models, claim severity models, aggregate loss models, and generalized linear models are analysed. The thesis consists of a theoretical part, which describes the selected models, and a practical part, in which the models are applied to a real data set using the statistical software R. The analysis shows that maximum likelihood parameter estimates are of better quality than those obtained by the method of moments or the quantile method. The results of the different computational methods for the aggregate loss distribution are comparable, which is mostly due to the large number of observations. In the tariff analysis, the most significant rating factors turn out to be the driver's age and the driver's area of residence.
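A miniature of that analysis (a sketch with invented Poisson/Gamma parameters, not the thesis's fitted values): simulate the aggregate loss of a Poisson-frequency, Gamma-severity portfolio, read off a Solvency-style capital quantile, and compare maximum likelihood with method-of-moments estimates of the severity parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam, shape, scale = 120.0, 2.0, 500.0            # illustrative frequency/severity

# Collective risk model: S = X_1 + ... + X_N with N ~ Poisson(lam), X_i ~ Gamma.
n_claims = rng.poisson(lam, 100_000)
agg = np.array([rng.gamma(shape, scale, k).sum() for k in n_claims])
print("mean aggregate loss:", agg.mean())
print("99.5% quantile (capital-requirement style):", np.quantile(agg, 0.995))

# Severity estimation: maximum likelihood vs method of moments.
x = rng.gamma(shape, scale, 5_000)
a_mle, _, sc_mle = stats.gamma.fit(x, floc=0)    # MLE with location pinned at 0
m, v = x.mean(), x.var()
a_mom, sc_mom = m ** 2 / v, v / m                # moment matching for the Gamma law
print(f"MLE: shape={a_mle:.3f} scale={sc_mle:.1f} | "
      f"moments: shape={a_mom:.3f} scale={sc_mom:.1f}")
```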
Řízení rizik ve stavebním podniku / Risk Management inside Construction Company. Hošková, Tereza, January 2015.
The subject of this thesis is risk management in construction companies. The theoretical part describes the origin and definition of risk, risk classification, sources of risk, and methods of risk management and risk reduction. The practical part focuses on a particular contract in a particular construction company, presenting a practical approach to risk management together with suggested preventive measures to eliminate the risk factors.
Risikomaße (Risk Measures). Huschens, Stefan, 30 March 2017.
These lecture notes grew out of a course that I taught for several years at the Faculty of Business and Economics of TU Dresden. The course was first called "Monetäre Risikomaße" (monetary risk measures) and later simply "Risikomaße".

Several earlier versions of these notes, which were repeatedly revised and extended, appeared under the title Monetäre Risikomaße (editions 1 through 7).

The individual chapters typically close with three sections, "Übungsaufgaben" (exercises), "Beweise" (proofs), and "Ergänzung und Vertiefung" (supplements and extensions), containing material on the respective chapter that was not presented in the lectures.
Bankruptcy Distributions and Modelling for Swedish Companies Using Logistic Regression / Konkursfördelning och Modellering för Svenska Företag Genom Användning av Logistisk Regression. Ewertzh, Jacob, January 2019.
This thesis discusses the concept of bankruptcy, or default, for Swedish companies. The actual default distribution over time is considered both at the aggregate level and within different industries. Several models are constructed to describe the default frequency as accurately as possible. Logistic regression models are the main tool, but various other models are considered, some for comparison and some in pursuit of the most accurate model possible. A large data set of nearly 30 million quarterly observations, covering both microeconomic and macroeconomic variables, is used in the analysis. The derived models cover different time periods between 1990 and 2018, consider different variables, and display varying levels of accuracy. The most accurate model is a logistic regression model that uses both micro and macro data; it is tested both in-sample and out-of-sample and performs very well in both settings. This model is first estimated on a subset of the data so that its predictions can be compared with an actual realized scenario; an equivalent model is then estimated on the whole data set to describe future scenarios as well as possible. Here, vector autoregressive (VAR) models forecasting the macro variables, and empirical OLS regression models forecasting the firm-level variables, are combined with the logistic regression model to predict the future. All three model types are used to describe the most likely scenarios as well as worst-case scenarios, and from the worst-case scenarios risk measures, such as the empirical Value at Risk, can be derived. The most significant results are that the logistic regression model performs remarkably well both in-sample and out-of-sample when macro variables are taken into account, and that, although the future simulations are harder to interpret, the analysis supports their predictive accuracy and points to a continued low default frequency within the next year.
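A scaled-down sketch of the central model (synthetic data; the feature names and coefficients are ours): fit a logistic regression of default on firm-level and macro covariates and check its out-of-sample discrimination.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200_000
micro = rng.normal(size=(n, 3))          # e.g. leverage, liquidity, size
macro = rng.normal(size=(n, 2))          # e.g. GDP gap, repo rate
X = np.hstack([micro, macro])
beta = np.array([1.2, -0.8, -0.5, -0.9, 0.7])
p_default = 1.0 / (1.0 + np.exp(-(-5.0 + X @ beta)))   # rare-event intercept
y = rng.random(n) < p_default

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"default rate: {y.mean():.3%}, out-of-sample AUC: {auc:.3f}")
```

In the thesis's setup, the macro covariates entering the fitted logit would themselves be forecast by a VAR model when simulating future scenarios.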
Dynamic convex risk measures / time consistency, prudence, and sustainability. Penner, Irina, 17 March 2008.
This thesis studies various properties of dynamic convex risk measures for bounded random variables. The main subject is the possible interdependence of conditional risk assessments at different times, and how these time consistency properties manifest themselves in the dynamics of the corresponding penalty functions and risk processes. In Chapter 2 we first focus on the strong notion of time consistency and characterize it in terms of penalty functions, acceptance sets, and a joint supermartingale property of the risk measure and its penalty function. The characterization in terms of penalty functions provides the explicit form of the Doob and Riesz decompositions of the penalty function process of a time consistent risk measure. We then introduce and study a weaker notion of time consistency, which we call prudence. As in the time consistent case, we characterize prudent dynamic risk measures in terms of acceptance sets, penalty functions, and a certain supermartingale property. This supermartingale property holds more generally for any bounded adapted process that can be upheld without additional risk. We call such processes sustainable and give an equivalent characterization of sustainability in terms of a combined supermartingale property of the process and the one-step penalty functions. This result can be viewed as a generalized optional decomposition under convex constraints. The supermartingale property allows us to characterize the strongly time consistent risk measure arising from any dynamic risk measure via recursive construction as the smallest process that is sustainable and covers the final loss; this provides a new argument for using strongly time consistent risk measures. In Chapter 3 we discuss the limit behavior of time consistent and prudent risk measures in terms of asymptotic safety and asymptotic precision. In the final Chapter 4 we illustrate the general results of Chapters 2 and 3 with examples; in particular, we study the entropic dynamic risk measure and the superhedging price process under convex constraints.
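The entropic example makes strong time consistency tangible: with rho(X) = (1/gamma) * log E[exp(-gamma X)], a one-shot evaluation at time 0 agrees exactly with the recursion rho_0(X) = rho_0(-rho_1(X)), by the tower property of conditional expectation. A small two-period check (the payoffs and gamma are our choices):

```python
import numpy as np

gamma = 2.0

def entropic(values, probs):
    """Entropic risk measure: (1/gamma) * log E[exp(-gamma * X)]."""
    values, probs = np.asarray(values), np.asarray(probs)
    return np.log(probs @ np.exp(-gamma * values)) / gamma

# Two-period binomial payoff, equally likely moves in each period.
X = {"uu": 2.0, "ud": 0.5, "du": 0.5, "dd": -1.0}

rho0_direct = entropic(list(X.values()), [0.25] * 4)     # time-0, one shot

rho1_up = entropic([X["uu"], X["ud"]], [0.5, 0.5])       # time-1 risk, up node
rho1_down = entropic([X["du"], X["dd"]], [0.5, 0.5])     # time-1 risk, down node
rho0_recursive = entropic([-rho1_up, -rho1_down], [0.5, 0.5])

# Strong time consistency: rho_0(X) = rho_0(-rho_1(X)), exact for the entropic measure.
assert np.isclose(rho0_direct, rho0_recursive)
print(rho0_direct, rho0_recursive)
```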