61

Joint modeling of bivariate time to event data with semi-competing risk

Liao, Ran 08 September 2016 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Survival analysis often encounters correlated multiple events, such as the same type of event observed in siblings or multiple events experienced by the same individual. In this dissertation, we focus on the joint modeling of bivariate time-to-event data, with emphasis on estimating the association parameters and on the semi-competing-risk setting. This dissertation contains three related topics on bivariate time-to-event models. The first topic is estimating the cross-ratio, an association parameter between bivariate survival functions. One advantage of using the cross-ratio as a dependence measure is its attractive hazard-ratio interpretation when comparing two groups of interest. We compare a parametric, a two-stage semiparametric, and a nonparametric approach in simulation studies to evaluate their estimation performance. The second part concerns semiparametric models of univariate time-to-event data with a semi-competing risk. The third part concerns semiparametric models of bivariate time-to-event data with semi-competing risks. A frailty-based model framework is used to accommodate potential correlations among the multiple event times. We propose two estimation approaches. The first is a two-stage semiparametric method in which cumulative baseline hazards are first estimated nonparametrically and then used in the likelihood function. The second is a penalized partial likelihood approach. Simulation studies were conducted to compare the estimation accuracy of the proposed approaches. Data from an elderly cohort were used to examine factors associated with times to multiple diseases, treating death as a semi-competing risk.
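For context, the cross-ratio dependence measure discussed above is commonly defined (in Oakes-type notation, shown here for illustration rather than as the dissertation's own formulation) as a ratio of conditional hazards, or equivalently through the joint survival function S:

% Cross-ratio between bivariate event times T_1 and T_2 (illustrative notation)
\[
\theta(t_1, t_2)
  = \frac{\lambda_1(t_1 \mid T_2 = t_2)}{\lambda_1(t_1 \mid T_2 > t_2)}
  = \frac{S(t_1, t_2)\, \partial^2 S(t_1, t_2) / \partial t_1 \partial t_2}
         {\bigl(\partial S(t_1, t_2) / \partial t_1\bigr)\,\bigl(\partial S(t_1, t_2) / \partial t_2\bigr)}.
\]
% Under a gamma frailty (Clayton) model with frailty variance \theta, the cross-ratio
% is constant in (t_1, t_2) and equals 1 + \theta; values above 1 indicate positive dependence.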
62

Modeling Non-Gaussian Time-correlated Data Using Nonparametric Bayesian Method

Xu, Zhiguang 20 October 2014 (has links)
No description available.
63

Modelling The Financial Market Using Copula

Gyamfi, Michael January 2017 (has links)
No description available.
64

The Interrogative Marker <i>KA</i> in Japanese

Takahashi, Sonoko January 1995 (has links)
No description available.
65

Contributions aux méthodes bayésiennes approchées pour modèles complexes / Contributions to Bayesian Computing for Complex Models

Grazian, Clara 15 April 2016
Recently, the great complexity of modern applications, for instance in genetics, computer science, finance, climate science, etc., has led to the proposal of new models which may realistically describe reality. In these cases, classical MCMC methods fail to approximate the posterior distribution, because they are too slow to investigate the full parameter space. New algorithms have been proposed to handle these situations, where the likelihood function is unavailable. We investigate several features of complex models: how to eliminate nuisance parameters from the analysis and make inference on key quantities of interest, both in a Bayesian and a non-Bayesian setting, and how to build a reference prior.
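The abstract above does not name the specific likelihood-free algorithms studied; purely as an illustration of that setting, here is a minimal approximate Bayesian computation (ABC) rejection sketch for the mean of a normal model, with all data, names and tolerances invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Observed data (toy example); the summary statistic is the sample mean.
observed = rng.normal(loc=2.0, scale=1.0, size=100)
obs_stat = observed.mean()

def simulate(mu, size=100):
    """Forward-simulate data for a given parameter value; the likelihood is never evaluated."""
    return rng.normal(loc=mu, scale=1.0, size=size)

# ABC rejection sampling: draw from the prior, simulate, and keep draws whose
# simulated summary statistic falls within a tolerance of the observed one.
tolerance = 0.1
accepted = []
for _ in range(20000):
    mu = rng.normal(loc=0.0, scale=5.0)      # prior draw
    sim_stat = simulate(mu).mean()           # simulated summary statistic
    if abs(sim_stat - obs_stat) < tolerance:
        accepted.append(mu)

accepted = np.array(accepted)
print(f"ABC posterior mean ~ {accepted.mean():.3f} from {accepted.size} accepted draws")

The key design point is that the likelihood is never evaluated: parameter draws are kept or discarded only according to how closely their simulated summary statistics match the observed ones.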
66

Effects of ESG on Market Risk : A Copula and a Regression Approach to CoVaR / Effekter av ESG på Marknadsrisk : Två Metoder

Thornqvist, Viktor January 2023 (has links)
With a background in EU regulations and an increased interest in companies' Environmental, Social, and Governance (ESG) policies when investing, this thesis considers the individual contributions of different ESG parameters to market risk in portfolios. It explores two different methods to examine whether there are effects consistent across the whole Nordic market, and the possibility of expressing any effects within portfolios in a clear way. It uses the OMXNORDIC index as the market index and two different fund portfolios as example portfolios, one of which is an Article 9 fund. The quantile regression approach does not show any consistent effects across the whole Nordic market from any of the ESG parameters explored. It does, however, provide a clear way to present the effects on the portfolio level for each ESG parameter. The copula approach does show some consistent differences between the ESG parameters for the market and in the portfolios, as well as differences between the portfolios themselves. Both of the explored methods should allow for comparisons between, and reports on, fund portfolios, which would improve the ESG analyses of funds.
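As a rough illustration of the quantile regression route to CoVaR mentioned above (the thesis's exact specification is not reproduced here; this follows a common Adrian-Brunnermeier-style formulation on simulated returns, with all variable names invented):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Toy daily returns: an "asset" (e.g. an ESG-sorted portfolio) and the market index.
n = 1000
asset = rng.normal(0.0, 0.02, n)
market = 0.5 * asset + rng.normal(0.0, 0.015, n)   # market partly driven by the asset

q = 0.05                                            # tail level for VaR / CoVaR
X = sm.add_constant(asset)

# Quantile regression of market returns on asset returns at the tail quantile.
res_tail = sm.QuantReg(market, X).fit(q=q)

# Empirical VaR of the asset at the tail level and at its median state.
var_q = np.quantile(asset, q)
var_med = np.quantile(asset, 0.5)

# CoVaR: predicted market quantile when the asset sits at its VaR;
# Delta-CoVaR: the change relative to the asset being at its median state.
covar = res_tail.params[0] + res_tail.params[1] * var_q
delta_covar = res_tail.params[1] * (var_q - var_med)
print(f"CoVaR({q:.0%}) = {covar:.4f}, Delta-CoVaR = {delta_covar:.4f}")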
67

Temporal and Spatial Analysis of Water Quality Time Series

Khalil Arya, Farid January 2015 (has links)
No description available.
68

Pricing Basket of Credit Default Swaps and Collateralised Debt Obligation by Lévy Linearly Correlated, Stochastically Correlated, and Randomly Loaded Factor Copula Models and Evaluated by the Fast and Very Fast Fourier Transform

Fadel, Sayed M. January 2010 (has links)
In the last decade, considerable growth has been added to the volume of the credit risk derivatives market. This growth was followed by the recent financial market turbulence. These two periods have outlined how significant and important the credit derivatives market and its products are. On the modelling side, this growth has been paralleled by more complicated, assembled credit derivative products such as mth-to-default Credit Default Swaps (CDS), m-out-of-n CDS, and collateralised debt obligations (CDO). In this thesis, the Lévy process is proposed to generalise the standard credit risk derivatives pricing model, the Gaussian Factor Copula Model, and to overcome its limitations. One of its most important drawbacks is a lack of tail dependence or, in other words, the need for a more skewed correlation. With the Lévy Factor Copula Model, the microscopic approach to exploring these factor copula models is developed and standardised to incorporate an endless number of distribution alternatives that admit the Lévy process. Since the Lévy process can include a variety of structural assumptions, from pure jumps to continuous stochastic processes, the distributions that admit this process can represent asymmetry and fat tails just as they can characterise symmetry and normal tails. As a consequence, they can capture the probabilities of both high and low events. Subsequently, other techniques that can enhance the skewness of the correlation and be incorporated within the Lévy Factor Copula Model are proposed, namely the 'Stochastic Correlated Lévy Factor Copula Model' and the 'Lévy Random Factor Loading Copula Model'. The Lévy process is then applied through a number of proposed basket CDS and CDO pricing models based on the Lévy Factor Copula and its skewed versions, and evaluated by V-FFT limiting and mixture cases of the Lévy skew alpha-stable distribution and the generalized hyperbolic distribution. Numerically, the characteristic functions of the number of defaults for mth-to-default and (n/m)th-to-default CDS, the CDO's cumulative loss, and the loss given default are evaluated by semi-explicit techniques, i.e. via the fast form of the DFT (FFT) and the proposed very fast form (VFFT). This technique, through its fast and very fast forms, reduces the computational complexity from O(N²) to, respectively, O(N log₂ N) and O(N).
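As a hedged, generic illustration of the FFT evaluation step described above (not the thesis's V-FFT, nor its Lévy factor copula models), the sketch below recovers a number-of-defaults distribution from its characteristic function with a single O(N log N) transform, using an independent-defaults binomial portfolio as a stand-in:

import numpy as np
from scipy import stats

# Toy homogeneous portfolio: n names, each defaulting independently with probability p.
# (A stand-in for the conditional-on-factor default distribution in a factor copula model.)
n_names, p = 100, 0.03
grid = n_names + 1                        # possible default counts: 0, ..., n_names
u = 2.0 * np.pi * np.arange(grid) / grid  # DFT frequencies

# Characteristic function of the number of defaults: phi(u) = (1 - p + p * e^{iu})^n.
phi = (1.0 - p + p * np.exp(1j * u)) ** n_names

# A single FFT inverts the characteristic function back to the probability mass function.
pmf = np.real(np.fft.fft(phi)) / grid

# Sanity check against the closed-form binomial pmf.
exact = stats.binom.pmf(np.arange(grid), n_names, p)
print("max abs error:", np.max(np.abs(pmf - exact)))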
69

Extensions to Gaussian copula models

Fang, Yan 01 May 2012 (has links)
A copula is a representation of the dependence structure of a multivariate distribution. Copulas are used to model multivariate data in many fields. Recent developments include copula models for spatial data and for discrete marginals. We present a new methodological approach for modeling discrete spatial processes and for predicting the process at unobserved locations. We employ Bayesian methodology for both estimation and prediction. Comparisons between the new method and the Generalized Additive Model (GAM) are made to test the performance of the prediction. Although there exists a large variety of copula functions, only a few are practically manageable, and in certain problems one would like to choose the Gaussian copula to model the dependence. Furthermore, most copulas are exchangeable, thus implying symmetric dependence. However, none of them is flexible enough to capture tailed (upper-tailed or lower-tailed) distributions as well as elliptical distributions. An elliptical copula is the copula corresponding to an elliptical distribution by Sklar's theorem, so it can be used appropriately and effectively only to fit elliptical distributions, while in reality data may be better described by a "fat-tailed" or "tailed" copula than by an elliptical copula. This dissertation proposes a novel pseudo-copula (the modified Gaussian pseudo-copula) based on the Gaussian copula to model dependencies in multivariate data. Our modified Gaussian pseudo-copula differs from the standard Gaussian copula in that it can model tail dependence. The modified Gaussian pseudo-copula captures properties from both elliptical copulas and Archimedean copulas. The modified Gaussian pseudo-copula and its properties are described. We focus on issues related to the dependence of extreme values. We give the characteristics of our pseudo-copula in the bivariate case, which can easily be extended to multivariate cases. The proposed pseudo-copula is assessed by estimating the measure of association from two real data sets, one from finance and one from insurance. A simulation study is done to test the goodness-of-fit of this new model. / Graduation date: 2012
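To make the tail-dependence limitation of the standard Gaussian copula concrete (a generic numerical sketch with invented parameters, not the dissertation's modified Gaussian pseudo-copula):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Sample from a bivariate Gaussian copula with correlation rho:
# draw correlated normals, then map each margin to Uniform(0, 1) via the normal CDF.
rho, n = 0.7, 200_000
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = stats.norm.cdf(z)

# Empirical lower-tail dependence: P(U2 <= q | U1 <= q) for shrinking q.
for q in (0.05, 0.01, 0.001):
    both = np.mean((u[:, 0] <= q) & (u[:, 1] <= q))
    print(f"q = {q}: P(U2 <= q | U1 <= q) ~ {both / q:.3f}")
# For the Gaussian copula with rho < 1 this conditional probability tends to 0 as q -> 0,
# i.e. the copula has no tail dependence, which motivates tail-capable alternatives.

A copula with genuine lower-tail dependence, such as the Clayton copula, would instead keep this conditional probability bounded away from zero as q shrinks.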
70

Modelling dependence in actuarial science, with emphasis on credibility theory and copulas

Purcaru, Oana 19 August 2005 (has links)
One basic problem in statistical science is to understand the relationships among multivariate outcomes. Although it remains an important and widely applicable tool, regression analysis is limited by a basic setup that requires identifying one dimension of the outcomes as the primary measure of interest (the "dependent" variable) and the other dimensions as supporting this variable (the "explanatory" variables). There are situations where this relationship is not of primary interest. For example, in actuarial science, one might be interested in the dependence between the annual claim numbers of a policyholder and its impact on the premium, or in the dependence between claim amounts and the expenses related to them. In such cases the normality hypothesis fails, so Pearson's correlation and other concepts based on linearity are no longer the best ones to use. Therefore, in order to quantify the dependence between non-normal outcomes, one needs different statistical tools, such as dependence concepts and copulas. This thesis is devoted to modelling dependence with applications in actuarial science and is divided into two parts: the first concerns dependence in frequency credibility models and the second dependence between continuous outcomes. In each part we resort to different tools: stochastic orderings (which arise from the dependence concepts) and copulas, respectively.
During the last decade of the 20th century, the world of insurance was confronted with important developments in a posteriori tarification, especially in the field of credibility. This was due to the easing of insurance markets in the European Union, which gave rise to an advanced segmentation. The first important contribution is due to Dionne & Vanasse (1989), who proposed a credibility model that integrates a priori and a posteriori information on an individual basis. These authors introduced a regression component in the Poisson counting model in order to use all available information in the estimation of accident frequency. The unexplained heterogeneity was then modeled by the introduction of a latent variable representing the influence of hidden policy characteristics. The vast majority of the papers that appeared in the actuarial literature considered time-independent (or static) heterogeneous models. Notable exceptions include the pioneering papers by Gerber & Jones (1975), Sundt (1988) and Pinquet, Guillén & Bolancé (2001, 2003). Allowing for an unknown underlying random parameter that develops over time is justified because the unobservable factors influencing driving abilities are not constant. One might consider either shocks (induced by events like divorces or nervous breakdowns, for instance) or continuous modifications (e.g. due to a learning effect). In the first part we study recently introduced models in frequency credibility theory, which can be seen as time series models for count data adapted to actuarial problems. More precisely, we examine the kind of dependence induced among annual claim numbers by the introduction of random effects accounting for unexplained heterogeneity, whether these random effects are static or time-dependent. We also make precise the effect of reporting claims on the a posteriori distribution of the random effect. This is done by establishing stochastic monotonicity properties of the a posteriori distribution with respect to the claims history. We end this part by considering different models for the random effects and computing the a posteriori corrections of the premiums on the basis of a real data set from a Spanish insurance company.
Whereas dependence concepts are very useful for describing the relationship between multivariate outcomes, in practice (think, for instance, of the computation of reinsurance premiums) one needs a statistical tool that is easy to implement and that incorporates the structure of the data. Such a tool is the copula, which allows the construction of multivariate distributions for given marginals. Because copulas characterize the dependence structure of random vectors once the effect of the marginals has been factored out, identifying and fitting a copula to data is not an easy task. In practice, it is often preferable to restrict the search for an appropriate copula to some reasonable family, like the Archimedean one. It is then extremely useful to have simple graphical procedures to select the best-fitting model among competing alternatives for the data at hand. In the second part of the thesis we propose a new nonparametric estimator for the generator that takes into account the particularities of the data, namely censoring and truncation. This nonparametric estimate then serves as a benchmark to select an appropriate parametric Archimedean copula. The selection procedure is illustrated on a real data set.
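For reference, the generator mentioned in the second part above is the function φ in the standard Archimedean representation below; the Clayton family is shown as one common example (illustrative notation, not taken from the thesis itself):

% Archimedean copula with generator \phi (continuous, strictly decreasing, convex, \phi(1) = 0):
\[
C(u, v) = \phi^{-1}\bigl(\phi(u) + \phi(v)\bigr), \qquad u, v \in (0, 1].
\]
% Example: the Clayton family with parameter \theta > 0,
\[
\phi_\theta(t) = \frac{t^{-\theta} - 1}{\theta}, \qquad
C_\theta(u, v) = \bigl(u^{-\theta} + v^{-\theta} - 1\bigr)^{-1/\theta}.
\]

Nonparametric estimation targets φ directly, so a fitted generator can be compared graphically with candidate parametric families such as the one above.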
