1

Copula模型在信用連結債券的評價與實證分析 / Valuation and Empirical Analysis of Credit Linked Notes Using Copula Models

林彥儒, Lin, Yen Ju Unknown Date (has links)
The value of a credit-linked note (CLN) depends mainly on the default behavior of the assets in the linked reference pool: the principal repaid at maturity of the original credit-risky bond is exposed to the credit risk of the other bonds, so estimating the pool's default probabilities accurately and objectively is a central task. Past literature typically priced these notes with given parameters under the assumption that defaults are independent across assets, which captures joint default probabilities poorly. This thesis therefore extends the factor copula model to build a valuation model for credit-linked notes that incorporates the degree of default correlation among assets, aiming to match market behavior, and combines it with statistical factor analysis to identify the market factors driving the product's price. In the empirical analysis, an actual note issued by LB Baden-Wuerttemberg is priced with both the extended model and a plain copula method. Whether in-sample or out-of-sample data are used, the extended model outperforms the copula method as measured by the root-mean-square error (RMSE) between expected prices and market quotes, indicating that adding market factors to the valuation is beneficial. On factor selection, eighteen factors compiled by Morgan Stanley Capital International are reduced by factor analysis to three broad categories. Comparing RMSEs shows that national and industry factors both influence the product's price, whereas the global factor has no significant effect; including it actually pushes the computed expected price further from the market quote. Blindly adding factors therefore does not bring model prices closer to market quotes; what matters is how strongly each added factor affects the underlying assets. For future research: the empirical work rests on assumptions that do not fully match market conditions, so obtaining real market data, or computing the inputs stochastically, should bring results closer to market quotes. Selecting different factors may also uncover influences beyond the national and industry factors, helping us better understand what drives this product and letting investors judge future price movements from observable market factor data.
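To make the machinery behind such models concrete, below is a minimal sketch of correlated-default simulation under a plain one-factor Gaussian copula — a simplified stand-in for the extended factor copula model described in this abstract, not the thesis's own code. The default probabilities, the loading rho, the simulation size, and the function name are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def simulate_basket_defaults(pd_marginal, rho, n_sims=100_000, seed=42):
    """Simulate joint defaults under a one-factor Gaussian copula (illustrative).

    pd_marginal : marginal default probabilities, one per reference asset
    rho         : loading on the common market factor (drives default correlation)
    """
    rng = np.random.default_rng(seed)
    pd_marginal = np.asarray(pd_marginal, dtype=float)
    thresholds = norm.ppf(pd_marginal)                      # latent default barriers
    m = rng.standard_normal((n_sims, 1))                    # common market factor
    eps = rng.standard_normal((n_sims, pd_marginal.size))   # idiosyncratic shocks
    x = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * eps         # latent asset values
    return x < thresholds                                   # default indicator matrix

# Joint default probability rises sharply with the factor loading.
pds = [0.02, 0.03, 0.05]
for rho in (0.0, 0.3, 0.6):
    defaults = simulate_basket_defaults(pds, rho)
    print(f"rho={rho}: P(all default) = {defaults.all(axis=1).mean():.5f}")
```

With rho = 0 the joint default probability is simply the product of the marginals; increasing rho concentrates defaults together, which is exactly the effect an independence assumption misses.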
2

Three Essays on US Agricultural Insurance

Kim, Taehoo 01 May 2016 (has links)
Many economists and policy analysts have studied crop insurance. Three research gaps are identified: i) moral hazard in prevented planting (PP), ii) the choice between PP and planting a second crop, and iii) the selection of margin protection in the Dairy Margin Protection Program (MPP-Dairy). The first essay analyzes the existence of moral hazard in PP. The PP provision is defined as the "failure to plant an insured crop by the final planting date due to adverse events". If the farmer decides not to plant a crop, the farmer receives a PP indemnity. Late planting (LP) is an option to plant a crop after the final planting date while maintaining crop insurance. Crop insurance may alter farmers' behavior in selecting PP or LP and could increase the likelihood of PP claims even though farmers can choose LP. This study finds evidence that farmers with higher insurance coverage tend to choose PP more often (moral hazard), and spatial panel models empirically confirm its existence. If a farmer chooses PP, s/he receives the PP indemnity and may either leave the acreage unplanted or plant a second crop, e.g., soybeans in place of corn. A farmer who plants a second crop after the PP claim receives only 35% of the PP payment. The current PP provision fails to give farmers an incentive to plant a second crop: 99.9% of farmers claiming PP do not plant one. Adjusting the PP indemnity payment may encourage planting a second crop. The second essay explores this question using a stochastic simulation and suggests increasing the PP payment by 10%-15%. The third essay investigates why Wisconsin dairy farmers purchase more supplementary protection than California farmers under MPP-Dairy, introduced in the 2014 Farm Bill. MPP-Dairy provides dairy producers with margin protection when the national dairy margin falls below a farmer-selected threshold. Using copula models, this essay determines whether conditional probabilities relating regional and national margins play a role in farmers' decisions to purchase supplementary coverage. Results indicate that Wisconsin farmers face higher conditional probabilities and purchase more buy-up coverage.
3

D- and Ds-optimal Designs for Estimation of Parameters in Bivariate Copula Models

Liu, Hua-Kun 27 July 2007 (has links)
For current status data, the failure time of interest may not be observed exactly: the data consist only of a monitoring time and knowledge of whether the failure occurred before or after that time. Because this is all the information available, the choice of monitoring time is critical. This work investigates optimal designs for choosing monitoring times so that maximum information is obtained in a bivariate copula model (Clayton). The D-optimality criterion is used to decide the best monitoring times C_i (i = 1, ..., n), which are then used to estimate the unknown parameters simultaneously by maximizing the corresponding likelihood function. Ds-optimal designs for estimating the association parameter in the copula model are also discussed. Simulation studies compare the performance of estimation based on the monitoring times C*_D and C*_Ds.
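For reference, here is a minimal sketch of the Clayton copula this design problem is built around: its joint CDF and the standard conditional-inversion sampler. The parameterization (theta > 0) is the usual one; the function names and the demo values are ours, not the thesis's.

```python
import numpy as np

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^(-theta) + v^(-theta) - 1)^(-1/theta), theta > 0."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def clayton_sample(n, theta, rng=None):
    """Draw (U, V) pairs from a Clayton copula by conditional inversion:
    solve C(v | u) = w for v, with U and W independent uniforms."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

# Larger theta concentrates mass near the diagonal, especially in the lower
# tail -- the association feature the Ds-optimal designs target.
u, v = clayton_sample(5, theta=2.0, rng=np.random.default_rng(1))
print(clayton_cdf(u, v, theta=2.0))
```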
4

Flexible Multivariate, Spatial, and Causal Models for Extremes

Gong, Yan 17 April 2023 (has links)
Risk assessment for natural hazards and financial extreme events requires the statistical analysis of extreme events, often beyond observed levels. The characterization and extrapolation of the probability of rare events rely on assumptions about the extremal dependence type and about the specific structure of statistical models. In this thesis, we develop models with flexible tail dependence structures in order to provide reliable estimation of tail characteristics and risk measures. From a methodological perspective, this thesis makes the following novel developments. 1) We propose new copula-based models for multivariate and spatial extremes with flexible tail dependence structures, which are parsimonious and able to smoothly bridge the asymptotic dependence and asymptotic independence classes, in both the upper and the lower tails; 2) Moreover, aiming to describe more general dependence structures using graphs, we propose a novel extremal dependence measure called the partial tail-correlation coefficient (PTCC), under the framework of regular variation, to learn complex extremal network structures; 3) Finally, we develop a semi-parametric neural-network-based regression model to identify spatial causal effects at all quantile levels (including low and high quantiles). Overall, we make novel contributions to creating flexible extremal dependence models, developing and implementing novel Bayesian computation algorithms, and taking advantage of machine learning and causal inference principles for modeling extremes. Our methodologies are illustrated by a range of applications to financial, climatic, and health data. Specifically, we apply our bivariate copula model to the historical closing prices of five leading cryptocurrencies and estimate the evolution of extremal dependence over time, and we use the PTCC to learn the extreme risk network of historical global currency exchange data. Moreover, our multivariate spatial factor copula model is applied to study the upper and lower extremal dependence structures of the daily maximum and minimum air temperatures in the state of Alabama in the southeastern United States, and we also apply the PTCC to learn the extreme river discharge network for the Upper Danube basin. Finally, we apply the causal spatial quantile regression model to quantify spatial quantile treatment effects of maternal smoking on extremely low birth weights of newborns in North Carolina, United States.
5

Assessing Non-Motorist Safety In Motor Vehicle Crashes – A Copula-Based Approach To Jointly Estimate Crash Location And Injury Severity

Marcoux, Robert A 01 January 2023 (has links) (PDF)
Recognizing the distinct non-motorist injury severity profiles by crash location (segment or intersection), we propose a joint modeling framework that treats crash location type and non-motorist injury severity as two dimensions of the severity process. We employ a copula-based joint framework that ties the crash location type (represented as a binary logit model) and injury severity (represented as a generalized ordered logit model) through a closed-form, flexible dependency structure. The data for our analysis are drawn from the Central Florida region for the years 2015 to 2021. The model system explicitly accounts for temporal heterogeneity across the two dimensions. A comprehensive set of independent variables is considered, including non-motorist user characteristics, driver and vehicle characteristics, roadway attributes, weather and environmental factors, and temporal and sociodemographic factors. We also conduct an elasticity analysis to quantify the magnitude of the effects of the independent variables on non-motorist injury severity at the two locations. The results highlight the importance of examining how these variables affect non-motorist injury severity outcomes across crash locations.
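To make the coupling concrete, here is a minimal numerical sketch of how a binary logit marginal and an ordered logit marginal can be tied through a closed-form copula. We use a Frank copula and toy coefficients purely for illustration; the copula family, variable names, and all parameter values are our own assumptions, not the thesis's fitted model.

```python
import numpy as np

def frank_copula(u, v, theta):
    """Frank copula C(u, v; theta); theta != 0 controls dependence strength."""
    num = (np.exp(-theta * u) - 1.0) * (np.exp(-theta * v) - 1.0)
    den = np.exp(-theta) - 1.0
    return -np.log1p(num / den) / theta

def joint_prob(x_loc, x_sev, beta_loc, beta_sev, cuts, theta):
    """P(intersection crash, severity <= k) at each ordinal cut k, obtained by
    coupling a binary logit marginal with an ordered logit marginal CDF."""
    u = 1.0 / (1.0 + np.exp(-(x_loc @ beta_loc)))         # P(location = intersection)
    v = 1.0 / (1.0 + np.exp(-(cuts - x_sev @ beta_sev)))  # ordered logit CDF at cuts
    return frank_copula(u, v, theta)

# Toy covariate vector (intercept + one attribute) and hypothetical coefficients:
x = np.array([1.0, 0.5])
print(joint_prob(x, x, np.array([0.2, -0.4]),
                 np.array([0.1, 0.3]), np.array([-1.0, 0.5, 2.0]), theta=2.0))
```

Differencing these joint CDF values across cuts recovers the joint probability of each (location, severity level) cell, which is what the copula framework estimates.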
6

A goodness-of-fit test for semi-parametric copula models for bivariate censored data

Shin, Jimin 07 August 2020 (has links)
In this thesis, we suggest a goodness-of-fit test for semi-parametric copula models. We extend the pseudo in-and-out-of-sample (PIOS) test proposed in [17], which is based on the PIOS test in [28]. The PIOS test is constructed by comparing the pseudo "in-sample" likelihood and the pseudo "out-of-sample" likelihood. Our contribution is twofold. First, we use approximate test statistics instead of exact test statistics to alleviate the computational burden of calculating them. Second, we propose a parametric bootstrap procedure to approximate the distribution of the test statistic. Unlike the nonparametric bootstrap, which resamples from the original data, the parametric procedure resamples from the copula model fitted under the null hypothesis. We conduct simulation studies to investigate the performance of the approximate test statistic and the parametric bootstrap. The results show that the parametric bootstrap yields higher test power with a well-controlled type I error compared to the nonparametric bootstrap.
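A compact sketch of the parametric bootstrap idea described above: fit the null copula, compute a goodness-of-fit statistic, then resample repeatedly from the fitted copula (not from the data) to approximate the statistic's null distribution. Everything here is an illustrative assumption — a Clayton copula, a moment-based fit via Kendall's tau (which assumes positive dependence), and a Cramér-von-Mises-type statistic standing in for the PIOS statistic.

```python
import numpy as np
from scipy.stats import kendalltau

def fit_clayton(u, v):
    """Moment-style Clayton fit via Kendall's tau: theta = 2*tau / (1 - tau)."""
    tau, _ = kendalltau(u, v)
    return 2.0 * tau / (1.0 - tau)

def sample_clayton(n, theta, rng):
    """Conditional-inversion sampler for the Clayton copula."""
    u, w = rng.uniform(size=n), rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

def cvm_stat(u, v, theta):
    """Cramer-von-Mises-type distance between the empirical copula and the
    fitted Clayton copula, evaluated at the sample points."""
    n = len(u)
    emp = np.array([np.mean((u <= u[i]) & (v <= v[i])) for i in range(n)])
    mod = (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)
    return float(np.sum((emp - mod) ** 2))

def parametric_bootstrap_pvalue(u, v, statistic, n_boot=200, seed=0):
    """Approximate the null distribution of the statistic by resampling from
    the *fitted* copula, refitting, and recomputing on each bootstrap sample."""
    rng = np.random.default_rng(seed)
    theta_hat = fit_clayton(u, v)
    t_obs = statistic(u, v, theta_hat)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        ub, vb = sample_clayton(len(u), theta_hat, rng)
        t_boot[b] = statistic(ub, vb, fit_clayton(ub, vb))
    return float((t_boot >= t_obs).mean())

# Demo on data actually drawn from a Clayton copula (p-value should be large).
rng = np.random.default_rng(3)
u0, v0 = sample_clayton(300, 1.5, rng)
print(parametric_bootstrap_pvalue(u0, v0, cvm_stat))
```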
7

STATISTICAL METHODS FOR THE GENETIC ANALYSIS OF DEVELOPMENTAL DISORDERS

Sucheston, Lara E. 06 April 2007 (has links)
No description available.
8

Optimal designs for statistical inferences in nonlinear models with bivariate response variables

Hsu, Hsiang-Ling 27 January 2011 (has links)
Bivariate or multivariate correlated data may be collected on a sample of units in many applications. When experimenters are concerned with the failure times of two related subjects, for example paired organs or two chronic diseases, bivariate binary data are often acquired. This type of data consists of an observation point x and indicators recording whether each failure time occurred before or after the observation point; the observed data can be written in the form {x, δ1 = I(X1 ≤ x), δ2 = I(X2 ≤ x)}. The corresponding optimal design problems for parameter estimation under this type of bivariate data are discussed. For such multivariate responses with explanatory variables, the marginal distributions may differ, and a copula model is a way to formulate the relationship between the responses and the association between pairs of them; copula models for bivariate binary data are considered useful in practice due to their flexibility. In this dissertation, the marginal distributions are assumed exponential or Weibull, and two assumptions about the joint distribution, independence or correlation, are considered. When the bivariate binary data are assumed correlated, the Clayton copula is used as the joint cumulative distribution function. Few works have addressed optimal design problems for bivariate binary data with copula models. The D-optimal designs, which minimize the volume of the confidence ellipsoid for the unknown parameters (including the association parameter of the copula), are used to determine the best observation points, while the Ds-optimal designs target estimation of the important association parameter in the Clayton model. The D- and Ds-optimal designs for this copula model are found through the general equivalence theorem with a numerical algorithm. Under the different model assumptions, the numerical results show that the number of support points of a D-optimal design is at most the number of model parameters; when the difference between the marginal distributions and the association are both significant, the association becomes an influential factor that increases the number of support points. Simulation studies show that estimation based on the optimal designs performs reasonably well. In survival experiments, the experimenter customarily takes trials at specific points such as the 25th, 50th, and 75th percentiles of the distribution, so we also consider the design efficiencies when the trial points are placed at three or four particular percentiles. Although it is common in practice to take trials at several quantile positions, the allocation of the sample size among them also has a great influence on the experimental results. To use a locally optimal design in practice, prior information about the model or parameters is needed. When there is not enough prior knowledge, it is more flexible to use sequential experiments to gather information in several stages. Hence, with robustness in mind, a sequential procedure is proposed that combines D- and Ds-optimal designs under the independent or correlated distributions in different stages of the experiment, and its simulation results are compared with those of the one-step procedures.
Optimal designs obtained from incorrect prior parameter values or distributions may have poor efficiency, whereas the sample means of the estimators and the corresponding optimal designs obtained from the sequential procedure are close to the true values, with efficiencies close to 1. Huster (1989) analyzed the corresponding modeling problems for paired survival data and applied them to the Diabetic Retinopathy Study, considering exponential and Weibull marginal distributions with the Clayton model as the joint function. That study, conducted by the National Eye Institute to assess the effectiveness of laser photocoagulation in delaying the onset of blindness in patients with diabetic retinopathy, can be viewed as a prior experiment that provides useful guidelines for collecting data in future studies. As an application to the Diabetic Retinopathy Study, we develop optimal designs to collect suitable data for estimating the unknown model parameters. In the second part of this work, optimal design problems for parameter estimation are considered for proportional data. The dispersion model, based on Jorgensen (1997), provides a flexible class of non-normal distributions and is considered in this research; it can be applied to binary or count responses as well as proportional outcomes. For continuous proportional data, where responses are confined to the interval (0,1), the simplex dispersion model is considered here. D-optimal designs obtained through the corresponding equivalence theorem are presented, together with numerical results. In the development of classical optimal design theory, weighted polynomial regression models with variance functions that depend on the explanatory variable have played an important role, and constructing locally D-optimal designs for the simplex dispersion model can be viewed as a weighted polynomial regression problem with a specific variance function. Because the weight function in the information matrix takes a complicated rational form, an approximation of the weight function is used, and the corresponding optimal designs are obtained for different parameter values. These optimal designs are compared with those based on the original weight function.
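The flavor of the monitoring-time design problem can be seen in a one-dimensional analogue: for a single current-status response with an exponential failure time, the locally D-optimal monitoring time maximizes the Fisher information about the rate. The sketch below finds it by grid search under an assumed prior rate; this is our simplified illustration — the bivariate Clayton case in the thesis replaces this scalar information with the determinant of an information matrix.

```python
import numpy as np

def fisher_info(x, lam):
    """Fisher information about lam from one current-status observation at
    monitoring time x, when the failure time is Exponential(lam): we observe
    only the indicator I(X <= x), a Bernoulli(p) with p = 1 - exp(-lam * x)."""
    p = 1.0 - np.exp(-lam * x)
    dp = x * np.exp(-lam * x)          # derivative of p with respect to lam
    return dp ** 2 / (p * (1.0 - p))   # Bernoulli information formula

lam = 1.0                              # assumed prior guess at the rate
grid = np.linspace(0.01, 6.0, 5000)
x_star = grid[np.argmax(fisher_info(grid, lam))]
print(f"locally D-optimal monitoring time for lam={lam}: x* = {x_star:.3f}")
```

Because the optimum depends on the unknown rate, a poor prior guess degrades efficiency — which is exactly the motivation for the sequential procedure the abstract proposes.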
9

Spatial graphical models with discrete and continuous components

Che, Xuan 16 August 2012 (has links)
Graphical models use Markov properties to establish associations among dependent variables. To estimate spatial correlation and other parameters in graphical models, the conditional independences and the joint probability distribution of the graph need to be specified. When all the nodes of the graph are assumed normally distributed, we can rely on Gaussian multivariate models to derive the joint distribution. However, when some of the nodes are discrete, the Gaussian model no longer affords an appropriate joint distribution function. We develop methods for specifying the joint distribution of a chain graph with both discrete and continuous components, with spatial dependencies assumed among all variables on the graph. We propose a new class of chain graphs known as generalized tree networks. Constructing the chain graph as a generalized tree network, we partition its joint distribution according to the maximal cliques, and copula models let us capture correlation among the discrete variables within the cliques. We examine the method by analyzing datasets with simulated Gaussian and Bernoulli Markov random fields, as well as a real dataset involving household income and election results. Estimates from the graphical models are compared with those from spatial random effects models and multivariate regression models. / Graduation date: 2013
10

不同單因子結構模型下合成型擔保債權憑證定價之研究 / Comparison between different one-factor copula models of synthetic CDOs pricing

黃繼緯, Huang, Chi Wei Unknown Date (has links)
Credit derivatives began to develop in the mid-1990s and evolved into credit default swaps (CDS), collateralized debt obligations (CDO), and synthetic CDOs. Their risk-sharing features made them popular and an important part of a complete financial market, and they played a key role in the 2007 financial crisis, so how to price credit derivatives reasonably is a very important issue. Synthetic CDOs have usually been priced with a one-factor copula model as the main structure of the payoff function, with the model distribution assumed to be normal, Student's t, or normal inverse Gaussian (NIG). However, the implied correlations of the one-factor copula model exhibit a correlation smile, which easily leads to pricing errors. To address this problem, this thesis applies the random factor loading model under the normal and NIG distributional assumptions, examines whether it reduces pricing errors relative to the other models, and compares the models' efficiency in parameter optimization and pricing, in order to identify a better pricing model for synthetic CDOs.
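As a rough illustration of the baseline this thesis compares against, the sketch below computes Monte Carlo expected tranche losses under a plain one-factor Gaussian copula with homogeneous names. The pool size, recovery rate, default probability, and tranche attachment points are illustrative assumptions; the NIG and random-factor-loading variants studied in the thesis are not implemented here.

```python
import numpy as np
from scipy.stats import norm

def expected_tranche_loss(pd, rho, attach, detach,
                          n_names=125, recovery=0.4, n_sims=50_000, seed=7):
    """Monte Carlo expected loss (as a fraction of tranche notional) on the
    [attach, detach] tranche of a homogeneous pool under a one-factor
    Gaussian copula."""
    rng = np.random.default_rng(seed)
    barrier = norm.ppf(pd)                          # common latent default barrier
    m = rng.standard_normal((n_sims, 1))            # systematic factor
    eps = rng.standard_normal((n_sims, n_names))    # idiosyncratic factors
    x = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * eps
    pool_loss = (x < barrier).mean(axis=1) * (1.0 - recovery)
    tranche = np.clip(pool_loss - attach, 0.0, detach - attach) / (detach - attach)
    return tranche.mean()

# Equity and senior tranches react to correlation in opposite directions --
# the effect behind the implied-correlation smile mentioned above.
for rho in (0.1, 0.3, 0.5):
    eq = expected_tranche_loss(0.02, rho, 0.00, 0.03)
    sr = expected_tranche_loss(0.02, rho, 0.12, 0.22)
    print(f"rho={rho}: equity EL={eq:.3f}, senior EL={sr:.3f}")
```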
