  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Análise de dados com riscos semicompetitivos / Analysis of Semicompeting Risks Data

Elizabeth Gonzalez Patino 16 August 2012 (has links)
In survival analysis, the interest is usually in studying the time until the occurrence of an event. When observations are subject to more than one type of event (e.g., different causes of death) and the occurrence of one event prevents the occurrence of the others, there is a competing risks structure. In some situations, however, the main interest is in two events, one of which (the terminal event) prevents the occurrence of the other (the nonterminal event) but not vice versa. This structure is known as semicompeting risks and was defined by Fine et al. (2001). In this work we consider two approaches for analyzing data with this structure. The first builds the bivariate survival function through Archimedean copulas and derives estimators for the survival functions. The second is based on a three-state process, known as the illness-death process, which can be specified through its transition intensity (hazard) functions; here covariates are included, and the possible dependence between the two observed times is incorporated through a shared frailty. These methodologies are applied to two real data sets: one of 137 leukemia patients who received an allogeneic bone marrow transplant, with a maximum follow-up of seven years, and one of 1253 chronic kidney disease patients on dialysis, followed from 2009 to 2011.
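The copula construction in the first approach can be sketched concretely. The snippet below (Python) couples two marginal survival functions with a Clayton copula, one member of the Archimedean family; the exponential marginals, their rates, and the dependence parameter `theta` are illustrative assumptions, not values from the thesis:

```python
import math

def clayton_joint_survival(s1: float, s2: float, theta: float) -> float:
    """Joint survival P(T1 > t1, T2 > t2) from marginal survival values
    via the Clayton (Archimedean) copula:
    C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta), theta > 0."""
    return (s1 ** -theta + s2 ** -theta - 1.0) ** (-1.0 / theta)

def surv_exp(t: float, rate: float) -> float:
    """Exponential marginal survival function (illustrative choice)."""
    return math.exp(-rate * t)

# Hypothetical rates for the nonterminal and terminal events.
s1 = surv_exp(2.0, 0.3)
s2 = surv_exp(2.0, 0.1)
joint = clayton_joint_survival(s1, s2, theta=1.5)
# For theta > 0 the copula induces positive dependence, so the joint
# survival lies between the independence product s1*s2 and min(s1, s2).
```

As theta approaches 0 the Clayton copula approaches independence, so theta plays the role of the dependence parameter estimated from data.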
212

Distribuição exponencial generalizada: uma análise bayesiana aplicada a dados de câncer / Generalized exponential distribution: a Bayesian analysis applied to cancer data

Juliana Boleta 19 December 2012 (has links)
Survival analysis methods have been used extensively by health researchers. This work studies a recently proposed survival distribution, the generalized exponential distribution, in all respects: for complete and censored data, in the presence of covariates, and in its extension to a multivariate model derived from a copula function. To illustrate these models, real cancer lifetime data (acute myeloid leukemia and gastric cancer) with censoring and covariates are used. The gastric cancer data have two survival responses per patient, the overall survival time and the event-free survival time, which motivates the multivariate model. A comparison is made with standard lifetime distributions such as the Weibull and the gamma. For the Bayesian analysis, different prior distributions are adopted for the parameters, and samples from the joint posterior distribution are obtained with MCMC (Markov chain Monte Carlo) methods using the WinBUGS software.
214

Spatial graphical models with discrete and continuous components

Che, Xuan 16 August 2012 (has links)
Graphical models use Markov properties to establish associations among dependent variables. To estimate spatial correlation and other parameters in graphical models, the conditional independences and joint probability distribution of the graph need to be specified. We can rely on Gaussian multivariate models to derive the joint distribution when all the nodes of the graph are assumed to be normally distributed. However, when some of the nodes are discrete, the Gaussian model no longer affords an appropriate joint distribution function. We develop methods specifying the joint distribution of a chain graph with both discrete and continuous components, with spatial dependencies assumed among all variables on the graph. We propose a new group of chain graphs known as the generalized tree networks. Constructing the chain graph as a generalized tree network, we partition its joint distributions according to the maximal cliques. Copula models help us to model correlation among discrete variables in the cliques. We examine the method by analyzing datasets with simulated Gaussian and Bernoulli Markov random fields, as well as with a real dataset involving household income and election results. Estimates from the graphical models are compared with those from spatial random effects models and multivariate regression models. / Graduation date: 2013
215

Contribuciones a la dependencia y dimensionalidad en cópulas / Contributions to Dependence and Dimensionality in Copulas

Díaz, Walter 18 January 2013 (has links)
Dependence pervades nature and human affairs: interdependent phenomena abound in medicine and in social, political and economic life, and the dependence is obviously not deterministic but stochastic in nature. It is therefore surprising that concepts and measures of dependence received little attention in the statistical literature until 1966, when the pioneering work of E. L. Lehmann proved Hoeffding's lemma; several generalizations of that lemma have been published since. We obtain a multivariate generalization for functions of bounded variation that subsumes the earlier ones, by establishing the relation between the approaches of Quesada-Molina (1992) and Cuadras (2002b) and extending the latter to the multivariate case. Dimensionality is another important concept in statistical interpretation, so we define the geometric dimensionality of a joint distribution H in terms of the cardinality of the set of canonical correlations of H, when H admits a diagonal expansion. The geometric dimensionality is obtained for some of the best-known families of copulas, using numerical methods where necessary. According to their dimensionality, copulas are classified into four groups: zero-, finite-, countable- or continuous-dimensional; most of the copulas examined turn out to have countable dimension. Using two functions that satisfy certain regularity conditions, we obtain a generalized extension of the Gumbel-Barnett copula and derive its main properties and dependence measures for particular choices of the functions. The FGM copula is among the most widely applied copulas in fields such as financial analysis, and many generalizations exist for the symmetric case. We obtain two new generalizations: the first by adding two auxiliary distributions, and the second for the asymmetric case, which contains several existing generalizations as special cases. For both, the admissible ranges of the association parameters, the main properties and the dependence measures are derived. We also show that if the canonical functions of a distribution function are known, it can be approximated by another distribution function through linear combinations of those canonical functions. As an example we consider the FGM copula, which is two-dimensional in the geometric sense and whose canonical functions are known, and verify numerically that its approximation to other copulas of countable dimension is acceptably good.
216

不同單因子結構模型下合成型擔保債權憑證定價之研究 / Comparison between different one-factor copula models of synthetic CDOs pricing

黃繼緯, Huang, Chi Wei Unknown Date (has links)
Credit derivatives began to develop in the mid-1990s, evolving into credit default swaps (CDS), collateralized debt obligations (CDO) and synthetic CDOs. Their risk-sharing feature made them popular and an important part of a complete financial market, and they played a key role in the 2007 financial crisis, so pricing them correctly is an important issue. Synthetic CDOs have usually been priced with a one-factor copula model as the structure of the payoff function, with the factor distribution assumed to be normal, Student's t, or normal inverse Gaussian (NIG). The implied correlations of the one-factor copula model, however, exhibit a volatility-smile pattern, which tends to produce pricing errors. To address this problem, this study applies the random factor loading model under the normal and NIG assumptions, examines whether it reduces pricing errors relative to the other models, and compares the efficiency of the models in parameter calibration and pricing, so as to identify a better pricing model for synthetic CDOs.
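The core of one-factor copula pricing is that, conditional on the common factor M, defaults are independent. The sketch below (Python, standard library only) computes the conditional default probability in the Gaussian one-factor model and checks that integrating it over the factor density recovers the unconditional probability; the values of `p` and the loading `a` are hypothetical:

```python
from statistics import NormalDist

N = NormalDist()

def conditional_default_prob(p: float, a: float, m: float) -> float:
    """One-factor Gaussian copula: P(default | common factor M = m).
    The latent variable is X = a*M + sqrt(1 - a^2)*Z, and default occurs
    when X falls below the threshold c = N^{-1}(p)."""
    c = N.inv_cdf(p)
    return N.cdf((c - a * m) / (1.0 - a * a) ** 0.5)

def unconditional_default_prob(p: float, a: float, steps: int = 2000) -> float:
    """Sanity check: integrate the conditional probability against the
    standard normal factor density (midpoint rule on [-8, 8])."""
    lo, hi = -8.0, 8.0
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        m = lo + (i + 0.5) * h
        total += conditional_default_prob(p, a, m) * N.pdf(m) * h
    return total
```

Replacing the constant loading `a` with a function of M is exactly the random factor loading idea the study investigates.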
217

Reliability-based structural design: a case of aircraft floor grid layout optimization

Chen, Qing 07 January 2011 (has links)
In this thesis, several reliability-based design optimization (RBDO) methods and algorithms for aircraft floor grid layout optimization are proposed. A general RBDO process is proposed and validated on an example. Copulas are introduced as a mathematical tool for modeling correlations between random variables, both to discover those correlations and to produce correlated samples for Monte Carlo simulation. Based on the Hasofer-Lind (HL) method, a correlated HL method is proposed to evaluate the reliability index under correlation. Alternatively, the reliability index is cast as an optimization problem, and two nonlinear programming algorithms are introduced to evaluate it. To evaluate the reliability index by Monte Carlo simulation in a time-efficient way, a kriging-based surrogate model is proposed and compared with the original model in terms of computing time. Since the reliability constraint obtained by MCS has no analytical form in the RBDO optimization model, a kriging-based response surface is built. Kriging-based response surfaces are usually piecewise functions without a uniform expression over the design space, whereas most optimization algorithms require such an expression for constraints; to solve this problem, a heuristic gradient-based direct search algorithm is proposed. These methods and algorithms, together with the general RBDO process, are applied to the layout optimization of the aircraft floor grid structure.
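A small illustration of the reliability index at the heart of the HL method. For a linear limit state in independent standard normal space, beta has a closed form (the distance from the origin to the failure surface), and the first-order failure probability is Phi(-beta); the coefficients below are illustrative, and the thesis's correlated-HL and nonlinear-programming variants are not reproduced here:

```python
from statistics import NormalDist

def hl_reliability_index_linear(a0: float, coeffs: list) -> float:
    """Hasofer-Lind reliability index for a LINEAR limit state
    g(u) = a0 + sum(a_i * u_i) over independent standard normal u_i,
    with failure defined as g(u) < 0. In this case the minimum distance
    from the origin to the surface g(u) = 0 is simply a0 / ||a||."""
    norm = sum(c * c for c in coeffs) ** 0.5
    return a0 / norm

def first_order_failure_prob(beta: float) -> float:
    """FORM approximation of the failure probability: Pf = Phi(-beta)."""
    return NormalDist().cdf(-beta)
```

For example, g(u) = 3 - u1 (failure when u1 exceeds 3) gives beta = 3 and a failure probability of about 1.35e-3; nonlinear limit states require the iterative search that the thesis formulates as an optimization problem.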
218

Pricing for First-to-Default Credit Default Swap with Copula

林智勇, Lin,Chih Yung Unknown Date (has links)
A first-to-default credit default swap (CDS) on multiple assets is priced with a default barrier that changes over time, in contrast to the assumption of most structural-form models. The survival function of each asset follows a lognormal distribution, and the interest rate is constant over time. We define the joint survival function of the assets by employing normal and Student-t copula functions to characterize the dependence among the default probabilities of the assets. In addition, we examine the pricing of CDS contracts on two or three companies by varying the values of the model parameters. The most interesting result is that the joint default probability increases as the assets become more positively correlated; consequently, the price of the first-to-default CDS is much higher.
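The joint-survival quantity behind a first-to-default contract can be estimated by simulation. The sketch below uses a Gaussian copula with exponential marginals as an illustrative simplification (the thesis uses lognormal survival functions and also a Student-t copula); the intensities `lam1`, `lam2` and correlation `rho` are hypothetical:

```python
import math
import random
from statistics import NormalDist

N = NormalDist()

def first_to_default_survival(lam1: float, lam2: float, rho: float,
                              t: float, n: int = 20000, seed: int = 7) -> float:
    """Monte Carlo estimate of P(min(T1, T2) > t) under a Gaussian copula.
    Correlated standard normals are mapped to uniforms through the normal
    CDF, then to default times through the inverse exponential CDF."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        t1 = -math.log(1.0 - N.cdf(z1)) / lam1
        t2 = -math.log(1.0 - N.cdf(z2)) / lam2
        if min(t1, t2) > t:
            alive += 1
    return alive / n
```

With positive rho, defaults cluster: both the probability that neither asset defaults and the probability that both default are higher than under independence, which is the dependence effect the abstract describes.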
219

因子相關性結構模型之下合成型擔保債權憑證之評價與避險 / The Pricing and Hedging of Synthetic CDO Under Factor Copula Models

林恩平 Unknown Date (has links)
Credit indices built from credit default swaps (CDS), such as DJ iTraxx Europe and DJ CDX.NA, have appeared in global markets in recent years, and synthetic collateralized debt obligation (synthetic CDO) contracts referencing these indices are issued regularly. Because the contracts are standardized, the secondary market is quite liquid, and both trading and issuance volumes of synthetic CDOs have grown rapidly worldwide. Within the one-factor copula framework, this study builds a pricing model for synthetic CDOs using the probability bucketing method of Hull & White (2004) and, beyond pricing, adds the computation of tranche risk measures; obtaining these additional risk measures increases the program's running time by only about 4%. The same model can also be used to compute tranche hedging parameters, without the instability that hedge parameters exhibit under Monte Carlo simulation. We find that losses already realized by a tranche lower the risk it faces, while erosion of its credit enhancement raises that risk; hence the equity tranche faces greater credit risk early in the contract, whereas the mezzanine tranche faces greater credit risk late in the contract. For hedging, either the underlying credit index or single-name CDS can be used. Finally, a sensitivity analysis with respect to default correlation and recovery rate shows that the equity tranche spread decreases with correlation and increases with the recovery rate; conversely, the senior tranche spread increases with correlation and decreases with the recovery rate; the effect of the two parameters on the mezzanine tranche spread shows no definite pattern.
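Probability bucketing builds the portfolio loss distribution by adding one name at a time. For equal unit notionals and (conditionally) independent defaults, the buckets coincide with default counts and the recursion is exact, as sketched below; the default probabilities and the attachment/detachment points are hypothetical:

```python
def portfolio_loss_distribution(p_list):
    """Loss distribution for independent defaults with unit notional,
    built by one convolution step per name: dist[k] = P(k defaults)."""
    dist = [1.0]
    for p in p_list:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k] += prob * (1.0 - p)      # name survives: loss unchanged
            new[k + 1] += prob * p          # name defaults: loss grows by 1
        dist = new
    return dist

def tranche_expected_loss(dist, attach, detach):
    """Expected loss on a tranche [attach, detach] of the portfolio loss:
    the tranche absorbs losses above `attach`, capped at its width."""
    el = 0.0
    for k, prob in enumerate(dist):
        el += prob * min(max(float(k) - attach, 0.0), detach - attach)
    return el
```

In a full factor-copula pricer this recursion runs conditionally on the common factor, and the tranche expected losses are then integrated over the factor distribution to obtain premium and protection legs.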
220

信用衍生性商品-擔保債權憑證之評價與分析 / Credit Derivatives: Valuation and Analysis of Collateralized Debt Obligations

呂建霖 Unknown Date (has links)
Following the defaults and failures of several large companies in recent years, credit default risk has drawn growing attention from both the financial industry and academia. In theory, when a credit derivative written on multiple underlying assets is used to measure their credit risk, the default correlation among the assets must be taken into account for the measurement to be accurate; estimating and measuring default correlation is therefore particularly important in credit risk management and in the pricing of credit derivatives. This study values a collateralized debt obligation using the copula method of Li (2000) and the probability bucketing method of Hull and White (2004). From the marginal default probability of each asset, a chosen copula function, and estimates of the associated parameters, a multivariate joint probability function embodying the default correlation is obtained; the possible loss distribution is then simulated, and the credit spread and expected loss of each tranche are computed. The case studied is E.SUN Bank's CLO securitization 2005-2: applying the two valuation models with appropriate proxy variables yields the theoretical credit spreads and expected losses of the deal, which are compared with the actual issue prices to produce a reasonable valuation and interpretation. As domestic securitization develops and credit rating databases, historical default probabilities, and related default information become complete, the two models used here, combined with practical and macroeconomic considerations, will measure the credit spread and expected loss of each CDO tranche more precisely. This study thus offers practitioners a feasible CDO valuation method for measuring tranche credit spreads and expected losses, for use in investment and risk hedging, helping the development of CDOs to mature.
