  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Degradation modeling for reliability analysis with time-dependent structure based on the inverse gaussian distribution / Modelagem de degradação para análise de confiabilidade com estrutura dependente do tempo baseada na distribuição gaussiana inversa

Morita, Lia Hanna Martins 07 April 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Conventional reliability analysis techniques focus on the occurrence of failures over time. However, in situations where failures are rare or nearly absent, estimation of the quantities that describe the failure process is compromised. In this context, degradation models were developed; their experimental data are not the failures themselves but some quality characteristic attached to them. Degradation analysis can thus provide information about the components' lifetime distribution without actually observing failures. In this thesis we propose different methodologies for degradation data based on the inverse Gaussian distribution. First, we introduce the inverse Gaussian deterioration rate model for degradation data, together with a study of its asymptotic properties on simulated data. We then propose an inverse Gaussian process model with frailty as a feasible tool to explore the influence of unobserved covariates, and compare it with the traditional inverse Gaussian process in a simulation study.
We also present a mixture inverse Gaussian process model for burn-in tests, where the main interest is to determine the burn-in time and the optimal cutoff point that screens out weak units from normal ones on a production line; a misspecification study is carried out with the Wiener and gamma processes. Finally, we consider a more flexible model with a set of cutoff points, in which the misclassification probabilities are obtained either exactly, via the bivariate inverse Gaussian distribution, or approximately, via copula theory. The methodology is applied to three real datasets from the literature: degradation of LASER components, locomotive wheels, and cracks in metals.
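The first-passage construction behind these degradation models can be sketched numerically. The snippet below is a minimal illustration, not code from the thesis: it simulates an inverse Gaussian process with stationary increments (all parameter values are assumed) and reads off the lifetime as the first time a degradation path crosses a failure threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

mu, lam = 2.0, 8.0          # assumed mean drift and shape of the IG increments
dt, n_steps, n_paths = 0.1, 200, 2000
threshold = 10.0            # assumed failure threshold on the degradation scale

# IG(m, l) increments: scipy's invgauss(mu=m/l, scale=l) has mean m and shape l
m, l = mu * dt, lam * dt**2
inc = stats.invgauss.rvs(mu=m / l, scale=l, size=(n_paths, n_steps), random_state=rng)
paths = np.cumsum(inc, axis=1)

# Lifetime = first passage of the threshold (inf if a path never crosses)
crossed = paths >= threshold
lifetimes = np.where(crossed.any(axis=1), (crossed.argmax(axis=1) + 1) * dt, np.inf)
finite = lifetimes[np.isfinite(lifetimes)]
print(np.median(finite))    # roughly threshold / mu for these settings
```

The Monte Carlo lifetime sample obtained this way approximates the lifetime distribution without ever observing a failure directly, which is the point the abstract makes.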
12

Modelos assimétricos inflacionados de zeros / Zero-inflated asymmetric models

Dias, Mariana Ferreira 28 November 2014 (has links)
The main motivation of this study is the analysis of the amount of blood received in transfusion (standardized by weight) by children with liver disease. This amount has a skewed distribution and includes zero values for the children who did not receive a transfusion. The usual generalized linear models for positive variables do not allow for zeros. For the positive data, such models were fitted with gamma and inverse Gaussian distributions; a log-normal model was also considered. Analysis of the standardized residuals indicated heteroscedasticity, so the extra variability was modeled using the GAMLSS class of models. A third approach consists of models based on a mixture of a point mass at zero and a distribution for the positive values, recently included in the GAMLSS family. These models combine a skewed distribution for the positive data with a probability of occurrence of zeros. In the analysis of the transfusion data, the inverse Gaussian distribution gave the best fit, accommodating the strong skewness better than the other distributions considered. The effects of the explanatory variables Kasai (occurrence of a previous operation) and PELD (a four-level measure of patient severity), as well as their interaction effects on the mean and the variability of the amount of blood received, were significant. Allowing explanatory variables in the dispersion parameter lets the extra variability, beyond its dependence on the mean, be better explained and improves the model fit. The probability of not receiving a transfusion depends significantly only on PELD. The proposed single model, combining the presence of zeros with various skewed distributions, simplifies model fitting and residual analysis. Its results are equivalent to the two-part approach in which the occurrence of transfusion is analyzed with a logistic model, independently of the modeling of the positive data with skewed distributions.
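The equivalence between the mixture fit and the two-part fit comes from the likelihood factorizing into a zero-probability part and a positive part. The sketch below illustrates this with a zero-inflated inverse Gaussian fitted by the factorized maximum likelihood; it is illustrative only (the thesis uses GAMLSS, and all parameter values here are made up).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated transfusion-like data: 30% zeros, positives skewed (IG distributed)
pi_true, mu_true, lam_true = 0.3, 5.0, 2.0
n = 5000
is_zero = rng.random(n) < pi_true
pos = stats.invgauss.rvs(mu=mu_true / lam_true, scale=lam_true, size=n, random_state=rng)
y = np.where(is_zero, 0.0, pos)

# The mixture likelihood factorizes, so the zero probability and the positive
# part are estimated separately -- this is why the two approaches agree
pi_hat = np.mean(y == 0)
positives = y[y > 0]
# IG MLEs: the mean is the sample mean; the shape comes from the identity
# 1/lambda_hat = mean(1/x_i - 1/x_bar)
mu_hat = positives.mean()
lam_hat = 1.0 / np.mean(1.0 / positives - 1.0 / mu_hat)
print(pi_hat, mu_hat, lam_hat)
```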
14

GARCH models based on Brownian Inverse Gaussian innovation processes / Gideon Griebenow

Griebenow, Gideon January 2006 (has links)
In classic GARCH models for financial returns the innovations are usually assumed to be normally distributed. However, it is generally accepted that a non-normal innovation distribution is needed in order to account for the heavier tails often encountered in financial returns. Since the structure of the normal inverse Gaussian (NIG) distribution makes it an attractive alternative innovation distribution for this purpose, we extend the normal GARCH model by assuming that the innovations are NIG-distributed. We use the normal variance mixture interpretation of the NIG distribution to show that a NIG innovation may be interpreted as a normal innovation coupled with a multiplicative random impact factor adjustment of the ordinary GARCH volatility. We relate this new volatility estimate to realised volatility and suggest that the random impact factors are due to a news noise process influencing the underlying returns process. This GARCH model with NIG-distributed innovations leads to more accurate parameter estimates than the normal GARCH model. In order to obtain even more accurate parameter estimates, and since we expect an information gain if we use more data, we further extend the model to cater for high, low and close data, as well as full intraday data, instead of only daily returns. This is achieved by introducing the Brownian inverse Gaussian (BIG) process, which follows naturally from the unit inverse Gaussian distribution and standard Brownian motion. Fitting these models to empirical data, we find that the accuracy of the model fit increases as we move from the models assuming normally distributed innovations and allowing for only daily data to those assuming underlying BIG processes and allowing for full intraday data. However, we do encounter one problematic result, namely that there is empirical evidence of time dependence in the random impact factors. 
This means that the news noise processes, which we assumed to be independent over time, are indeed time dependent, as can actually be expected. In order to cater for this time dependence, we extend the model still further by allowing for autocorrelation in the random impact factors. The increased complexity that this extension introduces means that we can no longer rely on standard Maximum Likelihood methods, but have to turn to Simulated Maximum Likelihood methods, in conjunction with Efficient Importance Sampling and the Control Variate variance reduction technique, in order to obtain an approximation to the likelihood function and the parameter estimates. We find that this time dependent model assuming an underlying BIG process and catering for full intraday data fits generated data and empirical data very well, as long as enough intraday data is available. / Thesis (Ph.D. (Risk Analysis))--North-West University, Potchefstroom Campus, 2006.
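The core construction, a GARCH(1,1) recursion driven by standardized NIG innovations in place of Gaussian ones, can be sketched as follows. This is an illustration with assumed parameter values, not the estimation code from the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

omega, alpha, beta = 0.05, 0.05, 0.90   # assumed GARCH(1,1) parameters
a = 2.0                                  # NIG tail-heaviness parameter (assumed)
n = 10000

# Symmetric NIG(a, b=0) scaled by sqrt(a) has mean 0 and variance 1,
# so it can replace the standard normal innovation directly
z = stats.norminvgauss.rvs(a, 0.0, loc=0.0, scale=np.sqrt(a), size=n, random_state=rng)

r, h = np.empty(n), np.empty(n)
h[0] = omega / (1.0 - alpha - beta)      # start at the unconditional variance
r[0] = np.sqrt(h[0]) * z[0]
for t in range(1, n):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * z[t]

# Heavier-than-normal tails: Pearson kurtosis of the returns exceeds 3
print(r.var(), stats.kurtosis(r, fisher=False))
```

The NIG innovation contributes excess kurtosis on top of what the GARCH recursion already induces, which is the motivation the abstract gives for moving away from normal innovations.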
15

LACTONE-CARBOXYLATE INTERCONVERSION AS A DETERMINANT OF THE CLEARANCE AND ORAL BIOAVAILABILITY OF THE LIPOPHILIC CAMPTOTHECIN ANALOG AR-67

Adane, Eyob Debebe 01 January 2010 (has links)
The third generation camptothecin analog, AR-67, is undergoing early phase clinical trials as a chemotherapeutic agent. Like all camptothecins it undergoes pH dependent reversible hydrolysis between the lipophilic lactone and the hydrophilic carboxylate. The physicochemical differences between the lactone and carboxylate could potentially give rise to differences in transport across and/or entry into cells. In vitro studies indicated reduced intracellular accumulation and/or apical to basolateral transport of AR-67 lactone in P-gp and/or BCRP overexpressing MDCKII cells and increased cellular uptake of carboxylate in OATP1B1 and OATP1B3 overexpressing HeLa-pIRESneo cells. Pharmacokinetic studies were conducted in rats to study the disposition and oral bioavailability of the lactone and carboxylate and to evaluate the extent of the interaction with uptake and efflux transporters. A pharmacokinetic model accounting for interconversion in the plasma was developed and its performance evaluated through simulations and in vivo transporter inhibition studies using GF120918 and rifampin. The model predicted well the likely scenarios to be encountered clinically from pharmacogenetic differences in transporter proteins, drug-drug interactions and organ function alterations. Oral bioavailability studies showed similarity following lactone and carboxylate administration and indicated the significant role ABC transporters play in limiting the oral bioavailability.
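The reversible hydrolysis at the heart of such an interconversion model can be sketched as a pair of first-order differential equations, one per form, each with its own elimination. All rate constants below are invented for illustration; they are not the fitted values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_lc, k_cl = 0.8, 0.3    # lactone->carboxylate and reverse rates (1/h), assumed
ke_l, ke_c = 0.5, 0.2    # elimination rates of each form (1/h), assumed

def rhs(t, y):
    # Reversible interconversion plus first-order elimination of each species
    L, C = y
    return [-(k_lc + ke_l) * L + k_cl * C,
            k_lc * L - (k_cl + ke_c) * C]

# Unit lactone dose at t=0, no carboxylate initially; follow for 12 hours
sol = solve_ivp(rhs, (0.0, 12.0), [1.0, 0.0], dense_output=True, rtol=1e-8)
L12, C12 = sol.y[:, -1]
print(L12, C12)
```

In a model of this shape, the observed terminal decline of both forms is governed by the slow eigenvalue of the interconversion-elimination matrix, which is one reason the two forms cannot be analyzed in isolation.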
17

Generating Generalized Inverse Gaussian Random Variates by Fast Inversion

Leydold, Josef, Hörmann, Wolfgang January 2009 (has links) (PDF)
We demonstrate that for the fast numerical inversion of the (generalized) inverse Gaussian distribution two algorithms based on polynomial interpolation are well-suited. Their precision is close to machine precision and they are much faster than the bisection method recently proposed by Y. Lai. / Series: Research Report Series / Department of Statistics and Mathematics
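The inversion-by-interpolation idea can be sketched with SciPy's generalized inverse Gaussian distribution: tabulate the quantile function once on a grid (the expensive step), then sample by evaluating a monotone cubic interpolant. This mirrors the approach in spirit only; it is not the authors' algorithm, and the parameter values are arbitrary.

```python
import numpy as np
from scipy import stats, interpolate

p, b = -0.5, 1.0                       # GIG parameters in scipy's convention
dist = stats.geninvgauss(p, b)

# Set-up step (done once): tabulate quantiles on a grid in (0, 1)
u_grid = np.linspace(1e-4, 1 - 1e-4, 200)
q_grid = dist.ppf(u_grid)
inv_cdf = interpolate.PchipInterpolator(u_grid, q_grid)   # monotone cubic

# Sampling is then one cheap interpolant evaluation per uniform variate
rng = np.random.default_rng(7)
u = rng.uniform(1e-4, 1 - 1e-4, size=10000)
x = inv_cdf(u)

# u-scale error at the grid midpoints: monotonicity bounds it by the grid
# spacing, and in practice it is far smaller
mid = 0.5 * (u_grid[:-1] + u_grid[1:])
err = np.max(np.abs(dist.cdf(inv_cdf(mid)) - mid))
print(err)
```

A polynomial-interpolation scheme of the kind the paper studies refines this basic picture (adaptive nodes, near-machine precision); the sketch only shows why inversion amortizes well when many variates are needed.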
18

Valuation of Mortgage Insurance Contracts under the IG-GARJI Model

林思岑, Lin, Szu Tsen Unknown Date (has links)
Mortgage insurance products represent an attractive alternative for managing default risk. Since the subprime crisis in 2008, financial institutions have paid increasing attention to credit and default risk in the mortgage market. To forecast house prices more accurately and efficiently and to value mortgage insurance contracts properly, we follow the approach of Christoffersen, Heston and Jacobs (2006) and propose a new GARCH model with inverse Gaussian innovations in place of the normal distribution, capable of capturing the autocorrelation and the stylized facts revealed in house price series. In addition, we incorporate the jump risk widely discussed in the housing market. To distinguish our model from the traditional GARCH model, we name it the IG-GARJI model. Traditional GARCH models generally admit no analytical solution, and simulation-based valuation of mortgage insurance can increase prediction error; we therefore derive a semi-analytical solution for our model to improve efficiency and accuracy. Furthermore, we apply the Esscher transform introduced by Bühlmann et al. (1996) to identify a martingale measure, and then use the recursive procedure proposed by Heston and Nandi (2000) to value the mortgage insurance contract. The empirical results indicate that the model with inverse Gaussian innovations outperforms the model with normal innovations in the newly-built house market, while no significant difference between the models is found in the previously occupied house market. Moreover, we follow Bardhan, Karapandža, and Urošević (2006) to investigate the impact of legal efficiency on the mortgage insurance premium. Our model thus provides another alternative for valuing mortgage insurance contracts.
19

A Study of Gamma Distributions and Some Related Works

Chou, Chao-Wei 11 May 2004 (has links)
Characterization of distributions has been an important topic in statistical theory for decades. Although many well-known results have already been developed, it remains of great interest to find new characterizations of distributions commonly used in applications, such as the normal or gamma distribution. In practice, the distribution fitted to observed data is sometimes chosen by guesswork and sometimes by appeal to the characteristic properties of candidate distributions. In this paper we restrict our attention to characterizations of the gamma distribution, together with related studies on parameter estimation based on the characterization properties. Some simulation studies are also given.
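One classical characterization of this kind (Lukacs' theorem: for independent positive X and Y, X/(X+Y) is independent of X+Y if and only if both are gamma with a common scale) lends itself to a quick simulation check. The sketch below is illustrative only and is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000

# Gamma with a common scale: ratio and sum are exactly independent,
# even when the shape parameters differ
x, y = rng.gamma(2.0, 1.0, n), rng.gamma(5.0, 1.0, n)
corr_same = np.corrcoef(x / (x + y), x + y)[0, 1]

# Gamma with different scales: the independence breaks down
x, y = rng.gamma(2.0, 1.0, n), rng.gamma(2.0, 3.0, n)
corr_diff = np.corrcoef(x / (x + y), x + y)[0, 1]

print(corr_same, corr_diff)
```

The first correlation is zero up to Monte Carlo noise, while the second is clearly negative: a large sum is more likely driven by the larger-scale component, pulling the ratio down.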
20

On the calibration of Lévy option pricing models / Izak Jacobus Henning Visagie

Visagie, Izak Jacobus Henning January 2015 (has links)
In this thesis we consider the calibration of models based on Lévy processes to option prices observed in some market. This means that we choose the parameters of the option pricing models such that the prices calculated using the models correspond as closely as possible to these option prices. We demonstrate the ability of relatively simple Lévy option pricing models to nearly perfectly replicate option prices observed in financial markets. We specifically consider calibrating option pricing models to barrier option prices and we demonstrate that the option prices obtained under one model can be very accurately replicated using another. Various types of calibration are considered in the thesis. We calibrate a wide range of Lévy option pricing models to option price data. We consider exponential Lévy models under which the log-return process of the stock is assumed to follow a Lévy process. We also consider linear Lévy models; under these models the stock price itself follows a Lévy process. Further, we consider time changed models. Under these models time does not pass at a constant rate, but follows some non-decreasing Lévy process. We model the passage of time using the lognormal, Pareto and gamma processes. In the context of time changed models we consider linear as well as exponential models. The normal inverse Gaussian (NIG) model plays an important role in the thesis. The numerical problems associated with the NIG distribution are explored and we propose ways of circumventing these problems. Parameter estimation for this distribution is discussed in detail. Changes of measure play a central role in option pricing. We discuss two well-known changes of measure; the Esscher transform and the mean correcting martingale measure. We also propose a generalisation of the latter and we consider the use of the resulting measure in the calculation of arbitrage free option prices under exponential Lévy models.
/ PhD (Risk Analysis), North-West University, Potchefstroom Campus, 2015
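Calibration in the sense used here can be sketched in a few lines: price calls under an exponential NIG model by Monte Carlo with a mean-correcting martingale adjustment, then choose the tail parameter so the model prices match a set of "market" prices. In this toy example the market prices are generated from the model itself, so the recovered parameter should be close to the true one; this is an illustration with assumed values, not the thesis's calibration code.

```python
import numpy as np
from scipy import stats, optimize

S0, r, T = 100.0, 0.0, 1.0
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])

def call_prices(a):
    # Common random numbers: the same seed for every candidate parameter,
    # so the objective varies smoothly with a
    rs = np.random.default_rng(123)
    x = stats.norminvgauss.rvs(a, 0.0, size=20000, random_state=rs)
    x = x - np.log(np.mean(np.exp(x)))        # mean-correcting adjustment
    ST = S0 * np.exp(r * T + x)
    disc = np.exp(-r * T)
    return np.array([disc * np.maximum(ST - K, 0.0).mean() for K in strikes])

market = call_prices(2.0)                      # synthetic "market" prices
res = optimize.minimize_scalar(lambda a: np.sum((call_prices(a) - market) ** 2),
                               bounds=(1.5, 4.0), method="bounded")
print(res.x)                                   # should recover a value near 2.0
```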
