161

美國退休福利保險公司狀態轉換保險評價模型 / The Pricing Model of Pension Benefit Guaranty Corporation Insurance with Regime Switching Processes

王暐豪, Wang, Wei Hao Unknown Date (has links)
本文研究美國退休福利保險公司(PBGC)保險價值的計算,延伸 Marcus (1987)模型,提出狀態轉換過程保險價值模型計算,也就是將市場分為兩種情況,正成長率視為正常狀態,負成長率為衰退狀態,利用狀態轉換過程評價 PBGC 契約在經濟困難而終止和介入終止下合理的保險價值。在參數估計方面,本文以 S&P500股價指數和一年期國庫券資料參數估計值及Marcus(1987)和Pennacchi and Lewis(1994)的方式給定參數,以 EM-PSO-Gradient 延伸 EM-Gradient 方法並以最大概似函數值、AIC 準則和 BIC 準則比較估計結果。最後固定其他參數,探討狀態轉換過程保險價值模型對參數調整後保險價值的影響之敏感度分析。 / In this paper, we evaluate Pension Benefit Guaranty Corporation (PBGC) insurance values through regime-switching models, extending the model of Marcus (1987). That is, we separate periods of faster economic growth from periods of slower growth when observing long-term economic trends, and use regime-switching processes to calculate reasonable PBGC insurance values under both distress termination and intervention termination. Parameters are estimated from S&P 500 index and one-year Treasury bill data with EM-PSO-Gradient, an extension of the EM-Gradient method, and are also set following Marcus (1987) and Pennacchi and Lewis (1994); the estimation results are compared by maximum likelihood value, AIC, and BIC. Finally, holding the other parameters fixed, we conduct a sensitivity analysis of how adjusting each parameter affects the insurance values produced by the regime-switching model.
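
As a rough illustration of the regime-switching mechanism described in the abstract (not the thesis's actual PBGC valuation model), the Python sketch below simulates an asset index whose drift and volatility switch between a normal state and a recession state according to a two-state Markov chain; all parameter values are hypothetical.

    import numpy as np

    # Hypothetical two-regime parameters: state 0 = normal growth, state 1 = recession.
    mu = np.array([0.08, -0.03])     # annual drift per regime (assumed values)
    sigma = np.array([0.15, 0.25])   # annual volatility per regime (assumed values)
    P = np.array([[0.95, 0.05],      # regime transition probabilities (assumed values)
                  [0.20, 0.80]])

    def simulate_regime_switching_path(s0=1.0, years=30, steps_per_year=12, seed=0):
        """Simulate one asset path under a discrete-time regime-switching model."""
        rng = np.random.default_rng(seed)
        dt = 1.0 / steps_per_year
        state, s = 0, s0
        path = [s]
        for _ in range(years * steps_per_year):
            z = rng.standard_normal()
            s *= np.exp((mu[state] - 0.5 * sigma[state] ** 2) * dt
                        + sigma[state] * np.sqrt(dt) * z)
            state = rng.choice(2, p=P[state])   # draw the next regime
            path.append(s)
        return np.array(path)

    print(simulate_regime_switching_path()[-1])
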
162

Dichotomous-Data Reliability Models with Auxiliary Measurements

俞一唐, Yu, I-Tang Unknown Date (has links)
我們提供一個新的可靠度模型,DwACM,並提供一個模式選擇準則CCP,我們利用DwACM和CCP來選擇衰變量。 / We propose a new reliability model, DwACM (Dichotomous-data with Auxiliary Continuous Measurements model), to describe data sets that consist of a classical dichotomous response (Go or No-Go) together with a set of continuous auxiliary measurements. In this model, the lifetime of each individual is treated as a latent variable. Given the value of the latent variable, the dichotomous response is 0 or 1 depending on whether the unit has failed by the measuring time. The continuous measurement can be regarded as an observation of a candidate degradation variable whose descending process is a function of the lifetime. Under the assumption that failure is defined as the time at which the continuous measurement reaches a threshold, the two measurements are linked in the proposed model. Statistical inference under this model is carried out in both frequentist and Bayesian frameworks. To evaluate candidate continuous measurements, we provide a criterion, CCP (correct classification probability), for selecting the best degradation measurement. We also report simulation studies of the performance of the parameter estimators and of CCP.
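
The link between the dichotomous response and the continuous measurement can be pictured with a small simulation, sketched below under assumed distributions (exponential lifetimes and a simple degradation path); it only illustrates the data structure, not the DwACM estimation procedure.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_dwacm_like_data(n=200, t_measure=2.0, threshold=1.0):
        """Toy data: one latent lifetime drives both a Go/No-Go response and a
        continuous auxiliary measurement (assumed functional form)."""
        lifetime = rng.exponential(scale=3.0, size=n)       # latent variable
        go_nogo = (lifetime <= t_measure).astype(int)       # 1 = failed by the measuring time
        # Descending process that reaches the threshold exactly at the lifetime,
        # evaluated (with noise) at the measuring time t_measure.
        measurement = threshold * lifetime / t_measure + rng.normal(0.0, 0.05, n)
        return go_nogo, measurement

    y, x = simulate_dwacm_like_data()
    print(y[:10], np.round(x[:10], 2))
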
163

條件評估法中處理「不知道」回應之研究 / Analysis of contingent valuation survey data with “Don’t Know” responses

王昱博, Wang, Yu Bo Unknown Date (has links)
本文主要著重在處理條件評估法下，「不知道」受訪者的回應。當「不知道」受訪者的產生機制並未符合完全隨機時，考量他們的真實意向就顯得極為重要。文中使用中央研究院生醫所在其研究計畫「竹東及朴子地區心臟血管疾病長期追蹤研究」(CardioVascular Disease risk FACtor Two-township Study,簡稱CVDFACTS)第五循環中的研究調查資料。由於以往的文獻對於「不知道」受訪者的處理，皆有不足之處。如Wang (1997)所提出的方法，就只能針對某種特定的「不知道」受訪者來做處理；而Caudill and Groothuis (2005)所提的方法，由於將「不知道」受訪者的差補與願付價格的估計分開，亦使其估計結果不具備一些好的性質。在本文中，我們提出一個能同時處理「不知道」受訪者且估計願付價格的方法。除了使得統計上較有效率外，也保有EM演算法的一個特性：願付價格模型中的估計參數為最大概似估計值。此外，在加入三要素混合模型(Tsai (2005))後，我們也可避免用到極端受訪者的訊息去差補那些「不知道」受訪者的意向。在分析願付價格的過程中，我們發現此筆資料的「不知道」受訪者，其產生的機制為隨機，而非為完全隨機，這意謂著不考量「不知道」受訪者的分析結果，必定會產生偏差。而在比較有考量「不知道」受訪者與沒有的情況後，其結果確實應證了我們的想法：只要「不知道」受訪者不是完全隨機產生的，那麼不考量他們必定會產生某種程度的偏差。 / This paper investigates how to deal with “Don’t Know” (DK) responses in contingent valuation surveys; such responses must be taken into account whenever they do not arise completely at random. The data are from the fifth cycle of the Cardiovascular Disease Risk Factor Two-township Study (CVDFACTS), a series of long-term surveys conducted by the Institute of Biomedical Sciences, Academia Sinica. Previous approaches to DK responses have been unsatisfactory: they either handle only certain types of DK respondents (Wang (1997)) or separate the imputation of DK responses from the estimation of willingness to pay (WTP) (Caudill and Groothuis (2005)). In this paper we introduce an integrated method that imputes DK responses and estimates WTP simultaneously. Besides being statistically more efficient, the single-step method guarantees that the estimated WTP-model parameters are maximum likelihood estimates, a property inherited from the EM algorithm. Furthermore, by incorporating the three-component mixture model (Tsai (2005)), information from extreme respondents is kept out of the imputation of DK inclinations. In the hypertension data analyzed here, the DK responses arise at random but not completely at random, which means that simply dropping them leads to biased results. Comparing the DK-dropped and DK-included analyses with our method reveals a clear difference, confirming that a DK-dropped analysis is biased whenever the DK responses are not completely at random.
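
A stripped-down illustration of treating DK answers as missing data inside an EM loop is sketched below; it estimates a single yes-probability rather than a full WTP model and assumes the DK mechanism is ignorable, so it conveys only the flavour of the joint approach.

    import numpy as np

    def em_yes_probability(n_yes, n_no, n_dk, tol=1e-10, max_iter=500):
        """EM estimate of P(yes) when DK respondents hide a latent yes/no answer
        (toy missing-data version of imputing and estimating jointly)."""
        p = 0.5                                  # initial guess
        for _ in range(max_iter):
            expected_yes_dk = n_dk * p           # E-step: expected latent "yes" among DK
            p_new = (n_yes + expected_yes_dk) / (n_yes + n_no + n_dk)   # M-step
            if abs(p_new - p) < tol:
                break
            p = p_new
        return p

    print(em_yes_probability(n_yes=120, n_no=80, n_dk=40))
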
164

Analyse bayésienne et classification pour modèles continus modifiés à zéro / Bayesian analysis and clustering for zero-modified continuous models

Labrecque-Synnott, Félix 08 1900 (has links)
Les modèles à sur-représentation de zéros discrets et continus ont une large gamme d'applications et leurs propriétés sont bien connues. Bien qu'il existe des travaux portant sur les modèles discrets à sous-représentation de zéro et modifiés à zéro, la formulation usuelle des modèles continus à sur-représentation -- un mélange entre une densité continue et une masse de Dirac -- empêche de les généraliser afin de couvrir le cas de la sous-représentation de zéros. Une formulation alternative des modèles continus à sur-représentation de zéros, pouvant aisément être généralisée au cas de la sous-représentation, est présentée ici. L'estimation est d'abord abordée sous le paradigme classique, et plusieurs méthodes d'obtention des estimateurs du maximum de vraisemblance sont proposées. Le problème de l'estimation ponctuelle est également considéré du point de vue bayésien. Des tests d'hypothèses classiques et bayésiens visant à déterminer si des données sont à sur- ou sous-représentation de zéros sont présentées. Les méthodes d'estimation et de tests sont aussi évaluées au moyen d'études de simulation et appliquées à des données de précipitation agrégées. Les diverses méthodes s'accordent sur la sous-représentation de zéros des données, démontrant la pertinence du modèle proposé. Nous considérons ensuite la classification d'échantillons de données à sous-représentation de zéros. De telles données étant fortement non normales, il est possible de croire que les méthodes courantes de détermination du nombre de grappes s'avèrent peu performantes. Nous affirmons que la classification bayésienne, basée sur la distribution marginale des observations, tiendrait compte des particularités du modèle, ce qui se traduirait par une meilleure performance. Plusieurs méthodes de classification sont comparées au moyen d'une étude de simulation, et la méthode proposée est appliquée à des données de précipitation agrégées provenant de 28 stations de mesure en Colombie-Britannique. / Zero-inflated models, both discrete and continuous, have a large variety of applications and fairly well-known properties. Some work has been done on zero-deflated and zero-modified discrete models. The usual formulation of continuous zero-inflated models -- a mixture between a continuous density and a Dirac mass at zero -- precludes their extension to cover the zero-deflated case. We introduce an alternative formulation of zero-inflated continuous models, along with a natural extension to the zero-deflated case. Parameter estimation is first studied within the classical frequentist framework. Several methods for obtaining the maximum likelihood estimators are proposed. The problem of point estimation is considered from a Bayesian point of view. Hypothesis testing, aiming at determining whether data are zero-inflated, zero-deflated or not zero-modified, is also considered under both the classical and Bayesian paradigms. The proposed estimation and testing methods are assessed through simulation studies and applied to aggregated rainfall data. The data is shown to be zero-deflated, demonstrating the relevance of the proposed model. We next consider the clustering of samples of zero-deflated data. Such data present strong non-normality. Therefore, the usual methods for determining the number of clusters are expected to perform poorly. We argue that Bayesian clustering based on the marginal distribution of the observations would take into account the particularities of the model and exhibit better performance. 
Several clustering methods are compared using a simulation study. The proposed method is applied to aggregated rainfall data sampled from 28 measuring stations in British Columbia.
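
For intuition, a minimal zero-inflated continuous log-likelihood (a point mass at zero mixed with an exponential density, a hypothetical parameterization; the thesis uses a different formulation that also covers the zero-deflated case) can be fitted as in the sketch below.

    import numpy as np
    from scipy import optimize, stats

    def neg_loglik(params, x):
        """Negative log-likelihood of a toy zero-inflated exponential model:
        P(X = 0) = p and X | X > 0 ~ Exponential(rate)."""
        p = 1.0 / (1.0 + np.exp(-params[0]))     # logit-transformed zero mass
        rate = np.exp(params[1])                 # log-transformed rate
        zero = (x == 0)
        ll = np.sum(zero) * np.log(p) + np.sum(~zero) * np.log1p(-p)
        ll += np.sum(stats.expon.logpdf(x[~zero], scale=1.0 / rate))
        return -ll

    rng = np.random.default_rng(2)
    x = np.where(rng.random(500) < 0.3, 0.0, rng.exponential(2.0, 500))  # toy rainfall-like data
    fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0], args=(x,))
    print(1.0 / (1.0 + np.exp(-fit.x[0])), np.exp(fit.x[1]))   # estimated zero mass and rate
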
165

Actuarial applications of multivariate phase-type distributions : model calibration and credibility

Hassan Zadeh, Amin January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
166

Contribution des familles exponentielles en traitement des images / Contribution of the exponential families to image processing

Ben Arab, Taher 26 April 2014 (has links)
Cette thèse est consacrée à l'évaluation des familles exponentielles pour les problèmes de la modélisation des bruits et de la segmentation des images couleurs. Dans un premier temps, nous avons développé une nouvelle caractérisation des familles exponentielles naturelles infiniment divisible basée sur la fonction trace de la matrice de variance covariance associée. Au niveau application, cette nouvelle caractérisation a permis de détecter la nature de la loi d'un bruit additif associé à un signal ou à une image couleur. Dans un deuxième temps, nous avons proposé un nouveau modèle statistique paramétrique multivarié basé sur la loi de Riesz. La loi de ce nouveau modèle est appelée loi de la diagonale modifiée de Riesz. Ensuite, nous avons généralisé ce modèle au cas de mélange fini de lois. Enfin, nous avons introduit un algorithme de segmentation statistique d'image couleur, à travers l'intégration de la méthode des centres mobiles (K-means) au niveau de l'initialisation pour une meilleure définition des classes de l'image et l'algorithme EM pour l'estimation des différents paramètres de chaque classe qui suit la loi de la diagonale modifiée de la loi de Riesz. / This thesis is devoted to the use of exponential families for noise modeling and color image segmentation. First, we developed a new characterization of infinitely divisible natural exponential families based on the trace function of the associated variance-covariance matrix. At the application level, this new characterization made it possible to identify the distribution of an additive noise associated with a signal or a color image. Second, we proposed a new parametric multivariate statistical model based on the Riesz distribution; the law of this new model is called the modified diagonal Riesz distribution. We then generalized this model to the case of a finite mixture of distributions. Finally, we introduced a statistical color-image segmentation algorithm that integrates the K-means method at the initialization stage, for a better definition of the image classes, with the EM algorithm for estimating the parameters of each class, which follows the modified diagonal Riesz distribution.
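
The initialization-then-EM pipeline described above can be imitated with standard tools, as in the sketch below; a Gaussian mixture stands in for the modified diagonal Riesz distribution, which is not available in common libraries, and image loading is omitted.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    def segment_color_pixels(pixels, n_classes=4, seed=0):
        """Cluster RGB pixels: K-means supplies initial class centers, then EM
        (here a Gaussian mixture, used in place of the modified diagonal Riesz
        model) refines the segmentation."""
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(pixels)
        gmm = GaussianMixture(n_components=n_classes, means_init=km.cluster_centers_,
                              covariance_type="diag", random_state=seed)
        return gmm.fit_predict(pixels)          # one class label per pixel

    pixels = np.random.default_rng(0).random((1000, 3))   # stand-in for an (H*W) x 3 image array
    print(np.bincount(segment_color_pixels(pixels)))
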
167

Les généralisations des récursivités de Kalman et leurs applications / Kalman recursion generalizations and their applications

Kadhim, Sadeq 20 April 2018 (has links)
Nous considérons des modèles à espace d'état où les observations sont multicatégorielles et longitudinales, et l'état est décrit par des modèles du type CHARN. Nous estimons l'état au moyen des récursivités de Kalman généralisées. Celles-ci reposent sur l'application d'une variété de filtres particulaires et de l'algorithme EM. Nos résultats sont appliqués à l'estimation du trait latent en qualité de vie. Ce qui fournit une alternative et une généralisation des méthodes existantes dans la littérature. Ces résultats sont illustrés par des simulations numériques et une application aux données réelles sur la qualité de vie des femmes ayant subi une opération pour cause de cancer du sein. / We consider state-space models in which the observations are multicategorical and longitudinal and the state is described by CHARN models. We estimate the state by generalized Kalman recursions, which rely on a variety of particle filters and on the EM algorithm. Our results are applied to estimating the latent trait in quality-of-life studies, providing an alternative to, and a generalization of, existing methods. The results are illustrated by numerical simulations and by an application to real data on the quality of life of women who underwent surgery for breast cancer.
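
A bare-bones bootstrap particle filter for a generic nonlinear state-space model is sketched below; the AR(1) state dynamics and Gaussian observation model are placeholders, not the CHARN / multicategorical specification of the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_particle_filter(y, n_particles=1000):
        """Generic bootstrap particle filter (assumed toy model: AR(1) state,
        Gaussian observations); returns the filtered state means."""
        x = rng.normal(0.0, 1.0, n_particles)                 # initial particles
        means = []
        for obs in y:
            x = 0.8 * x + rng.normal(0.0, 0.5, n_particles)   # propagate (assumed dynamics)
            logw = -0.5 * ((obs - x) / 0.3) ** 2              # Gaussian observation log-weights
            w = np.exp(logw - logw.max())
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
            x = x[idx]
            means.append(x.mean())
        return np.array(means)

    y = 0.1 * np.cumsum(rng.normal(size=50))   # toy observation sequence
    print(bootstrap_particle_filter(y)[:5])
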
168

Calibração linear assimétrica / Asymmetric Linear Calibration

Figueiredo, Cléber da Costa 27 February 2009 (has links)
A presente tese aborda aspectos teóricos e aplicados da estimação dos parâmetros do modelo de calibração linear com erros distribuídos conforme a distribuição normal-assimétrica (Azzalini, 1985) e t-normal-assimétrica (Gómez, Venegas e Bolfarine, 2007). Aplicando um modelo assimétrico, não é necessário transformar as variáveis a fim de obter erros simétricos. A estimação dos parâmetros e das variâncias dos estimadores do modelo de calibração foram estudadas através da visão freqüentista e bayesiana, desenvolvendo algoritmos tipo EM e amostradores de Gibbs, respectivamente. Um dos pontos relevantes do trabalho, na óptica freqüentista, é a apresentação de uma reparametrização para evitar a singularidade da matriz de informação de Fisher sob o modelo de calibração normal-assimétrico na vizinhança de lambda = 0. Outro interessante aspecto é que a reparametrização não modifica o parâmetro de interesse. Já na óptica bayesiana, o ponto forte do trabalho está no desenvolvimento de medidas para verificar a qualidade do ajuste e que levam em consideração a assimetria do conjunto de dados. São propostas duas medidas para medir a qualidade do ajuste: o ADIC (Asymmetric Deviance Information Criterion) e o EDIC (Evident Deviance Information Criterion), que são extensões da ideia de Spiegelhalter et al. (2002) que propôs o DIC ordinário que só deve ser usado em modelos simétricos. / This thesis focuses on theoretical and applied aspects of parameter estimation in the linear calibration model with skew-normal (Azzalini, 1985) and skew-t-normal (Gómez, Venegas and Bolfarine, 2007) error distributions. With an asymmetric error distribution, it is not necessary to transform the variables in order to obtain symmetric errors. Both frequentist and Bayesian solutions are presented: parameter and variance estimation are studied via an EM-type algorithm and a Gibbs sampler, respectively. The main contribution on the frequentist side is a new parameterization that avoids singularity of the Fisher information matrix under the skew-normal calibration model in a neighborhood of lambda = 0. Another interesting aspect is that this reparameterization, which makes the information matrix nonsingular when the skewness parameter is near zero, leaves the parameter of interest unchanged. The main contribution on the Bayesian side is the development of two goodness-of-fit measures that take the asymmetry of the data into account: ADIC (Asymmetric Deviance Information Criterion) and EDIC (Evident Deviance Information Criterion), natural extensions of the ordinary DIC of Spiegelhalter et al. (2002), which should only be used with symmetric models.
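
As a hedged illustration of working with skew-normal errors (direct parameterization only; the thesis's reparameterization and its EM / Gibbs machinery are not reproduced), a linear-calibration log-likelihood might be maximized as in the sketch below.

    import numpy as np
    from scipy import optimize, stats

    def neg_loglik(params, x, y):
        """Negative log-likelihood of y = a + b*x + e, with e skew-normal
        (location 0, scale omega, shape lam); direct parameterization."""
        a, b, log_omega, lam = params
        resid = y - (a + b * x)
        return -np.sum(stats.skewnorm.logpdf(resid, a=lam, loc=0.0, scale=np.exp(log_omega)))

    rng = np.random.default_rng(3)
    x = rng.uniform(0.0, 10.0, 200)
    y = 1.0 + 2.0 * x + stats.skewnorm.rvs(a=3.0, scale=0.8, size=200, random_state=rng)
    fit = optimize.minimize(neg_loglik, x0=[0.0, 1.0, 0.0, 0.5], args=(x, y), method="Nelder-Mead")
    print(np.round(fit.x, 2))   # intercept, slope, log-scale, shape
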
170

雙變量脆弱性韋伯迴歸模式之研究 / A Study of Bivariate Frailty Weibull Regression Models

余立德, Yu, Li-Ta Unknown Date (has links)
摘要 本文主要考慮群集樣本(clustered samples)的存活分析，而每一群集中又分為兩種組別(groups)。假定同群集同組別內的個體共享相同但不可觀測的隨機脆弱性(frailty)，因此面臨的是雙變量脆弱性變數的多變量存活資料。首先，驗證雙變量脆弱性對雙變量對數存活時間及雙變量存活時間之相關係數所造成的影響。接著，假定雙變量脆弱性服從雙變量對數常態分配，條件存活時間模式為韋伯迴歸模式，我們利用EM法則，推導出雙變量脆弱性之多變量存活模式中母數的估計方法。 關鍵詞：雙變量脆弱性，Weibull迴歸模式，對數常態分配，EM法則 / We consider survival analysis for clustered samples in which each cluster contains two groups. Individuals within the same cluster and the same group are assumed to share a common but unobservable random frailty, so the focus of this work is a bivariate frailty model for multivariate survival data. First, we derive expressions for the correlations between the two survival times (and between their logarithms) to show how the bivariate frailty affects these correlation coefficients. Then, the bivariate log-normal distribution is used to model the bivariate frailty, with a Weibull regression model for the conditional survival times, and we modify the EM algorithm to estimate the parameters of the Weibull regression model with bivariate log-normal frailty. Key words: bivariate frailty, Weibull regression model, log-normal distribution, EM algorithm.
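
To make the shared-frailty structure concrete, the sketch below simulates clustered Weibull survival times with a common log-normal frailty for each cluster-group combination; it illustrates the data-generating mechanism only, not the modified EM estimation.

    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_shared_frailty_weibull(n_clusters=50, n_per_group=5,
                                        shape=1.5, base_scale=2.0, frailty_sd=0.5):
        """Simulate survival times for clusters split into two groups; individuals
        in the same cluster and group share one log-normal frailty (assumed setup)."""
        records = []
        for c in range(n_clusters):
            for g in range(2):
                z = rng.lognormal(mean=0.0, sigma=frailty_sd)   # shared frailty
                # A frailty z multiplying the Weibull hazard is equivalent to
                # dividing the scale by z ** (1 / shape).
                t = base_scale * rng.weibull(shape, n_per_group) / z ** (1.0 / shape)
                records += [(c, g, ti) for ti in t]
        return records

    data = simulate_shared_frailty_weibull()
    print(data[:4])   # (cluster, group, survival time) tuples
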
