  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Incorporation of Genetic Marker Information in Estimating Model Parameters for Complex Traits with Data from Large Complex Pedigrees

Luo, Yuqun 20 December 2002 (has links)
No description available.
32

Implementing a Systematic Gibbs Sampler Method to Explore Probability Bias in AI Agents

Bisht, Charu January 2024 (has links)
In an era increasingly shaped by artificial intelligence (AI), the need for unbiased decision-making from AI systems intensifies. Various psychological theories document the inherent biases in human decision-making. Prospect Theory, prominent among them, uses a probability weighting function (PWF) to gain insight into human decision processes. This observation prompts an intriguing question: can this framework be extended to AI decision-making? This study employs a systematic Gibbs sampler method to measure the probability weighting function of an AI and validates the methodology against a dataset of 1 million distinct AI decision strategies. It then demonstrates the method on Recurrent Neural Networks (RNNs) and Artificial Neural Networks (ANNs), discerning the nuanced shapes of the PWFs inherent in each and thereby enabling informed speculation about the potential presence of "probability bias" within AI. In conclusion, this research is a foundational step in the exploration of "probability bias" in AI decision-making. The demonstrated reliability of the systematic Gibbs sampler method contributes to ongoing research chiefly by enabling the extraction of PWFs. The emphasis here lies in laying the groundwork: obtaining the PWFs from AI decision processes. The subsequent phases, an in-depth understanding of and deductive conclusions about the implications of these PWFs, fall outside the scope of this study. With the ability to discern the shapes of PWFs for AI, this research paves the way for future investigations into the deeper meaning of probability bias in AI decision-making.
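The abstract does not specify the functional form of the PWF being extracted; a minimal sketch, assuming the one-parameter Tversky-Kahneman form widely used in Prospect Theory (the function name and the γ value here are illustrative, not taken from the thesis):

```python
def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g).
    gamma < 1 yields the characteristic inverse-S shape in which
    small probabilities are overweighted and mid-range ones underweighted."""
    num = p ** gamma
    den = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / den

# Endpoints are fixed points of any PWF; the interior is distorted.
assert abs(tk_weight(0.0)) < 1e-12
assert abs(tk_weight(1.0) - 1.0) < 1e-12
assert tk_weight(0.05) > 0.05   # rare events overweighted
assert tk_weight(0.5) < 0.5     # mid-range probabilities underweighted
```

A "probability bias" in an agent would show up as a w(p) whose fitted shape departs systematically from the identity line.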
33

Essays in Total Factor Productivity measurement

Severgnini, Battista 16 August 2010 (has links)
Diese Dissertation umfasst sowohl einen theoretischen als auch einen empirischen Beitrag zur Analyse der Messung der gesamten Faktorproduktivität (TFP). Das erste Kapitel inspiziert die bestehende Literatur über die häufigsten Techniken der TFP-Messung und gibt einen Überblick über deren Limitierung. Das zweite Kapitel betrachtet Daten, die durch ein Real-Business-Cycle-Modell generiert wurden, und untersucht das quantifizierbare Ausmaß von Messfehlern des Solow-Residuums als ein Maß für TFP-Wachstum, wenn der Kapitalstock fehlerhaft gemessen wird und wenn Kapazitätsauslastung und Abschreibungen endogen sind. Das dritte Kapitel schlägt eine neue Methodologie in einem bayesianischen Zusammenhang vor, die auf Zustandsraummodellen basiert. Das vierte Kapitel führt einen neuen Ansatz zur Bestimmung möglicher Spill-over-Effekte neuer Technologien auf die Produktivität ein und kombiniert eine kontrafaktische Zerlegung, die von den Hauptannahmen des Malmquist-Indexes abgeleitet wird, mit ökonometrischen Methoden, die auf Machado und Mata (2005) zurückgehen. / This dissertation consists of theoretical and empirical contributions to the study of Total Factor Productivity (TFP) measurement. The first chapter surveys the literature on the most widely used techniques for measuring TFP and reviews the limits of these frameworks. The second chapter considers data generated from a Real Business Cycle model and studies the quantitative extent of measurement error in the Solow residual as a measure of TFP growth when the capital stock is measured with error and when capacity utilization and depreciation are endogenous. Furthermore, it proposes two alternative measurements of TFP growth which do not require capital stocks. The third chapter proposes a new methodology based on state-space models in a Bayesian framework. Applying the Kalman filter to artificial data, it proposes a computation of the initial condition for productivity growth based on the properties of the Malmquist index. The fourth chapter introduces a new approach for identifying possible spillovers emanating from new technologies on productivity, combining a counterfactual decomposition derived from the main properties of the Malmquist index with the econometric technique introduced by Machado and Mata (2005).
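The measurement-error question in the second chapter concerns the Solow residual; as a point of reference, here is a minimal growth-accounting sketch of that residual under a Cobb-Douglas technology (the capital-share value is an illustrative assumption, not taken from the dissertation):

```python
import math

def solow_residual(y0, y1, k0, k1, l0, l1, alpha=0.33):
    """TFP growth as the Solow residual under Cobb-Douglas
    Y = A * K^alpha * L^(1-alpha):
        dlnA = dlnY - alpha*dlnK - (1-alpha)*dlnL."""
    dy = math.log(y1 / y0)
    dk = math.log(k1 / k0)
    dl = math.log(l1 / l0)
    return dy - alpha * dk - (1.0 - alpha) * dl

# If output grows exactly as factor accumulation implies,
# measured TFP growth is zero.
g = solow_residual(100.0, 100.0 * math.exp(0.33 * 0.02 + 0.67 * 0.01),
                   50.0, 50.0 * math.exp(0.02),
                   30.0, 30.0 * math.exp(0.01))
assert abs(g) < 1e-12
```

Any error in measuring K feeds directly into dlnK and hence into the residual, which is the channel the chapter quantifies.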
34

Calibração linear assimétrica / Asymmetric Linear Calibration

Figueiredo, Cléber da Costa 27 February 2009 (has links)
A presente tese aborda aspectos teóricos e aplicados da estimação dos parâmetros do modelo de calibração linear com erros distribuídos conforme a distribuição normal-assimétrica (Azzalini, 1985) e t-normal-assimétrica (Gómez, Venegas e Bolfarine, 2007). Aplicando um modelo assimétrico, não é necessário transformar as variáveis a fim de obter erros simétricos. A estimação dos parâmetros e das variâncias dos estimadores do modelo de calibração foram estudadas através da visão freqüentista e bayesiana, desenvolvendo algoritmos tipo EM e amostradores de Gibbs, respectivamente. Um dos pontos relevantes do trabalho, na óptica freqüentista, é a apresentação de uma reparametrização para evitar a singularidade da matriz de informação de Fisher sob o modelo de calibração normal-assimétrico na vizinhança de lambda = 0. Outro interessante aspecto é que a reparametrização não modifica o parâmetro de interesse. Já na óptica bayesiana, o ponto forte do trabalho está no desenvolvimento de medidas para verificar a qualidade do ajuste e que levam em consideração a assimetria do conjunto de dados. São propostas duas medidas para medir a qualidade do ajuste: o ADIC (Asymmetric Deviance Information Criterion) e o EDIC (Evident Deviance Information Criterion), que são extensões da ideia de Spiegelhalter et al. (2002) que propôs o DIC ordinário que só deve ser usado em modelos simétricos. / This thesis focuses on theoretical and applied estimation aspects of the linear calibration model with skew-normal (Azzalini, 1985) and skew-t-normal (Gómez, Venegas and Bolfarine, 2007) error distributions. With asymmetrically distributed errors, it is not necessary to transform the variables in order to obtain symmetrical errors. Both the frequentist and the Bayesian solutions are presented. Parameter estimation and variance estimation were studied using the EM algorithm and the Gibbs sampler, respectively, in each approach. The main point, in the frequentist approach, is the presentation of a new parameterization that avoids singularity of the information matrix under the skew-normal calibration model in a neighborhood of lambda = 0. Another interesting aspect is that this reparameterization, developed to make the information matrix nonsingular when the skewness parameter is near zero, leaves the parameter of interest unchanged. The main point, in the Bayesian framework, is the presentation of two measures of goodness of fit: ADIC (Asymmetric Deviance Information Criterion) and EDIC (Evident Deviance Information Criterion). They are natural extensions of the ordinary DIC developed by Spiegelhalter et al. (2002), which should only be used with symmetric models.
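For readers unfamiliar with the error distribution used here, a small sketch of the Azzalini (1985) skew-normal density 2φ(x)Φ(λx), showing that the skewness parameter λ = 0 (the neighborhood where the Fisher information matrix becomes singular) recovers the standard normal:

```python
import math

def skew_normal_pdf(x: float, lam: float) -> float:
    """Azzalini (1985) skew-normal density: f(x) = 2*phi(x)*Phi(lam*x),
    with phi/Phi the standard normal pdf/cdf. lam = 0 gives N(0, 1)."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(lam * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi

# lam = 0: the skewing factor is constant 1/2, so f reduces to phi.
std_normal_at_1 = math.exp(-0.5) / math.sqrt(2.0 * math.pi)
assert abs(skew_normal_pdf(1.0, 0.0) - std_normal_at_1) < 1e-12
# lam > 0 shifts mass to the right: f(1) > f(-1).
assert skew_normal_pdf(1.0, 2.0) > skew_normal_pdf(-1.0, 2.0)
```

The flatness of the likelihood in λ around this symmetric limit is precisely what motivates the reparameterization discussed above.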
36

Vitesse de convergence de l'échantillonneur de Gibbs appliqué à des modèles de la physique statistique / The convergence rate of the Gibbs sampler for some statistical mechanics models

Helali, Amine 11 January 2019 (has links)
Les méthodes de Monte Carlo par chaines de Markov MCMC sont des outils mathématiques utilisés pour simuler des mesures de probabilités π définies sur des espaces de grandes dimensions. Une des questions les plus importantes dans ce contexte est de savoir à quelle vitesse converge la chaine de Markov P vers la mesure invariante π. Pour mesurer la vitesse de convergence de la chaine de Markov P vers sa mesure invariante π nous utilisons la distance de la variation totale. Il est bien connu que la vitesse de convergence d'une chaine de Markov réversible P dépend de la deuxième plus grande valeur propre en valeur absolue de la matrice P, notée β*. Une partie importante dans l'estimation de β* consiste à estimer la deuxième plus grande valeur propre de la matrice P, qui est notée β1. Diaconis et Stroock (1991) ont introduit une méthode basée sur l'inégalité de Poincaré pour estimer β1 pour le cas général des chaines de Markov réversibles avec un nombre fini d'états. Dans cette thèse, nous utilisons la méthode de Shiu et Chen (2015) pour étudier le cas de l'algorithme de l'échantillonneur de Gibbs pour le modèle d'Ising unidimensionnel avec trois états ou plus, appelé aussi modèle de Potts. Puis, nous généralisons le résultat de Shiu et Chen au cas du modèle d'Ising deux-dimensionnel avec deux états. Les résultats obtenus minorent ceux introduits par Ingrassia (1994). Puis nous avons pensé à perturber l'échantillonneur de Gibbs afin d'améliorer sa vitesse de convergence vers l'équilibre. / Monte Carlo Markov chain methods (MCMC) are mathematical tools used to simulate probability measures π defined on state spaces of high dimensions. The speed of convergence of this Markov chain X to its invariant state π is a natural question to study in this context. To measure the convergence rate of a Markov chain we use the total variation distance. It is well known that the convergence rate of a reversible Markov chain depends on its second largest eigenvalue in absolute value, denoted by β*. An important part in the estimation of β* is the estimation of the second largest eigenvalue, which is denoted by β1. Diaconis and Stroock (1991) introduced a method based on the Poincaré inequality to obtain a bound for β1 for general finite-state reversible Markov chains. In this thesis we use the Shiu and Chen approach to study the case of the Gibbs sampler for the 1-D Ising model with three or more states, also called the Potts model. Then, we generalize the result of Shiu and Chen (2015) to the case of the 2-D Ising model with two states. The results we obtain improve the ones obtained by Ingrassia (1994). Finally, we introduce a method to perturb the Gibbs sampler in order to improve its convergence rate to equilibrium.
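The sampler whose convergence rate is bounded above can be sketched as a systematic-scan heat-bath update over a 1-D Potts chain; the free boundary conditions and the inverse temperature used here are illustrative choices, not the thesis's exact setup:

```python
import math
import random

def gibbs_sweep(state, q, beta, rng):
    """One systematic-scan heat-bath (Gibbs) sweep over a 1-D Potts chain
    with free boundaries: each site i is resampled from its full
    conditional, p(s) proportional to exp(beta * #{agreeing neighbours})."""
    n = len(state)
    for i in range(n):
        weights = []
        for s in range(q):
            agree = 0
            if i > 0 and state[i - 1] == s:
                agree += 1
            if i < n - 1 and state[i + 1] == s:
                agree += 1
            weights.append(math.exp(beta * agree))
        total = sum(weights)
        u = rng.random() * total
        acc = 0.0
        for s in range(q):
            acc += weights[s]
            if u <= acc:
                state[i] = s
                break
    return state

rng = random.Random(0)
q, beta = 3, 0.5            # three states: the simplest Potts case above
state = [rng.randrange(q) for _ in range(20)]
for _ in range(100):
    gibbs_sweep(state, q, beta, rng)
assert all(0 <= s < q for s in state)
```

The transition matrix of this sweep is what β1 and β* are computed from; for small chains one can build it explicitly and read off the eigenvalues.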
37

二篇有關股票價格平均數復歸的實證研究 / Two Essays on Mean Reversion Behavior of Stock Price in Taiwan

阮建銘, Ruan, Jian-Ming Unknown Date (has links)
This dissertation comprises two empirical essays on mean-reversion behavior in stock prices. The first essay examines the potential effect on a firm's stock-price behavior of liquidity constraints arising from the information asymmetry between suppliers and demanders of capital that firm characteristics induce. We use five firm characteristics (ownership structure, business-group membership, time since listing, firm size, and cash-dividend payout) to identify liquidity-constrained firms, and use variance ratios to measure mean reversion in stock prices. Because of small-sample problems, we apply the bootstrap to test the hypothesis that liquidity constraints strengthen mean-reverting behavior in a firm's stock price. Our empirical results are mixed: the groupings by ownership structure, firm size, and business-group membership support the hypothesis that liquidity constraints strengthen mean reversion, while the groupings by time since listing and cash-dividend payout do not. The second essay uses the empirical model of Campbell et al. (1993) to study the relation between trading volume and the autocorrelation of daily stock returns under price limits. When a stock price hits its daily limit, trading stops and the true price cannot be observed, so unfilled demand or supply carries over to the next trading day, biasing OLS and related estimators; the Gibbs sampling approach of Chou and Chib (1995) and Chou (1995) successfully overcomes these difficulties. We therefore apply that Gibbs sampling approach to measure the effect of trading volume on the autocorrelation of daily stock returns in the Taiwan stock market while avoiding the impact of price limits. Using daily data on twenty-four firms sampled from the Taiwan Stock Exchange composite index, the empirical results support the existence of a volume effect. We also find that the positive autocorrelation of daily stock returns in the Taiwan market may itself be caused by the existence of price limits.
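The first essay's mean-reversion measure can be sketched as a generic Lo-MacKinlay-style variance ratio (not the essay's exact estimator): under a random walk VR(k) is close to 1, while VR(k) < 1 points toward mean reversion.

```python
def variance_ratio(returns, k):
    """Variance of k-period returns over k times the variance of
    1-period returns, using overlapping k-period sums."""
    n = len(returns)
    mu = sum(returns) / n
    var1 = sum((r - mu) ** 2 for r in returns) / (n - 1)
    ksum = [sum(returns[i:i + k]) for i in range(n - k + 1)]
    m = len(ksum)
    vark = sum((s - k * mu) ** 2 for s in ksum) / (m - 1)
    return vark / (k * var1)

# A perfectly alternating return series is the extreme mean-reverting
# case: every 2-period return is zero, so VR(2) collapses to 0.
rets = [1.0, -1.0] * 50
assert variance_ratio(rets, 2) < 1e-12
```

Bootstrapping this statistic within each firm grouping, as the essay does, then gives a small-sample test of whether VR(k) is significantly below 1.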
38

信用衍生性商品評價-馬可夫鏈模型 / Valuation of Credit Derivatives: A Markov Chain Model

林明宗 Unknown Date (has links)
Credit derivatives are contracts for transferring credit risk, entered into by a protection buyer and a protection seller: the buyer pays a premium (either up front or in installments) in exchange for credit protection, and the seller compensates the buyer when a specified credit event occurs. With the frequent financial incidents of recent years, the Basel Committee on Banking Supervision has had to issue a new Basel Accord requiring banks to strengthen credit-risk control and diversification, and credit derivatives serve to transfer and offset credit risk. This thesis uses a conditional Markov chain to build pricing models for credit default swaps and nth-to-default swaps, and computes the spread of each product by simulation. With parameter estimates obtained from real-world data placed in the models, a variety of scenarios can be simulated and hedging strategies devised. The thesis also discusses how to use the Gibbs sampler to improve the simulation of the conditional Markov chain, in order to model contagion effects within the asset portfolio underlying a credit derivative.
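As a much-simplified illustration of the simulation step (ignoring the conditional/contagion structure the thesis actually models), the cumulative default probability of a single name can be estimated by Monte Carlo over a Markov chain of credit states with an absorbing default state:

```python
import random

def default_prob(P, horizon, start, default_state, n_paths, seed=0):
    """Monte Carlo estimate of the probability that a discrete-time
    Markov chain with transition matrix P (default_state absorbing)
    hits default within `horizon` steps, starting from `start`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        s = start
        for _ in range(horizon):
            u, acc = rng.random(), 0.0
            for j, p in enumerate(P[s]):
                acc += p
                if u <= acc:
                    s = j
                    break
            if s == default_state:
                hits += 1
                break
    return hits / n_paths

# Two states: 0 = performing, 1 = default (absorbing), 5% per period.
P = [[0.95, 0.05], [0.0, 1.0]]
p5 = default_prob(P, 5, 0, 1, 20000)
exact = 1.0 - 0.95 ** 5        # closed form for this toy chain
assert abs(p5 - exact) < 0.02
```

Pricing a CDS then amounts to discounting the simulated premium and protection legs over such default paths; the thesis's conditional chain lets the transition intensities of one name depend on defaults of the others.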
39

Recyclage des candidats dans l'algorithme Metropolis à essais multiples / Recycling candidates in the multiple-try Metropolis algorithm

Groiez, Assia 03 1900 (has links)
Les méthodes de Monte Carlo par chaînes de Markov (MCCM) sont des méthodes servant à échantillonner à partir de distributions de probabilité. Ces techniques se basent sur le parcours de chaînes de Markov ayant pour lois stationnaires les distributions à échantillonner. Étant donné leur facilité d’application, elles constituent une des approches les plus utilisées dans la communauté statistique, et tout particulièrement en analyse bayésienne. Ce sont des outils très populaires pour l’échantillonnage de lois de probabilité complexes et/ou en grandes dimensions. Depuis l’apparition de la première méthode MCCM en 1953 (la méthode de Metropolis, voir [10]), l’intérêt pour ces méthodes, ainsi que l’éventail d’algorithmes disponibles ne cessent de s’accroître d’une année à l’autre. Bien que l’algorithme Metropolis-Hastings (voir [8]) puisse être considéré comme l’un des algorithmes de Monte Carlo par chaînes de Markov les plus généraux, il est aussi l’un des plus simples à comprendre et à expliquer, ce qui en fait un algorithme idéal pour débuter. Il a été sujet de développement par plusieurs chercheurs. L’algorithme Metropolis à essais multiples (MTM), introduit dans la littérature statistique par [9], est considéré comme un développement intéressant dans ce domaine, mais malheureusement son implémentation est très coûteuse (en termes de temps). Récemment, un nouvel algorithme a été développé par [1]. Il s’agit de l’algorithme Metropolis à essais multiples revisité (MTM revisité), qui définit la méthode MTM standard mentionnée précédemment dans le cadre de l’algorithme Metropolis-Hastings sur un espace étendu. L’objectif de ce travail est, en premier lieu, de présenter les méthodes MCCM, et par la suite d’étudier et d’analyser les algorithmes Metropolis-Hastings ainsi que le MTM standard afin de permettre aux lecteurs une meilleure compréhension de l’implémentation de ces méthodes. 
Un deuxième objectif est d'étudier les perspectives ainsi que les inconvénients de l'algorithme MTM revisité afin de voir s'il répond aux attentes de la communauté statistique. Enfin, nous tentons de combattre le problème de sédentarité de l'algorithme MTM revisité, ce qui donne lieu à un tout nouvel algorithme. Ce nouvel algorithme performe bien lorsque le nombre de candidats générés à chaque itération est petit, mais sa performance se dégrade à mesure que ce nombre de candidats croît. / Markov Chain Monte Carlo (MCMC) algorithms are methods that are used for sampling from probability distributions. These tools are based on the path of a Markov chain whose stationary distribution is the distribution to be sampled. Given their relative ease of application, they are one of the most popular approaches in the statistical community, especially in Bayesian analysis. These methods are very popular for sampling from complex and/or high-dimensional probability distributions. Since the appearance of the first MCMC method in 1953 (the Metropolis algorithm, see [10]), the interest in these methods, as well as the range of algorithms available, continues to increase from one year to another. Although the Metropolis-Hastings algorithm (see [8]) can be considered one of the most general Markov chain Monte Carlo algorithms, it is also one of the easiest to understand and explain, making it an ideal algorithm for beginners. As such, it has been studied by several researchers. The multiple-try Metropolis (MTM) algorithm, proposed by [9], is considered an interesting development in this field, but unfortunately its implementation is quite expensive (in terms of time). Recently, a new algorithm was developed by [1]. This method is named the revisited multiple-try Metropolis algorithm (MTM revisited), which is obtained by expressing the MTM method as a Metropolis-Hastings algorithm on an extended space.
The objective of this work is to first present MCMC methods, and subsequently study and analyze the Metropolis-Hastings and standard MTM algorithms to allow readers a better perspective on the implementation of these methods. A second objective is to explore the opportunities and disadvantages of the revisited MTM algorithm to see if it meets the expectations of the statistical community. We finally attempt to fight the sedentarity of the revisited MTM algorithm, which leads to a new algorithm. The latter performs efficiently when the number of generated candidates in a given iteration is small, but the performance of this new algorithm then deteriorates as the number of candidates in a given iteration increases.
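For readers new to these methods, a minimal random-walk Metropolis-Hastings sketch may help; this is the single-proposal special case, whereas the MTM variants discussed above generate several candidates per iteration:

```python
import math
import random

def metropolis(log_target, x0, step, n, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + step*U(-1, 1)
    and accept with probability min(1, pi(x')/pi(x)). The symmetric
    proposal makes the Hastings correction cancel."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        prop = x + step * (2.0 * rng.random() - 1.0)
        # Accept/reject on the log scale; the tiny offset guards log(0).
        if math.log(rng.random() + 1e-300) < log_target(prop) - log_target(x):
            x = prop
        out.append(x)
    return out

# Target: standard normal, specified only up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 2.5, 50000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
assert abs(mean) < 0.1
assert abs(var - 1.0) < 0.2
```

MTM replaces the single `prop` with a batch of candidates scored by weights, which is where the cost, and the candidate-recycling idea of this thesis, comes in.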
40

Estimação de parâmetros genéticos de produção de leite e de gordura da raça Pardo-suíça, utilizando metodologias freqüentista e bayesiana / Estimation of genetic parameters of milk and fat yield of Brown-Swiss cows using frequentist and Bayesian methodologies

Yamaki, Marcos 31 July 2006 (has links)
First-lactation records of 6,262 Brown-Swiss cows from 311 herds, daughters of 803 sires with calvings between 1980 and 2003, were used to estimate genetic parameters for milk and fat production traits. The variance components were estimated by restricted maximum likelihood (REML) and Bayesian methods, using an animal model with single- and two-trait analyses. The REML estimates were obtained with the software MTDFREML (BOLDMAN et al., 1995), testing single-trait models with different effects for the covariables and considering contemporary group and season as fixed effects. The best fits obtained in the single-trait analyses were used in the two-trait analysis. The estimate of the additive variance was reduced when lactation length was included in the model, suggesting that the animals were being adjusted to the same base with respect to their capacity to transmit a longer or shorter lactation length to the progeny; adjusting for this covariable is therefore not recommended. On the other hand, age at calving influenced milk and fat production linearly. The heritability estimates were 0.26 and 0.25 for milk and fat yield, respectively, with a genetic correlation of 0.95. The high correlation between these traits suggests that part of the genes that act on milk yield also affect fat yield, such that selection for milk yield indirectly increases fat yield. The Bayesian estimates were obtained with the software MTGSAM (VAN TASSELL AND VAN VLECK, 1995). Chain lengths were tested to obtain the marginal posterior densities of the single-trait analyses, and the best choice of chain length, burn-in, and sampling interval was used in the two-trait analysis.
The burn-in periods were tested with the software GIBANAL (VAN KAAM, 1998), whose analyses provide a sampling interval for each burn-in tested; the sampling interval was chosen according to the serial correlation resulting from the burn-in and sampling process. The heritability estimates were 0.33 ± 0.05 for both traits, with a genetic correlation of 0.95. Similar results were obtained in studies using the same methodology on first-lactation records. The stationary phase was adequately reached with a chain length of 500,000 and a burn-in of 30,000 iterations. / Dados de primeira lactação de 6.262 vacas distribuídas em 311 rebanhos, filhas de 803 touros com partos entre os anos de 1980 e 2003 foram utilizados para estimar componentes de variância para as características de produção de leite e gordura com informações de primeira lactação, em animais da raça Pardo-Suíça. Os componentes de variância foram estimados pelo método da máxima verossimilhança restrita (REML) e Bayesiano, sob modelo animal, por meio de análises uni e bicaracterística. A estimação realizada via REML foi obtida com o programa MTDFREML (BOLDMAN et al. 1995) testando modelos unicaracterística com diferentes efeitos para as covariáveis e considerados grupo contemporâneo e estação como efeitos fixos. Os melhores ajustes obtidos nas análises unicaracterística foram utilizados na análise bicaracterística. A duração da lactação reduziu a estimativa da variância aditiva quando era utilizada no modelo sugerindo que os animais estariam sendo corrigidos para uma mesma base quanto à capacidade de imprimir duração da lactação mais longa ou mais curta à progênie sendo, portanto, não recomendado o ajuste para esta covariável. Já a idade da vaca ao parto, influenciou linearmente a produção de leite e gordura. As herdabilidades estimadas foram 0,26 e 0,25 para produção de leite e gordura respectivamente com correlação genética de 0,95.
A alta correlação entre a produção de leite e gordura obtida sugere que parte dos genes que atuam na produção de leite também responde pela produção de gordura, de tal forma que a seleção para a produção de leite resulta, indiretamente, em aumentos na produção de gordura. A estimação via inferência Bayesiana foi realizada com o programa MTGSAM (VAN TASSELL E VAN VLECK, 1995). Foram testados diversos tamanhos de cadeia para a obtenção das densidades marginais a posteriori das análises unicaracterística, a melhor proposta para o tamanho de cadeia, burn-in e amostragem foi utilizada para a análise bicaracterística. Os períodos de burn-in foram testados pelo programa GIBANAL (VAN KAAM, 1998) cujas análises fornecem um intervalo de amostragem para cada burn-in testado, o critério de escolha do intervalo de amostragem foi feito de acordo com a correlação serial, resultante do burn-in e do processo de amostragem. As estimativas de herdabilidade obtidas foram 0,33 ± 0,05 para ambas as características com correlação de 0,95. Resultados similares foram obtidos em estudos utilizando a mesma metodologia em informações de primeira lactação. A fase estacionária foi adequadamente atingida com uma cadeia de 500.000 iterações e descarte inicial de 30.000 iterações.
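The burn-in and thinning choice described above rests on the serial correlation of the Gibbs draws; a minimal sketch of the sample autocorrelation used for such a check (a generic computation, not GIBANAL's exact criterion):

```python
def autocorr(chain, lag):
    """Sample autocorrelation at a given lag; used to pick a thinning
    (sampling) interval so that retained MCMC draws are roughly
    uncorrelated."""
    n = len(chain)
    mu = sum(chain) / n
    var = sum((x - mu) ** 2 for x in chain) / n
    cov = sum((chain[i] - mu) * (chain[i + lag] - mu)
              for i in range(n - lag)) / n
    return cov / var

# A period-2 chain is the extreme case: perfectly anti-correlated at
# lag 1, and back in phase (correlation near +1) at lag 2.
cycle = [1.0, -1.0] * 500
assert autocorr(cycle, 1) < -0.9
assert abs(autocorr(cycle, 2) - 1.0) < 0.1
```

In practice one increases the lag (the sampling interval) until this statistic is near zero for the parameters being monitored.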
