About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A HIGH PERFORMANCE GIBBS-SAMPLING ALGORITHM FOR ITEM RESPONSE THEORY MODELS

Patsias, Kyriakos 01 January 2009 (has links)
Item response theory (IRT) is a newer theory that improves on classical test theory. The fully Bayesian approach shows promise for IRT models, but it is computationally expensive, which limits its use in many applications. It is therefore important to reduce the execution time, and a suitable solution is high performance computing (HPC), which offers considerable computational power and can handle applications with heavy computation and memory requirements. In this work, we have modified the existing fully Bayesian algorithm for two-parameter normal ogive (2PNO) IRT models so that it can run on a high performance parallel machine. With this parallel version of the algorithm, the empirical results show that a speedup was achieved and the execution time was reduced considerably.
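The thesis's exact implementation is not reproduced in the abstract, but the serial algorithm it parallelizes is, in outline, the standard data-augmentation Gibbs scheme for the 2PNO model: draw latent continuous responses from truncated normals, then abilities and item parameters from normal full conditionals. A minimal sketch, with illustrative priors and starting values:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_2pno(y, n_iter=200, seed=0):
    """Data-augmentation Gibbs sampler for the 2PNO IRT model.
    y: (n_persons, n_items) 0/1 response matrix."""
    rng = np.random.default_rng(seed)
    n, J = y.shape
    theta = np.zeros(n)   # person abilities, N(0, 1) prior
    a = np.ones(J)        # item discriminations
    b = np.zeros(J)       # item difficulties
    for _ in range(n_iter):
        # 1) latent responses Z_ij ~ N(a_j*theta_i - b_j, 1), sign fixed by y
        eta = np.outer(theta, a) - b
        lo = np.where(y == 1, -eta, -np.inf)   # Z > 0 where y = 1
        hi = np.where(y == 1, np.inf, -eta)    # Z < 0 where y = 0
        z = truncnorm.rvs(lo, hi, loc=eta, scale=1.0, random_state=rng)
        # 2) abilities: conjugate normal update from Z + b = a*theta + noise
        v = 1.0 / (1.0 + a @ a)
        theta = rng.normal(v * ((z + b) @ a), np.sqrt(v))
        # 3) item parameters (a_j, b_j): Bayesian regression of Z_j on [theta, -1]
        X = np.column_stack([theta, -np.ones(n)])
        cov = np.linalg.inv(X.T @ X + np.eye(2) / 10.0)  # N(0, 10 I) prior
        for j in range(J):
            a[j], b[j] = rng.multivariate_normal(cov @ X.T @ z[:, j], cov)
    return theta, a, b
```

The per-iteration cost is dominated by step 1 (one truncated-normal draw per person-item pair), which is what makes large-scale runs expensive enough to motivate HPC.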
2

A PARALLEL IMPLEMENTATION OF GIBBS SAMPLING ALGORITHM FOR 2PNO IRT MODELS

Rahimi, Mona 01 August 2011 (has links)
Item response theory (IRT) is a newer theory that improves on classical test theory. The fully Bayesian approach shows promise for IRT models, but it is computationally expensive, which limits its use in many applications. It is therefore important to reduce the execution time, and a suitable solution is high performance computing (HPC), which offers considerable computational power and can handle applications with heavy computation and memory requirements. In this work, we have applied two different parallelization methods to the existing fully Bayesian algorithm for 2PNO IRT models so that it can run on a high performance parallel machine with a reduced communication load. With our parallel version of the algorithm, the empirical results show that a speedup was achieved and the execution time was considerably reduced.
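The abstract does not spell out the thesis's decomposition or communication scheme, so the sketch below only illustrates why the person-level update of a 2PNO Gibbs sweep parallelizes with little communication: each ability's full conditional depends on that person's own row of the latent-response matrix, so the rows can be split across workers and updated independently.

```python
import numpy as np

def theta_posterior_mean(z, a, b):
    """Conditional posterior mean of each ability in a 2PNO Gibbs sweep.
    Row i uses only person i's latent responses z[i], so the computation
    splits across persons with no cross-worker communication."""
    v = 1.0 / (1.0 + a @ a)
    return v * ((z + b) @ a)

# split the persons across (hypothetical) workers; each block is independent
rng = np.random.default_rng(1)
z = rng.normal(size=(12, 4))
a, b = np.ones(4), np.zeros(4)
parts = [theta_posterior_mean(zc, a, b) for zc in np.array_split(z, 3)]
```

Concatenating the per-block results reproduces the serial computation exactly, which is the property a distributed implementation exploits; only the item-parameter step then needs the blocks' partial sums gathered.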
3

Modelagem de dados de resposta ao item sob efeito de speededness / Modeling of Item Response Data under Effect of Speededness

Campos, Joelson da Cruz 08 April 2016 (has links)
In tests where a considerable number of examinees do not have enough time to answer all of the items, we observe what is called the speededness effect. Using a unidimensional Item Response Theory (IRT) model on speeded tests can lead to a series of erroneous interpretations, since the model assumes that respondents have enough time to answer every item. In this work, we develop a Bayesian analysis of the three-dimensional IRT model proposed by Wollack and Cohen (2005), considering a dependence structure between the prior distributions of the latent traits which we model using copulas. We propose and develop an MCMC algorithm for estimating the model and present a comparative simulation study against the analysis of Bazan et al. (2010), in which independent prior distributions were assumed for the latent traits. Finally, we carry out a sensitivity analysis of the model and present an application to real data from an EGRA subtest called Nonsense Words, administered in Peru in 2007. In this subtest, students are assessed orally by reading a sequence of 50 nonsense words in 60 seconds, a design that induces the speededness effect.
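Since the abstract's central device is a copula over the latent-trait priors, a small sketch of Gaussian-copula sampling may clarify the idea: correlated normals are pushed through the normal CDF to obtain dependent uniforms, which are then mapped through arbitrary marginal quantile functions. The correlation value and marginal distributions below are illustrative choices, not the thesis's.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, corr, marginals, rng):
    """Draw latent-trait vectors whose dependence comes from a Gaussian
    copula with correlation matrix `corr` and whose marginals are the
    given frozen scipy distributions."""
    L = np.linalg.cholesky(corr)
    z = rng.normal(size=(n, corr.shape[0])) @ L.T   # correlated normals
    u = stats.norm.cdf(z)                           # copula step: uniform margins
    return np.column_stack([m.ppf(u[:, k]) for k, m in enumerate(marginals)])

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
draws = gaussian_copula_sample(5000, corr, [stats.norm(), stats.norm()], rng)
```

The attraction of the construction is exactly the one the thesis exploits: the dependence structure (here, the 0.6 correlation) is specified separately from the marginal priors, so either can be changed without touching the other.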
5

topicmodels: An R Package for Fitting Topic Models

Hornik, Kurt, Grün, Bettina January 2011 (has links) (PDF)
Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.
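topicmodels itself is an R package, so as a language-neutral illustration of the second algorithm it interfaces (Gibbs sampling for LDA), here is a minimal collapsed Gibbs sampler sketch in Python. It is not the package's API; the hyperparameters alpha and beta are illustrative defaults.

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, n_iter=50, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA. docs: list of token-id lists.
    Returns document-topic and topic-word count matrices."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, n_vocab))     # topic-word counts
    nk = np.zeros(n_topics)                 # tokens per topic
    z = []                                  # topic assignment per token
    for d, doc in enumerate(docs):          # random initialization
        zd = rng.integers(n_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove this token from the counts
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional over topics for this token
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k                 # reassign and restore the counts
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw
```

Normalizing the rows of the returned count matrices (plus the smoothing constants) gives the usual estimates of the document-topic and topic-word distributions.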
6

Conjoint Analysis Using Mixed Effect Models

Frühwirth-Schnatter, Sylvia, Otter, Thomas January 1999 (has links) (PDF)
Following the pioneering work of Allenby and Ginter (1995) and Lenk et al. (1994), we propose in Section 2 a mixed effect model allowing for fixed and random effects as a possible statistical solution to the problems mentioned above. Parameter estimation using a new, efficient variant of a Markov chain Monte Carlo method is discussed in Section 3, together with problems of model comparison techniques in the context of random effect models. Section 4 presents an application of the former to a brand-price trade-off study from the Austrian mineral water market. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
7

Chinese Basic Pension Substitution Rate: A Monte Carlo Demonstration of the Individual Account Model

Dong, Bei, Zhang, Ling, Lu, Xuan January 2008 (has links)
At the end of 2005, the State Council of China passed "The Decision on Adjusting the Individual Account of the Basic Pension System," which adjusted the individual account of the 1997 basic pension system. In this essay, we analyze that adjustment and use life annuity actuarial theory to establish a model of the basic pension substitution rate. Monte Carlo simulation is then used to demonstrate the soundness of the model, and we put forward suggestions on the substitution rate in light of the current policy.
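The essay's actuarial model is not spelled out in the abstract, so the following is only a hedged Monte Carlo sketch of the quantity being studied: simulate an individual account under random investment returns and wage growth, annuitize the terminal balance, and divide by the final wage to obtain a substitution (replacement) rate. All parameter values are hypothetical, except that the 139-month annuity divisor mirrors the figure applied at retirement age 60 under the 2005 rules.

```python
import numpy as np

def simulate_substitution_rate(n_sims=10_000, years=30, contrib_rate=0.08,
                               wage_growth=0.05, mean_ret=0.04, sd_ret=0.08,
                               annuity_factor=139, seed=0):
    """Monte Carlo sketch of an individual-account substitution rate:
    monthly pension / final monthly wage. Parameter values are illustrative,
    not taken from the essay; `annuity_factor` is in months."""
    rng = np.random.default_rng(seed)
    wage = np.ones(n_sims)       # starting monthly wage, normalized to 1
    balance = np.zeros(n_sims)
    for _ in range(years):
        balance *= 1.0 + rng.normal(mean_ret, sd_ret, n_sims)  # yearly return
        balance += contrib_rate * wage * 12                    # a year's contributions
        wage *= 1.0 + wage_growth
    monthly_pension = balance / annuity_factor
    return monthly_pension / wage   # substitution rate, one value per path

rates = simulate_substitution_rate()
```

Summaries of `rates` (mean, quantiles) are what a study like this would report, and rerunning with different contribution rates or return assumptions shows how sensitive the substitution rate is to each.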
8

Gibbs sampling's application in censored regression model and unit root test

Wu, Wei-Lun 02 September 2005 (has links)
Generally speaking, when part of a data set is censored or hidden, analysis based on the incomplete data leads to distorted statistical conclusions. This thesis adopts an analysis based on Gibbs sampling to recover the hidden part of the data, so that unit root tests on the simulated series behave similarly to tests on the true values. Comparing unit root tests on the censored data with tests on the recovered data, we find that the censored data produce a larger test size and weaker power. As an example, we analyze unsecured loans in the Japanese money market from January 1999 to July 2004, a period in which the loan value is zero in several months. If we test for a unit root in the traditional way on the censored data, without taking a model for the mean into account, the result is I(0), and the same holds when the hidden data are first simulated by Gibbs sampling. When the mean is taken into account, however, the Japanese money market series is I(1), and the series with Gibbs-recovered data is I(1) as well.
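The recovery of hidden observations described above is commonly done with a data-augmentation Gibbs scheme for a censored (Tobit-type) regression, sketched below with the error variance fixed at 1 for brevity; the thesis's actual model for the loan series is not reproduced here. The sampler alternates between imputing each censored latent value from a truncated normal and drawing the regression coefficients from their normal full conditional.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_tobit(y, X, n_iter=500, seed=0):
    """Gibbs sampler for a regression left-censored at 0: y = max(y*, 0),
    y* = X beta + N(0, 1) noise. Flat prior on beta; returns all beta draws."""
    rng = np.random.default_rng(seed)
    cens = y <= 0                       # observations whose latent value is hidden
    ystar = y.astype(float).copy()
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y            # start at the naive least-squares fit
    betas = []
    for _ in range(n_iter):
        mu = X @ beta
        # impute censored latents: y* | y = 0 is N(mu, 1) truncated to (-inf, 0]
        ystar[cens] = truncnorm.rvs(-np.inf, -mu[cens], loc=mu[cens],
                                    scale=1.0, random_state=rng)
        # draw beta | y*: normal around the least-squares fit to the latents
        beta = rng.multivariate_normal(XtX_inv @ X.T @ ystar, XtX_inv)
        betas.append(beta)
    return np.array(betas)
```

A unit root test run on the completed series `ystar` (rather than on the censored `y`) is the "recovered data" comparison the thesis makes.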
9

Bayesian analysis of the heterogeneity model

Frühwirth-Schnatter, Sylvia, Tüchler, Regina, Otter, Thomas January 2002 (has links) (PDF)
In the present paper we consider Bayesian estimation of a finite mixture of models with random effects which is also known as the heterogeneity model. First, we discuss the properties of various MCMC samplers that are obtained from full conditional Gibbs sampling by grouping and collapsing. Whereas full conditional Gibbs sampling turns out to be sensitive to the parameterization chosen for the mean structure of the model, the alternative sampler is robust in this respect. However, the logical extension of the approach to the sampling of the group variances does not further increase the efficiency of the sampler. Second, we deal with the identifiability problem due to the arbitrary labeling within the model. Finally, a case study involving metric Conjoint analysis serves as a practical illustration. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
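As a toy version of the full conditional Gibbs sampling the paper analyzes, here is a sampler for a K-component normal mixture with unit variances: alternate component allocations, component means, and weights. Sorting the mean draws is one simple device against the label-switching problem the paper discusses, not necessarily the paper's own solution, and the priors are illustrative.

```python
import numpy as np

def gibbs_mixture(y, K=2, n_iter=500, seed=0):
    """Full conditional Gibbs for a K-component normal mixture (unit
    variances). Returns the mean draws, sorted within each iteration."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=K)          # component means, N(0, 100) prior
    w = np.full(K, 1.0 / K)          # weights, Dirichlet(1, ..., 1) prior
    draws = []
    for _ in range(n_iter):
        # allocations S_i | rest: categorical with likelihood-weighted probs
        logp = -0.5 * (y[:, None] - mu) ** 2 + np.log(w)
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        S = (p.cumsum(axis=1) > rng.random((len(y), 1))).argmax(axis=1)
        # means mu_k | rest: conjugate normal update
        for k in range(K):
            yk = y[S == k]
            prec = len(yk) + 0.01    # likelihood precision + prior precision
            mu[k] = rng.normal(yk.sum() / prec, np.sqrt(1.0 / prec))
        # weights | rest: conjugate Dirichlet update
        w = rng.dirichlet(1 + np.bincount(S, minlength=K))
        draws.append(np.sort(mu.copy()))   # order constraint vs. label switching
    return np.array(draws)
```

The heterogeneity model replaces the scalar means here with regression coefficient vectors (random effects), but the sweep structure (allocations, component parameters, weights) is the same, which is why the parameterization and grouping issues the paper studies matter.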
10

Computerintensive statistische Methoden: Gibbs Sampling in Regressionsmodellen / Computer-Intensive Statistical Methods: Gibbs Sampling in Regression Models

Krause, Andreas Eckhard. January 1994 (has links)
Doctoral dissertation (Staatswissenschaften), University of Basel, 1994. / Includes index and bibliography.
