31

Rethinking meta-analysis: an alternative model for random-effects meta-analysis assuming unknown within-study variance-covariance

Toro Rodriguez, Roberto C 01 August 2019 (has links)
A single primary study is only one piece of a bigger puzzle. Meta-analysis is the statistical combination of results from primary studies that address a similar question. The most general case is the random-effects model, in which it is assumed that for each study the vector of outcomes T_i ~ N(θ_i, Σ_i) and that the vector of true effects for each study is θ_i ~ N(θ, Ψ). Since each θ_i is a nuisance parameter, inferences are based on the marginal model T_i ~ N(θ, Σ_i + Ψ). The main goal of a meta-analysis is to obtain estimates of θ, the sampling error of this estimate, and Ψ. Standard meta-analysis techniques assume that Σ_i is known and fixed, allowing explicit modeling of its elements and the use of Generalized Least Squares as the estimation method. Furthermore, one can construct the variance-covariance matrix of standard errors and build confidence intervals or ellipses for the vector of pooled estimates. In practice, each Σ_i is estimated from the data using a matrix function that depends on the unknown vector θ_i. Alternative methods have been proposed in which explicit modeling of the elements of Σ_i is not needed. However, estimation of the between-studies variability Ψ depends on the within-study variance Σ_i, among other factors, so not modeling the elements of Σ_i explicitly, and departing from a hierarchical structure, has implications for the estimation of Ψ. In this dissertation, I develop an alternative model for random-effects meta-analysis based on the theory of hierarchical models. Motivated primarily by Hoaglin's article "We know less than we should about methods of meta-analysis", I take into consideration that each Σ_i is unknown and estimated using a matrix function of the corresponding unknown vector θ_i. I propose an estimation method based on the Minimum Covariance Estimator and derive formulas for the expected marginal variance for two effect sizes, namely Pearson's moment correlation and the standardized mean difference.
I show through simulation studies that the proposed model and estimation method give accurate results for both univariate and bivariate meta-analyses of these effect sizes, and compare this new approach to the standard meta-analysis method.
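For contrast with the proposed approach, the "standard meta-analysis method" that the abstract benchmarks against can be sketched in a few lines. The following is the classical DerSimonian-Laird moment estimator for a univariate random-effects meta-analysis; the effect sizes and variances shown are illustrative values, not data from the dissertation.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Standard random-effects pooling: estimate the between-study
    variance tau^2 by the DerSimonian-Laird moment method, then pool
    with inverse-variance weights on v_i + tau^2."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (effects - theta_fe) ** 2)    # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # truncated at zero
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    theta = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return theta, se, tau2

# Illustrative effect sizes and within-study variances
theta, se, tau2 = dersimonian_laird([0.30, 0.10, 0.45, 0.22],
                                    [0.01, 0.02, 0.015, 0.008])
```

Note that this treats each within-study variance as known and fixed, which is exactly the assumption the dissertation relaxes.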
32

Optimal Tests for Panel Data

Bennala, Nezar 14 September 2010 (has links)
In this work, we propose parametric and nonparametric testing procedures that are locally and asymptotically optimal in the sense of Hajek and Le Cam, for two panel data models. Our approach rests on Le Cam's theory on the one hand, to obtain the asymptotic normality properties on which the construction of optimal parametric tests is based, and on Hajek's theory on the other hand, which, via an invariance principle, yields the nonparametric procedures. In the first chapter, we consider an error-components model and address the problem of testing the absence of the random individual effect. We establish the local asymptotic normality (LAN) property, which allows us to construct locally and asymptotically optimal ("most stringent") parametric procedures for the problem at hand. The optimality of these procedures is tied to the target density f1. These optimality properties are highly parametric, since they require the underlying density to be f1. Moreover, these procedures are valid only if the target density f1 and the underlying density g1 coincide. In practice, however, correct specification of the underlying density g1 is unrealistic, and g1 must be treated as a nuisance parameter. To eliminate this nuisance, we adopt an invariance argument and restrict ourselves to procedures based on statistics that are measurable with respect to the vector of ranks. The resulting tests remain valid whatever the underlying density and are locally and asymptotically most stringent. To gauge the efficiency of the rank-based tests under various distributions, we compute their asymptotic relative efficiencies with respect to the pseudo-Gaussian tests, under arbitrary densities g1.
Finally, we present some simulations comparing the performance of the proposed procedures. In the second chapter, we consider an error-components model with first-order autocorrelation and show that this model enjoys the LAN property. Building on this result, we construct locally and asymptotically optimal tests for three testing problems that are important in this context: (a) testing the absence of both the individual effect and autocorrelation; (b) testing the absence of the individual effect in the presence of unspecified autocorrelation; and (c) testing the absence of autocorrelation in the presence of an unspecified individual effect. Finally, we present some simulations comparing the performance of the pseudo-Gaussian tests and the classical tests.
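As a concrete flavor of testing the absence of a random individual effect in an error-components panel model, a minimal sketch of the classical Breusch-Pagan LM test is shown below. This is the textbook Gaussian-based test computed from pooled-OLS residuals, not the optimal rank-based procedures developed in the thesis.

```python
import numpy as np

def bp_lm_random_effect(resid, n, t):
    """Breusch-Pagan LM statistic for H0: no random individual effect,
    computed from pooled-OLS residuals of an N x T balanced panel.
    Under H0 the statistic is asymptotically chi-square with 1 df."""
    e = np.asarray(resid, dtype=float).reshape(n, t)
    num = (e.sum(axis=1) ** 2).sum()   # squared within-unit residual sums
    den = (e ** 2).sum()               # total residual sum of squares
    lm = (n * t) / (2.0 * (t - 1)) * (num / den - 1.0) ** 2
    return lm
```

Residuals that share a strong unit-specific component inflate the ratio of within-unit sums to the total sum of squares and drive the statistic up.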
33

Essays on random effects models and GARCH

Skoglund, Jimmy January 2001 (has links)
This thesis consists of four essays, three in the field of random effects models and one in the field of GARCH. The first essay, ''Maximum likelihood based inference in the two-way random effects model with serially correlated time effects'', considers maximum likelihood estimation and inference in the two-way random effects model with serial correlation. We derive a straightforward maximum likelihood estimator when the time-specific component follows an AR(1) or MA(1) process. The estimator is also easily generalized to allow for arbitrary stationary and strictly invertible ARMA processes. In addition, we consider the model selection problem and derive tests of the null hypothesis of no serial correlation as well as tests for discriminating between the AR(1) and MA(1) specifications. A Monte Carlo experiment evaluates the finite-sample properties of the estimators, test statistics, and model selection procedures. The second essay, ''Asymptotic properties of the maximum likelihood estimator of random effects models with serial correlation'', considers the large-sample behavior of the maximum likelihood estimator of random effects models with serial correlation in the form of an AR(1) process for the idiosyncratic or time-specific error component. Consistent estimation and asymptotic normality are established for a comprehensive specification which nests these models as well as all commonly used random effects models. The third essay, ''Specification and estimation of random effects models with serial correlation of general form'', is also concerned with maximum likelihood based inference in random effects models with serial correlation. Allowing for individual effects, we introduce serial correlation of general form in the time effects as well as the idiosyncratic errors.
A straightforward maximum likelihood estimator is derived, and a coherent model selection strategy is suggested for determining the orders of serial correlation as well as the importance of time or individual effects. The methods are applied to the estimation of a production function using a sample of 72 Japanese chemical firms observed during 1968-1987. The fourth essay, entitled ''A simple efficient GMM estimator of GARCH models'', considers efficient GMM-based estimation of GARCH models. Sufficient conditions for the estimator to be consistent and asymptotically normal are established for the GARCH(1,1) conditional variance process. In addition, efficiency results are obtained for the GARCH(1,1)-M model, in which the conditional variance is allowed to enter the mean as well. An application to returns on the S&P 500 index illustrates the methods. / Diss. Stockholm: Handelshögskolan, 2001
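A GARCH(1,1) conditional variance process of the kind studied in the fourth essay can be simulated in a few lines. This is a generic sketch with illustrative parameter values (it assumes alpha + beta < 1 for stationarity), not the GMM estimator derived in the thesis.

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate a GARCH(1,1) process:
    r_t = sqrt(h_t) * z_t,  z_t ~ N(0, 1),
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}.
    Assumes alpha + beta < 1 so the unconditional variance exists."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    r = np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    r[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()
    return r, h

# Illustrative parameters: high persistence, unconditional variance 1
r, h = simulate_garch11(omega=0.05, alpha=0.10, beta=0.85, n=1000)
```

Such simulated paths are the usual test bed for checking the finite-sample behavior of GARCH estimators.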
34

The k-Sample Problem When k is Large and n Small

Zhan, Dongling 2012 May 1900 (has links)
The k-sample problem, i.e., testing whether two or more data sets come from the same population, is a classic one in statistics. Instead of a small number k of groups of samples, this dissertation works with a large number p of groups, where within each group the sample size, n, is a fixed, small number. We call this a "Large p, but Small n" setting. The primary goal of the research is to provide a test statistic based on kernel density estimation (KDE) that has an asymptotic normal distribution when p goes to infinity with n fixed. In this dissertation, we propose a test statistic called Tp(S) and its standardized version, T(S). By using T(S), we conduct our test based on the critical values of the standard normal distribution. Theoretically, we show that our test is invariant to location and scale transformations of the data. We also find conditions under which our test is consistent. Simulation studies show that our test has good power against a variety of alternatives. The real-data analyses show that our test finds differences between gene distributions that are not due simply to location.
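To give a flavor of KDE-based group comparison, a minimal sketch of an L2 discrepancy between two kernel density estimates follows. This is an illustrative statistic only, not the Tp(S) or T(S) statistic proposed in the dissertation, and it uses a fixed bandwidth rather than any data-driven choice.

```python
import numpy as np

def gauss_kde(sample, grid, h):
    """Gaussian kernel density estimate evaluated on a grid, bandwidth h."""
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

def kde_l2(x, y, h=0.5, m=256):
    """Approximate L2 distance between the KDEs of two samples,
    integrated numerically over a common grid."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    lo = min(x.min(), y.min()) - 3 * h
    hi = max(x.max(), y.max()) + 3 * h
    grid = np.linspace(lo, hi, m)
    d2 = (gauss_kde(x, grid, h) - gauss_kde(y, grid, h)) ** 2
    return d2.sum() * (grid[1] - grid[0])   # Riemann-sum integration
```

Identical samples give a distance of zero, and the distance grows as the group densities separate in shape or location.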
35

Tests of random effects in linear and non-linear models

Häggström Lundevaller, Erling January 2002 (has links)
No description available.
36

Multivariate Longitudinal Data Analysis with Mixed Effects Hidden Markov Models

Raffa, Jesse Daniel January 2012 (has links)
Longitudinal studies, in which data on study subjects are collected over time, increasingly involve multivariate longitudinal responses. Frequently, the heterogeneity observed in a multivariate longitudinal response can be attributed to underlying unobserved disease states in addition to any between-subject differences. We propose modeling such disease states using a hidden Markov model (HMM) approach and expand upon previous work, which incorporated random effects into HMMs for the analysis of univariate longitudinal data, to the setting of a multivariate longitudinal response. Multivariate longitudinal data are modeled jointly using separate but correlated random effects between longitudinal responses of mixed data types, in addition to a shared underlying hidden process. We use a computationally efficient Bayesian approach via Markov chain Monte Carlo (MCMC) to fit such models. We apply this methodology to bivariate longitudinal response data from a smoking cessation clinical trial. Under these models, we examine how to incorporate a treatment effect on the disease states, as well as develop methods to classify observations by disease state and to attempt to understand patient dropout. Simulation studies were performed to evaluate the properties of such models and their applications under a variety of realistic situations.
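The hidden Markov machinery underlying such models rests on the forward algorithm for the likelihood of an observation sequence. A minimal scaled-forward sketch is below; it covers only the plain HMM likelihood, without the random effects or the Bayesian MCMC fitting described in the abstract.

```python
import numpy as np

def forward_loglik(log_emis, trans, init):
    """Forward algorithm for an HMM: log-likelihood of one observation
    sequence, given per-time log emission probabilities (T x K),
    a transition matrix (K x K), and an initial distribution (K,)."""
    alpha = init * np.exp(log_emis[0])
    loglik = 0.0
    for t in range(1, log_emis.shape[0]):
        # Rescale alpha each step to avoid underflow, accumulating log scales
        s = alpha.sum()
        loglik += np.log(s)
        alpha = (alpha / s) @ trans * np.exp(log_emis[t])
    return loglik + np.log(alpha.sum())
```

With uniform initial, transition, and emission probabilities over two states, the sequence likelihood reduces to the product of the per-time emission probabilities, which makes the function easy to check by hand.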
37

Flexible Mixed-Effect Modeling of Functional Data, with Applications to Process Monitoring

Mosesova, Sofia 29 May 2007 (has links)
High levels of automation in manufacturing industries are leading to data sets of increasing size and dimension. The challenge facing statisticians and field professionals is to develop methodology to help meet this demand. Functional data is one example of high-dimensional data, characterized by observations recorded as a function of some continuous measure, such as time. An application considered in this thesis comes from the automotive industry. It involves a production process in which valve seats are force-fitted by a ram into cylinder heads of automobile engines. For each insertion, the force exerted by the ram is automatically recorded every fraction of a second for about two and a half seconds, generating a force profile. We can think of these profiles as individual functions of time summarized into collections of curves. The focus of this thesis is the analysis of functional process data such as the valve seat insertion example. A number of techniques are set forth. In the first part, two ways to model a single curve are considered: a B-spline fit via linear regression, and a nonlinear model based on differential equations. Each of these approaches is incorporated into a mixed effects model for multiple curves, and multivariate process monitoring techniques are applied to the predicted random effects in order to identify anomalous curves. In the second part, a Bayesian hierarchical model is used to cluster low-dimensional summaries of the curves into meaningful groups. The belief is that the clusters correspond to distinct types of processes (e.g. various types of “good” or “faulty” assembly). New observations can be assigned to one of these by calculating the probabilities of belonging to each cluster. Mahalanobis distances are used to identify new observations not belonging to any of the existing clusters. Synthetic and real data are used to validate the results.
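The first modeling step described above, fitting a single curve with a spline basis via linear regression, can be sketched as follows. For brevity this uses a cubic truncated power basis as a stand-in for the B-spline basis named in the abstract; the knot locations and test curve are illustrative.

```python
import numpy as np

def truncated_power_basis(t, knots, degree=3):
    """Design matrix for a cubic truncated power basis: polynomial terms
    up to `degree`, plus one hinge term (t - k)_+^degree per knot."""
    cols = [t ** d for d in range(degree + 1)]
    cols += [np.maximum(t - k, 0.0) ** degree for k in knots]
    return np.stack(cols, axis=1)

def fit_curve(t, y, knots):
    """Least-squares spline-type fit of one profile y(t)."""
    X = truncated_power_basis(t, knots)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef
```

In a mixed effects extension, the per-curve coefficients would become random effects, which is the route the thesis takes for monitoring multiple force profiles.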
38

A Monte Carlo Study: The Impact of Missing Data in Cross-Classification Random Effects Models

Alemdar, Meltem 12 August 2009 (has links)
Unlike multilevel data with a purely nested structure, data that are cross-classified not only may be clustered into hierarchically ordered units but also may belong to more than one unit at a given level of a hierarchy. In a cross-classified design, students at a given school might be from several different neighborhoods, and one neighborhood might have students who attend a number of different schools. In this type of scenario, schools and neighborhoods are considered to be cross-classified factors, and cross-classified random effects modeling (CCREM) should be used to analyze these data appropriately. A common problem in any type of multilevel analysis is the presence of missing data at any given level. There has been little research in the multilevel literature about the impact of missing data, and none in the area of cross-classified models. The purpose of this study was to examine the effect of data that are missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) on CCREM estimates, while exploring multiple imputation to handle the missing data. In addition, this study examined the impact of including in the imputation model an auxiliary variable that is correlated with the variable with missingness (the level-1 predictor). This study expanded on the CCREM Monte Carlo simulation work of Meyers (2004) by studying the effect of missing data, and methods for handling it, in CCREM. The results demonstrated that in general, multiple imputation met Hoogland and Boomsma’s (1998) relative bias criterion (less than 5% in magnitude) for parameter estimates under different types of missing data patterns. For the standard error estimates, substantial relative bias (defined by Hoogland and Boomsma as greater than 10%) was found in some conditions.
When multiple imputation was used to handle the missing data, substantial bias was found in the standard errors in most cells where data were MNAR. This bias increased with the percentage of missing data.
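The Hoogland and Boomsma relative bias criterion used above is simple arithmetic: the deviation of the average Monte Carlo estimate from the true parameter, expressed as a fraction of the true value. A minimal sketch:

```python
def relative_bias(estimates, true_value):
    """Relative bias of a Monte Carlo estimator in the Hoogland &
    Boomsma sense: (mean estimate - true value) / true value.
    Values within +/-0.05 for parameters (and +/-0.10 for standard
    errors) are the thresholds cited in the abstract."""
    mean_est = sum(estimates) / len(estimates)
    return (mean_est - true_value) / true_value
```

For example, replications that average 5% above the true parameter yield a relative bias of 0.05, right at the cited acceptability boundary for parameter estimates.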
39

The Euro Effect on Trade : The Trade Effect of the Euro on non-EMU and EMU Members

Choi, Ga Eun, Galonja, Stephanie January 2012 (has links)
The purpose of this paper is to investigate how changes in trade values are affected by the implementation of the euro currency. We study the EU members, including 11 EMU members and 3 non-EMU members (Sweden, Denmark and the United Kingdom). The empirical analysis is conducted using a modified version of the standard gravity model. Our core findings can be summarized in two parts. First, the euro effect on trade, estimated by the euro-dummy coefficient, reflects an adverse influence of the euro's creation on trade values for the first two years of implementation across all our sample countries. This leads us to the conclusion that there is no significant improvement in trade in the year of implementation. These results do not change when a time trend variable is added to evaluate the robustness of the model. Our primary interpretation is that the euro's creation does not have an immediate impact on trade; rather, the impact is gradual, as countries need time to adapt to a new currency. This connects to our second finding: the negative influence of the euro implementation is not permanent but eventually initiates positive outcomes on trade values over time, leading us to conclude that the euro implementation has had a gradual impact on both EMU and non-EMU members.
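A gravity specification with a euro dummy of the kind described above can be sketched as a plain OLS regression on logged variables. This is a generic illustration of the specification, with synthetic data and assumed coefficient names, not the authors' exact model or estimates.

```python
import numpy as np

def gravity_ols(log_trade, log_gdp_i, log_gdp_j, log_dist, euro_dummy):
    """OLS fit of a gravity equation with a euro dummy:
    log(trade) = b0 + b1*log(GDP_i) + b2*log(GDP_j)
                 + b3*log(dist) + b4*euro + e.
    Returns the coefficient vector; b4 is the 'euro effect'."""
    X = np.column_stack([np.ones_like(log_trade), log_gdp_i,
                         log_gdp_j, log_dist, euro_dummy])
    beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
    return beta
```

On synthetic data generated from known coefficients, the fit recovers them, including the sign of the euro dummy, which is the quantity of interest in the paper.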
40

The Heterogeneity Model and its Special Cases. An Illustrative Comparison.

Tüchler, Regina, Frühwirth-Schnatter, Sylvia, Otter, Thomas January 2002 (has links) (PDF)
In this paper we carry out a fully Bayesian analysis of the general heterogeneity model, which is a mixture-of-random-effects model, and of its special cases, the random coefficient model and the latent class model. Our application comes from conjoint analysis, and we are especially interested in what is gained by the general heterogeneity model in comparison to the other two when modeling consumers' heterogeneous preferences. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
