About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Robustness of normal theory inference when random effects are not normally distributed

Devamitta Perera, Muditha Virangika (has links)
Master of Science / Department of Statistics / Paul I. Nelson / The variance of a response in a one-way random effects model can be expressed as the sum of the variability among and within treatment levels. Conventional methods of statistical analysis for these models are based on the assumption of normality of both sources of variation. Since this assumption is not always satisfied and can be difficult to check, it is important to explore the performance of normal-theory inference when normality does not hold. This report uses simulation to assess the robustness of the F-test for the presence of an among-treatment variance component, and of the normal-theory confidence interval for the intra-class correlation coefficient, under several non-normal distributions. The power function of the F-test was found to be robust for moderately heavy-tailed random error distributions; for very heavy-tailed random error distributions, however, power is relatively low even for a large number of treatments. Coverage rates of the confidence interval for the intra-class correlation coefficient are far from nominal for very heavy-tailed, non-normal random effect distributions.
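
To make the simulation design concrete, here is a minimal sketch in the spirit of the report: it generates data from a one-way random effects model with normal treatment effects and heavy-tailed t-distributed errors, then estimates the power of the usual ANOVA F-test by Monte Carlo. The number of treatments, replicates, effect variance, and error degrees of freedom are illustrative assumptions, not the report's settings.

```python
# A minimal sketch (assumed settings): Monte Carlo power of the one-way
# random-effects F-test when the within-treatment errors are heavy-tailed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, n = 20, 5                # treatments, replicates per treatment (assumed)
sigma_a, reps = 0.5, 2000   # among-treatment SD, Monte Carlo replicates

rejections = 0
for _ in range(reps):
    a = rng.normal(0.0, sigma_a, size=k)       # random treatment effects
    e = rng.standard_t(df=3, size=(k, n))      # heavy-tailed within error
    y = a[:, None] + e
    msa = n * y.mean(axis=1).var(ddof=1)       # among-treatment mean square
    mse = y.var(axis=1, ddof=1).mean()         # pooled within mean square
    if msa / mse > stats.f.ppf(0.95, k - 1, k * (n - 1)):
        rejections += 1

print(f"empirical power at sigma_a = {sigma_a}: {rejections / reps:.3f}")
```

Swapping the error generator (lighter or even heavier tails) and varying k traces out the robustness pattern the report describes; setting sigma_a = 0 checks the test's size instead of its power.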
2

Essays on random effects models and GARCH

Skoglund, Jimmy January 2001 (has links)
This thesis consists of four essays, three in the field of random effects models and one in the field of GARCH.

The first essay, ''Maximum likelihood based inference in the two-way random effects model with serially correlated time effects'', considers maximum likelihood estimation and inference in the two-way random effects model with serial correlation. We derive a straightforward maximum likelihood estimator for the case where the time-specific component follows an AR(1) or MA(1) process; the estimator is easily generalized to allow for arbitrary stationary and strictly invertible ARMA processes. In addition, we consider the model selection problem and derive tests of the null hypothesis of no serial correlation as well as tests for discriminating between the AR(1) and MA(1) specifications. A Monte Carlo experiment evaluates the finite-sample properties of the estimators, test statistics, and model selection procedures.

The second essay, ''Asymptotic properties of the maximum likelihood estimator of random effects models with serial correlation'', considers the large-sample behavior of the maximum likelihood estimator of random effects models with AR(1) serial correlation in the idiosyncratic or time-specific error component. Consistent estimation and asymptotic normality are established for a comprehensive specification which nests these models as well as all commonly used random effects models.

The third essay, ''Specification and estimation of random effects models with serial correlation of general form'', is also concerned with maximum likelihood based inference in random effects models with serial correlation. Allowing for individual effects, we introduce serial correlation of general form in the time effects as well as the idiosyncratic errors. A straightforward maximum likelihood estimator is derived, and a coherent model selection strategy is suggested for determining the orders of serial correlation as well as the importance of time or individual effects. The methods are applied to the estimation of a production function using a sample of 72 Japanese chemical firms observed during 1968-1987.

The fourth essay, ''A simple efficient GMM estimator of GARCH models'', considers efficient GMM-based estimation of GARCH models. Sufficient conditions for the estimator to be consistent and asymptotically normal are established for the GARCH(1,1) conditional variance process. Efficiency results are also obtained for the GARCH(1,1)-M model, in which the conditional variance enters the mean as well. An application to the returns on the S&P 500 index illustrates the method. / Diss. Stockholm : Handelshögskolan, 2001
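
For orientation, the two model classes at the heart of the thesis have the following standard textbook forms; the notation below is assumed for illustration, not copied from the thesis.

```latex
\begin{align*}
  y_{it} &= x_{it}'\beta + \mu_i + \lambda_t + \nu_{it},
  \qquad \lambda_t = \rho\,\lambda_{t-1} + \xi_t
  && \text{(two-way random effects, AR(1) time effects)} \\
  y_t &= \delta h_t + \varepsilon_t,
  \qquad h_t = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta_1 h_{t-1}
  && \text{(GARCH(1,1)-M)}
\end{align*}
```

Here \mu_i is the individual effect, \lambda_t the serially correlated time effect, and h_t the conditional variance, which enters the mean of the GARCH(1,1)-M equation through \delta.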
3

A Monte Carlo Study: The Impact of Missing Data in Cross-Classification Random Effects Models

Alemdar, Meltem 12 August 2009 (has links)
Unlike multilevel data with a purely nested structure, cross-classified data may not only be clustered into hierarchically ordered units but may also belong to more than one unit at a given level of a hierarchy. In a cross-classified design, students at a given school might come from several different neighborhoods, and one neighborhood might send students to a number of different schools. In this scenario, schools and neighborhoods are cross-classified factors, and cross-classified random effects modeling (CCREM) should be used to analyze the data appropriately.

A common problem in any type of multilevel analysis is the presence of missing data at any given level. Little research has been conducted in the multilevel literature on the impact of missing data, and none in the area of cross-classified models. The purpose of this study was to examine the effect of data that are missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) on CCREM estimates, using multiple imputation to handle the missing data. In addition, the study examined the impact of including in the imputation model an auxiliary variable that is correlated with the variable with missingness (the level-1 predictor). The study extended the CCREM Monte Carlo simulation work of Meyers (2004) by adding missing data, and methods for handling them, to the CCREM setting.

The results demonstrated that, in general, multiple imputation met Hoogland and Boomsma's (1998) relative bias criterion (less than 5% in magnitude) for parameter estimates under the different missing data patterns. For the standard error estimates, substantial relative bias (defined by Hoogland and Boomsma as greater than 10%) was found in some conditions: when multiple imputation was used to handle the missing data, substantial bias appeared in the standard errors in most cells where data were MNAR, and this bias increased as a function of the percentage of missing data.
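
A minimal sketch of the kind of design described above, with all sample sizes and variances assumed for illustration: simulate a cross-classified (schools by neighborhoods) response and impose MAR missingness on the level-1 predictor through a fully observed auxiliary variable correlated with it.

```python
# Hypothetical sketch (assumed design, not the study's code): cross-classified
# data with MAR missingness on the level-1 predictor x, driven by an
# auxiliary variable that is observed for everyone.
import numpy as np

rng = np.random.default_rng(7)
n_students, n_schools, n_hoods = 2000, 40, 40

school = rng.integers(0, n_schools, n_students)   # cross-classified factor 1
hood = rng.integers(0, n_hoods, n_students)       # cross-classified factor 2
u_school = rng.normal(0, 0.5, n_schools)          # school random effects
u_hood = rng.normal(0, 0.5, n_hoods)              # neighborhood random effects

x = rng.normal(size=n_students)                   # level-1 predictor
aux = 0.6 * x + rng.normal(0, 0.8, n_students)    # auxiliary, corr(x, aux) = 0.6
y = 1.0 + 0.3 * x + u_school[school] + u_hood[hood] + rng.normal(size=n_students)

# MAR: the chance that x is missing depends only on the observed aux,
# not on x itself once aux is accounted for.
p_miss = 1.0 / (1.0 + np.exp(-(aux - 1.0)))
x_obs = np.where(rng.uniform(size=n_students) < p_miss, np.nan, x)
print(f"missing rate in x: {np.isnan(x_obs).mean():.1%}")
```

Fitting the CCREM to the complete data and to multiply imputed data, with and without aux in the imputation model, and comparing the estimates is the relative-bias comparison the abstract reports.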
4

Bayesian parsimonious covariance estimation for hierarchical linear mixed models

Frühwirth-Schnatter, Sylvia, Tüchler, Regina January 2004 (has links) (PDF)
We consider a non-centered parameterization of the standard random-effects model, which is based on the Cholesky decomposition of the variance-covariance matrix. The regression-type structure of the non-centered parameterization allows us to choose a simple, conditionally conjugate normal prior on the Cholesky factor. Based on the non-centered parameterization, we search for a parsimonious variance-covariance matrix by identifying the non-zero elements of the Cholesky factors using Bayesian variable selection methods. With this method we are able to learn from the data, for each effect, whether it is random or not, and whether covariances among random effects are zero. An application in marketing shows a substantial reduction in the number of free elements of the variance-covariance matrix. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
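
The mechanics can be summarized in standard mixed-model notation (assumed here, not copied from the paper): writing the random effects covariance as Q = CC' with C lower triangular, the non-centered form makes the model linear in the free elements of C.

```latex
\begin{align*}
  y_i &= X_i\beta + Z_i b_i + \varepsilon_i, \qquad b_i \sim N(0,\, CC'),\\
  b_i &= C z_i,\; z_i \sim N(0, I)
  \quad\Longrightarrow\quad
  y_i = X_i\beta + (z_i' \otimes Z_i)\,\operatorname{vec}(C) + \varepsilon_i.
\end{align*}
```

Because vec(C) enters like a regression coefficient, conditionally conjugate normal priors apply directly, and spike-and-slab indicators on its elements carry out the variable selection that decides which effects are random and which covariances vanish.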
5

Prediction of recurrent events

Fredette, Marc January 2004 (has links)
In this thesis, we study issues related to prediction problems, with an emphasis on those arising when recurrent events are involved. The first chapter defines the basic concepts of frequentist and Bayesian statistical prediction. The second chapter studies frequentist prediction intervals and their associated predictive distributions, and presents an approach based on asymptotically uniform pivotals that is shown to dominate the plug-in approach under certain conditions. The following three chapters consider the prediction of recurrent events. The third chapter presents different prediction models when these events can be modeled using homogeneous Poisson processes; among these models, those using random effects are shown to possess interesting features. In the fourth chapter, the time homogeneity assumption is relaxed and we present prediction models for non-homogeneous Poisson processes, studying their behavior for prediction problems with a finite horizon. In the fifth chapter, we apply the preceding concepts to a warranty dataset from the automobile industry; because the number of processes in this dataset is very large, we focus on methods providing computationally rapid prediction intervals. Finally, the last chapter discusses possibilities for future research.
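
As a point of reference, here is a minimal sketch of the plug-in benchmark that the pivotal-based intervals are designed to beat, under an assumed gamma random-effects (negative binomial) setup; none of the numbers come from the thesis.

```python
# Minimal sketch (assumed setup): plug-in prediction intervals for future
# counts of homogeneous Poisson processes with gamma-distributed rates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
m, t_obs, t_new = 200, 2.0, 1.0          # processes, observed/future exposure
shape, scale = 4.0, 0.5                  # gamma random-effect hyperparameters

lam = rng.gamma(shape, scale, size=m)    # process-specific rates
n_obs = rng.poisson(lam * t_obs)         # counts in the observation window

# Plug-in: treat lambda_hat = n_obs / t_obs as the true rate and read off
# Poisson quantiles. This ignores estimation uncertainty, which is the
# deficiency the asymptotically uniform pivotals correct.
lam_hat = n_obs / t_obs
lo = stats.poisson.ppf(0.025, lam_hat * t_new)
hi = stats.poisson.ppf(0.975, lam_hat * t_new)

n_new = rng.poisson(lam * t_new)         # counts actually realized later
coverage = np.mean((n_new >= lo) & (n_new <= hi))
print(f"plug-in coverage (nominal 95%): {coverage:.3f}")
```

With a short observation window the plug-in coverage typically falls short of the nominal level and improves as t_obs grows, which illustrates why correcting the plug-in approach matters.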
6

Model-based Tests for Standards Evaluation and Biological Assessments

Li, Zhengrong 27 September 2007 (has links)
Implementation of the Clean Water Act requires agencies to monitor aquatic sites on a regular basis and evaluate the quality of these sites. Sites are evaluated individually even though there may be numerous sites within a watershed. In some cases, sampling frequency is inadequate and the evaluation of site quality may have low reliability. This dissertation evaluates testing procedures for the determination of site quality based on model-based procedures that allow other sites to contribute information to the analysis of data from the test site. Test procedures are described for situations that involve multiple measurements from sites within a region, and for single measurements when stressor information is available or when covariates are used to account for individual site differences. Tests based on analysis of variance methods are described for fixed effects and random effects models. The proposed model-based tests compare limits (tolerance limits or prediction limits) for the data with the known standard. When the sample size for the test site is small, using model-based tests improves the detection of impaired sites.

The effects of sample size, heterogeneity of variance, and similarity between sites are discussed. Reference-based standards and the corresponding evaluation of site quality are also considered. Regression-based tests provide methods for incorporating information from other sites when there is information on stressors or covariates. Extension of some of the methods to multivariate biological observations and stressors is also discussed. Redundancy analysis is used as a graphical method for describing the relationship between biological metrics and stressors. A clustering method for finding stressor-response relationships is presented and illustrated using data from the Mid-Atlantic Highlands. Multivariate elliptical and univariate regions for assessment of site quality are discussed. / Ph. D.
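
The core idea, letting other sites contribute information and then comparing a model-based limit with the standard, can be sketched as follows. The shrinkage weights and normal-theory limit below are a simplified stand-in for the dissertation's tests, and all settings are assumptions.

```python
# Hedged sketch (assumed settings): pool sites with a one-way random effects
# model, shrink the test site's mean toward the regional mean, and compare
# an upper prediction limit for a new observation with a known standard.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_sites, n_obs, standard = 25, 4, 3.0

site_mean = rng.normal(2.0, 0.4, n_sites)             # true site means
y = site_mean[:, None] + rng.normal(0, 0.5, (n_sites, n_obs))

grand = y.mean()
msa = n_obs * y.mean(axis=1).var(ddof=1)              # among-site mean square
mse = y.var(axis=1, ddof=1).mean()                    # within-site mean square
sigma2_a = max((msa - mse) / n_obs, 0.0)              # ANOVA variance estimate

i = 0                                                 # the site under evaluation
w = sigma2_a / (sigma2_a + mse / n_obs)               # shrinkage (BLUP) weight
mu_i = w * y[i].mean() + (1 - w) * grand              # shrunken site mean
se = np.sqrt((1 - w) * sigma2_a + mse)                # predictive SD, new obs
upl = mu_i + stats.norm.ppf(0.95) * se                # 95% upper prediction limit
print(f"site 0 upper prediction limit {upl:.2f} vs standard {standard}:",
      "impaired" if upl > standard else "meets standard")
```

A small-sample version would replace the normal quantile with a t quantile and an approximate degrees-of-freedom calculation; the point of the sketch is only that pooling stabilizes the limit when the number of observations per site is small.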
7

Modeling Patterns of Small Scale Spatial Variation in Soil

Huang, Fang 11 January 2006 (has links)
The microbial communities found in soils are inherently heterogeneous and often exhibit spatial variation on a small scale. Becker et al. (2006) investigate this phenomenon and present statistical analyses to support their findings. In this project, alternative statistical methods and models are considered and employed in a re-analysis of the Becker et al. data. First, parametric nested random effects models are considered as an alternative to the nonparametric semivariogram models and kriging methods employed by Becker et al. to analyze patterns of spatial variation. Second, multiple logistic regression models are employed to investigate factors influencing microbial community structure, as an alternative to the simple logistic models used by Becker et al. Additionally, the microbial community profile data of Becker et al. were unobservable at several points in the spatial grid. The original analysis assumes that these data are missing completely at random and as such have relatively little impact on inference. In this re-analysis, that assumption is investigated, and it is shown that the pattern of missingness is correlated with both metabolic potential and spatial coordinates, and thus provides useful information that the original analysis ignored. Multiple imputation methods are employed to incorporate the information present in the missing data pattern, and the results are compared with those of Becker et al.
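
To illustrate the missingness check described above, on hypothetical data (the project's dataset is not reproduced here): regress an indicator of missingness on the spatial coordinates and metabolic potential; significant coefficients are evidence against MCAR.

```python
# Hypothetical sketch: a logistic regression diagnostic for MCAR. If
# missingness depends on coordinates or metabolic potential, the MCAR
# assumption is untenable and the pattern carries usable information.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 300
xy = rng.uniform(0.0, 1.0, size=(n, 2))        # spatial grid coordinates
metab = rng.normal(size=n)                     # metabolic potential
# simulate informative (non-MCAR) missingness for the demonstration
p = 1.0 / (1.0 + np.exp(-(1.5 * xy[:, 0] + 0.8 * metab - 1.5)))
missing = (rng.uniform(size=n) < p).astype(float)

X = sm.add_constant(np.column_stack([xy, metab]))
fit = sm.Logit(missing, X).fit(disp=0)
print(fit.summary(xname=["const", "x", "y", "metabolic"]))
```

Multiple imputation can then condition on the same covariates, which is how the re-analysis incorporates the information that the MCAR assumption discards.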
