  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

The LASSO linear mixed model for mapping quantitative trait loci

Foster, Scott David January 2006 (has links)
This thesis concerns the identification of quantitative trait loci (QTL) for important traits in cattle line crosses. One of these traits is birth weight of calves, which affects both animal production and welfare through correlated effects on parturition and subsequent growth. Birth weight was one of the traits measured in the Davies' Gene Mapping Project. These data form the motivation for the methods presented in this thesis. Multiple QTL models have been previously proposed and are likely to be superior to single QTL models. The multiple QTL models can be loosely divided into two categories: (1) model-building methods that aim to generate good models containing only a subset of all the potential QTL; and (2) methods that consider all the observed marker explanatory variables. The first set of methods can be misleading if an incorrect model is chosen. The second set of methods does not have this limitation. However, a full fixed effects analysis is generally not possible, as the number of marker explanatory variables is typically large with respect to the number of observations. This can be overcome by using constrained estimation methods or by making the marker effects random. One method of constrained estimation is the least absolute selection and shrinkage operator (LASSO). This method has the appealing ability to produce predictions of effects that are identically zero. The LASSO can also be specified as a random model where the effects follow a double exponential distribution. In this thesis, the LASSO is investigated from a random effects model perspective. Two methods to approximate the marginal likelihood are presented. The first uses the standard form for the double exponential distribution and requires adjustment of the score equations for unbiased estimation. The second is based on an alternative probability model for the double exponential distribution. 
It was developed late in the candidature and gives similar dispersion parameter estimates to the first approximation, but does so in a more direct manner. The alternative LASSO model suggests some novel types of predictors. Methods for a number of different types of predictors are specified and are compared for statistical efficiency. Initially, inference for the LASSO effects is performed using simulation. Essentially, this treats the random effects as fixed effects and tests the null hypothesis that the effect is zero. In simulation studies, it is shown to be a useful method to identify important effects. However, the effects are random, so such a test is not strictly appropriate. After the specification of the alternative LASSO model, a method for making probability statements about the random effects being above or below zero is developed. This method is based on the predictive distribution of the random effects (posterior in Bayesian terminology). The random LASSO model is not sufficiently flexible to model most QTL mapping data. Typically, these data arise from large experiments and require models containing terms for experimental design. For example, the Davies' Gene Mapping experiment requires fixed effects for different sires, a covariate for birthdate within season and random normal effects for management group. To accommodate these sources of variation a mixed model is employed. The marker effects are included into this model as random LASSO effects. Estimation of the dispersion parameters is based on an approximate restricted likelihood (an extension of the first method of estimation for the simple random effects model). Prediction of the random effects is performed using a generalisation of Henderson's mixed model equations. The performance of the LASSO linear mixed model for QTL identification is assessed via simulation. It performs well against other commonly used methods but it may lack power for lowly heritable traits in small experiments. 
However, the rate of false positives in such situations is much lower. Also, the LASSO method is more precise, locating the correct marker rather than a marker in its vicinity. Analysis of the Davies' Gene Mapping Data using the methods described in this thesis identified five non-zero marker-within-sire effects (out of 570 such effects). This analysis clearly shows that most of the genome does not affect the trait of interest. The simulation results and the analysis of the Davies' Gene Mapping Project Data show that the LASSO linear mixed model is a competitive method for QTL identification. It provides a flexible method to model the genetic and experimental effects simultaneously. / Thesis (Ph.D.)--School of Agriculture, Food and Wine, 2006.
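[Editor's note: the zero-producing property highlighted in this abstract comes from the LASSO's soft-thresholding operator, which maps small estimated effects to exactly zero. A minimal coordinate-descent sketch, illustrative only — the thesis itself uses a random-effects/mixed-model formulation, and the function names here are hypothetical:]

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the closed-form solution of the
    one-dimensional LASSO problem; it maps small values exactly to zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO minimising 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            b[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return b
```

On simulated data with two true effects among ten markers, most fitted coefficients come out identically zero, mimicking the sparse marker-effect estimates described above.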
82

Improved iterative schemes for REML estimation of variance parameters in linear mixed models.

Knight, Emma January 2008 (has links)
Residual maximum likelihood (REML) estimation is a popular method of estimation for variance parameters in linear mixed models, which typically requires an iterative scheme. The aim of this thesis is to review several popular iterative schemes and to develop an improved iterative strategy that will work for a wide class of models. The average information (AI) algorithm is a computationally convenient and efficient algorithm to use when starting values are in the neighbourhood of the REML solution. However when reasonable starting values are not available, the algorithm can fail to converge. The expectation-maximisation (EM) algorithm and the parameter expanded EM (PXEM) algorithm are good alternatives in these situations but they can be very slow to converge. The formulation of these algorithms for a general linear mixed model is presented, along with their convergence properties. A series of hybrid algorithms are presented. EM or PXEM iterations are used initially to obtain variance parameter estimates that are in the neighbourhood of the REML solution, and then AI iterations are used to ensure rapid convergence. Composite local EM/AI and local PXEM/AI schemes are also developed; the local EM and local PXEM algorithms update only the random effect variance parameters, with the estimates of the residual error variance parameters held fixed. Techniques for determining when to use EM-type iterations and when to switch to AI iterations are investigated. Methods for obtaining starting values for the iterative schemes are also presented. The performance of these various schemes is investigated for several different linear mixed models. A number of data sets are used, including published data sets and simulated data. The performance of the basic algorithms is compared to that of the various hybrid algorithms, using both uninformed and informed starting values. The theoretical and empirical convergence rates are calculated and compared for the basic algorithms. 
The direct comparison of the AI and PXEM algorithms shows that the PXEM algorithm, although an improvement over the EM algorithm, still falls well short of the AI algorithm in terms of speed of convergence. However, when the starting values are too far from the REML solution, the AI algorithm can be unstable. Instability is most likely to arise in models with a more complex variance structure. The hybrid schemes use EM-type iterations to move close enough to the REML solution to enable the AI algorithm to successfully converge. They are shown to be robust to choice of starting values like the EM and PXEM algorithms, while demonstrating fast convergence like the AI algorithm. / Thesis (Ph.D.) - University of Adelaide, School of Agriculture, Food and Wine, 2008
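[Editor's note: the hybrid strategy this abstract describes can be sketched on a toy problem — a balanced one-way random effects model with unit error variance, so the group means satisfy ybar_i ~ N(0, s + 1/m). EM steps (stable but slow) run until the updates become small, then Newton steps on the marginal log-likelihood finish quickly; Newton here merely plays the role of the AI algorithm, which for this scalar problem is a Newton-type update. Function and argument names are hypothetical:]

```python
import numpy as np

def hybrid_ml_variance(ybar, m, s0=1.0, tol=1e-8, switch_tol=0.05):
    """Hybrid EM/Newton ML estimate of the random-effect variance s in the
    toy model ybar_i ~ N(0, s + 1/m), error variance fixed at 1."""
    s = s0
    # Phase 1: EM iterations (monotone, robust to starting values).
    for _ in range(500):
        u_hat = s * m * ybar / (s * m + 1.0)   # posterior means of u_i
        v = s / (s * m + 1.0)                  # posterior variances
        s_new = np.mean(u_hat ** 2 + v)
        rel = abs(s_new - s) / max(s, 1e-12)
        s = s_new
        if rel < switch_tol:                   # near the solution: switch
            break
    # Phase 2: Newton iterations on the marginal log-likelihood (fast).
    for _ in range(100):
        w = s + 1.0 / m
        grad = 0.5 * np.sum(ybar ** 2 / w ** 2 - 1.0 / w)
        hess = np.sum(0.5 / w ** 2 - ybar ** 2 / w ** 3)
        step = -grad / hess
        s = max(s + step, 1e-10)               # keep the variance positive
        if abs(step) < tol:
            break
    return s
```

The switch rule mirrors the thesis's scheme: cheap, safe iterations move into the neighbourhood of the solution, where the second-order method converges rapidly.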
83

Factorial linear model analysis

Brien, Christopher J. January 1992 (has links) (PDF)
"February 1992" Bibliography: leaf 323-344. Develops a general strategy for factorial linear model analysis for experimental and observational studies, an iterative, four-stage, model comparison procedure. The approach is applicable to studies characterized as being structure-balanced, multitiered and based on Tjur structures unless the structure involves variation factors when it must be a regular Tjur structure. It covers a wide range of experiments including multiple-error, change-over, two-phase, superimposed and unbalanced experiments.
84

Laplace approximations to likelihood functions for generalized linear mixed models

Liu, Qing, 1961- 31 August 1993 (has links)
This thesis considers likelihood inferences for generalized linear models with additional random effects. The likelihood function involved ordinarily cannot be evaluated in closed form and numerical integration is needed. The theme of the thesis is a closed-form approximation based on Laplace's method. We first consider a special yet important case of the above general setting -- the Mantel-Haenszel-type model with overdispersion. It is seen that the Laplace approximation is very accurate for likelihood inferences in that setting. The approach and results on accuracy apply directly to the more general setting involving multiple parameters and covariates. Attention is then given to how to maximize out nuisance parameters to obtain the profile likelihood function for parameters of interest. In evaluating the accuracy of the Laplace approximation, we utilized Gauss-Hermite quadrature. Although this is commonly used, it was found that in practice inadequate thought has been given to the implementation. A systematic method is proposed for transforming the variable of integration to ensure that the Gauss-Hermite quadrature is effective. We found that under this approach the Laplace approximation is a special case of the Gauss-Hermite quadrature. / Graduation date: 1994
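[Editor's note: the abstract's two ingredients — the Laplace approximation and Gauss-Hermite quadrature with a transformed variable of integration — can be sketched for a single Poisson observation with a normal random effect. This is an illustrative reconstruction, not the thesis's code; the function name is hypothetical:]

```python
import numpy as np
from math import lgamma

def poisson_re_likelihood(y, beta, tau, n_gh=20):
    """Marginal likelihood of one Poisson count with a N(0, tau) random
    effect u: integral of exp(y*(beta+u) - exp(beta+u))/y! * N(u; 0, tau).
    Returns (laplace, gauss_hermite), the quadrature being centred and
    scaled at the mode of the integrand."""
    def g(u):  # log integrand
        eta = beta + u
        return (y * eta - np.exp(eta) - lgamma(y + 1)
                - 0.5 * u ** 2 / tau - 0.5 * np.log(2 * np.pi * tau))
    # Newton iterations for the mode u_hat of g.
    u = 0.0
    for _ in range(50):
        g1 = y - np.exp(beta + u) - u / tau
        g2 = -np.exp(beta + u) - 1.0 / tau
        u -= g1 / g2
    sigma = 1.0 / np.sqrt(-g2)
    laplace = np.exp(g(u)) * np.sqrt(2 * np.pi) * sigma
    # Gauss-Hermite quadrature after transforming u = u_hat + sqrt(2)*sigma*x.
    x, w = np.polynomial.hermite.hermgauss(n_gh)
    nodes = u + np.sqrt(2.0) * sigma * x
    gh = np.sqrt(2.0) * sigma * np.sum(w * np.exp(x ** 2 + g(nodes)))
    return laplace, gh
```

With one quadrature node this scheme reduces exactly to the Laplace value, which is one way of seeing the thesis's observation that the Laplace approximation is a special case of (suitably transformed) Gauss-Hermite quadrature.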
85

Electronic structure and optical properties of ZnO : bulk and surface

Yan, Caihua 23 February 1994 (has links)
Graduation date: 1994
86

Detecting Major Genes Controlling Robustness of Chicken Body Weight Using Double Generalized Linear Models

Zhang, Liming, Han, Yang January 2010 (has links)
Detecting both the major genes that control the phenotypic mean and those that control the phenotypic variance is an emerging problem in quantitative trait loci analysis. In order to map both kinds of genes, we applied the idea of classic Haley-Knott regression to double generalized linear models. We performed both kinds of quantitative trait loci detection for a Red Jungle Fowl x White Leghorn F2 intercross using double generalized linear models. It is shown that the double generalized linear model is a proper and efficient approach for localizing variance-controlling genes. We compared two models, with and without a fixed sex effect, and prefer including the sex effect in order to reduce the residual variance. We found that different genes might affect body weight at different times as the chicken grows.
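[Editor's note: a double generalized linear model of the kind used above can be sketched as two interleaved fits — weighted least squares for the mean model and a gamma GLM with log link for the dispersion model, applied to squared residuals. A minimal sketch under simplifying assumptions (no QTL/marker structure, hypothetical function names):]

```python
import numpy as np

def dglm_fit(X, Z, y, n_outer=20):
    """Double GLM sketch: mean model y = X b + e with Var(e_i) = exp(Z_i g).
    Alternates weighted least squares for b with gamma/log-link IRLS steps
    fitting g to the squared residuals, a standard DGLM estimating scheme."""
    g = np.zeros(Z.shape[1])
    for _ in range(n_outer):
        var = np.exp(Z @ g)
        # Mean model: weighted least squares with weights 1/var.
        W = 1.0 / var
        b = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
        d = (y - X @ b) ** 2                 # squared residuals ~ gamma
        # Dispersion model: one IRLS step for a gamma GLM with log link
        # (working response z; working weights are 1 for this link).
        mu = np.exp(Z @ g)
        z = Z @ g + d / mu - 1.0
        g = np.linalg.solve(Z.T @ Z, Z.T @ z)
    return b, g
```

The two model formulas play the roles of the mean-controlling and variance-controlling effects in the abstract: coefficients in g that differ from zero indicate covariates (or, in the QTL setting, marker positions) that change the residual variance.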
87

Nonlinear time series modeling of some Canadian river flow data /

Batten, Douglas James, January 2000 (has links)
Thesis (M.A.S.), Memorial University of Newfoundland, 2000. / Bibliography: leaves 71-73.
88

Item and person parameter estimation using hierarchical generalized linear models and polytomous item response theory models

Williams, Natasha Jayne. January 2003 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2003. / Vita. Includes bibliographical references. Available also from UMI Company.
89

Modeling and forecast of Brazilian reservoir inflows via dynamic linear models under climate change scenarios

Lima, Luana Medeiros Marangon 06 February 2012 (has links)
The hydrothermal scheduling problem aims to determine an operation strategy that produces generation targets for each power plant at each stage of the planning horizon. This strategy aims to minimize the expected value of the operation cost over the planning horizon, composed of fuel costs to operate thermal plants plus penalties for failure in load supply. The system state at each stage is highly dependent on the water inflow at each hydropower generator reservoir. This work focuses on developing a probabilistic model for the inflows that is suitable for a multistage stochastic algorithm that solves the hydrothermal scheduling problem. The probabilistic model that governs the inflows is based on a dynamic linear model. Due to the cyclical behavior of the inflows, the model incorporates seasonal and regression components. We also incorporate climate variables, such as precipitation, El Niño, and other ocean indexes, as predictive variables when relevant. The model is tested on the power generation system in Brazil, with about 140 hydro plants that are responsible for more than 80% of the electricity generation in the country. These plants are first grouped by basin and classified into 15 groups. Each group has a different probabilistic model that describes its seasonality and specific characteristics. The inflow forecast derived with the probabilistic model at each stage of the planning horizon is a continuous distribution, rather than a single point forecast. We describe an algorithm to form a finite scenario tree by sampling from the inflow forecasting distribution with interstage dependency, that is, the inflow realization at a specific stage depends on the inflow realizations of previous stages.
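[Editor's note: a dynamic linear model with a seasonal component, as used above for inflows, reduces to Kalman filter recursions whose one-step-ahead output is a full Gaussian predictive distribution — the property the abstract relies on for scenario sampling. A minimal local-level plus first-harmonic sketch with hypothetical parameter values; the thesis's models also carry regression and climate terms:]

```python
import numpy as np

def dlm_filter(y, period=12, v=1.0, w=0.1):
    """Kalman filter for a local-level + first-harmonic seasonal DLM:
    y_t = F theta_t + nu_t,  theta_t = G theta_{t-1} + omega_t.
    Returns one-step forecast means and variances: a full predictive
    distribution at each stage, not a point forecast."""
    om = 2 * np.pi / period
    G = np.array([[1, 0, 0],
                  [0, np.cos(om), np.sin(om)],
                  [0, -np.sin(om), np.cos(om)]])   # level + harmonic rotation
    F = np.array([1.0, 1.0, 0.0])
    W = w * np.eye(3)
    m = np.zeros(3)                  # state mean
    C = 10.0 * np.eye(3)             # vague initial state covariance
    f_mean, f_var = [], []
    for yt in y:
        a = G @ m                    # prior state mean
        R = G @ C @ G.T + W          # prior state covariance
        f = F @ a                    # one-step forecast mean
        q = F @ R @ F + v            # one-step forecast variance
        f_mean.append(f)
        f_var.append(q)
        K = R @ F / q                # Kalman gain
        m = a + K * (yt - f)
        C = R - np.outer(K, K) * q
    return np.array(f_mean), np.array(f_var)
```

A scenario tree of the kind described above can then be grown by sampling inflow realizations from N(f_t, q_t) at each stage and re-running the filter along each sampled branch, which is what produces the interstage dependency.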
90

Examining the invariance of item and person parameters estimated from multilevel measurement models when distribution of person abilities are non-normal

Moyer, Eric 24 September 2013 (has links)
Multilevel measurement models (MMM), an application of hierarchical generalized linear models (HGLM), model the relationship between ability level estimates and item difficulty parameters, based on examinee responses to items. A benefit of using MMM is the ability to include additional levels in the model to represent a nested data structure, which is common in educational contexts. Previous research has demonstrated the ability of the one-parameter MMM to accurately recover both item difficulty parameters and examinee ability levels, when using both 2- and 3-level models, under various sample size and test length conditions (Kamata, 1999; Brune, 2011). Parameter invariance of measurement models (the property that parameter estimates are equivalent regardless of the distribution of ability levels) is important when the typical assumption of a normal distribution of ability levels in the population may not be correct. An assumption of MMM is that the distribution of examinee abilities, which is represented by the level-2 residuals in the HGLM, is normal. If the distribution of abilities in the population is not normal, as suggested by Micceri (1989), this assumption of MMM is violated, which has been shown to affect the estimation of the level-2 residuals. The current study investigated the parameter invariance of the 2-level 1P-MMM by examining the accuracy of item difficulty parameter estimates and examinee ability level estimates. Study conditions included the standard normal distribution, as a baseline, and three non-normal distributions having various degrees of skew, in addition to various test lengths and sample sizes, to simulate various testing conditions. The study's results provide evidence for overall parameter invariance of the 2-level 1P-MMM, when accounting for scale indeterminacy from the estimation process, for the study conditions included. 
Although the errors in the item difficulty parameter and examinee ability level estimates in the study were not of practical importance, there was some evidence that ability distributions may affect the accuracy of parameter estimates for items with difficulties greater than those represented in this study. Also, the accuracy of ability estimates for non-normal distributions seemed lower for conditions with greater test lengths and sample sizes, indicating possible increased difficulty in estimating abilities from non-normal distributions.
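[Editor's note: the 1P (Rasch) model underlying the MMM specifies P(Y_ji = 1) = logistic(theta_j - b_i). As a much-simplified sketch of the invariance question, the snippet below estimates item difficulties with abilities treated as known — the MMM estimates them jointly, so this is illustrative only — and lets one check that recovery is similar under normal and skewed ability distributions:]

```python
import numpy as np

def estimate_difficulty(Y, theta, n_iter=50):
    """Rasch (1P) item difficulty estimation by Newton iterations on the
    logistic log-likelihood, with person abilities theta treated as known
    offsets: P(Y_ji = 1) = 1 / (1 + exp(-(theta_j - b_i)))."""
    b = np.zeros(Y.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        score = (Y - p).sum(axis=0)        # per-item score in eta = theta - b
        info = (p * (1 - p)).sum(axis=0)   # per-item Fisher information
        b -= score / info                  # Newton step (b enters eta with -)
    return b
```

Simulating responses under a normal and under a standardized chi-square (skewed) ability distribution and comparing the recovered difficulties gives a toy version of the invariance check the study performs at full scale.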
