About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
41

Econometric computing with HC and HAC covariance matrix estimators

Zeileis, Achim January 2004
Data described by econometric models typically contain autocorrelation and/or heteroskedasticity of unknown form, and for inference in such models it is essential to use covariance matrix estimators that can consistently estimate the covariance of the model parameters. Hence, suitable heteroskedasticity-consistent (HC) and heteroskedasticity and autocorrelation consistent (HAC) estimators have been receiving attention in the econometric literature over the last 20 years. To apply these estimators in practice, an implementation is needed that translates the conceptual properties of the underlying theoretical frameworks into computational tools. This paper describes such an implementation in the package sandwich for the R system for statistical computing, and shows how the suggested functions provide reusable components that build on existing functionality and can be integrated easily into new inferential procedures or applications. The toolbox contained in sandwich is extremely flexible and comprehensive, including specific functions for the most important HC and HAC estimators from the econometric literature. Several real-world data sets are used to illustrate how the functionality can be integrated into applications. / Series: Research Report Series / Department of Statistics and Mathematics
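
A minimal sketch of the workflow this abstract describes, assuming the sandwich and lmtest packages are installed (the Investment data set ships with sandwich):

    ## Fit an ordinary linear model to the Investment data.
    library(sandwich)
    library(lmtest)
    data("Investment", package = "sandwich")
    fm <- lm(RealInv ~ RealGNP + RealInt, data = Investment)

    ## Heteroskedasticity-consistent (HC) covariance, here the HC3 estimator.
    vcovHC(fm, type = "HC3")

    ## Heteroskedasticity and autocorrelation consistent (HAC) covariance.
    vcovHAC(fm)

    ## Either estimator plugs directly into inferential tools such as coeftest().
    coeftest(fm, vcov = vcovHC(fm, type = "HC3"))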
42

Object-oriented Computation of Sandwich Estimators

Zeileis, Achim January 2006
Sandwich covariance matrix estimators are a popular tool in applied regression modeling for performing inference that is robust to certain types of model misspecification. Suitable implementations are available in the R system for statistical computing for certain model fitting functions only (in particular lm()), but not for other standard regression functions, such as glm(), nls(), or survreg(). Therefore, conceptual tools and their translation to computational tools in the package sandwich are discussed, enabling the computation of sandwich estimators in general parametric models. Object orientation can be achieved by providing a few extractor functions (most importantly for the empirical estimating functions) from which various types of sandwich estimators can be computed. / Series: Research Report Series / Department of Statistics and Mathematics
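
The extractor-based design can be sketched as follows; the Poisson model is an arbitrary illustration, but estfun(), bread(), and sandwich() are the package's actual building blocks:

    library(sandwich)

    ## An arbitrary non-lm() model: a Poisson GLM on simulated data.
    set.seed(1)
    d   <- data.frame(x = rnorm(100))
    d$y <- rpois(100, exp(0.5 + 0.3 * d$x))
    fm  <- glm(y ~ x, data = d, family = poisson)

    head(estfun(fm))  # empirical estimating functions, one row per observation
    bread(fm)         # the "bread" of the sandwich, based on the Hessian
    sandwich(fm)      # full sandwich estimator assembled from the extractors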
43

On measuring differential yielding abilities of wheat cultivars over varying environments

Land, Miriam L January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Statistics.
44

The use of a dynamic digraph structure in a population simulation model for grain sorghum

Curry, Jess Walter January 2010
Typescript, etc. / Digitized by Kansas Correctional Industries
45

A New Estimating Equation Based Approach for Secondary Trait Analyses in Genetic Case-control Studies

Song, Xiaoyu January 2015
Background/Aims: Case-control designs are commonly employed in genetic association studies. In addition to the primary trait of interest, data on additional secondary traits, related to the primary trait, are often collected. Traditional association analyses between genetic variants and secondary traits can be biased in such cases, and several methods have been proposed to address this issue, including the inverse-probability-of-sampling-weighted (IPW) approach and the semi-parametric maximum likelihood (SPML) approach. Methods: Here, we propose a new estimating equation based approach that combines observed and counterfactual outcomes to provide unbiased estimation of genetic associations with secondary traits. We extend the estimating equation framework to both generalized linear models (GLM) and non-parametric regressions, and compare it with the existing approaches. Results: We demonstrate analytically and numerically that our proposed approach provides robust and fairly efficient unbiased estimation in all simulations we consider. Unlike existing methods, it is less sensitive to the sampling scheme and to the underlying disease model specification. In addition, we illustrate our new approach using two real data examples. The first analyzes the binary secondary trait diabetes under the GLM framework using a stroke case-control study. The second analyzes the continuous secondary trait serum IgE levels under linear and quantile regression models using an asthma case-control study. Conclusion: The proposed estimating equation approach accommodates a wide range of regressions, and it outperforms the existing approaches in some of the scenarios we consider.
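
The abstract does not give the form of the proposed estimating equations, but the IPW baseline it compares against can be sketched as follows; every variable name, prevalence, and sampling fraction below is a hypothetical placeholder:

    ## Hypothetical case-control data with a secondary trait y.
    set.seed(1)
    n <- 500
    d <- data.frame(
      case = rbinom(n, 1, 0.5),   # primary trait: case-control status
      g    = rbinom(n, 2, 0.3),   # genotype (0/1/2 minor allele count)
      y    = rnorm(n)             # secondary trait
    )

    ## Assumed probabilities of being sampled into the study (hypothetical):
    ## cases are heavily oversampled relative to controls.
    p_sample <- ifelse(d$case == 1, 0.8, 0.1)

    ## IPW: weight each subject by the inverse of its sampling probability,
    ## then regress the secondary trait on genotype.
    fit <- lm(y ~ g, data = d, weights = 1 / p_sample)
    summary(fit)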
46

Estimating R2 Shrinkage in Multiple Regression: A Comparison of Different Analytical Methods

Yin, Ping 01 May 1999
This study investigated the effectiveness of various analytical methods used for estimating R2 shrinkage in multiple regression analysis. Two categories of analytical formulae were identified: estimators of the population squared multiple correlation coefficient (ρ2), and estimators of the population cross-validity coefficient (ρc2). To avoid possible confounding factors that might be associated with a real data set, such as data nonnormality, lack of precise population parameters, and varying degrees of multicollinearity among the predictor variables, the Monte Carlo method was used to simulate multivariate normal sample data with prespecified population parameters: the squared multiple correlation coefficient (ρ2), number of predictors, sample size, degree of multicollinearity, and data normality conditions. Five hundred replicates were simulated within each cell of the sampling conditions. The analytical formulae were applied to the simulated data in each sampling condition, and the "adjusted" coefficients were obtained and then compared to their corresponding population parameters (ρ2 and ρc2). Analysis of the results indicates that the "Wherry" formula, currently the most widely used (in both SAS and SPSS), is probably not the most effective analytical formula for estimating ρ2. Instead, the Pratt formula appeared to outperform the other analytical formulae across most of the sampling conditions. Among the analytical formulae designed to estimate ρc2, the Browne formula appeared to be the most effective and stable in minimizing statistical bias across different sampling conditions. The study also concludes that the n/p ratio (sample size to number of predictor variables) affects the performance of these analytical formulae the most; different degrees of multicollinearity among the predictor variables do not dramatically influence their performance. Further replications on both real and simulated data are still needed to investigate the effectiveness of these analytical formulae.
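
For concreteness, a sketch of the "Wherry" estimator referenced above, assuming it is the usual adjusted R2 formula reported by SAS and SPSS:

    ## Wherry-type adjusted R2: rho2_hat = 1 - (1 - R2) * (n - 1) / (n - p - 1)
    ## (assumed to be the formula the abstract refers to).
    wherry <- function(R2, n, p) 1 - (1 - R2) * (n - 1) / (n - p - 1)

    ## Example: an observed R2 of .50 with n = 30 cases and p = 5 predictors
    ## shrinks to about .40, reflecting the low n/p ratio the study highlights.
    wherry(R2 = 0.50, n = 30, p = 5)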
47

Linear Models for Estimating the Nutritive Value of Sheep Diets

Christiansen, Michael L. 01 May 1979
Digestibility data were determined in 2 replications of a 2 x 3 x 2 x 2 factorially arranged experiment to: (1) determine the effects of forage type (grass vs alfalfa), forage maturity (late vegetative vs midbloom vs fullbloom), diet ingredients (forage only vs 50:50 forage plus corn), and diet texture (coarsely chopped vs pelleted) on the digestibility of diet chemical constituents by sheep; (2) develop equations to estimate digestible energy of sheep diets from nutrient content of the diet; and (3) compare popular chemical methods used to partition feed dry matter into fibrous and soluble components. Diets were fed to growing wether lambs. Crude protein (CP) and available carbohydrates (AC) of diets were nearly 100% digestible (true digestibility) regardless of diet source. However, the apparent digestibility of CP and AC varied significantly with the concentration of these components in the diet. Apparent digestibility of cellulose (CL) was significantly different between grass and alfalfa, early and late maturity stages, and coarse and pelleted diet textures. Interactions between forage type and stage of maturity and between stage of maturity and energy level also significantly altered the apparent digestibility of all diet fibrous constituents except hemicellulose (HC). An energy level-by-diet texture interaction significantly affected the apparent digestibility of HC, CL, CW, NDF, ADF and CF. Simple (equation 1) and complex (equation 2) models were generated for estimating digestible nutrient amounts or diet digestible energy (DE), both denoted YN, from the nutrient content (XN) of the diet:

(1) YN = b0 + b1XN
(2) YN = b0 + b1XN + αi + βj + γk + δl + αβij + . . . + γδkl

Complex models adjust the estimated digestible nutrient amount or DE for effects due to forage type (αi), stage of maturity (βj), feed combination (γk) and texture (δl). Two-way interactions (αβij, βγjk, . . ., γδkl) between qualitative variables were added to the equations when significant. Interactions between the qualitative variables and the quantitative variable (αiXN, βjXN, γkXN, δlXN, αβijXN, etc.) were also tried but did not significantly change the precision of the equations. Complex models gave significantly better estimates of digestible CP, AC, total lipid (TL), HC, CL, CW, NDF, ADF or CF and DE than the simple models. DE in the diets was determined by two methods. First, DE was estimated by summing the predicted decimal fractions of digested protein, carbohydrates, and lipids times their respective caloric values (Mcal/kg), as in equation (3):

(3) DE = 5.65(YCP) + 4.15(YAC + YHC + YCL) + 9.40(YTL)

DE was also estimated directly from CL, CW, NDF, ADF, or CF content in the diet. Both approaches gave comparably precise estimations of diet DE when complex models were used. The CF simple model gave poorer estimates of DE (R2 = .56) than the CL, CW, NDF, and ADF simple models (R2 = .69, .69, .71, and .71, respectively). Added indicator variables compensated for differences between CF and the other chemical parameters. CL, CW, NDF, ADF, and CF complex models were similar in their estimation of DE (average R2 = .89 for the DE complex models). Complex models could be used effectively in a computer program for balancing rations for sheep. Additional experiments should be conducted to provide added information for comparison.
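
As a rough illustration, the two model forms could be fit in R as follows; the data frame, factor levels, and simulated responses are hypothetical stand-ins for the 2 x 3 x 2 x 2 factorial described above:

    ## Hypothetical stand-in for the factorial diet data.
    set.seed(1)
    diets <- expand.grid(
      forage   = c("grass", "alfalfa"),
      maturity = c("vegetative", "midbloom", "fullbloom"),
      combo    = c("forage", "forage+corn"),
      texture  = c("chopped", "pelleted"),
      rep      = 1:2
    )
    diets$XN <- runif(nrow(diets), 5, 25)                # nutrient content of diet
    diets$YN <- 2 + 0.8 * diets$XN + rnorm(nrow(diets))  # digestible amount

    simple  <- lm(YN ~ XN, data = diets)                 # equation (1)
    complex <- lm(YN ~ XN + forage + maturity + combo + texture
                     + forage:maturity + maturity:combo,
                  data = diets)                          # equation (2), with two-way terms
    anova(simple, complex)  # is the complex model a significant improvement?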
48

Cost Estimation and Production Evaluation for Hopper Dredges

Hollinberger, Thomas E. May 2010
Dredging projects are expensive, government-funded projects that are contracted out and competitively bid upon. When planning a trailing suction hopper dredge project or bidding on the request for proposal for such a project, an accurate cost prediction is essential. This thesis presents a method that uses fluid transport fundamentals and pump power characteristics to determine a production rate for hopper dredges. With a production rate established, a number of financial inputs are used to determine the cost and duration of a project. The estimating program is a Microsoft Excel spreadsheet provided with reasonable default values for a wide range of hopper dredging projects, and it allows easy customization by any user with specific knowledge to improve the accuracy of an estimate. Results from the spreadsheet, using the default values and inputs from 8 projects spanning 1998 to 2009, were found to be satisfactory: the spreadsheet's estimates differed from the actual contract costs by an average of 15.9%, versus 15.7% for the government estimates of the same projects.
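
The thesis's spreadsheet is not reproduced here, but its overall arithmetic, with the production rate determining duration and duration driving cost, can be sketched as follows; every number and name is a hypothetical placeholder:

    ## Hypothetical inputs (placeholders, not values from the thesis).
    production_rate <- 1500     # cubic yards dredged per operating hour
    project_volume  <- 500000   # cubic yards to be dredged
    hourly_cost     <- 3000     # dollars per operating hour
    mobilization    <- 750000   # fixed mobilization/demobilization cost, dollars

    ## Duration follows from the production rate; cost follows from duration.
    duration_hours <- project_volume / production_rate
    total_cost     <- mobilization + duration_hours * hourly_cost
    total_cost     # rough project cost estimate in dollars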
49

Duration Data Analysis in Longitudinal Survey

Boudreau, Christian January 2003
Considerable amounts of event history data are collected through longitudinal surveys. These surveys have many particularities or features that result from the dynamic nature of the population under study and from the fact that data collected through longitudinal surveys involve complex survey designs, with clustering and stratification. These particularities include attrition, seam effects, censoring, left-truncation, and complications in variance estimation due to the use of complex survey designs. This thesis focuses on the last two points. Statistical methods based on the stratified Cox proportional hazards model that account for intra-cluster dependence, when the sampling design is uninformative, are proposed. This is achieved using the theory of estimating equations in conjunction with empirical process theory. Issues concerning analytic inference from survey data and the use of weighted versus unweighted procedures are also discussed. The proposed methodology is applied to data from the U.S. Survey of Income and Program Participation (SIPP) and the Canadian Survey of Labour and Income Dynamics (SLID). Finally, different statistical methods for handling left-truncated sojourns are explored and compared, including the conditional partial likelihood and other methods based on the Exponential and Weibull distributions.
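
A sketch of the kind of model discussed, assuming the survival package; the data set, weights, and cluster structure below are hypothetical:

    library(survival)

    ## Hypothetical survey-like data: households (clusters) within strata,
    ## each observation carrying a survey weight.
    set.seed(1)
    d <- data.frame(
      time    = rexp(200),
      status  = rbinom(200, 1, 0.7),
      x       = rnorm(200),
      stratum = rep(1:4, each = 50),
      hhid    = rep(1:50, each = 4),  # household (cluster) identifier
      w       = runif(200, 0.5, 2)    # survey weights
    )

    ## Stratified Cox PH model: strata() gives stratum-specific baseline
    ## hazards, cluster() requests a robust sandwich variance that accounts
    ## for intra-cluster dependence, and the weights enter a weighted
    ## (design-based) partial likelihood.
    fit <- coxph(Surv(time, status) ~ x + strata(stratum) + cluster(hhid),
                 weights = w, data = d)
    summary(fit)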
