21

Penalized Regression Methods in the Study of Serum Biomarkers for Overweight and Obesity

Vasquez, Monica M. January 2017 (has links)
The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. Although the availability of numerous serum biomarkers is highly promising, multiplex assays present statistical challenges due to the high dimensionality of these data. In this dissertation, three studies are presented that address these challenges using L1 penalized regression methods. In the first part of the dissertation, an extensive simulation study is performed for the logistic regression model that compares the Least Absolute Shrinkage and Selection Operator (LASSO) method with five LASSO-type methods under scenarios that arise in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and a small number of true signals. Results show that the choice of optimal LASSO-type method depends on the data structure and should be guided by the research objective. Methods are then applied to the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD) for the identification of serum biomarkers of overweight and obesity. Measurement of serum biomarkers using multiplex technologies may be more variable than traditional single-biomarker methods. Measurement error may induce bias in parameter estimation and complicate the variable selection process. In the second part of the dissertation, an existing measurement error correction method for penalized linear regression with the L1 penalty is adapted to accommodate validation data on a randomly selected subset of the study sample. A simulation study and analysis of TESAOD data demonstrate that the proposed approach improves variable selection and reduces bias in parameter estimation with validation data as small as 10 percent of the study sample. In the third part of the dissertation, a measurement error correction method that utilizes validation data is proposed for the penalized logistic regression model with the L1 penalty. A simulation study and analysis of TESAOD data are used to evaluate the proposed method. Results show an improvement in variable selection.
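
As a rough illustration of the kind of analysis described above, the sketch below fits a cross-validated L1-penalized logistic regression to simulated data with a small number of true signals. The dimensions, effect sizes, and data are invented stand-ins for the TESAOD measurements, which are not available here.

```python
# Minimal sketch: L1-penalized logistic regression for biomarker selection
# on simulated data (hypothetical dimensions; not the TESAOD panel).
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n, p, n_true = 200, 50, 5          # subjects, biomarkers, true signals
X = rng.normal(size=(n, p))        # correlated in practice; independent here
beta = np.zeros(p)
beta[:n_true] = 0.8                # sparse set of moderate signals
prob = 1 / (1 + np.exp(-(X @ beta)))
y = rng.binomial(1, prob)

# Cross-validated LASSO: the L1 penalty shrinks most coefficients to zero.
fit = LogisticRegressionCV(penalty="l1", solver="saga", Cs=20,
                           cv=5, max_iter=5000).fit(X, y)
selected = np.flatnonzero(fit.coef_.ravel())
print("selected biomarkers:", selected)
```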
22

Modelling non-linear exposure-disease relationships in a large individual participant meta-analysis allowing for the effects of exposure measurement error

Strawbridge, Alexander Daniel January 2012 (has links)
This thesis was motivated by data from the Emerging Risk Factors Collaboration (ERFC), a large individual participant data (IPD) meta-analysis of risk factors for coronary heart disease (CHD). Cardiovascular disease is the largest cause of death in almost all countries in the world, so it is important to be able to characterise the shape of risk factor–CHD relationships. Many of the risk factors for CHD considered by the ERFC are subject to substantial measurement error, and their relationships with CHD are non-linear. We firstly consider issues associated with modelling the risk factor–disease relationship in a single study, before using meta-analysis to combine relationships across studies. It is well known that classical measurement error generally attenuates linear exposure–disease relationships; however, its precise effect on non-linear relationships is less well understood. We investigate the effect of classical measurement error on the shapes of exposure–disease relationships that are commonly encountered in epidemiological studies, and then consider methods for correcting for classical measurement error. We propose the application of a widely used correction method, regression calibration, to fractional polynomial models. We also consider the effects of non-classical error on the observed exposure–disease relationship, and the impact on our correction methods when we erroneously assume classical measurement error. Analyses performed using categorised continuous exposures are common in epidemiology. We show that MacMahon's method for correcting for measurement error in analyses that use categorised continuous exposures, although simple, does not recover the correct shape of non-linear exposure–disease relationships. We perform a simulation study to compare alternative methods for categorised continuous exposures. Meta-analysis is the statistical synthesis of results from a number of studies addressing similar research hypotheses. The use of IPD is the gold standard approach because it allows for consistent analysis of the exposure–disease relationship across studies. Methods have recently been proposed for combining non-linear relationships across studies. We discuss these methods, extend them to P-spline models, and consider alternative methods of combining relationships across studies. We apply the methods developed to the relationships of fasting blood glucose and lipoprotein(a) with CHD, using data from the ERFC.
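
The attenuation result mentioned in this abstract can be demonstrated numerically. The sketch below simulates a linear exposure–disease relationship observed through a surrogate with classical error, compares the naive slope against the regression dilution factor λ = σ²_X / (σ²_X + σ²_U), and applies a basic regression calibration step. All values are illustrative, and the calibration fit assumes validation data in which the true exposure is observed.

```python
# Sketch of classical-error attenuation and regression calibration
# (simulated linear case; the thesis extends this to non-linear models).
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(0, 1, n)            # true exposure
w = x + rng.normal(0, 0.7, n)      # surrogate with classical error
y = 0.5 * x + rng.normal(0, 1, n)  # linear exposure-disease relationship

naive = np.polyfit(w, y, 1)[0]     # attenuated slope ~ 0.5 * lambda
lam = np.var(x) / (np.var(x) + 0.7**2)
print(f"naive {naive:.3f}, expected attenuated {0.5 * lam:.3f}")

# Regression calibration: replace w by E[X|W] and refit.
calib = np.polyfit(w, x, 1)        # needs validation data with true x
corrected = np.polyfit(np.polyval(calib, w), y, 1)[0]
print(f"corrected slope {corrected:.3f}")   # ~ 0.5
```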
23

Prevalence, impact, and adjustments of measurement error in retrospective reports of unemployment : an analysis using Swedish administrative data

Pina-Sánchez, Jose January 2014 (has links)
In this thesis I carry out an encompassing analysis of the problem of measurement error in retrospectively collected work histories using data from the "Longitudinal Study of the Unemployed". This dataset has the unique feature of linking survey responses to a retrospective question on work status with administrative data from the Swedish Register of Unemployment. Under the assumption that the register data are a gold standard, I explore three research questions: i) what is the prevalence of, and the reasons for, measurement error in retrospective reports of unemployment; ii) what are the consequences of using such survey data subject to measurement error in event history analysis; and iii) what are the most effective statistical methods to adjust for such measurement error. Regarding the first question, I find substantial measurement error in retrospective reports of unemployment; for example, only 54% of the subjects studied managed to report the correct number of spells of unemployment experienced in the year prior to the interview. Some reasons behind this problem are clear (e.g. the longer the recall period, the higher the prevalence of measurement error), while others depend on how measurement error is defined (e.g. women had a higher probability of misclassifying spells of unemployment but not of misdating them). To answer the second question, I compare different event history models using duration data from the survey and the register as their response variable. Here I find that the impact of measurement error is very large, attenuating regression estimates by about 90% of their true value, and that this impact is fairly consistent regardless of the type of event history model used. In the third part of the analysis, I implement different adjustment methods and compare their effectiveness. Here I note that standard methods based on strong assumptions, such as SIMEX or regression calibration, are incapable of dealing with the complexity of the measurement process under analysis. More positive results are obtained through ad hoc Bayesian adjustments capable of accounting for the different patterns of measurement error using a mixture model.
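
SIMEX, one of the standard adjustments the thesis finds inadequate for this setting, is easy to sketch: add extra noise to the error-prone variable at increasing levels, watch the estimate attenuate further, and extrapolate back to the zero-error case. The simulation below is a minimal illustration with invented data and a known error variance, not a reproduction of the thesis analysis.

```python
# Minimal SIMEX sketch: simulate extra error, then extrapolate back.
import numpy as np

rng = np.random.default_rng(2)
n, sigma_u = 2000, 0.6
x = rng.normal(0, 1, n)
w = x + rng.normal(0, sigma_u, n)        # error-prone duration report
y = 1.0 * x + rng.normal(0, 1, n)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
estimates = []
for lam in lambdas:
    # Simulation step: average slope over replicates with added noise.
    sims = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
            for _ in range(50)]
    estimates.append(np.mean(sims))

# Extrapolation step: quadratic fit, evaluated at lambda = -1 (no error).
coef = np.polyfit(lambdas, estimates, 2)
print("SIMEX estimate:", np.polyval(coef, -1.0))   # ~ 1.0
```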
24

On Small Area Estimation Problems with Measurement Errors and Clustering

Torkashvand, Elaheh 05 October 2016 (has links)
In this dissertation, we first develop new statistical methodologies for small area estimation problems with measurement errors. The prediction of small area means for the unit-level regression model with functional measurement error in the area-specific covariate is considered. We obtain the James-Stein (JS) estimate of the true area-specific covariate. Consequently, we construct the pseudo Bayes (PB) and pseudo empirical Bayes (PEB) predictors of small area means and estimate the mean squared prediction error (MSPE) associated with each predictor. Second, we modify the earlier point estimate of the true area-specific covariate so that the histogram of the predictors of the small area means is closer to the true one. We propose the constrained Bayes (CB) estimate of the true area-specific covariate. We show the superiority of the CB estimate over the maximum likelihood (ML) estimate in terms of Bayes risk. We also show that the PB predictor of the small area mean based on the CB estimate of the true area-specific covariate dominates its counterpart based on the ML estimate in terms of Bayes risk. We compare the performance of different predictors of the small area means using measures such as sensitivity, specificity, positive predictive value, negative predictive value, and MSPE. We believe that using the PEB and pseudo hierarchical Bayes predictors of small area means based on the constrained empirical Bayes (CEB) and constrained hierarchical Bayes (CHB) estimates offers higher precision in recognizing socio-economic groups that are at risk of prehypertension. The final problem we address is clustering the small areas in order to better understand the behavior of the random effects and, accordingly, to predict the small area means. We consider the Fay-Herriot model for this problem. We design a statistical test to evaluate the assumption of equality of the variance components in different clusters. If the null hypothesis of equal variance components is rejected, we implement a modified version of Tukey's method. We calculate the MSPE to evaluate the effect of the clustering on the precision of predictors of the small area means. We apply our methodologies to real data sets. / February 2017
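
As a minimal illustration of the shrinkage machinery underlying the JS estimate mentioned above, the sketch below applies the positive-part James-Stein estimator to simulated direct estimates for a set of small areas; the model and all numbers are hypothetical.

```python
# Positive-part James-Stein shrinkage toward the grand mean (simulated).
import numpy as np

rng = np.random.default_rng(3)
m = 30                                   # number of small areas
theta = rng.normal(10, 2, m)             # true area-specific values
se = 1.5
x = theta + rng.normal(0, se, m)         # direct (error-prone) estimates

grand = x.mean()
s2 = np.sum((x - grand) ** 2)
shrink = max(0.0, 1 - (m - 3) * se**2 / s2)   # positive-part JS factor
theta_js = grand + shrink * (x - grand)

print("MSE direct:     ", np.mean((x - theta) ** 2))
print("MSE James-Stein:", np.mean((theta_js - theta) ** 2))
```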
25

Poor health and early exit from labour force: an analysis using data from Survey of Health, Ageing and Retirement in Europe

Hausenblas, Václav January 2011 (has links)
Health is considered to be one of the main determinants of the retirement decision. A majority of empirical studies operationalise health using self-perceived health status measures. According to the justification hypothesis, such a method may introduce a bias into the estimation, and moreover, this bias may vary from country to country. The aim of this thesis is to make use of a dataset rich in objective measures of health from the second wave of the Survey of Health, Ageing and Retirement in Europe and to set side by side the estimates based on subjective measures and the IV estimates based on more objective variables, and thereby to assess the magnitude of possible endogeneity and measurement error. The thesis applies these identification methods to a model of early exit from the labour force and discusses gender differences and the specifics of the EU countries covered.
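
The IV strategy described here can be sketched as a simple two-stage least squares: an objective health measure instruments the error-prone self-reported one in a linear early-exit model. All variables below are simulated and the specification is invented, not the SHARE analysis itself.

```python
# 2SLS sketch: objective measure instruments self-reported health.
import numpy as np

rng = np.random.default_rng(6)
n = 4000
health = rng.normal(0, 1, n)                  # latent true health
srh = health + rng.normal(0, 0.8, n)          # self-reported (error-prone)
grip = 0.9 * health + rng.normal(0, 0.5, n)   # objective measure (instrument)
exit_lf = -0.7 * health + rng.normal(0, 1, n) # early-exit propensity

# Stage 1: project the noisy regressor on the instrument.
b1 = np.polyfit(grip, srh, 1)
srh_hat = np.polyval(b1, grip)

# Stage 2: regress the outcome on the fitted values.
print("naive slope:", np.polyfit(srh, exit_lf, 1)[0])      # attenuated
print("IV slope:   ", np.polyfit(srh_hat, exit_lf, 1)[0])  # ~ -0.7
```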
26

What did you really earn last year? Explaining measurement error in survey income data

Angel, Stefan, Disslbacher, Franziska, Humer, Stefan, Schnetzer, Matthias January 2019 (has links) (PDF)
The paper analyses the sources of income measurement error in surveys with a unique data set. We use the Austrian 2008-2011 waves of the European Union "Statistics on income and living conditions" survey which provide individual information on wages, pensions and unemployment benefits from survey interviews and officially linked administrative records. Thus, we do not have to fall back on complex two-sample matching procedures like related studies. We empirically investigate four sources of measurement error, namely social desirability, sociodemographic characteristics of the respondent, the survey design and the presence of learning effects. We find strong evidence for a social desirability bias in income reporting, whereas the presence of learning effects is mixed and depends on the type of income under consideration. An Owen value decomposition reveals that social desirability is a major explanation of misreporting in wages and pensions, whereas sociodemographic characteristics are most relevant for mismatches in unemployment benefits.
27

Mixtures-of-Regressions with Measurement Error

Fang, Xiaoqiong 01 January 2018 (has links)
The finite mixture model has been studied for a long time; however, traditional methods assume that the variables are measured without error. The mixtures-of-regressions model with measurement error poses challenges to statisticians, since both the mixture structure and the existence of measurement error can lead to inconsistent estimates of the regression coefficients. To address this inconsistency, we propose a series of methods to estimate the mixture likelihood of the mixtures-of-regressions model when there is measurement error, both in the responses and in the predictors. Different estimators of the parameters are derived and compared with respect to their relative efficiencies. The simulation results show that the proposed estimation methods work well and improve the estimation process.
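
A useful baseline for the methods proposed here is the standard EM algorithm for a mixture of linear regressions without measurement error; the sketch below implements it for two components on simulated data. It illustrates the error-free model only, not the corrected estimators the dissertation develops.

```python
# EM sketch for a two-component mixture of linear regressions (simulated).
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.uniform(-2, 2, n)
z = rng.binomial(1, 0.5, n)              # latent component labels
y = np.where(z == 1, 2.0 * x + 1.0, -1.0 * x) + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[0.5, 1.5], [0.5, -0.5]])   # initial (intercept, slope)
pi, sigma = np.array([0.5, 0.5]), np.array([1.0, 1.0])

for _ in range(100):
    # E-step: responsibilities from weighted normal densities.
    dens = np.stack([pi[k] / sigma[k] *
                     np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2)
                     for k in range(2)])
    r = dens / dens.sum(axis=0)
    # M-step: weighted least squares per component.
    for k in range(2):
        w = r[k]
        beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
        pi[k] = w.mean()

print("estimated coefficients:\n", beta)
```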
28

Regression calibration and maximum likelihood inference for measurement error models

Monleon-Moscardo, Vicente J. 08 December 2005 (has links)
Graduation date: 2006 / Regression calibration inference seeks to estimate regression models with measurement error in explanatory variables by replacing the mismeasured variable by its conditional expectation, given a surrogate variable, in an estimation procedure that would have been used if the true variable were available. This study examines the effect of the uncertainty in the estimation of the required conditional expectation on inference about regression parameters, when the true explanatory variable and its surrogate are observed in a calibration dataset and related through a normal linear model. The exact sampling distribution of the regression calibration estimator is derived for normal linear regression when independent calibration data are available. The sampling distribution is skewed and its moments are not defined, but its median is the parameter of interest. It is shown that, when all random variables are normally distributed, the regression calibration estimator is equivalent to maximum likelihood provided a natural estimate of variance is non-negative. A check for this equivalence is useful in practice for judging the suitability of regression calibration. Results about relative efficiency are provided for both external and internal calibration data. In some cases maximum likelihood is substantially more efficient than regression calibration. In general, though, a more important concern when the necessary conditional expectation is uncertain, is that inferences based on approximate normality and estimated standard errors may be misleading. Bootstrap and likelihood-ratio inferences are preferable.
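
The core regression calibration step described above — replacing the mismeasured variable by its estimated conditional expectation given the surrogate — can be sketched with an internal calibration subset. All data below are simulated, and the 10% subset size is an arbitrary choice for illustration.

```python
# Regression calibration with an internal calibration subset (simulated).
import numpy as np

rng = np.random.default_rng(5)
n, n_cal = 3000, 300
x = rng.normal(0, 1, n)
w = x + rng.normal(0, 0.8, n)               # surrogate for all subjects
y = 1.2 * x + rng.normal(0, 1, n)

cal = rng.choice(n, n_cal, replace=False)   # subjects with true X observed
a, b = np.polyfit(w[cal], x[cal], 1)[::-1]  # calibration model: E[X|W] = a + b*W

x_hat = a + b * w                           # substitute E[X|W] everywhere
print("naive slope:     ", np.polyfit(w, y, 1)[0])      # attenuated
print("calibrated slope:", np.polyfit(x_hat, y, 1)[0])  # ~ 1.2
```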
29

An Error Prevention Model For Cosmic Functional Size Measurement Method

Salmanoglu, Murat 01 September 2012 (has links) (PDF)
Estimation and measurement of the size of software are crucial for project management activities. Functional size measurement is one of the most frequently used methods to measure the size of software, and COSMIC is one of the popular methods for functional size measurement. Although precise size measurement is critical, results may differ because of errors made in the measurement process. Erroneous measurement results cause a lack of confidence in the methods as well as reliability problems for effort and cost estimations. This research proposes an error prevention model for the COSMIC Functional Size Measurement method to increase the reliability of measurements. The prevention model defines data movement patterns for different types of functional processes and a cardinality table to prevent errors. We validated the prevention model with two different case studies and observed that it can decrease errors by up to 90% in our case studies.
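
COSMIC assigns one CFP (COSMIC Function Point) to each data movement of the four types — Entry, Exit, Read, and Write — so a functional process is sized by counting its movements. The sketch below encodes that counting rule with a guard against invalid movement types, loosely in the spirit of the cardinality checks the model proposes; the example process is invented.

```python
# COSMIC sizing sketch: 1 CFP per data movement, with a validity guard.
from collections import Counter

ALLOWED = {"Entry", "Exit", "Read", "Write"}

def cosmic_size(movements):
    """Return size in CFP and a per-type tally; reject unknown types."""
    bad = [m for m in movements if m not in ALLOWED]
    if bad:
        raise ValueError(f"unknown data movement type(s): {bad}")
    return len(movements), Counter(movements)

process = ["Entry", "Read", "Write", "Exit"]   # e.g. an "update record" process
size, tally = cosmic_size(process)
print(f"{size} CFP, breakdown: {dict(tally)}")
```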
30

Tests of random effects in linear and non-linear models

Häggström Lundevaller, Erling January 2002 (has links)
No description available.
