About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Some problems in model specification and inference for generalized additive models

Marra, Giampiero January 2010
Regression models describing the dependence between a univariate response and a set of covariates play a fundamental role in statistics. In the last two decades, a tremendous effort has been made in developing flexible regression techniques, such as generalized additive models (GAMs), with the aim of modelling the expected value of a response variable as a sum of smooth unspecified functions of predictors. Many nonparametric regression methodologies exist, including locally weighted regression and smoothing splines. Here the focus is on penalized regression spline methods, which can be viewed as a generalization of smoothing splines with a more flexible choice of bases and penalties. This thesis addresses three issues. First, the problem of model misspecification is treated by extending the instrumental variable approach to the GAM context. Second, we study the theoretical and empirical properties of the confidence intervals for the smooth component functions of a GAM. Third, we consider the problem of variable selection within this flexible class of models. All results are supported by theoretical arguments and extensive simulation experiments which shed light on the practical performance of the methods discussed in this thesis.
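As an illustration of the penalized regression splines this thesis focuses on, here is a minimal P-spline-style smoother (a B-spline basis with a second-difference penalty); the knot count, penalty weight, and function names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_knots=20, degree=3):
    """Evaluate a cubic B-spline basis on an equally spaced knot grid over the data."""
    knots = np.linspace(x.min(), x.max(), n_knots)
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]  # clamped knot vector
    n_basis = len(t) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        B[:, j] = BSpline(t, coef, degree)(x)
    return B

def pspline_fit(x, y, lam=1.0):
    """Penalized least squares: minimize ||y - B @ beta||^2 + lam * ||D2 @ beta||^2."""
    B = bspline_basis(x)
    D2 = np.diff(np.eye(B.shape[1]), n=2, axis=0)  # second-difference penalty matrix
    beta = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
    return B @ beta  # fitted smooth at the observed x
```

Larger `lam` shrinks the fit toward a straight line, smaller `lam` toward an interpolant; selecting it is the smoothing-parameter problem that recurs throughout this listing.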
2

Designing Messages to Reduce Meat Consumption: A Test of the Extended Parallel Process Model

January 2015
abstract: The purpose of this study was to examine the utility of the Extended Parallel Process Model (EPPM) in guiding message design for a new health context: reducing meat consumption. The experiment was a posttest-only design with a comparison and a control group. Message design was informed by the EPPM and contained threat and efficacy components. Participants (Americans ages 25-44 who eat meat approximately once a day) were randomly assigned to view a high-threat/high-efficacy video, a high-threat/low-efficacy video, or to be in a control group. Dependent variables were danger control outcomes (i.e., attitudes, intentions, and behavior) and fear control outcomes (i.e., perceived manipulative intent, message derogation, and defensive avoidance). Outcomes were assessed at an immediate posttest (Time 1) and at a one-week follow-up (Time 2). There were 373 participants at Time 1 and 153 at Time 2. The data did not fully fit either the EPPM or the additive model; both videos were equally persuasive and resulted in greater message acceptance (attitude change, behavioral intention, and behavior) than the control group. Because the high-threat/low-efficacy group was more persuasive than the control group, the data more closely fit the additive model. Fear control outcomes did not differ between the two video groups. Overall, the study demonstrated the effectiveness of using the EPPM to guide video message design in a new health context, reducing meat consumption. The results supported the EPPM prediction that a high-threat/high-efficacy message would result in message acceptance, but support was not found for the necessity of an efficacy component for message acceptance. These findings can be used to guide new or existing health campaigns that seek to improve public health outcomes, including reducing the incidence of heart disease, cancer, diabetes, and obesity. / Doctoral Dissertation, Communication, 2015
3

A Bayesian Subgroup Analysis Using An Additive Model

Xiao, Yang January 2013
No description available.
4

Náhodné procesy v analýze spolehlivosti / Random Processes in Reliability Analysis

Chovanec, Kamil January 2011
Title: Random Processes in Reliability Analysis
Author: Kamil Chovanec
Department: Department of Probability and Mathematical Statistics
Supervisor: Doc. Petr Volf, CSc. (volf@utia.cas.cz)
Abstract: The thesis focuses on reliability analysis, with special emphasis on the Aalen additive model. The result of hypothesis testing in reliability analysis is often a process that converges to a Gaussian martingale under the null hypothesis. The variance of the martingale can be estimated with a uniformly consistent estimator, turning the original hypothesis into a new hypothesis about the resulting process. There are several ways to test this hypothesis. The thesis presents some of these tests and compares their power for various models and sample sizes using Monte Carlo simulations. In a special case we derive a point that maximizes the asymptotic power of two of the tests.
Keywords: martingale, Aalen's additive model, hazard function
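For readers unfamiliar with the Aalen additive model referenced above, the sketch below computes Aalen's least-squares estimator of the cumulative regression functions, under simplifying assumptions (untied event times, time-fixed covariates); the function name and interface are ours, not the thesis's.

```python
import numpy as np

def aalen_additive(times, events, X):
    """Aalen's least-squares estimator of cumulative regression functions B(t).

    times  : (n,) follow-up times, assumed untied
    events : (n,) 1 if the event was observed, 0 if censored
    X      : (n, p) covariates; include a column of ones for the baseline hazard
    Returns the event times and the stacked cumulative estimates.
    """
    order = np.argsort(times)
    times, events, X = times[order], events[order], X[order]
    n = len(times)
    at_risk = np.ones(n, dtype=bool)
    ts, B = [0.0], [np.zeros(X.shape[1])]
    for i in range(n):
        if events[i]:
            Y = X[at_risk]                            # design matrix of subjects still at risk
            dN = (np.flatnonzero(at_risk) == i).astype(float)  # event indicator among at-risk
            dB, *_ = np.linalg.lstsq(Y, dN, rcond=None)        # least-squares increment
            ts.append(times[i])
            B.append(B[-1] + dB)
        at_risk[i] = False                            # subject leaves the risk set
    return np.array(ts), np.vstack(B)
```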
5

Régression isotonique itérée / Iterative isotonic regression

Jégou, Nicolas 23 November 2012
This thesis is set in the framework of univariate nonparametric regression. Assuming the regression function is of bounded variation, Jordan's decomposition ensures that it can be written as the sum of an increasing function and a decreasing function. We propose and analyse a novel estimator which combines isotonic regression, devoted to the estimation of monotone functions, with the backfitting algorithm devoted to the estimation of additive models. More precisely, our method iterates isotonic regression according to the backfitting algorithm, so that at each iteration the estimator of the regression function is the sum of an increasing part and a decreasing part. The first chapter provides an overview of the references related to isotonic regression and additive models. The next chapter is devoted to the theoretical study of iterated isotonic regression. As a first step, we show that, for a fixed sample size, increasing the number of iterations leads to interpolation of the data. We identify the limits of the individual terms of the sum by making a connection with the isotonicity of projection onto convex cones and deriving an equivalent algorithm based on iterative bias reduction. Finally, we establish the consistency of the estimator. The third chapter is devoted to the practical study of the estimator. Since increasing the number of iterations leads to overfitting, it is not desirable to iterate the procedure until convergence. We examine stopping rules based on adaptations of criteria usually employed in the context of linear smoothing methods (AIC, BIC, ...), as well as criteria assuming a priori knowledge of the number of modes of the regression function. The method shows interesting behaviour when the regression function has breakpoints, and we apply the algorithm to CGH-array data, where breakpoint detection is of crucial interest. Finally, an application to the estimation of unimodal functions and to mode detection is proposed.
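A compact sketch of the iterated isotonic idea described above: alternately fit an increasing and a decreasing component to partial residuals via isotonic regression, stopping after a few sweeps because iterating to convergence interpolates the data. The fixed iteration count below is a stand-in for the thesis's data-driven stopping rules.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def iterative_isotonic(x, y, n_iter=3):
    """Backfit an increasing and a decreasing component (iterated isotonic regression)."""
    inc = np.zeros_like(y, dtype=float)
    dec = np.zeros_like(y, dtype=float)
    up = IsotonicRegression(increasing=True, out_of_bounds="clip")
    down = IsotonicRegression(increasing=False, out_of_bounds="clip")
    for _ in range(n_iter):                   # early stopping: full convergence interpolates
        inc = up.fit_transform(x, y - dec)    # fit increasing part to partial residuals
        dec = down.fit_transform(x, y - inc)  # fit decreasing part to partial residuals
    return inc + dec
```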
6

Comparison of background correction in tiling arrays and a spatial model

Maurer, Dustin January 1900
Master of Science / Department of Statistics / Susan J. Brown / Haiyan Wang / DNA hybridization microarray technologies have made it possible to gain an unbiased perspective of whole-genome transcriptional activity on a scale that is growing rapidly. However, the experimental process and the machinery involved introduce biologically irrelevant bias, so correction methods are needed to restore the data to its true, biologically meaningful state. It is therefore important that the algorithms developed to remove such technical biases be accurate and robust. This report explores background correction in microarrays using a real data set of five replicates of whole-genome tiling arrays hybridized with genetic material from Tribolium castaneum. It reviews the literature on such correction techniques and explores some of the more traditional methods by implementing them on the data set. Finally, it introduces an alternative approach, implements it, and compares it to the traditional approaches for correcting such errors.
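The abstract does not spell out the report's spatial model, but as a generic, hedged illustration of spatially aware background correction, one could estimate a slowly varying background over the physical array layout with a moving median and subtract it. The window size and function name below are our own illustrative choices, not the report's method.

```python
import numpy as np
from scipy.ndimage import median_filter

def spatial_background_correct(intensity, window=25):
    """Subtract a spatially varying background estimated by a moving median.

    intensity : 2-D array of probe intensities laid out by physical array position
    window    : side length of the square median window (an illustrative default)
    """
    background = median_filter(intensity, size=window)
    corrected = intensity - background
    return np.clip(corrected, 0.0, None)  # keep corrected intensities non-negative
```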
7

Selection of smoothing parameters with application in causal inference

Häggström, Jenny January 2011
This thesis is a contribution to the research area concerned with selection of smoothing parameters in the framework of nonparametric and semiparametric regression. Selection of smoothing parameters is one of the most important issues in this framework, and the choice can heavily influence subsequent results. A nonparametric or semiparametric approach is often desirable when large datasets are available, since this allows us to make fewer and weaker assumptions than a parametric approach requires. In the first paper we consider smoothing parameter selection in nonparametric regression when the purpose is to accurately predict future or unobserved data. We study the use of accumulated prediction errors and make comparisons to leave-one-out cross-validation, which is widely used by practitioners. In the second paper a general semiparametric additive model is considered, and the focus is on selection of smoothing parameters when optimal estimation of some specific parameter is of interest. We introduce a double smoothing estimator of a mean squared error and propose to select smoothing parameters by minimizing this estimator. Our approach is compared with existing methods. The third paper is concerned with the selection of smoothing parameters optimal for estimating average treatment effects defined within the potential outcome framework. For this estimation problem we propose double smoothing methods similar to the method proposed in the second paper. Theoretical properties of the proposed methods are derived and comparisons with existing methods are made by simulations. In the last paper we apply our results from the third paper by using a double smoothing method for selecting smoothing parameters when estimating average treatment effects on the treated. We estimate the effect on BMI of divorcing in middle age. Rich data on socioeconomic conditions, health, and lifestyle from Swedish longitudinal registers are used.
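As a concrete baseline for the selection problem discussed above, here is a minimal leave-one-out cross-validation criterion for a Nadaraya-Watson smoother, using the standard no-refit shortcut for linear smoothers; the bandwidth grid and function names are our own illustrative choices rather than anything from the thesis.

```python
import numpy as np

def nw_smoother_matrix(x, h):
    """Row-normalized Gaussian kernel weights: fitted values are S @ y."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def loo_cv(x, y, h):
    """Leave-one-out CV score via the linear-smoother shortcut (no refitting needed)."""
    S = nw_smoother_matrix(x, h)
    resid = (y - S @ y) / (1.0 - np.diag(S))
    return np.mean(resid ** 2)

def select_bandwidth(x, y, grid=np.logspace(-2, 0, 30)):
    """Pick the bandwidth minimizing the LOO criterion over a candidate grid."""
    return min(grid, key=lambda h: loo_cv(x, y, h))
```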
8

Estimation and bias correction of the magnitude of an abrupt level shift

Liu, Wenjie January 2012
Consider a time series model which is stationary apart from a single shift in mean. If the time of a level shift is known, the least squares estimator of the magnitude of this level shift is a minimum variance unbiased estimator. If the time is unknown, however, this estimator is biased. Here, we first carry out extensive simulation studies to determine the relationship between the bias and three parameters of our time series model: the true magnitude of the level shift, the true time point and the autocorrelation of adjacent observations. Thereafter, we use two generalized additive models to generalize the simulation results. Finally, we examine to what extent the bias can be reduced by multiplying the least squares estimator with a shrinkage factor. Our results showed that the bias of the estimated magnitude of the level shift can be reduced when the level shift does not occur close to the beginning or end of the time series. However, it was not possible to simultaneously reduce the bias for all possible time points and magnitudes of the level shift.
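To make the setup concrete, here is a minimal sketch of the least squares estimator when the shift time is unknown: it profiles over candidate change points and returns the estimated magnitude, optionally multiplied by a shrinkage factor of the kind examined above (the default value is a placeholder, since how to choose it is the point of the study).

```python
import numpy as np

def estimate_level_shift(y, shrinkage=1.0):
    """Least squares estimate of a single mean shift at an unknown time point.

    Returns the estimated change point and shrinkage * estimated magnitude.
    The shrinkage factor is a placeholder for the bias correction studied above.
    """
    n = len(y)
    best_rss, best_tau, best_delta = np.inf, None, 0.0
    for tau in range(2, n - 1):                  # candidate shift times
        m1, m2 = y[:tau].mean(), y[tau:].mean()
        rss = ((y[:tau] - m1) ** 2).sum() + ((y[tau:] - m2) ** 2).sum()
        if rss < best_rss:
            best_rss, best_tau, best_delta = rss, tau, m2 - m1
    return best_tau, shrinkage * best_delta
```

When the true shift time is known, fixing `tau` recovers the unbiased estimator described in the abstract; profiling over `tau` is what introduces the bias being corrected.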
9

Regression analysis with longitudinal measurements

Ryu, Duchwan 29 August 2005
Bayesian approaches to the regression analysis of longitudinal measurements are considered. The history of measurements from a subject may convey characteristics of the subject. Hence, in a regression analysis with longitudinal measurements, the characteristics of each subject can serve as covariates, in addition to possible other covariates. Also, the longitudinal measurements may lead to complicated covariance structures within each subject, and these should be modeled properly. When covariates are unobservable characteristics of each subject, Bayesian parametric and nonparametric regressions have been considered. Although the covariates are not directly observable, they can be estimated by virtue of the longitudinal measurements. In this case, the measurement error problem is inevitable, so a classical measurement error model is established. In the Bayesian framework, the regression function as well as all the unobservable covariates and nuisance parameters are estimated. As multiple covariates are involved, a generalized additive model is adopted, and the Bayesian backfitting algorithm is utilized for each component of the additive model. For the binary response, logistic regression is proposed, where the link function is estimated by Bayesian parametric and nonparametric regressions; for the link function, the introduction of latent variables makes computation fast. In the next part, each subject is assumed not to be observed at prespecified time points; furthermore, the time of the next measurement from a subject is supposed to depend on the subject's previous measurement history. For these outcome-dependent follow-up times, various modeling options and the associated analyses have been examined to investigate how outcome-dependent follow-up times affect the estimation, within the frameworks of Bayesian parametric and nonparametric regressions. Correlation structures of outcomes are based on different correlation coefficients for different subjects. First, regression models have been constructed by assuming a Poisson process for the follow-up times. To interpret the subject-specific random effects, more flexible models are considered by introducing a latent variable for the subject-specific random effect and a survival distribution for the follow-up times. The performance of each model has been evaluated using Bayesian model assessments.
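The Bayesian backfitting algorithm mentioned above cycles through the components of the additive model just as classical backfitting does. As a simplified, non-Bayesian sketch of that cycling structure (the kernel smoother, bandwidth, and sweep count are illustrative assumptions, not the thesis's choices):

```python
import numpy as np

def kernel_smooth(x, r, h=0.3):
    """Nadaraya-Watson smooth of residuals r against a single covariate x."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (W @ r) / W.sum(axis=1)

def backfit_additive(X, y, n_sweeps=20):
    """Classical backfitting for y = alpha + sum_j f_j(X[:, j]) + noise."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_sweeps):
        for j in range(p):
            partial = y - alpha - f.sum(axis=1) + f[:, j]  # residuals excluding f_j
            f[:, j] = kernel_smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()                      # center for identifiability
    return alpha, f
```

In the Bayesian version, each smoothing step is replaced by a draw of the component from its conditional posterior, but the sweep over components is the same.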
10

Study on Additive Generalized Radial Basis Function Networks

Liao, Shih-hui 18 June 2009
In this thesis, we propose a new class of learning models, namely the additive generalized radial basis function networks (AGRBFNs), for general nonlinear regression problems. This class of learning machines combines the generalized radial basis function networks (GRBFNs) commonly used in general machine learning problems with the additive models (AMs) frequently encountered in semiparametric regression problems. In statistical regression theory, the AM is a good compromise between the linear model and the nonparametric model. To obtain a more general network structure that can address more general data sets, the AMs are embedded in the output layer of the GRBFNs to form the AGRBFNs. Simple weight-update rules based on incremental gradient descent are derived. Several illustrative examples are provided to compare the performance of the classical GRBFNs and the proposed AGRBFNs. Simulation results show that, upon proper selection of the hidden nodes and the bandwidth of the kernel smoother used in the additive output layer, AGRBFNs can give better fits than the classical GRBFNs. Furthermore, for a given learning problem, AGRBFNs usually need fewer hidden nodes than GRBFNs for the same level of accuracy.
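For context, the sketch below trains a classical GRBFN baseline with the kind of incremental (sample-by-sample) gradient descent described above; it updates only the output-layer weights and does not reproduce the AGRBFN's additive output layer. The centers, kernel width, and learning rate are illustrative assumptions.

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian radial basis features for inputs X given fixed centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_grbfn(X, y, centers, width=1.0, lr=0.05, epochs=100, seed=0):
    """Incremental gradient descent on the output weights, one sample at a time."""
    rng = np.random.default_rng(seed)
    Phi = rbf_features(X, centers, width)
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            err = y[i] - Phi[i] @ w
            w += lr * err * Phi[i]  # LMS-style incremental update
    return w
```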
