111

Probabilistic pairwise model comparisons based on discrepancy measures and a reconceptualization of the p-value

Riedle, Benjamin N. 01 May 2018 (has links)
Discrepancy measures are often employed in problems involving the selection and assessment of statistical models. A discrepancy gauges the separation between a fitted candidate model and the underlying generating model. In this work, we consider pairwise comparisons of fitted models based on a probabilistic evaluation of the ordering of the constituent discrepancies. An estimator of the probability is derived using the bootstrap. In the framework of hypothesis testing, nested models are often compared on the basis of the p-value. Specifically, the simpler null model is favored unless the p-value is sufficiently small, in which case the null model is rejected and the more general alternative model is retained. Using suitably defined discrepancy measures, we mathematically show that, in general settings, the Wald, likelihood ratio (LR) and score test p-values are approximated by the bootstrapped discrepancy comparison probability (BDCP). We argue that the connection between the p-value and the BDCP leads to potentially new insights regarding the utility and limitations of the p-value. The BDCP framework also facilitates discrepancy-based inferences in settings beyond the limited confines of nested model hypothesis testing.
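A minimal sketch of the flavor of a bootstrapped discrepancy comparison probability (BDCP), assuming a Gaussian location model with known unit variance and taking the negative log-likelihood as the discrepancy; fitting the alternative on each resample and evaluating both discrepancies against the observed sample is an illustrative reading of the idea, not the thesis's exact estimator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=0.3, scale=1.0, size=50)   # observed sample

def discrepancy(sample, mu):
    # Negative Gaussian log-likelihood (unit variance) as a KL-type discrepancy proxy.
    return -stats.norm.logpdf(sample, loc=mu, scale=1.0).sum()

B = 2000
d_null = discrepancy(y, 0.0)                  # null model: mean fixed at 0
wins = 0
for _ in range(B):
    boot = rng.choice(y, size=y.size, replace=True)
    mu_hat = boot.mean()                      # alternative fitted on the resample
    wins += d_null <= discrepancy(y, mu_hat)  # does the null model's discrepancy win?
print(f"BDCP = {wins / B:.3f}")

# For comparison: the two-sided z-test p-value for H0: mu = 0 (known variance).
z = np.sqrt(y.size) * y.mean()
print(f"z-test p-value = {2 * stats.norm.sf(abs(z)):.3f}")
```

In this toy setting the two printed quantities tend to be close, mirroring the approximation result described in the abstract.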
112

Statistical detection with weak signals via regularization

Li, Jinzheng 01 July 2012 (has links)
There has been increasing interest in uncovering smuggled nuclear materials in connection with the War on Terror. Detection of special nuclear materials hidden in cargo containers is a major challenge in national and international security. We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra, where each subspectrum is obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclide. We propose a Poisson regression model for deducing which nuclides are present in the observed spectrum. In recognition that a radiation source generally comprises only a few nuclear materials, the underlying Poisson model is sparse, i.e., most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations covering a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with SNR as low as -15 dB. The proposed method is shown to be variable-selection consistent, in the framework of increasing detection time and under mild regularity conditions. We also study the problem of testing for shielding, i.e., the presence of intervening materials that attenuate the gamma ray signal. We show that, as detection time increases to infinity, the Lagrange multiplier test, the likelihood ratio test, and the Wald test are asymptotically equivalent under the null hypothesis, and that their asymptotic null distribution is chi-square. We also derive the local power of these tests. Finally, we develop a nonparametric approach for detecting spectra indicative of the presence of SNM. This approach characterizes the shape change in a spectrum relative to background radiation by means of a dissimilarity function that captures the complete shape change of a spectrum from the background, over all energy channels. We derive the asymptotic null distributions of the tests in terms of functionals of the Brownian bridge. Simulation results show that the proposed approach is very powerful and promising for detecting weak signals; it is able to accurately detect weak signals with SNR as low as -37 dB.
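A minimal sketch of the sparsity-promoting idea, assuming a known library of subspectra, an identity-link Poisson model, and an L1 penalty minimized by proximal gradient descent; the library matrix, background level, penalty weight, and step size are all toy assumptions, not the thesis's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_nuclides = 200, 10
A = rng.uniform(0.0, 1.0, size=(n_channels, n_nuclides))  # library subspectra
beta_true = np.zeros(n_nuclides)
beta_true[[2, 7]] = [3.0, 1.5]                            # two nuclides present
background = 1.0
counts = rng.poisson(A @ beta_true + background)          # observed spectrum

def prox_l1_nonneg(v, t):
    # Soft-thresholding (L1 proximal step), restricted to nonnegative coefficients.
    return np.maximum(v - t, 0.0)

beta = np.zeros(n_nuclides)
lam, step = 10.0, 1e-4   # penalty weight and step size, tuned by hand for this toy
for _ in range(20000):
    mu = A @ beta + background
    grad = A.T @ (1.0 - counts / mu)   # gradient of the Poisson negative log-likelihood
    beta = prox_l1_nonneg(beta - step * grad, step * lam)

print(np.round(beta, 2))   # coefficients of absent nuclides are driven to (near) zero
```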
113

Contributions to the theory and practice of hypothesis testing

Sriananthakumar, Sivagowry, 1968- January 2000 (has links)
Abstract not available
114

Estimation and Inference for Quantile Regression of Longitudinal Data : With Applications in Biostatistics

Karlsson, Andreas January 2006 (has links)
This thesis consists of four papers dealing with estimation and inference for quantile regression of longitudinal data, with an emphasis on nonlinear models. The first paper extends the idea of quantile regression estimation from the case of cross-sectional data with independent errors to the case of linear or nonlinear longitudinal data with dependent errors, using a weighted estimator. The performance of different weights is evaluated, and a comparison is also made with the corresponding mean regression estimator using the same weights. The second paper examines the use of bootstrapping for bias correction and calculations of confidence intervals for parameters of the quantile regression estimator when longitudinal data are used. Different weights, bootstrap methods, and confidence interval methods are used. The third paper is devoted to evaluating bootstrap methods for constructing hypothesis tests for parameters of the quantile regression estimator using longitudinal data. The focus is on testing the equality between two groups of one or all of the parameters in a regression model for some quantile using single or joint restrictions. The tests are evaluated regarding both their significance level and their power. The fourth paper analyzes seven longitudinal data sets from different parts of the biostatistics area by quantile regression methods in order to demonstrate how new insights can emerge on the properties of longitudinal data from using quantile regression methods. The quantile regression estimates are also compared and contrasted with the least squares mean regression estimates for the same data set. In addition to looking at the estimates, confidence intervals and hypothesis testing procedures are examined.
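A minimal sketch of a weighted quantile regression fit on an unbalanced longitudinal panel, assuming a linear model and minimizing the asymmetric check loss directly; the inverse within-subject counts used as weights are an illustrative choice, not the papers' specific weighting scheme.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_subjects = 30
n_obs = rng.integers(3, 8, size=n_subjects)                  # unbalanced panel
subject = np.repeat(np.arange(n_subjects), n_obs)
x = rng.uniform(0.0, 1.0, size=subject.size)
u = np.repeat(rng.normal(0.0, 0.5, size=n_subjects), n_obs)  # subject effects
y = 1.0 + 2.0 * x + u + rng.normal(0.0, 0.3, size=subject.size)  # dependent errors

def check_loss(res, tau):
    # Asymmetric absolute loss whose minimizer is the tau-th regression quantile.
    return np.where(res >= 0, tau * res, (tau - 1.0) * res)

def weighted_qr(tau, w):
    obj = lambda b: np.sum(w * check_loss(y - b[0] - b[1] * x, tau))
    return minimize(obj, x0=np.zeros(2), method="Nelder-Mead").x

w = 1.0 / np.bincount(subject)[subject]   # illustrative: downweight longer series
print(weighted_qr(0.5, w))                # median regression: roughly [1, 2]
```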
115

Sequence alignment

Chia, Nicholas Lee-Ping, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 80-87).
116

Testing the unit root hypothesis in nonlinear time series and panel models

Sandberg, Rickard January 2004 (has links)
The thesis contains four chapters: Testing parameter constancy in unit root autoregressive models against continuous change; Dickey-Fuller type tests against nonlinear dynamic models; Inference for unit roots in a panel smooth transition autoregressive model where the time dimension is fixed; and Testing unit roots in nonlinear dynamic heterogeneous panels. In Chapter 1 we derive tests for parameter constancy when the data generating process is non-stationary against the hypothesis that the parameters of the model change smoothly over time. To obtain the asymptotic distributions of the tests, we generalize many existing theoretical results and introduce new ones in the area of unit roots. The results are derived under the assumption that the error term is strong mixing. Small sample properties of the tests are investigated, and in particular, the power performance is satisfactory. In Chapter 2 we introduce several statistics for testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure, and the trend. We derive analytical limiting distributions for all tests. Finite sample properties are examined. The performance of the tests is compared to that of the classical unit root tests of Dickey-Fuller and Phillips and Perron, and is found to be superior in terms of power. In Chapter 3 we derive a unit root test against a Panel Logistic Smooth Transition Autoregressive (PLSTAR) model. The analysis concentrates on the case where the time dimension is fixed and the cross-section dimension tends to infinity. Under the null hypothesis of a unit root, we show that the LSDV estimator of the autoregressive parameter in the linear component of the model is inconsistent due to the inclusion of fixed effects. The test statistic, adjusted for this inconsistency, has an asymptotic normal distribution whose first two moments are calculated analytically. To complete the analysis, finite sample properties of the test are examined. We highlight scenarios under which the traditional panel unit root tests of Harris and Tzavalis have inferior or merely reasonable power compared to our test. In Chapter 4 we present a unit root test against a nonlinear dynamic heterogeneous panel with each country modelled as an LSTAR model. All parameters are viewed as country specific. We allow for serially correlated residuals over time and heterogeneous variance among countries. The test is derived under three special cases: (i) the number of countries and the number of observations over time are fixed, (ii) the number of observations over time is fixed and the number of countries tends to infinity, and (iii) the number of observations over time tends to infinity first, and thereafter the number of countries. Small sample properties of the test show modest size distortions and satisfactory power, superior to the Im, Pesaran and Shin t-type test. We also show clear improvements in power compared to a univariate unit root test allowing for non-linearities under the alternative hypothesis. / Diss. Stockholm : Handelshögskolan, 2004
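As a point of reference for the smooth-transition alternatives above, the sketch below applies the classical augmented Dickey-Fuller test to a simulated random walk and to an AR(1) series with a smooth logistic level shift; the series, parameters, and specification are illustrative assumptions, not the thesis's designs.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
e = rng.normal(size=500)
random_walk = np.cumsum(e)               # unit root under the null

lstar_like = np.empty(500)               # stationary around a smoothly shifting level
lstar_like[0] = e[0]
for t in range(1, 500):
    shift = 1.0 / (1.0 + np.exp(-(t - 250) / 25.0))   # logistic transition
    lstar_like[t] = 2.0 * shift + 0.5 * lstar_like[t - 1] + e[t]

for name, series in [("random walk", random_walk), ("LSTAR-type", lstar_like)]:
    stat, pval = adfuller(series, regression="c")[:2]
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pval:.3f}")
```

Smooth level shifts of this kind are the sort of alternative against which the chapters report power gains over the Dickey-Fuller and Phillips-Perron tests.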
117

Nonparametric Inferences for the Hazard Function with Right Truncation

Akcin, Haci Mustafa 03 May 2013 (has links)
Incompleteness is a major feature of time-to-event data. As one type of incompleteness, truncation refers to the unobservability of the time-to-event variable because it is smaller (or greater) than the truncation variable. A truncated sample always involves left and right truncation. Left truncation has been studied extensively, while right truncation has not received the same level of attention. In one of the earliest studies of right truncation, Lagakos et al. (1988) proposed to transform a right truncated variable into a left truncated variable and then apply existing methods to the transformed variable. The reverse-time hazard function is introduced through this transformation; however, this quantity does not have a natural interpretation, and gaps remain in inference for the regular forward-time hazard function with right truncated data. This dissertation discusses variance estimation of the cumulative hazard estimator, a one-sample log-rank test, and comparison of hazard rate functions among finitely many independent samples in the context of right truncation. First, the relation between the reverse- and forward-time cumulative hazard functions is clarified. This relation leads to nonparametric inference for the cumulative hazard function. Jiang (2010) recently conducted research in this direction and proposed two variance estimators of the cumulative hazard estimator. Some revisions to these variance estimators are suggested in this dissertation and evaluated in a Monte Carlo study. Second, this dissertation studies hypothesis testing for right truncated data. A series of tests is developed with the hazard rate function as the target quantity. A one-sample log-rank test is discussed first, followed by a family of weighted tests for comparisons among K independent samples. Particular weight functions lead to the log-rank, Gehan, and Tarone-Ware tests, and these three tests are evaluated in a Monte Carlo study. Finally, this dissertation studies nonparametric inference for the hazard rate function with right truncated data. The kernel smoothing technique is utilized in estimating the hazard rate function. A Monte Carlo study investigates the uniform kernel smoothed estimator and its variance estimator. The uniform, Epanechnikov, and biweight kernel estimators are implemented in an example using blood-transfusion-related AIDS data.
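A minimal sketch of reverse-time risk sets for right truncated data, in the spirit of the Lagakos et al. (1988) reversal discussed above, using a simulated truncation scheme; the Nelson-Aalen-type estimator of the cumulative reverse-time hazard shown here is the transformed-variable starting point, not the dissertation's forward-time procedures.

```python
import numpy as np

rng = np.random.default_rng(4)
x_all = rng.exponential(2.0, size=2000)   # latent event times
t_all = rng.uniform(0.0, 6.0, size=2000)  # truncation times
keep = x_all <= t_all                     # right truncation: X observed only if X <= T
x, t = x_all[keep], t_all[keep]

times = np.sort(np.unique(x))
d = np.array([(x == s).sum() for s in times])               # events at each time
R = np.array([((x <= s) & (s <= t)).sum() for s in times])  # reverse-time risk sets

# Cumulative reverse-time hazard: accumulate increments from the right.
rev_cum_haz = np.cumsum((d / R)[::-1])[::-1]
print(rev_cum_haz[:5])   # estimates near the left tail of the support
```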
118

Justifying Slavery: An Exploration of Self-Deception Mechanisms in Proslavery Argument in the Antebellum South

Tenenbaum, Peri 01 April 2013 (has links)
This work explores how proslavery theorists in the antebellum South were able to support slavery despite overwhelming evidence that it was immoral. Through non-intentional self-deception, slavery supporters tested the hypothesis that slavery was good in a motivationally biased manner that aligned with their interests and desires.
