111

An Empirical Investigation of Marascuilo's Ú₀ Test with Unequal Sample Sizes and Small Samples

Milligan, Kenneth W. 08 1900 (has links)
The study seeks to determine the effect on the Marascuilo Ú₀ statistic of violating the small-sample assumption. A Monte Carlo simulation technique was employed to vary sample size and the degree of inequality among sample sizes within experiments and to determine the effect of these conditions. Twenty-two simulations, with 1200 trials each, were used. The following conclusion appeared appropriate: the Marascuilo Ú₀ statistic should not be used with small sample sizes, and it is recommended that the statistic be used only if sample sizes are larger than ten.
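A minimal Monte Carlo sketch of the kind of simulation described above, assuming a generic chi-square test of equal proportions in place of the Marascuilo Ú₀ statistic (whose exact form is not reproduced here); the group sizes, nominal level, and trial count are illustrative.

```python
import numpy as np
from scipy import stats

def empirical_type_I_error(sample_sizes, p=0.5, n_trials=1200, alpha=0.05, seed=0):
    """Simulate groups from one Bernoulli(p) population and record how often the
    test rejects the (true) null hypothesis of equal proportions."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_trials):
        successes = [rng.binomial(n, p) for n in sample_sizes]
        table = np.array([successes, [n - s for n, s in zip(sample_sizes, successes)]])
        _, p_value, _, _ = stats.chi2_contingency(table)
        rejections += p_value < alpha
    return rejections / n_trials

# Small, unequal group sizes: the empirical level can drift away from alpha.
print(empirical_type_I_error([5, 10, 25]))
```

Repeating such runs over a grid of sample-size configurations gives the kind of empirical evidence on small-sample behavior that the study reports.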
112

Convex and non-convex optimizations for recovering structured data: algorithms and analysis

Cho, Myung 15 December 2017 (has links)
Optimization theories and algorithms are used to efficiently find optimal solutions under constraints. In the era of “Big Data”, the amount of data is skyrocketing, and this overwhelms conventional techniques for solving large-scale and distributed optimization problems. By taking advantage of structural information in data representations, this thesis offers convex and non-convex optimization solutions to various large-scale problems such as super-resolution, sparse signal processing, hypothesis testing, machine learning, and treatment planning for brachytherapy.

Super-resolution: Super-resolution aims to recover a signal expressed as a sum of a few Dirac delta functions in the time domain from measurements in the frequency domain. The challenge is that the possible locations of the delta functions lie in the continuous domain [0,1). To enhance recovery performance, we considered deterministic and probabilistic prior information on the locations of the delta functions and provided novel semidefinite programming formulations under this information. We also proposed block iterative reweighted methods to improve recovery performance without prior information. We further considered phaseless measurements, motivated by applications in optical microscopy and X-ray crystallography. By using the lifting method and introducing squared atomic norm minimization, we can achieve super-resolution using only low-frequency magnitude information. Finally, we proposed non-convex algorithms using structured matrix completion.

Sparse signal processing: L1 minimization is well known for promoting sparse structure in recovered signals. The Null Space Condition (NSC) for L1 minimization is a necessary and sufficient condition on sensing matrices such that a sparse signal can be uniquely recovered via L1 minimization. However, verifying NSC is a non-convex problem and known to be NP-hard. We proposed enumeration-based polynomial-time algorithms to provide performance bounds on NSC, as well as efficient algorithms to verify NSC precisely using the branch-and-bound method.

Hypothesis testing: Recovering the statistical structure of random variables is important in applications such as cognitive radio. Our goal is to distinguish between two types of random variables among n >> 1 random variables. Testing each random variable one by one takes a great deal of time and effort, so we proposed hypothesis testing using mixed measurements to reduce sample complexity. We also designed efficient algorithms to solve large-scale problems.

Machine learning: When feature data are stored in a tree-structured network with communication delays, quickly finding an optimal solution to the regularized loss minimization problem is challenging. In this scenario, we studied a communication-efficient stochastic dual coordinate ascent method and its convergence analysis.

Treatment planning: In Rotating-Shield Brachytherapy (RSBT) for cancer treatment, there is a compelling need to obtain optimal treatment plans quickly enough to enable clinical use. However, because of the degrees of freedom in RSBT, finding an optimal treatment plan is difficult. To this end, we designed a first-order dose optimization method based on the alternating direction method of multipliers and reduced the execution time by a factor of about 18 compared with previous work.
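As a loose illustration of the L1-minimization idea mentioned above (not the thesis's algorithms), the following sketch recovers a sparse vector by iterative soft-thresholding (ISTA); the problem sizes, noise level, and regularization weight lam are assumptions for the example.

```python
import numpy as np

def ista_lasso(A, y, lam=0.1, n_iter=500):
    """Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 via proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))                   # underdetermined sensing matrix
x_true = np.zeros(100)
x_true[[3, 27, 80]] = [1.5, -2.0, 0.7]               # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista_lasso(A, y)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])            # ideally recovers the support {3, 27, 80}
```

Conditions such as the NSC discussed above determine when this kind of L1 recovery is guaranteed to succeed.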
113

Improving Hypothesis Testing Skills: Evaluating a General Purpose Classroom Exercise with Biology Students in Grade 9.

Wilder, Michael Gregg 01 January 2011 (has links)
There is an increased emphasis on inquiry in national and Oregon state high school science standards. As hypothesis testing is a key component of these new standards, instructors need effective strategies to improve students' hypothesis testing skills. Recent research suggests that classroom exercises may prove useful. A general purpose classroom activity called the thought experiment is proposed. The effectiveness of 7 hours of instruction using this exercise was measured in an introductory biology course, using a quasi-experimental contrast group design. An instrument for measuring hypothesis testing skill is also proposed. Treatment (n=18) and control (n=10) sections drawn from preexisting high school classes were pre- and post-assessed using the proposed Multiple Choice Assessment of Deductive Reasoning. Both groups were also post-assessed by individually completing a written, short-answer format hypothesis testing exercise. Treatment section mean posttest scores on contextualized, multiple choice problem sets were significantly higher than those of the control section. Mean posttest scores did not significantly differ between sections on abstract deductive logic problems or the short answer format hypothesis testing exercise.
114

Probabilistic pairwise model comparisons based on discrepancy measures and a reconceptualization of the p-value

Riedle, Benjamin N. 01 May 2018 (has links)
Discrepancy measures are often employed in problems involving the selection and assessment of statistical models. A discrepancy gauges the separation between a fitted candidate model and the underlying generating model. In this work, we consider pairwise comparisons of fitted models based on a probabilistic evaluation of the ordering of the constituent discrepancies. An estimator of the probability is derived using the bootstrap. In the framework of hypothesis testing, nested models are often compared on the basis of the p-value. Specifically, the simpler null model is favored unless the p-value is sufficiently small, in which case the null model is rejected and the more general alternative model is retained. Using suitably defined discrepancy measures, we mathematically show that, in general settings, the Wald, likelihood ratio (LR) and score test p-values are approximated by the bootstrapped discrepancy comparison probability (BDCP). We argue that the connection between the p-value and the BDCP leads to potentially new insights regarding the utility and limitations of the p-value. The BDCP framework also facilitates discrepancy-based inferences in settings beyond the limited confines of nested model hypothesis testing.
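A rough sketch of a bootstrapped model-comparison probability in the spirit described above, with AIC standing in for the discrepancy estimate and ordinary nested linear models; the helper bootstrap_comparison_probability and all data are illustrative, not the dissertation's BDCP estimator.

```python
import numpy as np
import statsmodels.api as sm

def bootstrap_comparison_probability(y, X_null, X_alt, n_boot=1000, seed=0):
    """Fraction of bootstrap resamples in which the simpler (null) model attains
    the smaller estimated discrepancy, here proxied by AIC."""
    rng = np.random.default_rng(seed)
    n = len(y)
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)               # case resampling with replacement
        aic_null = sm.OLS(y[idx], X_null[idx]).fit().aic
        aic_alt = sm.OLS(y[idx], X_alt[idx]).fit().aic
        wins += aic_null <= aic_alt
    return wins / n_boot

rng = np.random.default_rng(1)
x = rng.standard_normal(100)
y = 1.0 + 0.3 * x + rng.standard_normal(100)
X_null = np.ones((100, 1))                             # intercept-only null model
X_alt = sm.add_constant(x)                             # intercept + slope (alternative model)
print(bootstrap_comparison_probability(y, X_null, X_alt))
```

With a genuinely nonzero slope, the reported probability tends to be small, loosely mirroring a small p-value for the nested comparison, which is the kind of connection the work formalizes.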
115

Statistical detection with weak signals via regularization

Li, Jinzheng 01 July 2012 (has links)
There has been increasing interest in uncovering smuggled nuclear materials in connection with the War on Terror. Detection of special nuclear material (SNM) hidden in cargo containers is a major challenge in national and international security. We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides from a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra. These subspectra are obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclei. We propose a Poisson regression model for deducing which nuclei are present in the observed spectrum. In recognition that a radiation source generally comprises only a few nuclear materials, the underlying Poisson model is sparse, i.e., most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations using a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with SNR as low as -15 dB. The proposed method is shown to be variable-selection consistent, in the framework of increasing detection time and under mild regularity conditions. We study the problem of testing for shielding, i.e., the presence of intervening materials that attenuate the gamma-ray signal. We show that, as detection time increases to infinity, the Lagrange multiplier test, the likelihood ratio test, and the Wald test are asymptotically equivalent under the null hypothesis, and that their asymptotic null distribution is chi-square. We also derive the local power of these tests. We also develop a nonparametric approach for detecting spectra indicative of the presence of SNM. This approach characterizes the shape change in a spectrum relative to background radiation. We do this by proposing a dissimilarity function that characterizes the complete shape change of a spectrum from the background, over all energy channels. We derive the null asymptotic test distributions in terms of functionals of the Brownian bridge. Simulation results show that the proposed approach is very powerful and promising for detecting weak signals. It is able to accurately detect weak signals with SNR as low as -37 dB.
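A minimal sketch of sparsity-promoting Poisson regression in the spirit of the approach above (not the thesis's penalized-likelihood algorithm), using EM-style multiplicative updates with the L1 penalty folded into the denominator; the library matrix A, penalty lam, and decision threshold are illustrative assumptions.

```python
import numpy as np

def sparse_poisson_fit(A, y, lam=1.0, n_iter=1000):
    """EM-style multiplicative updates (Richardson-Lucy form, with the L1 penalty
    added to the denominator 'one-step-late' style) for y ~ Poisson(A @ beta), beta >= 0."""
    beta = np.full(A.shape[1], 1.0)
    col_sums = A.sum(axis=0)
    for _ in range(n_iter):
        mu = A @ beta + 1e-12                          # current fitted spectrum
        beta = beta * (A.T @ (y / mu)) / (col_sums + lam)
    return beta

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(200, 20))              # columns: library sub-spectra (illustrative)
beta_true = np.zeros(20)
beta_true[[2, 7]] = [30.0, 12.0]                       # only two "nuclides" actually present
y = rng.poisson(A @ beta_true)                         # observed counts per energy channel
print(np.nonzero(sparse_poisson_fit(A, y, lam=2.0) > 1.0)[0])   # ideally reports indices 2 and 7
```

The penalty drives the coefficients of absent sub-spectra toward zero, which is the sparsity behavior the abstract describes.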
116

Contributions to the theory and practice of hypothesis testing

Sriananthakumar, Sivagowry, 1968- January 2000 (has links)
Abstract not available
117

Estimation and Inference for Quantile Regression of Longitudinal Data: With Applications in Biostatistics

Karlsson, Andreas January 2006 (has links)
This thesis consists of four papers dealing with estimation and inference for quantile regression of longitudinal data, with an emphasis on nonlinear models.

The first paper extends the idea of quantile regression estimation from the case of cross-sectional data with independent errors to the case of linear or nonlinear longitudinal data with dependent errors, using a weighted estimator. The performance of different weights is evaluated, and a comparison is also made with the corresponding mean regression estimator using the same weights.

The second paper examines the use of bootstrapping for bias correction and calculation of confidence intervals for parameters of the quantile regression estimator when longitudinal data are used. Different weights, bootstrap methods, and confidence interval methods are used.

The third paper is devoted to evaluating bootstrap methods for constructing hypothesis tests for parameters of the quantile regression estimator using longitudinal data. The focus is on testing the equality between two groups of one or all of the parameters in a regression model for some quantile using single or joint restrictions. The tests are evaluated regarding both their significance level and their power.

The fourth paper analyzes seven longitudinal data sets from different parts of the biostatistics area by quantile regression methods in order to demonstrate how new insights into the properties of longitudinal data can emerge from using quantile regression methods. The quantile regression estimates are also compared and contrasted with the least squares mean regression estimates for the same data sets. In addition to looking at the estimates, confidence intervals and hypothesis testing procedures are examined.
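A minimal sketch of quantile regression with a case-resampling bootstrap percentile interval for the slope, ignoring the longitudinal dependence and weighting that the thesis addresses; the simulated data and the 0.75 quantile are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x + (0.5 + 0.1 * x) * rng.standard_normal(200)   # heteroscedastic noise
X = sm.add_constant(x)

def quantile_slope(y, X, q=0.75):
    """Slope estimate from a quantile regression at quantile q."""
    return sm.QuantReg(y, X).fit(q=q).params[1]

# Percentile bootstrap confidence interval for the slope at the 0.75 quantile.
boot_slopes = [quantile_slope(y[idx], X[idx])
               for idx in (rng.integers(0, len(y), len(y)) for _ in range(500))]
print(quantile_slope(y, X), np.percentile(boot_slopes, [2.5, 97.5]))
```

With heteroscedastic errors the slope varies across quantiles, which is the sort of insight the fourth paper draws out of longitudinal data.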
118

Sequence alignment

Chia, Nicholas Lee-Ping, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 80-87).
119

Testing the unit root hypothesis in nonlinear time series and panel models

Sandberg, Rickard January 2004 (has links)
The thesis contains four chapters: Testing parameter constancy in unit root autoregressive models against continuous change; Dickey-Fuller type tests against nonlinear dynamic models; Inference for unit roots in a panel smooth transition autoregressive model where the time dimension is fixed; and Testing unit roots in nonlinear dynamic heterogeneous panels.

In Chapter 1 we derive tests for parameter constancy when the data generating process is non-stationary against the hypothesis that the parameters of the model change smoothly over time. To obtain the asymptotic distributions of the tests, we generalize many existing theoretical results in the area of unit roots and introduce new ones. The results are derived under the assumption that the error term is strongly mixing. Small-sample properties of the tests are investigated, and in particular the power performance is satisfactory.

In Chapter 2 we introduce several test statistics for testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure, and the trend. We derive analytical limiting distributions for all tests. Finite sample properties are examined. The performance of the tests is compared to that of the classical Dickey-Fuller and Phillips-Perron unit root tests and is found to be superior in terms of power.

In Chapter 3 we derive a unit root test against a Panel Logistic Smooth Transition Autoregressive (PLSTAR) model. The analysis concentrates on the case where the time dimension is fixed and the cross-section dimension tends to infinity. Under the null hypothesis of a unit root, we show that the LSDV estimator of the autoregressive parameter in the linear component of the model is inconsistent due to the inclusion of fixed effects. The test statistic, adjusted for the inconsistency, has an asymptotic normal distribution whose first two moments are calculated analytically. To complete the analysis, finite sample properties of the test are examined. We highlight scenarios under which the traditional panel unit root tests of Harris and Tzavalis have inferior or merely reasonable power compared to our test.

In Chapter 4 we present a unit root test against a non-linear dynamic heterogeneous panel with each country modelled as an LSTAR model. All parameters are viewed as country specific. We allow for serially correlated residuals over time and heterogeneous variance among countries. The test is derived under three special cases: (i) the number of countries and the number of observations over time are fixed, (ii) the number of observations over time is fixed and the number of countries tends to infinity, and (iii) first the number of observations over time tends to infinity and thereafter the number of countries. Small-sample properties of the test show modest size distortions and satisfactory power, superior to the Im, Pesaran and Shin t-type test. We also show clear improvements in power compared to a univariate unit root test allowing for non-linearities under the alternative hypothesis. / Diss. Stockholm : Handelshögskolan, 2004
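For orientation, a standard linear augmented Dickey-Fuller test (not the smooth-transition alternatives developed in the thesis) applied to a simulated random walk and a stationary AR(1) series; the sample size and AR coefficient are illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
eps = rng.standard_normal(500)
random_walk = np.cumsum(eps)                 # unit root: the null should not be rejected
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.7 * ar1[t - 1] + eps[t]       # stationary AR(1): the unit root is rejected

for name, series in [("random walk", random_walk), ("AR(1), phi=0.7", ar1)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF stat = {stat:.2f}, p-value = {pvalue:.3f}")
```

The thesis replaces the linear alternative implicit in this classical test with smooth-transition and panel alternatives, and compares power against exactly this kind of benchmark.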
120

NONPARAMETRIC INFERENCES FOR THE HAZARD FUNCTION WITH RIGHT TRUNCATION

Akcin, Haci Mustafa 03 May 2013 (has links)
Incompleteness is a major feature of time-to-event data. As one type of incompleteness, truncation refers to the unobservability of the time-to-event variable because it is smaller (or greater) than the truncation variable. A truncated sample always involves left and right truncation. Left truncation has been studied extensively, while right truncation has not received the same level of attention. In one of the earliest studies on right truncation, Lagakos et al. (1988) proposed to transform a right-truncated variable to a left-truncated variable and then apply existing methods to the transformed variable. The reverse-time hazard function is introduced through this transformation; however, this quantity does not have a natural interpretation. There remain gaps in the inferences for the regular forward-time hazard function with right-truncated data. This dissertation discusses variance estimation for the cumulative hazard estimator, a one-sample log-rank test, and the comparison of hazard rate functions among a finite number of independent samples in the context of right truncation. First, the relation between the reverse- and forward-time cumulative hazard functions is clarified. This relation leads to nonparametric inference for the cumulative hazard function. Jiang (2010) recently conducted research in this direction and proposed two variance estimators of the cumulative hazard estimator. Some revision to the variance estimators is suggested in this dissertation and evaluated in a Monte Carlo study. Second, this dissertation studies hypothesis testing for right-truncated data. A series of tests is developed with the hazard rate function as the target quantity. A one-sample log-rank test is first discussed, followed by a family of weighted tests for comparison among K independent samples. Particular weight functions lead to the log-rank, Gehan, and Tarone-Ware tests, and these three tests are evaluated in a Monte Carlo study. Finally, this dissertation studies nonparametric inference for the hazard rate function with right-truncated data. The kernel smoothing technique is utilized in estimating the hazard rate function. A Monte Carlo study investigates the uniform kernel smoothed estimator and its variance estimator. The uniform, Epanechnikov, and biweight kernel estimators are implemented in an example of blood-transfusion-infected AIDS data.
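A minimal sketch of a kernel-smoothed hazard estimate built from Nelson-Aalen increments, shown here for complete (untruncated) data only; the dissertation's estimators handle the right-truncated case. The exponential example and the uniform-kernel bandwidth are illustrative.

```python
import numpy as np

def smoothed_hazard(event_times, grid, bandwidth=1.0):
    """Smooth the Nelson-Aalen increments 1 / (number at risk) with a uniform kernel."""
    times = np.sort(np.asarray(event_times, dtype=float))
    n = len(times)
    at_risk = n - np.arange(n)                     # risk-set size just before each event
    increments = 1.0 / at_risk                     # Nelson-Aalen jump at each event time
    hazard = np.zeros_like(grid, dtype=float)
    for t, jump in zip(times, increments):
        in_window = np.abs(grid - t) <= bandwidth
        hazard[in_window] += jump / (2 * bandwidth)   # uniform kernel K(u) = 1/(2b) on |u| <= b
    return hazard

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=300)      # true hazard is constant at 0.5
grid = np.linspace(0.5, 4.0, 8)
print(np.round(smoothed_hazard(sample, grid), 3))  # roughly 0.5 away from the boundaries
```

Under right truncation the risk sets are defined differently (via the reverse-time transformation discussed above), but the smoothing step itself has the same form.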
