91

Some results on experimental designs when the usual assumptions are invalid

Sweeny, Hale Caterson January 1956 (has links)
Ph. D.
92

The comparison of the sensitivities of experiments using different scales of measurement

Schumann, D. E. W. January 1956 (has links)
Ph. D.
93

NEURAL CORRELATES OF DECISION UNCERTAINTY AND MEMORY ENHANCEMENT DURING HYPOTHESIS TESTING

Shen, Xinxu, 0000-0001-7319-641X 08 1900 (has links)
Humans are motivated to actively seek information to reduce uncertainty, which has been shown to alter episodic memory (Shen et al., 2022). Specifically, we found that uncertainty during hypothesis testing was both linearly and quadratically related to episodic memory. Yet, little is known about the neural mechanisms underlying how hypothesis testing relates to subsequent memory. In the current fMRI study, 40 participants were presented with three multi-dimensional keys. They were instructed to figure out the target feature of a key to open a treasure chest. A reinforcement learning model was used to capture decision uncertainty around different features of the keys. We replicated our prior findings, showing that a reinforcement learning model captured hypothesis-testing behavior and that there was a quadratic relationship between decision uncertainty and memory, such that memory was enhanced at intermediate levels of decision uncertainty. In terms of neural results, we found that the quadratic term of decision uncertainty was coded in the ventral striatum. We also found that decreasing decision uncertainty was related to greater activation in the ventral striatum, anterior and posterior hippocampus, and ventromedial prefrontal cortex, while increasing decision uncertainty was related to greater activation in the ventral tegmental area. More importantly, activation in the ventral striatum in response to the quadratic term of decision uncertainty correlated with the quadratic relationship between decision uncertainty and memory, such that participants with greater ventral striatum activation showed a more pronounced quadratic relationship between decision uncertainty and memory. Together, this work extends existing research on how uncertainty influences memory via changes in motivation within the framework of hypothesis testing. / Psychology
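As an illustration of the quadratic (inverted-U) analysis the abstract describes, here is a minimal sketch, not taken from the study, of regressing a binary subsequent-memory outcome on linear and quadratic decision-uncertainty terms; the simulated data, effect sizes, and variable names are all illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated trial-level data: decision uncertainty in [0, 1] (e.g., derived from
# an RL model's feature-value estimates) and a binary subsequent-memory outcome.
n_trials = 400
uncertainty = rng.uniform(0, 1, n_trials)
# Assumed inverted-U effect: memory peaks at intermediate uncertainty.
logit_p = -0.5 + 4.0 * uncertainty - 4.0 * uncertainty**2
remembered = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression with linear and quadratic uncertainty terms.
X = sm.add_constant(np.column_stack([uncertainty, uncertainty**2]))
fit = sm.Logit(remembered, X).fit(disp=False)
print(fit.params)  # a negative quadratic coefficient indicates an inverted-U relationship
```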
94

Resilient Navigation through Jamming Detection and Measurement Error Modeling

Jada, Sandeep Kiran 28 October 2024 (has links)
Global Navigation Satellite Systems (GNSS) provide critical positioning, navigation, and timing (PNT) services across various sectors. GNSS signals are weak when they reach Earth from Medium Earth Orbit (MEO), making them vulnerable to jamming. The jamming threat has been growing over the past decade, putting critical services at risk. In response, the National Space-Based PNT Advisory Board and the White House advocate for policies and technologies to protect, toughen, and augment GPS for a more resilient PNT. Time-sequential estimation improves navigation accuracy and allows GNSS to be augmented with sensors that are difficult to interfere with. Safety-critical navigation applications (e.g., GNSS/INS-based aircraft localization) that use time-sequential estimation require high-integrity measurement error time correlation models to compute estimation error bounds. In response, two new methods to identify high-integrity measurement error time correlation models from experimental data are developed and evaluated in this thesis. As opposed to bounding autocorrelation functions in the time domain and power spectra in the frequency domain, the methods proposed in this thesis bound lagged-product distributions in the time domain and scaled-periodogram distributions in the frequency domain. The proposed methods can identify tight bounding models from empirical data, resulting in tighter estimation error bounds. The sample distributions are bounded using theoretical first-order Gauss-Markov process (FOGMP) model distributions derived in this thesis. FOGMP models provide a means to account for error time correlation while being easily incorporated into linear estimators. The two methods were evaluated using simulated and experimental GPS measurement error data collected in a mild multipath environment. To protect and alert GNSS end users of jamming, this thesis proposes and evaluates an autonomous algorithm to detect jamming using publicly available data from large receiver networks. The algorithm uses carrier-to-noise ratio (C/N0)-based jamming detectors that are optimal, self-calibrating, and receiver-independent while adhering to a predefined false alert rate. This algorithm was tested using data from networks with hundreds of receivers, revealing patterns indicative of intentional interference, which provided an opportunity to validate the detector. This validation activity, described in this thesis, consists of designing a portable hardware setup, deriving an optimal power-based jamming monitor for independent detection, and time-frequency analysis of wideband RF (WBRF) data collected during jamming events. The analysis of WBRF data from a genuine jamming event detected while driving on I-25 in Denver, Colorado, USA, revealed power variations resembling a personal privacy device (PPD), validating the C/N0 detector's result. Finally, this thesis investigates the cause of recurring false alerts in our power-based jamming detectors. These false alerts are caused by a few short pulses of increased power, which other researchers also observe. Time-frequency analysis of signals from the pulses revealed binary data encoded using frequency shift keying (FSK) in the GPS L1 band. Various experiments confirmed the signals are not aliases of out-of-band signals. A survey of similar encoded messages identified the source as car key fobs and other devices transmitting at 315 MHz, nowhere near the GPS L1 band, but with an unattenuated 5th harmonic in the GPS L1 band.
The RF emission regulations were analyzed to identify mitigation. / Doctor of Philosophy / Global Navigation Satellite Systems (GNSS) have become integral to modern-day life. Many essential services rely on GNSS-provided Positioning, Navigation, and Timing (PNT) services: power grids rely on accurate GNSS-provided timing for synchronization; stock markets use it for time-stamping trades; aircraft and ships use GNSS to regularly correct accumulated position errors; to name a few. In addition, the availability of cheap and accessible PNT services combined with mobile internet spawned new service sectors through mobile applications. A 2019 study published by the National Institute of Standards and Technology (NIST) estimates that GPS has generated $1.4 trillion in U.S. economic benefits since the system became available in the 1980s. With the wide adoption of GNSS services come new motives for interference. These motives can range from delivery workers and truck drivers trying to hide their location from their employers to something more nefarious, such as criminals trying to evade law enforcement surveillance. GNSS jamming is a type of interference in which the attacker drowns out the faint GNSS signals, broadcast from medium Earth orbit (MEO) at 20,000 km, with a powerful RF transmitter. Commonly used jamming devices, known as personal privacy devices (PPDs), are cheaply available for as little as $10 on Amazon. Another source of jamming is militaries in conflict zones overseas, jamming GNSS signals over large areas of a country or a city. In the US, two major incidents have disrupted air traffic over busy airspace, in Denver and Dallas. This threat of GNSS interference has grown over the past decade and is only getting worse. The White House and other organizations advocate for policies to protect, toughen, and augment GNSS for a more resilient PNT. This thesis contributes to protecting GNSS frequencies through autonomous algorithms that process publicly available signal quality data from large receiver networks for jamming detection. The autonomous algorithm uses detectors that are self-calibrating and optimal, i.e., minimizing the probability of missed detection while targeting a predefined false alert probability. Several jamming event patterns consistent with intentional interference were detected using this algorithm. The signal-quality-based detectors were validated using an independent power-based optimal jamming detector derived in this thesis. Spurious recurring false alerts triggered the power detector. An investigation described in the thesis discovered that car key fobs and other devices emit RF energy in restricted GPS frequencies. Based on an analysis of FCC regulations for RF transmitters, mitigation is proposed for power-based jamming detectors to prevent false alarms. Time-sequential estimation improves navigation accuracy and allows GNSS to be augmented with sensors that are difficult to interfere with, such as an IMU or LIDAR. Safety-critical navigation applications can benefit from time-sequential estimation, but they require high-integrity measurement error time correlation models to compute bounds on positioning errors. Two new methods to derive high-integrity measurement error time correlation models from experimental data are developed and evaluated in this thesis. These methods can derive tighter bounding models compared to existing methods, reducing the uncertainty in position estimates.
The two methods were implemented and evaluated using simulated and experimental GPS measurement error data collected in a mild multipath environment.
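The first-order Gauss-Markov process (FOGMP) named in the abstract has a standard form; below is a minimal simulation sketch showing its exponentially decaying autocorrelation, with illustrative parameter values (time step, correlation time, standard deviation) that are assumptions, not values from the thesis.

```python
import numpy as np

def simulate_fogmp(n, dt, tau, sigma, seed=0):
    """Simulate a first-order Gauss-Markov process (FOGMP):
    x[k+1] = phi * x[k] + w[k], with phi = exp(-dt/tau) and
    stationary variance sigma**2."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1 - phi**2)          # driving-noise variance
    x = np.empty(n)
    x[0] = rng.normal(0, sigma)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.normal(0, np.sqrt(q))
    return x

# Compare the sample autocorrelation with the theoretical exponential decay.
dt, tau, sigma = 1.0, 30.0, 0.5          # illustrative values only
x = simulate_fogmp(20000, dt, tau, sigma)
for lag in (0, 10, 30, 60):
    sample = np.mean(x[:len(x) - lag] * x[lag:]) / np.var(x)
    theory = np.exp(-lag * dt / tau)
    print(f"lag {lag:3d}: sample {sample:+.3f}  theory {theory:+.3f}")
```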
95

Asymptotic theory for decentralized sequential hypothesis testing problems and sequential minimum energy design algorithm

Wang, Yan 19 May 2011 (has links)
The dissertation investigates the asymptotic theory of decentralized sequential hypothesis testing problems as well as the asymptotic behavior of the Sequential Minimum Energy Design (SMED). The main results are summarized as follows. 1. We develop the first-order asymptotic optimality theory for decentralized sequential multi-hypothesis testing under a Bayes framework. Asymptotically optimal tests are obtained from the class of "two-stage" procedures, and the optimal local quantizers are shown to be the "maximin" quantizers, characterized as a randomization of at most M-1 Unambiguous Likelihood Quantizers (ULQ) when testing M >= 2 hypotheses. 2. We generalize the classical Kullback-Leibler inequality to investigate the effects of quantization on the second-order and other general-order moments of log-likelihood ratios. It is shown that quantization may increase these quantities, but such an increase is bounded by a universal constant that depends on the order of the moment. This result provides a simpler sufficient condition for the asymptotic theory of decentralized sequential detection. 3. We propose a class of multi-stage tests for decentralized sequential multi-hypothesis testing problems and show that, with suitably chosen thresholds at different stages, they retain second-order asymptotic optimality properties when the hypothesis testing problem is "asymmetric." 4. We characterize the asymptotic behavior of the SMED algorithm, particularly the denseness and distributions of the design points. In addition, we propose a simplified version of SMED that is computationally more efficient.
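The dissertation concerns Bayes-optimal two-stage procedures and maximin quantizers; as a much simpler illustration of the decentralized setting, here is a toy sketch in which sensors send one-bit quantized observations and a fusion center runs Wald's sequential probability ratio test on the bit stream. The Gaussian hypotheses, quantizer threshold, and error targets are assumptions for illustration, not the procedures from the thesis.

```python
import numpy as np
from scipy.stats import norm

def decentralized_sprt(truth_mean, alpha=0.01, beta=0.01, seed=0, max_steps=10_000):
    """Toy decentralized sequential test: at each step a sensor transmits the
    one-bit quantization u = 1{x > 0.5} of its observation; the fusion center
    runs Wald's SPRT on the bits.  H0: x ~ N(0,1), H1: x ~ N(1,1)."""
    rng = np.random.default_rng(seed)
    p0 = 1 - norm.cdf(0.5, loc=0.0)    # P(u = 1 | H0)
    p1 = 1 - norm.cdf(0.5, loc=1.0)    # P(u = 1 | H1)
    lo, hi = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for k in range(1, max_steps + 1):
        u = rng.normal(truth_mean, 1.0) > 0.5
        llr += np.log(p1 / p0) if u else np.log((1 - p1) / (1 - p0))
        if llr >= hi:
            return "accept H1", k
        if llr <= lo:
            return "accept H0", k
    return "undecided", max_steps

print(decentralized_sprt(truth_mean=1.0))   # data generated under H1
print(decentralized_sprt(truth_mean=0.0))   # data generated under H0
```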
96

Hypothesis testing and community detection on networks with missingness and block structure

Guilherme Maia Rodrigues Gomes (8086652) 06 December 2019 (has links)
Statistical analysis of networks has grown rapidly over the last few years, with an increasing number of applications. Graph-valued data carries additional information about dependencies, which opens the possibility of modeling highly complex objects in a vast number of fields such as biology (e.g. brain networks, fungi networks, gene co-expression), chemistry (e.g. molecular fingerprints), psychology (e.g. social networks) and many others (e.g. citation networks, word co-occurrences, financial systems, anomaly detection). While the inclusion of graph structure in the analysis can further help inference, even simple statistical tasks on a network are very complex. For instance, the assumption of exchangeability of the nodes or the edges is quite strong, and it brings issues such as sparsity, size bias and poor characterization of the generative process of the data. Solutions to these issues include adding specific constraints and assumptions on the data generation process. In this work, we approach this problem by assuming graphs are globally sparse but locally dense, which allows the exchangeability assumption to hold in local regions of the graph. We consider problems with two types of locality structure: block structure (also framed as multiple graphs or a population of networks) and unstructured sparsity, which can be seen as missing data. For the former, we developed a hypothesis testing framework for weighted aligned graphs and a spectral clustering method for community detection on populations of non-aligned networks. For the latter, we derive an efficient spectral clustering approach to learn the parameters of the zero-inflated stochastic blockmodel. Overall, we found that incorporating multiple locally dense structures leads to more precise and powerful local and global inference. This result indicates that this general modeling scheme allows the exchangeability assumption on the edges to hold while generating more realistic graphs. We give theoretical conditions for our proposed algorithms, evaluate them on synthetic and real-world datasets, and show that our models are able to outperform the baselines in a number of settings.
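As a generic illustration of the spectral-clustering approach mentioned in the abstract (not the zero-inflated blockmodel estimator developed in the dissertation), here is a sketch of adjacency spectral clustering on a simulated two-block stochastic blockmodel; the block sizes and edge probabilities are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def sbm_adjacency(sizes, p_in, p_out, seed=0):
    """Sample an undirected stochastic blockmodel adjacency matrix."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = labels.size
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, 1)
    return (upper + upper.T).astype(float), labels

def spectral_communities(A, k, seed=0):
    """Adjacency spectral clustering: k-means on the scaled top-k eigenvector rows."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:k]          # k leading eigenvalues by magnitude
    embedding = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(embedding)

A, truth = sbm_adjacency([60, 60], p_in=0.3, p_out=0.05)
est = spectral_communities(A, k=2)
# Agreement up to label switching for the two-block case.
print("agreement:", max(np.mean(est == truth), np.mean(est != truth)))
```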
97

Nonparametric tests to detect relationship between variables in the presence of heteroscedastic treatment effects

Tolos, Siti January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Haiyan Wang / Statistical tools to detect nonlinear relationships between variables are commonly needed in practice. The first part of the dissertation presents a test of independence between a response variable, either discrete or continuous, and a continuous covariate after adjusting for heteroscedastic treatment effects. The method first augments each pair of the data for all treatments with a fixed number of nearest neighbors as pseudo-replicates. A test statistic is then constructed by taking the difference of two quadratic forms. Using such differences eliminates the need to estimate any nonlinear regression function, reducing the computational time. Although using a fixed number of nearest neighbors poses significant difficulty in the inference compared to when the number of nearest neighbors goes to infinity, the parametric standardizing rate is obtained for the asymptotic distribution of the proposed test statistics. Numerical studies show that the new test procedure maintains the intended type I error rate and has robust power to detect nonlinear dependency in the presence of outliers. The second part of the dissertation discusses the theory and numerical studies for testing the nonparametric effects of no covariate-treatment interaction and no main covariate effect, based on the decomposition of the conditional mean of a regression function that is potentially nonlinear. A similar test was discussed in Wang and Akritas (2006) for effects defined through the decomposition of the conditional distribution function, but with the number of pseudo-replicates going to infinity. Consequently, their test statistics have slow convergence rates and computational speeds. Both limitations are overcome with the new model and tests. The last part of the dissertation develops theory and numerical studies to test for no covariate-treatment interaction, no simple covariate effect, and no main covariate effect for cases when the number of factor levels and the number of covariate values are large.
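A rough sketch of only the nearest-neighbor augmentation idea described in the abstract, forming pseudo-replicate cells from the responses of the k nearest covariate values; the dissertation's actual test statistic (a difference of two quadratic forms) and its asymptotics are not implemented here, and the simulated data, k, and variable names are illustrative assumptions.

```python
import numpy as np

def nn_pseudo_replicates(x, y, k=5):
    """For each observation, form a pseudo-replicate cell from the responses
    of its k nearest neighbors in covariate space."""
    cells = []
    for xi in x:
        nn = np.argsort(np.abs(x - xi))[:k]   # indices of the k nearest covariate values
        cells.append(y[nn])
    return np.array(cells)

# Toy data with a nonlinear mean and heteroscedastic noise (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(4 * x) + rng.normal(0, 0.1 + 0.3 * x, 200)
cells = nn_pseudo_replicates(x, y, k=5)
# Cell means track the (possibly nonlinear) covariate effect; within-cell
# variances track the local noise level under heteroscedasticity.
print(cells.mean(axis=1)[:3], cells.var(axis=1)[:3])
```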
98

Mathematical Methods for Enhanced Information Security in Treaty Verification

MacGahan, Christopher January 2016 (has links)
Mathematical methods have been developed to perform arms-control-treaty verification tasks for enhanced information security. The purpose of these methods is to verify and classify inspected items while shielding the monitoring party from confidential aspects of the objects that the host country does not wish to reveal. Advanced medical-imaging methods used for detection and classification tasks have been adapted for list-mode processing, useful for discriminating projection data without aggregating sensitive information. These models make decisions based on varying amounts of stored information, and their task performance scales with that information. Development has focused on the Bayesian ideal observer, which assumes complete probabilistic knowledge of the detector data, and the Hotelling observer, which assumes a multivariate Gaussian distribution on the detector data. The models can effectively discriminate sources in the presence of nuisance parameters. The channelized Hotelling observer has proven particularly useful in that quality performance can be achieved while reducing the size of the projection data set. The inclusion of additional penalty terms in the channelizing-matrix optimization offers a great benefit for treaty-verification tasks. Penalty terms can be used to generate non-sensitive channels or to penalize the model's ability to discriminate objects based on confidential information. The end result is a mathematical model that could be shared openly with the monitor. Similarly, observers based on the likelihood probabilities have been developed to perform null-hypothesis tasks. To test these models, neutron and gamma-ray data were simulated with the GEANT4 toolkit. Tasks were performed on various uranium and plutonium inspection objects. A fast-neutron coded-aperture detector was simulated to image the particles.
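The Hotelling observer mentioned in the abstract is a standard linear discriminant under a multivariate Gaussian assumption; the following is a minimal sketch on simulated data, not the channelized or penalized observers developed in the dissertation, and not real detector data. The signal shape, dimensions, and sample sizes are assumptions.

```python
import numpy as np

def hotelling_observer(g0, g1):
    """Estimate the Hotelling template w = K^{-1} (mean1 - mean0) from
    signal-absent (g0) and signal-present (g1) training samples."""
    mean0, mean1 = g0.mean(axis=0), g1.mean(axis=0)
    K = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    return np.linalg.solve(K, mean1 - mean0)

rng = np.random.default_rng(0)
dim, n = 20, 500
signal = np.zeros(dim); signal[5:10] = 0.4            # illustrative signal profile
g0 = rng.normal(0, 1, (n, dim))                       # signal-absent measurements
g1 = rng.normal(0, 1, (n, dim)) + signal              # signal-present measurements
w = hotelling_observer(g0, g1)

# Apply the linear template to held-out data; the score separation reflects detectability.
t0 = rng.normal(0, 1, (n, dim)) @ w
t1 = (rng.normal(0, 1, (n, dim)) + signal) @ w
print("mean scores:", t0.mean(), t1.mean())
```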
99

Robustness of the One-Sample Kolmogorov Test to Sampling from a Finite Discrete Population

Tucker, Joanne M. (Joanne Morris) 12 1900 (has links)
One of the most useful and best known goodness-of-fit tests is the Kolmogorov one-sample test. The assumptions for the Kolmogorov one-sample test are: 1. A random sample; 2. A continuous random variable; 3. F(x) is a completely specified hypothesized cumulative distribution function. The Kolmogorov one-sample test has a wide range of applications. Knowing the effect of using the test when an assumption is not met is of practical importance. The purpose of this research is to analyze the robustness of the Kolmogorov one-sample test to sampling from a finite discrete distribution. The standard tables for the Kolmogorov test are derived based on sampling from a theoretical continuous distribution. As such, the theoretical distribution is infinite. The standard tables do not include a method or adjustment factor to estimate the effect on table values for statistical experiments where the sample stems from a finite discrete distribution without replacement. This research provides an extension of the Kolmogorov test for when the hypothesized distribution function is finite and discrete and the sampling distribution is based on sampling without replacement. An investigative study was conducted to explore possible tendencies and relationships in the distribution of Dn when sampling with and without replacement for various parameter settings. In all, 96 sampling distributions were derived. Results show the standard Kolmogorov table values are conservative, particularly when the sample sizes are small or the sample represents 10% or more of the population.
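A small Monte Carlo sketch of the question studied here: comparing the sampling distribution of the Kolmogorov statistic Dn when drawing with versus without replacement from a finite discrete population. The population, sample size, and replication count are illustrative assumptions, not the 96 settings examined in the thesis.

```python
import numpy as np

def dn_statistic(sample, population):
    """Kolmogorov D_n against the hypothesized CDF of a finite discrete population."""
    values = np.sort(np.unique(population))
    F = np.array([(population <= v).mean() for v in values])   # hypothesized CDF
    Fn = np.array([(sample <= v).mean() for v in values])      # empirical CDF
    return np.max(np.abs(Fn - F))

def dn_distribution(population, n, replace, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    return np.array([dn_statistic(rng.choice(population, n, replace=replace), population)
                     for _ in range(reps)])

# Finite discrete population: 40 items taking the values 1..8, five of each.
population = np.repeat(np.arange(1, 9), 5)
with_rep = dn_distribution(population, n=10, replace=True)
without_rep = dn_distribution(population, n=10, replace=False)
# Sampling without replacement tends to yield smaller D_n, which is consistent with
# the abstract's finding that the standard Kolmogorov table values are conservative.
print("95th percentiles:", np.quantile(with_rep, 0.95), np.quantile(without_rep, 0.95))
```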
100

Computational and Statistical Advances in Testing and Learning

Ramdas, Aaditya Kumar 01 July 2015 (has links)
This thesis makes fundamental computational and statistical advances in testing and estimation, making critical progress in the theory and application of classical statistical methods like classification, regression and hypothesis testing, and in understanding the relationships between them. Our work connects multiple fields in often counter-intuitive and surprising ways, leading to new theory, new algorithms, and new insights, and ultimately to a cross-fertilization of varied fields like optimization, statistics and machine learning. The first of three thrusts has to do with active learning, a form of sequential learning from feedback-driven queries that often has a provable statistical advantage over passive learning. We unify concepts from two seemingly different areas: active learning and stochastic first-order optimization. We use this unified view to develop new lower bounds for stochastic optimization using tools from active learning, and new algorithms for active learning using ideas from optimization. We also study the effect of feature noise, or errors-in-variables, on the ability to actively learn. The second thrust deals with the development and analysis of new convex optimization algorithms for classification and regression problems. We provide geometrical and convex analytical insights into the role of the margin in margin-based classification, and develop new greedy primal-dual algorithms for non-linear classification. We also develop a unified proof for convergence rates of randomized algorithms for the ordinary least squares and ridge regression problems in a variety of settings, with the purpose of investigating which algorithm should be utilized in different settings. Lastly, we develop fast, state-of-the-art, numerically stable algorithms for an important univariate regression problem called trend filtering, with a wide variety of practical extensions. The last thrust involves a series of practical and theoretical advances in nonparametric hypothesis testing. We show that a smoothed Wasserstein distance allows us to connect many vast families of univariate and multivariate two-sample tests. We clearly demonstrate the decreasing power of the families of kernel-based and distance-based two-sample tests and independence tests with increasing dimensionality, challenging existing folklore that they work well in high dimensions. Surprisingly, we show that these tests are automatically adaptive to simple alternatives and achieve the same power as other direct tests for detecting mean differences. We discover a computation-statistics tradeoff, where computationally more expensive two-sample tests have a provable statistical advantage over cheaper tests. We also demonstrate the practical advantage of using Stein shrinkage for kernel independence testing at small sample sizes. Lastly, we develop a novel algorithmic scheme for performing sequential multivariate nonparametric hypothesis testing using the martingale law of the iterated logarithm to near-optimally control both type-1 and type-2 errors. One perspective connecting everything in this thesis involves the closely related and fundamental problems of linear regression and classification. Every contribution in this thesis, from active learning to optimization algorithms, to the role of the margin, to nonparametric testing, fits in this picture. An underlying theme that repeats itself in this thesis is the computational and/or statistical advantage of sequential schemes with feedback.
This arises in our work through comparing active with passive learning, through iterative algorithms for solving linear systems instead of direct matrix inversions, and through comparing the power of sequential and batch hypothesis tests.
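The thesis spans several families of tests; as one concrete instance of the kernel-based two-sample tests whose high-dimensional power it examines, here is a minimal sketch of an unbiased squared-MMD estimate with a permutation p-value. The RBF bandwidth choice, sample sizes, and simulated mean-shift alternative are assumptions for illustration, not the thesis' experiments.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth):
    """Unbiased estimate of the squared maximum mean discrepancy."""
    Kxx, Kyy, Kxy = (rbf_kernel(X, X, bandwidth), rbf_kernel(Y, Y, bandwidth),
                     rbf_kernel(X, Y, bandwidth))
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

def permutation_pvalue(X, Y, bandwidth, reps=200, seed=0):
    """Permutation test: reshuffle the pooled sample to approximate the null."""
    rng = np.random.default_rng(seed)
    observed = mmd2_unbiased(X, Y, bandwidth)
    Z, n = np.vstack([X, Y]), len(X)
    null = []
    for _ in range(reps):
        perm = rng.permutation(len(Z))
        null.append(mmd2_unbiased(Z[perm[:n]], Z[perm[n:]], bandwidth))
    return (np.sum(np.array(null) >= observed) + 1) / (reps + 1)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (100, 5))
Y = rng.normal(0.3, 1.0, (100, 5))        # mean-shift alternative in 5 dimensions
print("p-value:", permutation_pvalue(X, Y, bandwidth=np.sqrt(5)))
```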
