1.
Group Testing: A Practical Approach. Gollapudi, Sri Srujan (12 1900)
Broadly defined, group testing is the study of finding defective items in a large set. In the medical infection setting, that implies classifying each member of a population as infected or uninfected, while minimizing the total number of tests.
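As a minimal sketch of the idea, the code below applies Dorfman's classic two-stage pooling to a simulated population: each pool is tested once, and only members of positive pools are retested individually. The prevalence, pool size, and population size are illustrative, and the test is assumed perfect.

```python
import random

random.seed(0)

def dorfman_classify(statuses, group_size=10):
    """Two-stage Dorfman pooling with a perfect test: test each pool once and
    retest every member of a positive pool individually. Returns the inferred
    statuses and the total number of tests used."""
    results, tests = [], 0
    for start in range(0, len(statuses), group_size):
        pool = statuses[start:start + group_size]
        tests += 1                               # one pooled test
        if any(pool):
            tests += len(pool)                   # retest each member of a positive pool
            results.extend(pool)
        else:
            results.extend([False] * len(pool))
    return results, tests

population = [random.random() < 0.02 for _ in range(1000)]   # 2% prevalence
classified, n_tests = dorfman_classify(population)
assert classified == population                               # every member correctly classified
print(f"{n_tests} tests instead of {len(population)} individual tests")
```

At low prevalence the pooled scheme classifies the whole population with roughly a quarter to a third of the tests that individual testing would need.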
2.
Estimating Proportions by Group Retesting with Unequal Group Sizes at Each Stage. Hu, Yusang (January 2020)
Group testing is a procedure that splits samples into multiple groups according to some specific grouping criterion and then tests each group. It is usually used to identify affected individuals or to estimate the population proportion of affected individuals. Improving the precision of group testing and reducing the cost of the experiment are two crucial tasks for investigators. Cost-efficiency is the ratio of precision to cost; hence improving cost-efficiency is as crucial as improving precision and saving cost. In this thesis, retesting is considered as a method to improve precision and cost-efficiency and to save cost. Retesting is an extension of group testing: it uses two or more group testing stages, with the original samples tested again at each stage. Hepworth and Watson (2015) proposed a two-stage group testing procedure in which the two stages have equal group sizes and the number of groups in the second stage depends on the number of positive groups in the first stage. In this thesis, our main goal is to estimate a proportion p when the two stages have unequal group sizes, and to discover the most cost-efficient experimental design. Analytical solutions for the precision are provided; we use these analytical solutions together with simulations to analyse several experimental designs and to determine, for each design, whether a single round of group testing is precise enough and whether retesting is worthwhile. In the end, we combine all of these analyses and identify the optimal experimental design. / Thesis / Master of Science (MSc)
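To make the quantities in this abstract concrete, here is a minimal sketch of single-stage estimation of a proportion from group testing, with precision measured as 1/MSE and cost as the number of tests, so that cost-efficiency is their ratio. The group sizes, number of groups, and prevalence are illustrative and the assay is assumed perfect; these are not the two-stage designs analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def mle_from_group_tests(positives, n_groups, group_size):
    """Single-stage MLE of a proportion p from group testing with a perfect assay:
    the fraction of negative groups estimates (1 - p)**group_size."""
    q_hat = 1.0 - positives / n_groups
    return 1.0 - q_hat ** (1.0 / group_size)

def precision_and_cost(p, n_groups, group_size, n_sims=20_000):
    """Monte Carlo precision (1/MSE) of the single-stage estimator, and the cost
    in tests, so that cost-efficiency = precision / cost."""
    prob_pos = 1.0 - (1.0 - p) ** group_size             # P(a group tests positive)
    y = rng.binomial(n_groups, prob_pos, size=n_sims)
    y = np.clip(y, 0, n_groups - 1)                      # avoid p_hat = 1 when every group is positive
    est = mle_from_group_tests(y, n_groups, group_size)  # evaluated elementwise on the simulated counts
    precision = 1.0 / np.mean((est - p) ** 2)
    return precision, n_groups                           # one test per group

for s in (5, 10, 20):
    prec, cost = precision_and_cost(p=0.02, n_groups=30, group_size=s)
    print(f"group size {s:2d}: precision {prec:10.1f}, cost {cost}, cost-efficiency {prec / cost:8.1f}")
```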
3.
An Exploration in Group Testing: Finding Radioactive Potatoes. Sobieska, Aleksandra Cecylia (20 May 2014)
No description available.
4.
Preemptive mobile code protection using spy agents. Kalogridis, Georgios (January 2011)
This thesis introduces 'spy agents' as a new security paradigm for evaluating trust in remote hosts in mobile code scenarios. In this security paradigm, a spy agent, i.e. a mobile agent which circulates amongst a number of remote hosts, can employ a variety of techniques in order to both appear 'normal' and suggest to a malicious host that it can 'misuse' the agent's data or code without being held accountable. A framework for the operation and deployment of such spy agents is described. Subsequently, a number of aspects of the operation of such agents within this framework are analysed in greater detail. The set of spy agent routes needs to be constructed in a manner that enables hosts to be identified from a set of detectable agent-specific outcomes. The construction of route sets that both reduce the probability of spy agent detection and support identification of the origin of a malicious act is analysed in the context of combinatorial group testing theory. Solutions to the route set design problem are proposed. A number of spy agent application scenarios are introduced and analysed, including: a) the implementation of a mobile code email honeypot system for identifying email privacy infringers, b) the design of sets of agent routes that enable malicious host detection even when hosts collude, and c) the evaluation of the credibility of host classification results in the presence of inconsistent host behaviour. Spy agents can be used in a wide range of applications, and it appears that each application creates challenging new research problems, notably in the design of appropriate agent route sets.
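As a hedged illustration of the group-testing flavour of the route-set design problem, the sketch below assigns hosts to spy-agent routes according to distinct nonzero binary codewords, so that, under the simplifying assumptions of a single non-colluding malicious host and reliably detectable outcomes, the set of routes whose bait data is later misused identifies the offender. The host names and helper functions are hypothetical and are not part of the thesis framework.

```python
def design_routes(hosts, n_routes):
    """Toy route-set design via a binary-code pooling matrix (non-adaptive group
    testing): host i is placed on route r iff bit r of host i's codeword is 1.
    With a single, non-colluding malicious host and reliably detectable outcomes,
    the set of routes whose bait data is later misused equals that host's codeword."""
    assert len(hosts) <= 2 ** n_routes - 1, "need enough routes for distinct nonzero codewords"
    codewords = {h: i + 1 for i, h in enumerate(hosts)}   # nonzero codewords 1, 2, ...
    routes = {r: [h for h in hosts if (codewords[h] >> r) & 1] for r in range(n_routes)}
    return codewords, routes

def identify(misused_routes, codewords):
    """Map the observed pattern of misused routes back to the offending host."""
    pattern = sum(1 << r for r in misused_routes)
    matches = [h for h, c in codewords.items() if c == pattern]
    return matches[0] if matches else None

hosts = ["hostA", "hostB", "hostC", "hostD", "hostE"]
codes, routes = design_routes(hosts, n_routes=3)
print(routes)                          # which hosts each spy-agent route visits
print(identify({0, 2}, codes))         # bait on routes 0 and 2 misused -> codeword 0b101 -> hostE
```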
5.
Cost and accuracy analysis of group and individual testing strategies: Implications for COVID-19. Islam, Ismat (January 2021)
We compared several group and individual testing strategies in terms of cost and accuracy, and identified which strategy is most accurate while costing as little as possible for a specified prevalence rate. / Thesis / Master of Science (MSc)
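As a hedged illustration of this kind of comparison, the sketch below computes the expected number of tests per person and the probability of correct classification for individual testing versus two-stage Dorfman pooling with an imperfect test. The sensitivity, specificity, prevalence, and pool sizes are illustrative assumptions, and pool-level sensitivity and specificity are taken equal to the individual-test values (no dilution effect); this is not the exact set of strategies compared in the thesis.

```python
def individual_testing(p, se=0.95, sp=0.99):
    """Per-person expected tests and probability of correct classification."""
    return 1.0, p * se + (1 - p) * sp

def dorfman_testing(p, g, se=0.95, sp=0.99):
    """Two-stage Dorfman pooling with pool size g; pool sensitivity/specificity
    are assumed equal to the individual-test values (no dilution)."""
    pool_truly_pos = 1 - (1 - p) ** g
    p_pool_flags = se * pool_truly_pos + (1 - sp) * (1 - pool_truly_pos)
    tests_per_person = 1 / g + p_pool_flags        # 1 pool test shared by g people, plus a retest if the pool flags
    # A truly positive person is caught only if both the pool and the retest flag them.
    p_correct_pos = se * se
    # A truly negative person's pool can flag because a pool-mate is positive or by a
    # pool-level false positive; misclassification then also needs an individual false positive.
    others_pos = 1 - (1 - p) ** (g - 1)
    p_pool_flags_given_neg = se * others_pos + (1 - sp) * (1 - others_pos)
    p_correct_neg = 1 - p_pool_flags_given_neg * (1 - sp)
    accuracy = p * p_correct_pos + (1 - p) * p_correct_neg
    return tests_per_person, accuracy

prevalence = 0.01
print("individual:", individual_testing(prevalence))
for g in (5, 10, 20):
    print(f"Dorfman g={g:2d}:", dorfman_testing(prevalence, g))
```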
6.
Finding A Subset Of Non-defective Items From A Large Population: Fundamental Limits And Efficient Algorithms. Sharma, Abhay (05 1900)
Consider a large population containing a small number of defective items. A commonly encountered goal is to identify the defective items, for example, to isolate them. In the classical non-adaptive group testing (NAGT) approach, one groups the items into subsets, or pools, and runs a test for the presence of a defective item on each pool. Using the outcomes of the tests, a fundamental goal of group testing is to reliably identify the complete set of defective items with as few tests as possible. In contrast, this thesis studies a non-defective subset identification problem, where the primary goal is to identify a "subset" of "non-defective" items given the test outcomes. The main contributions of this thesis are:
We derive upper and lower bounds on the number of nonadaptive group tests required to identify a given number of non-defective items with arbitrarily small probability of incorrect identification as the population size goes to infinity. We show that an impressive reduction in the number of tests is achievable compared to the approach of first identifying all the defective items and then picking the required number of non-defective items from the complement set. For example, in the asymptotic regime with the population size N → ∞, to identify L non-defective items out of a population containing K defective items, when the tests are reliable, our results show that O(K log K · L/N) measurements are sufficient when L ≪ N − K and K is fixed. In contrast, the number of tests required by the conventional approach grows with N as O(K log K · log(N/K)). Our results are derived using a general sparse signal model, by virtue of which they are also applicable to other important sparse-signal-based applications such as compressive sensing.
We present a bouquet of computationally efficient and analytically tractable non-defective subset recovery algorithms. By analyzing the probability of error of the algorithms, we obtain bounds on the number of tests required for non-defective subset recovery with arbitrarily small probability of error. By comparing with the information-theoretic lower bounds, we show that the upper bounds on the number of tests are order-wise tight up to a log(K) factor, where K is the number of defective items. Our analysis accounts for the impact of both the additive noise (false positives) and dilution noise (false negatives). We also provide extensive simulation results that compare the relative performance of the different algorithms and provide further insights into their practical utility. The proposed algorithms significantly outperform the straightforward approaches of testing items one-by-one, and of first identifying the defective set and then choosing the non-defective items from the complement set, in terms of the number of measurements required to ensure a given success rate.
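As a hedged illustration of non-defective subset recovery (not necessarily one of the algorithms proposed in the thesis), the sketch below uses a random pooling design with noiseless tests and declares non-defective the L items that appear in the fewest positive pools; items that never appear in a positive pool are the safest picks. The pool-participation probability, population sizes, and number of tests are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def recover_non_defective(N, L, defective, n_tests, q=0.1):
    """Toy non-defective subset recovery from nonadaptive pooled tests.
    Each item joins each pool independently with probability q; a pool tests
    positive iff it contains a defective item (noiseless tests). Items are
    ranked by how many positive pools they appear in, and the L lowest-ranked
    items are declared non-defective."""
    A = rng.random((n_tests, N)) < q                 # pooling matrix: A[t, i] = item i is in pool t
    y = A[:, defective].any(axis=1)                  # pool outcomes
    positive_counts = A[y].sum(axis=0)               # appearances of each item in positive pools
    return np.argsort(positive_counts)[:L]

N, K, L = 1000, 10, 50
defective = rng.choice(N, size=K, replace=False)
picked = recover_non_defective(N, L, defective, n_tests=100)
mistakes = len(set(picked.tolist()) & set(defective.tolist()))
print(f"declared {L} items non-defective; {mistakes} of them are actually defective")
```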
We investigate the use of adaptive group testing in the application of finding a
spectrum hole of a specified bandwidth in a given wideband of interest. We propose
a group testing based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent sub-bands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios. Energy-based hypothesis tests are used to provide an occupancy decision over the group of sub-bands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth. We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme. We illustrate the performance of the proposed algorithms via Monte Carlo simulations.
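A hedged sketch of the adaptive idea, reduced to its simplest form: treat an energy detection over a block of adjacent sub-bands as a single group test that is positive iff any sub-band in the block is occupied, and split occupied blocks recursively until a vacant stretch of the required width is found. This is an illustrative recursion with noiseless occupancy decisions, not the multistage algorithm proposed in the thesis, and it only finds holes aligned with the recursive splits.

```python
def find_spectrum_hole(occupied, lo, hi, min_width):
    """Toy adaptive group test for a contiguous spectrum hole.
    `occupied` is a boolean list over sub-bands; a "group test" on [lo, hi)
    stands in for a single energy detection over those adjacent sub-bands
    (positive iff any sub-band is occupied). The band is split recursively
    until a vacant stretch of at least `min_width` sub-bands is found."""
    if hi - lo < min_width:
        return None
    if not any(occupied[lo:hi]):                 # one group test clears the whole range
        return (lo, hi)
    mid = (lo + hi) // 2
    return find_spectrum_hole(occupied, lo, mid, min_width) \
        or find_spectrum_hole(occupied, mid, hi, min_width)

# 16 sub-bands, sparsely occupied; look for 4 contiguous vacant sub-bands.
occupied = [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
print(find_spectrum_hole([bool(x) for x in occupied], 0, len(occupied), min_width=4))
```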
7.
Comparisons of Estimators of Small Proportion under Group Testing. Wei, Xing (02 July 2015)
Binomial group testing has long been recognized as an efficient method of estimating the proportion of subjects with a specific characteristic. The method is superior to the classic maximum likelihood estimator (MLE), particularly when the proportion is small. Under the group testing model, we assume the testing is conducted without error. In the present research, a new Bayes estimator is proposed that utilizes an additional piece of information: that the proportion to be estimated is small and lies within a given range. It is observed that, with an appropriate choice of the hyper-parameter, our new Bayes estimator has smaller mean squared error (MSE) than the classic MLE, the Burrows estimator, and the existing Bayes estimator. Furthermore, on the basis of extensive Monte Carlo simulation we have determined the best hyper-parameters, in the sense that the corresponding new Bayes estimator has the smallest MSE. A table of these best hyper-parameters is provided for proportions within the considered range.
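The sketch below contrasts, by Monte Carlo simulation, the classic MLE under group testing with a generic Bayes estimator that encodes "the proportion is small and within a given range" through a Beta prior truncated to a small interval. The prior, its hyper-parameters, the group sizes, and the truncation range are illustrative assumptions; this is not the estimator proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def mle(y, n, k):
    """Classic MLE of p from y positive groups out of n groups of size k."""
    return 1.0 - (1.0 - y / n) ** (1.0 / k)

def bayes_posterior_mean(y, n, k, a=1.0, b=99.0, grid=np.linspace(1e-6, 0.2, 2000)):
    """Generic Bayes estimator: Beta(a, b) prior on p, truncated to (0, 0.2], with the
    posterior mean computed by numerical integration over a grid of p values."""
    theta = 1.0 - (1.0 - grid) ** k                             # P(group tests positive | p)
    log_post = ((a - 1) * np.log(grid) + (b - 1) * np.log1p(-grid)
                + y * np.log(theta) + (n - y) * np.log1p(-theta))
    w = np.exp(log_post - log_post.max())
    return float(np.sum(grid * w) / np.sum(w))

def mse(estimator, p, n=50, k=10, n_sims=20_000):
    """Monte Carlo mean squared error of an estimator of p under binomial group testing."""
    y = rng.binomial(n, 1.0 - (1.0 - p) ** k, size=n_sims)
    y = np.clip(y, 0, n - 1)                                    # keep the MLE finite if all groups are positive
    est_by_y = {yi: estimator(yi, n, k) for yi in np.unique(y)}
    return float(np.mean([(est_by_y[yi] - p) ** 2 for yi in y]))

for p in (0.005, 0.01, 0.02):
    print(f"p={p:.3f}  MSE(MLE)={mse(mle, p):.2e}  MSE(Bayes)={mse(bayes_posterior_mean, p):.2e}")
```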
8.
Sustainable Fault-handling Of Reconfigurable Logic Using Throughput-driven Assessment. Sharma, Carthik (01 January 2008)
A sustainable Evolvable Hardware (EH) system is developed for SRAM-based reconfigurable Field Programmable Gate Arrays (FPGAs) using outlier detection and group testing-based assessment principles. The fault diagnosis methods presented herein leverage throughput-driven, relative fitness assessment to maintain resource viability autonomously. Group testing-based techniques are developed for adaptive input-driven fault isolation in FPGAs, without the need for exhaustive testing or coding-based evaluation. The techniques keep the device operational and, when possible, generate validated outputs throughout the repair process. Adaptive fault isolation methods based on discrepancy-enabled pair-wise comparisons are developed. By observing the discrepancy characteristics of multiple Concurrent Error Detection (CED) configurations, a method for robust detection of faults is developed based on pairwise parallel evaluation using Discrepancy Mirror logic. The results from the analytical FPGA model are demonstrated via a self-healing, self-organizing evolvable hardware system. Reconfigurability of the SRAM-based FPGA is leveraged to identify logic resource faults, which are successively excluded by group testing using alternate device configurations. This simplifies the system architect's role to the definition of functionality in a high-level Hardware Description Language (HDL) and the choice of a system-level performance-versus-availability operating point. System availability, throughput, and mean time to isolate faults are monitored and maintained using an Observer-Controller model. Results are demonstrated using a Data Encryption Standard (DES) core that occupies approximately 305 FPGA slices on a Xilinx Virtex-II Pro FPGA. With a single simulated stuck-at fault, the system identifies a completely validated replacement configuration within three to five positive tests. The approach demonstrates a readily-implemented yet robust organic hardware application framework featuring a high degree of autonomous self-control.
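As a hedged, highly simplified illustration of fault isolation by group testing over alternate configurations, the sketch below halves a pool of suspect logic resources on each step, using a stand-in discrepancy check in place of loading a real alternate FPGA configuration and comparing it against a reference. The resource count, the halving schedule, and the single-fault assumption are illustrative; the thesis's CED and Discrepancy Mirror mechanisms are not modelled.

```python
def isolate_faulty_resource(suspects, discrepancy_with):
    """Toy adaptive group-testing isolation of a single faulty logic resource.
    `discrepancy_with(resources)` stands in for loading an alternate configuration
    that uses only `resources` and comparing its output against a reference: it
    returns True iff a discrepancy is observed, i.e. iff the faulty resource is
    among those used. The suspect pool is halved on each test, so isolation takes
    about log2(len(suspects)) tests."""
    pool = list(suspects)
    tests = 0
    while len(pool) > 1:
        half = pool[: len(pool) // 2]
        tests += 1
        pool = half if discrepancy_with(half) else pool[len(pool) // 2 :]
    return pool[0], tests

# Pretend slice 37 of a 64-slice region is stuck-at-faulty.
faulty = 37
found, n_tests = isolate_faulty_resource(range(64), lambda rs: faulty in rs)
print(found, n_tests)    # 37, 6
```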
9.
High-performance and Scalable Bayesian Group Testing and Real-time fMRI Data Analysis. Chen, Weicong (27 January 2023)
No description available.
10.
Optimal Risk-based Pooled Testing in Public Health Screening, with Equity and Robustness Considerations. Aprahamian, Hrayer Yaznek Berg (03 May 2018)
Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts.
We study Dorfman and Array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights and allow us to model the testing design problems as network flow problems, develop efficient algorithms, and derive insights on equity and robustness versus accuracy trade-off. One of our models reduces to a constrained shortest path problem, for a special case of which we develop a polynomial-time algorithm. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture.
Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices. / PHD
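As a hedged sketch of how a risk-based Dorfman design can be cast as a shortest-path problem, the code below assumes perfect tests and the classic ordered-partition property (some optimal design pools consecutive subjects once they are sorted by risk) and finds the partition minimizing the expected number of tests by dynamic programming over the sorted order. The group-size cap and the risk values are illustrative; this is a simplification of, not the model formulated in, the thesis.

```python
def optimal_dorfman_partition(risks, max_group_size=16):
    """Risk-based Dorfman design as a shortest-path / DP over subjects sorted by risk,
    assuming perfect tests and the ordered-partition property. Pooling subjects i..j-1
    costs 1 pool test plus (j - i) individual retests if the pool is positive; a
    singleton costs exactly 1. Returns the minimum expected total number of tests
    and the chosen groups."""
    p = sorted(risks)
    n = len(p)
    best = [0.0] + [float("inf")] * n        # best[j] = min expected tests for the first j subjects
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_group_size), j):
            size = j - i
            prob_neg = 1.0
            for q in p[i:j]:
                prob_neg *= 1.0 - q
            cost = 1.0 if size == 1 else 1.0 + size * (1.0 - prob_neg)
            if best[i] + cost < best[j]:
                best[j], back[j] = best[i] + cost, i
    groups, j = [], n
    while j > 0:                              # backtrack along the shortest path
        groups.append(p[back[j]:j])
        j = back[j]
    return best[n], list(reversed(groups))

risks = [0.001] * 40 + [0.02] * 15 + [0.15] * 5       # a heterogeneous population of 60 subjects
expected_tests, groups = optimal_dorfman_partition(risks)
print(round(expected_tests, 2), [len(g) for g in groups])
```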