801

Efficient Prevalence Estimation for Emerging and Seasonal Diseases Under Limited Resources

Nguyen, Ngoc Thu 30 May 2019 (has links)
Estimating the prevalence rate of a disease is crucial for controlling its spread, and for planning of healthcare services. Due to limited testing budgets and resources, prevalence estimation typically entails pooled, or group, testing where specimens (e.g., blood, urine, tissue swabs) from a number of subjects are combined into a testing pool, which is then tested via a single test. Testing outcomes from multiple pools are analyzed so as to assess the prevalence of the disease. The accuracy of prevalence estimation relies on the testing pool design, i.e., the number of pools to test and the pool sizes (the number of specimens to combine in a pool). Determining an optimal pool design for prevalence estimation can be challenging, as it requires prior information on the current status of the disease, which can be highly unreliable, or simply unavailable, especially for emerging and/or seasonal diseases. We develop and study frameworks for prevalence estimation, under highly unreliable prior information on the disease and limited testing budgets. Embedded into each estimation framework is an optimization model that determines the optimal testing pool design, considering the trade-off between testing cost and estimation accuracy. We establish important structural properties of optimal testing pool designs in various settings, and develop efficient and exact algorithms. Our numerous case studies, ranging from prevalence estimation of the human immunodeficiency virus (HIV) in various parts of Africa, to prevalence estimation of diseases in plants and insects, including the Tomato Spotted Wilt virus in thrips and West Nile virus in mosquitoes, indicate that the proposed estimation methods substantially outperform current approaches developed in the literature, and produce robust testing pool designs that can hedge against the uncertainty in model inputs. Our research findings indicate that the proposed prevalence estimation frameworks are capable of producing accurate prevalence estimates, and are highly desirable, especially for emerging and/or seasonal diseases under limited testing budgets. / Doctor of Philosophy / Accurately estimating the proportion of a population that has a disease, i.e., the disease prevalence rate, is crucial for controlling its spread, and for planning of healthcare services, such as disease prevention, screening, and treatment. Due to limited testing budgets and resources, prevalence estimation typically entails pooled, or group, testing where biological specimens (e.g., blood, urine, tissue swabs) from a number of subjects are combined into a testing pool, which is then tested via a single test. Testing results from the testing pools are analyzed so as to assess the prevalence of the disease. The accuracy of prevalence estimation relies on the testing pool design, i.e., the number of pools to test and the pool sizes (the number of specimens to combine in a pool). Determining an optimal pool design for prevalence estimation, e.g., the pool design that minimizes the estimation error, can be challenging, as it requires information on the current status of the disease prior to testing, which can be highly unreliable, or simply unavailable, especially for emerging and/or seasonal diseases. Examples of such diseases include, but are not limited to, Zika virus, West Nile virus, and Lyme disease. We develop and study frameworks for prevalence estimation, under highly unreliable prior information on the disease and limited testing budgets.
Embedded into each estimation framework is an optimization model that determines the optimal testing pool design, considering the trade-off between testing cost and estimation accuracy. We establish important structural properties of optimal testing pool designs in various settings, and develop efficient and exact optimization algorithms. Our numerous case studies, ranging from prevalence estimation of the human immunodeficiency virus (HIV) in various parts of Africa, to prevalence estimation of diseases in plants and insects, including the Tomato Spotted Wilt virus in thrips and West Nile virus in mosquitoes, indicate that the proposed estimation methods substantially outperform current approaches developed in the literature, and produce robust testing pool designs that can hedge against the uncertainty in model input parameters. Our research findings indicate that the proposed prevalence estimation frameworks are capable of producing accurate prevalence estimates, and are highly desirable, especially for emerging and/or seasonal diseases under limited testing budgets.
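The pooled-testing structure described above admits a simple closed-form estimator in the idealized case of a perfect test and a single common pool size. The sketch below shows that textbook maximum-likelihood estimate, not the dissertation's optimization-based framework; the pool counts and sizes are purely illustrative.

```python
import numpy as np

def pooled_prevalence_mle(num_positive_pools, num_pools, pool_size):
    """Standard MLE of prevalence p from pooled (group) testing.

    Assumes a perfect test: a pool is positive iff it contains at least one
    positive specimen, so P(pool positive) = 1 - (1 - p)**pool_size.
    Inverting the observed positive-pool fraction gives the estimate.
    """
    positive_fraction = num_positive_pools / num_pools
    return 1.0 - (1.0 - positive_fraction) ** (1.0 / pool_size)

# Example: 40 pools of 10 specimens each, 12 pools test positive.
p_hat = pooled_prevalence_mle(12, 40, 10)
print(f"Estimated prevalence: {p_hat:.3f}")  # ~0.035
```

Because the accuracy of this estimate depends strongly on how the pool size relates to the (unknown) prevalence, choosing the pool design well under unreliable prior information is exactly the problem the optimization model in the abstract addresses.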
802

Optimal Data-driven Methods for Subject Classification in Public Health Screening

Sadeghzadeh, Seyedehsaloumeh 01 July 2019 (has links)
Biomarker testing, wherein the concentration of a biochemical marker is measured to predict the presence or absence of a certain binary characteristic (e.g., a disease) in a subject, is an essential component of public health screening. For many diseases, the concentration of disease-related biomarkers may exhibit a wide range, particularly among the disease positive subjects, in part due to variations caused by external and/or subject-specific factors. Further, a subject's actual biomarker concentration is not directly observable by the decision maker (e.g., the tester), who has access only to the test's measurement of the biomarker concentration, which can be noisy. In this setting, the decision maker needs to determine a classification scheme in order to classify each subject as test negative or test positive. However, the inherent variability in biomarker concentrations and the noisy test measurements can increase the likelihood of subject misclassification. We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. In particular, our framework utilizes data analytics methodologies to estimate the posterior disease risk of each subject, based on both subject-specific and external factors, coupled with robust optimization methodologies to derive an optimal robust subject classification scheme, under uncertainty on actual biomarker concentrations. We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening. As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices. / Doctor of Philosophy / A biomarker is a measurable characteristic that is used as an indicator of a biological state or condition, such as a disease or disorder. Biomarker testing, where a biochemical marker is used to predict the presence or absence of a disease in a subject, is an essential tool in public health screening. For many diseases, related biomarkers may have a wide range of concentration among subjects, particularly among the disease positive subjects. Furthermore, biomarker levels may fluctuate based on external factors (e.g., temperature, humidity) or subject-specific characteristics (e.g., weight, race, gender). These sources of variability can increase the likelihood of subject misclassification based on a biomarker test. We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors.
We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening. As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. As a result, newborn screening for cystic fibrosis is conducted throughout the United States. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices.
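As a hedged illustration of the kind of risk-based classification rule described above, the sketch below applies a standard Bayes decision rule to a noisy biomarker measurement. The Gaussian measurement models, misclassification costs, and parameter values are assumptions chosen for illustration; they are not taken from the cystic fibrosis screening data or from the dissertation's robust optimization scheme.

```python
import numpy as np
from scipy.stats import norm

def posterior_positive(measurement, prior_risk, mu_neg, sd_neg, mu_pos, sd_pos):
    """Posterior probability of disease given a noisy biomarker measurement,
    assuming (for illustration) Gaussian measurement models for each class."""
    like_pos = norm.pdf(measurement, mu_pos, sd_pos)
    like_neg = norm.pdf(measurement, mu_neg, sd_neg)
    num = prior_risk * like_pos
    return num / (num + (1 - prior_risk) * like_neg)

def classify(measurement, prior_risk, cost_fn=10.0, cost_fp=1.0, **models):
    """Label positive when the expected cost of a false negative exceeds
    that of a false positive (a standard Bayes decision rule)."""
    post = posterior_positive(measurement, prior_risk, **models)
    return "positive" if post * cost_fn >= (1 - post) * cost_fp else "negative"

# Hypothetical class-conditional models and a subject-specific prior risk.
models = dict(mu_neg=40.0, sd_neg=10.0, mu_pos=95.0, sd_pos=25.0)
print(classify(70.0, prior_risk=0.01, **models))  # -> "positive"
```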
803

A unified decision analysis framework for robust system design evaluation in the face of uncertainty

Duan, Chunming 06 June 2008 (has links)
Some engineered systems now in use are not adequately meeting the needs for which they were developed, nor are they very cost-effective in terms of consumer utilization. Many problems associated with unsatisfactory system performance and high life-cycle cost are the direct result of decisions made during early phases of system design. To develop quality systems, both engineering and management need fundamental principles and methodologies to guide decision making during system design and advanced planning. In order to provide for the efficient resolution of complex system design decisions involving uncertainty, human judgments, and value tradeoffs, an efficient and effective decision analysis framework is required. Experience indicates that an effective approach to improving the quality of detail designs is through the application of Genichi Taguchi's philosophy of robust design. How to apply Taguchi's philosophy of robust design to system design evaluation at the preliminary design stage is an open question. The goal of this research is to develop a unified decision analysis framework to support the need for developing better system designs in the face of various uncertainties. This goal is accomplished by adapting and integrating statistical decision theory, utility theory, elements of the systems engineering process, and Taguchi's philosophy of robust design. The result is a structured, systematic methodology for evaluating system design alternatives. The decision analysis framework consists of two parts: (1) decision analysis foundations, and (2) an integrated approach. Part I (Chapters 2 through 5) covers the foundations for design decision analysis in the face of uncertainty. This research begins with an examination of the life cycle of engineered systems and identification of the elements of the decision process of system design and development. After investigating various types of uncertainty involved in the process of system design, the concept of robust design is defined from the perspective of system life-cycle engineering. Some common measures for assessing the robustness of candidate system designs are then identified and examined. Then the problem of design evaluation in the face of uncertainty is studied within the context of decision theory. After classifying design decision problems into four categories, the structure of each type of problem in terms of sequence and causal relationships between various decisions and uncertain outcomes is represented by a decision tree. Based upon statistical decision theory, the foundations for choosing a best design in the face of uncertainty are identified. The assumptions underlying common objective functions in design optimization are also investigated. Some confusion and controversy which surround Taguchi's robust design criteria (loss functions and signal-to-noise ratios) are addressed and clarified. Part II (Chapters 6 through 9) covers models and their application to design evaluation in the face of uncertainty. Based upon the decision analysis foundations, an integrated approach is developed and presented for resolving discrete decisions, continuous decisions, and decisions involving both uncertainty and multiple attributes. Application of the approach is illustrated by two hypothetical examples: bridge design and repairable equipment population system design. / Ph. D.
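To make the robust-design evaluation idea concrete, the sketch below compares two hypothetical design alternatives by expected Taguchi-style quadratic loss over a discrete set of uncertain outcomes. The alternatives, probabilities, target, and loss coefficient are invented for illustration and do not come from the dissertation's bridge or repairable-equipment examples.

```python
# Hypothetical comparison of two design alternatives by expected quadratic
# (Taguchi-style) loss, L(y) = k * (y - target)**2, under a discrete set of
# uncertain operating scenarios. All numbers are illustrative only.
target, k = 100.0, 0.5

# Each alternative: list of (probability, predicted performance y) pairs.
designs = {
    "design_A": [(0.2, 90.0), (0.6, 100.0), (0.2, 110.0)],
    "design_B": [(0.5, 98.0), (0.5, 104.0)],
}

def expected_loss(scenarios):
    return sum(p * k * (y - target) ** 2 for p, y in scenarios)

for name, scenarios in designs.items():
    print(name, expected_loss(scenarios))
# A robust-design view prefers the alternative with the lower expected loss
# (here design_B: 5.0 versus 20.0 for design_A).
```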
804

Hydroxypropylmethylcellulose: A New Matrix for Solid-Surface Room-Temperature Phosphorimetry

Hamner, Vincent N. 05 November 1999 (has links)
This thesis reports an investigation of hydroxypropylmethylcellulose (HPMC) as a new solid-surface room-temperature phosphorescence (SSRTP) sample matrix. The high background phosphorescence originating from filter paper substrates can interfere with the detection and quantitation of trace-level analytes. High-purity grades of HPMC were investigated as SSRTP substrates in an attempt to overcome this limitation. When compared directly to filter paper, HPMC allows the spectroscopist to achieve greater sensitivity, lower limits of detection (LOD), and lower limits of quantitation (LOQ) for certain phosphor/heavy-atom combinations since SSRTP signal intensities are stronger. For example, the determination of the analytical figures of merit for a naphthalene/sodium iodide/HPMC system resulted in a calibration sensitivity of 2.79, LOD of 4 ppm (3 ng), and LOQ of 14 ppm (11 ng). Corresponding investigations of a naphthalene/sodium iodide/filter paper system produced a calibration sensitivity of 0.326, LOD of 33 ppm (26 ng), and LOQ of 109 ppm (86 ng). Extended purging with dry-nitrogen gas yields improved sensitivities, lower LODs, and lower LOQs in HPMC matrices when LOD and LOQ are calculated according to the IUPAC guidelines. To test the universality of HPMC, qualitative SSRTP spectra were obtained for a wide variety of probe phosphors offering different molecular sizes, shapes, and chemical functionalities. Suitable spectra were obtained for the following model polycyclic aromatic hydrocarbons (PAHs): naphthalene, p-aminobenzoic acid, acenaphthene, phenanthrene, 2-naphthoic acid, 2-naphthol, salicylic acid, and triphenylene. Filter paper and HPMC substrates are inherently anisotropic, non-heterogeneous media. Since this deficiency cannot be addressed experimentally, a robust statistical method is examined for the detection of questionable SSRTP data points and the deletion of outlying observations. If discordant observations are discarded, relative standard deviations are typically reduced to less than 10% for most SSRTP data sets. Robust techniques for outlier identification are superior to traditional methods since they operate at a high level of efficiency and are immune to masking effects. The process of selecting a suitable sample support material often involves considerable trial-and-error on the part of the analyst. A mathematical model based on Hansen's cohesion parameter theory is developed to predict favorable phosphor-substrate attraction and interactions. The results of investigations using naphthalene as a probe phosphor and sodium iodide as an external heavy-atom enhancer support the cohesion parameter model. This document includes a thorough description of the fundamental principles of phosphorimetry and provides a detailed analysis of the theoretical and practical concerns associated with performing SSRTP. In order to better understand the properties of both filter paper and HPMC, a chapter is devoted to the discussion of the cellulose biopolymer. Experimental results and interpretations are presented and suggestions for future investigations are provided. Together, these results provide a framework that will support additional advancements in the field of solid-surface room-temperature phosphorescence spectroscopy. / Ph. D.
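For reference, the IUPAC-style detection and quantitation limits mentioned above reduce to a slope-and-blank-noise formula. The sketch below shows that generic computation with made-up blank readings and an assumed calibration slope; it is not the thesis's naphthalene/HPMC or filter-paper data.

```python
import numpy as np

def lod_loq(calibration_slope, blank_signals):
    """IUPAC-style detection and quantitation limits:
    LOD = 3 * s_blank / slope, LOQ = 10 * s_blank / slope."""
    s_blank = np.std(blank_signals, ddof=1)  # sample std of blank measurements
    return 3 * s_blank / calibration_slope, 10 * s_blank / calibration_slope

# Illustrative blank phosphorescence readings and an assumed calibration slope.
blanks = [1.02, 0.97, 1.10, 1.05, 0.95, 1.01]
lod, loq = lod_loq(calibration_slope=0.30, blank_signals=blanks)
print(f"LOD ~ {lod:.2f} ppm, LOQ ~ {loq:.2f} ppm")
```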
805

Robust Control Design and Analysis for Small Fixed-Wing Unmanned Aircraft Systems Using Integral Quadratic Constraints

Palframan, Mark C. 29 July 2016 (has links)
The main contributions of this work are applications of robust control and analysis methods to complex engineering systems, namely, small fixed-wing unmanned aircraft systems (UAS). Multiple path-following controllers for a small fixed-wing Telemaster UAS are presented, including a linear parameter-varying (LPV) controller scheduled over path curvature. The controllers are synthesized based on a lumped path-following and UAS dynamic system, effectively combining the six degree-of-freedom aircraft dynamics with established parallel transport frame virtual vehicle dynamics. The robustness and performance of these controllers are tested in a rigorous MATLAB simulation environment that includes steady winds, turbulence, measurement noise, and delays. After being synthesized off-line, the controllers allow the aircraft to follow prescribed geometrically defined paths bounded by a maximum curvature. The controllers presented within are found to be robust to the disturbances and uncertainties in the simulation environment. A robust analysis framework for mathematical validation of flight control systems is also presented. The framework is specifically developed for the complete uncertainty characterization, quantification, and analysis of small fixed-wing UAS. The analytical approach presented within is based on integral quadratic constraint (IQC) analysis methods and uses linear fractional transformations (LFTs) on uncertainties to represent system models. The IQC approach can handle a wide range of uncertainties, including static and dynamic, linear time-invariant and linear time-varying perturbations. While IQC-based uncertainty analysis has a sound theoretical foundation, it has thus far mostly been applied to academic examples, and there are major challenges when it comes to applying this approach to complex engineering systems, such as UAS. The difficulty mainly lies in appropriately characterizing and quantifying the uncertainties such that the resulting uncertain model is representative of the physical system without being overly conservative, and the associated computational problem is tractable. These challenges are addressed by applying IQC-based analysis tools to analyze the robustness of the Telemaster UAS flight control system. Specifically, uncertainties are characterized and quantified based on mathematical models and flight test data obtained in house for the Telemaster platform and custom autopilot. IQC-based analysis is performed on several time-invariant H∞ controllers along with various sets of uncertainties aimed at providing valuable information for use in controller analysis, controller synthesis, and comparison of multiple controllers. The proposed framework is also transferable to other fixed-wing UAS platforms, effectively taking IQC-based analysis beyond academic examples to practical application in UAS control design and airworthiness certification. IQC-based analysis problems are traditionally solved using convex optimization techniques, which can be slow and memory intensive for large problems. An oracle for discrete-time IQC analysis problems is presented to facilitate the use of a cutting plane algorithm in lieu of convex optimization in order to solve large uncertainty analysis problems relatively quickly, and with reasonable computational effort. The oracle is reformulated to a skew-Hamiltonian/Hamiltonian eigenvalue problem in order to improve the robustness of eigenvalue calculations by eliminating unnecessary matrix multiplications and inverses. 
Furthermore, fast, structure-exploiting eigensolvers can be employed with the skew-Hamiltonian/Hamiltonian oracle to accurately determine critical frequencies when solving IQC problems. Applicable solution algorithms utilizing the IQC oracle are briefly presented, and an example shows that these algorithms can solve large problems significantly faster than convex optimization techniques. Finally, a large complex engineering system is analyzed using the oracle and a cutting-plane algorithm. Analysis of the same system using the same computer hardware failed when employing convex optimization techniques. / Ph. D.
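The sketch below is not IQC analysis. It is a naive Monte Carlo stability check on a made-up closed-loop system with one real parametric uncertainty, included only to illustrate the question that IQC-based analysis answers with a guarantee (and without sampling) over the entire uncertainty set.

```python
import numpy as np

def closed_loop_A(delta):
    """Nominal closed-loop dynamics perturbed by a real parametric
    uncertainty delta. The matrices are invented for illustration."""
    A_nom = np.array([[0.0, 1.0],
                      [-4.0, -2.0]])
    A_unc = np.array([[0.0, 0.0],
                      [1.0, 0.5]])
    return A_nom + delta * A_unc

rng = np.random.default_rng(0)
worst = max(np.max(np.linalg.eigvals(closed_loop_A(d)).real)
            for d in rng.uniform(-1.0, 1.0, size=2000))
print(f"Largest sampled real part of eigenvalues: {worst:.3f}")
# A negative value for every sample suggests (but does not prove) robust
# stability over |delta| <= 1; IQC-based analysis would certify it.
```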
806

Investigating the performance of process-observation-error-estimator and robust estimators in surplus production model: a simulation study

He, Qing 15 September 2010 (has links)
This study investigated the performance of three estimators of the surplus production model: the process-observation-error estimator with normal distribution (POE_N), the observation-error estimator with normal distribution (OE_N), and the process-error estimator with normal distribution (PE_N). Estimators with fat-tailed distributions, including Student's t and Cauchy distributions, were also proposed, and their performance was compared with that of the estimators with normal distributions. This study used a Bayesian approach: it revised the Metropolis-Hastings within Gibbs sampling algorithm (MHGS) previously used to solve POE_N (Millar and Meyer, 2000), developed the MHGS for the other estimators, and developed methodologies that enable all the estimators to deal with data containing multiple indices based on catch-per-unit-effort (CPUE). A simulation study was conducted based on parameter estimates from two example fisheries: the Atlantic weakfish (Cynoscion regalis) and the black sea bass (Centropristis striata) southern stock. Our results indicated that POE_N is the best-performing estimator among all six estimators with regard to both accuracy and precision in most cases. POE_N is also robust to outliers, atypical values, and autocorrelated errors. OE_N is the second-best estimator. PE_N is often imprecise. Estimators with fat-tailed distributions usually produce some estimates that are more biased than those from estimators with normal distributions. The performance of POE_N and OE_N can be improved by fitting multiple indices. Our study suggests that POE_N be used for population dynamics models in future stock assessments. Multiple indices from valid surveys should be incorporated into stock assessment models. OE_N can be considered when multiple indices are available. / Master of Science
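For context, the sketch below simulates Schaefer surplus production dynamics with both lognormal process error and lognormal observation error on a CPUE index, which is the data-generating structure that the POE/OE/PE estimators are fit to. All parameter values are illustrative and are not taken from the weakfish or black sea bass assessments.

```python
import numpy as np

rng = np.random.default_rng(1)
r, K, q = 0.4, 1000.0, 0.001          # intrinsic growth, carrying capacity, catchability
sigma_proc, sigma_obs = 0.05, 0.15    # lognormal process / observation error SDs
years = 30
catch = np.full(years, 60.0)          # constant catch series for simplicity

# Schaefer dynamics with multiplicative (lognormal) process error.
biomass = np.empty(years)
biomass[0] = K
for t in range(years - 1):
    surplus = r * biomass[t] * (1.0 - biomass[t] / K)
    mean_next = max(biomass[t] + surplus - catch[t], 1.0)
    biomass[t + 1] = mean_next * rng.lognormal(0.0, sigma_proc)

# CPUE index observed with multiplicative (lognormal) observation error.
cpue_index = q * biomass * rng.lognormal(0.0, sigma_obs, size=years)
print(cpue_index.round(3))
```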
807

Applications of Combinatorial Graph Theory to the Classical and Post-Quantum Security Analysis of Memory-Hard Functions and Proofs of Sequential Work

Seunghoon Lee (18431271) 26 April 2024 (has links)
Combinatorial graph theory is an essential tool in the design and analysis of cryptographic primitives such as Memory-Hard Functions (MHFs) and Proofs of Sequential Work (PoSWs). MHFs are used to design egalitarian Proofs of Work and to help protect low-entropy secrets such as user passwords against brute-force attacks in password hashing. A PoSW is a protocol for proving that one spent significant sequential computation work to validate some statement. PoSWs have many applications, including time-stamping, blockchain design, and universally verifiable CPU benchmarks. Prior work has used combinatorial properties of graphs to construct provably secure MHFs and PoSWs. However, some open problems still exist, such as improving security bounds for MHFs, finding approximation algorithms for measuring their memory hardness, and analyzing the post-quantum security of MHFs and PoSWs. This dissertation addresses these challenges in the security analysis of MHFs and PoSWs using combinatorial graph theory.
We first improve the understanding of the classical security of MHFs in the following ways. (1) We present improved security bounds for MHF candidates such as Argon2i and DRSample under plausible graph-theoretic conjectures. (2) We prove that it is Unique Games-hard to approximate the cumulative pebbling complexity of a directed acyclic graph, which is an important metric to understand the memory-hardness of data-independent MHFs. (3) We provide the first explicit construction of extremely depth-robust graphs with small indegree. Here, (extreme) depth-robustness is a crucial combinatorial tool to construct secure MHFs and PoSWs. (4) We build a new family of graphs that achieves better provable parameters for concrete depth-robustness.
Second, as we progress toward developing quantum computers, we initiate the post-quantum security analysis of MHFs and PoSWs. Specifically, we make the following contributions. (1) We introduce the parallel reversible pebbling game, which captures additional restrictions in quantum computing. We use combinatorial graph theory as a tool to analyze the space-time complexity and the cumulative pebbling complexity of MHF candidates such as Argon2i and DRSample in a reversible setting, which we call reversible space-time/cumulative pebbling cost, respectively. (2) We prove that the reversible cumulative pebbling cost is never too much larger than the classical cumulative pebbling cost, along with the separation result that, in some instances, the reversible cumulative pebbling cost is asymptotically larger than the classical one. (3) We prove that it is also Unique Games-hard to approximate the reversible cumulative pebbling cost of a directed acyclic graph. (4) Finally, we establish the post-quantum security of a PoSW from Cohen and Pietrzak (EUROCRYPT 2018) in the parallel quantum random oracle model by extending Zhandry's compressed oracle technique (CRYPTO 2019) and utilizing underlying combinatorial techniques of PoSWs.
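As a small illustration of the depth-robustness notion used throughout, the sketch below brute-forces the (e, d)-depth-robustness check on a tiny DAG: a graph is (e, d)-depth-robust if removing any e vertices still leaves a directed path on at least d vertices. The check is exponential in e and is intended only as a definition-level example, not as any construction or bound from the dissertation.

```python
from itertools import combinations

def longest_path_length(n, edges, removed):
    """Number of vertices on the longest directed path in a DAG on vertices
    0..n-1 (edges go from lower to higher index), after removing a vertex set."""
    best = [0] * n
    for v in range(n):
        if v in removed:
            continue
        best[v] = 1
        for u, w in edges:
            if w == v and u not in removed and best[u] + 1 > best[v]:
                best[v] = best[u] + 1
    return max(best)

def is_depth_robust(n, edges, e, d):
    """Brute-force check that every removal of e vertices leaves a directed
    path with at least d vertices. For tiny examples only."""
    return all(longest_path_length(n, edges, set(S)) >= d
               for S in combinations(range(n), e))

# Path graph 0 -> 1 -> 2 -> 3 -> 4: removing the middle vertex leaves only
# 2-vertex paths, so (e=1, d=3) fails while (e=1, d=2) holds.
path_edges = [(i, i + 1) for i in range(4)]
print(is_depth_robust(5, path_edges, e=1, d=3))  # False
print(is_depth_robust(5, path_edges, e=1, d=2))  # True
```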
808

Student Ratings of Instruction: Examining the Role of Academic Field, Course Level, and Class Size

Laughlin, Anne Margaret 11 April 2014 (has links)
This dissertation investigated the relationship between course characteristics and student ratings of instruction at a large research intensive university. Specifically, it examined the extent to which academic field, course level, and class size were associated with variation in mean class ratings. Past research consistently identifies differences between student ratings in different academic fields, but offers no unifying conceptual framework for the definition or categorization of academic fields. Therefore, two different approaches to categorizing classes into academic fields were compared - one based on the institution's own academic college system and one based on Holland's (1997) theory of academic environments. Because the data violated assumptions of normality and homogeneity of variance, traditional ANOVA procedures were followed by post-hoc analyses using bootstrapping to more accurately estimate standard errors and confidence intervals. Bootstrapping was also used to determine the statistical significance of a difference between the effect sizes of academic college and Holland environment, a situation for which traditional statistical tests have not been developed. Findings replicate the general pattern of academic field differences found in prior research on student ratings and offer several unique contributions. They confirm the value of institution-specific approaches to defining academic fields and also indicate that Holland's theory of academic environments may be a useful conceptual framework for making sense of academic field differences in student ratings. Building on past studies that reported differences in mean ratings across academic fields, this study describes differences in the variance of ratings across academic fields. Finally, this study shows that class size and course level may impact student ratings differently - in terms of interaction effects and magnitude of effects - depending on the academic field of the course. / Ph. D.
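To illustrate the bootstrap comparison of effect sizes described above, the sketch below resamples simulated class-mean ratings to build a confidence interval for the difference between two eta-squared values under two different categorizations. The data, group counts, and category labels are simulated assumptions; they are not the institutional ratings data or the study's actual bootstrap procedure.

```python
import numpy as np

def eta_squared(groups):
    """Eta-squared effect size for a one-way layout: SS_between / SS_total."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_total = np.sum((all_vals - grand) ** 2)
    return ss_between / ss_total

rng = np.random.default_rng(7)
# Simulated class-mean ratings under two hypothetical field categorizations.
ratings = rng.normal(5.0, 0.6, size=300)
college = rng.integers(0, 7, size=300)       # e.g., 7 academic colleges
holland = rng.integers(0, 6, size=300)       # e.g., 6 Holland environments

def effect_diff(idx):
    r, c, h = ratings[idx], college[idx], holland[idx]
    e_col = eta_squared([r[c == k] for k in np.unique(c)])
    e_hol = eta_squared([r[h == k] for k in np.unique(h)])
    return e_col - e_hol

boot = [effect_diff(rng.integers(0, 300, size=300)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the effect-size difference: [{lo:.3f}, {hi:.3f}]")
```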
809

Simultaneous Estimation and Modeling of State-Space Systems Using Multi-Gaussian Belief Fusion

Steckenrider, John Josiah 09 April 2020 (has links)
This work describes a framework for simultaneous estimation and modeling (SEAM) of dynamic systems using non-Gaussian belief fusion by first presenting the relevant fundamental formulations, then building upon these formulations incrementally towards a more general and ubiquitous framework. Multi-Gaussian belief fusion (MBF) is introduced as a natural and effective method of fusing non-Gaussian probability distribution functions (PDFs) in arbitrary dimensions efficiently and with no loss of accuracy. Construction of some multi-Gaussian structures for potential use in MBF is addressed. Furthermore, recursive Bayesian estimation (RBE) is developed for linearized systems with uncertainty in model parameters, and a rudimentary motion model correction stage is introduced. A subsequent improvement to motion model correction for arbitrarily non-Gaussian belief is developed, followed by application to observation models. Finally, SEAM is generalized to fully nonlinear and non-Gaussian systems. Several parametric studies were performed on simulated experiments in order to assess the various dependencies of the SEAM framework and validate its effectiveness in both estimation and modeling. The results of these studies show that SEAM is capable of improving estimation when uncertainty is present in motion and observation models as compared to existing methods. Furthermore, uncertainty in model parameters is consistently reduced as these parameters are updated throughout the estimation process. SEAM and its constituents have potential uses in robotics, target tracking and localization, state estimation, and more. / Doctor of Philosophy / The simultaneous estimation and modeling (SEAM) framework and its constituents described in this dissertation aim to improve estimation of signals where significant uncertainty would normally introduce error. Such signals could be electrical (e.g. voltages, currents, etc.), mechanical (e.g. accelerations, forces, etc.), or the like. Estimation is accomplished by addressing the problem probabilistically through information fusion. The proposed techniques not only improve state estimation, but also effectively "learn" about the system of interest in order to further refine estimation. Potential uses of such methods could be found in search-and-rescue robotics, robust control algorithms, and the like. The proposed framework is well-suited for any context where traditional estimation methods have difficulty handling heightened uncertainty.
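As a minimal sketch of the mixture-fusion idea, the code below fuses two one-dimensional Gaussian-mixture beliefs by renormalizing their pointwise product, which is again a Gaussian mixture. The component weights, means, and variances are illustrative, and this is not the dissertation's MBF implementation or its multi-dimensional formulation.

```python
import numpy as np
from scipy.stats import norm

def fuse_gaussian_mixtures(mix_a, mix_b):
    """Fuse two 1-D Gaussian-mixture beliefs by renormalizing their product.

    Each mixture is a list of (weight, mean, variance) components; the product
    of two mixtures is again a mixture whose components are the pairwise
    Gaussian products."""
    fused = []
    for wa, ma, va in mix_a:
        for wb, mb, vb in mix_b:
            v = 1.0 / (1.0 / va + 1.0 / vb)              # product variance
            m = v * (ma / va + mb / vb)                  # product mean
            scale = norm.pdf(ma - mb, loc=0.0, scale=np.sqrt(va + vb))
            fused.append([wa * wb * scale, m, v])
    total = sum(w for w, _, _ in fused)
    return [(w / total, m, v) for w, m, v in fused]

prior = [(0.6, 0.0, 1.0), (0.4, 4.0, 2.0)]       # bimodal prior belief
measurement = [(1.0, 3.0, 0.5)]                  # unimodal measurement belief
print(fuse_gaussian_mixtures(prior, measurement))
```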
810

Optimal Risk-based Pooled Testing in Public Health Screening, with Equity and Robustness Considerations

Aprahamian, Hrayer Yaznek Berg 03 May 2018 (has links)
Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts. We study Dorfman and Array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights and allow us to model the testing design problems as network flow problems, develop efficient algorithms, and derive insights on equity and robustness versus accuracy trade-off. One of our models reduces to a constrained shortest path problem, for a special case of which we develop a polynomial-time algorithm. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture. Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices. / PHD
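For background, the sketch below computes the classical Dorfman two-stage expected number of tests per subject for a homogeneous population with a perfect test, and brute-forces the best group size. The risk-based, imperfect-test, equity-aware designs studied in the dissertation generalize this simple baseline.

```python
def dorfman_tests_per_subject(p, group_size):
    """Expected tests per subject under classical two-stage Dorfman testing
    with prevalence p, a perfect test, and groups of the given size: one
    pooled test per group, plus individual retests if the pool is positive."""
    if group_size == 1:
        return 1.0
    return 1.0 / group_size + (1.0 - (1.0 - p) ** group_size)

p = 0.02
best = min(range(2, 51), key=lambda k: dorfman_tests_per_subject(p, k))
print(best, round(dorfman_tests_per_subject(p, best), 3))
# For p = 0.02 the optimal homogeneous group size is 8, using ~0.27 tests
# per subject versus 1.0 for individual testing.
```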
