1 |
Designing computer experiments to estimate integrated response functions. Marin, Ofelia, January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Includes bibliographical references (p. 115-117).
2 |
Robust and adaptive sampled data I - control. Ozdemir, Necati, January 2000 (has links)
No description available.
3 |
Performance evaluation in Bayesian adaptive randomization. Wang, Degang. Lee, Jack J., Fu, Yunxin, Lai, Dajian, Boerwinkle, Eric, January 2008 (has links)
Source: Masters Abstracts International, Volume: 47-03, page: 1686. Advisers: Jack J. Lee; Yunxin Fu. Includes bibliographical references.
4 |
Sequential Optimal Recovery: A Paradigm for Active Learning. Niyogi, Partha, 12 May 1995 (has links)
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
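For the monotone case, the flavor of such an adaptive strategy can be sketched in a few lines (a minimal illustration assuming the worst-case error on an interval scales with its width times its rise; the helper names are hypothetical and this is not claimed to be the paper's exact procedure):

```python
import numpy as np

def active_sample_monotone(f, a, b, budget):
    """Adaptively sample a monotone increasing f on [a, b] (sketch only).

    At each step, the interval whose uncertainty rectangle
    (width times rise) is largest is split at its midpoint.
    """
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(budget):
        # For a monotone function, the worst-case error on an interval is
        # bounded in terms of its width times its rise.
        scores = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])
                  for i in range(len(xs) - 1)]
        i = int(np.argmax(scores))
        xm = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, xm)
        ys.insert(i + 1, f(xm))
    return np.array(xs), np.array(ys)

# Example test function: the sampler concentrates points where f rises sharply.
xs, ys = active_sample_monotone(lambda x: 1 / (1 + np.exp(-20 * (x - 0.7))),
                                0.0, 1.0, budget=20)
```

A passive learner would spread the same budget uniformly; the adaptive scheme concentrates samples where the function changes fastest, which is the qualitative behavior behind the abstract's comparison with passive learning.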
5 |
Implementation of an adaptive importance sampling technique in MCNP for monoenergetic slab problems. Mosher, Scott William, 05 1900 (has links)
No description available.
6 |
An analysis of the adaptive cluster sampling design with rare plant point distributions / Tout, Jeremy. January 1900 (has links)
Thesis (M.A.)--Humboldt State University, 2009. / Includes bibliographical references (leaves 29-31). Also available via Humboldt Digital Scholar.
7 |
Adaptive Sampling for Targeted Software Testing. Shah, Abhishek, January 2024 (has links)
Targeted software testing is a critical task in the development of secure software. Its core challenge is to generate many inputs that reach specific target code locations in a given program. This task is NP-hard in theory, and real-world programs have very large input spaces and many lines of code, so it is difficult in practice as well.
In this thesis, I introduce a new approach for targeted software testing based on adaptive sampling. The key insight is to reduce the original problem to a sequence of approximate counting problems, and I apply this approach to targeted software testing in two stages.
First, to find a single target-reaching input when no such input is given, I develop a new search algorithm, MC2, that uses probabilistic bisection with approximate-count feedback to adaptively narrow down which input region is more likely to contain a target-reaching input.
Second, given a single target-reaching input, I develop a new set-approximation algorithm, ProgramSampler, that adaptively learns an approximation of the set of target-reaching inputs from approximate-count feedback; this approximation can then be sampled uniformly and efficiently to produce many target-reaching inputs.
Backed by theoretical guarantees, these techniques have been highly effective in practice, outperforming existing methods by 1-2 orders of magnitude on average.
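The probabilistic-bisection idea behind the first stage can be illustrated generically (a sketch only: a hypothetical noisy directional oracle stands in for the approximate-count feedback, and nothing here is claimed to match MC2's actual implementation):

```python
import numpy as np

def probabilistic_bisection(noisy_is_right_of_target, lo, hi,
                            p_correct=0.7, n_bins=1024, n_queries=60):
    """One-dimensional probabilistic bisection (generic sketch, not MC2).

    A discrete belief over the target's location is maintained; each query is
    placed at the belief's median, and the mass on the side the noisy oracle
    indicates is boosted according to its assumed accuracy p_correct.
    """
    edges = np.linspace(lo, hi, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    belief = np.full(n_bins, 1.0 / n_bins)
    for _ in range(n_queries):
        cdf = np.cumsum(belief)
        q = centers[np.searchsorted(cdf, 0.5)]   # query at the belief median
        right = noisy_is_right_of_target(q)      # noisy directional answer
        side = centers > q if right else centers <= q
        belief = belief * np.where(side, p_correct, 1.0 - p_correct)
        belief /= belief.sum()
    return centers[np.argmax(belief)]

# Hypothetical oracle: reports whether the target (0.3) lies to the right of
# the query, answering correctly with probability 0.7.
rng = np.random.default_rng(0)
oracle = lambda q: (0.3 > q) if rng.random() < 0.7 else not (0.3 > q)
estimate = probabilistic_bisection(oracle, 0.0, 1.0)
```

Per the abstract, the directional feedback in MC2 comes from approximate counts over input regions rather than from a hand-written oracle.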
8 |
Adaptive Sampling Line Search for Simulation Optimization. Ragavan, Prasanna Kumar, 08 March 2017 (has links)
This thesis is concerned with the development of algorithms for simulation optimization (SO), a special case of stochastic optimization where the objective function can only be evaluated through noisy observations from a simulation. Deterministic techniques, when applied directly to simulation optimization problems, fail to converge because they cannot handle randomness, so more sophisticated algorithms are required. However, many existing algorithms dedicated to simulation optimization perform poorly in practice because they require extensive parameter tuning.
To overcome these shortfalls of existing SO algorithms, we develop ADALINE, a line-search-based algorithm that eliminates the need for any user-defined parameters. ADALINE is designed to identify a local minimum on continuous and integer-ordered feasible sets. On a continuous feasible set it mimics deterministic line search algorithms, while on integer-ordered feasible sets it iterates between a line search and an enumeration procedure in its quest to identify a local minimum. ADALINE improves upon many existing SO algorithms by determining the sample size adaptively as a trade-off between the estimation error and the optimization error; that is, the algorithm expends simulation effort in proportion to the quality of the incumbent solution. We also show that ADALINE converges "almost surely" to the set of local minima. Finally, our numerical results suggest that ADALINE converges to a local minimum faster, outperforming other advanced SO algorithms that utilize variable sampling strategies.
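The adaptive-sample-size idea can be expressed as a simple stopping rule (a rough sketch with hypothetical names such as simulate and opt_gap_estimate; ADALINE's actual test is more involved):

```python
import math
import statistics

def adaptive_sample_size(simulate, point, opt_gap_estimate,
                         n0=10, n_max=10_000, c=1.0):
    """Sketch of adaptive sampling at a candidate solution (not ADALINE's
    exact rule): keep replicating the simulation until the standard error of
    the mean is at most c times the current estimate of the optimization
    error, so simulation effort grows only as the incumbent improves.
    """
    obs = [simulate(point) for _ in range(n0)]
    while len(obs) < n_max:
        std_err = statistics.stdev(obs) / math.sqrt(len(obs))
        if std_err <= c * opt_gap_estimate:   # estimation error small enough
            break
        obs.append(simulate(point))
    return statistics.fmean(obs), len(obs)
```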
To demonstrate the performance of our algorithm on a practical problem, we apply ADALINE to a surgery rescheduling problem. In the rescheduling problem, the objective is to minimize the cost of disruptions to an existing schedule shared between multiple surgical specialties while accommodating semi-urgent surgeries that require expedited intervention. The disruptions to the schedule are determined using a threshold-based heuristic, and ADALINE identifies the threshold levels for the various surgical specialties that minimize the expected total cost of disruption. A comparison of the solutions obtained using a Sample Average Approximation (SAA) approach and ADALINE is provided. We find that the adaptive sampling strategy in ADALINE identifies a better solution more quickly than SAA. / Ph. D. / This thesis is concerned with the development of algorithms for simulation optimization (SO), where the objective function does not have an analytical form and can only be estimated through noisy observations from a simulation. Deterministic techniques, when applied directly to simulation optimization problems, fail to converge because they cannot handle randomness, so more sophisticated algorithms are required. However, many existing algorithms dedicated to simulation optimization perform poorly in practice because they require extensive parameter tuning.
To overcome these shortfalls of existing SO algorithms, we develop ADALINE, a line-search-based algorithm that minimizes the need for user-defined parameters. ADALINE is designed to identify a local minimum on continuous and integer-ordered feasible sets. On continuous feasible sets it mimics deterministic line search algorithms, while on integer-ordered feasible sets it iterates between a line search and an enumeration procedure in its quest to identify a local minimum. ADALINE improves upon many existing SO algorithms by determining the sample size adaptively as a trade-off between the estimation error and the optimization error; that is, the algorithm expends simulation effort in proportion to the quality of the incumbent solution. Finally, our numerical results suggest that ADALINE converges to a local minimum faster than the best available SO algorithm for the purpose.
To demonstrate the performance of our algorithm on a practical problem, we apply ADALINE to a surgery rescheduling problem. In the rescheduling problem, the objective is to minimize the cost of disruptions to an existing schedule shared between multiple surgical specialties while accommodating semi-urgent surgeries that require expedited intervention. The disruptions to the schedule are determined using a threshold-based heuristic, and ADALINE identifies the threshold levels for the various surgical specialties that minimize the expected total cost of disruption. A comparison of the solutions obtained using traditional optimization techniques and ADALINE is provided. We find that the adaptive sampling strategy in ADALINE identifies a better solution more quickly than traditional optimization.
9 |
An efficient approach for high-fidelity modeling incorporating contour-based sampling and uncertainty. Crowley, Daniel R., 13 January 2014 (links)
During the design process for an aerospace vehicle, decision-makers must have an accurate understanding of how each choice will affect the vehicle and its performance. This understanding is based on experiments and, increasingly often, computer models. In general, as a computer model captures a greater number of phenomena, its results become more accurate for a broader range of problems. This improved accuracy typically comes at the cost of significantly increased computational expense per analysis.
Although rapid analysis tools have been developed that are sufficient for many design efforts, those tools may not be accurate enough for revolutionary concepts subject to demanding flight conditions such as transonic or supersonic flight and extreme angles of attack. At such conditions, the simplifying assumptions of the rapid tools no longer hold. Accurate analysis of such concepts would require models that do not make those simplifying assumptions, with the corresponding increases in computational effort per analysis. As computational costs rise, exploration of the design space can become exceedingly expensive. If this expense cannot be reduced, decision-makers would be forced to choose between a thorough exploration of the design space using inaccurate models, or the analysis of a sparse set of options using accurate models. This problem is exacerbated as the number of free parameters increases, limiting the number of trades that can be investigated in a given time. In the face of limited resources, it can become critically important that only the most useful experiments be performed, which raises two questions: how can the most useful experiments be identified, and how can experimental results be used most effectively?
This research effort focuses on identifying and applying techniques which could address these questions. The demonstration problem for this effort was the modeling of a reusable booster vehicle, which would be subject to a wide range of flight conditions while returning to its launch site after staging. Contour-based sampling, an adaptive sampling technique, seeks cases that will improve the prediction accuracy of surrogate models for particular ranges of the responses of interest. In the case of the reusable booster, contour-based sampling was used to emphasize configurations with small pitching moments; the broad design space included many configurations which produced uncontrollable aerodynamic moments for at least one flight condition. By emphasizing designs that were likely to trim over the entire trajectory, contour-based sampling improves the predictive accuracy of surrogate models for such designs while minimizing the number of analyses required.
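A generic contour-oriented infill score of this flavor is sketched below (an illustrative acquisition built on a surrogate's predicted mean and standard deviation; the response band and the uncertainty weighting are assumptions, not necessarily the criterion used in this work):

```python
import numpy as np
from scipy.stats import norm

def contour_scores(mu, sigma, band_lo, band_hi):
    """Score candidate cases for contour-oriented sampling (generic sketch).

    mu and sigma are the surrogate's predicted mean and standard deviation at
    each candidate. Candidates that are both uncertain and likely to land in
    the response band of interest (e.g. near-zero pitching moment) score
    highest and would be analyzed next.
    """
    p_in_band = norm.cdf(band_hi, mu, sigma) - norm.cdf(band_lo, mu, sigma)
    return p_in_band * sigma

# Hypothetical surrogate predictions at three candidate configurations.
mu = np.array([-0.40, 0.05, 0.30])
sigma = np.array([0.10, 0.20, 0.05])
next_case = int(np.argmax(contour_scores(mu, sigma, -0.10, 0.10)))
```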
The simplified models mentioned above, although less accurate for extreme flight conditions, can still be useful for analyzing performance at more common flight conditions. The simplified models may also offer insight into trends in the response behavior. Data from these simplified models can be combined with more accurate results to produce useful surrogate models with better accuracy than the simplified models but at less cost than if only expensive analyses were used. Of the data fusion techniques evaluated, Ghoreyshi cokriging was found to be the most effective for the problem at hand.
Lastly, uncertainty present in the data was found to negatively affect predictive accuracy of surrogate models. Most surrogate modeling techniques neglect uncertainty in the data and treat all cases as deterministic. This is plausible, especially for data produced by computer analyses which are assumed to be perfectly repeatable and thus truly deterministic. However, a number of sources of uncertainty, such as solver iteration or surrogate model prediction accuracy, can introduce noise to the data. If these sources of uncertainty could be captured and incorporated when surrogate models are trained, the resulting surrogate models would be less susceptible to that noise and correspondingly have better predictive accuracy. This was accomplished in the present effort by capturing the uncertainty information via nuggets added to the Kriging model.
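The nugget mechanism can be illustrated with a minimal kriging predictor (a sketch with a squared-exponential kernel, fixed hyperparameters, and hypothetical data; the thesis's surrogate setup is more elaborate). Each observation's noise variance is added to the diagonal of the training covariance matrix, so the model is not forced to interpolate noisy cases exactly:

```python
import numpy as np

def kriging_predict(X, y, noise_var, X_new, length_scale=1.0, sf2=1.0):
    """Zero-mean Gaussian-process (kriging) prediction with per-point nuggets."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf2 * np.exp(-0.5 * d2 / length_scale ** 2)

    K = k(X, X) + np.diag(noise_var)           # nuggets on the diagonal
    Ks = k(X_new, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sf2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Hypothetical data: the middle observation carries more solver noise, so it
# receives a larger nugget and is smoothed over rather than interpolated.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 0.9, 0.1])
mean, var = kriging_predict(X, y, np.array([1e-6, 0.05, 1e-6]),
                            np.array([[0.25], [0.75]]))
```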
By combining these techniques, surrogate models could be created which exhibited better predictive accuracy while selecting the most informative experiments possible. This significantly reduced the computational effort expended compared to a more standard approach using space-filling samples and data from a single source. The relative contributions of each technique were identified, and observations were made pertaining to the most effective way to apply the separate and combined methods.
10 |
New methods for studying complex diseases via genetic association studies. Schu, Matthew Charles, 22 January 2016 (links)
Genome-wide association studies (GWAS) have delivered novel insights into the etiology of many common heritable diseases. However, in most disorders studied by GWAS, the known single nucleotide polymorphisms (SNPs) associated with the disease do not account for a large portion of the genetic factors underlying the condition. This suggests that many of the undiscovered variants contributing to the risk of common diseases either have weak effects or are relatively rare. This thesis introduces novel adaptations of techniques for improving detection power for both types of risk variants, and reports the results of analyses applying these methods to real datasets for common diseases.
Chapter 2 describes a novel approach to improving the detection of weak-effect risk variants, based on an adaptive sampling technique known as Distilled Sensing (DS). The procedure uses a portion of the total sample to exclude from consideration regions of the genome that show no evidence of genetic association, and then tests the greatly reduced number of variants in the remaining sample. Application of the method to simulated data sets and to GWAS data from studies of age-related macular degeneration (AMD) demonstrated that, in many situations, DS can have superior power to detect weak-effect loci compared with traditional meta-analysis techniques.
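The two-stage flavor of such a screen can be sketched as follows (a generic illustration with assumed thresholds and a simple per-SNP correlation test; this is not the DS-based procedure evaluated in the chapter):

```python
import numpy as np
from scipy import stats

def two_stage_screen(genotypes, phenotype, stage1_alpha=0.20, final_alpha=0.05):
    """Generic two-stage association screen (sketch only).

    genotypes: (n_subjects, n_snps) numeric array; phenotype: (n_subjects,).
    Half of the sample is spent on a liberal first-stage test that discards
    clearly null SNPs; the survivors are tested on the held-out half with a
    Bonferroni correction over the much smaller survivor set.
    """
    n = len(phenotype)
    idx = np.random.permutation(n)
    first, second = idx[:n // 2], idx[n // 2:]

    def pvals(rows, snps):
        return np.array([stats.pearsonr(genotypes[rows, j], phenotype[rows])[1]
                         for j in snps])

    all_snps = np.arange(genotypes.shape[1])
    survivors = all_snps[pvals(first, all_snps) < stage1_alpha]   # stage 1
    if survivors.size == 0:
        return survivors
    p2 = pvals(second, survivors)                                 # stage 2
    return survivors[p2 < final_alpha / survivors.size]
```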
Chapter 3 describes an innovative pipeline to screen for rare variants in next generation sequencing (NGS) data. Since rare variants, by definition, are likely to be present in only a few individuals even in large samples, efficient methods to screen for rare causal variants are critical for advancing the utility of NGS technology. Application of our approach, which uses family-based data to identify candidate rare variants that could explain aggregation of disease in some pedigrees, resulted in the discovery of novel protein-coding variants linked to increased risk for Alzheimer's disease (AD) in African Americans.
The techniques presented in this thesis address different aspects of the "missing heritability" problem and offer efficient approaches to discover novel risk variants, and thereby facilitate development of a more complete picture of genetic risk for common diseases.