1 |
On fixed-width simultaneous confidence intervals for multiple comparisons and some related problems. Nikouukar-Zanjani, Masoud. January 1996.
No description available.
|
2 |
Cooperative strategies for spatial resource allocation. Moore, Brandon Joseph. 16 July 2007.
No description available.
|
3 |
Efficient sampling-based RBDO by using virtual support vector machine and improving the accuracy of the Kriging method. Song, Hyeongjin. 01 December 2013.
The objective of this study is to propose an efficient sampling-based RBDO method using a new classification method to reduce the computational cost. In addition, accuracy improvement strategies for the Kriging method are proposed to reduce the number of expensive computer experiments. The research effort involves: (1) developing a new classification method that is more efficient than conventional surrogate modeling methods while maintaining the required accuracy level; (2) developing a sequential adaptive sampling method that inserts samples near the limit state function; (3) improving the efficiency of the RBDO process by using a fixed hyper-spherical local window with an efficient uniform sampling method and identification of active/violated constraints; and (4) improving the accuracy of the Kriging method by introducing several strategies.
In sampling-based RBDO, only accurate classification information is needed rather than an accurate response surface. In general, however, surrogates are constructed using all available DoE samples rather than focusing on the limit state function. Constructing such surrogates can therefore be computationally expensive, and the accuracy of the limit state (or decision) function may be sacrificed in return for reducing error in regions far from the limit state function where accuracy is unnecessary. In contrast, the support vector machine (SVM), a classification method, uses only the support vectors, which are located near the limit state function, to define the decision function. The SVM is therefore very efficient and ideally suited to sampling-based RBDO, provided its accuracy is improved by inserting virtual samples near the limit state function.
The proposed sequential sampling method inserts new samples near the limit state function so that the number of DoE samples is minimized. Many engineering problems involve expensive computer simulations, so the total computational cost needs to be reduced by using fewer DoE samples.
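As a rough illustration of these two ideas (not the author's implementation), the sketch below trains an SVM classifier on DoE samples labeled by constraint feasibility and then selects the next sample from a candidate pool by proximity to the current decision boundary; the toy constraint and all parameter values are assumptions.

```python
# Sketch: SVM classification of DoE samples plus a simple sequential step
# that inserts the next sample near the estimated limit state.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 2))      # initial DoE samples
g = X[:, 0] ** 2 + X[:, 1] - 1.0               # toy limit state: g(x) = 0
y = (g <= 0.0).astype(int)                     # 1 = feasible, 0 = violated

svm = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X, y)
print("support vectors:", svm.n_support_.sum(), "of", len(X), "DoE samples")

# Sequential step: among random candidates, choose the point closest to the
# decision boundary (smallest |decision function|) as the next sample.
candidates = rng.uniform(-2.0, 2.0, size=(1000, 2))
x_next = candidates[np.argmin(np.abs(svm.decision_function(candidates)))]
print("next sample inserted near the limit state:", x_next)
```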
Several efficiency strategies are used for the proposed sampling-based RBDO with virtual SVM: (1) launching RBDO at a deterministic optimum design; (2) hyper-spherical local windows with an efficient uniform sampling method; (3) filtering of constraints; (4) sample reuse; and (5) improved virtual sample generation.
The number of computer experiments is also reduced by implementing accuracy improvement strategies for the Kriging method. Since the Kriging method is used both for generating virtual samples and for generating the response surface of the cost function, the number of computer experiments can be reduced by introducing: (1) accurate correlation parameter estimation; (2) penalized maximum likelihood estimation (PMLE) for small sample sizes; (3) correlation model selection by MLE; and (4) mean structure selection by cross-validation (CV) error.
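For reference, here is a minimal sketch of the first ingredient: estimating the Kriging correlation parameter by maximum likelihood with a constant mean and a Gaussian correlation function. The test data and the grid search are assumptions standing in for an expensive simulation and a numerical optimizer.

```python
# Sketch: MLE of the Gaussian correlation parameter theta for Kriging,
# using the concentrated (profile) log-likelihood with a constant mean.
import numpy as np

def neg_conc_loglik(theta, x, y):
    n = len(x)
    R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2) + 1e-8 * np.eye(n)
    L = np.linalg.cholesky(R)                       # also validates R > 0
    Rinv_1 = np.linalg.solve(R, np.ones(n))
    beta = Rinv_1 @ y / Rinv_1.sum()                # GLS estimate of the mean
    resid = y - beta
    sigma2 = resid @ np.linalg.solve(R, resid) / n  # profiled process variance
    logdet_R = 2.0 * np.log(np.diag(L)).sum()
    return 0.5 * (n * np.log(sigma2) + logdet_R)    # negative concentrated LL

x = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * x)                         # toy responses
thetas = np.logspace(-1, 2, 200)
theta_hat = min(thetas, key=lambda t: neg_conc_loglik(t, x, y))
print("MLE of the correlation parameter:", theta_hat)
```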
|
4 |
Comparisons of methods for generating conditional Poisson samples and Sampford samples. Grafström, Anton. January 2005.
Methods for conditional Poisson sampling (CP-sampling) and Sampford sampling are compared, with the focus on their efficiency. Efficiency is investigated by simulation in different sampling situations. The comparison was motivated by the new methods for both CP-sampling and Sampford sampling introduced by Bondesson, Traat & Lundqvist in 2004. These new methods are acceptance-rejection methods that use the efficient Pareto sampling method as the proposal; they are found to be very efficient and useful in all situations. The list-sequential methods for both CP-sampling and Sampford sampling are also found to be efficient, especially if many samples are to be generated.
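To make the acceptance-rejection idea concrete, here is a minimal sketch of CP-sampling that uses plain Poisson sampling as the proposal rather than the far more efficient Pareto proposal of Bondesson, Traat & Lundqvist; the inclusion probabilities and sample size are arbitrary assumptions.

```python
# Sketch: conditional Poisson sampling by acceptance-rejection. Draw
# independent Bernoulli indicators (a Poisson sample) and accept only
# when the realized sample size equals the target n.
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.2, 0.25, 0.35, 0.5, 0.7])   # working inclusion probabilities
n = 2                                        # fixed sample size to condition on

while True:
    indicators = rng.random(p.size) < p      # one Poisson sample
    if indicators.sum() == n:                # accept only size-n samples
        break
sample = np.flatnonzero(indicators)
print("CP-sample of size", n, ":", sample)
```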
|
5 |
Neural Correlates of Speed-Accuracy Tradeoff: An Electrophysiological Analysis. Heitz, Richard Philip. 29 March 2007.
Recent computational models and physiological studies suggest that simple, two-alternative forced-choice decision making can be conceptualized as the gradual accumulation of sensory evidence. Accordingly, information is sampled over time from a sensory stimulus, giving rise to an activation function. A response is emitted when this function reaches a criterion level of activity. Critically, the phenomenon known as speed-accuracy tradeoff (SAT) is modeled as a shift in the response boundaries (criterion). As speed stress increases and criterion is lowered, the information function travels less distance before reaching threshold. This leads to faster overall responses, but also to an increase in error rate, given that less information is accumulated. Psychophysiological data using EEG and single-unit recordings from monkey cortex suggest that these accumulator models are biologically plausible. The present work is an effort to strengthen this position. Specifically, it seeks to demonstrate a neural correlate of criterion and its relationship to behavior. To do so, subjects performed a letter discrimination paradigm under three levels of speed stress. At the same time, the electroencephalogram (EEG) was used to derive the lateralized readiness potential (LRP), a measure known to reflect ongoing motor preparation in motor cortex. In Experiment 1, the amplitude of the LRP was related to speed stress: as subjects were forced to respond more quickly, less information was accumulated before making a response. In other words, criterion was lowered. These data are qualified by Experiment 2, which found boundary conditions under which this effect obtains.
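The criterion-shift account lends itself to a short simulation. The sketch below is a generic random-walk accumulator with arbitrary drift and noise values (assumptions, not fitted parameters); lowering the criterion speeds responses while raising the error rate, reproducing the SAT.

```python
# Sketch: a minimal evidence-accumulation (random walk) model of two-choice
# decisions. Evidence drifts toward the correct boundary; a response is
# emitted when |evidence| reaches the criterion.
import numpy as np

def simulate(criterion, drift=0.1, noise=1.0, trials=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng(2)
    rts, errors = [], 0
    for _ in range(trials):
        evidence, t = 0.0, 0
        while abs(evidence) < criterion:
            evidence += drift + noise * rng.standard_normal()
            t += 1
        rts.append(t)
        errors += evidence < 0             # wrong boundary reached
    return np.mean(rts), errors / trials

for c in (5.0, 10.0, 20.0):                # lower criterion = more speed stress
    rt, err = simulate(c)
    print(f"criterion {c:5.1f}: mean RT {rt:7.1f} steps, error rate {err:.3f}")
```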
|
6 |
Sequential Sampling in Noisy Multi-Objective Evolutionary Optimization. Siegmund, Florian. January 2009.
Most real-world optimization problems behave stochastically. Evolutionary optimization algorithms have to cope with the uncertainty in order not to lose a substantial part of their performance. There are different types of uncertainty; this thesis studies the type commonly known as noise and the use of resampling techniques as a countermeasure in multi-objective evolutionary optimization. Several types of resampling techniques have been proposed in the literature. The available techniques vary in their adaptiveness, in the type of information on which they base their budget decisions, and in their complexity. The results of this thesis show that performance does not necessarily increase with complexity, and that it depends on the optimization problem and on environment parameters. The optimal resampling technique varies with the sampling budget and the noise level: at low computing budgets or low noise strength, simple techniques perform better than complex ones, but as soon as more budget is available or the algorithm faces more noise, complex techniques can show their strengths. The thesis evaluates the resampling techniques on standard benchmark functions. Based on these experiences, insights have been gained for the use of resampling techniques in evolutionary simulation optimization of real-world problems.
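As a point of reference, the simplest such countermeasure is static resampling: average k noisy evaluations per solution, which shrinks the noise standard deviation by a factor of sqrt(k) at the cost of k simulation runs. The objective function and noise level below are assumptions for illustration; the adaptive techniques studied in the thesis vary k rather than fixing it.

```python
# Sketch: static resampling of a noisy fitness function. Averaging k
# evaluations reduces the standard deviation of the fitness estimate
# from sigma to sigma / sqrt(k).
import numpy as np

rng = np.random.default_rng(3)

def noisy_fitness(x, sigma=0.5):
    return x ** 2 + sigma * rng.standard_normal()   # true objective + noise

def resampled_fitness(x, k):
    return np.mean([noisy_fitness(x) for _ in range(k)])

x = 1.0                                             # true fitness = 1.0
for k in (1, 5, 25):
    estimates = [resampled_fitness(x, k) for _ in range(2000)]
    print(f"k={k:2d} evaluations: std of fitness estimate = {np.std(estimates):.3f}")
```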
|
7 |
Transition from the late Roman period to the early Anglo-Saxon period in the Upper Thames Valley based on stable isotopes. Sakai, Yurika. January 2017.
Following the argument of cultural change between the Romano-British and Anglo-Saxon periods in Britain, the purpose of this thesis is to find evidence of change in human diet and animal husbandry in the Upper Thames Valley across this boundary. Research questions are set to find differences in human diet, animal diet, and birth seasonality of herbivores at Horcott, a site showing human activity in both periods. Stable carbon and nitrogen isotope measurements on collagen from humans and livestock animals, and on enamel carbonate extracted from herbivores, were analysed. Results showed changes in the diets of cattle, sheep/goats, pigs, and humans, and in the birth seasonality of cattle and sheep/goats. These changes were argued to have been caused by differences in the intensity of fertilising crop fields, the amount of animal protein fed to adult pigs, the amount of non-local food in the human diet, and the significance and purpose of livestock rearing and the preference for dairy products. The outcome of this thesis enhances the understanding of: a) the strategy and the amount of human effort put into crop cultivation and livestock management; b) the availability of and preference for food for humans depending on the period; and c) the site-dependent differences in the extent of change in the course of the transition between the Romano-British and Anglo-Saxon periods. This thesis demonstrates the importance of animal data for discussing human diet, and the advantage of modelling sequential enamel carbonate data when analysing worn and shortened teeth.
|
8 |
GLR Control Charts for Process Monitoring with Sequential Sampling. Peng, Yiming. 06 November 2014.
The objective of this dissertation is to investigate GLR control charts based on a sequential sampling scheme (SS GLR charts). Phase II monitoring is considered, and the goal is to quickly detect a wide range of changes in the mean and/or variance of a univariate normal process. The performance of the SS GLR charts is evaluated, and design guidelines are provided so that practitioners can easily apply the SS GLR charts in applications. More specifically, the structure of this dissertation is as follows:
We first develop a two-sided SS GLR chart for monitoring the mean μ of a normal process. The performance of the SS GLR chart is evaluated and compared with other control charts. The SS GLR chart has much better performance than that of the fixed sampling rate GLR chart. It is also shown that the overall performance of the SS GLR chart is better than that of the variable sampling interval (VSI) GLR chart and the variable sampling rate (VSR) CUSUM chart. The SS GLR chart has the additional advantage that it requires fewer parameters to be specified than other VSR charts. The optimal parameter choices are given, and regression equations are provided to find the limits for the SS GLR chart.
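For orientation, the sketch below computes the classical two-sided GLR statistic for a mean shift with known in-control parameters, profiling out the unknown post-change mean; the sequential-sampling scheme of the dissertation governs when observations are taken, not this core statistic. The threshold h and the data are arbitrary assumptions, not optimized design values.

```python
# Sketch: two-sided GLR statistic for a shift in the mean of a normal
# process with known in-control mean mu0 and standard deviation sigma.
import numpy as np

def glr_mean_statistic(x, mu0=0.0, sigma=1.0):
    # Maximize the log generalized likelihood ratio over candidate change
    # points tau, with the post-change mean replaced by its MLE (the
    # post-change sample mean).
    x = np.asarray(x, dtype=float)
    k = len(x)
    stats = []
    for tau in range(k):                   # change occurs after observation tau
        m = k - tau                        # post-change sample size
        xbar = x[tau:].mean()
        stats.append(m * (xbar - mu0) ** 2 / (2.0 * sigma ** 2))
    return max(stats)

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0.0, 1.0, 30),    # in control
                       rng.normal(1.0, 1.0, 20)])   # mean shift of 1 sigma
h = 5.0                                             # assumed control limit
for k in range(1, len(data) + 1):
    if glr_mean_statistic(data[:k]) > h:
        print("signal at observation", k)
        break
```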
If detecting one-sided shifts in μ is of interest, the above SS GLR chart can be modified to be a one-sided chart. The performance of this modified SS GLR chart is investigated.
Next we develop an SS GLR chart for simultaneously monitoring the mean μ and the variance σ² of a normal process. The performance and properties of this chart are evaluated. The design methodology and some illustrative examples are provided so that the SS GLR chart can be easily used in applications. The optimal parameter choices are given, and the performance of the SS GLR chart remains very good as long as the parameter choices are not too far away from the optimized choices.
Ph.D.
|
9 |
Consistency and Uniform Bounds for Heteroscedastic Simulation Metamodeling and Their Applications. Zhang, Yutong. 05 September 2023.
Heteroscedastic metamodeling has gained popularity as an effective tool for analyzing and optimizing complex stochastic systems. A heteroscedastic metamodel provides an accurate approximation of the input-output relationship implied by a stochastic simulation experiment whose output is subject to input-dependent noise variance. Several challenges remain unsolved in this field. First, in-depth investigations into the consistency of heteroscedastic metamodeling techniques, particularly from the sequential prediction perspective, are lacking. Second, sequential heteroscedastic metamodel-based level-set estimation (LSE) methods are scarce. Third, the increasingly high computational cost required by heteroscedastic Gaussian process-based LSE methods in the sequential sampling setting is a concern. Additionally, when constructing a valid uniform bound for a heteroscedastic metamodel, the impact of noise variance estimation is not adequately addressed. This dissertation aims to tackle these challenges and provide promising solutions. First, we investigate the information consistency of a widely used heteroscedastic metamodeling technique, stochastic kriging (SK). Second, we propose SK-based LSE methods leveraging novel uniform bounds for input-point classification. Moreover, we incorporate the Nyström approximation and a principled budget allocation scheme to improve the computational efficiency of SK-based LSE methods. Lastly, we investigate empirical uniform bounds that take into account the impact of noise variance estimation, ensuring adequate coverage capability.
Doctor of Philosophy
In real-world engineering problems, understanding and optimizing complex systems can be challenging and prohibitively expensive. Computer simulation is a valuable tool for analyzing and predicting system behaviors, allowing engineers to explore different scenarios without relying on costly physical prototypes. However, the increasing complexity of simulation models leads to a higher computational burden. Metamodeling techniques have emerged to address this issue by accurately approximating the system performance response surface based on limited simulation experiment data, enabling real-time decision-making. Heteroscedastic metamodeling goes further by considering the varying noise levels inherent in simulation outputs, resulting in more robust and accurate predictions. Among various techniques, stochastic kriging (SK) stands out by striking a good balance between computational efficiency and statistical accuracy. Despite extensive research on SK, challenges persist in its application and methodology. These include little understanding of SK's consistency properties, the absence of sequential SK-based algorithms for level-set estimation (LSE) under heteroscedasticity, and the increasingly low computational efficiency of SK-based LSE methods in implementation. Furthermore, a precise construction of uniform bounds for the SK predictor is also missing. This dissertation aims to address these challenges. First, the information consistency of SK from a prediction perspective is investigated. Then, sequential SK-based procedures for LSE in stochastic simulation, incorporating novel uniform bounds for accurate input-point classification, are proposed. Furthermore, a popular approximation technique is incorporated to enhance the computational efficiency of the SK-based LSE methods. Lastly, empirical uniform bounds are investigated considering the impact of noise variance estimation.
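For readers unfamiliar with stochastic kriging, the sketch below shows its core prediction equation with a constant trend and input-dependent simulation noise estimated from replications. The kernel, its fixed parameters, and the toy simulator are assumptions; in practice the parameters are estimated by maximum likelihood.

```python
# Sketch: stochastic kriging (SK) prediction. The covariance used for
# prediction is the extrinsic (GP) covariance K plus the estimated
# intrinsic noise covariance Sigma_eps = diag(s_i^2 / n_i).
import numpy as np

def gauss_kernel(a, b, tau2=1.0, theta=10.0):
    return tau2 * np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(5)
xd = np.linspace(0.0, 1.0, 10)                 # design points
n_reps = 20                                    # replications per design point
sd = 0.2 + 0.8 * xd                            # input-dependent noise std
runs = np.sin(2 * np.pi * xd)[:, None] + sd[:, None] * rng.standard_normal((10, n_reps))
ybar = runs.mean(axis=1)                       # sample means at design points
Sigma_eps = np.diag(runs.var(axis=1, ddof=1) / n_reps)

K = gauss_kernel(xd, xd)
C = K + Sigma_eps
w = np.linalg.solve(C, np.ones_like(xd))
beta = w @ ybar / w.sum()                      # GLS estimate of the trend

x0 = np.array([0.37])                          # prediction point
k0 = gauss_kernel(x0, xd)
y0 = beta + k0 @ np.linalg.solve(C, ybar - beta)
print("SK prediction at x0=0.37:", y0[0], " truth:", np.sin(2 * np.pi * 0.37))
```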
|