Weighted multiple response optimization methodology: modifications and examination of limitations associated with applications of weighted methods
Gheware, Ashish G., 01 July 2003
No description available.
Response surface optimization techniques for multiple objective and randomly valued independent variable problems
Dvorak, Todd M., 01 October 2000
No description available.
The use of correlated simulation experiments in response surface optimization
Donohue, Joan M., January 1988
Response surface methodology (RSM) provides a useful framework for the optimization of stochastic simulation models. The sequential experimentation and model fitting procedures of RSM enable prediction of the response and location of the optimum operating conditions. In a simulation environment, the experimentation phase of RSM involves selecting the input variable levels for each simulation run and assigning pseudorandom number streams to the stochastic model components. Through an appropriate assignment of random number streams to simulation runs, correlation among the simulated responses can be induced, thereby effecting reductions in the variances of certain model coefficients. Three methods of correlation induction are considered in this research: (i) no correlation induction, achieved through the use of independent streams, (ii) positive correlation induction, achieved through the use of common streams, and (iii) a combination of positive and negative correlation induction, achieved through the use of the assignment rule blocking strategy.
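The effect of stream assignment on response correlation can be sketched with a toy stochastic model (an illustrative assumption on my part — the model, seed handling, and replication counts below are not Donohue's simulation setup):

```python
import random

def simulate(x, seed):
    """Toy stochastic simulation: the response consumes the random
    number stream in an x-dependent way, so common streams induce
    positive but imperfect correlation between design points."""
    rng = random.Random(seed)
    draws = int(20 * x)
    noise = sum(rng.gauss(0, 1) for _ in range(draws)) / draws
    return 10 - (x - 2) ** 2 + noise

def correlation(pairs):
    """Sample correlation between the two coordinates of (y1, y2) pairs."""
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs) / n
    vx = sum((a - mx) ** 2 for a, _ in pairs) / n
    vy = sum((b - my) ** 2 for _, b in pairs) / n
    return cov / (vx * vy) ** 0.5

# Independent streams: a fresh seed for every run.
indep = [(simulate(1.0, 2 * r), simulate(3.0, 2 * r + 1)) for r in range(200)]
# Common streams: the same stream reused at both design points of a replicate.
common = [(simulate(1.0, r), simulate(3.0, r)) for r in range(200)]

print(f"independent streams: rho ~ {correlation(indep):+.2f}")
print(f"common streams:      rho ~ {correlation(common):+.2f}")
```

With common streams the shared draws push the sample correlation well above zero, which is exactly the covariance a correlation induction strategy exploits.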
The performance of the correlation induction strategies is evaluated in terms of two mean squared error design criteria: MSE of response and MSE of slope. The MSE of slope criterion is useful in the early stages of RSM, when the experimental objective is location of the region containing the optimum. The MSE of response criterion is useful in the later stages of RSM, when the experimental objective is prediction of the optimum response. The correlation induction strategies are evaluated under two experimental situations: fitting a first order model while protecting against quadratic curvature in the response surface, and fitting a second order model while protecting against cubic curvature. In the case of fitting a first order model, two-level factorial designs are used to evaluate the correlation induction strategies, and in the second order case, four design classes are considered: central composite designs, Box-Behnken designs, three-level factorial designs, and small composite designs.
The findings of this research indicate that the assignment rule blocking strategy generally performs the best of the three strategies under both MSE criteria, and the performance of this strategy improves as the magnitudes of the induced correlations increase. The independent streams strategy is a poor choice when the design criterion is MSE of slope, and the common streams strategy is a poor choice when the design criterion is MSE of response. The central composite and Box-Behnken designs were found to perform the best of the four second order design classes. The three-level factorial designs performed poorly under the MSE of response criterion and the small composite designs performed poorly under the MSE of slope criterion. / Ph. D.
Sequential design augmentation with model misspecification
Sutherland, Sindee S., 03 October 2007
In Response Surface Methodology (RSM) one attempts to model some variable of interest, usually as a known function of design variables. Subsequent analysis often indicates a need to move to a new region of interest. Many times the design is augmented by adding points sequentially to this new region of interest. Current methods of sequential design augmentation are used under the assumption of either correctly specified models or misspecification in the user’s model that can be quantified, such as using a first order model when a second order model is correct. However, under model misspecification the sequential placement of points in the new region of interest using usual augmentation techniques may not be optimal, especially if the misspecification in the model is not due to polynomial terms.
A new methodology, based on a modified kernel regression procedure called HATLINK, is presented that incorporates model misspecification into the sequential augmentation of points in the new region. HATLINK is a combination of parametric and nonparametric regressions and is designed to perform best when the user has specified a reasonable approximate model. Parametric regression supplies a basic fit, while nonparametric regression allows adjustments to compensate for some misspecification in the parametric model. The mixing parameter is determined adaptively through cross-validation. The augmentation is performed by a new technique called BIIV, the bias-influenced integrated prediction variance. BIIV attempts to select points that minimize the integrated prediction variance while favoring locations where the current fit is worst. Thus, BIIV incorporates an estimate of the bias due to misspecification of the parametric model into the augmentation procedure. It is shown that the designs generated by sequential design augmentation using HATLINK and BIIV are superior to designs from other methods. / Ph. D.
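The parametric-plus-nonparametric idea behind a HATLINK-style fit can be sketched in a few lines (the function names, the toy misspecified model, and the coarse grid of mixing parameters are my illustrative assumptions, not the dissertation's actual HATLINK or BIIV code):

```python
import math

def ols_fit(xs, ys):
    """Parametric part: ordinary least squares straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def kernel_smooth(xs, rs, h):
    """Nonparametric part: Nadaraya-Watson smooth of the residuals."""
    def fit(x):
        w = [math.exp(-((x - xi) / h) ** 2) for xi in xs]
        return sum(wi * ri for wi, ri in zip(w, rs)) / sum(w)
    return fit

def blended(xs, ys, lam, h):
    """Parametric fit plus lam times a kernel smooth of its residuals."""
    para = ols_fit(xs, ys)
    resid = [y - para(x) for x, y in zip(xs, ys)]
    smooth = kernel_smooth(xs, resid, h)
    return lambda x: para(x) + lam * smooth(x)

def loo_press(xs, ys, lam, h):
    """Leave-one-out prediction error sum of squares for a mixing parameter."""
    err = 0.0
    for i in range(len(xs)):
        xi, yi = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        err += (ys[i] - blended(xi, yi, lam, h)(xs[i])) ** 2
    return err

# Misspecified truth: the user's straight-line model misses the sine bump.
xs = [i / 10 for i in range(21)]
ys = [1 + 0.5 * x + 0.6 * math.sin(3 * x) for x in xs]
best = min((loo_press(xs, ys, lam, 0.3), lam) for lam in (0.0, 0.25, 0.5, 0.75, 1.0))
print("cross-validated mixing parameter:", best[1])
```

Because the straight-line model is genuinely misspecified here, cross-validation pushes the mixing parameter toward the nonparametric correction; with a well-specified model it would stay near zero.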
Graphical assessment of the prediction capability of response surface designs
Giovannitti-Jensen, Ann, 1987
A response surface analysis is concerned with the exploration of a system in order to determine the behavior of the response of the system as levels of certain factors which influence the response are changed. It is often of particular interest to predict the response in some region of the allowable factor values and to find the optimal operating conditions of the system.
In an experiment to search for the optimum response of a surface it is advantageous to predict the response with equal, or nearly equal, precision at all combinations of the levels of the variables which represent locations which are the same distance from the center of the experimental region. Measures of the quality of prediction at locations on the surface of a hypersphere are presented in this thesis. These measures are used to form a graphical method of assessing the overall prediction capability of an experimental design throughout the region of interest.
Rotatable designs give equal variances of predicted values corresponding to locations on the same sphere. In this case, the center of the sphere coincides with the center of the rotatable design. However, there is a need for a method to quantify the prediction capability on spheres for non-rotatable designs. The spherical variance is a measure of the average prediction variance at locations on the surface of a sphere. The spherical variance obtained with a design provides an assessment of how well the response is being estimated on the average at locations which are the same distance from the region center. This thesis introduces two measures which describe the dispersion in the variances of the predicted responses at all locations on the surface of a sphere. These prediction variance dispersion (PVD) measures are used to evaluate the ability of a design to estimate the response with consistent precision at locations which are the same distance from the region center. The PVD measures are used in conjunction with the spherical variance to assess the prediction capability of a design.
A plot of the spherical variance and the maximum and minimum prediction variances for locations on a sphere against the radius of the sphere gives a comprehensive picture of the behavior of the prediction variances throughout a region, and, hence, of the quality of the predicted responses, obtained with a particular design. Such plots are used to investigate and compare the prediction capabilities of certain response surface designs currently available to the researcher. The plots are also used to investigate the robustness of a design under adverse experimental conditions and to determine the effects of taking an additional experimental run on the quality of the predicted responses. / Ph. D.
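The quantities behind such a plot come directly from the prediction variance x'(X'X)⁻¹x. A minimal sketch for a 2² factorial under a first order model with interaction — a deliberately non-rotatable case, chosen for illustration and not taken from the thesis:

```python
import math

def invert(m):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(m)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        aug[c] = [v / aug[c][c] for v in aug[c]]
        for r in range(n):
            if r != c:
                f = aug[r][c]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[c])]
    return [row[n:] for row in aug]

def model(x1, x2):
    # First order model with interaction: not rotatable for a 2^2 factorial.
    return [1.0, x1, x2, x1 * x2]

design = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # 2^2 factorial
X = [model(*pt) for pt in design]
p = len(X[0])
xtx = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
xtx_inv = invert(xtx)

def pred_var(x1, x2):
    """Prediction variance x'(X'X)^{-1}x at a location (sigma^2 = 1)."""
    f = model(x1, x2)
    return sum(f[i] * xtx_inv[i][j] * f[j] for i in range(p) for j in range(p))

# Min / average / max prediction variance on circles of increasing radius:
# the spread between min and max is the dispersion a PVD plot displays.
for r in (0.5, 1.0, 1.5):
    vs = [pred_var(r * math.cos(math.radians(t)), r * math.sin(math.radians(t)))
          for t in range(360)]
    print(f"radius {r}: min {min(vs):.4f}  avg {sum(vs)/len(vs):.4f}  max {max(vs):.4f}")
```

On each circle the variance is smallest on the factor axes and largest on the diagonals, so the min and max curves separate as the radius grows — the non-rotatability that a PVD plot makes visible.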
A graphical comparison of designs for response optimization based on slope estimation
Hockman, Kimberly Kearns, 1989
The response surface problem is two-fold: to predict values of the response, and to optimize the response. Slope estimation criteria are well suited for the optimization problem. Response prediction capability has been assessed by plotting the average, maximum, and minimum prediction variances on the surface of spheres with radii ranging across the region of interest. Average and maximum prediction bias plots have recently been added to the spherical criteria. Combined with the prediction variance, a graphical MSE criterion results. This research extends these ideas to the slope estimation objective.
A direct relationship between precise slope estimation and the ability to pinpoint the location of the optimum is developed, resulting in a general slope variance measure related to E-optimality in slope estimation. A more specific slope variance measure is defined and analyzed for use in evaluating standard response surface (RS) designs, where slopes parallel to the factor axes are estimated with equal precision. Standard second order RS designs are then studied in light of the prediction and optimization goal distinction. Designs which perform well for prediction of the response do not necessarily estimate the slope precisely.
A spherical measure of bias in slope estimation is developed and used to measure slope bias due to model misspecification and due to the presence of outliers. A study of augmenting saturated orthogonal arrays of strength two to detect lack of fit is included as an application of a combined squared bias and variance measure of MSE in slope.
The robustness of the designs recommended for precise slope estimation to outliers and to missing observations is studied using the slope bias and general slope variance measures, respectively. / Ph. D.
Design and analysis for a two level factorial experiment in the presence of dispersion effects
Mays, Darcy P., 10 October 2005
Standard response surface methodology experimental designs for estimating location models involve the assumption of homogeneous variance throughout the design region. However, with heterogeneity of variance these standard designs are not optimal.
Using the D and Q-optimality criteria, this dissertation proposes a two-stage experimental design procedure that gives more efficient designs than the standard designs when heterogeneous variance exists. Several multiple variable location models, with and without interactions, are considered. For each, the first stage estimates the heterogeneous variance structure, while the second stage then augments the first stage to produce a D or Q-optimal design for fitting the location model under the estimated variance structure. However, there is a potential instability of the variance estimates in the first stage that can lower the efficiency of the two-stage procedure. This problem can be addressed and the efficiency of the procedure enhanced if certain mild assumptions concerning the variance structure are made and formulated as a prior distribution to produce a Bayes estimator.
With homogeneous variance, designs are analyzed using ordinary least squares. However, with heterogeneous variance the correct analysis is to use weighted least squares. This dissertation also examines the effects that analysis by weighted least squares can have and compares this procedure to the proposed two-stage procedure. / Ph. D.
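The OLS-versus-WLS point can be illustrated with a small simulation (the straight-line model, the variance function, and the assumption that the true weights are known are all stand-ins for illustration):

```python
import random

def fit_line(xs, ys, ws):
    """Weighted least squares straight-line fit; unit weights give OLS."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) \
        / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return my - b * mx, b

rng = random.Random(7)
xs = [i / 4 for i in range(1, 21)]
sd = [0.2 + 0.4 * x for x in xs]   # heterogeneous: noise grows with x

ols_slopes, wls_slopes = [], []
for _ in range(500):
    ys = [2 + 3 * x + rng.gauss(0, s) for x, s in zip(xs, sd)]
    ols_slopes.append(fit_line(xs, ys, [1.0] * len(xs))[1])
    wls_slopes.append(fit_line(xs, ys, [1 / s ** 2 for s in sd])[1])

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

print(f"OLS slope variance: {var(ols_slopes):.5f}")
print(f"WLS slope variance: {var(wls_slopes):.5f}")
```

Both estimators are unbiased for the true slope of 3, but weighting by the reciprocal variances gives a visibly smaller sampling variance, which is the efficiency gain the correct analysis captures.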
Sequential robust response surface strategy
DeFeo, Patrick A., 1988
General Response Surface Methodology involves the exploration of some response variable which is a function of other controllable variables. Many criteria exist for selecting an experimental design for the controllable variables. A good choice of a design is one that may not be optimal in a single sense, but rather near optimal with respect to several criteria. This robust approach lends itself well to strategies that involve sequential or two stage experimental designs.
An experimenter who fits a first order regression model for the response often fears the presence of curvature in the system. Experimental designs can be chosen such that the experimenter who fits a first order model will have a high degree of protection against potential model bias from the presence of curvature. In addition, designs can also be selected such that the experimenter will have a high chance for detection of curvature in the system. A lack of fit test is usually performed for detection of curvature in the system. Ideally, an experimenter desires good detection capabilities along with good protection capabilities.
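The standard single degree of freedom curvature test behind this lack of fit check can be sketched for a 2² factorial augmented with replicated center runs (the toy surface, noise level, and run counts are my assumptions, not DeFeo's designs):

```python
import math, random

rng = random.Random(42)

def response(x1, x2, curvature):
    """True surface: first order terms plus optional pure quadratic curvature."""
    return 5 + 2 * x1 + 1.5 * x2 + curvature * (x1 ** 2 + x2 ** 2) + rng.gauss(0, 0.3)

def curvature_t(curvature, n_center=5):
    """t statistic comparing the factorial mean against the center-run mean.

    A large |t| signals quadratic curvature a first order model would miss."""
    factorial = [response(x1, x2, curvature) for x1 in (-1, 1) for x2 in (-1, 1)]
    center = [response(0, 0, curvature) for _ in range(n_center)]
    y_f = sum(factorial) / len(factorial)
    y_c = sum(center) / len(center)
    # Pure error variance estimated from the replicated center runs.
    s2 = sum((y - y_c) ** 2 for y in center) / (n_center - 1)
    se = math.sqrt(s2 * (1 / len(factorial) + 1 / n_center))
    return (y_f - y_c) / se

print("t statistic, no curvature:    ", round(curvature_t(0.0), 2))
print("t statistic, strong curvature:", round(curvature_t(2.0), 2))
```

With a flat surface the statistic behaves like an ordinary t variate, while genuine curvature inflates it far past any critical value — the detection capability the design criteria above try to maximize.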
An experimental design criterion that incorporates both detection and protection capabilities is the A₂* criterion. This criterion is used to select the designs which maximize the average noncentrality parameter of the lack of fit test among designs with a fixed bias. The first order rotated design class is a new class of designs that offers an improvement in terms of the A₂* criterion over standard first order factorial designs. In conjunction with a sequential experimental strategy, a class of second order rotated designs is easily constructed by augmenting the first order rotated designs. These designs allow for estimation of second order model terms when a significant lack of fit is observed.
Two other closely related design criteria that incorporate both detection and protection capabilities are the J<sub>PCA</sub> and J<sub>PCMAX</sub> criteria. J<sub>PCA</sub> considers the average mean squared error of prediction for a first order model over a region where the detection capabilities of the lack of fit test are not strong. J<sub>PCMAX</sub> considers the maximum mean squared error of prediction over the region where the detection capabilities are not strong. The J<sub>PCA</sub> and J<sub>PCMAX</sub> criteria are used within a sequential strategy to select first order experimental designs that perform well in terms of the mean squared error of prediction when it is likely that a first order model will be employed. These two criteria are also adopted for nonsequential experiments for the evaluation of first order model prediction performance. For these nonsequential experiments, second order designs are used and constructed based upon J<sub>PCA</sub> and J<sub>PCMAX</sub> for first order model properties and D₂-efficiency and D-efficiency for second order model properties. / Ph. D.
A graphical approach for evaluating the potential impact of bias due to model misspecification in response surface designs
Vining, G. Geoffrey, 1988
The basic purpose of response surface analysis is to generate a relatively simple model to serve as an adequate approximation for a more complex phenomenon. This model then may be used for other purposes, for example prediction or optimization. Since the proposed model is only an approximation, the analyst almost always faces the potential of bias due to model misspecification. The ultimate impact of this bias depends upon the choice both of the experimental design and of the region for conducting the experiment.
This dissertation proposes a graphical approach for evaluating the impact of bias upon response surface designs. Essentially, it extends the work of Giovannitti-Jensen (1987) and Giovannitti-Jensen and Myers (1988), who have developed a graphical technique for displaying a design's prediction variance capabilities. This dissertation extends this concept: (1) to the prediction bias due to model misspecification; (2) to the prediction bias due to the presence of a single outlier; and (3) to a mean squared error of prediction. Several common first and second-order response surface designs are evaluated through this approach. / Ph. D.
Building Seismic Fragilities Using Response Surface Metamodels
Towashiraporn, Peeranan, 20 August 2004
Building fragility describes the likelihood of damage to a building due to random ground motions. Conventional methods for computing building fragilities either are based on statistical extrapolation of detailed analyses of one or two specific buildings or make use of Monte Carlo simulation with these models. However, the Monte Carlo technique usually requires a relatively large number of simulations in order to obtain a sufficiently reliable estimate of the fragilities, and it quickly becomes impractical to simulate the required thousands of dynamic time-history structural analyses for physics-based analytical models.
An alternative approach for carrying out the structural simulation is explored in this work. The use of Response Surface Methodology in connection with the Monte Carlo simulations simplifies the process of fragility computation. More specifically, a response surface is sought to predict the structural response calculated from complex dynamic analyses. Computational cost required in a Monte Carlo simulation will be significantly reduced since the simulation is performed on a polynomial response surface function, rather than a complex dynamic model. The methodology is applied to the fragility computation of an unreinforced masonry (URM) building located in the New Madrid Seismic Zone. Different rehabilitation schemes for this structure are proposed and evaluated through fragility curves. Response surface equations for predicting peak drift are generated and used in the Monte Carlo simulation. Resulting fragility curves show that the URM building is less likely to be damaged from future earthquakes when rehabilitation is properly incorporated.
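The metamodel idea can be sketched end to end: fit a quadratic response surface to a handful of "expensive" runs, then run the Monte Carlo on the cheap polynomial. The drift model, design levels, and damage limit below are invented stand-ins, not the thesis's URM model:

```python
import random

def invert(m):
    """Gauss-Jordan inverse of a small matrix (for the normal equations)."""
    n = len(m)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        aug[c] = [v / aug[c][c] for v in aug[c]]
        for r in range(n):
            if r != c:
                f = aug[r][c]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[c])]
    return [row[n:] for row in aug]

def dynamic_analysis(pga, strength):
    """Stand-in for an expensive time-history run: returns peak drift ratio."""
    return 0.004 * pga ** 1.5 / strength

def terms(p, s):
    return [1.0, p, s, p * p, s * s, p * s]   # full quadratic response surface

# Small 3x3 design of "expensive" runs, then least squares for the surface.
design = [(p, s) for p in (0.5, 1.5, 2.5) for s in (0.7, 1.0, 1.3)]
X = [terms(p, s) for p, s in design]
y = [dynamic_analysis(p, s) for p, s in design]
k = len(X[0])
xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
inv = invert(xtx)
beta = [sum(inv[i][j] * xty[j] for j in range(k)) for i in range(k)]

def surface(p, s):
    return sum(b * t for b, t in zip(beta, terms(p, s)))

# Monte Carlo on the polynomial instead of the dynamic model.
rng = random.Random(1)
limit = 0.010   # drift ratio defining "damage" (assumed)
strengths = [max(0.7, min(1.3, rng.gauss(1.0, 0.15))) for _ in range(5000)]
fragility = []
for pga in (0.5, 1.0, 1.5, 2.0, 2.5):
    p_dam = sum(surface(pga, s) > limit for s in strengths) / len(strengths)
    fragility.append(p_dam)
    print(f"PGA {pga}: P(damage) = {p_dam:.3f}")
```

Nine expensive runs support thousands of cheap evaluations, and the resulting probabilities traced over intensity are exactly the fragility curve described above; the samples are clipped to the fitted region because a polynomial surface should not be trusted outside it.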
The thesis concludes with a discussion of an extension of the methodology to the problem of computing fragilities for a collection of buildings of interest. Previous approaches have considered uncertainties in material properties, but this research incorporates building parameters such as geometry, stiffness, and strength variabilities as well as nonstructural parameters (age, design code) over an aggregation of buildings in the response surface models. Simulation on the response surface yields the likelihood of damage to a group of buildings under various earthquake intensity levels. This aspect is of interest to governmental agencies or building owners who are responsible for planning proper mitigation measures for collections of buildings.