31. Sequential robust response surface strategy (DeFeo, Patrick A., January 1988)
General Response Surface Methodology involves the exploration of some response variable which is a function of other controllable variables. Many criteria exist for selecting an experimental design for the controllable variables. A good choice of design is one that may not be optimal in a single sense, but rather near optimal with respect to several criteria. This robust approach lends itself well to strategies that involve sequential or two-stage experimental designs.
An experimenter who fits a first order regression model for the response often fears the presence of curvature in the system. Experimental designs can be chosen such that the experimenter who fits a first order model will have a high degree of protection against potential model bias from the presence of curvature. In addition, designs can also be selected such that the experimenter will have a high chance of detecting curvature. A lack of fit test is usually performed to detect curvature. Ideally, an experimenter desires good detection capabilities along with good protection capabilities.
An experimental design criterion that incorporates both detection and protection capabilities is the A₂* criterion. This criterion is used to select the designs which maximize the average noncentrality parameter of the lack of fit test among designs with a fixed bias. The first order rotated design class is a new class of designs that offers an improvement in terms of the A₂* criterion over standard first order factorial designs. In conjunction with a sequential experimental strategy, a class of second order rotated designs is easily constructed by augmenting the first order rotated designs. These designs allow for estimation of second order model terms when a significant lack of fit is observed.
Two other closely related design criteria that incorporate both detection and protection capabilities are the J<sub>PCA</sub> and J<sub>PCMAX</sub> criteria. J<sub>PCA</sub> considers the average mean squared error of prediction for a first order model over the region where the detection capabilities of the lack of fit test are not strong. J<sub>PCMAX</sub> considers the maximum mean squared error of prediction over that region. The J<sub>PCA</sub> and J<sub>PCMAX</sub> criteria are used within a sequential strategy to select first order experimental designs that perform well in terms of the mean squared error of prediction when it is likely that a first order model will be employed. These two criteria are also adopted for nonsequential experiments for the evaluation of first order model prediction performance. For these nonsequential experiments, second order designs are used and constructed based upon J<sub>PCA</sub> and J<sub>PCMAX</sub> for first order model properties and D₂-efficiency and D-efficiency for second order model properties. / Ph. D.
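The protection/detection trade-off underlying criteria like A₂* can be made concrete with a small numerical sketch. The toy 2² design, function names, and curvature coefficients below are illustrative assumptions, not the dissertation's:

```python
import numpy as np

def curvature_diagnostics(X1, X2, beta2):
    """X1: model matrix of the fitted first-order model; X2: omitted
    second-order columns; beta2: hypothesised curvature coefficients."""
    A = np.linalg.solve(X1.T @ X1, X1.T @ X2)  # alias matrix
    bias = A @ beta2                           # bias induced in first-order estimates
    R = X2 - X1 @ A                            # part of X2 orthogonal to X1
    lam = float(beta2 @ (R.T @ R) @ beta2)     # noncentrality (up to a 1/sigma^2 factor)
    return bias, lam

# 2^2 factorial plus two centre runs, fitting intercept + main effects
F = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0]], float)
X1 = np.column_stack([np.ones(len(F)), F])
X2 = F**2                                      # pure-quadratic columns
bias, lam = curvature_diagnostics(X1, X2, np.array([1.0, 1.0]))
print(bias, lam)   # here curvature biases only the intercept; lam > 0 permits detection
```

A design scoring well on such a criterion makes the noncentrality large while the bias stays fixed.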
32. A graphical approach for evaluating the potential impact of bias due to model misspecification in response surface designs (Vining, G. Geoffrey, January 1988)
The basic purpose of response surface analysis is to generate a relatively simple model to serve as an adequate approximation for a more complex phenomenon. This model may then be used for other purposes, for example prediction or optimization. Since the proposed model is only an approximation, the analyst almost always faces the potential of bias due to model misspecification. The ultimate impact of this bias depends upon the choice both of the experimental design and of the region for conducting the experiment.
This dissertation proposes a graphical approach for evaluating the impact of bias upon response surface designs. Essentially, it builds on the work of Giovannitti-Jensen (1987) and Giovannitti-Jensen and Myers (1988), who developed a graphical technique for displaying a design's prediction variance capabilities, and extends this concept: (1) to the prediction bias due to model misspecification; (2) to the prediction bias due to the presence of a single outlier; and (3) to a mean squared error of prediction. Several common first- and second-order response surface designs are evaluated through this approach. / Ph. D.
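The prediction-variance graphics being extended here plot, radius by radius, the scaled prediction variance over spheres in the design space. A minimal sketch on our own toy 2² example (not one of the dissertation's designs):

```python
import numpy as np

rng = np.random.default_rng(0)
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], float)  # 2^2 factorial
X = np.column_stack([np.ones(len(D)), D])                  # first-order model matrix
XtX_inv = np.linalg.inv(X.T @ X)
N = len(D)

def avg_spv(r, n_dirs=2000):
    """Average scaled prediction variance N * x'(X'X)^-1 x over the sphere of radius r."""
    u = rng.normal(size=(n_dirs, 2))
    u = r * u / np.linalg.norm(u, axis=1, keepdims=True)   # random points on the sphere
    xs = np.column_stack([np.ones(n_dirs), u])
    return float(N * np.mean(np.einsum('ij,jk,ik->i', xs, XtX_inv, xs)))

for r in (0.5, 1.0, 1.5):
    print(r, round(avg_spv(r), 3))   # grows with radius; direction-free here (rotatable design)
```

Plotting these averages (and the min/max over each sphere) against r gives the variance curve; the bias and mean-squared-error extensions replace the variance kernel with the corresponding bias or MSE quantity.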
33. Uncertainty and robustness analysis for models with functional inputs and outputs (El Amri, Mohamed, 29 April 2019)
This thesis deals with the inversion problem under uncertainty of expensive-to-evaluate functions, in the context of tuning the control unit of a vehicle depollution system. The effect of these uncertainties is taken into account through the expectation of the quantity of interest. The difficulty lies in the fact that the uncertainty is partly due to a functional variable only known through a given sample. We propose two approaches to solve the inversion problem, both based on Gaussian process modelling of the expensive-to-evaluate function and a dimension reduction of the functional variable by the Karhunen-Loève expansion.
The first approach consists in applying a Stepwise Uncertainty Reduction (SUR) method to the expectation of the quantity of interest. At each evaluation point in the control space, the expectation is estimated by a greedy functional quantification method that provides a discrete representation of the functional variable and an efficient sequential estimate from the given sample.
The second approach consists in applying the SUR method directly to the quantity of interest in the joint space of control and uncertain variables. A strategy for enriching the experimental design, dedicated to inversion under functional uncertainties and exploiting the properties of Gaussian processes, is proposed.
These two approaches are compared on analytical toy functions and applied to an industrial case: the exhaust gas after-treatment system of a vehicle. The objective is to identify the control settings that meet pollutant emission standards under uncertainty on the driving cycle.
34. Building Seismic Fragilities Using Response Surface Metamodels (Towashiraporn, Peeranan, 20 August 2004)
Building fragility describes the likelihood of damage to a building due to random ground motions. Conventional methods for computing building fragilities either rely on statistical extrapolation of detailed analyses of one or two specific buildings or make use of Monte Carlo simulation with such models. The Monte Carlo technique, however, usually requires a relatively large number of simulations to obtain a sufficiently reliable estimate of the fragilities, and it quickly becomes impractical to run the required thousands of dynamic time-history structural analyses with physics-based analytical models.
An alternative approach for carrying out the structural simulation is explored in this work. The use of Response Surface Methodology in connection with the Monte Carlo simulations simplifies the process of fragility computation. More specifically, a response surface is sought to predict the structural response calculated from complex dynamic analyses. Computational cost required in a Monte Carlo simulation will be significantly reduced since the simulation is performed on a polynomial response surface function, rather than a complex dynamic model. The methodology is applied to the fragility computation of an unreinforced masonry (URM) building located in the New Madrid Seismic Zone. Different rehabilitation schemes for this structure are proposed and evaluated through fragility curves. Response surface equations for predicting peak drift are generated and used in the Monte Carlo simulation. Resulting fragility curves show that the URM building is less likely to be damaged from future earthquakes when rehabilitation is properly incorporated.
The thesis concludes with a discussion of an extension of the methodology to the problem of computing fragilities for a collection of buildings of interest. Previous approaches have considered uncertainties in material properties, but this research incorporates building parameters such as geometry, stiffness, and strength variabilities as well as nonstructural parameters (age, design code) over an aggregation of buildings in the response surface models. Simulation on the response surface yields the likelihood of damage to a group of buildings under various earthquake intensity levels. This aspect is of interest to governmental agencies or building owners who are responsible for planning proper mitigation measures for collections of buildings.
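The core loop described above, fitting a polynomial surface to a handful of expensive analyses and then Monte-Carlo sampling on the surface, can be sketched as follows. The "true" drift model, parameter ranges, and drift limit are invented stand-ins, not the thesis's URM models:

```python
import numpy as np

rng = np.random.default_rng(2)

def true_drift(pga, stiff):            # stand-in for one dynamic time-history analysis
    return 0.8*pga + 0.3*pga**2 - 0.4*stiff*pga + 0.05*rng.normal()

# small design over ground-motion intensity (pga) and a structural parameter
pga_d, st_d = np.meshgrid(np.linspace(0.1, 1.0, 5), np.linspace(-1, 1, 5))
p, s = pga_d.ravel(), st_d.ravel()
X = np.column_stack([np.ones(p.size), p, s, p**2, p*s])
y = np.array([true_drift(pi, si) for pi, si in zip(p, s)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # the response surface

def fragility(pga, limit=0.5, n=20000):
    """P(peak drift > limit) at a given intensity, simulating on the cheap surface."""
    sv = rng.normal(0.0, 0.5, n)                   # uncertain structural parameter
    drift = beta @ np.array([np.ones(n), pga*np.ones(n), sv,
                             pga**2*np.ones(n), pga*sv])
    return float(np.mean(drift > limit))

print(fragility(0.2), fragility(0.9))   # fragility rises with intensity
```

With one surface fitted per rehabilitation scheme (or per building group), the same loop yields a family of fragility curves at negligible extra cost.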
35. Growth modeling of Aspergillus niger in mango nectar as a function of pH and temperature (Silva, Alessandra Regina da, 14 July 2006)
Advisor: Pilar Rodriguez de Massaguer. Dissertação (mestrado), Universidade Estadual de Campinas, Faculdade de Engenharia de Alimentos.
Abstract: In 2005, the world production of fresh mango was 850,000 tons, with Brazil ranking seventh among producing countries. Mango nectar ranks third in worldwide flavor preference. Since the emerging contaminants of this product are molds capable of surviving the pasteurization process used by industry, it is essential to know the level of contamination by heat-resistant molds and to identify the most heat-resistant species isolated from the product. Moreover, because pasteurization alone cannot eliminate this contaminating mycobiota without sensorially altering the product, it is indispensable to study the effect of growth-controlling factors on these heat-resistant microorganisms and to model their growth as a function of those factors. Accordingly, this research aimed: (i) to quantify, isolate, and identify the most heat-resistant mold present in mango nectar; (ii) to evaluate the effect of the growth-controlling variables on the growth parameters of this isolate at two inoculum levels, 6.8×10⁰ and 9.3×10³ spores/mL, selecting the variables of greatest impact via primary predictive modeling; (iii)
to model the growth parameters, namely adaptation (lag) time (λ, days) and growth rate (μ, mm/day), of the most heat-resistant mold by polynomial response surface modeling. For this purpose, 50 L of mango nectar were used to isolate the heat-resistant strains (Beuchat & Pitt, 2001). Screening of the heat resistance of the isolates was performed as described by Baglioni (1998). For the low inoculum level (10⁰ spores/mL of mango nectar), temperature ranged from 12 to 25°C, pH from 3.2 to 4.8, and a_w from 0.979 to 0.988, following a 2³ central composite design with 3 center points and 6 star points. For 10³ spores/mL, temperature ranged from 18 to 22°C, pH from 3.5 to 4.5, and a_w from 0.970 to 0.990, following a 2³ factorial design with 3 center points. Fungal growth was measured as colony diameter on a daily basis. The primary predictive models of Baranyi & Roberts and modified Gompertz were used to fit the growth data. For the secondary polynomial model, a 2² central composite design with 3 center points and 4 star points was used, in which temperature varied from 17.2 to 22.8°C, pH from 3.2 to 4.7, and a_w was fixed at 0.980 (the product's natural value). In all experiments the mold was inoculated into 230 mL of previously sterilized mango nectar in PET bottles sanitized according to Petrus (2000). The total count of heat-resistant molds in the mango nectar was 7.4×10³ spores/mL; from this total, eight different strains were isolated, and the most heat-resistant (surviving 100°C for 15 minutes) was identified as Aspergillus niger. At the low inoculum level (6.8×10⁰ spores/mL), pH reductions increased the adaptation time from 4 to 10 days, while an increase of 0.01 in a_w shortened it by 3 days; at temperatures below 18°C no mold growth was observed. At the 9.3×10³ spores/mL inoculum level, a_w within the product's natural range had no significant effect (p<0.05) on the growth parameters. Under temperature-abuse conditions, a reduction of 5.6°C (from 22.8 to 17.2°C) increased the adaptation time by 23 days.
Similar effects were observed for pH: when the pH of the mango nectar increased from 4.0 to 4.7, the adaptation time decreased from 11 to 3 days and the final colony diameter tripled. The Baranyi & Roberts model fitted the experimental data better, with a higher coefficient of determination (R² = 0.998). The model obtained for adaptation time had temperature (linear and quadratic terms) and pH (linear term) as significant factors (p<0.05); it was verified with R² = 0.981, a bias factor of 1.06, an accuracy factor of 1.16, and an F_calc/F_tab ratio of 23.4. The statistical analyses showed that a decrease of 0.5 pH unit could double the product's shelf life (from 10 to 20 days); a similar effect was observed for temperature reductions of about 0.8°C. The maximum shelf life obtained (about 30 days) occurred at pH 3.28 and 17.2°C. The model obtained for growth rate had pH (linear and quadratic terms) and temperature (linear and quadratic terms) as significant factors; it was verified with R² = 0.882, a bias factor of 1.06, an accuracy factor of 1.16, and an F_calc/F_tab ratio of 2.6. Statistical analysis showed that the lowest growth rate (1.33 mm/day) occurred at pH 4.0 and 20°C. Thus, pH and temperature were significant factors (p<0.05) for the growth of A. niger in mango nectar; at the levels studied, however, these factors only retard mold growth without preventing it. Refrigerated storage control is therefore essential to avoid temperature abuse, since at temperatures of 15°C or below no growth of A. niger was observed, regardless of the inoculum level. Likewise, pH should be tightly controlled, since changes of 0.5 unit can cause severe changes in the product's shelf life. Nevertheless, pH and temperature adjustments alone cannot guarantee the microbiological stability of the product, which also depends on raw-material quality, among other factors; rather, both can act as co-adjuvants in the preservation of mango nectar. / Master's in Food Science
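As an illustration of the primary-model fitting step, a modified-Gompertz curve (Zwietering parameterisation) can be fitted to colony-diameter data in a few lines; the synthetic data and parameter values below are invented, not the thesis's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Modified Gompertz: A = asymptotic diameter (mm), mu = maximum growth
    rate (mm/day), lam = adaptation/lag time (days)."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

rng = np.random.default_rng(3)
t = np.arange(0.0, 61.0, 1.0)                        # daily diameter readings
y = gompertz(t, 40.0, 1.3, 10.0) + 0.3*rng.normal(size=t.size)
(A_hat, mu_hat, lam_hat), _ = curve_fit(gompertz, t, y, p0=[30.0, 1.0, 5.0])
print(round(A_hat, 1), round(mu_hat, 2), round(lam_hat, 1))
```

The fitted λ and μ from curves like this are the responses that the secondary polynomial (response surface) model then expresses in terms of pH and temperature.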
36. An Empirically Based Stochastic Turbulence Simulator with Temporal Coherence for Wind Energy Applications (Rinker, Jennifer Marie, January 2016)
In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields. In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components. The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data.
This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence. The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions. The prevalence of temporal coherence and its relationship to other standard wind parameters was modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations. EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site.
Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence. The training data for the response surfaces were generated from exhaustive FAST simulations run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory. This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the procedure recommended in the IEC wind turbine design standard. The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence. / Dissertation
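The phase-difference statistic itself is straightforward to compute. A minimal sketch on stationary noise, where the wrapped phase differences should be nearly uniform, with all details assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=4096)                      # a stationary series: no temporal coherence
phases = np.angle(np.fft.rfft(x))[1:]          # phases of the Fourier components (drop DC)
dphi = np.diff(phases)                         # differences between adjacent components
dphi = (dphi + np.pi) % (2*np.pi) - np.pi      # wrap into (-pi, pi]
# Near-uniform wrapped differences indicate no coherence; concentration of
# dphi around a value would indicate nonstationary, temporally coherent structure.
print(round(float(dphi.mean()), 3), round(float(dphi.std()), 3))
```

Fitting a concentration parameter (e.g. of a wrapped distribution) to empirical dphi samples gives the temporal coherence parameters that the simulator and the EJDs work with.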
37. Hydrodynamic Shape Optimization of Trawl Doors with Three-Dimensional Computational Fluid Dynamics Models and Local Surrogates (Hermannsson, Elvar, January 2014)
Rising fuel prices have been inflating the operating costs of the fishing industry. Trawl doors are used to hold the fishing net open during trawling operations, and they have a great influence on the fuel consumption of vessels. Improvements in the design of trawl doors could therefore contribute significantly to increased fuel efficiency. An efficient optimization algorithm using two- and three-dimensional (2D and 3D) computational fluid dynamics (CFD) models is presented. Accurate CFD models, especially in 3D, are computationally expensive, so the direct use of traditional optimization algorithms, which often require a large number of evaluations, can be prohibitive. The proposed method is iterative and uses low-order local response surface approximation models as surrogates for the expensive CFD model to reduce the number of expensive evaluations. The algorithm is applied to the design of two types of geometries: a typical modern trawl door and a novel airfoil-shaped trawl door. The results from the 2D design optimization show that the hydrodynamic efficiency of the typical modern trawl door could be increased by 32%, and that of the novel airfoil-shaped trawl door by 13%. When the 2D optimum designs for the two geometries are compared, the novel airfoil-shaped trawl door proves to be 320% more efficient than the optimized design of the typical modern trawl door. The 2D optimum designs were used as the initial designs for the 3D design optimization, whose results show that the hydrodynamic efficiency could be increased by a further 6% for both the typical modern and novel airfoil-shaped trawl doors. Results from a 3D CFD analysis show that 3D flow effects are significant, with drag significantly underestimated by 2D CFD models.
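The surrogate loop described, sampling a small design around the current point, fitting a low-order response surface, and moving to its optimum, can be sketched generically. The cheap stand-in objective and step size below are our assumptions, not the trawl-door CFD model:

```python
import numpy as np

def f(x):                                    # cheap stand-in for one CFD evaluation
    return (x[0] - 1.0)**2 + 5.0*(x[1] + 0.5)**2

def local_quadratic_step(x0, h=0.3):
    """Fit a quadratic response surface on a 3^2 design around x0 and
    jump to the surrogate's stationary point."""
    offs = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1],
                     [1, 1], [-1, -1], [1, -1], [-1, 1]], float)
    pts = x0 + h*offs
    y = np.array([f(p) for p in pts])        # the only "expensive" calls per iteration
    Z = np.column_stack([np.ones(len(pts)), pts, pts**2, pts[:, 0]*pts[:, 1]])
    b = np.linalg.lstsq(Z, y, rcond=None)[0]
    H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])   # surrogate Hessian
    return np.linalg.solve(H, -b[1:3])               # minimiser of the local quadratic

x = np.zeros(2)
for _ in range(3):    # a real implementation adds step-size control and a convergence test
    x = local_quadratic_step(x)
print(x)              # approaches the optimum (1, -0.5)
```

Because each iteration optimizes a cheap polynomial rather than the CFD model itself, the number of expensive evaluations stays proportional to the small local design, not to the optimizer's iteration count.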
38. Multi-sensor Optimization of the Simultaneous Turning and Boring Operation (Deane, Erick Johan, 1 January 2011)
To remain competitive in today's demanding economy, there is an increasing demand for improved productivity and scrap reduction in manufacturing. Traditional metal removal processes such as turning and boring are still among the most used techniques for fabricating metal products. Although the essential metal removal process is the same, advances in technology have improved the monitoring of the process, allowing for reductions in power consumption, tool wear, and total cost of production. Replacing used CNC lathes from the 1980s in a manufacturing facility may prove costly, so finding a method to modernize the lathes is vital. This research covers Phases I and II of a three-phase project whose final goal is to optimize the simultaneous turning and boring operation of a CNC lathe. From the optimization results it will be possible to build an adaptive controller that produces parts rapidly while minimizing tool wear and machinist interaction with the lathe. Phase I of the project was geared towards selecting the sensors used to monitor the operation and designing a program whose architecture allows simultaneous data collection from the selected sensors at high sampling rates. Signals monitored during the operation included force, temperature, vibration, sound, acoustic emissions, power, and metalworking fluid flow rates. Phase II of this research focused on using the Response Surface Method to build empirical models for various responses and to optimize the simultaneous cutting process. The simultaneous turning and boring process was defined by four factors: spindle speed, feed rate, outer diameter depth of cut, and inner diameter depth of cut. A total of four sets of experiments were performed. The first set screened the experimental region to determine whether the cutting parameters were feasible.
The next three sets of experiments used Central Composite Designs to build empirical models of each desired response in terms of the four factors and to optimize the process. The designs of experiments were compared with one another to validate that the results were accurate within the experimental region. Using the Response Surface Method, optimal machining parameter settings were achieved. The algorithm used to search for optimal process parameter settings was the desirability function. By applying the results of this research in the manufacturing facility, they will achieve reductions in power consumption and production time and a decrease in the total cost of each part.
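The desirability function used as the search criterion combines several responses into a single score in [0, 1]. A hedged sketch; the smaller-is-better form, targets, and limits are arbitrary examples, not the study's settings:

```python
import numpy as np

def d_smaller_is_better(y, y_good, y_bad):
    """Linear desirability: 1 at or below y_good, 0 at or above y_bad."""
    return float(np.clip((y_bad - y) / (y_bad - y_good), 0.0, 1.0))

def overall_desirability(responses, specs):
    """Geometric mean of individual desirabilities (zero anywhere => zero overall)."""
    ds = [d_smaller_is_better(y, good, bad) for y, (good, bad) in zip(responses, specs)]
    return float(np.prod(ds) ** (1.0 / len(ds)))

# e.g. power draw 2.1 kW (good <= 1.5, bad >= 3.0); cycle time 40 s (good 30, bad 60)
D = overall_desirability([2.1, 40.0], [(1.5, 3.0), (30.0, 60.0)])
print(round(D, 4))
```

The optimizer then searches the factor space (spindle speed, feed rate, depths of cut) for settings whose response-surface predictions maximise D.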
39. Construction and properties of Box-Behnken designs (Jo, Jinnam, 1 February 2006)
Box-Behnken designs are used to estimate parameters in a second-order response surface model (Box and Behnken, 1960). These designs are formed by combining ideas from incomplete block designs (BIBD or PBIBD) and factorial experiments, specifically 2<sup>k</sup> full or 2<sup>k-1</sup> fractional factorials.
In this dissertation, a more general mathematical formulation of the Box-Behnken method is provided, a general expression for the coefficient matrix in the least squares analysis for estimating the parameters in the second order model is derived, and the properties of Box-Behnken designs with respect to the estimability of all parameters in a second-order model are investigated when 2<sup>k</sup> full factorials are used. The results show that for all pure quadratic coefficients to be estimable, the PBIB(m) design has to be chosen such that its incidence matrix is of full rank, and for all mixed quadratic coefficients to be estimable the PBIB(m) design has to be chosen such that the parameters λ₁, λ₂, ..., λ<sub>m</sub> are all greater than zero.
In order to reduce the number of experimental points, the use of 2<sup>k-1</sup> fractional factorials instead of 2<sup>k</sup> full factorials is considered, with separate treatment of fractions of resolutions III, IV, and V. The construction of Box-Behnken designs using such fractions is described and the properties of the designs concerning estimability of regression coefficients are investigated. Designs obtained from resolution V fractions have the same properties as those using full factorials, whereas resolution III and IV fractions may lead to non-estimability of certain coefficients and to correlated estimators.
The final topic concerns Box-Behnken designs in which treatments are applied to experimental units sequentially in time or space and in which a linear trend effect may exist. For this situation, one wants to find run orders that yield a linear trend-free Box-Behnken design, so that the linear trend can be removed with a simple technique, analysis of variance, instead of the more complicated analysis of covariance. Construction methods for linear trend-free Box-Behnken designs are introduced for different values of the block size k of the underlying PBIB design. For k = 2 or 3 it may not always be possible to find linear trend-free Box-Behnken designs, but for k ≥ 4 they can always be constructed. / Ph. D.
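The combination of an incomplete block design with a 2² factorial is easy to show in code; a sketch of the classical three-factor construction (the function name and center-point count are our choices):

```python
import numpy as np
from itertools import combinations

def box_behnken(k, n_center=3):
    """Classical Box-Behnken construction: run a 2^2 factorial on each pair of
    factors (blocks of an incomplete block design with block size 2) while the
    remaining factors sit at the centre; then append centre runs."""
    corners = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)
    rows = []
    for i, j in combinations(range(k), 2):
        block = np.zeros((4, k))
        block[:, [i, j]] = corners
        rows.append(block)
    rows.append(np.zeros((n_center, k)))
    return np.vstack(rows)

D = box_behnken(3)
print(D.shape)   # (15, 3): 3 pairs x 4 runs + 3 centre points, with no corner runs
```

The absence of corner runs (no row has more than two factors away from the centre) is what keeps these designs economical for fitting second-order models.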
40. Simulation-optimization studies: under efficient simulation strategies, and a novel response surface methodology algorithm (Joshi, Shirish, 6 June 2008)
While attempting to solve optimization problems, the lack of an explicit mathematical expression of the problem may preclude the application of the standard methods of optimization which prove valuable in an analytical framework. In such situations, computer simulations are used to obtain the mean response values for the required settings of the independent variables. Procedures for optimizing on the mean response values, which are in turn obtained through computer simulation experiments, are called simulation-optimization techniques.
The focus of this work is on the simulation-optimization technique of response surface methodology (RSM). RSM is a collection of mathematical and statistical techniques for experimental optimization. Correlation induction strategies can be employed in RSM to achieve improved statistical inferences in experimental designs and sequential experimentation. Also, the search procedures currently employed by RSM algorithms can be improved by incorporating gradient deflection methods.
This dissertation has three major goals: (a) develop analytical results to quantitatively express the gains of using the common random number (CRN) strategy of variance reduction over direct simulation (independent streams, or IS strategy) at each stage of RSM, (b) develop a new RSM algorithm by incorporating gradient deflection methods into existing RSM algorithms, and (c) conduct extensive empirical studies to quantify: (i) the gains of using the CRN strategy over direct simulation in a standard RSM algorithm, and (ii) the gains of the new RSM algorithm over a standard existing RSM algorithm. / Ph. D.
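The benefit of common random numbers when comparing two design points can be demonstrated with a toy stochastic response; the model is invented, and with purely additive noise the cancellation under CRN is exact:

```python
import numpy as np

rng = np.random.default_rng(5)

def sim(theta, z):               # a stochastic simulation response at setting theta
    return (theta - 1.0)**2 + z  # z: the driving randomness of the simulation run

n, reps = 100, 2000
diff_crn, diff_is = [], []
for _ in range(reps):
    z1 = rng.normal(size=n)
    z2 = rng.normal(size=n)
    diff_crn.append(np.mean(sim(0.4, z1) - sim(0.6, z1)))  # common random numbers
    diff_is.append(np.mean(sim(0.4, z1) - sim(0.6, z2)))   # independent streams
print(np.var(diff_crn), np.var(diff_is))   # CRN difference estimate is far tighter
```

In RSM the quantity being differenced is exactly what drives the estimated gradient at each stage, which is why inducing positive correlation across design points sharpens the fitted first-order model.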
|