31

Analyse d'incertitudes et de robustesse pour les modèles à entrées et sorties fonctionnelles / uncertainties and robustness analysis for models with functional inputs and outputs

El Amri, Mohamed 29 April 2019
This thesis addresses an inversion problem under uncertainty for expensive-to-evaluate functions, in the context of tuning the control unit of a vehicle depollution system. The effect of the uncertainties is taken into account through the expectation of the quantity of interest. A difficulty lies in the fact that the uncertainty is partly due to a functional variable known only through a given sample. We propose two approaches to solve the inversion problem, both based on Gaussian process modelling of the expensive-to-evaluate function and on dimension reduction of the functional variable by the Karhunen-Loève expansion. The first approach applies a Stepwise Uncertainty Reduction (SUR) method to the expectation of the quantity of interest. At each evaluation point in the control space, the expectation is estimated by a greedy functional quantification method that provides a discrete representation of the functional variable and an efficient sequential estimate from the given sample. The second approach applies the SUR method directly to the quantity of interest in the joint space of the control and uncertain variables. A strategy for enriching the design of experiments, dedicated to inversion under functional uncertainties and exploiting the properties of Gaussian processes, is proposed. The two approaches are compared on analytical toy functions and applied to an industrial case, the exhaust-gas post-treatment system of a vehicle. The objective is to identify the control parameter settings that meet pollutant emission standards despite uncertainty on the driving cycle.
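A minimal sketch of the kind of pipeline this abstract describes, with synthetic data throughout: a given sample of functional inputs is reduced with a truncated Karhunen-Loève (here, PCA) expansion, a Gaussian process surrogate replaces the expensive code on the joint control/coefficient space, and the expectation of the quantity of interest is estimated by averaging the surrogate over the sampled coefficients. The simulator, curves, and dimensions are illustrative stand-ins; the greedy functional quantification and SUR steps of the thesis are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Given sample of functional inputs: 200 curves discretized on 100 time points
t = np.linspace(0, 1, 100)
curves = np.array([np.sin(2*np.pi*(t + rng.uniform())) + 0.1*rng.standard_normal(100)
                   for _ in range(200)])

# Truncated Karhunen-Loeve expansion via PCA: keep the leading modes
kl = PCA(n_components=3).fit(curves)
xi = kl.transform(curves)  # KL coefficients of each sampled curve

# Stub for the expensive simulator: scalar control u plus a functional input
def simulator(u, curve):
    return float(np.mean((curve - u)**2))

# Small design in the joint (control, KL-coefficient) space
u_design = rng.uniform(0, 1, 30)
idx = rng.integers(0, len(curves), 30)
X = np.column_stack([u_design, xi[idx]])
y = np.array([simulator(u, curves[i]) for u, i in zip(u_design, idx)])

# GP surrogate on the joint space; averaging its prediction over the sampled
# KL coefficients estimates the expectation of the quantity of interest
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

def expected_qoi(u, n_mc=200):
    pts = np.column_stack([np.full(n_mc, u), xi[rng.integers(0, len(xi), n_mc)]])
    return gp.predict(pts).mean()

print(expected_qoi(0.3))
```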
32

Building Seismic Fragilities Using Response Surface Metamodels

Towashiraporn, Peeranan 20 August 2004
Building fragility describes the likelihood of damage to a building due to random ground motions. Conventional methods for computing building fragilities either rely on statistical extrapolation of detailed analyses of one or two specific buildings or make use of Monte Carlo simulation with such models. The Monte Carlo technique, however, usually requires a relatively large number of simulations to obtain a sufficiently reliable estimate of the fragilities, and it quickly becomes impractical to run the required thousands of dynamic time-history structural analyses on physics-based analytical models. An alternative approach for carrying out the structural simulation is explored in this work. The use of Response Surface Methodology in connection with Monte Carlo simulation simplifies the process of fragility computation. More specifically, a response surface is sought that predicts the structural response calculated by the complex dynamic analyses. The computational cost of the Monte Carlo simulation is significantly reduced, since the simulation is performed on a polynomial response surface function rather than a complex dynamic model. The methodology is applied to the fragility computation of an unreinforced masonry (URM) building located in the New Madrid Seismic Zone. Different rehabilitation schemes for this structure are proposed and evaluated through fragility curves. Response surface equations for predicting peak drift are generated and used in the Monte Carlo simulation. The resulting fragility curves show that the URM building is less likely to be damaged by future earthquakes when rehabilitation is properly incorporated. The thesis concludes with a discussion of an extension of the methodology to the problem of computing fragilities for a collection of buildings. Previous approaches have considered uncertainties in material properties, but this research also incorporates building parameters such as geometry, stiffness, and strength variability, as well as nonstructural parameters (age, design code), over an aggregation of buildings in the response surface models. Simulation on the response surface yields the likelihood of damage to a group of buildings under various earthquake intensity levels. This aspect is of interest to governmental agencies or building owners responsible for planning mitigation measures for collections of buildings.
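The core computation the abstract outlines can be illustrated with a hedged sketch: a quadratic response surface is fitted to a handful of (here synthetic) "time-history analyses," and cheap Monte Carlo on the polynomial estimates the probability that peak drift exceeds a limit at each intensity level. The variable names, drift limit, and distributions are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for a few expensive time-history analyses: peak drift as a
# function of spectral acceleration Sa and an uncertain stiffness factor k
Sa = rng.uniform(0.1, 1.5, 40)
k = rng.uniform(0.8, 1.2, 40)
drift = 0.004*Sa/k + 0.001*Sa**2 + rng.normal(0, 2e-4, 40)  # synthetic "analyses"

# Quadratic response surface: drift ~ b0 + b1*Sa + b2*k + b3*Sa^2 + b4*k^2 + b5*Sa*k
X = np.column_stack([np.ones_like(Sa), Sa, k, Sa**2, k**2, Sa*k])
beta, *_ = np.linalg.lstsq(X, drift, rcond=None)

# Fragility: P(drift > limit | Sa), Monte Carlo over the uncertain stiffness,
# evaluated on the cheap polynomial instead of the dynamic model
limit = 0.005
def fragility(sa, n=20000):
    kk = rng.normal(1.0, 0.1, n)
    d = (beta[0] + beta[1]*sa + beta[2]*kk + beta[3]*sa**2
         + beta[4]*kk**2 + beta[5]*sa*kk)
    return np.mean(d > limit)

for sa in (0.2, 0.6, 1.0, 1.4):
    print(f"Sa={sa:.1f} g -> P(drift > {limit}) = {fragility(sa):.3f}")
```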
33

Modelagem do crescimento de Aspergillus niger em nectar de manga, frente a pH e temperatura / Growth modeling of Aspergillus niger in mango nectar, as a function of pH and temperature

Silva, Alessandra Regina da 14 July 2006
Advisor: Pilar Rodriguez de Massaguer / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia de Alimentos / Abstract: In 2005, world production of fresh mango was 850,000 tons, with Brazil ranking seventh among producing countries; mango nectar ranks third in worldwide flavor preference. Since the emerging contaminants of this product are molds able to survive the pasteurization process used by industry, it is essential to know the level of contamination of the product by heat-resistant molds and to identify the most heat-resistant species isolated from it. Moreover, because pasteurization alone cannot eliminate these molds without sensorially degrading the product, it is necessary to study the effect of growth-controlling (hurdle) factors and to model growth as a function of those factors. The aims of this research were: i. to quantify, isolate, and identify the most heat-resistant mold present in mango nectar; ii. to evaluate the effect of the growth-controlling variables on the growth parameters of that isolate at two inoculum levels, 6.8×10^0 and 9.3×10^3 spores/mL of nectar, selecting the highest-impact variables through primary predictive modelling; iii. to model the growth parameters, adaptation (lag) time (λ, days) and growth rate (μ, mm/day), of the most heat-resistant mold using polynomial response surface modelling. For this purpose, 50 L of mango nectar were used to isolate the heat-resistant strains, following Beuchat & Pitt (2001), and heat resistance of the isolates was screened using a methodology adapted from Baglioni (1998). For the low inoculum level (10^0 spores/mL), temperature ranged from 12 to 25°C, pH from 3.2 to 4.8, and aw from 0.979 to 0.988, in a 2^3 central composite design with 3 center points and 6 star points; for the 10^3 spores/mL level, temperature ranged from 18 to 22°C, pH from 3.5 to 4.5, and aw from 0.970 to 0.990, in a 2^3 factorial design with 3 center points. Fungal growth was measured daily as colony diameter, and the data were fitted with the primary growth models of Baranyi (Baranyi & Roberts, 1995) and modified Gompertz (Zwietering et al., 1994). For the secondary response surface model, a 2^2 central composite design with 3 center points and 4 star points was used, with temperature from 17.2 to 22.8°C, pH from 3.2 to 4.7, and aw fixed at 0.980 (the product's natural value). In all experiments the mold was inoculated into 230 mL of previously sterilized mango nectar in PET bottles sanitized according to Petrus (2000). 
The total count of heat-resistant molds in the mango nectar was 7.4×10^3 spores/mL; 8 different strains were isolated, and the most heat-resistant (surviving 100°C/15 min) was identified as Aspergillus niger. At the low inoculum level (6.8×10^0 spores/mL), pH reductions increased the adaptation time from 4 to 10 days, while an increase of 0.01 in aw shortened it by 3 days; no growth was observed at temperatures below 18°C. At the 9.3×10^3 spores/mL level, aw within the product's natural range had no significant effect (p<0.05) on the growth parameters. Under temperature-abuse conditions, a reduction of 5.6°C (from 22.8 to 17.2°C) increased the adaptation time by 23 days. Conversely, when the pH of the nectar rose from 4.0 to 4.7, the adaptation time fell from 11 to 3 days and the final colony diameter tripled. The Baranyi model fitted the growth data better, with higher R² values (up to 0.998). The response surface model obtained for adaptation time had linear and quadratic temperature terms and a linear pH term as significant factors (p<0.05, coded values) and was verified with R² = 0.981, bias factor 1.06, accuracy factor 1.16, and Fval/Ftab = 23.4. Statistical analysis of this model showed that a 0.5-unit reduction in pH could double the product's shelf life (from 10 to 20 days); similar effects were observed for temperature reductions of about 0.8°C. The maximum shelf life obtained (about 30 days) occurred at pH 3.28 and 17.2°C. The model obtained for growth rate had linear and quadratic pH and temperature terms as significant factors (p<0.05) and was verified with R² = 0.882, bias factor 1.06, accuracy factor 1.16, and Fval/Ftab = 2.6; it showed the lowest growth rate (1.33 mm/day) at pH 4.0 and 20°C. Thus, pH and temperature significantly influence the growth of A. niger in mango nectar; at the levels studied, however, these factors only retard growth, without preventing it. Refrigerated storage control is therefore essential to avoid temperature abuse, since no growth of A. niger was observed at temperatures ≤15°C regardless of inoculum level. Strict control of pH is likewise advisable, since changes of 0.5 unit can severely alter the product's shelf life. Changes in pH and temperature alone cannot guarantee the microbiological stability of the product, which also depends on raw material quality among other factors; nevertheless, both pH and temperature can act as adjuncts in the preservation of mango nectar. / Master's in Food Science
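As a rough illustration of the primary modelling step, the sketch below fits the modified Gompertz model (in the Zwietering et al. parameterization) to hypothetical daily colony-diameter data with SciPy. The data points, initial guesses, and resulting parameter values are invented for the example and do not come from the dissertation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Modified Gompertz, Zwietering et al. parameterization:
# A = maximum diameter increase (mm), mu = max growth rate (mm/day), lam = lag (days)
def gompertz(t, A, mu, lam):
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# Hypothetical daily colony-diameter measurements (mm) over 15 days
t = np.arange(0, 16.0)
d = np.array([0, 0, 0, 0.4, 1.5, 3.8, 7.0, 10.6, 14.0, 17.0,
              19.2, 20.7, 21.6, 22.1, 22.4, 22.5])

(A, mu, lam), _ = curve_fit(gompertz, t, d, p0=[22, 2.5, 3])
pred = gompertz(t, A, mu, lam)
r2 = 1 - np.sum((d - pred)**2) / np.sum((d - d.mean())**2)
print(f"A = {A:.1f} mm, mu = {mu:.2f} mm/day, lag = {lam:.1f} days, R^2 = {r2:.3f}")
```

The same fitted parameters (lag time and growth rate) would then serve as responses for the secondary polynomial model in temperature and pH.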
34

An Empirically Based Stochastic Turbulence Simulator with Temporal Coherence for Wind Energy Applications

Rinker, Jennifer Marie January 2016
In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields. In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components. The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data. This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence. The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions. The prevalence of temporal coherence and its relationship to other standard wind parameters was modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations. EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site. Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence. The training data for the response surfaces were generated from exhaustive FAST simulations run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory. This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the procedure recommended in the IEC wind turbine design standard. The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence. / Dissertation
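A hedged sketch of the central idea: simulate a series whose Fourier phase differences follow a concentrated circular distribution. Here a von Mises distribution stands in for the empirical distributions used in the dissertation, and the spectral magnitude decay, series length, and concentration parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_series(n=1024, dt=0.1, kappa=0.0):
    """Random series whose Fourier phase differences are von Mises distributed.

    kappa = 0 gives independent uniform phases (a stationary signal); large
    kappa concentrates the phase differences between adjacent components and
    yields temporally coherent bursts. (A stand-in for the dissertation's
    empirically calibrated phase-difference distributions.)
    """
    freqs = np.fft.rfftfreq(n, dt)
    mags = np.zeros_like(freqs)
    mags[1:] = freqs[1:]**(-5.0/6.0)              # Kolmogorov-like decay (illustrative)
    dphi = rng.vonmises(0.0, kappa, len(freqs))   # phase differences
    phases = np.cumsum(dphi)                      # phases of adjacent components
    x = np.fft.irfft(mags * np.exp(1j * phases), n)
    return x / x.std()

incoherent = simulate_series(kappa=0.0)
coherent = simulate_series(kappa=5.0)
# The coherent series concentrates its energy in time (larger peak factor):
print(np.max(np.abs(incoherent)), np.max(np.abs(coherent)))
```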
35

Hydrodynamic Shape Optimization of Trawl Doors with Three-Dimensional Computational Fluid Dynamics Models and Local Surrogates

Hermannsson, Elvar January 2014
Rising fuel prices have been inflating the operating costs of the fishing industry. Trawl doors are used to hold the fishing net open during trawling operations, and they have a great influence on the fuel consumption of vessels. Improvements in the design of trawl doors could therefore contribute significantly to increased fuel efficiency. An efficient optimization algorithm using two- and three-dimensional (2D and 3D) computational fluid dynamics (CFD) models is presented. Accurate CFD models, especially in 3D, are computationally expensive, so the direct use of traditional optimization algorithms, which often require a large number of evaluations, can be prohibitive. The proposed method is iterative and uses low-order local response surface approximation models as surrogates for the expensive CFD model to reduce the number of iterations. The algorithm is applied to the design of two types of geometries: a typical modern trawl door and a novel airfoil-shaped trawl door. The results from the 2D design optimization show that the hydrodynamic efficiency of the typical modern trawl door could be increased by 32%, and that of the novel airfoil-shaped trawl door by 13%. When the 2D optimum designs for the two geometries are compared, the novel airfoil-shaped trawl door proves to be 320% more efficient than the optimized design of the typical modern trawl door. The 2D optimum designs were used as the initial designs for the 3D design optimization, whose results show that the hydrodynamic efficiency could be increased by a further 6% for both geometries. Results from a 3D CFD analysis show that 3D flow effects are significant, with drag significantly underestimated by 2D CFD models.
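The surrogate-based loop the abstract describes might look like the following sketch: fit a low-order local response surface to a few expensive samples around the current design, optimize it within a local region, then accept the step or shrink the region. The "CFD" function is a toy stand-in, and the sampling plan and update rules are simplified assumptions rather than the algorithm of the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy stand-in for an expensive CFD evaluation of (negative) hydrodynamic
# efficiency as a function of two shape variables; we minimize it
def cfd(x):
    return (x[0] - 0.6)**2 + 2.0*(x[1] - 0.3)**2 - 1.0

def local_quadratic(X, y):
    # Fit y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2 by least squares
    A = np.column_stack([np.ones(len(X)), X, X**2, X[:, 0]*X[:, 1]])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: (c[0] + c[1]*x[0] + c[2]*x[1]
                      + c[3]*x[0]**2 + c[4]*x[1]**2 + c[5]*x[0]*x[1])

x, radius = np.array([0.1, 0.8]), 0.3
for it in range(8):
    # Small design of "expensive" samples around the current point
    X = x + radius * rng.uniform(-1, 1, (9, 2))
    y = np.array([cfd(p) for p in X])
    surrogate = local_quadratic(X, y)
    res = minimize(surrogate, x, bounds=[(xi - radius, xi + radius) for xi in x])
    if cfd(res.x) < cfd(x):   # accept the step; otherwise shrink the region
        x = res.x
    else:
        radius *= 0.5
print("optimum design variables:", x, " efficiency:", -cfd(x))
```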
36

Multi-sensor Optimization Of The Simultaneous Turning And Boring Operation

Deane, Erick Johan 01 January 2011
To remain competitive in today’s demanding economy, there is an increasing demand for improved productivity and scrap reduction in manufacturing. Traditional metal removal processes such as turning and boring are still among the most used techniques for fabricating metal products. Although the essential metal removal process is the same, advances in technology have improved the monitoring of the process, allowing for reductions in power consumption, tool wear, and total cost of production. Replacing used CNC lathes from the 1980s in a manufacturing facility may prove costly, so finding a method to modernize the lathes is vital. This research covers Phases I and II of a three-phase project whose final goal is to optimize the simultaneous turning and boring operation of a CNC lathe. From the optimization results it will be possible to build an adaptive controller that produces parts rapidly while minimizing tool wear and machinist interaction with the lathe. Phase I of the project was geared towards selecting the sensors used to monitor the operation and designing a program whose architecture allows simultaneous data collection from the selected sensors at high sampling rates. Signals monitored during the operation included force, temperature, vibration, sound, acoustic emissions, power, and metalworking fluid flow rates. Phase II of this research focuses on using the Response Surface Method to build empirical models for various responses and to optimize the simultaneous cutting process. The simultaneous turning and boring process was defined by four factors: spindle speed, feed rate, outer diameter depth of cut, and inner diameter depth of cut. A total of four sets of experiments were performed. The first set of experiments screened the experimental region to determine whether the cutting parameters were feasible. The next three sets of experiments used Central Composite Designs to build empirical models of each desired response in terms of the four factors and to optimize the process. Each design of experiments was compared with the others to validate that the results were accurate within the experimental region. Optimal machining parameter settings were obtained using the Response Surface Method, with the desirability function as the algorithm used to search for optimal process parameter settings. By applying the results of this research, the manufacturing facility will achieve reduced power consumption, reduced production time, and a decrease in the total cost of each part.
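The desirability-function search mentioned at the end can be sketched as follows, with invented second-order models standing in for the empirical response surfaces fitted from the central composite designs. The factor ranges, coefficients, and desirability bounds are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical fitted second-order models in coded units [-1, 1]:
# surface roughness (smaller is better) and material removal rate (larger is better)
def roughness(x):           # x = (speed, feed, OD depth, ID depth), coded
    s, f, od, idc = x
    return 2.0 + 0.8*f + 0.3*s + 0.2*f**2 + 0.1*s*f + 0.05*(od + idc)

def removal_rate(x):
    s, f, od, idc = x
    return 5.0 + 1.5*s + 1.2*f + 0.8*od + 0.8*idc - 0.2*s**2

def d_min(y, lo, hi):       # desirability for "smaller is better"
    return np.clip((hi - y) / (hi - lo), 0, 1)

def d_max(y, lo, hi):       # desirability for "larger is better"
    return np.clip((y - lo) / (hi - lo), 0, 1)

def neg_overall(x):         # geometric mean of the individual desirabilities
    d = d_min(roughness(x), 1.0, 4.0) * d_max(removal_rate(x), 2.0, 9.0)
    return -np.sqrt(d)

res = differential_evolution(neg_overall, bounds=[(-1, 1)]*4, seed=4)
print("coded settings:", res.x.round(2), " overall desirability:", round(-res.fun, 3))
```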
37

Construction and properties of Box-Behnken designs

Jo, Jinnam 01 February 2006
Box-Behnken designs are used to estimate parameters in a second-order response surface model (Box and Behnken, 1960). These designs are formed by combining ideas from incomplete block designs (BIBD or PBIBD) and factorial experiments, specifically 2^k full or 2^(k-1) fractional factorials. In this dissertation, a more general mathematical formulation of the Box-Behnken method is provided, a general expression for the coefficient matrix in the least squares analysis for estimating the parameters in the second-order model is derived, and the properties of Box-Behnken designs with respect to the estimability of all parameters in a second-order model are investigated when 2^k full factorials are used. The results show that for all pure quadratic coefficients to be estimable, the PBIB(m) design has to be chosen such that its incidence matrix is of full rank, and for all mixed quadratic coefficients to be estimable the PBIB(m) design has to be chosen such that the parameters λ₁, λ₂, ..., λₘ are all greater than zero. To reduce the number of experimental points, the use of 2^(k-1) fractional factorials instead of 2^k full factorials is considered, with separate consideration given to fractions of resolutions III, IV, and V. The construction of Box-Behnken designs using such fractions is described and the properties of the designs concerning estimability of regression coefficients are investigated. Designs obtained from resolution V factorials have the same properties as those using full factorials; resolution III and IV designs may lead to non-estimability of certain coefficients and to correlated estimators. The final topic concerns Box-Behnken designs in which treatments are applied to experimental units sequentially in time or space and in which a linear trend effect may exist. In this situation, one wants to find run orders that yield a linear trend-free Box-Behnken design, so that the trend can be removed by a simple analysis of variance rather than the more complicated analysis of covariance. Construction methods for linear trend-free Box-Behnken designs are introduced for different values of the block size k of the underlying PBIB design. For k = 2 or 3, it may not always be possible to find linear trend-free Box-Behnken designs; for k ≥ 4, they can always be constructed. / Ph. D.
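For small numbers of factors, the classical construction the abstract builds on (incomplete blocks crossed with a two-level factorial) can be sketched directly: each block of factors is run through a 2^2 factorial at ±1 while the remaining factors are held at 0, plus center runs. Note the pairwise blocking below reproduces the published designs only for small k (e.g. k = 3, 4, 5); larger Box-Behnken designs use other incomplete block structures.

```python
import numpy as np
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Box-Behnken design for k factors via the pairwise-block construction:
    cross each pair of factors with a 2^2 factorial at +-1, hold the other
    factors at 0, and append center runs."""
    runs = []
    for pair in combinations(range(k), 2):
        for levels in product((-1, 1), repeat=2):
            row = np.zeros(k)
            row[list(pair)] = levels
            runs.append(row)
    runs += [np.zeros(k)] * n_center
    return np.array(runs)

D = box_behnken(3)
print(D.shape)   # (15, 3): 12 edge-midpoint runs + 3 center points
print(D)
```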
38

Effective design augmentation for prediction

Rozum, Michael A. 03 August 2007
In a typical response surface study, an experimenter will fit a first-order model in the early stages of the study and obtain the path of steepest ascent. The path leads the experimenter out of this initial region of interest and into a new region of interest. The experimenter may fit another first-order model here or, if curvature is believed to be present in the underlying system, a second-order model. In the final stages of the study, the experimenter fits a second-order model and typically contracts the region of interest as the levels of the factors that optimize the response are nearly determined. Due to the sequential nature of experimentation in a typical response surface study, the experimenter may want to augment an initial design with additional runs within the current region of interest. The little discussion that exists in the statistical literature suggests adding runs sequentially in a conditional D-optimal manner. Four prediction-oriented criteria, I_IV, I_SVr, I_SVr^ADJ, and G, and two estimation-oriented criteria, A and E, are studied here as other possible sequential design augmentation optimality criteria. Analytical properties of I_IV, I_SVr, and A are developed within the context of the design augmentation problem. I_SVr is found to be somewhat ineffective in actual sequential design augmentation situations; a new, more effective criterion, I_SVr^ADJ, is introduced and thoroughly developed. Software is developed that allows sequential design augmentation via these seven criteria. Unlike existing design augmentation software, all locations within the current region of interest are eligible for inclusion in the augmenting design (a continuous candidate list). Case studies were performed. For a first-order model there was negligible difference in the prediction variance properties of the designs generated via sequential augmentation by D and by the best of the other criteria, I_IV, I_SVr^ADJ, and A. For a second-order model, however, the designs generated via sequential augmentation by D place too few runs, too late, in the interior of the region of interest, and thus yield inferior prediction variance properties to the designs generated via I_IV, I_SVr^ADJ, and A. The D-efficiencies of the designs generated via sequential augmentation by I_IV, I_SVr^ADJ, and A range from reasonable to fully D-optimum. The I_IV and I_SVr^ADJ optimality criteria are therefore recommended for sequential design augmentation when quality of prediction is more important than quality of estimation of coefficients. / Ph. D.
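A hedged sketch of the baseline the dissertation compares against, conditional D-optimal augmentation over a dense candidate grid: each added run is the candidate that maximizes det(F'F) of the second-order model matrix given the runs already in the design. The model, region, and candidate density are illustrative, and the prediction-oriented criteria themselves (I_IV, I_SVr^ADJ, etc.) are not implemented here.

```python
import numpy as np
from itertools import product

def model_matrix(X):
    """Second-order model in two factors: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1*x2])

# Initial design: a 2^2 factorial plus a center run (second-order model not
# yet estimable, so augmentation is genuinely needed)
X0 = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], float)

# Dense candidate grid standing in for a continuous candidate list
grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=2)))

def augment_D(X, candidates, n_new):
    """Greedily add the candidate that maximizes det(F'F) (D-optimality)."""
    X = X.copy()
    for _ in range(n_new):
        F = model_matrix(X)
        M = F.T @ F
        f = lambda c: model_matrix(c[None])[0]
        best = max(candidates, key=lambda c: np.linalg.det(M + np.outer(f(c), f(c))))
        X = np.vstack([X, best])
    return X

X_aug = augment_D(X0, grid, 4)
print(X_aug[-4:])   # the four conditionally D-optimal additions
```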
39

Performance evaluation of metamodelling methods for engineering problems: towards a practitioner guide

Kianifar, Mohammed R., Campean, Felician 29 July 2019
Metamodelling or surrogate modelling techniques are frequently used across the engineering disciplines in conjunction with expensive simulation models or physical experiments. With the proliferation of metamodelling techniques developed to provide enhanced performance for specific problems, and the wide availability of a diverse choice of tools in engineering software packages, the engineering task of selecting a robust metamodelling technique for practical problems is still a challenge. This research introduces a framework for describing the typology of engineering problems, in terms of dimensionality and complexity, and the modelling conditions, reflecting the noisiness of the signals and the affordability of sample sizes, and on this basis presents a systematic evaluation of the performance of frequently used metamodelling techniques. A set of metamodelling techniques selected on the basis of their reported use for engineering problems (Polynomial, Radial Basis Function, and Kriging) was systematically evaluated in terms of accuracy and robustness against a carefully assembled set of 18 test functions covering different types of problems, sampling conditions, and noise conditions. A set of four real-world engineering case studies covering both computer simulation and physical experiments was also analysed as validation tests for the proposed guidelines. The main conclusions drawn from the study are that the Kriging model with the Matérn 5/2 correlation function performs consistently well across different problem types with smooth (i.e. not noisy) data, while the Kriging model with the Matérn 3/2 correlation function provides robust performance under noisy conditions, except for very high noise, where the Kriging model with a nugget appears to provide better models. These results give engineering practitioners a guide for choosing a metamodelling technique for the problem types and modelling conditions represented in the study, while the evaluation framework and benchmark problem set will be useful for researchers conducting similar studies.
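The headline comparison can be reproduced in spirit with scikit-learn, where the Matérn 3/2 and 5/2 correlation functions correspond to Matern(nu=1.5) and Matern(nu=2.5) and a WhiteKernel plays the role of the nugget. The test function, noise levels, and sample size below are illustrative, not those of the paper's benchmark suite.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(5)

def test_fn(x):                       # a smooth 1-D test function (illustrative)
    return np.sin(3*x) + 0.5*np.cos(7*x)

X = rng.uniform(0, 2, (30, 1))
Xt = np.linspace(0, 2, 200)[:, None]

for noise in (0.0, 0.3):              # smooth vs noisy observations
    y = test_fn(X.ravel()) + rng.normal(0, noise, 30)
    for nu in (1.5, 2.5):             # Matern 3/2 and Matern 5/2
        # WhiteKernel acts as an estimated nugget term
        kernel = Matern(nu=nu) + WhiteKernel(1e-5, (1e-10, 1e1))
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        rmse = np.sqrt(np.mean((gp.predict(Xt) - test_fn(Xt.ravel()))**2))
        print(f"noise={noise}  Matern nu={nu}:  RMSE={rmse:.3f}")
```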
40

Robust parameter optimization strategies in computer simulation experiments

Panis, Renato P. 06 June 2008
An important consideration in computer simulation studies is the issue of model validity, the level of accuracy with which the simulation model represents the real world system under study. This dissertation addresses a major cause of model validity problems: the dissimilarity between the simulation model and the real system due to the dynamic nature of the real system that results from the presence of nonstationary stochastic processes within the system. This transitory characteristic of the system is typically not addressed in the search for an optimal solution. In reliability and quality control studies, it is known that optimizing with respect to the variance of the response is as important a concern as optimizing with respect to average performance response. Genichi Taguchi has been instrumental in the advancement of this philosophy. His work has resulted in what is now popularly known as the Taguchi Methods for robust parameter design. Following Taguchi's philosophy, the goal of this research is to devise a framework for finding optimum operating levels for the controllable input factors in a stochastic system that are insensitive to internal sources of variation. Specifically, the model validity problem of nonstationary system behavior is viewed as a major internal cause of system variation. In this research the typical application of response surface methodology (RSM) to the problem of simulation optimization is examined. Simplifying assumptions that enable the use of RSM techniques are examined. The relaxation of these assumptions to address model validity leads to a modification of the RSM approach to properly handle the problem of optimization in the presence of nonstationarity. Taguchi's strategy and methods are then adapted and applied to this problem. Finally, dual-response RSM extensions of the Taguchi approach separately modeling the process performance mean and variance are considered and suitably revised to address the same problem. A second cause of model validity problems is also considered: the random behavior of the supposedly controllable input factors to the system. A resolution to this source of model invalidity is proposed based on the methodology described above. / Ph. D.
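One common dual-response formulation of the kind discussed in the final paragraph can be sketched as follows, with invented fitted models for the process mean and the log standard deviation: maximize the predicted mean subject to a cap on the predicted standard deviation. The models, cap, and bounds are illustrative assumptions, not results from the dissertation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical dual response surfaces fitted from replicated runs, coded units:
# one model for the process mean, one for the log of the standard deviation
def mean_model(x):
    x1, x2 = x
    return 50 + 8*x1 + 5*x2 - 3*x1**2 - 2*x2**2 + 1.5*x1*x2

def log_sd_model(x):
    x1, x2 = x
    return 1.0 + 0.9*x1 - 0.6*x2 + 0.4*x1**2

# Robust parameter optimization: maximize the mean while keeping the predicted
# standard deviation below a cap (one common dual-response scheme)
sd_cap = 2.0
res = minimize(lambda x: -mean_model(x), x0=[0.0, 0.0],
               bounds=[(-1, 1), (-1, 1)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: sd_cap - np.exp(log_sd_model(x))}])
print("robust settings (coded):", res.x.round(2),
      " mean:", round(mean_model(res.x), 1),
      " sd:", round(float(np.exp(log_sd_model(res.x))), 2))
```

The constrained-mean formulation shown here is one of several dual-response schemes; others minimize the variance subject to a target on the mean, or combine the two surfaces into a signal-to-noise ratio in the Taguchi style.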
