1. Robust design using sequential computer experiments. Gupta, Abhishek, 30 September 2004.
Modern engineering design tends to use computer simulations such as Finite Element Analysis (FEA) to replace physical experiments when evaluating a quality response, e.g., the stress level in a phone packaging process. Computer models have certain advantages over physical experiments, such as lower cost, the ease of trying out different design alternatives, and a greater impact on product design. However, because of the complexity of FEA codes, it can be computationally expensive to evaluate the quality response function over a large number of combinations of design and environmental factors. Traditional experimental design and response surface methodology, which were developed for physical experiments subject to random errors, are not very effective for deterministic FEA simulation outputs. In this thesis, we utilize a spatial statistical method (a Kriging model) to analyze deterministic, simulation-based computer experiments. We then devise a sequential strategy that allows us to explore the whole response surface efficiently. The overall number of computer experiments is markedly reduced compared with traditional response surface methodology. The proposed methodology is illustrated with an electronic packaging example.
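A minimal sketch of the general idea described in this abstract (not the thesis's actual code): fit a Kriging (Gaussian process) surrogate to a few deterministic simulator runs, then sequentially evaluate the candidate point where the surrogate is least certain. The simulator stand-in `fea_stress`, the two-factor design space, and all settings below are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical stand-in for an expensive FEA run (stress vs. two design/environmental factors).
def fea_stress(x):
    return np.sin(3 * x[0]) + 0.5 * np.cos(5 * x[1])

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 2))              # small initial space-filling design
y = np.array([fea_stress(x) for x in X])

candidates = rng.uniform(0, 1, size=(500, 2))   # dense candidate set over the design space

for _ in range(10):                             # sequential refinement loop
    # alpha is tiny because the simulator is deterministic (near-interpolating Kriging fit)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-10).fit(X, y)
    _, sd = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(sd)]           # run the simulator where the model is least certain
    X = np.vstack([X, x_new])
    y = np.append(y, fea_stress(x_new))
```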
2. Physical-Statistical Modeling and Optimization of Cardiovascular Systems. Du, Dongping, 1 January 2002.
Heart disease remains the leading cause of death in the U.S. and worldwide. To improve cardiac care, there is an urgent need to develop early diagnoses of heart disease and optimal intervention strategies, which in turn calls for a better understanding of the pathology of heart disease.
Computer simulation and modeling have been widely applied to overcome many practical and ethical limitations of in-vivo, ex-vivo, and whole-animal experiments. Computer experiments give physiologists and cardiologists an indispensable tool to characterize, model, and analyze cardiac function in both healthy and diseased hearts. Most importantly, simulation modeling enables the analysis of causal relationships in cardiac dysfunction, from ion channels to the whole heart, which physical experiments alone cannot achieve.
Growing evidence shows that aberrant glycosylation has a dramatic influence on cardiac and neuronal function. Variable but modest reductions in glycosylation among congenital disorders of glycosylation (CDG) subtypes have multi-system effects that lead to a high infant mortality rate. In addition, CDG tends to cause atrial fibrillation (AF), the most common sustained cardiac arrhythmia, in young patients. The mortality rate from AF has been increasing over the past two decades. Given this growing healthcare burden, studying AF mechanisms and developing optimal ablation strategies are urgently needed.
Very little is known about how glycosylation modulates cardiac electrical signaling. It is also a significant challenge to experimentally connect changes at one organizational level (e.g., electrical conduction in cardiac tissue) to measured changes at another level (e.g., ion channels). In this study, we integrate data from in-vitro experiments with in-silico models to simulate the effects of reduced glycosylation on the gating kinetics of cardiac ion channels (hERG, Na+, and K+ channels) and to predict how glycosylation modulates the dynamics of individual cardiac cells and tissues.
The complex gating kinetics of Na+ channels are modeled with a 9-state Markov model whose voltage-dependent transition rates take exponential forms. Calibrating this model is challenging because the problem is non-linear, non-convex, ill-posed, and has a large parameter space. We developed a new metamodel-based simulation optimization approach for calibrating the model with the in-vitro experimental data. The proposed algorithm is shown to be efficient at learning the Markov model of the Na+ channel, and it can readily be adapted to many other optimization problems in computer modeling.
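As a loose illustration of the ingredients mentioned above, and not the thesis's 9-state model, the sketch below encodes voltage-dependent transition rates of exponential form for a toy two-state channel and a sum-of-squares calibration objective that a metamodel-based (surrogate-assisted) optimizer could minimize against recorded data; all rates, voltages, and "measured" values are made up.

```python
import numpy as np

def rate(v, a, b):
    """Voltage-dependent transition rate of exponential form, k(V) = a * exp(b * V)."""
    return a * np.exp(b * v)

def simulate_open_prob(params, v, t):
    """Toy 2-state (closed <-> open) channel standing in for the full 9-state Markov model."""
    a1, b1, a2, b2 = params
    k_open, k_close = rate(v, a1, b1), rate(v, a2, b2)
    p_inf = k_open / (k_open + k_close)        # steady-state open probability
    tau = 1.0 / (k_open + k_close)             # relaxation time constant
    return p_inf * (1.0 - np.exp(-t / tau))    # relaxation from p(0)=0 toward p_inf

def calibration_loss(params, v, t, measured):
    """Sum-of-squares mismatch between simulated and measured open probability."""
    return np.sum((simulate_open_prob(params, v, t) - measured) ** 2)

# Hypothetical voltage-clamp trace; a metamodel-based optimizer would minimize this loss
# over the rate parameters instead of querying the expensive simulator exhaustively.
t = np.linspace(0.1, 50.0, 100)
measured = simulate_open_prob((0.02, 0.03, 0.01, -0.02), v=-20.0, t=t)
print(calibration_loss((0.05, 0.02, 0.02, -0.01), v=-20.0, t=t, measured=measured))
```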
In addition, the understanding of AF initiation and maintenance has remained sketchy at best. One salient problem is the inability to interpret intracardiac recordings, which prevents us from reconstructing the rhythmic mechanisms of AF because multiple wavelets circulate, collide, and continuously change direction in the atria. We are designing computer experiments to simulate single and multiple activations on atrial tissue and the corresponding intracardiac signals. This research will create a novel computer-aided decision support tool to optimize AF ablation procedures.
3. Multidisciplinary Analysis and Design Optimization of an Efficient Supersonic Air Vehicle. Allison, Darcy L., 18 November 2013.
This material is based on research sponsored by Air Force Research Laboratory under agreement number FA8650-09-2-3938. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory or the U.S. Government.

This work seeks to develop multidisciplinary design optimization (MDO) methods to find the optimal design of a particular aircraft called an Efficient Supersonic Air Vehicle (ESAV): a long-range military bomber to be designed for high-speed (supersonic) flight and survivability. The design metric used to differentiate designs is minimization of the take-off gross weight.
This work shows the usefulness of MDO tools, rather than compartmentalized design practices, in the early stages of the design process. These tools must be able to analyze, simultaneously and collectively, all of the physics pertinent to the aircraft of interest.
Low-fidelity and higher-fidelity ESAV MDO frameworks have been constructed. The analysis codes in the higher-fidelity framework were validated by comparison with the legacy B-58 supersonic bomber. The low-fidelity framework explored its design space through a computationally expensive, large design-of-computer-experiments study, which identified an optimal ESAV with an arrow wing planform. Challenges specific to designing an ESAV that the low-fidelity framework did not address were handled by the higher-fidelity framework. In particular, models were required to characterize the effects of the ESAV's low-observable characteristics: the embedded engines necessitated a higher-fidelity propulsion model and an engine exhaust-washed structures discipline, and low-observability requirements necessitated adding a radar cross section discipline.
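As one hedged illustration of what a design-of-computer-experiments study for design-space exploration can look like (the variables, bounds, and sample size below are invented, not those of the ESAV study), a space-filling Latin hypercube sample over a handful of planform variables might be generated like this:

```python
from scipy.stats import qmc

# Hypothetical planform/design variables and bounds (not the actual ESAV parameterization).
names    = ["wing_area", "aspect_ratio", "sweep_deg", "taper_ratio", "t_over_c"]
l_bounds = [300.0, 1.5, 30.0, 0.05, 0.02]
u_bounds = [800.0, 4.0, 70.0, 0.40, 0.08]

# Latin hypercube design: each variable's range is stratified into n bins and every bin
# is sampled exactly once, giving good one-dimensional coverage of the design space.
sampler = qmc.LatinHypercube(d=len(names), seed=0)
unit_sample = sampler.random(n=200)                  # 200 points in the unit hypercube
design = qmc.scale(unit_sample, l_bounds, u_bounds)  # rescale to the physical bounds

# Each row of `design` would then be evaluated by the (expensive) MDO analysis codes.
print(design.shape)  # (200, 5)
```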
A less costly computational process, based on successive NSGA-II optimization runs, was used for the higher-fidelity MDO. This resulted in an optimal ESAV with a trapezoidal wing planform. The NSGA-II optimizer considered arrow wing planforms in early generations, but these were later discarded in favor of the trapezoidal planform. Sensitivities around this optimal design were computed using the well-known ANOVA method to characterize the surrounding design space.
The low- and higher-fidelity frameworks could not be combined in a mixed-fidelity optimization process because the low-fidelity analyses were not faithful enough to the higher-fidelity results: the low-fidelity optimum was found to be infeasible according to the higher-fidelity framework, and vice versa. Therefore, the low-fidelity framework was not capable of guiding the higher-fidelity framework to the eventual trapezoidal planform optimum.
4. Fast uncertainty reduction strategies relying on Gaussian process models. Chevalier, Clément, 18 September 2013.
This thesis deals with sequential and batch-sequential evaluation strategies for real-valued functions under a limited evaluation budget, using Gaussian process models. Optimal stepwise uncertainty reduction (SUR) strategies are studied for two different problems, motivated by application cases in nuclear safety. First, we address the problem of identifying the excursion set above a threshold T of a real-valued function f. Second, we study the problem of identifying the set of "robust controlled" configurations, i.e., the set of controlled inputs for which the function remains below T whatever the values of the uncontrolled inputs. New SUR strategies are presented, together with efficient procedures and formulas that allow these strategies to be used in concrete applications. Using fast formulas to recompute the posterior mean or covariance function of a Gaussian process (the "kriging update formulas") does not only yield substantial computational savings; these formulas are also a key ingredient for obtaining closed-form expressions that make computationally expensive evaluation strategies usable in practice. A contribution to batch-sequential optimization based on the multi-points Expected Improvement is also presented.
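For reference, the kriging update formulas mentioned above take, in the noise-free case and for a single new evaluation point x_{n+1}, the standard form below, where m_n and k_n denote the posterior (kriging) mean and covariance after n evaluations (generic notation, not necessarily the thesis's):

```latex
m_{n+1}(x) = m_n(x) + \frac{k_n(x, x_{n+1})}{k_n(x_{n+1}, x_{n+1})}
             \bigl( f(x_{n+1}) - m_n(x_{n+1}) \bigr),
\qquad
k_{n+1}(x, x') = k_n(x, x') - \frac{k_n(x, x_{n+1})\, k_n(x_{n+1}, x')}{k_n(x_{n+1}, x_{n+1})}.
```

Each update costs only a rank-one correction of the previous posterior, which is what makes otherwise expensive sequential and batch-sequential SUR criteria tractable.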