1

Variable Fidelity Optimization with Hardware-in-the-Loop for Flapping Flight

Duffield, Michael Luke 10 July 2013 (has links) (PDF)
Hardware-in-the-loop (HIL) modeling is a powerful way of modeling complicated systems. However, some hardware is expensive to use in terms of time or mechanical wear. In cases like these, optimizing directly against the hardware can be prohibitively expensive because of the number of hardware calls required. Variable fidelity optimization can help overcome these problems. Variable fidelity optimization uses less expensive surrogates to optimize an expensive system while calling it fewer times. The surrogates are usually created by performing a design of experiments (DOE) on the expensive model and fitting a surface to the results. However, some systems are too expensive to create a surrogate from. One such case is that of a flapping flight model. In this thesis, a technique for variable fidelity optimization of HIL has been created that optimizes a system while calling it as few times as possible. This technique is referred to as an intelligent DOE. This intelligent DOE was tested using simple models of various dimensions. It was then used to find a flapping wing trajectory that maximizes lift. Through testing, the intelligent DOE was shown to be able to optimize expensive systems with fewer calls than traditional variable fidelity optimization would have needed. Savings as high as 97% were recorded. It was noted that as the number of design variables increased, the intelligent DOE became more effective by comparison, because the number of calls needed by a traditional DOE-based variable fidelity optimization increased faster than linearly, whereas the number of hardware calls for the intelligent DOE increased only linearly.
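The screening idea in this abstract (cheap evaluations eliminate most candidates so that only a handful reach the hardware) can be illustrated with a short sketch. The code below is a generic Python illustration under assumed stand-in models; `cheap_model`, `hardware_call`, and the 3-variable design space are hypothetical and do not reproduce the thesis's flapping-wing models or its intelligent DOE.

```python
# Generic variable-fidelity screening sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def cheap_model(x):                  # fast low-fidelity estimate of the objective
    return np.sum((x - 0.5) ** 2, axis=-1)

def hardware_call(x):                # stand-in for one expensive hardware evaluation
    return float(np.sum((x - 0.45) ** 2) + 0.02 * np.sin(8.0 * x[0]))

candidates = rng.uniform(0.0, 1.0, size=(200, 3))   # coarse DOE over 3 design variables
ranked = candidates[np.argsort(cheap_model(candidates))]
best_few = ranked[:5]                                # only these reach the hardware
results = [hardware_call(x) for x in best_few]       # 5 hardware calls instead of 200
winner = best_few[int(np.argmin(results))]
print("best design:", winner, "objective:", min(results))
```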
2

Multiscale modeling of multimaterial systems using a Kriging based approach

Sen, Oishik 01 December 2016 (has links)
The present work presents a framework for multiscale modeling of multimaterial flows using surrogate modeling techniques in the particular context of shocks interacting with clusters of particles. The work builds a framework for bridging scales in shock-particle interaction by using ensembles of resolved mesoscale computations of shocked particle-laden flows. The information from the mesoscale models is “lifted” by constructing metamodels of the closure terms. The thesis analyzes several issues pertaining to surrogate-based multiscale modeling frameworks. First, to create surrogate models, the effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, the Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging Method, and a Dynamic Kriging (DKG) Method, is evaluated. The rate of convergence of the error when these techniques are used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver. After this, closure laws for drag are constructed in the form of surrogate models derived from resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach Number and particle volume fraction. Two Kriging-based methods, viz. the DKG method and the Modified Bayesian Kriging Method (MBKG), are evaluated for their ability to construct surrogate models with sparse data, i.e., using as few mesoscale simulations as possible. It is shown that unlike the DKG method, the MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. In macroscale models for shock-particle interactions, Subgrid Particle Reynolds’ Stress Equivalent (SPARSE) terms arise because of velocity fluctuations due to fluid-particle interaction in the subgrid/meso scales. Mesoscale computations are performed to calculate the SPARSE terms and the kinetic energy of the fluctuations for different values of Mach Number and particle volume fraction. Closure laws for the SPARSE terms are constructed using the MBKG method. It is found that the directions normal and parallel to that of shock propagation are the principal directions of the SPARSE tensor. It is also found that the kinetic energy of the fluctuations is independent of the particle volume fraction and is 12-15% of the incoming shock kinetic energy for higher Mach Numbers. Finally, the thesis addresses the cost of performing large ensembles of resolved mesoscale computations for constructing surrogates. Variable fidelity techniques are used to construct an initial surrogate from ensembles of coarse-grid, relatively inexpensive computations, while the use of resolved high-fidelity simulations is limited to correcting the initial surrogate. Different variable-fidelity techniques, viz. the Space Mapping method, RBFs, and the MBKG method, are evaluated based on their ability to correct the initial surrogate. It is found that the MBKG method requires the fewest resolved mesoscale computations to correct the low-fidelity metamodel. Instead of the 56 high-fidelity computations otherwise needed to obtain a surrogate, the MBKG method constructs surrogates from only 15 resolved computations, resulting in a drastic reduction in computational cost.
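As a rough illustration of the Kriging step in the pipeline described above, the sketch below fits a generic Gaussian-process (ordinary Kriging) surrogate to a few made-up drag samples over Mach number and particle volume fraction, using scikit-learn's GaussianProcessRegressor as a stand-in. The drag values are invented for the example, and the thesis's DKG and MBKG formulations are not reproduced.

```python
# Ordinary-Kriging stand-in for a drag closure surrogate (illustrative data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# hypothetical training points: (Mach number, particle volume fraction) -> drag closure value
X = np.array([[1.5, 0.05], [1.5, 0.20], [2.5, 0.05], [2.5, 0.20], [3.5, 0.10], [3.5, 0.25]])
y = np.array([1.8, 2.9, 2.2, 3.6, 2.7, 4.4])

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 0.1])   # anisotropic in Mach and fraction
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, n_restarts_optimizer=5)
gp.fit(X, y)

mach, phi = 3.0, 0.15                                         # query a new flow condition
mean, std = gp.predict(np.array([[mach, phi]]), return_std=True)
print(f"predicted drag closure: {mean[0]:.2f} +/- {std[0]:.2f}")
```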
3

Variable fidelity modeling as applied to trajectory optimization for a hydraulic backhoe

Moore, Roxanne Adele 08 April 2009 (has links)
Modeling, simulation, and optimization play vital roles throughout the engineering design process; however, in many design disciplines the cost of simulation is high, and designers are faced with a tradeoff between the number of alternatives that can be evaluated and the accuracy with which they can be evaluated. In this thesis, a methodology is presented for using models of various levels of fidelity during the optimization process. The intent is to use inexpensive, low-fidelity models with limited accuracy to recognize poor design alternatives and reserve the high-fidelity, accurate, but also expensive models only to characterize the best alternatives. Specifically, by setting a user-defined performance threshold, the optimizer can explore the design space using a low-fidelity model by default, and switch to a higher fidelity model only if the performance threshold is attained. In this manner, the high fidelity model is used only to discern the best solution from the set of good solutions, so that computational resources are conserved until the optimizer is close to the solution. This makes the optimization process more efficient without sacrificing the quality of the solution. The method is illustrated by optimizing the trajectory of a hydraulic backhoe. To characterize the robustness and efficiency of the method, a design space exploration is performed using both the low and high fidelity models, and the optimization problem is solved multiple times using the variable fidelity framework.
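The threshold-switching strategy described above can be sketched as a single objective wrapper that calls the low-fidelity model by default and spends a high-fidelity evaluation only when the low-fidelity prediction beats the user-defined threshold. The models, threshold value, and optimizer below are illustrative assumptions, not the backhoe trajectory models from the thesis.

```python
# Threshold-based fidelity switching inside one objective function (illustrative models).
import numpy as np
from scipy.optimize import differential_evolution

def low_fidelity(x):                 # cheap, approximate model used by default
    return float(np.sum((x - 0.4) ** 2))

def high_fidelity(x):                # accurate but expensive model, rarely called
    return float(np.sum((x - 0.5) ** 2) + 0.05 * np.cos(6.0 * x[0]))

THRESHOLD = 0.05                     # user-defined performance threshold
high_fidelity_calls = 0

def objective(x):
    global high_fidelity_calls
    f_lo = low_fidelity(x)
    if f_lo > THRESHOLD:             # not promising: stay with the cheap model
        return f_lo
    high_fidelity_calls += 1         # promising: spend a high-fidelity evaluation
    return high_fidelity(x)

result = differential_evolution(objective, bounds=[(0.0, 1.0)] * 4, seed=1, maxiter=50)
print(result.x, result.fun, "high-fidelity calls:", high_fidelity_calls)
```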
4

A Unified, Multifidelity Quasi-Newton Optimization Method with Application to Aero-Structural Design

Bryson, Dean Edward 20 December 2017 (has links)
No description available.
5

Efficient Global Optimization of Multidisciplinary System using Variable Fidelity Analysis and Dynamic Sampling Method

Park, Jangho 22 July 2019 (has links)
The work in this dissertation is motivated by the need to reduce design cost at the early design stage while maintaining high design accuracy throughout all design stages. It presents four key design methods to improve the performance of Efficient Global Optimization for multidisciplinary problems. First, a fidelity-calibration method is developed and applied to lower-fidelity samples. Function values obtained from lower-fidelity analysis methods are updated to have accuracy equivalent to that of the highest-fidelity samples, and these calibrated data sets are used to construct a variable-fidelity Kriging model. For the design of experiments (DOE), a dynamic sampling method is developed that filters and infills data based on mathematical criteria for model accuracy. In the sample infilling process, multi-objective optimization for exploitation and exploration of the design space is carried out. To indicate the fidelity of function analysis for additional samples in the variable-fidelity Kriging model, a dynamic fidelity indicator based on the overlapping coefficient is proposed. For multidisciplinary design problems, where multiple physics are tightly coupled with different coupling strengths, a multi-response Kriging model is introduced that utilizes iterative Maximum Likelihood Estimation (iMLE). Through the iMLE process, the large number of hyper-parameters in multi-response Kriging can be calculated accurately and with improved numerical stability. The optimization methods developed in the study are validated with analytic functions and show considerable performance improvement. Subsequently, three practical design optimization problems are solved: a NACA0012 airfoil, a multi-element NLR 7301 airfoil, and an all-moving wingtip control surface of a tailless aircraft. The results are compared with those of existing methods, and it is concluded that the proposed methods deliver equivalent design accuracy at significantly reduced computational cost. / Doctor of Philosophy / In recent years, as the cost of aircraft design has grown rapidly and the aviation industry has become increasingly interested in saving design time and cost, accurate design results during the early design stages have become particularly important for reducing overall life-cycle cost. The purpose of this work is to reduce the design cost at the early design stage while achieving design accuracy as high as that of the detailed design stage. An Efficient Global Optimization (EGO) method with variable-fidelity analysis and multidisciplinary design capability is proposed. Using variable-fidelity analysis for function evaluation, high-fidelity function evaluations can be replaced by low-fidelity analyses of equivalent accuracy, which leads to considerable cost reduction. As the aircraft system has sub-disciplines coupled by multiple physics, including aerodynamics, structures, and thermodynamics, the accuracy of an individual discipline affects that of all others, and thus the overall design accuracy during the early design stages. Four distinctive design methods are developed and implemented in the standard EGO framework: 1) variable-fidelity analysis based on error approximation and calibration of low-fidelity samples, 2) dynamic sampling criteria for both filtering and infilling samples, 3) a dynamic fidelity indicator (DFI) for selecting the analysis fidelity of infilled samples, and 4) a multi-response Kriging model with iterative Maximum Likelihood Estimation (iMLE).
The methods are validated with analytic functions; comparison with existing design methods shows improved cost efficiency throughout the overall design process while design accuracy is maintained. For practical applications, the methods are applied to the design optimization of an airfoil and of a complete aircraft configuration. The design results are compared with those of existing methods, and it is found that the proposed method yields designs of accuracy equivalent to or higher than that of designs based solely on high-fidelity analysis, at a cost reduced by orders of magnitude.
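For context, the infill step at the core of standard EGO is usually driven by the expected-improvement (EI) criterion, which the dissertation extends with fidelity calibration and dynamic sampling. The sketch below shows only the textbook single-fidelity EI on a toy one-dimensional problem; the surrogate, toy function, and grid are assumptions, and the dissertation's dynamic criteria and fidelity indicator are not reproduced.

```python
# Textbook expected-improvement infill for a single-fidelity Kriging surrogate (toy example).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(X_candidates, gp, f_best, xi=0.01):
    """EI for minimization: balances low predicted mean against high predictive variance."""
    mu, sigma = gp.predict(X_candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# toy demonstration: four samples of a 1-D function, then pick the next infill point
X_train = np.array([[0.0], [0.3], [0.7], [1.0]])
y_train = np.sin(3.0 * X_train).ravel()
gp = GaussianProcessRegressor().fit(X_train, y_train)
grid = np.linspace(0.0, 1.0, 101)[:, None]
x_next = grid[np.argmax(expected_improvement(grid, gp, y_train.min()))]
print("next infill location:", x_next)        # refit the surrogate here and repeat
```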
6

Value-based global optimization

Moore, Roxanne Adele 21 May 2012 (has links)
Computational models and simulations are essential system design tools that allow for improved decision making and cost reductions during all phases of the design process. However, the most accurate models are often computationally expensive and can therefore only be used sporadically. Consequently, designers are often forced to choose between exploring many design alternatives with less accurate, inexpensive models and evaluating fewer alternatives with the most accurate models. To achieve both broad exploration of the alternatives and accurate determination of the best alternative with reasonable costs incurred, surrogate modeling and variable accuracy modeling are used widely. A surrogate model is a mathematically tractable approximation of a more expensive model based on a limited sampling of that model, while variable accuracy modeling involves a collection of different models of the same system with different accuracies and computational costs. As compared to using only very accurate and expensive models, designers can determine the best solutions more efficiently using surrogate and variable accuracy models because obviously poor solutions can be eliminated inexpensively using only the less expensive, less accurate models. The most accurate models are then reserved for discerning the best solution from the set of good solutions. In this thesis, a Value-Based Global Optimization (VGO) algorithm is introduced. The algorithm uses kriging-like surrogate models and a sequential sampling strategy based on Value of Information (VoI) to optimize an objective characterized by multiple analysis models with different accuracies. It builds on two primary research contributions. The first is a novel surrogate modeling method that accommodates data from any number of analysis models with different accuracies and costs. The second contribution is the use of Value of Information (VoI) as a new metric for guiding the sequential sampling process for global optimization. In this manner, the cost of further analysis is explicitly taken into account during the optimization process. Results characterizing the algorithm show that VGO outperforms Efficient Global Optimization (EGO), a similar global optimization algorithm that is considered to be the current state of the art. It is shown that when cost is taken into account in the final utility, VGO achieves a higher utility than EGO with statistical significance. In further experiments, it is shown that VGO can be successfully applied to higher dimensional problems as well as practical engineering design examples.
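A much-simplified, cost-aware caricature of the sequential-sampling idea is sketched below: the expected benefit of another evaluation is weighed against the cost of the model that would supply it, and sampling stops when no model is worth its cost. The benefit here is a plain expected-improvement value scaled by an assumed accuracy weight; the costs, weights, and decision rule are illustrative and are not the thesis's Value of Information metric.

```python
# Cost-aware model selection for the next sample (illustrative costs and weights).
model_costs = {"low": 1.0, "medium": 10.0, "high": 100.0}
model_accuracy = {"low": 0.4, "medium": 0.8, "high": 1.0}

def pick_model(expected_improvement_value):
    """Choose the model whose (weighted expected benefit - cost) is largest and positive."""
    best_choice, best_net = None, 0.0
    for name, cost in model_costs.items():
        net = model_accuracy[name] * expected_improvement_value - cost
        if net > best_net:
            best_choice, best_net = name, net
    return best_choice        # None means further analysis is not worth its cost

print(pick_model(5.0))     # -> low   (a small expected gain only justifies the cheap model)
print(pick_model(500.0))   # -> high  (a large expected gain justifies the expensive model)
print(pick_model(0.5))     # -> None  (stop sampling: every model costs more than it is worth)
```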
7

Aerodynamic Database Generation for a Complex Hypersonic Vehicle Configuration Utilizing Variable-Fidelity Kriging

Tancred, James Anderson January 2018 (has links)
No description available.
