
AUTOMATED ADAPTIVE HYPERPARAMETER TUNING FOR ENGINEERING DESIGN OPTIMIZATION WITH NEURAL NETWORK MODELS

<p dir="ltr">Neural networks (NNs) effectively address the challenges of engineering design optimization by using data-driven models, thus reducing computational demands. However, their effectiveness depends heavily on hyperparameter optimization (HPO), which is itself a global optimization problem. While traditional HPO methods, such as manual, grid, and random search, are simple, they often fail to navigate the vast hyperparameter (HP) space efficiently. This work examines the effectiveness of integrating Bayesian optimization (BO) with multi-armed bandit (MAB) optimization for HPO in NNs. The thesis first addresses HPO in one-shot sampling, where NNs are trained on datasets of varying sample sizes. It compares NNs optimized through traditional HPO techniques with those optimized through a combination of BO and MAB optimization on the analytical Branin function and on aerodynamic shape optimization (ASO) of an airfoil in transonic flow. Findings from the optimization of the Branin function indicate that the combined BO and MAB approach yields simpler NNs and reduces the sample size by approximately 10 to 20 samples compared with traditional HPO methods, in half the time. This efficiency improvement is even more pronounced in ASO, where the combined BO and MAB optimization uses about 100 fewer samples than the traditional methods to reach the optimized airfoil design. The thesis then extends to adaptive HPO within the framework of efficient global optimization (EGO) using an NN-based prediction and uncertainty (EGONN) algorithm. It employs BO and MAB optimization to tune HPs during sequential sampling, either every iteration (HPO-1itr) or every five iterations (HPO-5itr). These strategies are evaluated against EGO as a benchmark method.
Through experiments on the analytical three-dimensional Hartmann function and ASO, assessing both comprehensive and selective sets of tunable HPs, the thesis contrasts the adaptive HPO approaches with a static HPO method (HPO-static), which retains the initial HP settings throughout. A comprehensive set of HPs is optimized and evaluated first, followed by an examination of a selectively chosen subset. For the optimization of the three-dimensional Hartmann function, the adaptive HPO strategies outperform HPO-static in both cases, achieving optimal convergence and sample efficiency comparable to EGO. In ASO, the adaptive HPO strategies reduce the baseline airfoil's drag coefficient to 123 drag counts (d.c.) for HPO-1itr and 120 d.c. for HPO-5itr when tuning the full set of HPs. For a selected subset of HPs, HPO-1itr and HPO-5itr achieve 123 d.c. and 121 d.c., respectively, comparable to the minimum achieved by EGO. While HPO-static reduces the drag coefficient to 127 d.c. by tuning a subset of HPs, a 15 d.c. improvement over its full-set case, it falls short of the minima reached by the adaptive HPO strategies. Focusing on a subset of HPs reduces time costs and improves the convergence rate without sacrificing optimization efficiency. The time savings grow with HPO frequency: restricting tuning to the subset cuts the time of HPO-1itr by 66%, HPO-5itr by 38%, and HPO-static by 2%. Even so, HPO-5itr requires only 31% of the time needed by HPO-1itr for full HP tuning and 56% for subset HP tuning.</p>
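The abstract does not detail the BO and MAB implementation. As an illustration of the MAB component alone, a minimal UCB1 sketch over a single discrete HP (the number of hidden layers) might look like the following; the arm values, the noisy reward, and all function names are hypothetical stand-ins for training and validating an NN, not the thesis's actual code.

```python
import math
import random

def ucb1_select(counts, values, t, c=2.0):
    """Pick the arm maximizing the UCB1 score: empirical mean reward plus
    an exploration bonus that shrinks as an arm is pulled more often."""
    for arm, n in enumerate(counts):
        if n == 0:                      # pull every arm once before scoring
            return arm
    scores = [values[a] / counts[a] + math.sqrt(c * math.log(t) / counts[a])
              for a in range(len(counts))]
    return scores.index(max(scores))

def tune_discrete_hp(arms, reward_fn, budget=200, seed=0):
    """Run a UCB1 bandit over discrete HP choices (here: hidden-layer
    counts).  reward_fn(arm) returns a noisy score, standing in for the
    validation accuracy of a cheaply trained NN with that HP setting."""
    random.seed(seed)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)
    for t in range(1, budget + 1):
        a = ucb1_select(counts, values, t)
        counts[a] += 1
        values[a] += reward_fn(arms[a])
    means = [values[a] / counts[a] for a in range(len(arms))]
    return arms[means.index(max(means))], counts

# Toy demo: hidden-layer counts, with 2 layers being (noisily) the best.
layers = [1, 2, 4, 8]
true_score = {1: 0.3, 2: 0.9, 4: 0.6, 8: 0.5}
reward = lambda h: true_score[h] + random.gauss(0.0, 0.05)
best_arm, pulls = tune_discrete_hp(layers, reward)
```

In the thesis's setting the bandit handles discrete architectural choices like this while BO searches the continuous HPs; the sketch above shows only the bandit half.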
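The adaptive tuning schedule (HPO-1itr vs. HPO-5itr vs. HPO-static) inside a sequential-sampling loop can be sketched as follows. The surrogate, the greedy infill rule, and `tune_hps` are deliberately simplified stand-ins, not the EGONN algorithm or the thesis's BO+MAB tuner; only the re-tuning cadence controlled by `hpo_every` reflects the strategies described above.

```python
import random

def train_surrogate(X, y, hps):
    """Stand-in for training the prediction NN: a 1-nearest-neighbour
    model whose HP dict is ignored (illustration only)."""
    def predict(x):
        i = min(range(len(X)), key=lambda j: abs(X[j] - x))
        return y[i]
    return predict

def tune_hps(X, y):
    """Stand-in for the BO+MAB hyperparameter search."""
    return {"layers": 2}

def sequential_optimize(f, lb, ub, n_init=5, n_iter=20, hpo_every=5, seed=0):
    """EGO-style loop: fit a surrogate, add the most promising sample, and
    re-tune HPs every `hpo_every` iterations (1 = HPO-1itr, 5 = HPO-5itr;
    HPO-static would tune once before the loop and never again)."""
    random.seed(seed)
    X = [random.uniform(lb, ub) for _ in range(n_init)]
    y = [f(x) for x in X]
    hps = tune_hps(X, y)                    # initial tuning
    for it in range(n_iter):
        if it % hpo_every == 0:             # adaptive re-tuning schedule
            hps = tune_hps(X, y)
        model = train_surrogate(X, y, hps)
        # Infill: best predicted point among random candidates -- a greedy
        # stand-in for the expected-improvement criterion used by EGO/EGONN.
        cands = [random.uniform(lb, ub) for _ in range(100)]
        x_next = min(cands, key=model)
        X.append(x_next)
        y.append(f(x_next))
    return min(y), X

best, samples = sequential_optimize(lambda x: (x - 1.0) ** 2, -5.0, 5.0,
                                    hpo_every=5)
```

Raising `hpo_every` trades tuning cost for surrogate freshness, which is the trade-off the abstract quantifies: HPO-5itr recovers most of HPO-1itr's optimization quality at a fraction of its wall time.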

DOI: 10.25394/pgs.25684773.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/25684773
Date: 28 April 2024
Creators: Taeho Jeong (18437064)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/AUTOMATED_ADAPTIVE_HYPERPARAMETER_TUNING_FOR_ENGINEERING_DESIGN_OPTIMIZATION_WITH_NEURAL_NETWORK_MODELS/25684773
