
Transfer Learning for Multi-surrogate-model Optimization

Surrogate-model-based optimization is widely used to solve black-box optimization problems when evaluating the target system is expensive. However, when the optimization budget is limited to one or a few evaluations, surrogate-model-based optimization may perform poorly because too little is known about the search space. In this case, transfer learning, which reuses experience from previous optimization runs, helps to obtain a good optimization result. Even when the budget is not strictly limited, transfer learning can improve the final results of black-box optimization.
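To make the classical loop concrete, the following is a minimal sketch of single-surrogate optimization, assuming a Gaussian-process surrogate and a lower-confidence-bound selection criterion; the function and parameter names are illustrative and do not come from the thesis:

```python
# Minimal sketch of a surrogate-model-based optimization loop
# (illustrative only; not the implementation evaluated in the thesis).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_target(x):                         # stand-in for the costly black box
    return (x - 0.3) ** 2 + 0.1 * np.sin(25 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))               # small initial design
y = np.array([expensive_target(x[0]) for x in X])

for _ in range(20):                              # limited evaluation budget
    surrogate = GaussianProcessRegressor().fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 1))      # cheap candidate pool
    mu, sigma = surrogate.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 1.96 * sigma)]  # lower confidence bound
    X = np.vstack([X, x_next])                   # evaluate only the chosen point
    y = np.append(y, expensive_target(x_next[0]))

print("best found:", X[np.argmin(y)], y.min())
```

The sketch makes the budget problem visible: with only the five initial samples and no prior knowledge, the surrogate is essentially guessing, which is exactly the gap that transferred experience is meant to close.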
Recent work in surrogate-model-based optimization has shown that using multiple surrogates (i.e., applying multi-surrogate-model optimization) can be highly efficient in complex search spaces. The main assumption of this thesis is that transfer learning can further improve the quality of multi-surrogate-model optimization. However, to the best of our knowledge, no approaches to transfer learning in the multi-surrogate-model context exist yet.
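As a hedged illustration of the multi-surrogate idea referenced above (a simplification, not the model-combination logic of BRISE described later), one could fit several candidate surrogates on the evaluated points and keep whichever validates best:

```python
# Sketch: choose among several candidate surrogates by cross-validated
# score on the points evaluated so far. Names and model choices are
# illustrative assumptions, not taken from the thesis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

def pick_surrogate(X, y):
    candidates = [
        GaussianProcessRegressor(),
        RandomForestRegressor(n_estimators=50, random_state=0),
        BayesianRidge(),
    ]
    # Score each candidate by cross-validated R^2 on the evaluated samples.
    scores = [cross_val_score(m, X, y, cv=3).mean() for m in candidates]
    best = candidates[int(np.argmax(scores))]
    return best.fit(X, y)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(12, 2))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1])
model = pick_surrogate(X, y)
```

Different surrogate families suit different regions and shapes of a search space, which is why selecting or combining them dynamically can outperform committing to a single model up front.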
In this thesis, we propose an approach to transfer learning for multi-surrogate-model optimization. It encompasses an improved method for determining the expediency of knowledge transfer, adapted multi-surrogate-model recommendation, multi-task learning parameter tuning, and few-shot learning techniques. We evaluated the proposed approach on a set of algorithm selection and parameter setting problems, comprising mathematical function optimization and the traveling salesman problem, as well as random forest hyperparameter tuning on OpenML datasets. The evaluation shows that the proposed approach improves the quality delivered by multi-surrogate-model optimization and yields good optimization results even under a strictly limited budget.

1 Introduction
1.1 Motivation
1.2 Research objective
1.3 Solution overview
1.4 Thesis structure
2 Background
2.1 Optimization problems
2.2 From single- to multi-surrogate-model optimization
2.2.1 Classical surrogate-model-based optimization
2.2.2 The purpose of multi-surrogate-model optimization
2.2.3 BRISE 2.5.0: Multi-surrogate-model-based software product line for parameter tuning
2.3 Transfer learning
2.3.1 Definition and purpose of transfer learning
2.4 Summary of the Background
3 Related work
3.1 Questions of transfer learning
3.2 When to transfer: Existing approaches to determining the expediency of knowledge transfer
3.2.1 Meta-features-based approaches
3.2.2 Surrogate-model-based similarity
3.2.3 Relative landmarks-based approaches
3.2.4 Sampling landmarks-based approaches
3.2.5 Similarity threshold problem
3.3 What to transfer: Existing approaches to knowledge transfer
3.3.1 Ensemble learning
3.3.2 Search space pruning
3.3.3 Multi-task learning
3.3.4 Surrogate model recommendation
3.3.5 Few-shot learning
3.3.6 Other approaches to transferring knowledge
3.4 How to transfer (discussion): Peculiarities and required design decisions for the TL implementation in a multi-surrogate-model setup
3.4.1 Peculiarities of model recommendation in a multi-surrogate-model setup
3.4.2 Required design decisions in multi-task learning
3.4.3 Few-shot learning problem
3.5 Summary of the related work analysis
4 Transfer learning for multi-surrogate-model optimization
4.1 Expediency of knowledge transfer
4.1.1 Experiments’ similarity definition as a variability point
4.1.2 Clustering to filter the most suitable experiments
4.2 Dynamic model recommendation in a multi-surrogate-model setup
4.2.1 Variable recommendation granularity
4.2.2 Model recommendation by time and performance criteria
4.3 Multi-task learning
4.4 Implementation of the proposed concept
4.5 Conclusion of the proposed concept
5 Evaluation
5.1 Benchmark suite
5.1.1 APSP for the meta-heuristics
5.1.2 Hyperparameter optimization of the Random Forest algorithm
5.2 Environment setup
5.3 Evaluation plan
5.4 Baseline evaluation
5.5 Meta-tuning for a multi-task learning approach
5.5.1 Revealing the dependencies between the parameters of multi-task learning and its performance
5.5.2 Multi-task learning performance with the best found parameters
5.6 Expediency determination approach
5.6.1 Expediency determination as a variability point
5.6.2 Flexible number of the most similar experiments with the help of clustering
5.6.3 Influence of the number of initial samples on the quality of expediency determination
5.7 Multi-surrogate-model recommendation
5.8 Few-shot learning
5.8.1 Transfer of the built surrogate models’ combination
5.8.2 Transfer of the best configuration
5.8.3 Transfer from different experiment instances
5.9 Summary of the evaluation results
6 Conclusion and Future work

Identifier: oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:73313
Date: 14 January 2021
Creators: Gvozdetska, Nataliia
Contributors: Pukhkaiev, Dmytro, Götz, Sebastian, Aßmann, Uwe, Technische Universität Dresden
Source Sets: Hochschulschriftenserver (HSSS) der SLUB Dresden
Language: English
Type: info:eu-repo/semantics/publishedVersion, doc-type:masterThesis, info:eu-repo/semantics/masterThesis, doc-type:Text
Rights: info:eu-repo/semantics/openAccess
