This thesis presents methods for minimizing the computational effort of problem solving. Rather than looking at a particular algorithm, we consider the issue of computational complexity at a higher level, and propose techniques that, given a set of candidate algorithms of unknown performance, learn to use these algorithms while solving a sequence of problem instances, with the aim of solving all instances in minimum time. An analogous meta-level approach to problem solving has been adopted in many different fields, with different aims and terminology. A widely accepted term for it is algorithm selection. Algorithm portfolios represent a more general framework, in which computation time is allocated to a set of algorithms running on one or more processors.

Automating algorithm selection is an old dream of the AI community, which has been brought closer to reality in the last decade. Most available selection techniques are based on a model of algorithm performance, assumed to be available or learned during a separate offline training sequence, which is often prohibitively expensive. The model is used to perform a static allocation of resources, with no feedback from the actual execution of the algorithms. There is thus a trade-off between the performance of model-based selection and the cost of learning the model. In this thesis, we formulate this trade-off as a bandit problem.

We propose GambleTA, a fully dynamic and online algorithm portfolio selection technique with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to optimize the machine time shares of the algorithms, in order to minimize runtime. A bandit problem solver picks the allocator to use on each instance, gradually increasing the impact of the best time allocators as the model improves. A similar approach is adopted for learning restart strategies online (GambleR). In both cases, the runtime distributions are modeled using survival analysis techniques; unsuccessful runs are correctly treated as censored runtime observations, saving further computation time.

The proposed methods are validated in several experiments, mostly based on data from solver competitions. They display robust performance in a variety of settings, and show that even rough performance models allow resources to be allocated efficiently, reducing the risk of wasting computation time.

Permanent URL: http://doc.rero.ch/record/20245
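To make the two key ingredients of the abstract concrete, the Python sketch below pairs a plain Kaplan-Meier estimator of the runtime survival function (timed-out runs enter as censored observations) with an EXP3-style bandit choosing among candidate time allocators. This is an illustration only, not code from the thesis: the class and function names, the choice of EXP3 as the bandit solver, and the reward mapping suggested afterwards are all assumptions made for the sketch.

```python
import math
import random

def kaplan_meier(observations):
    """Kaplan-Meier estimate of the runtime survival function S(t).

    observations: (runtime, finished) pairs; finished is False for runs
    cut off by the allocator, which count only as censored observations.
    Returns the survival curve as a list of (t, S(t)) steps.
    """
    at_risk = len(observations)
    curve, s = [], 1.0
    for t in sorted(set(t for t, _ in observations)):
        finished_at_t = sum(1 for u, done in observations if u == t and done)
        if finished_at_t:
            s *= 1.0 - finished_at_t / at_risk  # standard KM product term
            curve.append((t, s))
        # Every run ending at t (finished or censored) leaves the risk set.
        at_risk -= sum(1 for u, _ in observations if u == t)
    return curve

class Exp3:
    """EXP3 bandit over k arms; here each arm is a candidate time allocator."""

    def __init__(self, k, gamma=0.1):
        self.k, self.gamma = k, gamma
        self.weights = [1.0] * k

    def probs(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.k
                for w in self.weights]

    def pick(self):
        return random.choices(range(self.k), weights=self.probs())[0]

    def update(self, arm, reward):
        # Importance-weighted estimate keeps the expected reward unbiased
        # even though only the chosen arm's reward is observed.
        est = reward / self.probs()[arm]
        self.weights[arm] *= math.exp(self.gamma * est / self.k)
```

On each instance one would call `pick()` to choose an allocator, run the portfolio under its time shares, map the observed runtime into a reward in [0, 1] (for instance the fraction of the time budget saved, again an assumption here), and feed it back via `update()`; the bandit solver and reward definition actually used in the thesis may differ.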
Identifier | oai:union.ndltd.org:ulb.ac.be/oai:dipot.ulb.ac.be:2013/250787 |
Date | 24 March 2010 |
Creators | Gagliolo, Matteo |
Contributors | Schmidhuber, Juergen, Lanza, Michele, Hauswirth, Matthias, Pedone, Fernando, Birattari, Mauro, Gomes, Carla, Gomez, Faustino |
Publisher | Università della Svizzera italiana, Lugano, Switzerland |
Source Sets | Université libre de Bruxelles |
Language | English |
Detected Language | English |
Type | info:eu-repo/semantics/doctoralThesis, info:ulb-repo/semantics/doctoralThesis, info:ulb-repo/semantics/openurl/vlink-dissertation |
Format | 209 p., No full-text files |