  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
1

Automatic Instance-based Tailoring of Parameter Settings for Metaheuristics

Dobslaw, Felix January 2011 (has links)
Many industrial problems in various fields, such as logistics, process management, or product design, can be formalized and expressed as optimization problems in order to make them solvable by optimization algorithms. However, solvers that guarantee finding optimal solutions (complete solvers) can in practice be unacceptably slow. This is one of the reasons why approximative (incomplete) algorithms, which produce near-optimal solutions under restrictions (most prominently time), are of vital importance. These approximative algorithms go under the umbrella term metaheuristics, each of which is more or less suitable for particular optimization problems. Metaheuristics are flexible solvers that only require a representation for solutions and an evaluation function when searching the solution space for optimality. What all metaheuristics have in common is that their search is guided by certain control parameters. These parameters have to be set manually by the user and are generally problem-dependent and interdependent: a setting producing near-optimal results for one problem is likely to perform worse for another. Automating the parameter setting process in a sophisticated, computationally cheap, and statistically reliable way is challenging and has received a significant amount of attention in the artificial intelligence and operational research communities. This activity has not yet produced any major breakthroughs concerning the utilization of problem instance knowledge or the employment of dynamic algorithm configuration. The thesis promotes automated parameter optimization with reference to the adverse impact of problem instance diversity on the quality of parameter settings for instance-algorithm pairs. It further emphasizes the similarities between static and dynamic algorithm configuration and related problems in order to show how they relate to each other. It then proposes two frameworks for instance-based algorithm configuration and evaluates them experimentally.
The first is a recommender system for static configurations, combining experimental design and machine learning. The second framework can be used for static or dynamic configuration, taking advantage of the iterative nature of population-based algorithms, a very important sub-class of metaheuristics. A straightforward implementation of the first framework did not result in the expected improvements, presumably because of pre-stabilization issues. The second approach shows competitive results when compared to a state-of-the-art model-free configurator, reducing the training time by more than two orders of magnitude.
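The recommender idea behind the first framework can be sketched, very roughly, as a nearest-neighbour lookup: map an instance's features to the best-known setting of the most similar instance tuned offline. All feature names, values and settings below are hypothetical illustrations, not taken from the thesis:

```python
import math

# Hypothetical records from offline tuning: instance feature vectors
# (e.g. size, constraint density) mapped to the parameter setting that
# performed best on that instance. All names and numbers are illustrative.
tuned = {
    (10.0, 0.2): {"mutation_rate": 0.05, "population": 50},
    (200.0, 0.8): {"mutation_rate": 0.20, "population": 200},
    (50.0, 0.5): {"mutation_rate": 0.10, "population": 100},
}

def recommend(features):
    """Recommend the setting of the most similar already-tuned instance."""
    nearest = min(tuned, key=lambda f: math.dist(f, features))
    return tuned[nearest]

setting = recommend((45.0, 0.4))
```

A real system would replace the Euclidean distance with a learned model over richer instance features.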
2

Parameter Tuning for Optimization Software

Koripalli, RadhaShilpa 06 August 2012 (has links)
Mixed integer programming (MIP) problems are highly parameterized, and finding parameter settings that achieve high performance for specific types of MIP instances is challenging. This paper presents a method that uses designed experiments and statistical models to learn how CPLEX solver parameter settings perform for different classes of mixed integer linear programs. Fitting a model through design of experiments helps locate the optimal region across all combinations of parameter settings. The study involves recognizing the parameter settings that result in the best performance for a specific class of instances. Choosing good settings has a large effect in minimizing the solution time and the optimality gap.
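The designed-experiments idea can be sketched as a full-factorial sweep over parameter levels with one response measurement per combination. The parameter names and the timing function below are made-up stand-ins, not actual CPLEX parameters:

```python
from itertools import product

# Hypothetical two-parameter design; real CPLEX parameters would be set
# through its API, these levels are illustrative stand-ins.
levels = {
    "mip_emphasis": [0, 1, 2],
    "cuts": ["off", "moderate", "aggressive"],
}

def mock_solve_time(emphasis, cuts):
    """Stand-in for timing the solver on a benchmark instance."""
    cut_penalty = {"off": 2.0, "moderate": 0.5, "aggressive": 1.0}[cuts]
    return 10.0 + 3.0 * abs(emphasis - 1) + cut_penalty

# Full-factorial design: measure every combination of levels once,
# then pick the combination with the best (lowest) response.
results = {
    (e, c): mock_solve_time(e, c)
    for e, c in product(levels["mip_emphasis"], levels["cuts"])
}
best = min(results, key=results.get)
```

Fitting a regression model to `results` instead of simply taking the minimum is what lets the paper's approach predict performance at untested combinations.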
3

Optimal Parameter Setting of Single and Multi-task LASSO

Huiting Su (5930882) 04 January 2019 (has links)
This thesis considers the problem of feature selection when the number of predictors is larger than the number of samples. The performance of supersaturated designs (SSDs) combined with the least absolute shrinkage and selection operator (LASSO) is studied in this setting. To achieve higher feature selection correctness, self-voting LASSO is implemented to select the tuning parameter while approximately optimizing the probability of achieving Sign Correctness. Furthermore, we derive the probability of achieving Direction Correctness and extend self-voting LASSO to multi-task self-voting LASSO, which has a group screening effect across multiple tasks.
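For intuition on why the tuning parameter governs sign correctness: under an orthonormal design, each LASSO coefficient is the soft-thresholded least-squares estimate, so the tuning parameter directly decides which coefficients survive and with which signs. A minimal sketch (the OLS values are made up):

```python
def soft_threshold(z, lam):
    """LASSO soft-thresholding operator: shrinks z toward zero and
    sets it exactly to zero when |z| <= lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Under an orthonormal design the LASSO solution is the soft-thresholded
# OLS estimate: lam must be large enough to zero out noise coefficients
# but small enough to keep true signals with their correct signs.
ols = [2.5, -0.3, 0.0, 1.5]          # hypothetical OLS estimates
lasso = [soft_threshold(b, 0.5) for b in ols]
```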
4

Automatic Algorithm Configuration: Analysis, Improvements and Applications

Perez Caceres, Leslie 23 November 2017 (has links)
Technology has a major role in today's world. The development of, and massive access to, information technology has enabled the use of computers to provide assistance on a wide range of tasks, from the most trivial daily ones to the most complex challenges we face as humankind. In particular, optimisation algorithms assist us in taking decisions, improving processes and designing solutions, and they are successfully applied in several contexts such as industry, health and entertainment. The design and development of effective and efficient computational algorithms is thus a need in modern society.

Developing effective and efficient optimisation algorithms is an arduous task that includes designing and testing several algorithmic components and schemes, and requires considerable expertise. During the design of an algorithm, the developer defines parameters that can be used to further adjust the algorithm's behaviour depending on the particular application. Setting appropriate values for the parameters of an algorithm can greatly improve its performance. This way, most high-performing algorithms define parameter settings that are "finely tuned", typically by experts, for a particular problem or execution condition.

The process of finding high-performing parameter settings, called algorithm configuration, is commonly a challenging, tedious, time-consuming and computationally expensive task that hinders the application and design of algorithms. Nevertheless, the algorithm configuration process can be modelled as an optimisation problem itself, and optimisation techniques can be applied to provide high-performing configurations. The use of automated algorithm configuration procedures, called configurators, allows obtaining high-performing algorithms without requiring expert knowledge, and it enables the design of more flexible algorithms by easing the definition of design choices as parameters to be set.

Ultimately, automated algorithm configuration could be used to fully automatise the algorithm development process, providing algorithms tailored to the problem to be solved. The aim of the work presented in this thesis is to study the automated configuration of algorithms. To do so, we formally define the algorithm configuration problem and analyse its characteristics. We study the most prominent algorithm configuration procedures and identify relevant configuration techniques and their applicability. We contribute to the field by proposing and analysing several configuration procedures, the most prominent of these being the irace configurator. This work presents and studies several modifications of the configuration process implemented by irace, which considerably improve the performance of irace and broaden its applicability. In a general context, we provide insights about the characteristics of the algorithm configuration process and techniques by performing several analyses configuring different types of algorithms under varied situations. Finally, we provide practical examples of the usage of automated configuration techniques, showing their benefits and further uses for the application and design of efficient and effective algorithms. / Doctorat en Sciences de l'ingénieur et technologie (Doctorate in Engineering Sciences and Technology)
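The racing idea at the core of configurators such as irace can be sketched very roughly: sample candidate configurations, evaluate them instance by instance, and discard the worst performers as evidence accumulates. The sketch below uses a deterministic toy cost and plain worst-elimination in place of the statistical tests and resampling irace actually performs:

```python
import random

random.seed(3)

def cost(config, instance):
    """Deterministic stand-in for the target algorithm's cost on one
    instance; lower is better, with the optimum at config = 0.7."""
    return (config - 0.7) ** 2 + 0.01 * instance

candidates = [random.uniform(0.0, 1.0) for _ in range(8)]

# Simplified race: evaluate all survivors on each instance and drop the
# worst survivor; irace instead eliminates candidates via statistical
# tests and iteratively resamples new candidates around the elites.
totals = {c: 0.0 for c in candidates}
survivors = list(candidates)
for instance in range(10):
    for c in survivors:
        totals[c] += cost(c, instance)
    if len(survivors) > 2:
        survivors.remove(max(survivors, key=lambda c: totals[c]))

best = min(survivors, key=lambda c: totals[c])
```

The key saving over brute force is that poor configurations are abandoned after a few instances instead of being run on all of them.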
5

Parameter self-tuning in internet congestion control

Chen, Wu January 2010 (has links)
Active Queue Management (AQM) aims to achieve high link utilization, low queuing delay and a low loss rate in routers. However, it is difficult to adapt AQM parameters so that they constantly provide desirable transient and steady-state performance under highly dynamic network scenarios. A trade-off needs to be made between queuing delay and utilization. The queue size can become unstable when the round-trip time or link capacity increases, or unnecessarily large when the round-trip time or link capacity decreases. Finding effective ways of adapting AQM parameters to obtain good performance has remained a critical unsolved problem for the last fifteen years. This thesis first investigates existing AQM algorithms and their performance. Based on a previously developed dynamic model of TCP behaviour and a linear feedback model of TCP/RED, Auto-Parameterization RED (AP-RED) is proposed, which unveils the mechanism of adapting RED parameters according to measurable network conditions. Another algorithm, Statistical Tuning RED (ST-RED), is developed for systematically tuning four key RED parameters to control local stability in response to detected changes in the variance of the queue size. Under variable network scenarios, such as changing round-trip time, link capacity and traffic load, no manual parameter configuration is needed. The proposed ST-RED can adjust the corresponding parameters rapidly to maintain stable performance and keep queuing delay as low as possible, thus removing the sensitivity of RED's performance to different network scenarios. The Statistical Tuning algorithm can also be applied to a PI controller for AQM, and a Statistical Tuning PI (ST-PI) controller is developed as well. The implementation of ST-RED and ST-PI is relatively straightforward. Simulation results demonstrate the feasibility of ST-RED and ST-PI and their ability to provide desirable transient and steady-state performance under extensively varying network conditions.
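For context, the RED parameters that ST-RED tunes enter through the classic RED control law: an exponentially weighted moving average of the queue size and a piecewise-linear drop probability between two thresholds. A minimal sketch with illustrative threshold values:

```python
def red_drop_probability(avg, min_th, max_th, max_p):
    """Classic RED piecewise-linear dropping probability; min_th, max_th
    and max_p are exactly the kind of parameters ST-RED adapts online."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

def ewma_queue(avg, sampled_queue, weight=0.002):
    """RED's exponentially weighted moving average of the queue size."""
    return (1.0 - weight) * avg + weight * sampled_queue

# With avg = 75 packets between thresholds 50 and 100, drop probability
# ramps linearly to half of max_p.
p = red_drop_probability(75.0, 50.0, 100.0, 0.1)
```

Manually fixing `min_th`, `max_th`, `max_p` and `weight` is precisely what makes plain RED sensitive to network conditions; ST-RED's contribution is adjusting them statistically.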
6

Automated Selection of Mixed Integer Program Solver Parameters

Stewart, Charles 30 April 2010 (has links)
This paper presents a method that uses designed experiments and statistical models to extract information about how solver parameter settings perform for classes of mixed integer programs. The use of experimental design facilitates fitting a model that describes the response surface across all combinations of parameter settings, even those not explicitly tested, allowing identification of both desirable and poor settings. Parameter settings identified as giving the best expected performance for a specific class of instances and a specific solver can be used to solve a large set of similar instances more efficiently, or to ensure solvers are being compared at their best.
7

Ajuste de parâmetros de técnicas de classificação por algoritmos bioinspirados / Bioinspired parameter tuning of classifiers

Rossi, André Luis Debiaso 01 April 2009 (has links)
Machine learning is a research area whose main goal is to design computational systems capable of learning through experience. Many machine learning techniques have free parameters whose values are generally defined by the user. Usually, these values directly affect the knowledge acquisition process, resulting in different models.
Recently, bioinspired optimization algorithms have been successfully applied to the parameter tuning of machine learning techniques. These techniques may be differently sensitive to the values chosen for their parameters, and different parameter tuning algorithms may behave differently. This thesis investigates the use of bioinspired algorithms for tuning the parameters of artificial neural networks and support vector machines in classification problems. The goal is to determine which techniques benefit most from parameter tuning and which algorithms are most efficient for these techniques. Experimental results show that the bioinspired algorithms can find better classifiers than other approaches; however, the improvement is statistically significant only for some datasets. Using the default parameter values of the classification techniques often leads to performance similar to that obtained with the bioinspired algorithms, but for some datasets parameter tuning can significantly improve a classifier's performance.
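The bioinspired tuning loop can be sketched, at its simplest, as a (1+1) evolutionary strategy mutating a single hyperparameter. The fitness function below is a made-up stand-in for cross-validated accuracy, not a real classifier:

```python
import random

random.seed(0)

def fitness(c):
    """Made-up stand-in for cross-validated accuracy of a classifier
    with regularization parameter c; peaks at c = 2.0."""
    return -(c - 2.0) ** 2

# Minimal (1+1) evolutionary loop: mutate the current setting and keep
# the child only if it improves fitness.
c = random.uniform(0.0, 5.0)
for _ in range(200):
    child = c + random.gauss(0.0, 0.3)
    if fitness(child) > fitness(c):
        c = child
```

Real bioinspired tuners (genetic algorithms, particle swarms) maintain populations of candidate settings, but the evaluate-mutate-select cycle is the same; the expensive step is that each fitness call trains and validates a model.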
9

Simultaneously searching with multiple algorithm settings: an alternative to parameter tuning for suboptimal single-agent search

Valenzano, Richard 11 1900 (has links)
Many single-agent search algorithms have parameters that need to be tuned. Although settings found by offline tuning will exhibit strong average performance, properly selecting parameter settings for each problem can result in substantially reduced search effort. We consider the use of dovetailing as a way to deal with this issue. This procedure performs search with multiple parameter settings simultaneously. We present results testing the use of dovetailing with the weighted A*, weighted IDA*, weighted RBFS, and BULB algorithms on the sliding tile and pancake puzzle domains. Dovetailing will be shown to significantly improve weighted IDA*, often by several orders of magnitude, and generally enhance weighted RBFS. In the case of weighted A* and BULB, dovetailing will be shown to be an ineffective addition to these algorithms. A trivial parallelization of dovetailing will also be shown to decrease the search time in all considered domains.
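The dovetailing procedure itself is simple to sketch: advance the search for each parameter setting by one unit of effort per round and stop as soon as any setting solves the problem. The toy "search" below is a stand-in for a real algorithm such as weighted IDA* run with different weights:

```python
def toy_search(step):
    """Toy 'search' that reaches the goal (100) in fewer expansions for
    larger step sizes; a stand-in for one parameter setting of a real
    algorithm such as weighted IDA* with a particular weight."""
    x = 0
    while x < 100:
        x += step
        yield None              # one unit of search effort, no solution yet
    yield ("goal", step)        # solved; report which setting succeeded

def dovetail(searches):
    """Advance every search one step per round; return the first solution
    found and the total effort spent across all settings."""
    effort = 0
    while True:
        for s in searches:
            effort += 1
            outcome = next(s)
            if outcome is not None:
                return outcome, effort

result, effort = dovetail([toy_search(w) for w in (3, 7, 25)])
```

The total effort is at most (number of settings) times the effort of the best setting, which is why dovetailing pays off exactly when settings differ by orders of magnitude, as reported for weighted IDA*.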
10

Association Rules in Parameter Tuning for Experimental Designs

Hållén, Henrik January 2014 (has links)
The objective of this thesis was to investigate the possibility of using association rule algorithms to automatically generate rules from the output of a parameter tuning framework. The rules would be the basis for a recommendation to the user regarding which parameter space to reduce during experimentation. The parameter tuning output was generated by means of an example program from the open source project InPUT. InPUT is a tool used to describe computer experiment configurations in a framework-independent input/output format; it has adapters for the evolutionary algorithm framework Watchmaker and the tuning framework SPOT. The output was imported into R and preprocessed to a format suitable for association rule algorithms. Experiments were conducted on data for which the parameter spaces were discretized in 2, 5 and 10 steps. The minimum support threshold was set to 1% and 3% to investigate the number of rules over time. The Apriori and Eclat algorithms produced exactly the same number of rules, and the top 5 rules with regard to support were essentially the same for both algorithms. It was not possible at the time to automatically distinguish useful rules. In combination with the many manual decisions needed during the process of converting the tuning output to association rules, the conclusion was reached not to recommend association rules for enhancing the parameter tuning process.
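The support-counting step shared by Apriori and Eclat can be sketched on toy transactions; the items below are made-up stand-ins for discretized parameter ranges and outcomes, not the thesis's actual data:

```python
from itertools import combinations

# Toy transactions standing in for discretized tuning output: each
# transaction pairs parameter-range items with an outcome item.
transactions = [
    {"mut=low", "pop=high", "good"},
    {"mut=low", "pop=high", "good"},
    {"mut=high", "pop=low", "bad"},
    {"mut=low", "pop=low", "good"},
]

def frequent_itemsets(data, min_support, size):
    """Count itemsets of a given size meeting the support threshold
    (the counting step shared by Apriori and Eclat, done naively here)."""
    items = sorted({i for t in data for i in t})
    frequent = {}
    for cand in combinations(items, size):
        support = sum(set(cand) <= t for t in data) / len(data)
        if support >= min_support:
            frequent[cand] = support
    return frequent

pairs = frequent_itemsets(transactions, 0.5, 2)
```

Apriori and Eclat differ only in how they traverse and store candidates (breadth-first horizontal vs depth-first vertical), which is why both produce identical rule sets for the same thresholds, as the thesis observed.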
