  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Adaptivní algoritmy matchmakingu pro výpočetní multi-agentní systémy / Adaptive Matchmaking Algorithms for Computational Multi-Agent Systems

Kazík, Ondřej January 2014 (has links)
Multi-agent systems (MAS) have proven their suitability for the implementation of complex software systems. In this work, we have analyzed and designed a data mining MAS by means of a role-based organizational model. The organizational model and the model of data mining methods have been formalized in description logic. By matchmaking, the main subject of our research, we understand the recommendation of computational agents, i.e., agents encapsulating some computational method, according to their capabilities and previous performance. Matchmaking thus consists of two parts: querying the ontology model and meta-learning. Three meta-learning scenarios were tested: optimization in the parameter space, multi-objective optimization of data mining processes, and method recommendation. A set of experiments in these areas has been performed.
22

Insights into Model-Agnostic Meta-Learning on Reinforcement Learning Tasks

Saitas-Zarkias, Konstantinos January 2021 (has links)
Meta-learning has been gaining traction in the Deep Learning field as an approach to building models that are able to adapt efficiently to new tasks after deployment. Contrary to conventional Machine Learning approaches, which are trained on a specific task (e.g., image classification on one set of labels), meta-learning methods are meta-trained across multiple tasks (e.g., image classification across multiple sets of labels). Their end objective is to learn how to solve unseen tasks with just a few samples. One of the most renowned methods of the field is Model-Agnostic Meta-Learning (MAML). The objective of this thesis is to supplement the latest relevant research with novel observations regarding the capabilities, limitations, and network dynamics of MAML. To this end, experiments were performed on the meta-reinforcement learning benchmark Meta-World. Additionally, a comparison with a recent variation of MAML, called Almost No Inner Loop (ANIL), was conducted, providing insights into the changes of the network's representation during adaptation (meta-testing). The results of this study indicate that MAML is able to outperform the baselines on the challenging Meta-World benchmark but shows few signs of actual "rapid learning" during meta-testing, thus supporting the hypothesis that it reuses features learnt during meta-training.
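To make the bi-level structure concrete, the first-order flavour of MAML can be sketched on a toy family of linear-regression tasks. The model, step sizes, and task distribution below are illustrative choices for the sketch, not the thesis's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, X, y):
    # squared error and its gradient for the linear model y_hat = w * x
    err = w * X - y
    return np.mean(err ** 2), 2 * np.mean(err * X)

def sample_task():
    # each task is 1-D regression with its own slope
    slope = rng.uniform(-2.0, 2.0)
    X = rng.uniform(-1.0, 1.0, size=20)
    return X, slope * X

w_meta, inner_lr, meta_lr = 0.0, 0.1, 0.05   # illustrative hyperparameters

for _ in range(300):                          # meta-training loop
    meta_grad = 0.0
    for _ in range(5):                        # batch of tasks per meta-step
        X, y = sample_task()
        _, g = loss_grad(w_meta, X, y)
        w_task = w_meta - inner_lr * g        # inner loop: adapt to the task
        _, g_post = loss_grad(w_task, X, y)   # gradient at the adapted params
        meta_grad += g_post                   # first-order MAML approximation
    w_meta -= meta_lr * meta_grad / 5         # outer loop: meta-update
```

At meta-test time a new task is learned by running only the inner loop from `w_meta`; whether improvement then comes from rapid adaptation or from reused features is exactly the question the thesis probes.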
23

Using Instance-Level Meta-Information to Facilitate a More Principled Approach to Machine Learning

Smith, Michael Reed 01 April 2015 (has links) (PDF)
As the capability for capturing and storing data increases and becomes more ubiquitous, an increasing number of organizations are looking to use machine learning techniques as a means of understanding and leveraging their data. However, the success of applying machine learning techniques depends on which learning algorithm is selected, the hyperparameters that are provided to it, and the data that is supplied to it. Even among machine learning experts, selecting an appropriate learning algorithm, setting its associated hyperparameters, and preprocessing the data can be challenging tasks, generally left to the expertise of an experienced practitioner, intuition, trial and error, or another heuristic approach. This dissertation proposes a more principled approach to understanding how the learning algorithm, hyperparameters, and data interact with each other, to facilitate a data-driven approach to applying machine learning techniques. Specifically, this dissertation examines the properties of the training data and proposes techniques to integrate this information into the learning process and into the preprocessing of the training set. It also proposes techniques and tools for selecting a learning algorithm and setting its hyperparameters. This dissertation comprises a collection of papers that address understanding the data used in machine learning and the relationship between the data, the performance of a learning algorithm, and the learning algorithm's associated hyperparameter settings. Contributions of this dissertation include:
* Instance hardness, which examines how difficult an instance is to classify correctly.
* Hardness measures that characterize properties of why an instance may be misclassified.
* Several techniques for integrating instance hardness into the learning process. These techniques demonstrate the importance of considering each instance individually rather than performing a global optimization that considers all instances equally.
* Large-scale examinations of the investigated techniques, covering a large number of data sets and learning algorithms. This provides more robust results that are less likely to be affected by noise.
* The Machine Learning Results Repository, a repository for storing the results of machine learning experiments at the instance level (the prediction for each instance is stored). This allows many data set-level measures, such as accuracy, precision, or recall, to be calculated. These results can be used to better understand the interaction between the data, learning algorithms, and associated hyperparameters. Further, the repository is designed to be a tool for the community, where data can be downloaded and uploaded to follow the development of machine learning algorithms and applications.
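The notion of instance hardness can be approximated with a simple proxy: the fraction of a pool of classifiers that misclassify an instance under leave-one-out evaluation. The pool of k-NN classifiers and the deliberately mislabelled point below are illustrative, not the dissertation's actual measure set:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy 1-D two-class data: class 0 centred at -1, class 1 at +1,
# plus one deliberately mislabelled point that should score as "hard"
X = np.concatenate([rng.normal(-1, 0.3, 30), rng.normal(1, 0.3, 30), [1.0]])
y = np.concatenate([np.zeros(30), np.ones(30), [0.0]])  # last label is wrong

def knn_predict(k, x_query, X_train, y_train):
    # majority vote among the k nearest training points
    idx = np.argsort(np.abs(X_train - x_query))[:k]
    return float(np.mean(y_train[idx]) >= 0.5)

def instance_hardness(i, ks=(1, 3, 5, 7, 9)):
    # fraction of leave-one-out k-NN classifiers that misclassify instance i
    mask = np.arange(len(X)) != i
    preds = [knn_predict(k, X[i], X[mask], y[mask]) for k in ks]
    return float(np.mean([p != y[i] for p in preds]))
```

The mislabelled point sits deep inside the opposite class and is misclassified by every classifier in the pool, while a typical point near its own class centre scores close to zero.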
24

Learning to Learn Multi-party Learning: From Both Distributed and Decentralized Perspectives

Ji, Jinlong 07 September 2020 (has links)
No description available.
25

Task Distillation: Transforming Reinforcement Learning into Supervised Learning

Wilhelm, Connor 12 October 2023 (has links) (PDF)
Recent work in dataset distillation focuses on distilling supervised classification datasets into smaller, synthetic supervised datasets in order to reduce the per-model cost of training, to provide interpretability, and to anonymize data. Distillation and its benefits can be extended to a wider array of tasks. We propose a generalization of dataset distillation, which we call task distillation. Using techniques similar to those used in dataset distillation, any learning task can be distilled into a compressed synthetic task. Task distillation allows for transmodal distillations, where a task of one modality is distilled into a synthetic task of another modality, allowing a more complex learning task, such as a reinforcement learning environment, to be reduced to a simpler learning task, such as supervised classification. To advance task distillation beyond supervised-to-supervised distillation, we explore distilling reinforcement learning environments into supervised learning datasets. We propose a new distillation algorithm that allows PPO to be used to distill a reinforcement learning environment. We demonstrate k-shot learning on distilled cart-pole to show the effectiveness of our distillation algorithm and to explore distillation generalization. We distill multi-dimensional cart-pole environments to their minimum-sized distillations and show that this matches the theoretical minimum number of data instances required to teach each task. We demonstrate how a distilled task can be used as an interpretability artifact, as it compactly represents everything needed to learn the task. We demonstrate the feasibility of distillation in more complex Atari environments by fully distilling Centipede and showing that distillation is cheaper than training directly on Centipede when training more than 9 models. We also provide a method to "partially" distill more complex environments, demonstrate it on Ms. Pac-Man, Pong, and Space Invaders, and show how its distillation difficulty scales relative to the full distillation of Centipede.
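The supervised endpoint of such a distillation can be illustrated with a toy example: a handful of hypothetical "distilled" cart-pole states on which a small classifier is trained few-shot. These four instances and the balancing rule they encode are invented for illustration; they are not the thesis's distilled data:

```python
import numpy as np

# hypothetical "distilled" cart-pole data: four synthetic states whose
# labels encode the rule "push toward the side the pole leans to"
# state = (cart_pos, cart_vel, pole_angle, pole_vel); action 1 = push right
X = np.array([[ 0.0, 0.0,  0.10,  0.0],
              [ 0.0, 0.0, -0.10,  0.0],
              [ 0.1, 0.0,  0.05,  0.1],
              [-0.1, 0.0, -0.05, -0.1]])
y = np.array([1.0, 0.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# few-shot training: logistic regression fit on just these four instances
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * X.T @ (p - y) / len(y)   # gradient step on the log-loss
    b -= 0.5 * np.mean(p - y)

def policy(state):
    # the learned classifier acts as a control policy on unseen states
    return int(sigmoid(state @ w + b) > 0.5)
```

The point of the sketch is the workflow, not the policy: once an environment has been distilled into a few labelled instances, learning it reduces to ordinary supervised fitting on those instances.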
26

Sharing to learn and learning to share : Fitting together metalearning and multi-task learning

Upadhyay, Richa January 2023 (has links)
This thesis focuses on integrating learning paradigms that 'share to learn,' i.e., Multi-task Learning (MTL), and 'learn (how) to share,' i.e., meta learning. MTL involves learning several tasks simultaneously within a shared network structure so that the tasks can mutually benefit each other's learning. Meta learning, better known as 'learning to learn,' is an approach to reducing the amount of time and computation required to learn a novel task by leveraging knowledge accumulated over the course of numerous training episodes of various tasks. The learning process in the human brain is innate and natural; even before birth, it is capable of learning and memorizing. As a consequence, humans do not learn everything from scratch, and because they are naturally capable of effortlessly transferring their knowledge between tasks, they quickly learn new skills. Humans naturally tend to believe that similar tasks have (somewhat) similar solutions or approaches, so sharing knowledge from a previous activity makes it feasible to learn a new task quickly in a few tries. For instance, the skills acquired while learning to ride a bike are helpful when learning to ride a motorbike, which is, in turn, helpful when learning to drive a car. This natural learning process, which involves sharing information between tasks, has inspired several research areas in Deep Learning (DL), such as transfer learning, MTL, meta learning, Lifelong Learning (LL), and many more, that aim to create similarly capable algorithms. These information-sharing algorithms exploit the knowledge gained from one task to improve the performance of another related task. However, they vary in terms of what information they share, when they share it, and why they share it. This thesis focuses particularly on MTL and meta learning, and presents a comprehensive explanation of both learning paradigms.
A theoretical comparison of the two algorithms demonstrates that the strengths of one can outweigh the constraints of the other. Therefore, this work aims to combine MTL and meta learning to attain the best of both worlds. The main contribution of this thesis is Multi-task Meta Learning (MTML), an integration of MTL and meta learning. As gradient-based (or optimization-based) meta learning follows an episodic approach to training a network, we propose multi-task learning episodes to train an MTML network in this work. The basic idea is to train a multi-task model using bi-level meta-optimization so that when a new task is added, it can learn in fewer steps and perform at least as well as traditional single-task learning on the new task. The MTML paradigm is demonstrated on two publicly available datasets, NYU-v2 and Taskonomy, for which four tasks are considered: semantic segmentation, depth estimation, surface normal estimation, and edge detection. This work presents a comparative empirical analysis of MTML against single-task and multi-task learning, where it is evident that MTML excels on most tasks. The future direction of this work includes developing efficient and autonomous MTL architectures by exploiting the concepts of meta learning. The main goal will be to create a task-adaptive MTL, where meta learning may learn to select layers (or features) from the shared structure for every task, because not all tasks require the same high-level, fine-grained features from the shared network. This can be seen as another way of combining MTL and meta learning, and it will also introduce modular learning into the multi-task architecture. Furthermore, this work can be extended to include multi-modal multi-task learning, which will help to study the contributions of each input modality to various tasks.
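The 'share to learn' side can be sketched with the simplest hard-parameter-sharing setup: a shared linear trunk with one head per task, trained jointly over multi-task episodes. The dimensions, learning rate, and synthetic tasks below are illustrative, not the thesis's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# two related regression tasks whose generating weights share structure
X = rng.normal(size=(200, 3))
y = {0: X @ np.array([1.0, 2.0, 0.0]),
     1: X @ np.array([1.0, 2.0, 1.0])}

W = rng.normal(scale=0.1, size=(3, 2))     # shared trunk (3 -> 2 features)
heads = {0: np.zeros(2), 1: np.zeros(2)}   # task-specific linear heads
lr = 0.01

def mse(t):
    return float(np.mean((X @ W @ heads[t] - y[t]) ** 2))

for _ in range(3000):
    for t in (0, 1):                       # a "multi-task episode": every task
        h = X @ W                          # shared representation
        err = h @ heads[t] - y[t]
        g_head = h.T @ err / len(X)        # gradient w.r.t. the task head
        g_W = X.T @ np.outer(err, heads[t]) / len(X)  # gradient w.r.t. trunk
        heads[t] -= lr * g_head
        W -= lr * g_W                      # the trunk is updated by both tasks
```

Because both tasks push gradients through the same trunk, the shared features end up serving both; MTML wraps exactly this kind of episode inside an outer meta-optimization loop.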
27

Game Theory and Meta Learning for Optimization of Integrated Satellite-Drone-Terrestrial-Communication Systems

Hu, Ye 01 September 2021 (has links)
Emerging integrated satellite-drone-terrestrial communication (ISDTC) technologies are expected to contribute to our lives by bringing high-speed wireless connectivity to every corner of the world. On the one hand, the Internet of Things (IoT) provides connectivity to various physical objects by enabling them to share information and coordinate decisions. On the other hand, the non-terrestrial components of an ISDTC system, i.e., unmanned aerial vehicles (UAVs) and satellites, can boost the capacity of wireless networks by providing service to hotspots, disaster-affected areas, and rural areas. Despite the several benefits and practical applications of ISDTC technologies, one must address many technical challenges, such as resource management, trajectory design, device cooperation, data routing, and security. The key goal of this dissertation is to develop analytical foundations for the optimization of ISDTC operations and the deployment of non-terrestrial networks (NTNs). First, the problem of resource management within ISDTC systems is investigated for service-effective cooperation between terrestrial networks and NTNs. The performance of a multi-layer ISDTC system is analyzed within a competitive market setting. Using a novel decentralized algorithm, spectrum resources are allocated to each of the communication links while considering fairness among devices. The proposed algorithm is proved to reach a Walrasian equilibrium, at which the sum-rate of the network is maximized. The results also show that the proposed algorithm can reach the equilibrium with a practical convergence speed. Then, the effective deployment of NTNs under environmental dynamics is investigated using machine learning solutions with meta-training capabilities. First, the use of satellites to provide on-demand coverage for unforeseeable radio access needs is investigated using game theory.
The optimal data routing strategies are learned by the satellite system using a novel reinforcement learning approach with distribution-robust meta-training capability. The results show that the proposed meta-training mechanism significantly reduces the learning cost on the satellites and is guaranteed to reach the maximal service coverage in the system. Next, the problem of controlling UAV-carried radio access points under energy constraints is studied. In particular, novel frameworks are proposed to design trajectories for UAVs that seek to deliver data service to distributed, dynamic, and unforeseeable wireless access requests. The results show that the proposed approaches are guaranteed to converge to an optimal trajectory and achieve faster convergence and lower computation cost by using decomposition, cross-validation, and meta learning. Finally, this dissertation looks at the security of an IoT system. In particular, the impact of human intervention on system security is analyzed under specific resource constraints. Psychological game theory frameworks are proposed to analyze human psychology and its impact on the security of the system. The results show that the proposed solution can help the defender optimize its connectivity within the IoT system by estimating the attacker's behavior. In summary, the outcomes of this dissertation provide key guidelines for the effective deployment of ISDTC systems. / Doctor of Philosophy / In the past decade, the goal of providing wireless connectivity to all individuals and communities, including the most disadvantaged ones, has become a national priority both in the US and globally. Yet, remarkably, a great portion of the Earth's population still falls outside today's wireless broadband coverage.
While people who live in under-developed or rural areas remain in "wireless darkness," communities in megacities often experience below-par wireless service due to their overloaded communication systems. To provide high-speed, reliable wireless connectivity to those on the less-served side of the digital divide, an integrated space-air-ground communication system can be designed. Indeed, airborne and space-based non-terrestrial networks (NTNs) can enhance the capacity and coverage of existing wireless cellular networks (e.g., 5G and beyond) by providing supplemental, affordable, flexible, and reliable service to users in rural, disaster-affected, and over-crowded areas. In order to fill the coverage holes and bridge the digital divide, seamless integration between NTNs and terrestrial networks is needed. In particular, when deploying an integrated communication system, one must consider the problems of spectrum management, device cooperation, trajectory design, and data routing within the system. Meanwhile, with the increased exposure to malicious attacks on high-altitude platforms and vulnerable IoT devices, the security within the integrated system must be analyzed and optimized for reliable data service. To overcome the technological challenges that hinder the realization of global digital inclusion, this dissertation uses techniques from the fields of game theory, meta learning, and optimization theory to deploy, control, coordinate, and manage terrestrial networks and NTNs. The anticipated results show that a properly integrated satellite-drone-terrestrial communication (ISDTC) system can deliver cost-effective, high-speed, seamless wireless service to our world.
28

Data-Augmented Structure-Property Mapping for Accelerating Computational Design of Advanced Material Systems

January 2018 (has links)
Advanced material systems refer to materials that are composed of multiple traditional constituents with complex microstructure morphologies, which lead to their superior properties over conventional materials. This dissertation is motivated by the grand challenge of accelerating the design of advanced material systems through systematic optimization with respect to material microstructures or processing settings. While optimization techniques have mature applications to a large range of engineering systems, their application to material design meets unique challenges due to the high dimensionality of microstructures and the high cost of computing process-structure-property (PSP) mappings. The key to addressing these challenges is the learning of material representations and predictive PSP mappings while managing a small data acquisition budget. This dissertation thus focuses on developing learning mechanisms that leverage context-specific meta-data and physics-based theories. Two research tasks are conducted. In the first, we develop a statistical generative model that learns to characterize high-dimensional microstructure samples using low-dimensional features. We improve the data efficiency of a variational autoencoder by introducing a morphology loss to the training. We demonstrate that the resultant microstructure generator is morphology-aware when trained on a small set of material samples and can effectively constrain the microstructure space during material design. In the second task, we investigate an active learning mechanism where new samples are acquired based on their violation of a theory-driven constraint on the physics-based model.
We demonstrate, using a topology optimization case, that while data acquisition through the physics-based model is often expensive (e.g., obtaining microstructures through simulation or optimization processes), the evaluation of the constraint can be far more affordable (e.g., checking whether a solution is optimal or at equilibrium). We show that this theory-driven learning algorithm can lead to much improved learning efficiency and generalization performance when such constraints can be derived. The outcome of this research is a better understanding of how physics knowledge about material systems can be integrated into machine learning frameworks in order to achieve more cost-effective and reliable learning of material representations and predictive models, which are essential to accelerating computational material design. / Dissertation/Thesis / Doctoral Dissertation Mechanical Engineering 2018
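The augmented objective described above can be sketched as a standard VAE loss plus a weighted morphology term. Here the "morphology descriptor" is simply the volume fraction (mean pixel value), a deliberately simple stand-in for the dissertation's actual descriptors:

```python
import numpy as np

def vae_morphology_loss(x, x_recon, mu, logvar, weight=1.0):
    # standard VAE terms: reconstruction error plus KL divergence to N(0, I)
    recon = np.mean((x - x_recon) ** 2)
    kl = -0.5 * np.mean(1.0 + logvar - mu ** 2 - np.exp(logvar))
    # illustrative morphology term: match the volume fraction (mean pixel)
    # of the reconstruction to that of the input microstructure
    morphology = (x.mean() - x_recon.mean()) ** 2
    return recon + kl + weight * morphology
```

Adding the descriptor-matching term penalizes reconstructions that are pixel-wise plausible but morphologically wrong, which is how a small training set can still yield a morphology-aware generator.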
29

On sparse representations and new meta-learning paradigms for representation learning

Mehta, Nishant A. 27 August 2014 (has links)
Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning, as it translates to learning to learn itself. The presentation unfolds in two parts. In the first part, we establish learning-theoretic results for learning sparse representations. The second part introduces new multi-task and meta-learning paradigms for representation learning. On the sparse representations front, our main pursuits are generalization error bounds to support a supervised dictionary learning model for Lasso-style sparse coding. Such predictive sparse coding algorithms have been applied with much success in the literature; even more common have been applications of unsupervised sparse coding followed by supervised linear hypothesis learning. We present two generalization error bounds for predictive sparse coding, handling the overcomplete setting (more learned features than original dimensions) and the infinite-dimensional setting. Our analysis led to a fundamental stability result for the Lasso that shows the stability of the solution vector to design matrix perturbations. We also introduce and analyze new multi-task models for (unsupervised) sparse coding and predictive sparse coding, allowing for one dictionary per task but with sharing between the tasks' dictionaries. The second part introduces new meta-learning paradigms to realize unprecedented types of learning guarantees for meta-learning. Specifically sought are guarantees on a meta-learner's performance on new tasks encountered in an environment of tasks. Nearly all previous work produced bounds on the expected risk, whereas we produce tail bounds on the risk, thereby providing performance guarantees on the risk for a single new task drawn from the environment. The new paradigms include minimax multi-task learning (minimax MTL) and sample variance penalized meta-learning (SVP-ML).
Regarding minimax MTL, we provide a high probability learning guarantee on its performance on individual tasks encountered in the future, the first of its kind. We also present two continua of meta-learning formulations, each interpolating between classical multi-task learning and minimax multi-task learning. The idea of SVP-ML is to minimize the task average of the training tasks' empirical risks plus a penalty on their sample variance. Controlling this sample variance can potentially yield a faster rate of decrease for upper bounds on the expected risk of new tasks, while also yielding high probability guarantees on the meta-learner's average performance over a draw of new test tasks. An algorithm is presented for SVP-ML with feature selection representations, as well as a quite natural convex relaxation of the SVP-ML objective.
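Both paradigms reduce to simple objectives over the training tasks' empirical risks. The sketch below, with an illustrative penalty weight `lam`, places the sample-variance penalty of SVP-ML next to the minimax criterion:

```python
import numpy as np

def svp_ml_objective(task_risks, lam=0.5):
    # SVP-ML: task-average empirical risk plus a sample-variance penalty
    risks = np.asarray(task_risks, dtype=float)
    return risks.mean() + lam * risks.var(ddof=1)

def minimax_mtl_objective(task_risks):
    # minimax MTL: optimize the worst task's risk instead of the average
    return float(np.max(task_risks))
```

Under either objective, a meta-learner that is uniformly mediocre can beat one that is excellent on most tasks but fails badly on a few, which is the mechanism behind the per-task (rather than on-average) guarantees described above.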
30

Uma hiper-heurística híbrida para a otimização de algorítmos / A Hybrid Hyper-Heuristic for Algorithm Optimization

MIRANDA, Pericles Barbosa Cunha de 22 August 2016 (has links)
Designing an algorithm or heuristic to solve a given problem is a challenging task due to the variety of possible design choices and the lack of clear guidelines on how to choose and/or combine them. For instance, the performance of an optimization algorithm depends on the design of its search operators as well as an adequate setting of specific hyper-parameters, each of them with many possible options to choose from. Because of that, there is a growing research interest in automating the design of algorithms by exploring mainly optimization and machine learning approaches, aiming to make the algorithm design process more independent from human interaction. Different approaches have dealt with the task of optimizing algorithms as another (meta-)optimization problem. These approaches are commonly called hyper-heuristics, where each solution of the search space is a possible algorithm. Initially, hyper-heuristics were applied to the selection of parameters in a predefined and limited search space. More recently, however, generation hyper-heuristics have been developed to generate algorithms from a set of specified components and functions. Generation hyper-heuristics are considered more flexible than selection ones due to their capacity to create new and customized algorithms for a given problem. Hyper-heuristics have been widely used for the optimization of meta-heuristics. However, the search process becomes expensive because the evaluation of each solution depends on the execution of an algorithm on a problem. In this work, a novel hyper-heuristic was developed to optimize algorithms for a given problem. The proposed approach aims to provide optimized algorithms for the input problem and to reduce the computational cost of the optimization process significantly when compared to other hyper-heuristics. The proposed hyper-heuristic combines an automated algorithm selection method with a generation hyper-heuristic. The generation hyper-heuristic is responsible for the creation of a knowledge base, which contains algorithms previously built for a set of problems. Once the knowledge base is available, it is used as a source of algorithms to be recommended by the automated algorithm selection method. The idea is to reuse the algorithms already built by the generation hyper-heuristic on similar problems. It is worth mentioning that the creation of hyper-heuristics aiming to reduce the cost of algorithm generation without harming the quality of the generated algorithms had not yet been studied. Moreover, hybrid hyper-heuristics that combine an algorithm selection approach with a generation hyper-heuristic for algorithm optimization, as proposed in this thesis, are a novelty. To evaluate the proposed algorithm, the optimization of the Particle Swarm Optimization algorithm (PSO) was considered as a case study. In our experiments, we considered 32 optimization problems. The proposed system was evaluated regarding its capacity to recommend adequate algorithms for an input problem, the quality of the recommended algorithms, and, finally, its accuracy in recommending algorithms. The results showed that the proposed system recommends useful algorithms for the input problem efficiently. Besides, the recommended algorithms achieved competitive results when compared to state-of-the-art algorithms, and the system presented a high percentage of accuracy in the recommendation.
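The selection stage of such a hybrid can be sketched as a nearest-neighbour lookup over problem meta-features in the knowledge base. The two features and the PSO configuration names below are invented for illustration and are not the thesis's own meta-features or generated algorithms:

```python
import numpy as np

# hypothetical knowledge base built by a generation hyper-heuristic:
# each row of `problems` holds meta-features of a problem already solved,
# and `algorithms` holds the algorithm configuration generated for it
problems = np.array([[0.1, 0.9],    # e.g. (dimensionality, multimodality)
                     [0.8, 0.2],
                     [0.5, 0.5]])
algorithms = ["PSO-ring-topology", "PSO-star-topology", "PSO-random-topology"]

def recommend(problem_features):
    # recommend the algorithm stored for the most similar known problem,
    # avoiding any new (expensive) generation run for the input problem
    query = np.asarray(problem_features, dtype=float)
    nearest = int(np.argmin(np.linalg.norm(problems - query, axis=1)))
    return algorithms[nearest]
```

This is where the cost saving comes from: evaluating a candidate algorithm means running it on the problem, so reusing a previously generated algorithm for a similar problem replaces a full generation search with a single lookup.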
