1 |
Vägplanering i dataspel med hjälp av Artificial Bee Colony Algorithm / Pathfinding in computer games by using Artificial Bee Colony Algorithm. Lee, Jessica, January 2015.
The Artificial Bee Colony Algorithm has previously been applied to many numerical optimization problems, but it is not commonly used for pathfinding in computer games. This work investigates whether the algorithm performs better than A* in four different test environments, A* being one of the most widely used pathfinding algorithms in computer games and therefore a good reference. The aspects compared in the measurements are the algorithms' running time and the length of the resulting paths. A directed random generation of paths was implemented for the algorithm, which prevents it from handling deep dead ends. The algorithm also uses roulette-wheel selection and has the ability to generate random neighbour paths to those created so far. The results show that the Artificial Bee Colony Algorithm performs considerably worse than A* and is therefore not a better algorithm for pathfinding in computer games. The algorithm nevertheless has the potential to perform better and to handle dead ends if its random path generation is improved.
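As a concrete illustration of the selection mechanism mentioned above, the sketch below shows roulette-wheel selection over candidate paths in Python. It is only a sketch under assumed names: the thesis's actual path representation and fitness function are not reproduced here, and `path_length` is a placeholder.

```python
import random

def roulette_wheel_select(paths, path_length):
    """Pick one candidate path with probability proportional to its fitness,
    as in the onlooker-bee phase of the Artificial Bee Colony algorithm.
    Shorter paths get higher fitness; `paths` and `path_length` are assumed
    placeholders, not the thesis's implementation."""
    fitnesses = [1.0 / (1.0 + path_length(p)) for p in paths]
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for path, fit in zip(paths, fitnesses):
        running += fit
        if running >= pick:
            return path
    return paths[-1]  # numerical safety fallback

# Example with dummy paths given as lists of grid cells
paths = [[(0, 0), (1, 0), (1, 1)], [(0, 0), (0, 1), (1, 1), (2, 1)]]
chosen = roulette_wheel_select(paths, path_length=len)
```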
|
2 |
Implementation and Testing of Two Bee-Based Algorithms in Finite Element Model Updating. Marrè Badalló, Roser, January 2013.
Finite Element Model Updating has recently arisen as an issue of vast importance in the design, construction and maintenance of structures in civil engineering. Many algorithms have been proposed, developed and enhanced to meet the demands of the updating process, mainly to achieve computationally efficient programs and better results. The present Master Thesis proposes two new algorithms to be used in Finite Element Model Updating: the Bees Algorithm (BA) and the Artificial Bee Colony algorithm (ABC). Both were first proposed in 2005, are based on the foraging behaviour of bees and have proved to be efficient algorithms in other fields. The objective of this Master Thesis is, thus, to implement and test these two algorithms in Finite Element Model Updating for a cantilever beam. The Finite Element Model and the algorithms are programmed, followed by the extraction of the experimental frequencies and the updating process. Results, a comparison of the two methods and conclusions are given at the end of this report, as well as suggestions for further work.
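The exact updating objective used in the thesis is not reproduced in the abstract; a common formulation, sketched below under assumed names (`model_freqs` stands in for the cantilever-beam finite element model), is to minimise the discrepancy between measured and model natural frequencies, which is what the BA or ABC search would then drive towards zero.

```python
import numpy as np

def frequency_residual(theta, measured_freqs, model_freqs):
    """Sum of squared relative errors between measured natural frequencies and
    those predicted by the finite element model for updating parameters theta.
    `model_freqs(theta)` is an assumed placeholder for the FE model."""
    predicted = np.asarray(model_freqs(theta), dtype=float)
    measured = np.asarray(measured_freqs, dtype=float)
    return float(np.sum(((predicted - measured) / measured) ** 2))

# A Bees Algorithm or ABC run would minimise frequency_residual over theta.
```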
|
3 |
Prediction of self-compacting concrete elastic modulus using two symbolic regression techniques. Golafshani, E.M., Ashour, Ashraf, 28 December 2015.
This paper introduces a novel symbolic regression approach, namely biogeographical-based programming (BBP), for the prediction of the elastic modulus of self-compacting concrete (SCC). The BBP model was constructed directly from a comprehensive dataset of experimental results of SCC available in the literature. For comparison purposes, another new symbolic regression model, namely artificial bee colony programming (ABCP), was also developed. Furthermore, several available formulas for predicting the elastic modulus of SCC were assessed using the collected database.
The results show that the proposed BBP model provides slightly closer agreement with experiments than the ABCP model and the existing formulas. A sensitivity analysis of BBP parameters also shows that the prediction of the BBP model improves as habitat size, colony size and maximum tree depth increase. In addition, among all considered empirical and design-code equations, the equations of Leemann and Hoffmann and of ACI 318-08 exhibit reasonable performance, whereas the equations of Persson and of Felekoglu et al. are highly inaccurate for the prediction of SCC elastic modulus.
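As a hedged illustration of how a candidate formula produced by BBP or ABCP might be scored against the experimental database (the paper's actual dataset, input variables and error measure are not reproduced here; the feature name `fc` and the numbers below are assumptions):

```python
import math

def rmse_of_candidate(formula, dataset):
    """Root-mean-square error of a candidate symbolic-regression formula.
    `formula` maps a dict of input features to a predicted elastic modulus and
    `dataset` is a list of (features, measured_E) pairs; both are assumptions."""
    errors = [(formula(x) - y) ** 2 for x, y in dataset]
    return math.sqrt(sum(errors) / len(errors))

# Illustrative only: score an ACI-style candidate E = 4.7 * sqrt(fc) (E in GPa, fc in MPa)
candidate = lambda x: 4.7 * math.sqrt(x["fc"])
data = [({"fc": 40.0}, 30.0), ({"fc": 55.0}, 34.5)]
print(rmse_of_candidate(candidate, data))
```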
|
4 |
Dirbtinės bičių kolonijos algoritmai ir jų taikymai skirstymo uždaviniams spręsti / Artificial Bee Colony Algorithms and their Application to Assignment Problems. Matakas, Linas, 29 July 2013.
This paper gives brief overviews of swarm-intelligence algorithms and of assignment problems, their formulations and practical interpretations, together with a more detailed review and analysis of artificial bee colony algorithms. It also presents the application of an artificial bee colony algorithm to an assignment problem and an analysis of the computational results of the implemented program.
|
5 |
Dirbtinės bičių kolonijos algoritmai ir jų taikymai maršrutų optimizavimo uždaviniams spręsti / Artificial Bee Colony Algorithms and their Application to Route Optimisation Problems. Kavaliauskas, Donatas, 29 July 2013.
This paper gives a brief overview of swarm-intelligence algorithms and of route optimisation problems, their formulations and practical interpretations, followed by a more detailed review of artificial bee colony algorithms and their adaptation to the travelling salesman problem. It also presents the application of an artificial bee colony algorithm to the travelling salesman problem and an analysis of the computational results of the implemented program.
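The thesis's exact neighbourhood operator is not given in the abstract; a common way to adapt ABC's "neighbour food source" step to the travelling salesman problem, sketched below under that assumption, is to swap two cities and keep the neighbour tour only if it is shorter (greedy selection, as in the employed-bee phase).

```python
import random

def neighbour_tour(tour):
    """Return a neighbouring tour obtained by swapping two randomly chosen
    cities (an assumed operator, not necessarily the thesis's)."""
    i, j = random.sample(range(len(tour)), 2)
    new_tour = list(tour)
    new_tour[i], new_tour[j] = new_tour[j], new_tour[i]
    return new_tour

def tour_length(tour, dist):
    """Total length of the closed tour for distance matrix dist."""
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

# Greedy acceptance: keep the neighbour only if it improves the tour
dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
tour = [0, 1, 2]
candidate = neighbour_tour(tour)
if tour_length(candidate, dist) < tour_length(tour, dist):
    tour = candidate
```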
|
6 |
Uma metodologia para síntese de circuitos digitais em FPGAs baseada em otimização multiobjetivo / A methodology for digital circuit synthesis on FPGAs based on multi-objective optimization. SOUZA, Viviane Lucy Santos de, 20 August 2015.
Nowadays, the evolution of FPGA (Field Programmable Gate Array) architectures allows them to be employed in applications ranging from rapid prototyping of simple digital circuits to coprocessors for high-performance computing. However, the efficient use of these architectures depends heavily, among other factors, on the synthesis tool employed. The challenge for synthesis tools is to convert the designer's logic into circuits that use the chip area effectively, do not degrade the operating frequency and, above all, are efficient in reducing power consumption. Researchers and major FPGA manufacturers are therefore frequently developing new tools aimed at these goals, which are by nature conflicting. The synthesis flow of FPGA-based designs comprises the steps of logic optimization, mapping, packing, placement and routing. These steps are dependent, so that optimizations in the early stages have a positive impact on the later ones. In this doctoral work, we propose a methodology for optimizing the synthesis flow, specifically the mapping and packing steps.

Classically, the mapping step is performed by heuristics that produce a solution to the problem but do not allow the search for optimal solutions, or that favour one objective at the expense of the others. We therefore propose a multi-objective approach based on a genetic algorithm and a multi-objective approach based on an artificial bee colony which, combined with problem-specific heuristics, obtain solutions of better quality and yield final circuits with reduced area, gains in operating frequency and lower dynamic power consumption. In addition, we propose a new multi-objective packing approach that differs from the state of the art by using a prediction technique and by considering dynamic characteristics of the problem, producing more efficient circuits that ease the placement and routing steps.

The whole methodology was integrated into the academic flow of VTR (Verilog to Routing), an open-source, collaborative project involving multiple research groups working on FPGA architecture development and new synthesis tools. As a benchmark, we used a set of the 20 largest MCNC (Microelectronics Center of North Carolina) circuits, which are frequently used in research in this area. The integrated use of the tools derived from the proposed methodology reduces important post-routing metrics: compared with the state of the art, we obtain, on average, reductions of up to 19% in circuit area, up to 10% in critical path, and up to 18% in estimated total dynamic power. The experiments also show that the proposed mapping methods are computationally more expensive than state-of-the-art methods, being up to 4.7x slower, whereas the packing methodology shows little or no overhead compared with the method in VTR. Despite the mapping overhead, the proposed methods, when integrated into the complete flow, can reduce the synthesis running time by about 40%, as a result of producing simpler circuits that, in turn, favour the placement and routing steps.
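The thesis's cost models and the internals of its genetic-algorithm and bee-colony mappers are not reproduced here; the sketch below only shows the Pareto-dominance test that any such multi-objective mapping or packing approach relies on, with assumed objective names and units (area, critical path, dynamic power, all to be minimised).

```python
from typing import NamedTuple

class CircuitCost(NamedTuple):
    area: float            # e.g. number of LUT clusters (assumed unit)
    critical_path: float   # e.g. nanoseconds (assumed unit)
    dynamic_power: float   # e.g. milliwatts (assumed unit)

def dominates(a: CircuitCost, b: CircuitCost) -> bool:
    """True if mapping solution a Pareto-dominates b: no objective is worse
    and at least one is strictly better (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# A multi-objective mapper keeps the non-dominated solutions as its front
front = []
for cand in [CircuitCost(120, 8.4, 35.0), CircuitCost(130, 7.9, 33.0)]:
    front = [s for s in front if not dominates(cand, s)]
    if not any(dominates(s, cand) for s in front):
        front.append(cand)
```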
|
7 |
Um algoritmo inspirado em colônias de abelhas para otimização numérica com restrições / An algorithm inspired by bee colonies for constrained numerical optimization. Duarte, Grasiele Regina, 6 March 2015.
Optimization problems are present in many areas of society, and the use of bio-inspired algorithms to solve complex problems of this type has been growing steadily. The Artificial Bee Colony (ABC) algorithm is a bio-inspired algorithm proposed in 2005 for solving multimodal and multidimensional optimization problems. The natural phenomenon that inspired the development of ABC is the intelligent behaviour observed in bee colonies, more specifically in foraging. ABC was initially proposed for unconstrained problems. This study evaluates the performance of ABC when applied to constrained optimization problems. To handle the constraints, penalty methods are incorporated into ABC. Several penalty methods of different types are analysed, with the goal of identifying the one with which the algorithm performs best. Furthermore, possible limitations of, and precautions to be taken when, combining penalty methods with ABC are evaluated. The proposed algorithm is assessed by solving optimization problems found in the literature. Several computational experiments are carried out, and graphs and tables are generated to present the obtained results, which are also discussed.
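Which penalty methods the dissertation compares is not listed in the abstract; the sketch below shows one simple member of that family, a static penalty that folds constraint violations into the objective that ABC then minimises (the penalty factor `r` and tolerance `tol` are assumptions).

```python
def penalised_objective(f, g_constraints, h_constraints, r=1e3, tol=1e-4):
    """Build a penalised objective from an objective f, inequality constraints
    g(x) <= 0 and equality constraints h(x) = 0, using a static penalty
    (one of several penalty families; r and tol are illustrative choices)."""
    def phi(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in g_constraints)
        violation += sum(max(0.0, abs(h(x)) - tol) ** 2 for h in h_constraints)
        return f(x) + r * violation
    return phi

# Example: minimise x0 + x1 subject to x0**2 + x1**2 <= 1
phi = penalised_objective(lambda x: x[0] + x[1],
                          [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0], [])
print(phi([0.5, 0.5]), phi([2.0, 2.0]))
```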
|
8 |
Amélioration des métaheuristiques d'optimisation à l'aide de l'analyse de sensibilité / Improvement of optimization metaheuristics with sensitivity analysis. Loubiere, Peio, 21 November 2016.
Hard optimization designates a class of problems whose solutions cannot be obtained by an exact method in polynomial time; finding a solution in reasonable time requires a compromise on its accuracy. Metaheuristics are high-level algorithms that solve such problems generically and efficiently (i.e. they find a satisfactory solution according to defined criteria such as time or error). The first chapter of this thesis is devoted to this problem and to a detailed study of two families of population-based metaheuristics: evolutionary algorithms and swarm-intelligence algorithms. In order to propose an innovative approach in the field of metaheuristics, this chapter also introduces sensitivity analysis. Sensitivity analysis evaluates the influence of a function's parameters on its response; its study characterises the global behaviour of the objective function (linearity, influence, correlation, etc.) over its search space. Incorporating a sensitivity analysis method into a metaheuristic allows its search to be steered along the most promising dimensions. Two algorithms combining these notions are proposed in the second and third chapters. In the first one, ABC-Morris, the Morris method is embedded in the artificial bee colony (ABC) metaheuristic; this integration is dedicated, since the two methods rely on similar equations. To generalise the approach, a new method, NN-LCC, is then developed and its generic integration is illustrated on two metaheuristics: ABC with modification rate and differential evolution. The efficiency of the proposed approaches is tested on the CEC 2013 conference benchmark. The study is carried out in two parts: a classical performance analysis of the method against several state-of-the-art algorithms, and a comparison with the original algorithm when a subset of dimensions is deactivated, producing strongly uneven influences.
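The coupling of Morris's method with ABC (ABC-Morris) and the NN-LCC method themselves are not reproduced here; the sketch below only shows a simplified elementary-effect estimate, the quantity Morris's method uses to rank each dimension's influence (random base points instead of the usual trajectory design; the step `delta` and sample size are assumptions).

```python
import random

def elementary_effects(f, bounds, delta=0.1, n_base_points=20):
    """Mean absolute elementary effect of each dimension of f, a Morris-style
    influence measure (simplified one-at-a-time version, not the thesis's)."""
    dim = len(bounds)
    effects = [0.0] * dim
    for _ in range(n_base_points):
        x = [random.uniform(lo, hi - delta * (hi - lo)) for lo, hi in bounds]
        fx = f(x)
        for i, (lo, hi) in enumerate(bounds):
            x_step = list(x)
            x_step[i] += delta * (hi - lo)
            effects[i] += abs(f(x_step) - fx) / delta
    return [e / n_base_points for e in effects]

# Dimensions with larger effects would be searched more often by ABC-Morris
sphere = lambda x: sum(v * v for v in x)
print(elementary_effects(sphere, bounds=[(-5, 5)] * 3))
```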
|
9 |
An evolutionary Pentagon Support Vector finder method. Mousavi, S.M.H., Vincent, Charles, Gherman, T., 2 March 2020.
In dealing with big data, we need effective algorithms; effectiveness that depends, among other things, on the ability to remove outliers from the data set, especially when dealing with classification problems. To this end, support vector finder algorithms have been created to keep only the most important data in the data pool. Nevertheless, existing algorithms used for this purpose, such as Fuzzy C-Means (FCM), suffer from the drawback of setting the initial cluster centres imprecisely. In this paper, we avoid these shortcomings and aim to find and remove unnecessary data in order to speed up the final classification task without losing vital samples and without harming final accuracy; in this sense, we present a unique approach for finding support vectors, named the evolutionary Pentagon Support Vector (PSV) finder method. The originality of the current research lies in using geometrical computations and evolutionary algorithms to build a more effective system, which has the advantage of higher accuracy on some data sets. The proposed method is subsequently tested with seven benchmark data sets, and the results are compared to those obtained from performing classification on the original data (classification before and after PSV) under the same conditions. The testing returned promising results.
|
10 |
Meta-heurísticas Iterated Local Search, GRASP e Artificial Bee Colony aplicadas ao Job Shop Flexível para minimização do atraso total / Meta-heuristics Iterated Local Search, GRASP and Artificial Bee Colony applied to the Flexible Job Shop for total tardiness minimization. Melo, Everton Luiz de, 7 February 2014.
The production environment addressed herein is the Flexible Job Shop (FJS), a generalization of the Job Shop (JS). The job scheduling problem in the JS environment is classified by Garey, Johnson and Sethi (1976) as NP-hard, and the FJS is at least as difficult as the JS. The FJS is composed of a set of jobs, each consisting of operations. Each operation must be processed individually, without interruption, on a single machine from a subset of enabled machines. The main performance criterion considered is the minimization of job tardiness. Mixed Integer Linear Programming (MILP) models are presented that minimize the total tardiness and the completion time of the last operation, the makespan. New job priority rules are proposed, as well as adaptations of rules from the literature. These rules are used by constructive heuristics and are combined with strategies aimed at exploiting specific characteristics of the FJS. In order to improve the solutions initially obtained, local searches and other improvement mechanisms are proposed and used in the development of metaheuristics of three different categories: Iterated Local Search (ILS), classified as a trajectory metaheuristic; Greedy Randomized Adaptive Search Procedure (GRASP), a constructive metaheuristic; and Artificial Bee Colony (ABC), a recently proposed population metaheuristic. These methods were selected owing to their good results for various optimization problems in the literature. Computational experiments using 600 FJS instances are carried out to allow comparisons between the resolution methods. The results show that exploiting the characteristics of the problem allows one of the proposed priority rules to outperform the best rule from the literature in 81% of the instances. The metaheuristics ILS, GRASP and ABC achieve improvements of more than 31% over the initial solutions and obtain an average tardiness only 2.24% higher than that of the optimal solutions. Modifications to the metaheuristics are also proposed that yield even more significant improvements without increasing the execution time. Additionally, a version of the FJS with Disassembly and Assembly operations (DAFJS) is studied, and the experiments performed with a set of 150 instances also indicate good performance of the developed methods.
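The priority rules proposed in the thesis are not reproduced in the abstract; the sketch below only illustrates the general shape of a constructive dispatching heuristic for the flexible job shop, using earliest due date as an assumed example rule and assigning each operation to the enabled machine that finishes it first (the data layout is an assumption as well).

```python
def edd_dispatch(jobs, machine_ready):
    """Very small dispatching sketch for the flexible job shop.

    `jobs` maps job id -> {"due": d, "ops": [list of {machine: proc_time}]}
    and `machine_ready` maps machine id -> time it becomes free. The rule used
    here (earliest due date) and the data layout are illustrative assumptions,
    not the thesis's proposed rules."""
    next_op = {j: 0 for j in jobs}           # index of next unscheduled operation
    job_ready = {j: 0.0 for j in jobs}       # completion time of previous operation
    schedule = []
    while any(next_op[j] < len(jobs[j]["ops"]) for j in jobs):
        # pick the unfinished job with the earliest due date
        ready = [j for j in jobs if next_op[j] < len(jobs[j]["ops"])]
        j = min(ready, key=lambda k: jobs[k]["due"])
        op = jobs[j]["ops"][next_op[j]]
        # choose the enabled machine that finishes this operation first
        m = min(op, key=lambda mm: max(machine_ready[mm], job_ready[j]) + op[mm])
        start = max(machine_ready[m], job_ready[j])
        end = start + op[m]
        machine_ready[m], job_ready[j] = end, end
        schedule.append((j, next_op[j], m, start, end))
        next_op[j] += 1
    tardiness = sum(max(0.0, job_ready[j] - jobs[j]["due"]) for j in jobs)
    return schedule, tardiness

jobs = {"J1": {"due": 7, "ops": [{"M1": 3, "M2": 4}, {"M2": 2}]},
        "J2": {"due": 5, "ops": [{"M1": 2}, {"M1": 3, "M2": 3}]}}
print(edd_dispatch(jobs, {"M1": 0.0, "M2": 0.0}))
```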
|