101.
Particle swarm optimization and differential evolution for multi-objective multiple machine scheduling. Grobler, Jacomine (24 June 2009)
Production scheduling is one of the most important issues in the planning and operation of manufacturing systems. Customers increasingly expect to receive the right product at the right price at the right time. Various problems experienced in manufacturing, for example low machine utilization and excessive work-in-process, can be attributed directly to inadequate scheduling. In this dissertation a production scheduling algorithm is developed for Optimatix, a South African-based company specializing in supply chain optimization. To address the complex requirements of the customer, the problem was modeled as a flexible job shop scheduling problem with sequence-dependent set-up times, auxiliary resources and production down time. The algorithm development process focused on investigating the application of both particle swarm optimization (PSO) and differential evolution (DE) to production scheduling environments characterized by multiple machines and multiple objectives. Alternative problem representations, algorithm variations and multi-objective optimization strategies were evaluated to obtain an algorithm which performs well against both existing rule-based algorithms and an existing complex flexible job shop scheduling solution strategy. Finally, the generality of the priority-based algorithm was evaluated by applying it to the scheduling of production and maintenance activities at Centurion Ice Cream and Sweets. The production environment was modeled as a multi-objective uniform parallel machine shop problem with sequence-dependent set-up times and unavailability intervals. A self-adaptive modified vector evaluated DE algorithm was developed and compared to classical PSO and DE vector evaluated algorithms. Promising results were obtained with respect to the suitability of the algorithms for solving a range of multi-objective multiple machine scheduling problems. Copyright / Dissertation (MEng)--University of Pretoria, 2009. / Industrial and Systems Engineering / unrestricted
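A minimal sketch of how a priority-based representation of the kind named above can be decoded: a particle's continuous position vector is read as job priorities and greedily turned into a parallel-machine schedule with a makespan objective. The processing times, machine count and decoding rule are illustrative assumptions, not the Optimatix algorithm developed in the dissertation.

# Hypothetical decoder: higher priority value means the job is scheduled earlier.
def decode_priority_vector(priorities, proc_times, n_machines):
    order = sorted(range(len(priorities)), key=lambda j: -priorities[j])
    machine_free = [0.0] * n_machines            # time at which each machine becomes idle
    for job in order:
        m = min(range(n_machines), key=lambda k: machine_free[k])  # earliest-available machine
        machine_free[m] += proc_times[job]
    return max(machine_free)                     # makespan of the decoded schedule

proc_times = [4.0, 2.0, 5.0, 1.0, 3.0, 2.0]      # assumed processing times for six jobs
particle = [0.9, 0.1, 0.7, 0.3, 0.5, 0.8]        # one PSO/DE candidate solution
print(decode_priority_vector(particle, proc_times, n_machines=2))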
102.
A discourse concerning certain stochastic optimization algorithms and their application to the imaging of cataclysmic variable stars. Wood, Derren W (27 July 2005)
This thesis is primarily concerned with a description of four types of stochastic algorithms, namely the genetic algorithm, the continuous parameter genetic algorithm, the particle swarm algorithm and the differential evolution algorithm. Each of these techniques is presented in sufficient detail to allow the layman to develop her own program upon examining the text. All four algorithms are applied to the optimization of a certain set of unconstrained problems known as the extended Dixon-Szegö test set. An algorithm's performance at optimizing a set of problems such as these is often used as a benchmark for judging its efficacy. Although the same thing is done here, an argument is presented that shows that no such general benchmarking is possible. Indeed, it is asserted that drawing general comparisons between stochastic algorithms on the basis of any performance criterion is a meaningless pursuit unless the scope of such comparative statements is limited to specific sets of optimization problems. The idea is a result of the no free lunch theorems proposed by Wolpert and Macready. Two methods of presenting the results of an optimization run are discussed. They are used to show that judging an optimizer's performance is largely a subjective undertaking, despite the apparently objective performance measures which are commonly used when results are published. An important theme of this thesis is the observation that a simple paradigm shift can result in a different decision regarding which algorithm is best suited to a certain task. Hence, an effort is made to present the proper interpretation of the results of such tests (from the author's point of view). Additionally, the four abovementioned algorithms are used in a modelling environment designed to determine the structure of a Magnetic Cataclysmic Variable. This 'real world' modelling problem contrasts starkly with the well defined test set and highlights some of the issues that designers must face in the optimization of physical systems. The particle swarm optimizer will be shown to be the algorithm capable of achieving the best results for this modelling problem if an unbiased χ² performance measure is used. However, the solution it generates is clearly not physically acceptable. Even though this drawback is not directly attributable to the optimizer, it is at least indicative of the fact that there are practical considerations which complicate the issue of algorithm selection. / Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2006. / Mechanical and Aeronautical Engineering / unrestricted
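The particle swarm algorithm referred to above rests on a simple per-particle velocity and position update. Below is a minimal global-best PSO sketch on the Sphere test function; the parameter values are commonly used defaults assumed here, not the author's settings.

import random

def sphere(x):
    # Sphere test function; global minimum of 0 at the origin.
    return sum(v * v for v in x)

def pso(f, dim=5, swarm=20, iters=200, w=0.7, c1=1.4, c2=1.4, bound=5.0):
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                       # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])   # index of the global best
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Canonical update: inertia, cognitive and social components.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fit = f(pos[i])
            if fit < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fit
                if fit < gbest_f:
                    gbest, gbest_f = pos[i][:], fit
    return gbest, gbest_f

print(pso(sphere))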
103.
A Computational Intelligence Approach to Clustering of Temporal Data. Georgieva, Kristina Slavomirova (January 2015)
Temporal data is common in real-world datasets. Analysis of such data, for example by means of clustering algorithms, can be difficult due to its dynamic behaviour. There are various types of changes that may occur to clusters in a dataset. Firstly, data patterns can migrate between clusters, shrinking or expanding the clusters. Additionally, entire clusters may move around the search space. Lastly, clusters can split and merge.

Data clustering, which is the process of grouping similar objects, is one approach to determine relationships among data patterns, but data clustering approaches can face limitations when applied to temporal data, such as difficulty tracking the moving clusters. This research aims to analyse the ability of particle swarm optimisation (PSO) and differential evolution (DE) algorithms to cluster temporal data. These algorithms experience two weaknesses when applied to temporal data. The first weakness is the loss of diversity, which refers to the fact that the population of the algorithm converges, becoming less diverse and, therefore, limiting the algorithm's exploration capabilities. The second weakness, outdated memory, is only experienced by the PSO and refers to the previous personal best solutions found by the particles becoming obsolete as the environment changes. A data clustering algorithm that addresses these two weaknesses is necessary to cluster temporal data.

This research describes various adaptations of PSO and DE algorithms for the purpose of clustering temporal data. The algorithms proposed aim to address the loss of diversity and outdated memory problems experienced by PSO and DE algorithms. These problems are addressed by combining approaches previously used for the purpose of dealing with temporal or dynamic data, such as repulsion and anti-convergence, with PSO and DE approaches used to cluster data. Six PSO algorithms are introduced in this research, namely the data clustering particle swarm optimisation (DCPSO), reinitialising data clustering particle swarm optimisation (RDCPSO), cooperative data clustering particle swarm optimisation (CDCPSO), multi-swarm data clustering particle swarm optimisation (MDCPSO), cooperative multi-swarm data clustering particle swarm optimisation (CMDCPSO), and elitist cooperative multi-swarm data clustering particle swarm optimisation (eCMDCPSO). Additionally, four DE algorithms are introduced, namely the data clustering differential evolution (DCDE), re-initialising data clustering differential evolution (RDCDE), dynamic data clustering differential evolution (DCDynDE), and cooperative dynamic data clustering differential evolution (CDCDynDE).

The PSO and DE algorithms introduced require prior knowledge of the total number of clusters in the dataset. The total number of clusters in a real-world dataset, however, is not always known. For this reason, the best performing PSO and best performing DE are compared. The CDCDynDE is selected as the winning algorithm, which is then adapted to determine the optimal number of clusters dynamically. The resulting algorithm is the k-independent cooperative data clustering differential evolution (KCDCDynDE) algorithm, which was compared against the local network neighbourhood artificial immune system (LNNAIS) algorithm, an artificial immune system (AIS) designed to cluster temporal data and determine the total number of clusters dynamically. It was determined that the KCDCDynDE performed the clustering task well for problems with frequently changing data, high dimensionality, and pattern and cluster data migration types. / Dissertation (MSc)--University of Pretoria, 2015. / Computer Science / Unrestricted
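A minimal sketch of the basic idea behind PSO-based data clustering as studied above: each particle encodes a set of centroids and is scored with a quantization-error-style fitness. The dynamic-environment mechanisms of the proposed algorithms (re-initialisation, multi-swarms, cooperation, anti-convergence) are omitted, and all parameter values and the toy dataset are assumptions.

import random

def quantization_error(centroids, data):
    # Average (squared Euclidean) distance of each pattern to its closest centroid.
    return sum(min(sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centroids)
               for x in data) / len(data)

def copy_centroids(cs):
    return [c[:] for c in cs]

def pso_cluster(data, k=2, swarm=10, iters=100, w=0.72, c1=1.49, c2=1.49):
    dim = len(data[0])
    # Each particle encodes k centroids, i.e. a k*dim position vector.
    pos = [[list(random.choice(data)) for _ in range(k)] for _ in range(swarm)]
    vel = [[[0.0] * dim for _ in range(k)] for _ in range(swarm)]
    pbest = [copy_centroids(p) for p in pos]
    pbest_f = [quantization_error(p, data) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = copy_centroids(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            for c in range(k):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][c][d] = (w * vel[i][c][d]
                                    + c1 * r1 * (pbest[i][c][d] - pos[i][c][d])
                                    + c2 * r2 * (gbest[c][d] - pos[i][c][d]))
                    pos[i][c][d] += vel[i][c][d]
            fit = quantization_error(pos[i], data)
            if fit < pbest_f[i]:
                pbest[i], pbest_f[i] = copy_centroids(pos[i]), fit
                if fit < gbest_f:
                    gbest, gbest_f = copy_centroids(pos[i]), fit
    return gbest

# Two well-separated 2-D clusters; the result should approximate their means.
data = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [4.9, 5.2]]
print(pso_cluster(data, k=2))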
104.
Proposta de uma rede neural modular que seleciona um conjunto diferente de características por módulo / A Modular Neural Network that Selects a Different Set of Features per Module. SEVERO, Diogo da Silva (15 August 2013)
Artificial Neural Networks were inspired by biological neural networks, and the major similarities shared by both are: the ability to process information in a parallel and distributed way, the presence of simple processing units, and the ability to learn through examples. However, artificial neural networks do not present a characteristic inherent to biological neural networks: modularization. In contrast to artificial neural networks, our brain has distinct specialized areas responsible for specific tasks such as vision, hearing and speech, for example. With the aim of bringing artificial neural networks even closer to biological neural networks, modular neural networks were proposed. Such networks take advantage of modularization to outperform simple neural networks when dealing with complex problems. A crucial concept related to the use of modular neural networks is task decomposition. Task decomposition divides the original problem into several subproblems that are smaller and simpler to solve, and each subproblem is handled by a specific expert (a simple neural network). When solving their respective subproblems, each module uses the whole original set of features to train its expert. Nevertheless, it is expected that different modules require different features to perform their tasks. Thus, it is important to choose which features best preserve the discriminatory information among classes needed for each module's classification task. This work proposes a modular neural network architecture that selects a specific set of features per module, a topic little explored in the literature since, in most cases, work involving modular neural networks does not perform feature selection for each particular module. The feature selection procedure is a global optimization method based on binary PSO. Another contribution of this work is a hybrid feature selection and weighting method based on binary PSO. Experiments were carried out on public datasets and the results show that the proposed architecture achieved better or equal classification rates while using fewer features when compared to modular neural networks that do not select features per module. Experiments with the hybrid feature selection and weighting method based on particle swarm optimization showed better classification rates than the methods used for comparison.
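A minimal sketch of binary PSO feature selection of the general kind used in this work: each bit of a particle's position marks a selected feature, velocities are squashed through a sigmoid into bit probabilities, and a wrapper fitness scores each subset. The leave-one-out 1-NN fitness, the toy data and the parameter values are assumptions for illustration; this is not the proposed modular architecture.

import math, random

def loo_1nn_accuracy(mask, X, y):
    # Wrapper fitness: leave-one-out 1-nearest-neighbour accuracy on the selected features.
    feats = [j for j, bit in enumerate(mask) if bit]
    if not feats:
        return 0.0
    correct = 0
    for i in range(len(X)):
        j = min((j for j in range(len(X)) if j != i),
                key=lambda j: sum((X[i][f] - X[j][f]) ** 2 for f in feats))
        correct += (y[j] == y[i])
    return correct / len(X)

def binary_pso(X, y, swarm=10, iters=30, w=0.7, c1=1.4, c2=1.4, vmax=4.0):
    n = len(X[0])
    pos = [[random.randint(0, 1) for _ in range(n)] for _ in range(swarm)]
    vel = [[0.0] * n for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [loo_1nn_accuracy(p, X, y) for p in pos]
    g = max(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                # Binary PSO: the sigmoid of the velocity gives the probability of the bit being 1.
                pos[i][d] = 1 if random.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            fit = loo_1nn_accuracy(pos[i], X, y)
            if fit > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fit
                if fit > gbest_f:
                    gbest, gbest_f = pos[i][:], fit
    return gbest, gbest_f

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.15, 3.0], [0.9, 4.0], [1.0, 2.0], [0.95, 0.5]]
y = [0, 0, 0, 1, 1, 1]
print(binary_pso(X, y))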
105.
Computational Evacuation Models for Populations with Heterogeneous Mobility Requirements. Hata, John Myerly (09 September 2021)
No description available.
106.
Spacecraft Trajectory Optimization Suite (STOpS): Optimization of Low-Thrust Interplanetary Spacecraft Trajectories Using Modern Optimization Techniques. Sheehan, Shane P (01 September 2017)
The work presented here is a continuation of Spacecraft Trajectory Optimization Suite (STOpS), a master’s thesis written by Timothy Fitzgerald at California Polytechnic State University, San Luis Obispo. Low-thrust spacecraft engines are becoming much more common due to their high efficiency, especially for interplanetary trajectories. The version of STOpS presented here optimizes low-thrust trajectories using the Island Model Paradigm with three stochastic evolutionary algorithms: the genetic algorithm, differential evolution, and particle swarm optimization. While the algorithms used here were designed for the original STOpS, they were modified for this work.
The low-thrust STOpS was successfully validated with two trajectory problems and their known near-optimal solutions. The first verification case was a constant-thrust, variable-time Earth orbit to Mars orbit transfer where the thrust was 3.787 Newtons and the time was approximately 195 days. The second verification case was a variable-thrust, constant-time Earth orbit to Mercury orbit transfer with the thrust coming from a solar electric propulsion model equation and the time being 355 days. Low-thrust STOpS found similar near-optimal solutions in each case. The final result of this work is a versatile MATLAB tool for optimizing low-thrust interplanetary trajectories.
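A minimal sketch of the Island Model Paradigm mentioned above: several populations evolve independently and periodically exchange their best individuals in a ring. The per-island optimiser here is a stand-in random-perturbation step rather than the GA, DE and PSO actually run by STOpS, and all parameters and the test function are assumptions.

import random

def sphere(x):
    return sum(v * v for v in x)

def evolve_island(pop, f, step=0.1):
    # Stand-in for one generation of GA / DE / PSO on a single island:
    # random perturbation with greedy replacement.
    out = []
    for x in pop:
        cand = [v + random.gauss(0.0, step) for v in x]
        out.append(cand if f(cand) < f(x) else x)
    return out

def island_model(f, dim=4, n_islands=3, pop_size=10, epochs=20, migrate_every=5):
    islands = [[[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
               for _ in range(n_islands)]
    for epoch in range(1, epochs + 1):
        islands = [evolve_island(pop, f) for pop in islands]
        if epoch % migrate_every == 0:
            # Ring migration: each island's best individual replaces the worst
            # individual of the next island.
            bests = [min(pop, key=f) for pop in islands]
            for i, pop in enumerate(islands):
                worst = max(range(pop_size), key=lambda j: f(pop[j]))
                pop[worst] = bests[(i - 1) % n_islands][:]
    return min((min(pop, key=f) for pop in islands), key=f)

print(sphere(island_model(sphere)))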
107.
Evoluční algoritmy / Evolutionary Algorithms. Szöllösi, Tomáš (January 2012)
The task of this thesis was to compare selected evolutionary algorithms in terms of their success rate and computational requirements. The thesis discusses the basic principles and concepts of evolutionary algorithms used for optimization problems. The author implemented the selected evolutionary algorithms and subsequently tested them on various test functions under exactly the given input conditions. Finally, the algorithms were compared and the results obtained for the different settings were evaluated.
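For reference, a minimal differential evolution (DE/rand/1/bin) sketch of the kind typically benchmarked on standard test functions such as Rastrigin; the parameter values are common defaults assumed here, not the settings used in the thesis.

import math, random

def rastrigin(x):
    # Standard multimodal test function with its global minimum of 0 at the origin.
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def differential_evolution(f, dim=5, pop_size=20, gens=200, F=0.5, CR=0.9, bound=5.12):
    pop = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1/bin: difference-vector mutation followed by binomial crossover.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            tf = f(trial)
            if tf <= fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

print(differential_evolution(rastrigin))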
108.
Akcelerace částicových rojů PSO pomocí GPU / Acceleration of Particle Swarm Optimization Using GPUs. Krézek, Vladimír (January 2012)
This work deals with the PSO (Particle Swarm Optimization) technique, which is capable of solving complex problems. The technique can be used to solve difficult combinatorial problems (the traveling salesman problem, knapsack problems) and to design integrated circuits and antennas, in fields such as biomedicine, robotics, artificial intelligence and finance. Although the PSO algorithm is very efficient, the time required to find appropriate solutions to real problems often makes the task intractable. The goal of this work is to accelerate the execution of this algorithm using graphics processors (GPUs), which offer high computing potential at a favourable price and size. The Boolean satisfiability problem (SAT) was chosen to verify and benchmark the implementation. As SAT belongs to the class of NP-complete problems, any reduction in solution time may broaden the class of tractable problems and yield new and interesting knowledge.
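A minimal sketch of the fitness evaluation typically used when PSO is applied to SAT: a candidate assignment is scored by the number of clauses it satisfies. This is the part that would be parallelised across particles on a GPU; the kernel itself is not shown, and this is not the thesis's implementation.

def satisfied_clauses(assignment, clauses):
    # `assignment` maps variable index -> bool; each clause is a list of signed
    # literals in DIMACS style, e.g. [1, -3] means (x1 OR NOT x3). On a GPU,
    # one thread (or block) would typically evaluate one particle's assignment.
    return sum(1 for clause in clauses
               if any((lit > 0) == assignment[abs(lit)] for lit in clause))

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
assignment = {1: True, 2: False, 3: True}
print(satisfied_clauses(assignment, clauses))   # 3, i.e. all clauses satisfied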
109.
Akcelerace částicových rojů PSO pomocí GPU / Particle Swarm Optimization on GPUs. Záň, Drahoslav (January 2013)
This thesis deals with a population-based stochastic optimization technique, PSO (Particle Swarm Optimization), and its acceleration. This simple but very effective technique is designed for solving difficult multidimensional problems in a wide range of applications. The aim of this work is to develop a parallel implementation of this algorithm with an emphasis on accelerating the search for a solution. For this purpose, a graphics card (GPU), which provides massive computational performance, was chosen. To evaluate the benefits of the proposed implementation, CPU and GPU implementations were created for solving a problem derived from the well-known NP-hard knapsack problem. The GPU application achieves an average speedup of 5 times, and a maximum speedup of almost 10 times, over the optimized CPU application on which it is based.
110.
An Analysis of Overfitting in Particle Swarm Optimised Neural Networks. van Wyk, Andrich Benjamin (January 2014)
The phenomenon of overfitting, where a feed-forward neural network (FFNN) over-trains on training data at the cost of generalisation accuracy, is known to be specific to the training algorithm used. This study investigates overfitting within the context of particle swarm optimised (PSO) FFNNs. Two of the most widely used PSO algorithms are compared in terms of FFNN accuracy and a description of the overfitting behaviour is established. Each of the PSO components is in turn investigated to determine its effect on FFNN overfitting. A study of the maximum velocity (Vmax) parameter is performed and it is found that smaller Vmax values are optimal for FFNN training. The analysis is extended to the inertia and acceleration coefficient parameters, where it is shown that specific interactions among the parameters have a dominant effect on the resultant FFNN accuracy and may be used to reduce overfitting. Further, the significant effect of the swarm size on network accuracy is also shown, with a critical range of swarm sizes being identified for effective training. The study is concluded with an investigation into the effect of different activation functions. Given strong empirical evidence, an hypothesis is made stating that the gradient of the activation function significantly affects the convergence of the PSO. Lastly, the PSO is shown to be a very effective algorithm for the training of self-adaptive FFNNs, capable of learning from unscaled data. / Dissertation (MSc)--University of Pretoria, 2014. / tm2015 / Computer Science / MSc / Unrestricted
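A minimal sketch of the maximum-velocity (Vmax) clamp investigated above, as it would be applied to a particle encoding an FFNN weight vector; the surrounding PSO loop and network evaluation are omitted, and the values are illustrative assumptions.

def clamp_velocity(velocity, vmax):
    # Restrict every velocity component to [-vmax, vmax]; smaller vmax values
    # limit the step size taken through weight space in each iteration.
    return [max(-vmax, min(vmax, v)) for v in velocity]

velocity = [2.3, -0.4, 5.1, -3.7]             # velocity over four (assumed) network weights
print(clamp_velocity(velocity, vmax=1.0))     # [1.0, -0.4, 1.0, -1.0]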