1 |
A Generalized theoretical deterministic particle swarm model. Cleghorn, Christopher Wesley. January 2013 (has links)
Particle swarm optimization (PSO) is a well-known population-based search algorithm, originally developed by Kennedy and Eberhart in 1995. The PSO has been utilized in a variety of application domains, providing a wealth of empirical evidence for its effectiveness as an optimizer. The PSO itself has undergone many alterations subsequent to its inception, some of which are fundamental to the PSO's core behavior, while others have been more application specific. The fundamental alterations to the PSO have to a large extent been the result of theoretical analysis of a PSO particle's long-term trajectory. The most obvious example is the need for velocity clamping in the original PSO. While there were empirical findings suggesting that each particle's velocity was increasing at a rapid rate, it was only once a solid theoretical study was performed that the reason for the velocity explosion was understood. A large amount of theoretical research has been done on the PSO, both for the deterministic model and, more recently, for the stochastic model.
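For reference, the canonical deterministic PSO update that most of this trajectory analysis studies can be written as follows; this is a sketch of the standard inertia-weight formulation, not necessarily the exact notation used in the thesis.

```latex
% Canonical deterministic PSO update (inertia-weight form), for reference only.
\begin{align}
  v_i(t+1) &= w\,v_i(t) + c_1\bigl(y_i(t) - x_i(t)\bigr) + c_2\bigl(\hat{y}_i(t) - x_i(t)\bigr),\\
  x_i(t+1) &= x_i(t) + v_i(t+1),
\end{align}
% where x_i is the position, v_i the velocity, y_i the personal best,
% \hat{y}_i the neighborhood best, w the inertia weight, and c_1, c_2 the
% acceleration coefficients; the deterministic model fixes the usual random
% scaling factors at constant values.
```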
This thesis presents an extension to the theoretical deterministic PSO model. Under the extended model, conditions for particle convergence to a point are derived. At present, all theoretical PSO research is done under the stagnation assumption in some form or another. Analysis under the stagnation assumption treats each particle's personal best and neighborhood best as non-changing. While analysis under the stagnation assumption is very informative, it can never provide a complete description of a PSO's behavior. Furthermore, the assumption implicitly removes the notion of a social network structure from the analysis. The model used in this thesis greatly weakens the stagnation assumption by instead assuming that each particle's personal best and neighborhood best can occupy an arbitrarily large number of unique positions. Empirical results are presented to support the theoretical findings. / Dissertation (MSc)--University of Pretoria, 2013. / gm2014 / Computer Science / Unrestricted
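For context, the well-known stagnation-based criterion for the deterministic model is quoted below as background only; the thesis derives its conditions under the weaker assumption described above, so this should not be read as its result.

```latex
% Classical convergence criterion for the deterministic PSO model under
% stagnation, quoted for background only.
\[
  |w| < 1
  \quad\text{and}\quad
  0 < c_1 + c_2 < 2\,(1 + w),
\]
% under which each particle's position converges to the weighted average
% (c_1 y_i + c_2 \hat{y}_i)/(c_1 + c_2) of its personal and neighborhood bests.
```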
|
2 |
Optimal Sequencing of Aircraft Engine Maintenance Events Using Particle Swarm Optimization. Vander Linde, Rebecca Behrends. 09 December 2016 (has links)
This research explores optimal sequencing of aircraft engine maintenance events. Due to the high ongoing maintenance costs and large capital investments required to support an aircraft engine fleet, the timing and associated costs of maintenance events are key to minimizing overall costs for an airline. This paper examines a novel application of particle swarm optimization techniques to create a decision tool that may be easily implemented by the practitioner. Numerical experiments demonstrate the quality of this solution method under multiple maintenance pricing structures and operational constraints.
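The abstract does not state how sequences are encoded; one common way to apply continuous PSO to a sequencing problem like this is a random-keys encoding, sketched below. The function names and the discounted cost model are placeholders for illustration, not the author's pricing structures.

```python
import numpy as np

def decode_sequence(keys):
    """Random-keys decoding: the sort order of a particle's continuous
    position vector defines a permutation of the maintenance events."""
    return np.argsort(keys)

def sequence_cost(order, event_costs, discount=0.95):
    """Illustrative placeholder only: later slots are discounted more.
    A real model would price each event under the airline's maintenance
    contracts and operational constraints."""
    weights = discount ** np.arange(len(order))
    return float(np.dot(weights, event_costs[order]))

# Any standard continuous PSO can then minimize
#   lambda keys: sequence_cost(decode_sequence(keys), event_costs)
# over real-valued key vectors of length equal to the number of events.
event_costs = np.array([40.0, 75.0, 20.0, 95.0, 60.0])
keys = np.random.default_rng(0).uniform(0, 1, len(event_costs))
print(decode_sequence(keys), sequence_cost(decode_sequence(keys), event_costs))
```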
|
3 |
Design and implementation of approximate algorithms for balanced clustering with PSO. Lai, Chun-Hau. January 2012 (has links)
Master of Science, Computer Science / This thesis is devoted to the design and implementation of approximate algorithms for exploring the best solutions to the Balanced Clustering problem, which consists of dividing a set of n points into k clusters such that each cluster contains at least ⌊n/k⌋ points, with the points as close as possible to the centroid of their cluster. We study the existing algorithms for this problem, and our analysis shows that they can fail to deliver an optimal result because they do not evaluate the intermediate results at each iteration of the algorithm. We therefore turn to the concept of particle swarms, originally introduced to simulate human social behavior, which allows the space of candidate solutions to be explored in a way that approaches the optimum quickly. We propose four algorithms based on Particle Swarm Optimization (PSO): PSO-Húngaro, PSO-Gale-Shapley, PSO-Absorción-Punto-Cercano and PSO-Convex-Hull, which exploit the random generation of centroids by the PSO algorithm to assign the points to these centroids, yielding a solution closer to the optimum.
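A minimal sketch of the balanced-assignment idea behind an approach like PSO-Húngaro, assuming the centroids come from a PSO particle and that n is divisible by k; this illustrates only the Hungarian-assignment step, not the thesis's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def balanced_assign(points, centroids):
    """Assign points to centroids so that every cluster receives exactly
    n // k points, by duplicating each centroid into n // k 'slots' and
    solving the resulting assignment problem (Hungarian method)."""
    n, k = len(points), len(centroids)
    per_cluster = n // k                                # assumes n divisible by k
    slots = np.repeat(centroids, per_cluster, axis=0)   # (n, d) slot centers
    cost = cdist(points, slots)                         # point-to-slot distances
    rows, cols = linear_sum_assignment(cost)
    labels = cols // per_cluster                        # map slot back to cluster
    return labels[np.argsort(rows)]

# Usage: the centroids would come from a PSO particle; here they are random.
rng = np.random.default_rng(0)
pts = rng.normal(size=(90, 2))
cents = rng.normal(size=(3, 2))
labels = balanced_assign(pts, cents)   # each cluster label appears exactly 30 times
```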
We evaluated these four algorithms on datasets with uniform and non-uniform distributions. For the non-uniformly distributed datasets, it proved unpredictable which of the four proposed algorithms would give the best result according to the chosen set of metrics (intra-cluster distance, Davies-Bouldin index and Dunn index). We therefore focused in depth on their behavior on uniformly distributed datasets.
During the evaluation process we found that the formation of balanced clusters by the PSO-Absorcion-Puntos-Importantes and PSO-Convex-Hull algorithms depends strongly on the order in which the centroids begin to absorb their nearest points, whereas the PSO-Húngaro and PSO-Gale-Shapley algorithms depend only on the generated centroids and not on the order in which the clusters are created. We conclude that PSO-Gale-Shapley shows the weakest performance in creating balanced clusters, while PSO-Húngaro achieves the expected result most efficiently, although it is limited by the size of the data and the form of the distribution. Finally, we found that for large datasets, regardless of the form of distribution, PSO-Convex-Hull outperforms the others, delivering the best results according to the metrics used.
|
4 |
VHTR Core Shuffling Algorithm Using Particle Swarm Optimization ReloPSO-3D. Lakshmipathy, Sathish Kumar. May 2012 (has links)
Improving core performance by reshuffling/reloading the fuel blocks within the core is one of the in-core fuel management methods, with two major benefits: the possibility of improving core life and increasing core safety. The VHTR is a hexagonal annular-core reactor with reflectors in the center and outside the three fuel rings. With block-type fuel assemblies, there is an opportunity for multi-dimensional fuel block movement within the core during scheduled reactor refueling operations.
As the core is symmetric, optimizing the shuffle operation for 1/6th of the core allows the same arrangement to be repeated across the remaining 5/6th. The VHTR has 170 fuel blocks in the core, of which 50 are control rod blocks that cannot be moved to regular fuel block locations. The reshuffling problem is then to find, among the 120! possible arrangements, the combination of the 120 remaining fuel blocks that minimizes power peaking and/or maximizes core life under safety constraints.
Evaluating each loading pattern (LP) during the shuffling requires a fitness function built from the parameters affecting power peaking and core life. Calculating the power peaking at each step using Monte Carlo simulations on a whole-core, exact-geometry model is time consuming and not feasible. A parameter called the localized reactivity potential is therefore developed from the definitions of reactivity and power peaking factor; it can be estimated for every block movement from the reaction rates and atom densities of the initial core burnup at the time of shuffling.
The algorithm (ReloPSO) is based on the particle swarm optimization algorithm: it drives the search toward the optimum from a set of random LPs using the fitness function built on the reactivity potential parameter. The algorithm works as expected, and the output obtained has a flatter reactivity profile than the input. The core criticality is found to increase when the core is shuffled closer to end of life. Detailed analysis of the burn runs after shuffling at different times of core operation is required to correlate the estimated and actual values of the reactivity parameter and to optimize the time of shuffle.
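As a rough illustration of how a PSO can move through the space of loading patterns, the sketch below uses a swap-sequence formulation with a placeholder fitness. Both the operators and the peak-to-average surrogate are assumptions for illustration, not ReloPSO's actual fitness built on the localized reactivity potential.

```python
import random

def peaking(pattern, block_potential, position_weight):
    """Placeholder fitness: peak-to-average of (position importance) x
    (block potential); stands in for a fitness built on the localized
    reactivity potential estimated from reaction rates and atom densities."""
    local = [position_weight[i] * block_potential[b] for i, b in enumerate(pattern)]
    return max(local) / (sum(local) / len(local))

def swaps_toward(current, target):
    """Swap sequence turning `current` into `target`; plays the role of
    (best - position) in continuous PSO."""
    cur, seq = list(current), []
    pos = {b: i for i, b in enumerate(cur)}
    for i, want in enumerate(target):
        if cur[i] != want:
            j = pos[want]
            seq.append((i, j))
            pos[cur[i]], pos[want] = j, i
            cur[i], cur[j] = cur[j], cur[i]
    return seq

def shuffle_pso(blocks, fitness, n_particles=20, iters=200, p_pb=0.5, p_gb=0.5, seed=0):
    """Swap-sequence discrete PSO over loading patterns (LPs); a sketch of
    the general idea only."""
    rng = random.Random(seed)
    parts = [rng.sample(blocks, len(blocks)) for _ in range(n_particles)]
    pbest = [p[:] for p in parts]
    pbest_f = [fitness(p) for p in parts]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i, p in enumerate(parts):
            moves = [s for s in swaps_toward(p, pbest[i]) if rng.random() < p_pb]
            moves += [s for s in swaps_toward(p, gbest) if rng.random() < p_gb]
            for a, b in moves:                    # apply a random subset of attracting swaps
                p[a], p[b] = p[b], p[a]
            f = fitness(p)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = p[:], f
                if f < gbest_f:
                    gbest, gbest_f = p[:], f
    return gbest, gbest_f

# 120 movable blocks with hypothetical per-block potentials and position weights.
rng = random.Random(1)
potential = {b: rng.uniform(0.8, 1.2) for b in range(120)}
weight = [1.0 + 0.5 * (i % 10 == 0) for i in range(120)]
best, best_f = shuffle_pso(list(range(120)), lambda p: peaking(p, potential, weight))
```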
|
5 |
Cooperative Models of Particle Swarm Optimizers. El-Abd, Mohammed. January 2008 (has links)
Particle Swarm Optimization (PSO) is one of the most effective optimization tools to emerge in the last decade. Although the original aim was to simulate the behavior of a group of birds or a school of fish looking for food, it was quickly realized that the approach could be applied to optimization problems. Different directions have been taken to analyze the PSO behavior as well as to improve its performance. One approach is the introduction of the concept of cooperation. This thesis focuses on studying this concept in PSO by investigating the different design decisions that influence the performance of cooperative PSO models and by introducing new approaches for information exchange.
Firstly, a comprehensive survey of all the cooperative PSO models proposed in the literature is compiled and a definition of what is meant by a cooperative PSO model is introduced. A taxonomy for classifying the different surveyed cooperative PSO models is given. This taxonomy classifies the cooperative models based on two different aspects: the approach the model uses for
decomposing the problem search space and the method used for placing the particles into the different cooperating swarms. The taxonomy helps in gathering all the proposed models under one roof and understanding the similarities and differences between these models.
Secondly, a number of parameters that control the performance of cooperative PSO models are identified. These parameters give answers to the four questions: Which information to share? When to share it? Whom to share it with? and What to do with it? A complete empirical study is conducted on one of the cooperative PSO models in order to understand how the performance changes under the influence of these parameters.
Thirdly, a new heterogeneous cooperative PSO model is proposed, which is based on the exchange of probability models rather than the classical migration of particles. The model uses two swarms that combine the ideas of PSO and Estimation of Distribution Algorithms (EDAs) and is considered heterogeneous since the cooperating swarms use different approaches to sample the
search space. The model is tested using different PSO models to ensure that the performance is robust against changing the underlying population topology. The experiments show that the model is able to produce better results than its components in many cases. The model also proves to be
highly competitive when compared to a number of state-of-the-art cooperative PSO algorithms.
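The exchange-of-probability-models idea can be sketched roughly as follows. The toy objective, the population sizes, what gets summarized into the Gaussian model, and how it is re-injected are all illustrative assumptions, not the thesis's exact design.

```python
import numpy as np

def sphere(z):                                   # toy objective
    return float(np.sum(z * z))

def fit_model(samples):
    """Summarize a set of solutions as a Gaussian probability model."""
    return samples.mean(axis=0), samples.std(axis=0) + 1e-9

def pso_eda_cooperative(f, dim=10, n=20, iters=200, exchange_every=10,
                        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Two cooperating populations, one PSO swarm and one Gaussian-EDA
    population, that periodically trade probability models instead of
    migrating particles."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))             # PSO positions
    v = np.zeros_like(x)                         # PSO velocities
    pb, pbf = x.copy(), np.apply_along_axis(f, 1, x)
    e = rng.uniform(-5, 5, (n, dim))             # EDA population
    ef = np.apply_along_axis(f, 1, e)
    gbest, gbest_f = pb[pbf.argmin()].copy(), float(pbf.min())
    for t in range(iters):
        # PSO step
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbf
        pb[better], pbf[better] = x[better], fx[better]
        # EDA step: fit a model to the best half and resample from it
        mu, sigma = fit_model(e[np.argsort(ef)[: n // 2]])
        e = rng.normal(mu, sigma, (n, dim))
        ef = np.apply_along_axis(f, 1, e)
        # Cooperation: exchange probability models, not particles
        if t % exchange_every == 0:
            mu_p, sig_p = fit_model(pb)          # model of the PSO personal bests
            e[: n // 4] = rng.normal(mu_p, sig_p, (n // 4, dim))
            ef[: n // 4] = np.apply_along_axis(f, 1, e[: n // 4])
            x[: n // 4] = rng.normal(mu, sigma, (n // 4, dim))
        # Track the best solution seen by either population
        for pop, popf in ((pb, pbf), (e, ef)):
            i = int(popf.argmin())
            if popf[i] < gbest_f:
                gbest, gbest_f = pop[i].copy(), float(popf[i])
    return gbest, gbest_f

best, best_f = pso_eda_cooperative(sphere)
```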
Finally, two different versions of the PSO algorithm are applied to the FPGA placement problem. One version is applied entirely in the discrete domain, which is the first attempt to solve this problem in that domain using a discrete PSO (DPSO). Another version is implemented in the continuous domain. The PSO algorithms are applied to several well-known FPGA benchmark problems of increasing dimensionality. The results are compared to those obtained by the academic Versatile Place and Route (VPR) placement tool, which is based on Simulated Annealing (SA). The results show that these methods are competitive for small and medium-sized problems; for larger problems, the methods provide very close results. The work also proposes the use of different cooperative PSO approaches built on the two versions, and their performance is compared to that of a single swarm.
|
7 |
Multi-population PSO-GA hybrid techniques: integration, topologies, and parallel composition. Franz, Wayne. January 2014 (has links)
Recent work on metaheuristic algorithms has shown that solution quality may be improved by composing algorithms with orthogonal characteristics. In this thesis, I study multi-population particle swarm optimization (MPSO) and genetic algorithm (GA) hybrid strategies. I begin by investigating the behaviour of MPSO with crossover, mutation, swapping, and all three combined, and show that the last of these is able to solve the most difficult benchmark functions. Because GAs converge slowly and MPSO provides a large degree of parallelism, I also develop several parallel hybrid algorithms. A composite approach executes PSO and GAs simultaneously in different swarms, and shows advantages when arranged in a star topology, particularly with a central GA. A static scheme executes the algorithms in series, with a GA performing the exploration followed by MPSO for exploitation. Finally, a dynamic scheme alternates between the two algorithms. Hybrid algorithms are well suited to parallelization, but exhibit tradeoffs between performance and solution quality.
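One way to picture the static scheme (GA exploration followed by PSO exploitation) is the sketch below. The operators, parameters, and toy objective are illustrative assumptions rather than the thesis's configuration, and only a single PSO swarm is shown rather than an MPSO.

```python
import numpy as np

def ga_stage(f, pop, gens, rng, mut_sigma=0.3):
    """Exploration stage: a simple real-coded GA with tournament selection,
    uniform crossover, Gaussian mutation, and elitist replacement."""
    n, d = pop.shape
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(gens):
        idx = rng.integers(0, n, (n, 2))                           # tournament pairs
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        mates = parents[rng.permutation(n)]
        mask = rng.uniform(size=(n, d)) < 0.5                      # uniform crossover
        children = np.where(mask, parents, mates)
        children = children + rng.normal(0, mut_sigma, (n, d))     # mutation
        cfit = np.apply_along_axis(f, 1, children)
        keep = cfit < fit                                          # elitist replacement
        pop[keep], fit[keep] = children[keep], cfit[keep]
    return pop, fit

def pso_stage(f, pop, iters, rng, w=0.7, c1=1.5, c2=1.5):
    """Exploitation stage: a PSO swarm seeded with the GA's final population."""
    x, v = pop.copy(), np.zeros_like(pop)
    pb, pbf = x.copy(), np.apply_along_axis(f, 1, x)
    g = pb[pbf.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbf
        pb[better], pbf[better] = x[better], fx[better]
        g = pb[pbf.argmin()].copy()
    return g, float(pbf.min())

# Static GA -> PSO hybrid on a toy objective.
rng = np.random.default_rng(0)
f = lambda z: float(np.sum(z * z))
pop, _ = ga_stage(f, rng.uniform(-5, 5, (40, 10)), gens=50, rng=rng)
best, best_f = pso_stage(f, pop, iters=100, rng=rng)
```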
|
8 |
Particle Swarm Optimization: Implementation and testing of a biologically inspired optimization method. Przybek, Tomáš. January 2016 (links)
This thesis covers the implementation and testing of Particle Swarm Optimization, a biologically inspired optimization method. It briefly introduces evolutionary algorithms and then analyzes the PSO algorithm and its parameters in detail. Testing is performed on numerical, nominal, and binary data. The application provides a graphical user interface. Finally, the algorithm is compared with a genetic algorithm and the results are discussed.
|
9 |
Motion correction of PET/CT images. Chong Chie, Juan Antonio Kim Hoo. January 2017 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Advances in health care technology help physicians make more accurate diagnoses about the health conditions of their patients. Positron Emission Tomography/Computed Tomography (PET/CT) is one of the many tools currently used to diagnose health and disease in patients. PET/CT scans are typically used to detect cancer, heart disease, and disorders of the central nervous system. Since PET/CT studies can take 60 minutes or more, it is impossible for patients to remain motionless throughout the scanning process. These movements create motion-related artifacts that alter the quantitative and qualitative results produced by the scanning process. The patient's motion results in image blurring, a reduced signal-to-noise ratio, and reduced image contrast, which can lead to misdiagnoses.
In the literature, both software- and hardware-based techniques have been studied to implement motion correction on medical images. Techniques based on an external motion tracking system are preferred by researchers because they offer better accuracy. This thesis proposes a motion correction system that uses 3D affine registration driven by particle swarm optimization, together with an off-the-shelf Microsoft Kinect camera, to eliminate or reduce errors caused by the patient's motion during a medical imaging study.
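A rough sketch of the registration idea: a 12-parameter 3D affine transform is applied to a moving volume and scored against a reference, and a small PSO searches the parameter space. The volume shapes, the similarity metric (mean squared difference), and all parameter ranges are assumptions for illustration, not the thesis's pipeline.

```python
import numpy as np
from scipy.ndimage import affine_transform

def registration_cost(params, fixed, moving):
    """Mean squared difference after applying a 3D affine transform whose
    12 parameters are a 3x3 matrix (row-major) plus a 3-vector offset."""
    matrix = params[:9].reshape(3, 3)
    offset = params[9:]
    warped = affine_transform(moving, matrix, offset=offset, order=1)
    return float(np.mean((fixed - warped) ** 2))

def pso_minimize(cost, dim, lo, hi, n=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO used only to drive the registration cost."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros_like(x)
    pb, pbf = x.copy(), np.array([cost(p) for p in x])
    g = pb[pbf.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbf
        pb[better], pbf[better] = x[better], f[better]
        g = pb[pbf.argmin()].copy()
    return g, float(pbf.min())

# Toy volumes standing in for image frames; the search is centred on identity.
rng = np.random.default_rng(1)
fixed = rng.random((16, 16, 16))
moving = np.roll(fixed, shift=2, axis=0)      # a crude stand-in for motion
lo = np.concatenate([np.eye(3).ravel() - 0.2, np.full(3, -4.0)])
hi = np.concatenate([np.eye(3).ravel() + 0.2, np.full(3, 4.0)])
params, err = pso_minimize(lambda p: registration_cost(p, fixed, moving), 12, lo, hi)
```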
|
10 |
Parallel Particle Swarm Optimization and Large Swarms. McNabb, Andrew W. 27 January 2011 (links) (PDF)
Optimization is the search for the maximum or minimum of a given objective function. Particle Swarm Optimization (PSO) is a simple and effective evolutionary algorithm, but it may take hours or days to optimize difficult objective functions that are deceptive or expensive. Deceptive functions may be highly multimodal and multidimensional, and PSO requires extensive exploration to avoid being trapped in local optima. Expensive functions, whose computational complexity may arise from dependence on detailed simulations or large datasets, take a long time to evaluate. For deceptive or expensive objective functions, PSO must be parallelized to use multiprocessor systems and clusters efficiently. This thesis investigates the implications of parallelizing PSO and, in particular, the details of parallelization and the effects of large swarms. PSO can be expressed naturally in Google's MapReduce framework to develop a simple and robust parallel implementation that automatically includes communication, load balancing, and fault tolerance. This flexible implementation makes it easy to apply modifications to the algorithm, such as those that improve optimization of difficult objective functions and improve parallel performance. Results show that larger swarms help with both of these goals, but they are most effective if arranged into sparse topologies with lower overhead from communication. Additionally, PSO must be modified to use communication more efficiently in a large sparse swarm for objective functions where information ideally flows quickly through the swarm. Swarm size is usually fixed at a modest number around 50, but particularly in a parallel computational environment, much larger swarms are much more effective for deceptive objective functions. Likewise, swarms much smaller than 50 are more effective for expensive but less deceptive functions. In general, swarm size should be carefully chosen using all available information about the objective function and computational environment.
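One natural map/reduce decomposition of PSO is sketched below, simulated serially; the record layout, the driver, and the toy objective are illustrative assumptions, and a large sparse swarm would run one reduce per neighborhood rather than a single global reduce.

```python
import numpy as np

def sphere(z):                                   # toy objective
    return float(np.sum(z * z))

def init_particle(rng, dim):
    x = rng.uniform(-5, 5, dim)
    return (x, np.zeros(dim), x.copy(), sphere(x))   # (position, velocity, pbest, pbest value)

def map_particle(state, gbest, rng, w=0.7, c1=1.5, c2=1.5):
    """Map step: each particle is updated and evaluated independently and
    emits its new state plus a candidate best. The record layout here is
    illustrative, not the thesis's actual key/value format."""
    x, v, pb, pbf = state
    r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
    v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = sphere(x)
    if f < pbf:
        pb, pbf = x, f
    return (x, v, pb, pbf), (pb, pbf)

def reduce_best(candidates):
    """Reduce step: fold candidate bests into a single (neighborhood or
    global) best."""
    return min(candidates, key=lambda c: c[1])

# Serial driver standing in for one MapReduce job per iteration.
rng = np.random.default_rng(0)
swarm = [init_particle(rng, 10) for _ in range(50)]
gbest, gbest_f = reduce_best([(s[2], s[3]) for s in swarm])
for _ in range(100):
    results = [map_particle(s, gbest, rng) for s in swarm]
    swarm = [r[0] for r in results]
    gbest, gbest_f = reduce_best([r[1] for r in results] + [(gbest, gbest_f)])
```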
|