11 |
Parallel Particle Swarm Optimization and Large Swarms. McNabb, Andrew W. 27 January 2011.
Optimization is the search for the maximum or minimum of a given objective function. Particle Swarm Optimization (PSO) is a simple and effective evolutionary algorithm, but it may take hours or days to optimize difficult objective functions that are deceptive or expensive. Deceptive functions may be highly multimodal and multidimensional, and PSO requires extensive exploration to avoid being trapped in local optima. Expensive functions, whose computational complexity may arise from dependence on detailed simulations or large datasets, take a long time to evaluate. For deceptive or expensive objective functions, PSO must be parallelized to use multiprocessor systems and clusters efficiently. This thesis investigates the implications of parallelizing PSO and, in particular, the details of parallelization and the effects of large swarms. PSO can be expressed naturally in Google's MapReduce framework to develop a simple and robust parallel implementation that automatically includes communication, load balancing, and fault tolerance. This flexible implementation makes it easy to apply modifications to the algorithm, such as those that improve optimization of difficult objective functions and improve parallel performance. Results show that larger swarms help with both of these goals, but they are most effective if arranged into sparse topologies with lower overhead from communication. Additionally, PSO must be modified to use communication more efficiently in a large sparse swarm for objective functions where information ideally flows quickly through a large swarm. Swarm size is usually fixed at a modest number around 50, but particularly in a parallel computational environment, much larger swarms are much more effective for deceptive objective functions. Likewise, swarms much smaller than 50 are more effective for expensive but less deceptive functions. In general, swarm size should be carefully chosen using all available information about the objective function and computational environment.
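As a rough illustration of the decomposition the abstract describes, the sketch below phrases one PSO iteration as a map step (per-particle evaluation, freely parallelizable) and a reduce step (communication and movement). This is a minimal, assumption-laden sketch, not the thesis implementation: it assumes a synchronous global-best star topology, a standard inertia-weighted update, and a toy sphere objective.

import random

W, C1, C2 = 0.729, 1.49445, 1.49445   # common constriction-equivalent constants

def sphere(x):                         # toy objective: sphere function
    return sum(xi * xi for xi in x)

def map_phase(particle, f):
    """Evaluate one particle and refresh its personal best (runs per-particle)."""
    fit = f(particle["pos"])
    if fit < particle["best_val"]:
        particle["best_val"], particle["best_pos"] = fit, particle["pos"][:]
    return particle

def reduce_phase(swarm):
    """Combine mapper outputs: find the global best, then move every particle."""
    gbest = min(swarm, key=lambda p: p["best_val"])["best_pos"]
    for p in swarm:
        for d in range(len(p["pos"])):
            r1, r2 = random.random(), random.random()
            p["vel"][d] = (W * p["vel"][d]
                           + C1 * r1 * (p["best_pos"][d] - p["pos"][d])
                           + C2 * r2 * (gbest[d] - p["pos"][d]))
            p["pos"][d] += p["vel"][d]
    return swarm

# One iteration: map (parallelizable) then reduce (the communication step).
swarm = [{"pos": [random.uniform(-5, 5) for _ in range(2)],
          "vel": [0.0, 0.0], "best_pos": [0.0, 0.0], "best_val": float("inf")}
         for _ in range(10)]
swarm = reduce_phase([map_phase(p, sphere) for p in swarm])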
|
12 |
Optimization-based mechanism synthesis using multi-objective parallel asynchronous particle swarm optimization. McDougall, Robin David. 01 December 2008.
A distributed variant of multi-objective particle swarm optimization (MOPSO), called multi-objective parallel asynchronous particle swarm optimization (MOPAPSO), is presented. The effects of distributing objective function calculations to slave processors on the results and on performance are investigated, and the method is employed for the synthesis of Grashof mechanisms.
By using a formal multi-objective handling scheme based on Pareto dominance criteria, the need to pre-weight competing systemic objective functions is removed and the optimal solution for a design problem can be selected from a front of candidates after the parameter optimization has been completed.
MOPAPSO's ability to match MOPSO's results while exploiting parallelization for improved performance is demonstrated. Results for both four-bar and five-bar mechanism synthesis examples are shown. / UOIT
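For readers unfamiliar with the Pareto-dominance handling the abstract mentions, a minimal sketch follows (my illustration, not MOPAPSO itself): a candidate design joins the archive of non-dominated solutions only if nothing already there dominates it, which is what removes the need to pre-weight the objectives.

def dominates(a, b):
    """True if objective vector a dominates b (minimization): a is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_front(front, candidate):
    """Insert candidate into the non-dominated archive, pruning anything it dominates."""
    if any(dominates(f, candidate) for f in front):
        return front                           # candidate is dominated: discard
    return [f for f in front if not dominates(candidate, f)] + [candidate]

# Example with two competing objectives (e.g., tracking error vs. actuator effort).
front = []
for objs in [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (2.5, 2.5)]:
    front = update_front(front, objs)
print(front)   # (2.5, 2.5) is dominated by (2.0, 2.0) and dropped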
|
13 |
A Current-Based Preventive Security-Constrained Optimal Power Flow by Particle Swarm Optimization. Zhong, Yi-Shun. 14 February 2008.
An Equivalent Current Injection (ECI) based Preventive Security-Constrained Optimal Power Flow (PSCOPF) is presented in this thesis, and a particle swarm optimization (PSO) algorithm is developed for solving non-convex Optimal Power Flow (OPF) problems. The thesis integrates Simulated Annealing Particle Swarm Optimization (SAPSO) and Multiple Particle Swarm Optimization (MPSO), yielding a fast algorithm for finding the global optimum. Optimal power flow is solved with an Equivalent Current Injection based OPF (ECIOPF) algorithm. This OPF handles both continuous and discrete control variables and is therefore a mixed-integer optimal power flow (MIOPF). The continuous control variables modeled are the active power outputs and generator-bus voltage magnitudes, while the discrete ones are the shunt capacitor devices. The feasibility of the proposed method is demonstrated on the standard IEEE 30-bus system, and it is compared with other stochastic methods in terms of solution quality. Security analysis is also conducted: a ranking method is used to highlight the most severe event caused by a specific fault. A preventive algorithm makes use of the contingency information to keep the system secure and avoid violations when a fault occurs; generators are used to adjust line flows so that the trip of the most severe line would not cause a major problem.
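One common way a PSO handles the mixed continuous/discrete control variables described here is to let particles fly in a continuous space and round the discrete dimensions to valid device settings before evaluation. The sketch below shows that idea only; it is an assumption, not the thesis algorithm, and the cost surrogate and dimension layout are hypothetical.

import random

def decode(position, discrete_dims, steps):
    """Round the discrete dimensions of a particle to valid device settings."""
    x = list(position)
    for d in discrete_dims:
        x[d] = min(steps, max(0, round(x[d])))   # clamp to {0, ..., steps}
    return x

def fuel_cost(x):
    """Stand-in objective: a quadratic generation-cost surrogate (hypothetical)."""
    return sum((xi - 1.0) ** 2 for xi in x)

# Dimensions 0-1: continuous generator set-points; dimension 2: a capacitor bank tap.
pos = [random.uniform(0, 2), random.uniform(0, 2), random.uniform(0, 5)]
print(fuel_cost(decode(pos, discrete_dims=[2], steps=5)))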
|
14 |
Nature inspired computational intelligence for financial contagion modelling. Liu, Fang. January 2014.
Financial contagion refers to a scenario in which small shocks, which initially affect only a few financial institutions or a particular region of the economy, spread to the rest of the financial sector and to other countries whose economies were previously healthy, resembling the “transmission” of a medical disease. Financial contagion happens at both the domestic and international levels. At the domestic level, it is usually the failure of a domestic bank or financial intermediary that triggers transmission, by defaulting on inter-bank liabilities, selling assets in a fire sale, and undermining confidence in similar banks. An example of this phenomenon is the failure of Lehman Brothers and the subsequent turmoil in the US financial markets. International financial contagion happens in both advanced and developing economies, and is the transmission of financial crises across financial markets. Within the current globalised financial system, with large volumes of cash flow and the cross-regional operations of large banks and hedge funds, financial contagion usually happens simultaneously among domestic institutions and across countries. There is no conclusive definition of financial contagion; most research papers study contagion by analyzing the change in the variance-covariance matrix during periods of market turmoil. King and Wadhwani (1990) first tested the correlations between the US, UK, and Japan during the US stock market crash of 1987. Boyer (1997) finds significant increases in correlation during financial crises, reinforcing a definition of financial contagion as a change in correlation during the crash period. Forbes and Rigobon (2002) give a definition of financial contagion; in their work, the term interdependence is used as the alternative to contagion, and they claim that for the period they study there is no contagion but only interdependence. Interdependence leads to common price movements during periods both of stability and of turmoil. In the past two decades, many studies (e.g., Kaminsky et al., 1998; Kaminsky, 1999) have developed early warning systems focused on the origins of financial crises rather than on financial contagion. Other authors (e.g., Forbes and Rigobon, 2002; Caporale et al., 2005) have instead focused on studying contagion or interdependence. In this thesis, an overall mechanism is proposed that simulates the characteristics of a crisis propagating through contagion. Within that scope, a new co-evolutionary market model is developed, in which some of the technical traders change their behaviour during a crisis and transform into herd traders, making their decisions based on market sentiment rather than underlying strategies or factors. The thesis focuses on the transformation of market interdependence into contagion and on the effects of contagion. The author first builds a multi-national platform that allows different types of players to trade, implementing their own rules and considering information from the domestic market and a foreign market. Traders' strategies and the performance of the simulated domestic market are trained using historical prices from both markets, optimizing the artificial market's parameters through immune particle swarm optimization (I-PSO) techniques. The author also introduces a mechanism contributing to the transformation of technical traders into herd traders.
A generalized auto-regressive conditional heteroscedasticity copula (GARCH-copula) model is further applied to calculate the tail dependence between the affected market and the origin of the crisis. That parameter is used in the fitness function for selecting the best solutions within the evolving population of possible model parameters, and therefore in the optimization criteria for the contagion simulation. The overall model is also applied in predictive mode, where the author optimizes over the pre-crisis period using data from the domestic market and the crisis-origin foreign market, and then predicts the affected domestic market during the crisis period using data from the foreign market.
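The abstract builds on Forbes and Rigobon (2002), whose key point is that raw cross-market correlation rises mechanically when the crisis-origin market becomes more volatile. A minimal numpy sketch of their heteroscedasticity-adjusted correlation follows (my illustration with synthetic data, not the thesis code, which uses a GARCH-copula tail-dependence measure instead):

import numpy as np

def adjusted_correlation(source_crisis, target_crisis, source_calm):
    """Correct the crisis-period correlation for the source market's higher variance."""
    rho = np.corrcoef(source_crisis, target_crisis)[0, 1]
    delta = np.var(source_crisis) / np.var(source_calm) - 1.0  # relative variance rise
    return rho / np.sqrt(1.0 + delta * (1.0 - rho ** 2))

# Synthetic example: the raw crisis-period correlation overstates co-movement.
rng = np.random.default_rng(0)
calm = rng.normal(0, 1.0, 500)          # source-market returns, tranquil period
crisis_src = rng.normal(0, 2.5, 250)    # source-market returns, turmoil (higher variance)
crisis_tgt = 0.4 * crisis_src + rng.normal(0, 1.0, 250)   # linked target market
print(adjusted_correlation(crisis_src, crisis_tgt, calm))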
|
15 |
Modelación y Optimización de Redes IP Usando Herramientas de Inteligencia Computacional [Modeling and Optimization of IP Networks Using Computational Intelligence Tools]. Urrutia Arestizábal, Patricio Alejandro. January 2007.
No description available.
|
16 |
Cognitive smart agents for optimising OpenFlow rules in software defined networks. Sabih, Ann Faik. January 2017.
This research provides a robust solution based on artificial intelligence (AI) techniques to overcome challenges in Software Defined Networks (SDNs) that can jeopardise the overall performance of the network. The proposed approach, presented in the form of an intelligent agent appended to the SDN network, comprises a new hybrid intelligent mechanism that optimises the performance of SDN based on heuristic optimisation methods under an Artificial Neural Network (ANN) paradigm. Evolutionary optimisation techniques, including Particle Swarm Optimisation (PSO) and Genetic Algorithms (GAs), are deployed to find the set of inputs that gives the maximum performance of an SDN-based network. The ANN model is trained and applied as a predictor of SDN behaviour according to effective traffic parameters. The parameters used in this study include round-trip time and throughput, obtained from the flow table rules of each switch. A POX controller and OpenFlow switches, which characterise the behaviour of an SDN, have been modelled with three different topologies. Generalisation of the prediction model has been tested with new raw data unseen in the training stage. The simulation results show reasonably good performance of the network, with a Mean Square Error (MSE) of less than 10⁻⁶. Following the attainment of the predicted ANN model, PSO and GA optimisers were used to achieve the best performance of the SDN-based network. The PSO approach combined with the predicted SDN model was found to be comparatively better than the GA approach in terms of performance indices and computational efficiency. Overall, this research demonstrates that building an intelligent agent enhances the overall performance of the SDN network. Three different SDN topologies have been implemented to study the impact of the proposed approach, with the findings demonstrating a reduction in the packets dropped ratio (PDR) of 28-31%. Moreover, the packets sent to the SDN controller were also reduced by 35-36%, depending on the generated traffic. The developed approach minimised the round-trip time (RTT) by 23% and enhanced the throughput by 10%. Finally, in the event that the SDN controller fails, the optimised intelligent agent can immediately take over control of the entire network.
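The surrogate-plus-optimiser pattern the abstract describes can be sketched as follows: a trained ANN predicts a performance score from traffic parameters, and PSO searches that predictor for the best inputs. Everything here is a hypothetical stand-in (a toy quadratic in place of the trained network, a plain global-best PSO), not the thesis implementation.

import random

def ann_predict(x):
    """Stand-in for the trained ANN surrogate: maps (RTT, throughput) features
    to a scalar performance score. A real model would be trained beforehand."""
    rtt, tput = x
    return -(rtt - 0.2) ** 2 - (tput - 0.8) ** 2   # toy peak at (0.2, 0.8)

def pso_maximize(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.random() for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest, pval = [p[:] for p in pos], [f(p) for p in pos]
    gbest = max(pbest, key=f)[:]
    gval = f(gbest)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            s = f(pos[i])
            if s > pval[i]:                 # refresh personal and global bests
                pval[i], pbest[i] = s, pos[i][:]
                if s > gval:
                    gval, gbest = s, pos[i][:]
    return gbest

print(pso_maximize(ann_predict))   # should approach (0.2, 0.8)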
|
17 |
Using swarm intelligence for distributed job scheduling on the grid. Moallem, Azin. 16 April 2009.
With the rapid growth of data and computational needs, distributed systems and computational Grids are gaining more and more attention. Grids play an important and growing role in today's networks. The huge number of computations a Grid can complete in a specific time cannot be matched by the best supercomputers. However, Grid performance can still be improved by making sure all the resources available in the Grid are utilized through a good load balancing algorithm. The purpose of such algorithms is to make sure all nodes are equally involved in Grid computations. This research proposes two new distributed, swarm-intelligence-inspired load balancing algorithms: one based on ant colony optimization, called AntZ, and one based on particle swarm optimization, called ParticleZ. Distributed load balancing avoids a single point of failure in the system. In the AntZ algorithm, an ant is invoked in response to submitting a job to the Grid, and this ant surfs the network to find the best resource to deliver the job to. In the ParticleZ algorithm, each node plays the role of a particle and moves toward other particles by sharing its workload with them. The proposed approaches are simulated using a Grid simulation toolkit (GridSim) dedicated to Grid simulations, and the performance of the algorithms is evaluated using several criteria (e.g., makespan and load balancing level). A comparison of the proposed approaches with a classical approach called the State Broadcast Algorithm and with two random approaches is also provided. Experimental results show the proposed algorithms (AntZ and ParticleZ) perform very well in a Grid environment. In particular, the use of particle swarm optimization, which has not previously been addressed in the literature, can yield better performance than the ant colony approach in many scenarios.
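A minimal sketch of the ParticleZ idea as described above (my reading of the abstract, not the thesis code): each node behaves like a particle and drifts toward its neighbours by handing off part of the load difference each round.

def particlez_step(loads, neighbours, step=0.5):
    """One load-sharing round. loads[i] is node i's queued work;
    neighbours[i] lists the node ids that i may exchange load with."""
    new = list(loads)
    for i, load in enumerate(loads):
        for j in neighbours[i]:
            if load > loads[j]:                 # move work toward the lighter node
                transfer = step * (load - loads[j]) / len(neighbours[i])
                new[i] -= transfer
                new[j] += transfer
    return new

# Four nodes on a ring: an unbalanced grid evens out over a few rounds.
loads = [100.0, 10.0, 40.0, 10.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(5):
    loads = particlez_step(loads, ring)
print([round(x, 1) for x in loads])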
|
19 |
Particle Swarm Optimization Algorithm for Multiuser Detection in DS-CDMA System. Fang, Ping-hau. 31 July 2010.
In direct-sequence code division multiple access (DS-CDMA) systems, heuristic optimization algorithms used for multiuser detection include genetic algorithms (GA) and the simulated annealing (SA) algorithm. In this thesis, we use particle swarm optimization (PSO) algorithms to solve the optimization problem of multiuser detection (MUD). The PSO algorithm has several advantages, such as fast convergence, low computational complexity, and good performance in searching for the optimum solution. In order to enhance performance and reduce the number of parameters, we propose two modified PSO algorithms: inertia weighting controlled PSO (W-PSO) and reduced-parameter PSO (R-PSO). Simulation results show that the performance of the proposed algorithms approaches that of the optimal solution. Furthermore, the proposed algorithms converge faster and have lower complexity than other conventional algorithms.
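To see why PSO maps onto multiuser detection, note that the optimum detector searches bit vectors b in {-1,+1}^K maximising the standard likelihood metric 2·bᵀy − bᵀRb, with y the matched-filter outputs and R the user cross-correlation matrix. The sketch below is a generic binary PSO for that search (a sigmoid of the velocity sets each bit), offered as an illustration rather than the W-PSO or R-PSO variants proposed in the thesis.

import math
import random

def mud_metric(b, y, R):
    """Likelihood metric for candidate bit vector b: higher is better."""
    K = len(b)
    quad = sum(b[i] * R[i][j] * b[j] for i in range(K) for j in range(K))
    return 2.0 * sum(b[i] * y[i] for i in range(K)) - quad

def binary_pso_mud(y, R, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    K = len(y)
    pos = [[random.choice([-1, 1]) for _ in range(K)] for _ in range(n)]
    vel = [[0.0] * K for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [mud_metric(p, y, R) for p in pos]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(K):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # sigmoid of the velocity gives the probability of bit +1
                pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else -1
            s = mud_metric(pos[i], y, R)
            if s > pval[i]:
                pval[i], pbest[i] = s, pos[i][:]
                if s > gval:
                    gval, gbest = s, pos[i][:]
    return gbest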
|
20 |
Applying MapReduce Island-based Genetic Algorithm-Particle Swarm Optimization to the inference of large Gene Regulatory Network in Cloud Computing environment. Huang, Wei-Jhe. 13 September 2012.
The construction of Gene Regulatory Networks (GRNs) is one of the most important issues in systems biology. To infer a large-scale GRN with a nonlinear mathematical model, researchers face a time-consuming computation due to the large number of network parameters involved. In recent years, cloud computing techniques have been widely used to solve large-scale problems. Among them, Hadoop is currently the most well-known and reliable cloud computing framework, allowing users to analyze large amounts of data in a distributed environment (i.e., MapReduce). It also supports data backup and data recovery mechanisms.
This study proposes an Island-based GAPSO algorithm under the Hadoop cloud computing environment to infer large-scale GRNs. GAPSO exploits the position and velocity updates of PSO and integrates the operators of the Genetic Algorithm; this approach is often used to derive the optimal solution in nonlinear mathematical models. Several sets of experiments have been conducted, in which the number of network nodes varied from 50 to 125. The experiments were executed in the Hadoop distributed environment with 10, 20, and 26 computers, respectively. In the experiments inferring the network with 125 gene nodes on the largest Hadoop cluster (i.e., 26 computers), the proposed framework performed up to 9.7 times faster than a stand-alone computer, meaning that the approach eliminates about 90% of the computation time in a single experimental run.
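The island structure the abstract describes can be sketched as follows: each island evolves its own sub-population independently (one map task per island in MapReduce terms), and a periodic migration exchanges the best individuals along a ring. The evolve step below is a placeholder mutate-and-keep-better rule standing in for the real GAPSO operators; all of it is an assumed structure, not the thesis code.

import random

def evolve_island(pop, fitness):
    """Placeholder for one GAPSO epoch on an island: a simple
    mutate-and-keep-better step stands in for the real PSO+GA operators."""
    out = []
    for x in pop:
        y = [xi + random.gauss(0, 0.1) for xi in x]
        out.append(y if fitness(y) < fitness(x) else x)
    return out

def migrate_ring(islands, fitness, k=1):
    """Send each island's k best individuals to the next island, replacing its worst."""
    n = len(islands)
    for i in range(n):
        donors = sorted(islands[i], key=fitness)[:k]
        target = islands[(i + 1) % n]
        target.sort(key=fitness, reverse=True)      # worst individuals first
        target[:k] = [d[:] for d in donors]
    return islands

fitness = lambda x: sum(xi * xi for xi in x)        # toy stand-in objective
islands = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
           for _ in range(5)]
for epoch in range(50):
    islands = [evolve_island(p, fitness) for p in islands]   # parallelizable maps
    if epoch % 10 == 9:
        islands = migrate_ring(islands, fitness)             # communication step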
|