  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cognitive smart agents for optimising OpenFlow rules in software defined networks

Sabih, Ann Faik January 2017 (has links)
This research provides a robust solution based on artificial intelligence (AI) techniques to overcome the challenges in Software Defined Networks (SDNs) that can jeopardise the overall performance of the network. The proposed approach, presented in the form of an intelligent agent appended to the SDN, comprises a new hybrid intelligent mechanism that optimises the performance of the SDN based on heuristic optimisation methods under an Artificial Neural Network (ANN) paradigm. Evolutionary optimisation techniques, including Particle Swarm Optimisation (PSO) and Genetic Algorithms (GAs), are deployed to find the set of inputs that gives the maximum performance of an SDN-based network. The ANN model is trained and applied as a predictor of SDN behaviour according to effective traffic parameters. The parameters used in this study include round-trip time and throughput, which were obtained from the flow table rules of each switch. A POX controller and OpenFlow switches, which characterise the behaviour of an SDN, have been modelled with three different topologies. Generalisation of the prediction model has been tested with new raw data that were unseen in the training stage. The simulation results show a reasonably good performance of the network, with a Mean Square Error (MSE) of less than 10⁻⁶. Following the attainment of the predicted ANN model, it was combined with the PSO and GA optimisers to achieve the best performance of the SDN-based network. The PSO approach combined with the predicted SDN model was identified as being comparatively better than the GA approach in terms of performance indices and computational efficiency. Overall, this research demonstrates that building an intelligent agent will enhance the overall performance of the SDN network.
Three different SDN topologies have been implemented to study the impact of the proposed approach, with the findings demonstrating a reduction in the packet drop ratio (PDR) of 28-31%. Moreover, the packets sent to the SDN controller were reduced by 35-36%, depending on the generated traffic. The developed approach minimised the round-trip time (RTT) by 23% and enhanced the throughput by 10%. Finally, in the event that the SDN controller fails, the optimised intelligent agent can immediately take over and control the entire network.
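Like most entries in these results, the work above builds on the canonical global-best PSO update (an inertia term plus cognitive and social attraction). A minimal illustrative sketch follows; it is not the thesis's implementation, and the coefficients `w`, `c1`, `c2` and the sphere test function are assumptions for demonstration only:

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal global-best PSO: returns (best_position, best_value)."""
    rng = random.Random(seed)
    # Initialise positions uniformly in [-5, 5] with zero velocities.
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                     # personal bests
    pval = [fitness(x) for x in xs]
    gi = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Canonical velocity update: inertia + cognitive + social terms.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = fitness(xs[i])
            if f < pval[i]:                        # update personal best
                pbest[i], pval[i] = xs[i][:], f
                if f < gval:                       # update global best
                    gbest, gval = xs[i][:], f
    return gbest, gval

# Minimise the 3-D sphere function as a toy objective.
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In the thesis above, the fitness would instead be the trained ANN predictor of SDN performance; the sphere function simply stands in for any black-box objective.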
2

The automatic placement of multiple indoor antennas using Particle Swarm Optimisation

Kelly, Marvin G. January 2016 (has links)
In this thesis, a Particle Swarm Optimisation (PSO) method combined with a ray propagation method is presented as a means to optimally locate multiple antennas in an indoor environment. This novel approach combines Particle Swarm Optimisation with geometric partitioning. The PSO algorithm uses swarm intelligence to determine the optimal transmitter location within the building layout, using the Keenan-Motley indoor propagation model to determine the fitness of a location. If a transmitter placed at that optimal location and transmitting at maximum power cannot meet the coverage requirements of the entire indoor space, the space is geometrically partitioned and the PSO is initiated again independently in each partition. The method outputs the number of antennas, their effective isotropic radiated power (EIRP) and the physical locations required to meet the coverage requirements. An example scenario is presented for a real building at Loughborough University and is compared against a conventional planning technique used widely in practice.
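The fitness evaluation described above can be sketched with a toy one-slope, Keenan-Motley-style path-loss model (a distance term plus a fixed loss per wall crossed), scoring a candidate transmitter location by the fraction of receiver points that meet a sensitivity threshold. All constants here (`l0`, `n`, `wall_loss`, EIRP, sensitivity) and the flat-floor geometry are illustrative assumptions, not values from the thesis:

```python
import math

def path_loss_db(d_m, walls, l0=40.0, n=2.0, wall_loss=5.0):
    """One-slope indoor path loss in dB with a per-wall penalty (assumed values)."""
    d = max(d_m, 1.0)                 # clamp to avoid log of tiny distances
    return l0 + 10.0 * n * math.log10(d) + walls * wall_loss

def coverage_fitness(tx, points, walls_between, eirp_dbm=20.0, sensitivity_dbm=-80.0):
    """Fraction of receiver points whose received power meets the sensitivity."""
    covered = 0
    for p in points:
        d = math.dist(tx, p)
        rx = eirp_dbm - path_loss_db(d, walls_between(tx, p))
        if rx >= sensitivity_dbm:
            covered += 1
    return covered / len(points)

# Toy floor: a 20 m x 10 m open room sampled on a 2 m grid, no internal walls.
grid = [(x, y) for x in range(0, 21, 2) for y in range(0, 11, 2)]
fit = coverage_fitness((10.0, 5.0), grid, walls_between=lambda a, b: 0)
```

In the approach above, a PSO swarm would search over candidate `tx` positions to maximise this fitness, and the floor would be partitioned whenever the best achievable fitness falls short of full coverage.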
3

An efficient intelligent analysis system for confocal corneal endothelium images

Sharif, Mhd Saeed, Qahwaji, Rami S.R., Shahamatnia, E., Alzubaidi, R., Ipson, Stanley S., Brahma, A. 01 September 2015 (has links)
A confocal microscope provides a sequence of images of the corneal layers and structures at different depths, from which medical clinicians can extract clinical information on the state of health of the patient's cornea. A hybrid model based on snakes (active contours) and particle swarm optimisation (S-PSO) is proposed to analyse confocal endothelium images. The proposed system is able to pre-process the images (quality enhancement, noise reduction), detect the cells, measure the cell density and identify abnormalities in the analysed data sets. Three normal corneal data sets acquired using a confocal microscope, and two abnormal endothelium images associated with diseases, have been investigated with the proposed system. Promising results are achieved, and the performance of this system is compared with that of two morphology-based approaches. The developed system can be deployed as a clinical tool to underpin the expertise of ophthalmologists in analysing confocal corneal images.
4

Bayesian inference for compact binary sources of gravitational waves / Inférence Bayésienne pour les sources compactes binaires d’ondes gravitationnelles

Bouffanais, Yann 11 October 2017 (has links)
The first detection of gravitational waves in 2015 has opened a new window for the study of the astrophysics of compact binaries. Thanks to the data taken by the ground-based detectors advanced LIGO and advanced Virgo, it is now possible to constrain the physical parameters of compact binaries using a full Bayesian analysis in order to increase our physical knowledge of compact binaries. However, in order to perform such an analysis, it is essential to have efficient algorithms both to search for the signals and for parameter estimation. The main part of this thesis has been dedicated to the implementation of a Hamiltonian Monte Carlo (HMC) algorithm suited to the parameter estimation of gravitational waves emitted by compact binaries composed of neutron stars. The algorithm has been tested on a selection of sources and has produced better performance than other MCMC methods such as Metropolis-Hastings and Differential Evolution Monte Carlo. The implementation of the HMC algorithm in the data analysis pipelines of the LIGO/Virgo collaboration could greatly increase the efficiency of parameter estimation. In addition, it could drastically reduce the computation time associated with the parameter estimation of such sources of gravitational waves, which will be of particular interest in the near future when there will be many detections by the ground-based network of gravitational wave detectors.
Another aspect of this work was dedicated to the implementation of a search algorithm for gravitational wave signals emitted by monochromatic compact binaries as observed by the space-based detector LISA. The developed algorithm is a mixture of several evolutionary algorithms, including Particle Swarm Optimisation. This algorithm has been tested on several test cases and has been able to find all the sources buried in a given signal. Furthermore, the algorithm has been able to find sources over a frequency band as wide as 1 mHz, which had not been achieved at the time of this study.
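The HMC machinery summarised above (leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept step) can be sketched for a toy 1-D standard-normal target. The step size, trajectory length and chain length below are illustrative assumptions, not the thesis's tuned values, and the target is far simpler than a gravitational-wave likelihood:

```python
import math
import random

def hmc(log_prob, grad_log_prob, x0, n_samples=2000, eps=0.2, n_leap=10, seed=3):
    """Minimal 1-D Hamiltonian Monte Carlo sampler; returns the chain."""
    rng = random.Random(seed)
    samples, x = [], x0
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                    # resample momentum
        x_new, p_new = x, p
        # Leapfrog integration: half momentum step, full position/momentum
        # steps, final half momentum step.
        p_new += 0.5 * eps * grad_log_prob(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_prob(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_prob(x_new)
        # Metropolis acceptance on the joint (position, momentum) energy.
        h_old = -log_prob(x) + 0.5 * p * p
        h_new = -log_prob(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random() + 1e-300) < h_old - h_new:
            x = x_new
        samples.append(x)
    return samples

# Standard normal target: log p(x) = -x^2/2 (up to a constant), gradient -x.
chain = hmc(lambda x: -0.5 * x * x, lambda x: -x, x0=0.0)
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
```

For a multi-dimensional posterior over binary parameters, the same structure applies with vector positions and momenta; the gradient of the log-likelihood is what makes HMC efficient relative to random-walk Metropolis.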
5

Task Scheduling Using Discrete Particle Swarm Optimisation / Schemaläggning genom diskret Particle Swarm Optimisation

Karlberg, Hampus January 2020 (has links)
Optimising task allocation in networked systems helps in utilising available resources. When working with unstable and heterogeneous networks, task scheduling can be used to optimise task completion time, energy efficiency and system reliability. The dynamic nature of networks also means that the optimal schedule is subject to change over time, and the heterogeneity and variability in network design complicate the translation of setups from one network to another. Discrete Particle Swarm Optimisation (DPSO) is a metaheuristic that can be used to find solutions to task scheduling problems. This thesis explores how DPSO can be used to optimise job scheduling in an unstable network. The purpose is to find solutions for networks like the ones used on trains, in order to facilitate trajectory planning calculations. Through the use of an artificial neural network, we estimate job scheduling costs. These costs are then used by our DPSO metaheuristic to explore a solution space of potential schedules. The results focus on the optimisation of batch sizes in relation to network reliability and latency. We simulate a series of unstable and heterogeneous networks and compare completion times. The baseline comparison is the case where scheduling is done by evenly distributing jobs at fixed sizes. The performance of the different approaches is then analysed with regard to usability in real-life scenarios on vehicles. Our results show a noticeable increase in performance within a wide range of network set-ups, at the cost of long search times for the DPSO algorithm. We conclude that, under the right circumstances, the method can be used to significantly speed up distributed calculations at the cost of requiring significant ahead-of-time calculations. We recommend future explorations into DPSO starting states to speed up convergence, as well as benchmarks of real-life performance.
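The abstract does not give the thesis's encoding, but one common way to apply PSO to a discrete assignment problem like this is a random-key scheme: each particle is a real vector with one component per job, each component is decoded to a compute node, and fitness is the makespan on heterogeneous nodes. The sketch below uses that scheme with invented job costs and node speeds:

```python
import random

# Illustrative workload: job costs and heterogeneous node speeds (assumed data).
JOB_COST = [4.0, 2.0, 8.0, 6.0, 3.0, 5.0, 7.0, 1.0]
NODE_SPEED = [1.0, 2.0, 4.0]          # node 2 is four times as fast as node 0

def decode(x):
    # Map each real-valued key to a node index (random-key decoding).
    return [int(abs(v)) % len(NODE_SPEED) for v in x]

def makespan(assign):
    # Completion time of the slowest node under this assignment.
    load = [0.0] * len(NODE_SPEED)
    for j, node in enumerate(assign):
        load[node] += JOB_COST[j] / NODE_SPEED[node]
    return max(load)

def dpso(iters=200, n_particles=30, seed=1):
    rng = random.Random(seed)
    dim = len(JOB_COST)
    xs = [[rng.uniform(0, 3) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest, pval = [x[:] for x in xs], [makespan(decode(x)) for x in xs]
    gi = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * r1 * (pbest[i][d] - xs[i][d])
                            + 1.5 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = makespan(decode(xs[i]))
            if f < pval[i]:
                pbest[i], pval[i] = xs[i][:], f
                if f < gval:
                    gbest, gval = xs[i][:], f
    return decode(gbest), gval

schedule, best_makespan = dpso()
```

In the thesis, the cost of a candidate schedule is estimated by a trained neural network rather than computed analytically; the PSO loop itself is unchanged by that substitution.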
6

Equivalent Models for Hydropower Operation in Sweden

Prianto, Pandu Nugroho January 2021 (has links)
Hydropower systems often contain complex river systems, which cause the simulations and analyses of hydropower operation to be computationally heavy. The complex river system is referred to as the Detailed model. By creating a simpler model, denoted the Equivalent model, the computational issue can be circumvented. The purpose of this Equivalent model is to emulate the results of the Detailed model. This thesis computes the Equivalent model for a large hydropower system using a Particle Swarm Optimisation algorithm, then evaluates the Equivalent model's performance. Simulations are performed on ten rivers in Sweden, representing four trading areas, for one year, October 2017 - September 2018. Furthermore, the year is divided into Quarterly and Seasonal periods to investigate whether the Equivalent model changes over time. The Equivalent model's performance is evaluated based on the relative power difference and computational time compared to the Detailed model. The relative power difference is 4-23% between the Equivalent and Detailed models, depending on the period and trading area, while the computational time is reduced by more than 90%. Furthermore, the Equivalent model changes over time, suggesting that when the year is divided appropriately, the Equivalent model could perform better. The relative power difference results indicate that the Equivalent model's performance can still be improved by dividing the year into periods other than Quarterly or Seasonal. Nevertheless, the results provide a satisfactory Equivalent model, based on the faster computation time and a reasonable relative power difference. Finally, the Equivalent model could be used as a foundation for further analyses and simulations.
7

An intelligent manufacturing system for heat treatment scheduling

Al-Kanhal, Tawfeeq January 2010 (has links)
This research is focused on the integration problem of process planning and scheduling in a steel heat treatment environment, using artificial intelligence techniques that are capable of dealing with such problems. This work addresses the issues involved in developing a suitable methodology for scheduling the heat treatment operations of steel. Several intelligent algorithms have been developed for this purpose, namely a Genetic Algorithm (GA), Sexual Genetic Algorithm (SGA), Genetic Algorithm with Chromosome Differentiation (GACD), Age Genetic Algorithm (AGA), and Memetic Genetic Algorithm (MGA). These algorithms have been employed to develop an efficient intelligent algorithm using an algorithm portfolio methodology, and all of them have been tested on two types of scheduling benchmarks. To apply these algorithms to heat treatment scheduling, a furnace model was developed for optimisation purposes. Furthermore, a system capable of selecting the optimal heat treatment regime was developed, so that the required metal properties can be achieved with the least energy consumption and in the shortest time, using Neuro-Fuzzy (NF) and Particle Swarm Optimisation (PSO) methodologies. Based on this system, PSO is used to optimise the heat treatment process by selecting different heat treatment conditions. The selected conditions are evaluated so that the best selection can be identified. This work also addresses the issues involved in developing a suitable NF system and PSO approach for the mechanical properties of the steel. Using the optimisers, the furnace model and the heat treatment system model, the intelligent system model was developed and implemented successfully. The results of this system were promising, and the optimisers performed correctly.
8

Automatic generation control of the Petroleum Development Oman (PDO) and the Oman Electricity Transmission Company (OETC) interconnected power systems

Al-Busaidi, Adil G. January 2012 (has links)
Petroleum Development Oman (PDO) and the Oman Electricity Transmission Company (OETC) run the main 132 kV power transmission grids in the Sultanate of Oman. In 2001, the PDO and OETC grids were interconnected with a 132 kV overhead transmission line linking the Nahada 132 kV substation on PDO's side to the Nizwa 132 kV substation on OETC's side. Since then, the power exchange between PDO and OETC has been driven by the natural impedances of the system, and the frequency and power exchange are controlled by manually re-dispatching the generators. In light of the daily load profile and the forecast Gulf Cooperation Council (GCC) electrical interconnection, it is a great challenge for the PDO and OETC grid operators to maintain the existing operating philosophy. The objective of this research is to investigate Automatic Generation Control (AGC) technology as a candidate to control the grid frequency and the power exchange between the PDO and OETC grids. For this purpose, a dynamic power system model has been developed to represent the PDO-OETC interconnected power system. The model has been validated using data recorded in the field, which warranted refinement of the model. Novel approaches were followed during the model refinement process, which reduced the modelling error to an acceptable limit. The refined model has then been used to assess the performance of different AGC control topologies. The recommended control topologies have been further improved using sophisticated control techniques such as the Linear Quadratic Regulator (LQR) and Fuzzy Logic (FL). A hybrid Fuzzy Logic Proportional Integral Derivative (FLPID) AGC controller produced outstanding results. The FLPID AGC controller parameters were then optimised using a Multidimensional Unconstrained Nonlinear Minimization function (fminsearch) and the Particle Swarm Optimisation (PSO) method, with PSO proving much superior to the fminsearch function.
The robustness of the LQR, the fminsearch-optimised FLPID and the PSO-optimised FLPID AGC controllers has been assessed. The LQR's robustness was found to be slightly better than that of the FLPID technique; however, the FLPID supersedes the LQR because it requires only a limited number of field feedback signals in comparison with the LQR. Finally, a qualitative assessment of the benefits of the ongoing GCC interconnection project for PDO and OETC has been carried out through a modelling approach. The results show that the GCC interconnection will bring considerable benefits to PDO and OETC, but that the interconnection capacity between PDO and OETC needs to be enhanced. However, the application of AGC to PDO and OETC will alleviate the PDO-OETC interconnection capacity enhancement imposed by the GCC interconnection.
9

Markerless multiple-view human motion analysis using swarm optimisation and subspace learning

John, Vijay January 2011 (has links)
The fundamental task in human motion analysis is the extraction or capture of human motion, and the established industrial technique is marker-based human motion capture. However, marker-based systems, apart from being expensive, are obtrusive and require a complex, time-consuming experimental setup, resulting in increased user discomfort. As an alternative solution, research on markerless human motion analysis has increased in prominence. In this thesis, we present three human motion analysis algorithms performing markerless tracking and classification from multiple-view studio-based video sequences using particle swarm optimisation and charting, a subspace learning technique. In our first framework, we formulate, and perform, human motion tracking as a multi-dimensional non-linear optimisation problem, solved using particle swarm optimisation (PSO), a swarm-intelligence algorithm. PSO initialises automatically, does not need a sequence-specific motion model, functions as a black-box system, and recovers from tracking divergence through the use of a hierarchical search algorithm (HPSO). We experimentally compare HPSO with the particle filter, annealed particle filter and partitioned sampling annealed particle filter, and report similar or better tracking performance. Additionally, we report an extensive experimental study of HPSO over ranges of values of its parameters and propose an automatic-adaptive extension of HPSO called adaptive particle swarm optimisation. Next, in line with recent interest in subspace tracking, where low-dimensional subspaces are learnt from motion models of actions, we perform tracking in a low-dimensional subspace obtained by learning motion models of common actions using charting, a nonlinear dimensionality reduction tool. Tracking takes place in the subspace using an efficient modified version of particle swarm optimisation.
Moreover, we perform a fast and efficient pose evaluation by representing the observed image data, multi-view silhouettes, using vector-quantized shape contexts and learning the mapping from the action subspace to shape space using multivariate relevance vector machines. Tracking results with various action sequences demonstrate the good accuracy and performance of our approach. Finally, we propose a human motion classification algorithm, using charting-based low-dimensional subspaces, to classify human action sub-sequences of varying lengths, or snippets of poses. Each query action is mapped to a single subspace, learnt from multiple actions. Furthermore, we present a system in which, instead of mapping multiple actions to a single subspace, each action is mapped separately to its action-specific subspace. We adopt a multi-layered subspace classification scheme with layered pruning and search. One of the search layers involves comparing the input snippet with a sequence of key-poses extracted from the subspace. Finally, we identify the minimum length of action snippet, of skeletal features, required for classification, using competing classification systems as the baseline. We test our classification component on HumanEva and CMU mocap datasets, achieving similar or better classification accuracy than various comparable systems.
10

Analysis of behaviours in swarm systems

Erskine, Adam January 2016 (has links)
In nature, animal species often exist in groups: we talk of insect swarms, flocks of birds, packs of lions, herds of wildebeest and so on. These groups are characterised by individuals interacting by following their own rules, privy only to local information. Robotic swarms or simulations can be used to explore such interactions, and mathematical formulations can be constructed that encode similar ideas and allow us to explore the emergent group behaviours. Some behaviours show characteristics reminiscent of the phenomenon of criticality: a bird flock may show near-instantaneous collective shifts in direction, with velocity changes that appear to be correlated over distances much larger than individual separations. Here we examine swarm systems inspired by flocks of birds and the role played by criticality. The first system, Particle Swarm Optimisation (PSO), is shown to behave optimally when operating close to criticality. The presence of a critical point in the algorithm's operation is shown to derive from the swarm's properties as a random dynamical system, and empirical results demonstrate that optimality lies on or near this point. A modified PSO algorithm is presented which uses measures of the swarm's diversity as a feedback signal to adjust the behaviour of the swarm. This achieves a statistically balanced mixture of exploration and exploitation behaviours in the resultant swarm, and the problems of stagnation and parameter tuning often encountered in PSO are automatically avoided. The second system, Swarm Chemistry, consists of heterogeneous particles combined with kinetic update rules. It is known that, depending upon the parametric configuration, numerous structures visually reminiscent of biological forms are found in this system. The parameter set discovered here results in a cell-division-like behaviour (in the sense of prokaryotic fission), and extensions to the swarm system produce a swarm that shows repeated cell division.
As such, this model demonstrates a behaviour of interest to theories regarding the origin of life.
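The diversity-feedback idea described in this last entry is loosely in the spirit of attractive-repulsive PSO variants: a diversity measure (here, the mean distance of particles from the swarm centroid) gates the sign of the attraction terms, repelling particles when the swarm clusters too tightly. The sketch below is an illustration of that mechanism, not the thesis's algorithm; the thresholds `d_low`, `d_high` and the coefficients are assumptions:

```python
import math
import random

def diversity(xs):
    """Mean Euclidean distance of particles from the swarm centroid."""
    dim = len(xs[0])
    centroid = [sum(x[d] for x in xs) / len(xs) for d in range(dim)]
    return sum(math.dist(x, centroid) for x in xs) / len(xs)

def diversity_pso(fitness, dim, iters=150, n=20, d_low=1e-4, d_high=0.25, seed=7):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest, pval = [x[:] for x in xs], [fitness(x) for x in xs]
    gi = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    sign = 1.0                                     # +1 attract, -1 repel
    for _ in range(iters):
        div = diversity(xs)
        if div < d_low:
            sign = -1.0                            # too clustered: explore
        elif div > d_high:
            sign = 1.0                             # spread out: exploit
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.7 * vs[i][d]
                            + sign * (1.5 * r1 * (pbest[i][d] - xs[i][d])
                                      + 1.5 * r2 * (gbest[d] - xs[i][d])))
                xs[i][d] += vs[i][d]
            f = fitness(xs[i])
            if f < pval[i]:
                pbest[i], pval[i] = xs[i][:], f
                if f < gval:
                    gbest, gval = xs[i][:], f
    return gval

# Minimise a 2-D sphere function as a toy objective.
best = diversity_pso(lambda x: sum(v * v for v in x), dim=2)
```

Because the global best is retained across repulsion phases, the returned value is monotone non-increasing; the feedback only changes how the swarm samples the space.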
