141

Information Integration in a Grid Environment: Applications in the Bioinformatics Domain

Radwan, Ahmed M. 16 December 2010 (has links)
Grid computing emerged as a framework for supporting complex operations over large datasets; it enables the harnessing of large numbers of processors working in parallel to solve computing problems that typically spread across various domains. We focus on the problems of data management in a grid/cloud environment. The broader context of designing a service-oriented architecture (SOA) for information integration is studied, identifying the main components for realizing this architecture.

The BioFederator is a web-services-based data federation architecture for bioinformatics applications. Based on collaborations with bioinformatics researchers, several domain-specific data federation challenges and needs are identified. The BioFederator addresses such challenges and provides an architecture that incorporates a series of utility services; these address issues like automatic workflow composition, domain semantics, and the distributed nature of the data. The design also incorporates a series of data-oriented services that facilitate the actual integration of data.

Schema integration is a core problem in the BioFederator context. Previous methods for schema integration rely on the exploration, implicit or explicit, of the multiple design choices that are possible for the integrated schema. Such exploration relies heavily on user interaction; thus, it is time consuming and labor intensive. Furthermore, previous methods have ignored the additional information that typically results from the schema matching process, that is, the weights and in some cases the directions that are associated with the correspondences. We propose a more automatic approach to schema integration that is based on the use of directed and weighted correspondences between the concepts that appear in the source schemas. A key component of our approach is a ranking mechanism for the automatic generation of the best candidate schemas. The algorithm gives more weight to schemas that combine the concepts with higher similarity or coverage. Thus, the algorithm makes certain decisions that otherwise would likely be made by a human expert. We show that the algorithm runs in polynomial time and moreover has good performance in practice. The proposed methods and algorithms are compared to state-of-the-art approaches.

The BioFederator design, services, and usage scenarios are discussed. We demonstrate how our architecture can be leveraged in real-world bioinformatics applications. We performed a whole-genome annotation of nucleosome exclusion regions in the human genome; the resulting annotations were studied and correlated with tissue specificity, gene density, and other important gene regulation features.

We also study data processing models in grid environments. MapReduce is a popular parallel programming model that has proven to scale. However, using low-level MapReduce for general data processing tasks poses the problem of developing, maintaining, and reusing custom low-level user code. Several frameworks have emerged to address this problem; these frameworks share a top-down approach, where a high-level language is used to describe the problem semantics, and the framework takes care of translating this problem description into MapReduce constructs. We highlight several issues in the existing approaches and propose instead a novel refined MapReduce model that addresses the maintainability and reusability issues without sacrificing the low-level controllability offered by directly writing MapReduce code.
We present MapReduce-LEGOS (MR-LEGOS), an explicit model for composing MapReduce constructs from simpler components, namely "Maplets", "Reducelets", and optionally "Combinelets". Maplets and Reducelets are standard MapReduce constructs that can be composed to define aggregated constructs describing the problem semantics. This composition can be viewed as defining a micro-workflow inside the MapReduce job. Using the proposed model, complex problem semantics can be defined in the encompassing micro-workflow provided by MR-LEGOS while keeping the building blocks simple. We discuss the design details, main features, and usage scenarios of MR-LEGOS. Through experimental evaluation, we show that the proposed design is highly scalable and has good performance in practice.
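The composition idea is easy to picture in code. Below is a minimal sketch, assuming hypothetical Maplet/Reducelet interfaces (the abstract does not give the actual MR-LEGOS APIs); a sequential runner stands in for the MapReduce runtime.

```python
# Sketch of composing "maplets" into one map function, with a toy reducelet.
# All names and interfaces are illustrative assumptions, not MR-LEGOS itself.
from collections import defaultdict

def compose_maplets(*maplets):
    """Chain maplets: each maplet maps one record to zero or more records."""
    def mapper(record):
        records = [record]
        for maplet in maplets:
            records = [out for r in records for out in maplet(r)]
        return records
    return mapper

def tokenize(record):
    _, line = record
    return [(word, 1) for word in line.split()]

def to_lower(record):
    word, count = record
    return [(word.lower(), count)]

def sum_values(key, values):
    """A toy reducelet: aggregate all values emitted for one key."""
    return (key, sum(values))

def run_job(records, mapper, reducelet):
    """Sequential stand-in for the MapReduce runtime, for illustration only."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return [reducelet(k, vs) for k, vs in groups.items()]

mapper = compose_maplets(tokenize, to_lower)  # the "micro-workflow"
print(run_job([("doc1", "Grid grid computing")], mapper, sum_values))
# [('grid', 2), ('computing', 1)]
```

Two trivial maplets chained into one map function illustrate the micro-workflow the abstract describes, while each building block stays simple and reusable.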
142

Statistical Methods for Computational Markets: Proportional Share Market Prediction and Admission Control

Sandholm, Thomas January 2008 (has links)
We design, implement and evaluate statistical methods for managing uncertainty when consuming and provisioning resources in a federated computational market. To enable efficient allocation of resources in this environment, providers need to know consumers' risk preferences and the expected future demand; the guarantee levels to offer thus depend on techniques to forecast future usage and to accurately capture and model uncertainties. Our main contribution in this thesis is threefold: first, we evaluate a set of techniques to forecast demand in computational markets; second, we design a scalable method which captures a succinct summary of usage statistics and allows consumers to express risk preferences; and finally, we propose a method for providers to set resource prices and determine which guarantee levels to offer. The methods employed are based on fundamental concepts in probability theory and are thus easy to implement, as well as to analyze and evaluate. The key component of our solution is a predictor that dynamically constructs approximations of the price probability density and quantile functions for arbitrary resources in a computational market. Because highly fluctuating and skewed demand is common in these markets, it is difficult to accurately and automatically construct representations of arbitrary demand distributions. We discovered that a technique based on the Chebyshev inequality and empirical prediction bounds, which estimates worst-case bounds on deviations from the mean given a variance, provided the most reliable forecasts for a set of representative high-performance and shared cluster workload traces. We further show how these forecasts can help consumers determine how much to spend given a risk preference, and how providers can offer admission control services with different guarantee levels given a recent history of resource prices.
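As a rough illustration of the Chebyshev-based bounds mentioned above, the sketch below derives a worst-case price bound from a recent price history. The function name and the miss probability are illustrative assumptions, not the thesis's actual interface.

```python
# Minimal sketch: a distribution-free upper prediction bound from mean and
# variance via Chebyshev's inequality, P(|X - mu| >= k*sigma) <= 1/k**2.
import math

def chebyshev_upper_bound(prices, miss_probability=0.05):
    """Price level exceeded with probability at most `miss_probability`."""
    n = len(prices)
    mean = sum(prices) / n
    var = sum((p - mean) ** 2 for p in prices) / n
    k = 1.0 / math.sqrt(miss_probability)  # choose k so that 1/k**2 = delta
    return mean + k * math.sqrt(var)

# E.g. a consumer willing to be priced out 5% of the time could bid:
recent_prices = [1.0, 1.2, 0.9, 3.5, 1.1, 1.0, 0.8, 2.2]
print(chebyshev_upper_bound(recent_prices, 0.05))
```

Because the bound holds for any distribution with finite variance, it tolerates the highly fluctuating, skewed demand the abstract highlights, at the cost of being conservative.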
143

Parallel Hybrid Optimization Methods for Permutation-Based Problems

Mehdi, Malika 20 October 2011 (has links) (PDF)
Efficiently solving large permutation-based optimization problems requires the development of complex hybrid methods combining different classes of optimization algorithms. Hybridizing metaheuristics with exact tree-based methods, such as the branch-and-bound (B&B) algorithm, yields a new class of algorithms more efficient than either class of methods used separately. The main challenge in developing such methods is to find links or connections between the divergent search strategies used in the two classes of methods. Genetic Algorithms (GAs) are very popular population-based metaheuristics built on stochastic operators inspired by the theory of evolution. Unlike GAs, and metaheuristics in general, B&B algorithms are based on the implicit enumeration of the search space, represented by means of a tree, the so-called search tree. Our hybridization approach consists in defining a common encoding of solutions and of the search space, together with suitable search operators, to enable an effective low-level coupling of the two classes of methods, GAs and B&B. Representing the search space by means of trees is traditionally done in B&B algorithms; in this thesis, this representation has been adapted to metaheuristics. Encoding permutations as natural numbers, referring to their lexicographic enumeration order in the B&B tree, is proposed as a new way of representing the search space of permutation problems in metaheuristics. This encoding is based on mathematical properties of permutations, namely Lehmer codes and inversion tables, as well as the factorial number system. Transformation functions for converting between the two representations (permutations and numbers), together with search operators adapted to the encoding, are defined for generalized permutation problems. This representation, now common to metaheuristics and B&B algorithms, allowed us to design efficient hybridization and collaboration strategies between GAs and B&B. Indeed, two hybridization approaches combining GAs and B&B algorithms (HGABB and COBBIGA), based on this common representation, are proposed in this thesis. For validation, an implementation was carried out for the three-dimensional quadratic assignment problem (Q3AP). To solve large instances of this problem, we also propose a parallelization of the two hybrid algorithms, based on search-space decomposition techniques (interval decomposition) previously used to parallelize B&B algorithms. On the implementation side, to ease the future design and implementation of hybrid methods combining metaheuristics with exact tree-based methods, we developed a hybridization framework integrated into the metaheuristics software platform ParadisEO. The new framework was used to carry out intensive experiments on the Grid'5000 computing grid.
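The permutation-to-number encoding is concrete enough to sketch. The snippet below (an illustration; the thesis's actual operators are not reproduced here) computes a permutation's Lehmer code and interprets it in the factorial number system, yielding the permutation's lexicographic rank, i.e. its position in a B&B-style enumeration tree.

```python
# Sketch: permutation -> Lehmer code -> factoradic rank (lexicographic order).

def lehmer_code(perm):
    """Lehmer code: for each position, count later elements that are smaller."""
    return [sum(1 for right in perm[i + 1:] if right < perm[i])
            for i in range(len(perm))]

def factoradic_rank(code):
    """Read the Lehmer code as digits in the factorial number system."""
    rank, factorial = 0, 1
    for i, digit in enumerate(reversed(code)):
        rank += digit * factorial   # digit at position i has weight i!
        factorial *= i + 1
    return rank

perm = [2, 0, 3, 1]
print(lehmer_code(perm))                   # [2, 0, 1, 0]
print(factoradic_rank(lehmer_code(perm)))  # 13 = lexicographic rank of perm
```

With this mapping, GA operators can act on plain integers while the B&B side keeps its tree view of the same search space, which is exactly the common representation the abstract argues for.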
144

Performance Prediction and Evaluation Tools

Girona Turell, Sergi 24 July 2003 (has links)
Prediction is an interesting research topic. It is not only about predicting future results, but also about reproducing known ones, which is often called validation. Applying prediction techniques to observed system behavior has always been extremely useful for understanding the internals of the elements under analysis. We started this work by analyzing the mutual influence of several message-passing applications running concurrently. The original objective was to find and propose a processor scheduling algorithm that maximizes system throughput while remaining fair and making proper use of the system. In order to evaluate the different schedulers properly, analysis tools are necessary. Dimemas and Paraver form the core of the DiP environment; although these tools were designed more than ten years ago, they remain valid and extensible. Dimemas is a performance prediction tool: using simple models, it can predict the execution time of message-passing applications from a small set of parameters that characterize the system. It is useful not only for predicting the outcome of an execution, but also for understanding the influence of the system parameters on the execution time of an application. Paraver is the analysis tool of the DiP environment. It allows applications and the system to be analyzed simultaneously from several points of view: messages, contention in the interconnection network, and processor scheduling. Promenvir/ST-ORM is a stochastic analysis tool. It incorporates facilities for analyzing the influence of any system parameter, as well as for tuning the simulation parameters so that predictions come close to reality. The methodology for using these tools together to analyze a whole environment, and the fact that they still correspond to the state of the art, demonstrate the quality of the design decisions made years ago. This work includes a description of the different tools, from their internal design to their use, the validation of Dimemas, the conceptual design and architecture of Promenvir, the presentation of the methodology to be used with these tools (from the analysis of individual applications to more complex system analyses), and some of our first analyses of processor scheduling policies.
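As a rough idea of what such a predictor computes, here is a sketch of the classic linear communication model that simple message-passing prediction tools build on. The actual Dimemas model has more parameters (e.g. contention and relative CPU speeds), so treat this as an assumption-laden toy.

```python
# Sketch: predict a process's runtime as computation plus a latency/bandwidth
# model of its message transfers. Parameters are illustrative only.

def message_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    """Classic linear model: transfer time = latency + size / bandwidth."""
    return latency_s + size_bytes / bandwidth_bytes_per_s

def predicted_runtime(compute_s, message_sizes, latency_s, bandwidth):
    """Computation time plus (serialized, for simplicity) transfer costs."""
    return compute_s + sum(message_time(m, latency_s, bandwidth)
                           for m in message_sizes)

# 2 s of computation plus three 1 MB messages on a 100 us / 1 GB/s network:
print(predicted_runtime(2.0, [1_000_000] * 3, 100e-6, 1e9))  # ~2.0033 s
```

The value of such a model is less the predicted number itself than the ability to vary latency or bandwidth and see how the application's response time reacts, which matches the abstract's emphasis on understanding parameter influence.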
145

Efficient Scheduling In Distributed Computing On Grid

Kaya, Ozgur 01 December 2006 (has links) (PDF)
Today, many computing resources distributed across geographic locations are idle much of the time. The aim of grid computing is to collect these resources into a single system, which helps to solve problems that are too complex for a single PC. Scheduling plays a critical role in the efficient and effective management of resources to achieve high performance in a grid computing environment. Due to the heterogeneity and highly dynamic nature of the grid, developing scheduling algorithms for grid computing involves some challenges. In this work, we concentrate on efficient scheduling of distributed tasks on the grid. We propose a novel scheduling heuristic for bag-of-tasks applications. The proposed algorithm primarily makes use of history-based runtime estimation: the history stores information about applications whose runtimes and other specific properties were recorded during previous executions, and scheduling decisions are made according to the similarity between applications. Defining the similarity is an important aspect of this approach, apart from the best resource allocation. The aim of this scheduling algorithm, HISA (History Injected Scheduling Algorithm), is to define and find the similarity, and to assign each job to the most suitable resource by making use of it. In our evaluation, we use a grid simulation tool called GridSim, and a number of intensive experiments with various simulation settings have been conducted. Based on the experimental results, the effectiveness of the HISA scheduling heuristic is studied and compared to the other scheduling algorithms embedded in GridSim. The results show that history injection improves the performance of future job submissions on a grid.
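A minimal sketch of history-based runtime estimation, the core idea behind HISA, is shown below. The similarity measure and the feature set are illustrative assumptions, since the abstract does not define them.

```python
# Sketch: estimate a new job's runtime from the most similar past executions.

def similarity(app_a, app_b):
    """Toy similarity: fraction of matching application properties."""
    keys = (set(app_a) & set(app_b)) - {"runtime"}
    if not keys:
        return 0.0
    return sum(app_a[k] == app_b[k] for k in keys) / len(keys)

def estimate_runtime(new_app, history, k=3):
    """Similarity-weighted mean runtime of the k most similar past jobs."""
    ranked = sorted(history, key=lambda h: similarity(new_app, h),
                    reverse=True)[:k]
    weights = [similarity(new_app, h) for h in ranked]
    if sum(weights) == 0:
        return None  # no usable history; fall back to a default estimate
    return sum(w * h["runtime"] for w, h in zip(weights, ranked)) / sum(weights)

history = [
    {"name": "blast", "input_size": "large", "cpu": "xeon", "runtime": 340.0},
    {"name": "blast", "input_size": "small", "cpu": "xeon", "runtime": 55.0},
    {"name": "render", "input_size": "large", "cpu": "xeon", "runtime": 900.0},
]
print(estimate_runtime({"name": "blast", "input_size": "large",
                        "cpu": "xeon"}, history))
```

The scheduler would then place the job on the resource minimizing the estimated completion time, updating the history after each execution so future estimates improve.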
146

Distributed computations in a dynamic, heterogeneous Grid environment

Dramlitsch, Thomas January 2002 (has links)
In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing, or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase in processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing.

This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with problems that do not occur in classical parallel computing, for example heterogeneity, authentication, and slow networks: bandwidths and latencies between two processors on the same supercomputer or cluster are typically two to four orders of magnitude better than between processors hundreds of kilometers apart. Some of these problems, e.g. the allocation of distributed resources and the provision of information about those resources to the application, have already been addressed by the Globus software.

Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite-differencing codes are implicitly designed for execution on a single supercomputer or cluster. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor.

In this work we close this gap. In this thesis, we
- show that an execution of classical parallel codes in Grid environments is possible but very slow;
- analyze this poor performance, nail down the communication bottlenecks, and remove unnecessary overhead and other causes of low performance;
- develop new and advanced parallelization algorithms that are aware of the Grid environment, in order to generalize the traditional parallelization schemes;
- implement and test these new methods, replacing the classical ones and comparing against them;
- introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment.

The higher the performance one can achieve for a single application by manual tuning for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis, as well as in our implementation, we tried to keep the balance between high performance and generality. None of our changes directly affects code at the application level, which makes our algorithms applicable to a whole class of real-world applications.

The implementation of our work is done within the Cactus framework using the Globus toolkit, since we consider these the most reliable and advanced programming frameworks for supporting computations in Grid environments. Nevertheless, we tried to be as general as possible: all methods and algorithms discussed in this thesis are independent of Cactus and Globus.
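One classic grid-aware adaptation of the kind this thesis analyzes (offered here as an assumed illustration, not necessarily the thesis's exact scheme) is to widen ghost zones so that a finite-difference stencil can run several steps between wide-area exchanges, trading redundant computation for fewer high-latency messages:

```python
# Sketch: amortizing WAN latency over several stencil steps via wider ghost
# zones. All parameter values are illustrative.

def exchange_period(ghost_width, stencil_radius=1):
    """A stencil of radius r invalidates r ghost cells per step, so a ghost
    zone of width g stays valid for g // r steps between exchanges."""
    return ghost_width // stencil_radius

def comm_time_per_step(ghost_width, boundary_cells, latency, bandwidth,
                       cell_bytes=8):
    """Average communication cost per time step for one boundary exchange."""
    message_bytes = ghost_width * boundary_cells * cell_bytes
    return (latency + message_bytes / bandwidth) / exchange_period(ghost_width)

# On a WAN with 50 ms latency, wider ghost zones amortize the latency:
for g in (1, 4, 16):
    print(g, comm_time_per_step(g, boundary_cells=1000,
                                latency=0.05, bandwidth=1e8))
```

Run for widths 1, 4 and 16, the per-step cost drops from about 50 ms to about 3 ms, illustrating why latency-aware parallelization pays off precisely in the two-to-four-orders-of-magnitude regime the abstract describes.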
147

Analysis And Predictions Of DNA Sequence Transformations On Grids

Joshi, Yadnyesh R 08 1900 (has links)
Phylogenetics is the study of the evolution of organisms. Evolution occurs through mutations of DNA sequences, and the reasons behind these seemingly random mutations are largely unknown. Many algorithms build phylogenetic trees from DNA sequences; however, certain uncertainties are associated with these trees, and their fine-level analysis is both important and interesting for evolutionary biologists. In this thesis, we try to model the evolution of DNA sequences using cellular automata and to resolve the uncertainties associated with phylogenetic trees. In particular, we determine the effect of neighboring DNA base pairs on the mutation of a base pair. A cellular automaton can be viewed as an array of cells that modifies itself in discrete time steps according to a governing rule: the state of a cell at the next time step depends on its current state and the states of its neighbors. We use cellular automata rules for the analysis and prediction of DNA sequence transformations on computational grids.

In the first part of the thesis, DNA sequence evolution is modeled as a cellular automaton in which each cell has one of four possible states, corresponding to the four bases. Phylogenetic trees are explored to find the cellular automata rules that may have guided the evolution. A master-client paradigm is used to exploit the parallelism in the sequence transformation analysis, and load balancing and fault tolerance techniques are developed to enable the execution of these explorations on grid resources. The analysis of the sequence transformations is used to resolve uncertainties associated with phylogenetic trees, namely the intermediate sequences in the tree and the exact number of time steps required for the evolution of a branch. The model is further used to gather statistics such as the most popular rules at a particular time step in the evolution history of a branch. We observe some interesting statistics regarding the unknown base pairs in the intermediate sequences of the phylogenetic tree and the most popular rules used for sequence transformations.

The next part of the thesis deals with predicting future sequences from previous ones. First, we try to find the preserved sequences, so that cellular automata rules can be applied selectively. Random strategies are then developed as baseline benchmarks, and a roulette-wheel strategy is used for predicting future DNA sequences. Though the prediction strategies beat the random benchmarks in most cases, the average performance improvement over the random strategies is not significant; the possible reasons are discussed.
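The cellular-automaton model of mutation can be sketched directly. The rule below is randomly generated for illustration, whereas the thesis mines candidate rules from phylogenetic trees.

```python
# Sketch: DNA evolution as a 4-state cellular automaton where each base's
# next value depends on its (left, self, right) neighborhood.
import random

BASES = "ACGT"

def random_rule(seed=42):
    """One CA rule: a lookup table from every 3-base neighborhood to a base
    (a random stand-in for a rule inferred from a phylogenetic tree)."""
    rng = random.Random(seed)
    return {(l, c, r): rng.choice(BASES)
            for l in BASES for c in BASES for r in BASES}

def step(sequence, rule):
    """Apply the rule at every interior position (fixed boundaries)."""
    out = list(sequence)
    for i in range(1, len(sequence) - 1):
        out[i] = rule[(sequence[i - 1], sequence[i], sequence[i + 1])]
    return "".join(out)

ancestor = "ACGTACGTAC"
rule = random_rule()
descendant = step(step(ancestor, rule), rule)  # two evolutionary time steps
print(ancestor, "->", descendant)
```

Searching the space of such rules for ones that transform an ancestral sequence into its observed descendants is what makes the problem embarrassingly parallel and a natural fit for the master-client grid execution the abstract describes.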
148

網格運算在證券業之應用研究 / The Application Study of Grid Computing on the Securities Industry

劉繕源, Liu, Shan-Yuan Unknown Date (has links)
The securities market involves many complex product types, participants, and trading methods; in actual trading this often produces peak system loads, yet hardware often cannot be reallocated in time. This research uses grid computing technology to provide a verification of its feasibility and usability in a business environment. The verification procedure first builds a grid computing environment, then runs the verification tests, and finally draws conclusions and recommendations. The first attempt adopted the IBM middleware IBM Grid ToolBox V3, but the system could not be installed successfully because of software and hardware version problems. Another IBM middleware, IBM WebSphere Extended Deployment (WebSphere XD), was then adopted, and after adjusting software and hardware versions several times the test environment was completed. To test whether hardware from different vendors can operate normally in the same grid environment, four blade servers each from IBM and HP were used. There are two test items in this research: the first tests whether grid computing solves the hardware allocation problem; the second tests whether grid computing handles transaction peak loads. In the first part, tested with both manual and automatic dynamic adjustment, the WebSphere XD system could indeed adjust the available nodes dynamically according to service load; that is, grid technology can be used to solve the hardware allocation problem. In the second part, a batch job requiring long computation was simulated to observe whether the whole computation could be completed by several hosts dividing the work and computing simultaneously. The WebSphere XD system indeed maintained its service level after some nodes failed, whereas a conventional network load-balance design blindly directs load to the servers it assumes are alive, so that after some hardware fails the whole system may go down. This verifies that grid computing technology better conforms to the requirements of enterprise operations in practice.
149

Dienstauswahlverfahren im Grid (Service Selection Methods in the Grid)

Reinicke, Michael. January 2007 (has links) (PDF)
Universität, Diss.--Bayreuth, 2006.
150

Evolving Aggregation Behaviors For Swarm Robotic Systems: A Systematic Case Study

Bahceci, Erkin 01 August 2005 (has links) (PDF)
Evolutionary methods have been shown to be useful for developing behaviors in robotics, and interest in the use of evolution in swarm robotics is also on the rise. However, when one attempts to use artificial evolution to develop behaviors for a swarm robotic system, one faces decisions regarding the parameters of the fitness evaluations and of the genetic algorithm. In this thesis, aggregation behavior is chosen as a case study: the performance and scalability of aggregation behaviors of perceptron controllers evolved for a simulated swarm robotic system are systematically studied under different parameter settings. Using a cluster of computers to run simulations in parallel, four experiments are conducted, varying some of the parameters. Rules of thumb are derived that can guide the use of evolutionary methods to generate other swarm robotic behaviors as well.
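For readers unfamiliar with the setup, the following is a minimal sketch of evolving controller weights with a genetic algorithm. The fitness function here is a toy stand-in for the swarm simulation, and all parameter values are illustrative assumptions.

```python
# Sketch: a genetic algorithm evolving perceptron controller weights, with
# fitness evaluated by a (here, fake) simulation of aggregation quality.
import random

rng = random.Random(0)
N_WEIGHTS = 8  # perceptron weights per controller (assumed)

def evaluate(weights):
    """Stand-in for a swarm simulation scoring aggregation behavior."""
    return -sum((w - 0.5) ** 2 for w in weights)  # toy fitness landscape

def evolve(pop_size=20, generations=50, mutation_sigma=0.1):
    population = [[rng.uniform(-1, 1) for _ in range(N_WEIGHTS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        elite = scored[: pop_size // 4]  # truncation selection
        population = elite + [
            [w + rng.gauss(0, mutation_sigma) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(population, key=evaluate)

best = evolve()
print(round(evaluate(best), 4))
```

The parameters the thesis varies systematically (population size, number of generations, mutation rate, number and length of fitness evaluations) are exactly the knobs exposed here, and because each `evaluate` call is independent, the evaluations parallelize naturally across a cluster.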
