1

Memcapacitive Reservoir Computing Architectures

Tran, Dat Tien 03 June 2019 (has links)
In this thesis, I propose novel brain-inspired and energy-efficient computing systems. Designing such systems has been a central goal of neuromorphic scientists over the last few decades. The results from my research show that it is possible to design such systems with emerging nanoscale memcapacitive devices. Technological development has advanced greatly over the years with the conventional von Neumann architecture. The current architectures and materials, however, will inevitably reach their physical limitations. While conventional computing systems achieve strong performance on general tasks, they are often not power-efficient when performing tasks with large input data, such as natural image recognition and tracking objects in streaming video. Moreover, in the von Neumann architecture, all computations take place in the Central Processing Unit (CPU) and the results are saved in memory. As a result, information is shuffled back and forth between the memory and the CPU for processing, which creates a bottleneck due to the limited bandwidth of the data paths. Adding cache memory and using general-purpose Graphics Processing Units (GPUs) do not completely resolve this bottleneck. Neuromorphic architectures offer an alternative to the conventional architecture by mimicking the functionality of a biological neural network. In a biological neural network, neurons communicate with each other through a large number of dendrites and synapses. Each neuron (a processing unit) locally processes the information that is stored in its input synapses (memory units). Distributing information to neurons and localizing computation at the synapse level alleviate the bottleneck problem and allow a large amount of data to be processed in parallel. Furthermore, biological neural networks are highly adaptable to complex environments, tolerant of system noise and variations, and capable of processing complex information with extremely low power. Over the past five decades, researchers have proposed various brain-inspired architectures to perform neuromorphic tasks. IBM's TrueNorth is considered the state-of-the-art brain-inspired architecture. It has 10^6 CMOS neurons with 256 x 256 programmable synapses and consumes about 60 nW/neuron. Even though TrueNorth is power-efficient, its number of neurons and synapses is small compared to the human brain, which has 10^11 neurons, each with, on average, 7,000 synaptic connections to other neurons, and which consumes only 2.3 nW/neuron. The memristor brought neuromorphic computing one step closer to the human brain target. A memristor is a passive nano-device that has a memory: its resistance changes with applied voltages. This resistive change with an applied voltage is similar to the function of a synapse. Memristors have been the prominent option for designing low-power systems with high area density. In fact, Truong and Min reported that an improved memristor-based crossbar performed a neuromorphic task with a 50% reduction in area and 48% power savings compared to CMOS arrays. However, memristive devices are, by their nature, still resistors, and their power consumption is bounded by their resistance. Here, a memcapacitor offers a promising alternative. My initial work indicated that memcapacitive networks performed complex tasks with performance equivalent to that of memristive networks, but with much higher energy efficiency. A memcapacitor is also a two-terminal nano-device whose capacitance varies with applied voltages.
Like a memristor's resistance, the capacitance of a memcapacitor changes with an applied voltage, mirroring the function of a synapse. The memcapacitor is a storage device and does not consume static energy. Its switching energy is also small due to its small capacitance (nF to pF range). As a result, networks of memcapacitors have the potential to perform complex tasks with much higher power efficiency. Several memcapacitive synaptic models have been proposed as artificial synapses. Pershin and Di Ventra illustrated that a memcapacitor with two diodes has the functionality of a synapse. Flak suggested that a memcapacitor behaves as a synapse when it is connected with three CMOS switches in a Cellular Nanoscale Network (CNN). Li et al. demonstrated that four identical memcapacitors connected in a bridge network reproduce the function of a synapse as well. Reservoir Computing (RC) has been used to explain higher-order cognitive functions and the interaction of short-term memory with other cognitive processes. Rigotti et al. observed that a dynamic system with short-term memory is essential in defining the internal brain states of a test agent. Although both traditional Recurrent Neural Networks (RNNs) and RC are dynamical systems, RC has a great advantage over RNNs because its learning process is simple and confined to training the output layer. RC harnesses the computing nature of a random network of nonlinear devices, such as memcapacitors. Appeltant et al. showed that RC with a simplified reservoir structure is sufficient to perform speech recognition: a few nonlinear units connected in a delay feedback loop provide enough dynamic responses for RC. Fewer units in reservoirs mean fewer connections and inputs, and therefore lower power consumption. As Goudarzi and Teuscher indicated, RC architectures still have inherent challenges that need to be addressed. First, theoretical studies have shown that both regular and random reservoirs achieve similar performance for particular tasks; a random reservoir, however, is more appropriate for unstructured networks of nanoscale devices. What is the role of network structure in RC for solving a task (Q1)? Secondly, the nonlinear characteristics of nanoscale devices contribute directly to the dynamics of a physical network, which influences the overall performance of an RC system. To what degree can a mixture of nonlinear devices improve the performance of reservoirs (Q2)? Thirdly, modularity, as with CMOS circuits in digital design, is essential for building a complex system from fundamental blocks. Are hierarchical RCs able to solve complex tasks? What network topologies/hierarchies will lead to optimal performance? What is the learning complexity of such a system (Q3)? My research goal is to address the above RC challenges by exploring memcapacitive reservoir architectures. The analysis of memcapacitive monolithic reservoirs addresses both questions Q1 and Q2 above by showing that the Small-World Power-Law (SWPL) structure is an optimal topology for RCs to perform time-series prediction (NARMA-10), temporal recognition (Isolated Spoken Digits), and a spatial task (MNIST) with minimal power consumption. On average, the SWPL reservoirs significantly reduce power consumption, by factors of 1.21x, 31x, and 31.2x compared to the regular, random, and small-world reservoirs, respectively.
Further analysis of SWPL structures underlines that high locality α and low randomness β decrease the system cost in terms of wiring and nanowire dissipated power but do not guarantee optimal reservoir performance. With a genetic algorithm to refine network structure, SWPL reservoirs with optimal network parameters are able to achieve comparable performance with less power. Compared to the regular reservoirs, the SWPL reservoirs consume less power, by factors of 1.3x, 1.4x, and 1.5x; compared to the random topology, they save power by factors of 4.8x, 1.6x, and 2.1x, respectively. The simulation results of mixed-device reservoirs (memristive and memcapacitive reservoirs) provide evidence that the combination of memristive and memcapacitive devices potentially enhances the nonlinear dynamics of reservoirs in three tasks: NARMA-10, Isolated Spoken Digits, and MNIST. In addressing the third question (Q3), the kernel quality measurements show that hierarchical reservoirs have better dynamic responses than monolithic reservoirs. The improved dynamic responses allow hierarchical reservoirs to achieve comparable performance on the Isolated Spoken Digits task with less power consumption, by factors of 1.4x, 8.8x, 9.5x, and 6.3x for delay-line, delay-line feedback, simple cycle, and random structures, respectively. Similarly, for the CIFAR-10 image task, hierarchical reservoirs achieve higher performance with less power, by factors of 5.6x, 4.2x, 4.8x, and 1.9x. The results suggest that hierarchical reservoirs have better dynamics than monolithic reservoirs for solving sufficiently complex tasks. Although the performance of deep mem-device reservoirs is low compared to state-of-the-art deep Echo State Networks, the initial results demonstrate that deep mem-device reservoirs are able to solve a high-dimensional and complex task such as the polyphonic music task. The performance of deep mem-device reservoirs can be further improved with better settings of network parameters and architectures. My research illustrates the potential of novel memcapacitive systems with SWPL structures that are brain-inspired and energy-efficient. It offers novel memcapacitive systems that are applicable to low-power applications, such as mobile devices and the Internet of Things (IoT), and provides an initial design step toward incorporating nano memcapacitive devices into future applications of nanotechnology.
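To make the reservoir computing setup referenced in this abstract concrete, the sketch below trains a conventional tanh echo state network on the NARMA-10 benchmark named above: a fixed random reservoir is driven by the input and only a linear readout is trained. This is an illustrative sketch with assumed parameters (reservoir size, spectral radius, ridge coefficient), not the thesis's memcapacitive device models, which replace these abstract analog units with device-level simulations.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- NARMA-10 benchmark data (the time-series task named in the abstract) ---
T = 2000
u = rng.uniform(0, 0.5, T)
y = np.zeros(T)
for t in range(9, T - 1):
    y[t + 1] = (0.3 * y[t]
                + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                + 1.5 * u[t] * u[t - 9]
                + 0.1)

# --- Fixed random reservoir with spectral radius below 1 (echo state property) ---
N = 100
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.uniform(-1, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])               # nonlinear node update
    states[t] = x

# --- Train only the linear readout (ridge regression), as in RC ---
washout, split = 100, 1500
X_tr, y_tr = states[washout:split], y[washout:split]
X_te, y_te = states[split:], y[split:]
ridge = 1e-6
W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(N), X_tr.T @ y_tr)

pred = X_te @ W_out
nrmse = np.sqrt(np.mean((pred - y_te) ** 2)) / np.std(y_te)
print(f"NARMA-10 test NRMSE: {nrmse:.3f}")
```

The same train-the-readout-only recipe carries over when the tanh units are replaced by physical memristive or memcapacitive nodes, which is what keeps the learning cost of these reservoirs low.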
2

Computação biogeográfica : fundamentos, estrutura conceitual e aplicações / Biogeographic computation : foundations, conceptual framework and applications

Pasti, Rodrigo, 1980- 22 August 2018 (has links)
Advisors: Fernando José Von Zuben, Leandro Nunes de Castro Silva / Doctoral thesis (Tese de doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2013 / Abstract: There are several attempts to understand and describe nature, and Natural Computing is founded on the principle that natural systems are information processors, in the sense that they perform computation. This thesis makes use of Natural Computing mechanisms to understand the computation taking place in a specific natural system: ecosystems. The first step is based on the science of Biogeography, devoted to the study of ecosystems and their emergent patterns. In Biogeography, it is possible to identify elements, relations among them, and processes. The main contribution of this thesis resides in the computational formalization of Biogeography, thus establishing the research area of Biogeographic Computation. The proposal of Biogeographic Computation is introduced on several fronts. The first promotes the metamodel formalism, which defines a conceptual framework for contextualizing the existence of artificial ecosystems and their spatio-temporal processes. After that, to illustrate the application of the metamodel, definitions of ecosystem computation on phenotypic adaptive surfaces are proposed. These definitions lead to a set of relations and processes directly applicable to the construction of artificial ecosystems. These artificial ecosystems promote the understanding of the dynamics and patterns of natural ecosystems, and can also contribute to the resolution of computable problems. At the final stage of the thesis, an adaptive radiation algorithm is presented that exhibits patterns similar to those found in real ecosystems and proves competitive for multimodal optimization in continuous spaces. To conclude, perspectives for future work are outlined, indicating routes toward consolidating Biogeographic Computation as a new branch of Natural Computing. / Doctorate / Computer Engineering / Doctor in Electrical Engineering
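As a loose illustration of the kind of dynamics the abstract attributes to the adaptive radiation algorithm, the sketch below lets a population "radiate" into species on a multimodal landscape by keeping at most one survivor per niche radius. It is a generic niching sketch with invented parameters (landscape, niche radius, mutation scale), not the algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multimodal fitness landscape with five equal peaks in [0, 1]
def fitness(x):
    return np.sin(5 * np.pi * x) ** 6

POP, GENS, RADIUS, SIGMA = 60, 200, 0.08, 0.02
pop = rng.uniform(0, 1, POP)

for _ in range(GENS):
    # "Radiation": offspring appear near their parents (local adaptation)
    offspring = np.clip(pop + rng.normal(0, SIGMA, POP), 0, 1)
    merged = np.concatenate([pop, offspring])

    # Greedy speciation: walking from fittest to least fit, keep an
    # individual only if no fitter survivor sits within RADIUS of it.
    order = np.argsort(-fitness(merged))
    survivors = []
    for x in merged[order]:
        if all(abs(x - s) > RADIUS for s in survivors):
            survivors.append(x)
        if len(survivors) == POP:
            break
    # Refill the population by cloning the species representatives
    pop = np.array((survivors * (POP // len(survivors) + 1))[:POP])

peaks = sorted(set(np.round(pop[fitness(pop) > 0.9], 2)))
print("species settled near peaks at x ≈", peaks)
```

Each niche ends up tracking one optimum, which is the multimodal-optimization behavior the abstract reports for the ecosystem-inspired algorithm.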
3

The Computational Approach to Vision and Motor Control

Hildreth, Ellen C., Hollerbach, John M. 01 August 1985 (has links)
Over the past decade it has become increasingly clear that to understand the brain, we must study not only its biochemical and biophysical mechanisms and its outward perceptual and physical behavior, but also the brain at a theoretical level that investigates the computations necessary to perform its functions. The control of movements such as reaching, grasping, and manipulating objects requires complex mechanisms that elaborate information from many sensors and control the forces generated by a large number of muscles. The act of seeing, which intuitively seems so simple and effortless, requires information processing whose complexity we are just beginning to grasp. These observations motivate a computational approach to the study of vision and motor tasks. This paper discusses a particular view of the computational approach and its relevance to experimental neuroscience.
4

Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons

Almassian, Amin 23 March 2016 (has links)
Real-time processing of space-and-time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to the neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm that is inspired by cortical Neural Networks (NN). It is promising for real-time, on-line computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called the reservoir, the state of which is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout layer interprets the dynamics of the reservoir over time. The Echo State Network (ESN) [1] and the Liquid State Machine (LSM) [2] are two popular and canonical types of RC system. The former uses non-spiking analog sigmoidal neurons – and, more recently, Leaky Integrator (LI) neurons – and a normalized random connectivity matrix in the reservoir, whereas the reservoir in the latter is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space, which are connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons lies in their neuron model dynamics and their inter-neuron communication mechanism. However, RC systems share a mysterious common property: they exhibit the best performance when reservoir dynamics undergo a criticality [1–6] – governed by the reservoirs' connectivity parameters, |λmax| ≈ 1 in ESN and λ ≈ 2 and w in LSM – which is referred to as the edge of chaos in [3–5]. In this study, we are interested in exploring the possible reasons for this commonality, despite the differences imposed by different neuron types on the reservoir dynamics. We address this concern from the perspective of the information representation in both spiking and non-spiking reservoirs. We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-train input, as well as that between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive a Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with performance on the temporal parity task. We complement our investigation by conducting isolated spoken-digit recognition and spoken-digit sequence-recognition tasks. We hypothesize that a performance analysis of these two tasks will agree with our MI and MCMI results with regard to the impact of stable memory on task performance. It turns out that, in all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maximized. Likewise, in the chaotic regime (when the network connectivity parameter is greater than a critical value), the absence of stable memory in the reservoir appears to be an evident cause of performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possesses a higher stable memory of the input (quantified by input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons in processing the temporal parity and spoken-digit recognition tasks. From an efficiency standpoint, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons in spoken-digit recognition tasks.
The sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMIs we obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, and 3.71, and the corresponding output-reservoir MCMIs are 2.92, 2.51, and 2.47, respectively. In our isolated spoken-digit recognition experiments, the maximum mean performance achieved by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons is 97%, 79%, and 2%, respectively. The reservoirs with N = 100 neurons could solve the task with 80%, 68%, and 0.9%, respectively. Our study sheds light on the impact of the information representation and memory of the reservoir on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems for computing spike trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.
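The edge-of-chaos criterion mentioned above (|λmax| ≈ 1 for the ESN connectivity matrix) can be illustrated with a small perturbation experiment: two copies of the same analog reservoir, differing only by a tiny initial perturbation, are driven by the same input, and the distance between their states shrinks in the ordered regime and grows in the chaotic one. The sketch below assumes a plain tanh reservoir with invented sizes and scalings and is not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 200

def perturbation_growth(spectral_radius):
    """Run two identical analog reservoirs that differ only in a tiny
    initial perturbation and return the final state distance."""
    W = rng.normal(0, 1, (N, N)) / np.sqrt(N)
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, N)
    u = rng.uniform(-0.5, 0.5, T)                  # shared input drive

    x_a = np.zeros(N)
    x_b = np.zeros(N); x_b[0] = 1e-6               # tiny perturbation
    for t in range(T):
        x_a = np.tanh(W @ x_a + W_in * u[t])
        x_b = np.tanh(W @ x_b + W_in * u[t])
    return np.linalg.norm(x_a - x_b)

for rho in (0.5, 0.9, 1.0, 1.5):
    print(f"|lambda_max| = {rho:>4}: final perturbation distance = "
          f"{perturbation_growth(rho):.2e}")
```

Below the critical scaling the perturbation fades (fading memory); well above it the perturbation is amplified, which is the chaotic regime in which the abstract reports the loss of stable memory and of task performance.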
5

Development of a Flapping Actuator Based on Oscillating Electromagnetic Fields

Unknown Date (has links)
In this work a bio-inspired flapping actuator based on varied magnetic fields is developed, controlled, and characterized. The actuator is intended to contribute to the toolbox of options for biomimetics research. The design consists of a neodymium bar magnet on one end of an armature, which is moved by two air-core electromagnetic coils in the same manner as agonist and antagonist muscle pairs function in biological systems. The other end of the armature is fitted to a rigid fin extending beyond the streamlined enclosure body to produce propulsion. A series of tests in still water was performed to measure the kinematics and propulsive force for different control schemes, including the effect of adding antagonistic resistance to the control schemes. Control methods based on armature position and on setpoint error were tested, and antagonist force was found to increase the consistency of control of the system in certain cases. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2016. / FAU Electronic Theses and Dissertations Collection
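To illustrate the setpoint-error control idea described above, the sketch below splits a simple proportional-derivative command between two opposing coil activations acting on a toy second-order armature model. The plant model and all parameters are invented for illustration; the thesis characterizes the physical actuator itself, and its antagonistic-resistance schemes are only noted in a comment.

```python
# Toy second-order armature model (inertia + damping) driven by two
# air-core coils acting in opposite directions, like an agonist /
# antagonist muscle pair. All parameters are assumed, not measured.
J, B, DT = 1e-3, 5e-3, 1e-3            # inertia, damping, time step
KP, KD = 0.4, 0.02                     # setpoint-error (PD) control gains

def step_setpoint(theta_ref, steps=1500):
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        err = theta_ref - theta
        cmd = KP * err - KD * omega    # control effort from setpoint error
        agonist = max(cmd, 0.0)        # coil driving toward the setpoint
        antagonist = max(-cmd, 0.0)    # coil braking / driving back
        torque = agonist - antagonist  # net torque on the armature
        omega += DT * (torque - B * omega) / J
        theta += DT * omega
    return theta

print(f"final angle: {step_setpoint(0.5):.3f} rad (target 0.5 rad)")
# The antagonistic resistance studied in the thesis would add a bias
# activation to both coils; capturing its stabilizing effect requires a
# position-dependent coil force, which this linear toy deliberately omits.
```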
6

Um algoritmo bioinspirado para agrupamento de dados / A bio-inspired algorithm for data clustering

David, Marcio Frayze 03 May 2010 (has links)
Fundo Mackenzie de Pesquisa / This dissertation discusses the use of bio-inspired algorithms for data clustering, with emphasis on models of emergent collective behavior of agents, and presents a new clustering algorithm called cBoids. The cBoids algorithm is a variation of the classic Boids model. In this new algorithm, each Boid represents an object from the database, and the three original rules of the Boids model were modified so that the objects of the database influence the behavior of the Boids. Two new rules were also proposed, responsible for the creation and destruction of centroids, which represent the formed clusters. In the experiments conducted in this work, the algorithm was successfully tested on four databases.
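For context, the sketch below implements the three classic Boids rules (cohesion, separation, alignment) that cBoids builds on; the cBoids modifications in which database objects influence the motion, and the centroid creation/destruction rules, are not reproduced here. Neighborhood radii and rule weights are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)

N, STEPS, RADIUS = 30, 100, 0.3
pos = rng.uniform(0, 1, (N, 2))        # boid positions in the unit square
vel = rng.normal(0, 0.01, (N, 2))      # boid velocities

def limit(v, vmax=0.03):
    speed = np.linalg.norm(v)
    return v if speed <= vmax else v * vmax / speed

for _ in range(STEPS):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neighbors = (d < RADIUS) & (d > 0)
        if not neighbors.any():
            continue
        cohesion = pos[neighbors].mean(axis=0) - pos[i]              # steer toward local center
        separation = (pos[i] - pos[neighbors])[d[neighbors] < 0.05].sum(axis=0)  # avoid crowding
        alignment = vel[neighbors].mean(axis=0) - vel[i]             # match neighbors' heading
        new_vel[i] = limit(vel[i] + 0.01 * cohesion + 0.05 * separation + 0.05 * alignment)
    vel = new_vel
    pos = np.clip(pos + vel, 0, 1)

print("positions after flocking, first 5 boids:\n", np.round(pos[:5], 3))
```

In a cBoids-style clustering setting, the idea is that boids carrying similar data objects end up flocking together, so the emergent flocks correspond to clusters.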
7

Mechanistic Models of Neural Computation in the Fruit Fly Brain

Yeh, Chung-Heng January 2019 (has links)
Understanding the operating principles of brain functions is key to building novel computing architectures that mimic human intelligence. Neural activities at different scales lead to different levels of brain function. For example, cellular functions, such as sensory transduction, occur in molecular reactions, and cognitive functions, such as recognition, emerge in neural systems across multiple brain regions. To bridge the gap between neuroscience and artificial computation, we need systematic development of mechanistic models of neural computation across multiple scales. Existing models of neural computation are often developed independently for a specific scale and hence are not compatible with one another. In this thesis, we investigate the neural computations in the fruit fly brain and devise mechanistic models at different scales in a systematic manner, so that models at one scale constitute functional building blocks for the next scale. Our study spans from the molecular and circuit computations of the olfactory system to the system-level computation of the central complex in the fruit fly. First, we study how the two key aspects of an odorant, identity and concentration, are encoded by the odorant transduction process at the molecular scale. We mathematically quantify the odorant space and propose a biophysical model of the olfactory sensory neuron (OSN). To validate our modeling approach, we examine the OSN model with a multitude of odorant waveforms and demonstrate that the model output reproduces the temporal responses of OSNs obtained from in vivo electrophysiology recordings. In addition, we evaluate the model at the OSN population level and quantify the combinatorial complexity of the transformation taking place between the odorant space and the OSNs. The resulting concentration-dependent combinatorial code determines the complexity of the input space driving olfactory processing in the downstream neuropil, the antennal lobe. Second, we investigate the neural information processing in the antennal lobe across the molecular and circuit scales. The antennal lobe encodes the output of the OSN population from a concentration-dependent code into a concentration-independent combinatorial code. To study the transformation of the combinatorial code, we construct a computational model of the antennal lobe that consists of two subcircuits, a predictive coding circuit and an on-off circuit, realized by two distinct local neuron networks, respectively. By examining the entire circuit model with both monomolecular odorants and odorant mixtures, we demonstrate that the predictive coding circuit encodes the odorant identity into a concentration-invariant code and that the on-off circuit encodes the onset and offset of a unique odorant identity. Third, we investigate the odorant representation inherent in the Kenyon cell activities in the mushroom body. The Kenyon cells encode the output of the antennal lobe into a high-dimensional, sparse neural code that is immediately used for learning and memory formation. We model the Kenyon cell circuitry as a real-time feedback normalization circuit converting odorant information into a time-dependent hash code. The resulting real-time hash code represents odorants, pure and mixed alike, in a way conducive to classification, and suggests an intrinsic partition of the odorant space by similarity of hash codes. Fourth, we study the neural coding of the central complex at the system scale.
The central complex is a set of neuropils in the center of the fly brain that integrates multiple sensory modalities and plays an important role in locomotor control. We create an application that enables simultaneous graphical querying and construction of executable models of the central complex neural circuitry. By reconfiguring the circuitry and generating different executable models, we compare the model responses of wild-type and mutant fly strains. Finally, we show that this multi-scale study of the fruit fly brain is made possible by the Fruit Fly Brain Observatory (FFBO), an open-source platform to support open, collaborative fruit fly neuroscience research. The software architecture of the FFBO and its key applications are highlighted along with several examples.
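As a generic illustration of the expansion-and-sparsification coding attributed to the Kenyon cells above, the sketch below projects a glomerular activity vector through a sparse random matrix and keeps only the top few percent of "Kenyon cells" active. The static top-k rule is a stand-in for the thesis's real-time feedback normalization circuit, and all sizes and sparsity levels are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)

# Expansion + sparsification in the spirit of the mushroom body:
# many "Kenyon cells" each sample a small random subset of glomeruli,
# then only the most active cells are kept (a sparse hash of the odor).
N_GLOM, N_KC, ACTIVE_FRACTION = 50, 2000, 0.05

proj = (rng.random((N_KC, N_GLOM)) < 0.1).astype(float)   # each KC samples ~10% of glomeruli

def sparse_code(odor):
    activity = proj @ odor
    k = int(ACTIVE_FRACTION * N_KC)
    threshold = np.partition(activity, -k)[-k]             # keep the top-k "winners"
    return (activity >= threshold).astype(int)

odor_a = rng.random(N_GLOM)
odor_b = odor_a * 3.0                                      # same identity, higher concentration
odor_c = rng.random(N_GLOM)                                # a different odorant

code_a, code_b, code_c = map(sparse_code, (odor_a, odor_b, odor_c))
overlap = lambda x, y: (x & y).sum() / x.sum()
print(f"overlap(a, scaled a) = {overlap(code_a, code_b):.2f}")   # ~1.0: tolerant to concentration
print(f"overlap(a, other)    = {overlap(code_a, code_c):.2f}")   # well below 1: odors separate
```

The point of the sketch is only the qualitative behavior: scaling the input leaves the sparse code nearly unchanged, while a different input yields a largely different code, which is what makes such codes useful for classification.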
8

On the Effect of Heterogeneity on the Dynamics and Performance of Dynamical Networks

Goudarzi, Alireza 01 January 2012 (has links)
The high cost of processor fabrication plants and approaching physical limits have started a new wave of research into alternative computing paradigms. As an alternative to top-down manufactured silicon-based computers, research in computing directly with natural and physical systems has recently gained a great deal of interest. A branch of this research promotes the idea that any physical system with sufficiently complex dynamics is able to perform computation. The power of networks in representing complex interactions between many parts makes them a suitable choice for modeling physical systems. Many studies used networks with a homogeneous structure to describe computational circuits. However, physical systems are inherently heterogeneous. We aim to study the effect of heterogeneity on the dynamics of physical systems as it pertains to information processing. Two particularly well-studied network models that represent information processing in a wide range of physical systems are Random Boolean Networks (RBN), which are used to model gene interactions, and Liquid State Machines (LSM), which are used to model brain-like networks. In this thesis, we study the effects of function heterogeneity, in-degree heterogeneity, and interconnect irregularity on the dynamics and the performance of RBN and LSM. First, we introduce model parameters to characterize the heterogeneity of components in RBN and LSM networks. We then quantify the effects of heterogeneity on the network dynamics. For the three heterogeneity aspects that we studied, we found that the effects of heterogeneity on RBN and LSM are very different. We find that in LSM, in-degree heterogeneity decreases the chaoticity of the network, whereas it increases chaoticity in RBN. For interconnect irregularity, heterogeneity decreases the chaoticity in LSM, while its effect on RBN dynamics depends on the connectivity: for K < 2, heterogeneity in the interconnect increases the chaoticity of the dynamics, and for K > 2 it decreases the chaoticity. We find that function heterogeneity has virtually no effect on LSM dynamics. In RBN, however, function heterogeneity actually makes the dynamics predictable as a function of connectivity and heterogeneity in the network structure. We hypothesize that node heterogeneity in RBN may help signal processing because of the variety of signal decompositions performed by different nodes.
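The ordered-versus-chaotic behavior of RBN as a function of connectivity K, referred to above, can be probed with a standard Derrida-style experiment: flip one node's state and watch how far the Hamming distance spreads. The sketch below is a minimal homogeneous RBN with assumed sizes, not the heterogeneous models analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal Random Boolean Network: N nodes, each reading K random inputs
# through a random Boolean function (a lookup table of size 2^K).
# A single bit is flipped and the Hamming-distance growth is tracked,
# the usual probe of ordered (K < 2) versus chaotic (K > 2) dynamics.
def rbn_divergence(N=500, K=1, steps=30):
    inputs = rng.integers(0, N, (N, K))                # wiring: K inputs per node
    tables = rng.integers(0, 2, (N, 2 ** K))           # random Boolean functions
    weights = 2 ** np.arange(K)                        # to index each lookup table

    def update(state):
        idx = (state[inputs] * weights).sum(axis=1)    # encode each node's inputs
        return tables[np.arange(N), idx]

    a = rng.integers(0, 2, N)
    b = a.copy()
    b[0] ^= 1                                          # single-bit perturbation
    for _ in range(steps):
        a, b = update(a), update(b)
    return np.sum(a != b)                              # final Hamming distance

for K in (1, 2, 3):
    print(f"K = {K}: perturbation spread to {rbn_divergence(K=K)} of 500 nodes")
```

In this homogeneous baseline the perturbation dies out for K = 1 and spreads widely for K = 3; the thesis asks how in-degree, function, and interconnect heterogeneity shift this picture.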
9

PCAISO-GT: uma metaheurística co-evolutiva paralela de otimização aplicada ao problema de alocação de berços / PCAISO-GT: a parallel co-evolutionary optimization metaheuristic applied to the berth allocation problem

Oliveira, Carlos Eduardo de Jesus Guimarães 24 March 2013 (has links)
Previous issue date: 2014-01-31 / This work presents an optimization algorithm based on the Artificial Immune Systems metaheuristic and on principles of Game Theory, Co-evolution, and Parallelization. The objective is to find the appropriate combination of the concepts of Game Theory, Co-evolution, and Parallelization applied to the AISO (Artificial Immune System Optimization) algorithm for solving the Berth Allocation Problem (BAP). The algorithm is thus formalized from these techniques, forming PCAISO-GT: Parallel Coevolutionary Artificial Immune System Optimization with Game Theory. Initially, experiments were performed to tune the parameters employed in the different versions of the developed tool. Based on the best configurations identified, evaluation experiments were carried out by solving a set of BAP instances. The results obtained indicate the co-evolutionary version combined with game theory as the best for solving the problem under study.
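To give a flavor of the underlying immune-inspired metaheuristic, the sketch below implements a clonal-selection-style loop (clone in proportion to fitness, hypermutate inversely to fitness, keep the best clone) on a toy continuous objective. It is a generic sketch with assumed parameters, applied to a stand-in problem rather than berth allocation, and it omits the co-evolution, parallelization, and game-theoretic mechanisms that define PCAISO-GT.

```python
import numpy as np

rng = np.random.default_rng(9)

def affinity(x):                                    # toy objective: maximize
    return -np.sum((x - 3.0) ** 2)

DIM, POP, CLONES, GENS = 5, 20, 5, 200
pop = rng.uniform(-10, 10, (POP, DIM))              # initial antibody repertoire

for _ in range(GENS):
    aff = np.array([affinity(x) for x in pop])
    ranks = np.argsort(np.argsort(aff))             # 0 = worst, POP-1 = best
    new_pop = []
    for i, antibody in enumerate(pop):
        n_clones = 1 + CLONES * ranks[i] // POP     # better antibodies clone more
        rate = np.exp(-2.0 * ranks[i] / POP)        # and mutate less (hypermutation)
        clones = antibody + rng.normal(0, rate, (n_clones, DIM))
        clones = np.vstack([clones, antibody])      # keep the parent as well
        best = clones[np.argmax([affinity(c) for c in clones])]
        new_pop.append(best)
    pop = np.array(new_pop)

best = pop[np.argmax([affinity(x) for x in pop])]
print("best solution:", np.round(best, 3), "affinity:", round(affinity(best), 4))
```

Applying this scheme to the BAP would replace the real-valued antibodies with berth/vessel assignments and the toy objective with the berth allocation cost, which is the kind of encoding the dissertation's AISO variants work with.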
