1

Estudo da influência dos parâmetros de algoritmos paralelos da computação evolutiva no seu desempenho em plataformas multicore [Study of the influence of the parameters of parallel evolutionary computation algorithms on their performance on multicore platforms]

Pais, Mônica Sakuray, 14 March 2014
Parallel computing is a powerful way to reduce computation time and to improve the quality of the solutions found by evolutionary algorithms (EAs). At first, parallel evolutionary algorithms (PEAs) ran only on expensive and scarcely available parallel machines. Now that multicore processors are ubiquitous, the performance they offer to parallel programs is a strong incentive to turn computationally demanding EAs into parallel programs that exploit the power of multicores. Parallel implementation, however, introduces additional factors that influence performance and consequently adds complexity to the evaluation of PEAs. Statistics can help in this task and, provided that a correct design of experiments is applied, guarantee significant and sound conclusions with a minimum number of tests. This work presents an experimentation methodology for PEAs that guarantees the correct estimation of speedups and applies a factorial design to the analysis of the factors that influence performance. As a case study, a genetic algorithm, named AGP-I, was parallelized according to the island model and executed on platforms with different multicore processors to solve two benchmark functions; the methodology was applied to determine the influence of migration-related parameters on the performance of AGP-I. Degree: Doutor em Ciências (Doctor of Sciences).
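To make the island model and its migration parameters concrete, below is a minimal Python sketch of an island-model parallel GA. It is an illustration only, not the thesis's AGP-I: the sphere benchmark, the ring migration topology, and the names MIGRATION_INTERVAL and N_MIGRANTS are assumptions introduced here. The two constants stand for the kind of migration-related factors whose influence on performance the thesis evaluates with a factorial design.

```python
# Hypothetical island-model parallel GA; NOT the thesis's AGP-I implementation.
import random
from multiprocessing import Pool

POP_SIZE = 50            # individuals per island
N_ISLANDS = 4            # e.g. one island per core
MIGRATION_INTERVAL = 10  # generations between migrations (a studied factor)
N_MIGRANTS = 2           # individuals exchanged per migration (a studied factor)
GENERATIONS = 100

def sphere(x):
    """Sphere benchmark function; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def evolve_island(population, generations):
    """Run a simple GA on one island for a fixed number of generations."""
    for _ in range(generations):
        population.sort(key=sphere)               # best (lowest) first
        parents = population[:POP_SIZE // 2]      # truncation selection
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            # Arithmetic crossover plus Gaussian mutation.
            children.append([(u + v) / 2 + random.gauss(0, 0.1)
                             for u, v in zip(a, b)])
        population = parents + children
    return population

def run(dim=10):
    islands = [[[random.uniform(-5, 5) for _ in range(dim)]
                for _ in range(POP_SIZE)] for _ in range(N_ISLANDS)]
    with Pool(N_ISLANDS) as pool:
        for _ in range(GENERATIONS // MIGRATION_INTERVAL):
            # Evolve every island in parallel for MIGRATION_INTERVAL generations.
            islands = pool.starmap(
                evolve_island, [(isl, MIGRATION_INTERVAL) for isl in islands])
            # Ring-topology migration: each island's best N_MIGRANTS replace
            # the worst individuals of the next island.
            for i, isl in enumerate(islands):
                isl.sort(key=sphere)
                dest = islands[(i + 1) % N_ISLANDS]
                dest.sort(key=sphere)
                dest[-N_MIGRANTS:] = [list(ind) for ind in isl[:N_MIGRANTS]]
    return min((min(isl, key=sphere) for isl in islands), key=sphere)

if __name__ == "__main__":
    best = run()
    print("best fitness:", sphere(best))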
2

Εκπαίδευση τεχνητών νευρωνικών δικτύων με την χρήση εξελικτικών αλγορίθμων, σε σειριακά και κατανεμημένα συστήματα [Training artificial neural networks with evolutionary algorithms, on serial and distributed systems]

Επιτροπάκης, Μιχαήλ (Epitropakis, Michail), 14 January 2009
In this contribution, we study the class of Higher-Order Neural Networks, and especially Pi-Sigma Networks. The performance of Pi-Sigma Networks is evaluated on several well-known neural network training benchmarks. In the experiments reported here, serial as well as parallel/distributed Evolutionary Algorithms were implemented and applied to Pi-Sigma network training; more specifically, serial and parallel/distributed versions of Differential Evolution were employed. The proposed approach trains Pi-Sigma networks that use threshold activation functions, with all weights and biases confined to a narrow band of integers in the range [-32, 32]; the trained networks can therefore be represented with 6-bit integers. Such networks are better suited for hardware implementation than networks with real-valued weights. Experimental results suggest that the training process is fast, stable and reliable; the networks trained with both the serial and the parallel/distributed algorithms exhibited good generalization; and the distributed version of Differential Evolution additionally yielded a speedup of the training process.
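A minimal sketch of the setup described above, assuming the standard Pi-Sigma formulation (a product of K linear summing units followed by a hard-threshold activation) and a textbook DE/rand/1/bin scheme. Rounding candidate vectors to integers in [-32, 32] is one simple way to enforce the integer-weight constraint and is not necessarily the mechanism used in the thesis; all function and parameter names here are illustrative.

```python
# Illustrative Pi-Sigma network trained with Differential Evolution;
# the integer-rounding scheme is an assumption, not the thesis's method.
import numpy as np

def pi_sigma_output(weights, biases, x):
    """Forward pass: product of K linear sums, then a hard threshold."""
    sums = weights @ x + biases            # shape (K,)
    return 1.0 if np.prod(sums) > 0 else 0.0

def decode(vec, k, n):
    """Map a flat DE vector to integer weights/biases in [-32, 32]."""
    ints = np.clip(np.round(vec), -32, 32)
    return ints[:k * n].reshape(k, n), ints[k * n:]

def train_de(X, y, k, pop_size=40, F=0.5, CR=0.9, generations=300, seed=0):
    """DE/rand/1/bin over the flattened network parameters."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    dim = k * n + k                        # weights plus biases
    pop = rng.uniform(-32, 32, (pop_size, dim))

    def error(vec):
        w, b = decode(vec, k, n)
        preds = np.array([pi_sigma_output(w, b, x) for x in X])
        return float(np.mean((preds - y) ** 2))

    fitness = np.array([error(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            r1, r2, r3 = pop[idx]
            mutant = r1 + F * (r2 - r3)           # differential mutation
            cross = rng.random(dim) < CR          # binomial crossover
            cross[rng.integers(dim)] = True       # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = error(trial)
            if f <= fitness[i]:                   # greedy one-to-one selection
                pop[i], fitness[i] = trial, f
    return decode(pop[np.argmin(fitness)], k, n)

# Usage on XOR, which a Pi-Sigma unit with two summing units can represent
# (e.g. weights [[2, 2], [-2, -2]] with biases [-1, 3]):
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
w, b = train_de(X, y, k=2)
print([pi_sigma_output(w, b, x) for x in X])  # ideally [0.0, 1.0, 1.0, 0.0]
```

Because decode rounds every candidate before evaluation, the search effectively operates over the integer grid, which is what makes the resulting network representable with small fixed-width integers as the abstract describes.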
