1.
Estratégia para redução de congestionamento em sistemas multiprocessadores baseados em NoC / Strategy for congestion reduction in NoC-based multiprocessor systems
KAMEI, Camila Ascendina Nunes, 07 August 2015
Previous issue date: 2015-08-07 / CNPq

Two issues are critical in NoC-based MPSoC systems with memory parallelism: the delivery order of messages and network congestion. Congestion occurs frequently in a NoC when packet demand exceeds the capacity of the network resources, and message order must be preserved so that cache-coherence information remains meaningful to the memories. Congestion-control methods for these systems therefore have to handle network congestion while maintaining the order of transactions.

This work proposes a routing technique based on the Odd-Even routing algorithm, combined with the notions of local and global network congestion, to choose the best forwarding path for communication packets, aiming to reduce communication bottlenecks in NoC-based MPSoC systems. In experiments with 16 cores, the proposed technique reduced energy consumption by 13.35% and reduced packet-sending latency by 25% compared with the XY algorithm and by 23% compared with the unmodified Odd-Even algorithm.
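To make the idea concrete, the sketch below enumerates the output ports allowed by the standard Odd-Even turn model for a minimal route in a 2D mesh and then picks the one with the lowest congestion score. It is only an illustration under stated assumptions: the coordinate convention, the local_load/global_load estimates, and the linear weight alpha are invented here and do not reproduce the dissertation's exact congestion metric or selection rule.

```python
# Illustrative sketch (not the dissertation's implementation): Odd-Even minimal
# routing in a 2D mesh, with the admissible output chosen by a congestion score.
N, S, E, W = "N", "S", "E", "W"

def odd_even_outputs(cur, src, dst):
    """Output directions allowed by the Odd-Even turn model for a minimal
    route from `cur` toward `dst`, for a packet injected at `src`."""
    cx, cy = cur
    sx, _ = src
    dx, dy = dst
    e0, e1 = dx - cx, dy - cy
    avail = []
    if e0 == 0 and e1 == 0:
        return avail                        # already at the destination
    if e0 == 0:                             # same column: go straight north/south
        avail.append(N if e1 > 0 else S)
    elif e0 > 0:                            # destination lies to the east
        if e1 == 0:
            avail.append(E)
        else:
            if cx % 2 == 1 or cx == sx:     # N/S allowed only in odd or source column
                avail.append(N if e1 > 0 else S)
            if dx % 2 == 1 or e0 != 1:      # keep E unless it would force a bad turn later
                avail.append(E)
    else:                                   # destination lies to the west
        avail.append(W)
        if cx % 2 == 0:
            avail.append(N if e1 > 0 else S)
    return avail

def pick_output(cur, src, dst, local_load, global_load, alpha=0.5):
    """Among the Odd-Even-admissible outputs, pick the least congested one.
    `local_load[d]` could be the neighbour's buffer occupancy and `global_load[d]`
    a region-level estimate; the weighting by `alpha` is an assumption."""
    candidates = odd_even_outputs(cur, src, dst)
    if not candidates:
        return None
    return min(candidates,
               key=lambda d: alpha * local_load[d] + (1 - alpha) * global_load[d])

# Example: at node (1, 1), routing a packet injected at (0, 0) toward (3, 2);
# the east port is the most loaded, so the router deflects the packet north.
local = {N: 0.2, S: 0.0, E: 0.9, W: 0.1}
regional = {N: 0.3, S: 0.1, E: 0.7, W: 0.2}
print(pick_output((1, 1), (0, 0), (3, 2), local, regional))   # -> N
```

In this toy scenario the purely deterministic XY or Odd-Even choice would send the packet east into the hot spot, whereas the congestion-aware selection steers it around, which is the kind of bottleneck reduction the dissertation targets.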
2.
Métaheuristiques adaptatives d'optimisation continue basées sur des méthodes d'apprentissage / Adaptive metaheuristics for continuous optimization based on learning methods
Ghoumari, Asmaa, 10 December 2018
Continuous optimization problems arise in many areas: economics, signal processing, neural networks, and so on. One of the best-known and most widely used solutions is the evolutionary algorithm, a metaheuristic based on the theory of evolution that relies on stochastic mechanisms and has shown particularly good performance on continuous optimization problems. This family of algorithms is very popular despite the many difficulties that can arise in its design: these algorithms have several parameters to tune and several operators to choose according to the problem being solved. The literature describes a plethora of operators, and it becomes complicated for the user to know which ones to select in order to obtain the best possible result. In this context, the main objective of this thesis is to propose methods that address these issues without degrading the performance of the algorithms. We propose two algorithms:
- a method based on maximum a posteriori estimation, which uses diversity probabilities to select the operators to apply and regularly reassesses this choice;
- a method based on a dynamic graph of operators representing the transition probabilities between operators, which relies on a model of the objective function built by a neural network to regularly update these probabilities.
Both methods are detailed and analyzed on a continuous-optimization benchmark. A minimal sketch of the underlying idea of adaptive operator selection follows the list above.
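For intuition only, here is a minimal sketch of adaptive operator selection inside an evolutionary loop, in the spirit of the first method: an operator is drawn according to probabilities that are re-estimated from recent rewards and regularly reassessed. It is a generic probability-matching illustration; the sphere objective, the two operators, the reward definition, and the update rule are assumptions and do not reproduce the thesis's maximum-a-posteriori/diversity formulation or its operator-graph model.

```python
# Illustrative sketch (not the thesis's exact method): adaptive operator selection
# inside a plain evolutionary loop. Operator probabilities are re-estimated from
# recent rewards (fitness improvement) and the choice is revisited every generation.
import random

def sphere(x):                              # toy continuous objective (minimisation)
    return sum(v * v for v in x)

def gaussian_mutation(x, sigma=0.3):
    return [v + random.gauss(0.0, sigma) for v in x]

def uniform_reset(x, low=-5.0, high=5.0):
    return [random.uniform(low, high) if random.random() < 0.2 else v for v in x]

OPERATORS = [gaussian_mutation, uniform_reset]

def adapt_probs(rewards, p_min=0.1):
    """Probability matching: each operator receives a share proportional to its
    mean recent reward, with a floor p_min so no operator is discarded for good."""
    means = [sum(r) / len(r) if r else 0.0 for r in rewards]
    total, k = sum(means), len(means)
    if total == 0.0:
        return [1.0 / k] * k
    return [p_min + (1.0 - k * p_min) * m / total for m in means]

def evolve(dim=5, pop_size=20, generations=100):
    pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    probs = [1.0 / len(OPERATORS)] * len(OPERATORS)
    rewards = [[] for _ in OPERATORS]
    for _ in range(generations):
        for i, parent in enumerate(pop):
            op = random.choices(range(len(OPERATORS)), weights=probs)[0]
            child = OPERATORS[op](parent)
            rewards[op].append(max(0.0, sphere(parent) - sphere(child)))
            if sphere(child) <= sphere(parent):   # greedy replacement
                pop[i] = child
        probs = adapt_probs(rewards)              # revisit the operator choice
        rewards = [r[-20:] for r in rewards]      # sliding window of recent rewards
    return min(pop, key=sphere), probs

best, final_probs = evolve()
print("best fitness:", sphere(best), "operator probabilities:", final_probs)
```

The floor p_min plays the same role as the thesis's concern for not discarding operators prematurely: even an operator that has performed poorly recently keeps a small chance of being applied, so the selection can be "put back into play" if the search landscape changes.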