71

Equilibrage de charge dynamique sur plates-formes hiérarchiques / Dynamic load balancing on hierarchical platforms

Quintin, Jean-Noël 08 December 2011 (has links)
For many years, hardware manufacturers have raced to increase processing power, and that race has recently changed in character: parallel machines have become commonplace, and processor structures keep growing more complex. In the near future, it is quite plausible that consumer machines will be fully heterogeneous architectures built from many cores connected by a network on chip. Parallelizing and executing applications on such machines raises numerous problems. Among them, this work focuses on scheduling a set of tasks onto a set of cores, that is, deciding how to assign the work to the available resources. Existing methods fall broadly into two classes: on-line and off-line algorithms. On-line algorithms such as work stealing have the advantage of working without any information about the hardware or the task durations, but they generally do not manage communications efficiently. This thesis addresses on-line task scheduling on complex platforms where the network can limit performance through congestion. More precisely, we propose new on-line scheduling algorithms, based on work stealing, targeting two different settings. First, we consider applications whose dependency graph is known in advance. Using this information, we limit the amount of data transferred and achieve performance that exceeds the best known off-line algorithms. Second, we study the optimizations that become possible when the scheduler knows the platform topology. Again, we show that this information can be exploited for a significant performance gain. Our work thus extends the reach of scheduling algorithms towards more complex architectures and may enable better use of tomorrow's machines.
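To make the work-stealing idea concrete, the following is a minimal, self-contained sketch, not the algorithms proposed in the thesis: each worker pops tasks from the bottom of its own deque and, when it runs dry, steals from the top of a randomly chosen victim's deque.

    import random
    import threading
    from collections import deque

    class Worker(threading.Thread):
        def __init__(self, wid, deques, results, lock):
            super().__init__()
            self.wid = wid
            self.deques = deques
            self.results = results
            self.lock = lock

        def run(self):
            idle_rounds = 0
            while idle_rounds < 1000:            # crude termination heuristic
                task = self._pop_local() or self._steal()
                if task is None:
                    idle_rounds += 1
                    continue
                idle_rounds = 0
                with self.lock:
                    self.results.append((self.wid, task()))

        def _pop_local(self):
            try:
                return self.deques[self.wid].pop()       # LIFO on the local deque
            except IndexError:
                return None

        def _steal(self):
            victim = random.randrange(len(self.deques))
            if victim == self.wid:
                return None
            try:
                return self.deques[victim].popleft()     # FIFO steal from a victim
            except IndexError:
                return None

    if __name__ == "__main__":
        n_workers = 4
        deques = [deque() for _ in range(n_workers)]
        for i in range(200):                 # unbalanced start: everything on worker 0
            deques[0].append(lambda i=i: i * i)
        results, lock = [], threading.Lock()
        workers = [Worker(w, deques, results, lock) for w in range(n_workers)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(len(results), "tasks completed")

The sketch deliberately ignores dependency graphs and platform topology, which are exactly the two sources of information the thesis exploits.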
72

Graph Partitioning for the Finite Element Method: Reducing Communication Volume with the Directed Sorted Heavy Edge Matching

González García, José Luis 02 May 2019 (has links)
No description available.
73

ROBIN HOOD : um ambiente para a avaliação de políticas de balanceamento de carga / Robin Hood: an environment for evaluating load balancing policies

Nogueira, Mauro Lucio Baioneta January 1998 (has links)
There is no doubt about the importance of distributed systems for the development of high-performance computing in the coming decades. There is, however, still much debate about appropriate management policies for the spatially scattered computing resources available in such systems. Load balancing policies address the problem of idle (or, conversely, overloaded) machines in a distributed system. It is not unusual for only a few machines on a network to be effectively used while several others sit underused or even completely idle. Once a task can be executed remotely in order to reduce its response time, it remains to decide how to do so, and these decisions are exactly what load balancing policies are about. Despite the apparent simplicity of their control decisions and the small number of parameters involved, such policies do not behave in an easily predictable way. Under certain conditions they can become excessively unstable, making a series of wrong decisions and, as a consequence, degrading system performance considerably; in such cases it would often be better to have no policy at all. This work presents an environment developed to help system designers and performance analysts build, simulate, and understand more clearly the impact that load balancing decisions have on system performance.
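As a small illustration of the kind of decision logic such policies implement, here is a minimal sender-initiated threshold policy; the parameters and interface are hypothetical and not part of the ROBIN HOOD environment.

    import random

    def threshold_policy(sender, loads, threshold=4, probes=3):
        """Decide where a new task should run: on the sender, or on a probed remote host."""
        if loads[sender] < threshold:
            return sender                          # local load is acceptable
        candidates = [h for h in range(len(loads)) if h != sender]
        for h in random.sample(candidates, k=min(probes, len(candidates))):
            if loads[h] < threshold:
                return h                           # transfer to a lightly loaded host
        return sender                              # probes failed: run locally anyway

    # Tiny illustration: tasks arrive at host 0 and the policy spreads the load
    # (task completions are ignored here, so loads only ever grow).
    loads = [0] * 8
    for _ in range(30):
        loads[threshold_policy(0, loads)] += 1
    print(loads)

Even this toy policy shows the sensitivity the abstract describes: changing the threshold or the number of probes changes how often tasks are transferred, and therefore how stable the system is.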
74

Contributions à la parallélisation de méthodes de type transport Monte-Carlo / Contributions to the parallelization of Monte Carlo transport methods

Gonçalves, Thomas 28 September 2017 (has links)
Monte Carlo particle transport applications study the behaviour of particles moving through a simulation domain. The distribution of particles over the domain is not uniform and evolves dynamically during the simulation. Parallelizing this kind of application on massively parallel architectures therefore means solving a difficult problem: balancing computational load and data across a large number of compute cores. We first identify the difficulties of parallelizing Monte Carlo particle transport applications through theoretical and experimental analyses of the reference parallelization methods. A semi-dynamic approach based on partitioning techniques is then proposed. Finally, we define a dynamic approach able to redistribute workloads and data while keeping the communication volume low. The dynamic approach achieves strong-scaling speedups and a large reduction in memory consumption compared with a perfectly balanced domain replication method.
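As a rough illustration of the kind of load-driven redistribution step a dynamic approach performs, the sketch below greedily reassigns cells to ranks by particle count; it is illustrative only, since the thesis's approach also constrains data movement and communication volume.

    import heapq

    def rebalance(particles_per_cell, n_ranks):
        """Greedy LPT assignment of cells to ranks by particle count."""
        ranks = [(0, r) for r in range(n_ranks)]     # min-heap of (current load, rank id)
        heapq.heapify(ranks)
        assignment = {}
        # Place the heaviest cells first onto the currently least-loaded rank.
        for cell, count in sorted(particles_per_cell.items(), key=lambda kv: -kv[1]):
            load, r = heapq.heappop(ranks)
            assignment[cell] = r
            heapq.heappush(ranks, (load + count, r))
        return assignment

    cells = {f"cell{i}": c for i, c in enumerate([900, 40, 35, 30, 800, 20, 10, 5])}
    print(rebalance(cells, n_ranks=3))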
75

An Approach to QoS-based Task Distribution in Edge Computing Networks for IoT Applications

January 2018 (has links)
The Internet of Things (IoT) is emerging as part of the infrastructure for a large variety of applications that connect many intelligent devices, leading to smart communities. Because the computing resources of IoT devices are severely limited, it is common to offload computation-heavy tasks to systems with sufficient resources, such as servers, cloud systems, or data centers. However, such offloading suffers from both high latency and network congestion in the IoT infrastructure. Edge computing has recently emerged to reduce the negative impact of offloading tasks to remote computing systems: because edge resources are in close proximity to IoT devices, they can reduce offloading latency and network congestion. Yet edge computing has its own drawbacks, such as the limited computing resources of some edge devices and unbalanced loads among them. To effectively exploit the potential of edge computing for IoT applications, efficient task management and load balancing in edge computing networks are necessary. This dissertation presents an approach for periodically distributing tasks within the edge computing network while satisfying the tasks' quality-of-service (QoS) requirements, namely a task completion deadline and a security requirement. The approach aims to maximize the number of tasks that can be accommodated in the edge computing network, taking task priorities into account, and achieves this goal through the joint optimization of computing resource allocation and network bandwidth provisioning. Evaluation results show that the approach increases the number of tasks accommodated in the edge computing network and improves resource utilization. / Dissertation/Thesis / Doctoral Dissertation Computer Engineering 2018
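A hedged sketch of a priority-aware placement step in this spirit follows; the dissertation's joint optimization of computing resources and network bandwidth is far richer, and all names and fields here are hypothetical (an explicit deadline-feasibility check is omitted).

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        priority: int      # higher = more important
        deadline: float    # seconds (used only for ordering in this sketch)
        cpu_demand: float  # abstract compute units
        security: int      # minimum security level required

    @dataclass
    class EdgeNode:
        name: str
        cpu_free: float
        security: int      # security level offered

    def assign(tasks, nodes):
        """Greedily place tasks (by priority, then earliest deadline) on feasible nodes."""
        placement = {}
        for t in sorted(tasks, key=lambda t: (-t.priority, t.deadline)):
            feasible = [n for n in nodes
                        if n.cpu_free >= t.cpu_demand and n.security >= t.security]
            if not feasible:
                continue                                     # task rejected this period
            node = max(feasible, key=lambda n: n.cpu_free)   # most spare capacity
            node.cpu_free -= t.cpu_demand
            placement[t.name] = node.name
        return placement

    tasks = [Task("t1", 2, 0.5, 2.0, 1), Task("t2", 1, 0.2, 3.0, 2), Task("t3", 2, 0.1, 1.0, 1)]
    nodes = [EdgeNode("edge-a", 4.0, 2), EdgeNode("edge-b", 2.0, 1)]
    print(assign(tasks, nodes))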
76

JUMP: Uma política de escalonamento unificada com migração de processos / JUMP: A unified scheduling policy with process migration

Ravasi, Juliano Ferraz 02 April 2009 (has links)
This work presents the design and implementation of JUMP, a scheduling policy with support for process migration. Process migration is an important tool that complements the initial placement performed by the scheduling policy in a distributed parallel environment, allowing dynamic and more refined load balancing and thus better environment performance and shorter response times for distributed parallel applications. The new policy unifies initial placement and process migration in a single algorithm, so that decisions are shared towards the common goal of better performance for CPU-bound applications on heterogeneous clusters. The policy is implemented on top of AMIGO, a flexible and dynamic scheduling environment adapted to support process migration. The performance evaluation showed that the new policy offers substantial gains in response time compared with the other two scheduling policies implemented in AMIGO, in almost all scenarios, for several applications and several environment load conditions.
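To illustrate the unified idea in miniature, the sketch below uses one load metric for both decisions: initial placement picks the node with the lowest load-to-speed ratio, and a migration is proposed only when relative loads diverge beyond a threshold. These heuristics are hypothetical and are not the JUMP policy itself.

    def place(loads, speeds):
        """Initial placement: the node with the lowest load-to-speed ratio."""
        return min(range(len(loads)), key=lambda n: loads[n] / speeds[n])

    def migration_candidate(loads, speeds, gap=2.0):
        """Propose a (source, target) migration when relative loads diverge too much."""
        rel = [loads[n] / speeds[n] for n in range(len(loads))]
        src = max(range(len(rel)), key=rel.__getitem__)
        dst = min(range(len(rel)), key=rel.__getitem__)
        return (src, dst) if rel[src] - rel[dst] > gap else None

    loads, speeds = [8, 2, 1], [1.0, 1.0, 2.0]   # heterogeneous cluster: node 2 is twice as fast
    print(place(loads, speeds))                   # -> 2
    print(migration_candidate(loads, speeds))     # -> (0, 2)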
77

Visualizing Carrier Aggregation Combinations

Helders, Fredrik January 2019 (has links)
As wireless communications become an increasingly important part of our everyday lives, the amount of transmitted data is constantly growing, creating a demand for ever-increasing data rates. One of the technologies used for boosting data rates is carrier aggregation, which allows wireless units to combine multiple connections to the cellular network. However, only a limited number of combinations are defined, so there is a need to search for the best combination in any given setup. This thesis introduces software capable of organizing the defined combinations into tree structures, simplifying the search for optimal combinations as well as allowing the possible connections to be visualized. The thesis proposes a method for creating these trees, together with suggestions on how to visualize important combination characteristics. Different tree traversal algorithms have also been studied, showing that there is little need to search through all possible combinations: a greedy approach performs well while substantially limiting the search complexity.
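The following sketch shows the greedy idea on a toy combination tree: descend by always extending with the child that adds the most bandwidth, and compare with an exhaustive enumeration. The band labels, bandwidths, and tree shape are made up for illustration and are not taken from the 3GPP combination tables.

    class Node:
        def __init__(self, label, bandwidth, children=()):
            self.label = label
            self.bandwidth = bandwidth      # MHz contributed by this carrier
            self.children = list(children)

    def greedy_best(node):
        """Follow the locally best child at each level; returns (total bandwidth, path)."""
        total, path = node.bandwidth, [node.label]
        while node.children:
            node = max(node.children, key=lambda c: c.bandwidth)
            total += node.bandwidth
            path.append(node.label)
        return total, path

    def exhaustive_best(node):
        """Baseline: examine every root-to-leaf combination."""
        if not node.children:
            return node.bandwidth, [node.label]
        best = max((exhaustive_best(c) for c in node.children), key=lambda r: r[0])
        return node.bandwidth + best[0], [node.label] + best[1]

    tree = Node("B1", 20, [
        Node("B3", 20, [Node("B7", 20), Node("B20", 10)]),
        Node("B28", 10, [Node("B7", 20)]),
    ])
    print(greedy_best(tree))       # (60, ['B1', 'B3', 'B7'])
    print(exhaustive_best(tree))   # (60, ['B1', 'B3', 'B7'])

On real combination trees the greedy path is not guaranteed to match the exhaustive optimum, which is exactly the trade-off between search cost and quality that the thesis investigates.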
78

Effective task assignment strategies for distributed systems under highly variable workloads

Broberg, James Andrew, james@broberg.com.au January 2007 (has links)
Heavy-tailed workload distributions are commonly experienced in many areas of distributed computing. Such workloads are highly variable: a small number of very large tasks make up a large proportion of the workload, which makes the load very hard to distribute effectively. Traditional task assignment policies are ineffective under these conditions because they were formulated under the assumption of an exponentially distributed workload. Size-based task assignment policies have been proposed to handle heavy-tailed workloads, but their applicability is limited by their static nature and by the assumption of prior knowledge of a task's service requirement. This thesis analyses existing approaches to load distribution under heavy-tailed workloads and presents a new generalised task assignment policy that significantly improves performance for many distributed applications by intelligently addressing the negative effects that highly variable workloads have on performance. Many problems associated with the modelling and optimisation of systems under highly variable workloads are then addressed by a novel technique that approximates these workloads with simpler mathematical representations without losing their pertinent original properties. Finally, advanced queueing metrics (such as the variance of key measures like waiting time and slowdown, which are difficult to obtain analytically) are obtained through rigorous simulation.
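As a brief illustration of the heavy-tailed property described above and of the classical size-based dispatch rule that such work builds on, the sketch below draws Pareto-distributed task sizes and routes each task to a host by size band. The cutoffs and parameters are arbitrary, and this is not the generalised policy proposed in the thesis.

    import random

    random.seed(1)
    sizes = sorted(random.paretovariate(1.1) for _ in range(100_000))  # heavy-tailed sizes

    # A small fraction of the largest tasks carries most of the total work.
    top1 = sum(sizes[-1000:]) / sum(sizes)
    print(f"top 1% of tasks account for {top1:.0%} of the work")

    # Size-based dispatch: each host serves one size band, so short tasks are never
    # queued behind the rare, very large ones.
    cutoffs = [2.0, 5.0, 20.0, float("inf")]
    def dispatch(task_size):
        return next(host for host, cut in enumerate(cutoffs) if task_size <= cut)

    print(dispatch(1.3), dispatch(7.0), dispatch(500.0))   # -> 0 2 3

The static cutoffs and the assumption that each task's size is known up front are precisely the limitations of size-based policies that the thesis sets out to remove.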
79

Load balancing of IP telephony / Lastbalansering av IP-telefoni

Montag, David January 2008 (has links)
In today's world, more and more phone calls are made over IP. This results in an increasing demand for scalable IP telephony equipment.

Ingate Systems AB produces firewalls specialized in handling IP telephony. They have an inherent limit in the number of concurrent phone calls that they can handle, which can be a bottleneck at high loads. A load balancing solution is available in the platform, but it has a number of drawbacks, such as media latency and client capability requirements, that limit its usage.

Many companies provide load balancing solutions for SIP. However, few appear to handle all the problematic scenarios that the Ingate firewall does. This master's thesis aims to add load balancing functionality to the Ingate firewall so that it can handle all types of clients.

By splitting the firewall into two completely separate layers, a SIP layer and a firewall layer, the concept of a virtual machine emerges. A machine is no longer restricted to its physical SIP and firewall layers; instead, virtual machines are used to process calls. They still have SIP and firewall layers, but the layers can reside on different physical machines.

This thesis demonstrates the operation of an innovative load balancing implementation. The implementation was evaluated, and using four machines the test setup performed 50% better than the original Ingate platform while still retaining all functionality, something that was not possible with the original platform. This surpassed both the company's and my own expectations.
80

Evaluation of Load Balancing Algorithms in IP Networks: A case study at TeliaSonera

Hasselström, Emil, Sjögren, Therese January 2005 (has links)
The principle of load balancing is to distribute the data load more evenly over the network in order to increase network performance and efficiency. With dynamic load balancing, the routing is updated at certain intervals. This thesis was developed to evaluate load balancing methods in the IP network of TeliaSonera. Load balancing using short path routing, bottleneck load balancing, and load balancing using MPLS have been evaluated. Short path routing is a flow sharing technique that allows routing on paths other than the shortest one.

Load balancing using short path routing is achieved by dynamic updates of the link weights. Bottleneck load balancing is by its nature a dynamic algorithm; unlike load balancing using short path routing, it updates the flow sharing rather than the metrics. The algorithm uses information about the current flow sharing and link loads to detect bottlenecks within the network, and this information is used to calculate new flow sharing parameters. When using MPLS, one or more complete routing paths (LSPs) are defined at each edge LSR before any traffic is sent. MPLS brings the ability to perform flow sharing by defining the paths to be used and how the outgoing data load is to be shared among them.

The model has been built from data about the network supplied by TeliaSonera. It consists of a topology part, a traffic part, a routing part, and a cost part. The traffic model consists of an OD demand matrix, which has been estimated from collected link loads using two estimation models: the gravity model and an optimisation model.

The algorithms have been analysed in several scenarios: normal network operation, core node failure, core link failure, and DWDM system failure. A cost function, where the cost increases as the link load increases, has been used to evaluate the algorithms. The signalling requirements for implementing the load balancing algorithms have also been investigated.
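As a small illustration of the gravity model mentioned above, origin-destination demand can be taken proportional to the traffic entering at the origin and leaving at the destination. The node names and volumes below are made up for illustration and are not TeliaSonera data.

    def gravity_od(ingress, egress):
        """Estimate an origin-destination demand matrix from per-node traffic totals."""
        total = sum(egress.values())
        return {(i, j): ingress[i] * egress[j] / total
                for i in ingress for j in egress if i != j}

    ingress = {"sto": 40.0, "gbg": 25.0, "mmo": 15.0}   # traffic entering each node
    egress  = {"sto": 45.0, "gbg": 20.0, "mmo": 15.0}   # traffic leaving each node
    for od, demand in sorted(gravity_od(ingress, egress).items()):
        print(od, round(demand, 1))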
