11. Power-Aware Datacenter Networking and Optimization. Yi, Qing. 02 March 2017.
Present-day datacenter networks (DCNs) are designed to achieve full bisection bandwidth in order to provide high network throughput and server agility. However, the average utilization of typical DCN infrastructure is below 10% for significant time intervals, and energy is wasted during these periods. In this thesis we analyze the traffic behavior of datacenter networks using traces as well as simulated models. Based on the insights developed, we present techniques that reduce energy waste by making energy use scale linearly with load. The solutions developed are analyzed via simulations, formal analysis, and prototyping. The impact of our work is significant because the energy savings we obtain for the networking infrastructure of DCNs are near optimal.
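A back-of-the-envelope illustration of what "energy use scaling linearly with load" buys at the roughly 10% utilization cited above (a minimal sketch; the power figures and the function are illustrative assumptions, not numbers from the thesis):

    def network_power(utilization, p_idle, p_peak):
        """Power draw of a device whose energy use scales linearly with load.

        utilization: offered load as a fraction in [0, 1]
        p_idle:      power drawn at zero load (watts)
        p_peak:      power drawn at full load (watts)
        """
        return p_idle + utilization * (p_peak - p_idle)

    # Today's switches draw close to peak power regardless of load, so at
    # ~10% average utilization almost all of the energy is wasted.  Ideal
    # load-proportional hardware (p_idle == 0) would draw only ~10% of peak.
    flat = network_power(0.10, p_idle=90.0, p_peak=100.0)          # about 91 W
    proportional = network_power(0.10, p_idle=0.0, p_peak=100.0)   # 10 W
    print(f"non-proportional: {flat:.0f} W, load-proportional: {proportional:.0f} W")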
A key finding of our traffic analysis is that network switch ports within the DCN are grossly under-utilized. Therefore, the first solution we study is to modify the routing within the network so that most traffic is forced onto a small subset of the switches. This increases the hop count for the traffic but allows many switch ports to be powered off. The exact extent of the energy savings is derived and validated using simulations. An alternative strategy we explore in this context is to replace about half of the switches with a smaller number of switches that have higher port density. This enables even greater traffic consolidation and thus allows even more ports to sleep. Finally, we explore a third approach in which we begin with an end-to-end traffic model and incrementally build a DCN topology optimized for that model; in other words, the network topology is tailored to the intended use of the datacenter. This approach makes sense because, as other researchers have observed, the traffic in a datacenter depends heavily on the datacenter's primary use.
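A minimal sketch of the consolidation idea, assuming a simple first-fit-decreasing packing of flow demands onto ports (the thesis derives the exact savings analytically and via simulation; the routine and its parameters here are hypothetical):

    def consolidate(flows, port_capacity, num_ports):
        """Pack flow demands onto as few ports as possible (first-fit decreasing).

        flows:         per-flow bandwidth demands (e.g. in Gb/s)
        port_capacity: capacity of a single port, in the same units
        num_ports:     number of ports available
        Returns (flow index -> port assignment, list of ports left idle).
        """
        loads = [0.0] * num_ports
        assignment = {}
        # Place the largest flows first, each on the first port with room to spare.
        for i in sorted(range(len(flows)), key=lambda j: flows[j], reverse=True):
            for p in range(num_ports):
                if loads[p] + flows[i] <= port_capacity:
                    loads[p] += flows[i]
                    assignment[i] = p
                    break
            else:
                raise ValueError("offered load exceeds available port capacity")
        idle = [p for p, load in enumerate(loads) if load == 0.0]
        return assignment, idle

    # At the low utilizations seen in the traces, most ports stay idle after
    # consolidation and can be put to sleep.
    assignment, idle = consolidate([0.2, 0.1, 0.4, 0.3], port_capacity=1.0, num_ports=8)
    print(f"{len(idle)} of 8 ports can sleep")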
A second line of research we undertake is to merge traffic in the analog domain before feeding it to switches. This is accomplished by using a passive device we call a merge network. Using a merge network enables us to attain linear scaling of energy use with load regardless of the datacenter traffic model. The challenge in using such a device is that layer 2 and layer 3 protocols require a one-to-one mapping of hardware addresses to IP (Internet Protocol) addresses. We overcome this problem by building a software shim layer that hides the fact that traffic is being merged. To validate the idea, we build a simple merge network for gigabit optical interfaces and demonstrate correct operation of layer 2 and layer 3 protocols at line speed. We also conduct measurements to study how traffic is mixed in the merge network before being fed to the switch, and we show that the merge network consumes only a fraction of a watt, which makes it a very attractive solution for energy efficiency.
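The abstract does not describe the shim's internals; the sketch below only illustrates the kind of bookkeeping such a layer might perform, namely tracking which hardware address owns which IP address on the shared merged link so that layers 2 and 3 still see a consistent one-to-one mapping (the class and its methods are hypothetical):

    class MergeShim:
        """Hypothetical software shim sitting above a merged (shared) link.

        Hosts behind the passive merge network share one switch port, so the
        shim records which hardware (MAC) address owns which IP address and
        answers address-resolution lookups from that table.
        """

        def __init__(self):
            self.ip_to_mac = {}

        def learn(self, src_mac, src_ip):
            # Record the sender of every frame seen on the merged link.
            self.ip_to_mac[src_ip] = src_mac

        def resolve(self, dst_ip):
            # Look up the hardware address for an IP address so the hosts
            # behind the merge network remain individually addressable.
            return self.ip_to_mac.get(dst_ip)

    shim = MergeShim()
    shim.learn("02:00:00:00:00:01", "10.0.0.1")
    shim.learn("02:00:00:00:00:02", "10.0.0.2")
    print(shim.resolve("10.0.0.2"))  # 02:00:00:00:00:02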
In this research we have developed solutions that enable linear scaling of energy with load in datacenter networks. The techniques developed have been analyzed via modeling and simulation as well as prototyping. We believe that these solutions can be incorporated into future DCNs with little effort.
12. Metrics collecting tool for load balancing of distributed applications. Fernandes, Michael P. 01 July 2002.
No description available.
13. A multiple ant colony optimization approach for load-balancing. January 2003.
Sun Weng Hong. Thesis submitted in October 2002. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 116-121). Abstracts in English and Chinese.
Table of contents:
  1. Introduction (p. 7)
  2. Ant Colony Optimization (ACO) (p. 9)
    2.1 ACO vs. Traditional Routing (p. 10)
      2.1.1 Routing information (p. 10)
      2.1.2 Routing overhead (p. 12)
      2.1.3 Adaptivity and Stagnation (p. 14)
    2.2 Approaches to Mitigate Stagnation (p. 15)
      2.2.1 Pheromone control (p. 15)
        2.2.1.1 Evaporation (p. 15)
        2.2.1.2 Aging (p. 16)
        2.2.1.3 Limiting and smoothing pheromone (p. 17)
      2.2.2 Pheromone-Heuristic Control (p. 18)
      2.2.3 Privileged Pheromone Laying (p. 19)
      2.2.4 Critique and Comparison (p. 21)
        2.2.4.1 Aging (p. 22)
        2.2.4.2 Limiting pheromone (p. 22)
        2.2.4.3 Pheromone smoothing (p. 23)
        2.2.4.4 Evaporation (p. 25)
        2.2.4.5 Privileged Pheromone Laying (p. 25)
        2.2.4.6 Pheromone-heuristic control (p. 26)
    2.3 ACO in Routing and Load Balancing (p. 27)
      2.3.1 Ant-based Control and Its Ramifications (p. 27)
      2.3.2 AntNet and Its Extensions (p. 35)
      2.3.3 ASGA and SynthECA (p. 40)
  3. Multiple Ant Colony Optimization (MACO) (p. 45)
  4. MACO vs. ACO (p. 51)
    4.1 Analysis of MACO vs. ACO (p. 53)
  5. Applying MACO in Load Balancing (p. 89)
    5.1 Applying MACO in Load-balancing (p. 89)
    5.2 Problem Formulation (p. 91)
    5.3 Types of ant in MACO (p. 93)
      5.3.1 Allocator (p. 94)
      5.3.2 Destagnator (p. 95)
      5.3.3 Deallocator (p. 100)
    5.4 Global Algorithm (p. 100)
    5.5 Discussion of the number of ant colonies (p. 103)
  6. Experimental Results (p. 105)
  7. Conclusion (p. 114)
  8. References (p. 116)
  Appendix A. Ants in MACO (p. 122)
  Appendix B. Ants in SACO (p. 123)
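The pheromone-control techniques named in the table of contents (evaporation, aging, limiting, smoothing) are variations on the basic pheromone update rule; a generic, hypothetical sketch of such an update, not taken from the thesis, is:

    def update_pheromone(tau, chosen, delta, rho=0.1, tau_min=0.01, tau_max=5.0):
        """One generic ACO pheromone update with evaporation and limiting.

        tau:     dict mapping a route (edge) to its pheromone level
        chosen:  the route selected by the ant on this step
        delta:   reinforcement deposited on the chosen route
        rho:     evaporation rate applied to every route
        tau_min, tau_max: bounds that keep any one route from dominating
                          (a simple guard against stagnation)
        """
        for route in tau:
            tau[route] *= (1.0 - rho)           # evaporation
        tau[chosen] += delta                     # deposit on the chosen route
        for route in tau:
            tau[route] = min(max(tau[route], tau_min), tau_max)  # limiting
        return tau

    tau = {"A-B": 1.0, "A-C": 1.0}
    update_pheromone(tau, chosen="A-B", delta=0.5)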
14. A Bandwidth Market in an IP Network. Lusilao-Zodi, Guy-Alain. 2008.
Thesis (MSc (Mathematical Sciences. Computer Science))--University of Stellenbosch, 2008.
Consider a path-oriented telecommunications network where calls arrive at each route according to a Poisson process. Each call brings on average a fixed number of packets that are offered to the route. The packet inter-arrival times and the packet lengths are exponentially distributed. Each route can queue a finite number of packets while one packet is being transmitted. Each accepted packet/call generates an amount of revenue for the route manager. At specified time instants a route manager can acquire additional capacity (“interface capacity”) in order to carry more calls, and/or additional buffer space in order to carry more packets, in either case earning more revenue; alternatively, a route manager can earn additional revenue by selling surplus interface capacity and/or surplus buffer space to other route managers that (possibly temporarily) value it more highly. We present a method for efficiently computing the buying and the selling prices of buffer space.
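The model described is an M/M/1/K queue (room for K packets in total, including the one in transmission). One plausible basis for a buffer price, shown here purely as an assumption rather than as the thesis's actual method, is the marginal revenue earned by adding one more packet buffer:

    def mm1k_blocking(rho, K):
        """Blocking probability of an M/M/1/K queue at offered load rho = lambda/mu."""
        if abs(rho - 1.0) < 1e-12:
            return 1.0 / (K + 1)
        return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

    def revenue_rate(lam, mu, K, revenue_per_packet):
        """Long-run revenue rate: accepted packets per unit time times revenue per packet."""
        return revenue_per_packet * lam * (1.0 - mm1k_blocking(lam / mu, K))

    def buffer_slot_value(lam, mu, K, revenue_per_packet):
        """Marginal revenue of one extra packet buffer: a candidate buying/selling price."""
        return (revenue_rate(lam, mu, K + 1, revenue_per_packet)
                - revenue_rate(lam, mu, K, revenue_per_packet))

    # A route offered 8 packets/s, serving 10 packets/s, with room for 5 packets:
    print(buffer_slot_value(lam=8.0, mu=10.0, K=5, revenue_per_packet=1.0))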
Moreover, we propose a bandwidth reallocation scheme capable of improving the network's overall rate of earning revenue at both the call level and the packet level. Our reallocation scheme combines the Erlang price [4] and our proposed buffer space price (the M/M/1/K price) to reallocate interface capacity and buffer space among routes. The proposed scheme uses local rules to decide whether or not to adjust the interface capacity and/or the buffer space. Simulation results show that the reallocation scheme achieves good performance when applied to a fictitious network of 30 nodes and 46 links based on the geography of Europe.
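The call-level "Erlang price" rests on the Erlang B blocking formula; the sketch below shows one hypothetical way a route manager could value an extra unit of interface capacity and compare it with a neighbour's asking price (the decision rule is an assumption, not the scheme evaluated in the thesis):

    def erlang_b(circuits, offered_load):
        """Erlang B blocking probability computed with the standard recursion."""
        b = 1.0
        for n in range(1, circuits + 1):
            b = offered_load * b / (n + offered_load * b)
        return b

    def circuit_value(circuits, offered_load, call_rate, revenue_per_call):
        """Marginal call-level revenue of one extra circuit on a route:
        one possible basis for an 'Erlang price' of interface capacity."""
        carried_now = call_rate * (1.0 - erlang_b(circuits, offered_load))
        carried_plus = call_rate * (1.0 - erlang_b(circuits + 1, offered_load))
        return revenue_per_call * (carried_plus - carried_now)

    # A manager would buy capacity from a neighbouring route whenever its own
    # marginal value exceeds the neighbour's selling price, and sell otherwise.
    print(circuit_value(circuits=10, offered_load=8.0, call_rate=8.0, revenue_per_call=1.0))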