41 |
Steady State Analysis of Load Balancing Algorithms in the Heavy Traffic Regime. January 2019.
abstract: This dissertation studies load balancing algorithms for many-server systems (with N servers), focusing on the steady-state performance of such algorithms in the heavy traffic regime. The framework of Stein’s method combined with (iterative) state-space collapse (SSC) is used to analyze three load balancing systems: 1) load balancing in the Sub-Halfin-Whitt regime with exponential service times; 2) load balancing in the Beyond-Halfin-Whitt regime with exponential service times; 3) load balancing in the Sub-Halfin-Whitt regime with Coxian-2 service times.
For the Sub-Halfin-Whitt regime, sufficient conditions are established under which any load balancing algorithm satisfying them achieves both asymptotically zero waiting time and asymptotically zero waiting probability. Furthermore, the number of servers with more than one job is o(1); in other words, the system collapses to a one-dimensional state space. The result is proven using Stein’s method and state-space collapse (SSC), which are powerful mathematical tools for the steady-state analysis of load balancing algorithms. The second system operates in an even “heavier” traffic regime, and an iterative refinement procedure is proposed to obtain the steady-state metrics. Again, asymptotically zero delay and zero waiting probability are established for a set of load balancing algorithms. Unlike the first system, the state space collapses to a two-dimensional space rather than a one-dimensional one. The third system is more challenging because of the “non-monotonicity” introduced by Coxian-2 service times, and an iterative state-space collapse procedure is proposed to tackle this challenge. For each of the three systems, a set of load balancing algorithms is identified under which the probability that an incoming job is routed to an idle server tends to one in steady state. This set includes join-the-shortest-queue (JSQ), idle-one-first (I1F), join-the-idle-queue (JIQ), and power-of-d-choices (Pod) with a carefully chosen d. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
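For intuition, the policies named at the end of the abstract differ only in how the destination server is chosen from the current queue lengths. A minimal sketch of these routing rules, assuming their common textbook definitions; the toy queue-length example is illustrative and not taken from the dissertation:

```python
import random

def route(queues, policy="jsq", d=2):
    """Pick a destination server for an arriving job given current queue lengths.

    queues : list of ints, number of jobs at each of the N servers
    policy : 'jsq' - join the shortest queue over all N servers
             'jiq' - join an idle queue if one exists, else pick uniformly at random
             'i1f' - prefer an idle server, then a server with exactly one job
             'pod' - sample d servers uniformly and join the shortest of those
    """
    n = len(queues)
    if policy == "jsq":
        return min(range(n), key=lambda i: queues[i])
    if policy == "jiq":
        idle = [i for i in range(n) if queues[i] == 0]
        return random.choice(idle) if idle else random.randrange(n)
    if policy == "i1f":
        for target in (0, 1):
            candidates = [i for i in range(n) if queues[i] == target]
            if candidates:
                return random.choice(candidates)
        return random.randrange(n)
    if policy == "pod":
        sample = random.sample(range(n), d)
        return min(sample, key=lambda i: queues[i])
    raise ValueError(f"unknown policy {policy!r}")

# With most servers idle, every policy above sends the arrival to an idle server;
# the dissertation shows the probability of this event tends to one in steady state.
queues = [0, 2, 0, 1, 0, 0, 3, 0]
for p in ("jsq", "jiq", "i1f", "pod"):
    print(p, "->", route(queues, p))
```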
|
42 |
Optical frequency comb generation using InP based quantum-dash/quantum-well single section mode-locked lasers. Panapakkam Venkatesan, Vivek, 05 December 2016.
The increasing demand for high-capacity, low-cost, compact, and energy-efficient optical transceivers for data center interconnects requires new technological solutions. On the transmitter side, optical frequency combs generating a large number of phase-coherent optical carriers are attractive sources for next-generation datacenter interconnects; combined with wavelength division multiplexing (WDM) and advanced modulation formats, they can deliver unprecedented transmission capacities. In the framework of the European project BIG PIPES (Broadband Integrated and Green Photonic Interconnects for High-Performance Computing and Enterprise Systems), this thesis investigates the generation of optical frequency combs using single-section mode-locked lasers based on InAs/InP quantum-dash and InGaAsP/InP quantum-well semiconductor nanostructures. These light sources, based on new active-layer structures and cavity designs, are extensively characterized against the requirements of the project. A comprehensive investigation of the amplitude and phase noise of these comb sources is performed with advanced measurement techniques to evaluate their suitability for high-data-rate transmission systems. Record multi-terabit-per-second per-chip capacities and reasonably low energy-per-bit consumption are demonstrated in fibre transmission experiments, making these sources well suited for next-generation datacenter interconnects.
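As a rough back-of-the-envelope illustration of how a single comb source reaches multi-Tbit/s aggregate rates, the per-chip capacity is the number of usable comb lines times the per-carrier symbol rate times the number of bits per symbol. The numbers below are assumed for the sketch, not values reported in the thesis:

```python
# Illustrative WDM capacity estimate for a comb-driven transmitter.
# All numbers are assumed for this sketch, not taken from the thesis.
num_lines = 50            # usable comb lines (optical carriers)
symbol_rate_gbaud = 25    # symbol rate per carrier, in GBaud
bits_per_symbol = 4       # e.g. 16-QAM, single polarization

per_carrier_gbps = symbol_rate_gbaud * bits_per_symbol
aggregate_tbps = num_lines * per_carrier_gbps / 1000
print(f"{per_carrier_gbps} Gb/s per carrier, {aggregate_tbps:.1f} Tb/s aggregate")
# -> 100 Gb/s per carrier, 5.0 Tb/s aggregate
```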
|
43 |
Extending the Cutting Stock Problem for Consolidating Services with Stochastic Workloads. Hähnel, Markus; Martinovic, John; Scheithauer, Guntram; Fischer, Andreas; Schill, Alexander; Dargie, Waltenegus, 16 May 2023.
Data centres and similar server clusters consume a large amount of energy, but not all of it produces useful work. Servers consume a disproportionate amount of energy when they are idle, underutilised, or overloaded. The effect of these conditions can be minimised by balancing the demand for and the supply of resources through careful prediction of future workloads and their efficient consolidation. In this paper we extend the cutting stock problem to consolidate workloads with stochastic characteristics. To this end, we employ the aggregate probability density function of co-located, simultaneously executing services to establish valid patterns; a valid pattern is one yielding an overall resource utilisation below a set threshold. We tested the scope and usefulness of our approach on a 16-core server with 29 different benchmarks. The workloads of these benchmarks were generated from the CPU utilisation traces of 100 real-world virtual machines obtained from a Google data centre hosting more than 32,000 virtual machines. Altogether, we considered 600 different consolidation scenarios in our experiments. We compared the performance of our approach (system overload probability, job completion time, and energy consumption) with four existing or proposed scheduling strategies. In each category, our approach incurred a modest penalty with respect to the best-performing approach in that category, but overall it achieved a remarkable performance, clearly demonstrating its capacity to strike the best trade-off between resource consumption and performance.
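The core object here is the valid pattern: a set of co-located services whose aggregate utilisation exceeds the server's threshold only with acceptably small probability. A minimal Monte Carlo sketch of such a check, with the workload distributions and parameter names assumed purely for illustration (the paper works with the aggregate probability density function directly):

```python
import random

def is_valid_pattern(services, capacity=1.0, overload_prob=0.05, samples=10_000):
    """Check whether co-locating `services` keeps the overload probability small.

    services : list of callables, each returning one random draw of a service's
               CPU utilisation (a stand-in for its workload distribution).
    capacity : normalised utilisation threshold of the server.
    Returns True if the estimated P(sum of utilisations > capacity) is below
    `overload_prob`.
    """
    overloads = sum(
        1 for _ in range(samples)
        if sum(draw() for draw in services) > capacity
    )
    return overloads / samples < overload_prob

# Two services modelled (purely for illustration) as truncated Gaussian utilisations.
svc = lambda mean, std: (lambda: min(max(random.gauss(mean, std), 0.0), 1.0))
print(is_valid_pattern([svc(0.3, 0.1), svc(0.3, 0.1)]))   # likely True: rarely overloads
print(is_valid_pattern([svc(0.5, 0.2), svc(0.5, 0.2)]))   # likely False: overloads often
```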
|
44 |
ICT Design Unsustainability & the Path toward Environmentally Sustainable Technologies. Bibri, Mohamed, January 2009.
This study investigates the negative environmental impacts of prevailing ICT design approaches and explores potential remedies for ICT design unsustainability from environmental and corporate sustainability perspectives. More specifically, it aims to spotlight key environmental issues related to ICT design, including resource depletion, GHG emissions resulting from energy-intensive consumption, toxic waste disposal, and hazardous chemical use, and to shed light on how alternative design solutions can be devised on environmental sustainability principles to achieve the goals of sustainable technologies. The study highlights the relationship between ICT design and sustainability and how they can symbiotically affect one another. To achieve this aim, an extensive literature review was performed covering empirical, theoretical, and critical scholarship. The study draws on a variety of sources to survey the negative environmental impacts of the current mainstream ICT design approach and to review potential remedies for the unsustainability of ICT design. For theory, central themes were selected for review given the synergy and integration between them with respect to the topic under investigation: design issues; design science; a design research framework for ICT; sustainability; corporate sustainability; and design and sustainability. The findings highlight the unsustainability of the current mainstream ICT design approach. Key environmental issues include resource depletion through the extraction of huge amounts of material and scarce elements; energy-intensive consumption and GHG emissions, especially during the ICT use phase; toxic waste disposal; and hazardous substance use. Potential remedies for ICT design unsustainability include dematerialization as an effective strategy to minimize resource depletion; decarbonization to cut energy consumption by using energy efficiently over the life cycle and relying on renewable energy; recyclability through design with life cycle thinking (LCT) and extending the operational life of ICT equipment through reuse; and mitigating hazardous chemicals through green design with low-noxious, non-noxious, or less hazardous products. As to solving the data center dilemma, design solutions range from hardware and software to technological improvements and adjustments. Furthermore, corporate sustainability can serve as a strategic model for the ICT sector to respond to environmental issues, including those associated with unsustainable ICT design. In the same vein, by adopting corporate sustainability, ICT-enabled organizations can rationalize energy usage to reduce GHG emissions and thereby help alleviate global warming. This study provides a novel approach to sustainable ICT design, highlighting the unsustainability of its current mainstream practices. The review advances extant literature reviews by highlighting the symbiotic relationship between ICT design and environmental sustainability from both research and practice perspectives. The study adds to the body of knowledge and previous research on ICT and sustainability, and presents contributions and avenues for further theoretical and empirical research and development.
|
45 |
The Effect of the Bypass Factor on Design and Geometry of the Evaporator for the Cooling Unit. Vytasil, Michal, January 2016.
This diploma thesis focuses on the effect of the bypass factor on the design and geometry of the evaporator for the cooling unit of a data centre. The effect of the bypass factor on the individual design parameters is examined in detail. All dependencies are captured in graphs, each accompanied by a comment on the respective parameter. Part C presents the mathematical and physical calculations and procedures leading to the design of the heat exchanger. Finally, the calculations are evaluated and possible improvements for practice are suggested.
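For reference, the bypass factor of a cooling coil is conventionally defined from the air temperatures at the coil inlet and outlet and the apparatus dew point (ADP). A minimal sketch of that textbook relation, with illustrative temperatures rather than values from the thesis:

```python
def bypass_factor(t_in, t_out, t_adp):
    """Textbook coil bypass factor: the fraction of air that effectively
    bypasses the coil surface, BF = (t_out - t_adp) / (t_in - t_adp)."""
    return (t_out - t_adp) / (t_in - t_adp)

# Illustrative dry-bulb temperatures in degrees C, not taken from the thesis:
print(bypass_factor(t_in=27.0, t_out=14.0, t_adp=10.0))  # -> ~0.235
```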
|