21 |
Powering the Information Age: Metrics, Social Cost Optimization Strategies, and Indirect Effects Related to Data Center Energy Use
Horner, Nathaniel Charles, 01 August 2016
This dissertation contains three studies examining aspects of energy use by data centers and other information and communication technology (ICT) infrastructure necessary to support the electronic services that now form such a pervasive aspect of daily life. The energy consumption of ICT in general and data centers in particular has been of growing interest to both industry and the public, with continued calls for increased efficiency and greater focus on environmental impacts. The first study examines the metrics used to assess data center energy performance and finds that power usage effectiveness (PUE), the de facto industry standard, only accounts for one of four critical aspects of data center energy performance. PUE measures the overhead of the facility infrastructure but does not consider the efficiency of the IT equipment, its utilization, or the emissions profile of the power source. As a result, PUE corresponds poorly with energy and carbon efficiency, as demonstrated using a small set of empirical data center energy use measurements. The second study lays out a taxonomy of indirect energy impacts to help assess whether ICT's direct energy consumption is offset by its energy benefits, and concludes that ICT likely has a large potential net energy benefit, but that there is no consensus on the sign or magnitude of actual savings, which are largely dependent upon implementation details. The third study estimates the potential of dynamic load shifting in a content distribution network to reduce both private costs and emissions-related externalities associated with electricity consumption. Utilizing variable marginal retail prices based on wholesale electricity markets and marginal damages estimated from emissions data in a cost-minimization model, the analysis finds that load shifting can either reduce data center power bills by approximately 25%–33% or avoid 30%–40% of public damages, while a range of joint cost minimization strategies enables simultaneous reduction of both private and public costs. The vast majority of these savings can be achieved even under existing bandwidth and network distance constraints, although current industry trends towards virtualization, energy efficiency, and green power may make load shifting less appealing.
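To make the first study's point concrete, here is a minimal sketch (our illustration, not the dissertation's analysis, with all figures invented) of why two facilities with identical PUE can differ sharply in carbon efficiency once grid emissions intensity and useful work are taken into account:

```python
# A minimal sketch showing why PUE alone can mislead: two hypothetical
# facilities with identical PUE but different grid carbon intensity and
# IT utilization. All numbers are illustrative assumptions.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh

def carbon_kg(total_facility_kwh: float, grid_kg_per_kwh: float) -> float:
    """Carbon footprint: total energy times the grid's emissions intensity."""
    return total_facility_kwh * grid_kg_per_kwh

# Facility A: coal-heavy grid, low server utilization.
# Facility B: hydro-heavy grid, high server utilization.
a = {"total": 1500.0, "it": 1000.0, "grid": 0.9, "useful_work": 200.0}
b = {"total": 1500.0, "it": 1000.0, "grid": 0.1, "useful_work": 800.0}

for name, f in (("A", a), ("B", b)):
    print(name,
          "PUE =", pue(f["total"], f["it"]),            # identical: 1.5
          "kgCO2 =", carbon_kg(f["total"], f["grid"]),  # differs 9x
          "kgCO2/unit work =",
          carbon_kg(f["total"], f["grid"]) / f["useful_work"])
```

Both facilities report the same PUE, yet their emissions per unit of useful work differ by more than an order of magnitude, which is the gap the study's four-aspect framing is meant to expose.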
22 |
Online Social Network Data Placement over Clouds
Jiao, Lei, 10 July 2014
No description available.
23 |
Efficient Workload and Resource Management in Datacenters
Xu, Hong, 13 August 2013
This dissertation focuses on developing algorithms and systems to improve the efficiency of operating mega datacenters with hundreds of thousands of servers. In particular, it seeks to address two challenges: first, how to distribute workload among a set of datacenters geographically deployed across the wide area; and second, how to manage the server resources of datacenters using virtualization technology.
In the first part, we consider the workload management problem in geo-distributed datacenters. We first present a novel distributed workload management algorithm that jointly considers request mapping, which determines how to direct user requests to an appropriate datacenter for processing, and response routing, which decides how to select a path among the set of ISP links of a datacenter to route the response packets back to a user. In the next chapter, we study some key aspects of cost and workload in geo-distributed datacenters that have not been fully understood before. Through extensive empirical studies of climate data and cooling systems, we make a case for temperature-aware workload management, where the geographical diversity of temperature and its impact on cooling energy efficiency can be used to reduce overall cooling energy. Moreover, we advocate holistic workload management for both interactive and batch jobs, where the delay-tolerant, elastic nature of batch jobs can be exploited to further reduce energy cost. A consistent 15%–20% cooling energy reduction and a 5%–20% overall cost reduction are observed in extensive trace-driven simulations.
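To illustrate the temperature-aware idea (a sketch of ours, not the thesis's algorithm; the cooling model and all numbers are assumptions), one can model cooling overhead as a PUE that grows with outside temperature and greedily map load to the sites with the lowest effective energy cost per request:

```python
# Hedged sketch of temperature-aware request mapping. Illustrative only:
# the linear cooling model and all site data below are invented.

def effective_pue(outside_temp_c: float) -> float:
    """Toy cooling model: overhead grows with outside air temperature."""
    return 1.2 + 0.02 * max(0.0, outside_temp_c - 15.0)

def map_requests(total_requests: int, sites: list[dict]) -> dict[str, int]:
    """Greedily send load to sites with the cheapest energy per request."""
    # cost per request = per-request IT energy * site PUE * electricity price
    ranked = sorted(
        sites,
        key=lambda s: s["kwh_per_req"] * effective_pue(s["temp_c"]) * s["price"],
    )
    assignment, remaining = {}, total_requests
    for s in ranked:
        take = min(remaining, s["capacity"])
        assignment[s["name"]] = take
        remaining -= take
        if remaining == 0:
            break
    return assignment

sites = [
    {"name": "SiteHot",  "temp_c": 35.0, "price": 0.07, "capacity": 600, "kwh_per_req": 0.001},
    {"name": "SiteCool", "temp_c": 12.0, "price": 0.12, "capacity": 500, "kwh_per_req": 0.001},
    {"name": "SiteMild", "temp_c": 18.0, "price": 0.05, "capacity": 400, "kwh_per_req": 0.001},
]
# Sites with the lowest effective cost per request fill up first; note that a
# cheap but hot site can still win over a cool but expensive one.
print(map_requests(1000, sites))
```

The interesting behavior the thesis exploits is visible even in this toy: temperature and price interact, so neither the coolest nor the cheapest site alone determines the best mapping.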
In the second part of the thesis, we consider the resource management problem in virtualized datacenters. We design Anchor, a scalable and flexible architecture that efficiently supports a variety of resource management policies. We implement a prototype of Anchor on a small-scale in-house datacenter with 20 servers. Experimental results and trace-driven simulations show that Anchor is effective in realizing various resource management policies, and that its simple algorithms can solve virtual machine allocation problems with thousands of VMs and servers in about ten seconds.
25 |
New abstractions and mechanisms for virtualizing future many-core systems
Kumar, Sanjay, 08 July 2008
To abstract physical into virtual computing infrastructures is a longstanding goal. Efforts in the computing industry started with early work on virtual machines in IBM's VM370 operating system and architecture, continued with extensive developments in distributed systems in the context of grid computing, and now involve investments by key hardware and software vendors to efficiently virtualize common hardware platforms. Recent efforts in virtualization technology are driven by two facts: (i) technology push -- new hardware support for virtualization in multi- and many-core hardware platforms and in the interconnects and networks used to connect them, and (ii) technology pull -- the need to efficiently manage large-scale data centers used for utility computing and, extending from there, to also manage more loosely coupled virtual execution environments like those used in cloud computing. Concerning (i), platform virtualization is proving to be an effective way to partition and then efficiently use the ever-increasing number of cores in many-core chips. Further, I/O virtualization enables I/O device sharing with increased device throughput, providing required I/O functionality to the many virtual machines (VMs) sharing a single platform. Concerning (ii), through server consolidation and VM migration, for instance, virtualization increases the flexibility of modern enterprise systems and creates opportunities for improvements in operational efficiency, power consumption, and the ability to meet time-varying application needs.
This thesis contributes (i) new technologies that further increase system flexibility by addressing some key problems of existing virtualization infrastructures, and (ii) lightweight, efficient management technologies and techniques that exploit the resulting increased flexibility to improve data center operations, e.g., power management, operating across the range from individual many-core platforms to data center systems. Concerning (i), the thesis contributes, for large many-core systems, insights into how to better structure virtual machine monitors (VMMs) for more efficient utilization of cores, by implementing and evaluating the novel Sidecore approach, which permits VMMs to exploit the computational power of parallel cores to improve overall VMM and I/O performance. Further, I/O virtualization still lacks the ability to provide complete transparency between virtual and physical devices, thereby limiting VM mobility and flexibility in accessing devices. In response, this thesis defines and implements the novel Netchannel abstraction, which provides complete location transparency between virtual and physical I/O devices, thereby decoupling device access from device location and enabling live VM migration and device hot-swapping. Concerning (ii), the vManage set of abstractions, mechanisms, and methods developed in this work is shown to substantially improve system manageability by providing a lightweight, system-level architecture for implementing and running the management applications required in data center and cloud computing environments. vManage simplifies management by making it possible, and easier, to coordinate the management actions taken by the many management applications and subsystems present in data center and cloud computing systems. Experimental evaluations of the Sidecore approach to VMM structure, of Netchannel, and of vManage are conducted on representative platforms and server systems, with consequent improvements in flexibility, I/O performance, and management efficiency, including power management.
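The core Sidecore pattern described above, dedicating a core to VMM work and having guests post requests over shared memory instead of trapping, can be sketched roughly as follows (our illustration using Python threads as stand-ins for cores; the real mechanism lives inside the VMM and uses shared-memory polling, not a thread-safe queue):

```python
# Illustrative sketch of the sidecore pattern: guests enqueue hypercall-like
# requests into a shared queue; a dedicated "sidecore" thread polls and serves
# them, so guest cores avoid expensive traps and context switches. Names and
# the threading stand-in are our assumptions, not the thesis's implementation.
import queue
import threading

requests: queue.Queue = queue.Queue()

def sidecore() -> None:
    """Dedicated core: spin on the shared queue and service requests."""
    while True:
        vm_id, op, reply = requests.get()  # stand-in for polling shared memory
        if op == "shutdown":
            break
        reply.put(f"vm{vm_id}: handled {op}")  # e.g., an I/O request

def guest(vm_id: int) -> None:
    """Guest core: post a request to the sidecore; no trap into the VMM."""
    reply: queue.Queue = queue.Queue()
    requests.put((vm_id, "io_submit", reply))
    print(reply.get())

side = threading.Thread(target=sidecore)
side.start()
workers = [threading.Thread(target=guest, args=(i,)) for i in range(3)]
for w in workers: w.start()
for w in workers: w.join()
requests.put((0, "shutdown", None))
side.join()
```

The design choice being illustrated is the trade: guest cores pay a cheap enqueue instead of a mode switch, while one core is sacrificed to run VMM logic continuously.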
26 |
Next generation state-machine replication protocols for data centers
Nehme, Mohamad Jaafar, 05 December 2017
Many uniform total order broadcast protocols have been designed in the last 30 years. They can be classified into two categories: those targeting low latency, and those targeting high throughput. Latency measures the time required to complete a single message broadcast without contention, whereas throughput measures the number of broadcasts that processes can complete per time unit when there is contention. All the protocols designed so far assume that the underlying network is not shared by other running applications. This is a major concern given that in modern data centers (aka Clouds), the networking infrastructure is shared by several applications. The consequence is that, in such environments, uniform total order broadcast protocols exhibit unstable behaviors.
In this thesis, I provide two contributions. The first contribution is MDC-Cast, a new total order broadcast protocol that optimizes the performance of distributed systems when executed in multi-data-center environments; I compare it with several state-of-the-art algorithms. MDC-Cast combines the benefits of IP multicast in cluster environments and TCP/IP unicast to obtain a hybrid algorithm that works well between data centers. The second contribution is an algorithm designed for debugging performance in black-box distributed systems; it is not yet published, as it requires further testing for better generalization.
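For readers unfamiliar with the primitive, below is a minimal sketch of one classic way to implement total order broadcast: a fixed sequencer that stamps messages with consecutive sequence numbers. This is a textbook design for illustration only, not MDC-Cast itself (which combines intra-data-center IP multicast with inter-data-center TCP unicast), and it omits the uniformity guarantees the thesis targets:

```python
# Minimal fixed-sequencer total order broadcast (illustrative textbook design,
# not MDC-Cast). One process, the sequencer, assigns consecutive sequence
# numbers; every receiver delivers messages strictly in sequence-number order,
# buffering anything that arrives early.
import itertools

class Sequencer:
    def __init__(self) -> None:
        self._counter = itertools.count()

    def stamp(self, msg: str) -> tuple[int, str]:
        return (next(self._counter), msg)

class Receiver:
    def __init__(self, name: str) -> None:
        self.name = name
        self.next_seq = 0                   # next seq number we may deliver
        self.pending: dict[int, str] = {}   # early arrivals, keyed by seq

    def on_receive(self, seq: int, msg: str) -> None:
        self.pending[seq] = msg
        while self.next_seq in self.pending:  # deliver in total order
            print(f"{self.name} delivers #{self.next_seq}: "
                  f"{self.pending.pop(self.next_seq)}")
            self.next_seq += 1

seq = Sequencer()
r1, r2 = Receiver("r1"), Receiver("r2")
m0, m1, m2 = seq.stamp("a"), seq.stamp("b"), seq.stamp("c")
r1.on_receive(*m0); r1.on_receive(*m1); r1.on_receive(*m2)  # in order
r2.on_receive(*m2); r2.on_receive(*m0); r2.on_receive(*m1)  # out of order
# Both r1 and r2 deliver a, b, c in the same (total) order.
```

Even with out-of-order arrival at r2, both receivers deliver in the sequencer's order, which is the agreement property that state-machine replication builds on.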
27 |
Sustainable Cloud Computing
January 2014
Abstract: Energy consumption of data centers worldwide is growing rapidly, fueled by the ever-increasing demand for Cloud computing applications ranging from social networking to e-commerce. Understandably, ensuring the energy efficiency and sustainability of Cloud data centers without compromising performance is important for both economic and environmental reasons. This dissertation develops a cyber-physical, multi-tier server and workload management architecture that operates at both the local and the global (geo-distributed) data center level. We devise optimization frameworks for each tier to optimize the energy consumption, energy cost, and carbon footprint of data centers. The proposed solutions are aware of the various energy management tradeoffs that arise from cyber-physical interactions in data centers, while providing provable guarantees on computation efficiency and energy/cost efficiency. The local-level energy management takes into account the impact of server consolidation on cooling energy, avoids the cooling-computing power tradeoff, and optimizes the total (computing plus cooling) energy in light of data center technology trends (server power proportionality and cooling system power efficiency). The global-level cost management exploits the diversity of the data centers to minimize utility cost while satisfying the carbon cap requirement of the Cloud and coping with prediction error on data center parameters. Finally, the synergy of the local and global energy and cost optimization is shown to help achieve carbon neutrality (net-zero) in a cost-efficient manner. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2014
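A toy version of the global-tier problem described above can be written as a small linear program: allocate load across data centers to minimize electricity cost subject to a total carbon cap. This is our illustration with invented prices, intensities, and caps; the dissertation's formulations are considerably richer:

```python
# Toy global-tier allocation: minimize electricity cost subject to a carbon
# cap, expressed as a linear program. Illustrative only -- all numbers are
# assumptions, and real formulations must handle prediction error and more.
from scipy.optimize import linprog

# Per-site data: $/kWh price, kgCO2/kWh grid intensity, kWh capacity.
price      = [0.05, 0.08, 0.12]   # site 0 is cheapest...
intensity  = [0.90, 0.40, 0.05]   # ...but also the dirtiest
capacity   = [800.0, 800.0, 800.0]
demand     = 1200.0               # total kWh of work to place
carbon_cap = 500.0                # total kgCO2 allowed

res = linprog(
    c=price,                                # minimize total energy cost
    A_ub=[intensity], b_ub=[carbon_cap],    # keep emissions under the cap
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],  # all demand must be served
    bounds=[(0.0, cap) for cap in capacity],
    method="highs",
)
print("allocation (kWh):", res.x)
print("cost: $%.2f" % res.fun,
      "| emissions: %.1f kg" % sum(i * x for i, x in zip(intensity, res.x)))
```

Without the cap, the solver would pile load onto the cheap, dirty site; with the cap binding, load shifts toward cleaner sites at higher cost, which is exactly the cost/carbon tradeoff the dissertation navigates.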
28 |
Model Based Safety Analysis and Verification of Cyber-Physical Systems
January 2012
Abstract: Critical infrastructures in healthcare, power systems, and web services incorporate cyber-physical systems (CPSes), where software-controlled computing systems interact with the physical environment through actuation and monitoring. Ensuring software safety in CPSes, to avoid hazards to property and human life as a result of uncontrolled interactions, is essential and challenging. The principal hurdle in this regard is the characterization of the context-driven interactions between software and the physical environment (cyber-physical interactions), which introduce multi-dimensional dynamics in space and time, complex non-linearities, and non-trivial aggregation of interactions in case of networked operations. Traditionally, CPS software is tested for safety either through experimental trials, which can be expensive, incomprehensive, and hazardous, or through static analysis of code, which ignores the cyber-physical interactions. This thesis considers model-based engineering, a paradigm widely used in different disciplines of engineering, for safety verification of CPS software and contributes to three fundamental phases: a) modeling, building abstractions or models that characterize cyber-physical interactions in a mathematical framework; b) analysis, reasoning about safety based on properties of the model; and c) synthesis, implementing models on standard testbeds for performing preliminary experimental trials. In this regard, CPS modeling techniques are proposed that can accurately capture context-driven, spatio-temporal, aggregate cyber-physical interactions. Different levels of abstraction are considered, resulting in high-level architectural models or more detailed formal behavioral models of CPSes. The outcomes include a well-defined architectural specification framework called CPS-DAS and a novel spatio-temporal formal model called Spatio-Temporal Hybrid Automata (STHA) for CPSes. Model analysis techniques are proposed for the CPS models, which can simulate the effects of dynamic context changes on non-linear spatio-temporal cyber-physical interactions and characterize aggregate effects. The outcomes include tractable algorithms for simulation analysis and for theoretically proving safety properties of CPS software. Lastly, a software synthesis technique is proposed that can automatically convert high-level architectural models of CPSes in the healthcare domain into implementations in high-level programming languages. The outcome is a tool called Health-Dev that can synthesize software implementations of CPS models in healthcare for experimental verification of safety properties. / Dissertation/Thesis / Ph.D. Computer Science 2012
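As background for the hybrid-automata formalism the thesis builds on (STHA extends hybrid automata with spatio-temporal aggregation), here is a minimal simulation of a classic two-mode hybrid automaton, a thermostat with continuous temperature dynamics and discrete mode switches. This is a standard textbook example of the modeling style, not a model from the thesis:

```python
# Classic thermostat hybrid automaton: two discrete modes (HEAT, COOL) with
# different continuous dynamics, and guard conditions that trigger mode
# switches. A textbook illustration of the formalism, not the thesis's STHA.
DT = 0.1  # simulation time step (minutes)

def step(mode: str, temp: float) -> tuple[str, float]:
    # Continuous flow: simple linear dynamics per mode.
    dtemp = 1.5 if mode == "HEAT" else -1.0
    temp += dtemp * DT
    # Discrete jumps: guards on the continuous state.
    if mode == "HEAT" and temp >= 22.0:
        mode = "COOL"
    elif mode == "COOL" and temp <= 18.0:
        mode = "HEAT"
    return mode, temp

mode, temp = "HEAT", 18.0
for t in range(300):  # 30 simulated minutes
    mode, temp = step(mode, temp)
    # Safety property to check: temperature stays within [17, 23] degrees.
    assert 17.0 <= temp <= 23.0, f"safety violated at t={t * DT:.1f}"
print("safety invariant held; final:", mode, round(temp, 2))
```

Safety verification in this setting means proving the assertion holds for all runs, not just the simulated one; that gap between simulation and proof is what the thesis's analysis algorithms address.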
29 |
One Pass Packet Steering (OPPS) for Multi-Subscriber Software Defined Networking Environments
Chukwu, Julian, January 2017
In this thesis, we address the problem of service function chaining, i.e., chaining services in a network. Such problems can be broadly categorised into middlebox placement and packet steering through middleboxes.
In this work, we present a packet steering approach, One Pass Packet Steering (OPPS), for use in multi-subscriber environments, with the aim that subscribers with similar policy chain compositions experience the same network performance. We develop algorithms and demonstrate a proof-of-concept implementation using emulations in Mininet. We identify challenges and examine how OPPS could benefit from the Software Defined Data Center architecture to overcome them.
Our results show that, given a fixed topology and different sets of policy chains containing the same middleboxes, the end-to-end delay and throughput of subscribers using similar policy chains remain approximately the same. We also show how OPPS can use a smaller number of middleboxes and yet achieve the same hop count as the ideal reference model described in previous work, without violating subscribers' policy chains.
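To illustrate the policy-chain notion at play here (a sketch of the general concept only, not the OPPS algorithm): each subscriber's traffic must traverse its ordered chain of middlebox types, and shared middlebox instances can serve multiple subscribers whose chains overlap, which is why subscribers with identical chains see similar performance:

```python
# Sketch of policy-chain packet steering (conceptual illustration, not OPPS).
# Each subscriber has an ordered chain of middlebox *types*; packets are
# steered through one shared *instance* of each type, so subscribers with
# identical chains traverse identical paths.
instances = {  # one deployed instance per middlebox type (an assumption)
    "firewall": "fw-1",
    "ids": "ids-1",
    "nat": "nat-1",
}

policy_chains = {
    "alice": ["firewall", "ids", "nat"],
    "bob":   ["firewall", "ids", "nat"],  # same chain as alice -> same path
    "carol": ["firewall", "nat"],
}

def steering_path(subscriber: str) -> list[str]:
    """Resolve a subscriber's policy chain to concrete middlebox instances."""
    return [instances[mb_type] for mb_type in policy_chains[subscriber]]

for sub in policy_chains:
    print(sub, "->", " -> ".join(steering_path(sub)))
# alice and bob get identical paths; carol shares fw-1 and nat-1 but skips ids.
```

Instance sharing is what lets a deployment use fewer middleboxes while preserving each subscriber's chain order, the property the thesis's results quantify.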
30 |
A New Look at Designing Electrical Construction Processes: A Case Study of the Cable Pulling and Termination Process on Data Center Construction Sites
January 2020
Abstract: At least 30 data centers either broke ground or hit the planning stages around the United States over the past two years. On such technically complex projects, Mechanical, Electrical and Plumbing (MEP) systems make up a huge portion of the construction work, which makes the data center market very promising for MEP subcontractors in the coming years. However, specialized subcontractors such as electrical subcontractors are struggling to keep crews motivated. Because of the hard physical work involved, the construction industry is not appealing to young workers. According to The Center for Construction Research and Training, between 1985 and 2015 the share of workers aged 16 to 19 decreased by 67%, the share aged 20 to 24 by 49%, and the share aged 25 to 34 by 32%. Furthermore, the construction industry has been lagging other industries in combatting its decline in productivity. Electrical activities, especially cable pulling, are some of the most physically unsafe, tedious, and labor-intensive electrical processes on data center projects. The motivation of this research is the need to take a closer look at how this process is being done and to find improvement opportunities. This thesis focuses on one potential restructuring of the cable pulling and termination process; the goal of this restructuring is optimization for automation. Through process mapping, this thesis presents a proposed cable pulling and termination process that utilizes automation to make the best use of the respective abilities of humans and robots/machines. It also provides a methodology for process improvement that is applicable to the electrical scope of work as well as to that of other construction trades. / Dissertation/Thesis / Masters Thesis Construction Management 2020