21

Efficient Workload and Resource Management in Datacenters

Xu, Hong 13 August 2013 (has links)
This dissertation focuses on developing algorithms and systems to improve the efficiency of operating mega datacenters with hundreds of thousands of servers. In particular, it seeks to address two challenges: first, how to distribute the workload among the set of datacenters geographically deployed across the wide area; and second, how to manage the server resources of datacenters using virtualization technology. In the first part, we consider the workload management problem in geo-distributed datacenters. We first present a novel distributed workload management algorithm that jointly considers request mapping, which determines how to direct user requests to an appropriate datacenter for processing, and response routing, which decides how to select a path among the set of ISP links of a datacenter to route the response packets back to a user. In the next chapter, we study some key aspects of cost and workload in geo-distributed datacenters that have not been fully understood before. Through extensive empirical studies of climate data and cooling systems, we make a case for temperature-aware workload management, where the geographical diversity of temperature and its impact on cooling energy efficiency can be used to reduce the overall cooling energy. Moreover, we advocate holistic workload management for both interactive and batch jobs, where the delay-tolerant, elastic nature of batch jobs can be exploited to further reduce the energy cost. A consistent 15% to 20% cooling energy reduction and a 5% to 20% overall cost reduction are observed in extensive trace-driven simulations. In the second part of the thesis, we consider the resource management problem in virtualized datacenters. We design Anchor, a scalable and flexible architecture that efficiently supports a variety of resource management policies. We implement a prototype of Anchor on a small-scale in-house datacenter with 20 servers. Experimental results and trace-driven simulations show that Anchor is effective in realizing various resource management policies, and that its simple algorithms can solve virtual machine allocation problems with thousands of VMs and servers in just ten seconds.
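The temperature-aware idea can be illustrated with a toy greedy dispatcher (the site names, prices, and cooling model below are invented for illustration; the thesis's actual algorithms are distributed optimizations over real climate and workload traces):

```python
# Toy temperature-aware workload dispatcher (illustrative only; the thesis
# uses distributed optimization, and this cooling model is a simplification).

def cooling_overhead(temp_c):
    """Hypothetical partial-PUE model: cooling gets less efficient as the
    outside air temperature rises (free cooling degrades)."""
    return 1.1 + max(0.0, temp_c - 15.0) * 0.02

def marginal_cost(dc, extra_load):
    """Energy cost of serving extra_load at datacenter dc."""
    return extra_load * dc["price"] * cooling_overhead(dc["temp_c"])

def dispatch(total_load, dcs, step=1.0):
    """Greedily send load, one unit at a time, to the cheapest datacenter
    that still has capacity."""
    alloc = {dc["name"]: 0.0 for dc in dcs}
    remaining = total_load
    while remaining > 0:
        feasible = [dc for dc in dcs if alloc[dc["name"]] < dc["capacity"]]
        best = min(feasible, key=lambda dc: marginal_cost(dc, step))
        alloc[best["name"]] += step
        remaining -= step
    return alloc

dcs = [
    {"name": "oregon",   "price": 0.04, "temp_c": 10.0, "capacity": 50},
    {"name": "virginia", "price": 0.05, "temp_c": 25.0, "capacity": 50},
]
print(dispatch(60.0, dcs))  # cooler, cheaper site fills up first
```

The cooler site absorbs load until it saturates, after which the remainder spills to the warmer site, which is the geographic-diversity effect the abstract describes.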
23

New abstractions and mechanisms for virtualizing future many-core systems

Kumar, Sanjay 08 July 2008 (has links)
To abstract physical into virtual computing infrastructures is a longstanding goal. Efforts in the computing industry started with early work on virtual machines in IBM's VM370 operating system and architecture, continued with extensive developments in distributed systems in the context of grid computing, and now involve investments by key hardware and software vendors to efficiently virtualize common hardware platforms. Recent efforts in virtualization technology are driven by two facts: (i) technology push -- new hardware support for virtualization in multi- and many-core hardware platforms and in the interconnects and networks used to connect them, and (ii) technology pull -- the need to efficiently manage large-scale data-centers used for utility computing and extending from there, to also manage more loosely coupled virtual execution environments like those used in cloud computing. Concerning (i), platform virtualization is proving to be an effective way to partition and then efficiently use the ever-increasing number of cores in many-core chips. Further, I/O Virtualization enables I/O device sharing with increased device throughput, providing required I/O functionality to the many virtual machines (VMs) sharing a single platform. Concerning (ii), through server consolidation and VM migration, for instance, virtualization increases the flexibility of modern enterprise systems and creates opportunities for improvements in operational efficiency, power consumption, and the ability to meet time-varying application needs. 
This thesis contributes (i) new technologies that further increase system flexibility by addressing key problems of existing virtualization infrastructures, and (ii) lightweight, efficient management technologies and techniques that exploit the resulting increased flexibility to improve data-center operations, e.g., power management, operating across the range from individual many-core platforms to data-center systems. Concerning (i), the thesis contributes, for large many-core systems, insights into how to better structure virtual machine monitors (VMMs) for more efficient utilization of cores, by implementing and evaluating the novel Sidecore approach, which permits VMMs to exploit the computational power of parallel cores to improve overall VMM and I/O performance. Further, I/O virtualization still lacks complete transparency between virtual and physical devices, limiting VM mobility and flexibility in accessing devices. In response, this thesis defines and implements the novel Netchannel abstraction, which provides complete location transparency between virtual and physical I/O devices, thereby decoupling device access from device location and enabling live VM migration and device hot-swapping. Concerning (ii), the vManage set of abstractions, mechanisms, and methods developed in this work is shown to substantially improve system manageability by providing a lightweight, system-level architecture for implementing and running the management applications required in data-center and cloud computing environments. vManage simplifies management by making it easier to coordinate the management actions taken by the many management applications and subsystems present in such systems. Experimental evaluations of the Sidecore approach, Netchannel, and vManage are conducted on representative platforms and server systems, demonstrating improvements in flexibility, I/O performance, and management efficiency, including power management.
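As a rough illustration of the location transparency Netchannel aims for, the sketch below decouples a stable virtual device id from its current physical backend (a hypothetical registry API; the real abstraction operates inside the VMM's I/O path, not at this level):

```python
# Toy sketch of location-transparent device access (hypothetical API; the
# actual Netchannel abstraction works in the VMM, not in application code).

class DeviceRegistry:
    """Maps a stable virtual device id to its current physical location,
    so a VM's device handle survives migration or hot-swap."""
    def __init__(self):
        self.backends = {}

    def bind(self, vdev, host, pdev):
        # (Re)bind the virtual device without touching the VM's handle.
        self.backends[vdev] = (host, pdev)

    def resolve(self, vdev):
        # Every access resolves the current location at use time.
        return self.backends[vdev]

reg = DeviceRegistry()
reg.bind("vm1-disk0", "hostA", "/dev/sdb")
assert reg.resolve("vm1-disk0") == ("hostA", "/dev/sdb")
reg.bind("vm1-disk0", "hostB", "/dev/sdc")   # hot-swap / live migration
assert reg.resolve("vm1-disk0") == ("hostB", "/dev/sdc")
```

The VM only ever holds the id `"vm1-disk0"`; the indirection is what lets the physical device move underneath it.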
24

Next generation state-machine replication protocols for data centers / Protocoles de réplication de machines à états de prochaine génération pour les centres de données

Nehme, Mohamad Jaafar 05 December 2017 (has links)
Many uniform total order broadcast protocols have been designed over the last 30 years. They can be classified into two categories: those targeting low latency and those targeting high throughput. Latency measures the time required to complete a single message broadcast without contention, whereas throughput measures the number of broadcasts that the processes can complete per unit of time when there is contention. All the protocols designed so far assume that the underlying network is not shared by other running applications. This is a major concern given that in modern data centers (aka Clouds), the networking infrastructure is shared by several applications. The consequence is that, in such environments, uniform total order broadcast protocols exhibit unstable behaviors. In this thesis, I provide two contributions. The first is MDC-Cast, a new total order broadcast protocol that optimizes the performance of distributed systems executed in multi-data-center environments; it is designed, implemented, and compared against several state-of-the-art algorithms. MDC-Cast combines the benefits of IP multicast in cluster environments and TCP/IP unicast to obtain a hybrid algorithm that works well between data centers. The second contribution is an algorithm designed for debugging performance in black-box distributed systems. This algorithm is not yet published, as it requires further testing for better generalization.
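The uniform total order property at the heart of such protocols can be sketched with a minimal fixed-sequencer simulation (illustrative only; MDC-Cast's hybrid IP-multicast/TCP-unicast dissemination and its fault handling are not modeled here):

```python
# Minimal in-process sketch of a fixed-sequencer total order broadcast
# (illustrative only; MDC-Cast's actual design is not reproduced here).

class Sequencer:
    """Assigns a global sequence number to every broadcast message."""
    def __init__(self):
        self.next_seq = 0

    def order(self, msg):
        seq = self.next_seq
        self.next_seq += 1
        return seq, msg

class Process:
    """Delivers messages strictly in sequence-number order, buffering gaps."""
    def __init__(self, name):
        self.name = name
        self.expected = 0
        self.pending = {}
        self.delivered = []

    def receive(self, seq, msg):
        self.pending[seq] = msg
        while self.expected in self.pending:
            self.delivered.append(self.pending.pop(self.expected))
            self.expected += 1

seq = Sequencer()
procs = [Process("p1"), Process("p2")]
ordered = [seq.order(m) for m in ["a", "b", "c"]]

# Deliver in different network orders to each process: total order still holds.
for s, m in ordered:
    procs[0].receive(s, m)
for s, m in reversed(ordered):
    procs[1].receive(s, m)

assert procs[0].delivered == procs[1].delivered == ["a", "b", "c"]
```

Even though `p2` receives the messages in reverse, both processes deliver the same sequence, which is the invariant a total order broadcast must preserve regardless of how messages are disseminated.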
25

Sustainable Cloud Computing

January 2014 (has links)
abstract: Energy consumption of data centers worldwide is growing rapidly, fueled by ever-increasing demand for Cloud computing applications ranging from social networking to e-commerce. Understandably, ensuring the energy efficiency and sustainability of Cloud data centers without compromising performance is important for both economic and environmental reasons. This dissertation develops a cyber-physical, multi-tier server and workload management architecture which operates at the local and the global (geo-distributed) data center level. We devise optimization frameworks for each tier to optimize the energy consumption, energy cost, and carbon footprint of the data centers. The proposed solutions are aware of the various energy management tradeoffs that manifest due to the cyber-physical interactions in data centers, while providing provable guarantees on the solutions' computational efficiency and energy/cost efficiency. The local data center level energy management takes into account the impact of server consolidation on cooling energy, avoids the cooling-computing power tradeoff, and optimizes the total energy (computing and cooling) considering the data centers' technology trends (servers' power proportionality and cooling system power efficiency). The global data center level cost management explores the diversity of the data centers to minimize utility cost while satisfying the carbon cap requirement of the Cloud and while dealing with the adversity of prediction error on the data center parameters. Finally, the synergy of the local and the global data center energy and cost optimization is shown to help achieve carbon neutrality (net-zero) in a cost-efficient manner. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2014
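A toy version of the global-tier idea, minimizing utility cost under a Cloud-wide carbon cap, might look like the greedy sketch below (site names, prices, and carbon intensities are invented; the dissertation formulates this as a constrained optimization robust to prediction error, not a greedy loop):

```python
# Toy cost-vs-carbon dispatch across geo-distributed data centers
# (illustrative only; carbon is in integer units to keep arithmetic exact).

def dispatch_with_cap(load, sites, carbon_cap):
    """Fill load from cheapest sites first, but skip a site if adding a unit
    there would break the Cloud-wide carbon cap."""
    alloc = {s["name"]: 0 for s in sites}
    carbon = 0
    for _ in range(load):                      # one unit of load at a time
        choices = sorted(sites, key=lambda s: s["price"])
        for s in choices:
            ok_cap = alloc[s["name"]] < s["capacity"]
            ok_co2 = carbon + s["carbon"] <= carbon_cap
            if ok_cap and ok_co2:
                alloc[s["name"]] += 1
                carbon += s["carbon"]
                break
        else:
            raise RuntimeError("carbon cap or capacity infeasible")
    return alloc, carbon

sites = [
    {"name": "coal",  "price": 0.03, "carbon": 10, "capacity": 100},
    {"name": "hydro", "price": 0.06, "carbon": 1,  "capacity": 100},
]
print(dispatch_with_cap(load=10, sites=sites, carbon_cap=55))
```

The cheap, dirty site is used only until the carbon budget forces the remaining load onto the cleaner, pricier one, which captures the cost/carbon tension the abstract describes.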
26

Model Based Safety Analysis and Verification of Cyber-Physical Systems

January 2012 (has links)
abstract: Critical infrastructures in healthcare, power systems, and web services incorporate cyber-physical systems (CPSes), where software-controlled computing systems interact with the physical environment through actuation and monitoring. Ensuring software safety in CPSes, to avoid hazards to property and human life as a result of uncontrolled interactions, is essential and challenging. The principal hurdle in this regard is the characterization of the context-driven interactions between software and the physical environment (cyber-physical interactions), which introduce multi-dimensional dynamics in space and time, complex non-linearities, and non-trivial aggregation of interactions in the case of networked operations. Traditionally, CPS software is tested for safety either through experimental trials, which can be expensive, incomprehensive, and hazardous, or through static analysis of code, which ignores the cyber-physical interactions. This thesis considers model-based engineering, a paradigm widely used in different disciplines of engineering, for safety verification of CPS software and contributes to three fundamental phases: a) modeling, building abstractions or models that characterize cyber-physical interactions in a mathematical framework; b) analysis, reasoning about safety based on properties of the model; and c) synthesis, implementing models on standard testbeds for performing preliminary experimental trials. In this regard, CPS modeling techniques are proposed that can accurately capture the context-driven, spatio-temporal, aggregate cyber-physical interactions. Different levels of abstraction are considered, which result in high-level architectural models or more detailed formal behavioral models of CPSes. The outcomes include a well-defined architectural specification framework called CPS-DAS and a novel spatio-temporal formal model called Spatio-Temporal Hybrid Automata (STHA) for CPSes. Model analysis techniques are proposed for the CPS models, which can simulate the effects of dynamic context changes on non-linear spatio-temporal cyber-physical interactions and characterize aggregate effects. The outcomes include tractable algorithms for simulation analysis and for theoretically proving safety properties of CPS software. Lastly, a software synthesis technique is proposed that can automatically convert high-level architectural models of CPSes in the healthcare domain into implementations in high-level programming languages. The outcome is a tool called Health-Dev that can synthesize software implementations of CPS models in healthcare for experimental verification of safety properties. / Dissertation/Thesis / Ph.D. Computer Science 2012
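A minimal flavor of model-based safety analysis is simulating a hybrid automaton and checking a safety envelope over a bounded horizon, as sketched below (a textbook two-mode thermostat, far simpler than the spatio-temporal hybrid automata proposed in the thesis):

```python
# Toy hybrid-automaton safety check (illustrative only; STHA models capture
# spatio-temporal dynamics far beyond this thermostat example).

def simulate(t_end, dt=0.01, temp=72.0, mode="off",
             low=68.0, high=74.0, safe=(60.0, 80.0)):
    """Two-mode thermostat: continuous dynamics per mode, guarded switches.
    Returns True if temperature stays inside the safe envelope."""
    steps = int(t_end / dt)
    for _ in range(steps):
        rate = 1.5 if mode == "heat" else -0.8   # deg/min in each mode
        temp += rate * dt
        if mode == "off" and temp <= low:
            mode = "heat"                        # guard: too cold -> heat
        elif mode == "heat" and temp >= high:
            mode = "off"                         # guard: warm enough -> off
        if not (safe[0] <= temp <= safe[1]):
            return False                         # safety violated
    return True

print(simulate(t_end=60.0))  # bounded-horizon check of the safe envelope
```

Tightening the safe envelope (e.g. `safe=(70.0, 80.0)`) makes the same automaton unsafe, which is the kind of property a model-level analysis can expose before any physical trial.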
27

One Pass Packet Steering (OPPS) for Multi-Subscriber Software Defined Networking Environments

Chukwu, Julian January 2017 (has links)
In this thesis, we address the problem of service function chaining in a network. Such problems can be broadly categorized into middlebox placement in a network and packet steering through middleboxes. In this work, we present a packet steering approach, One Pass Packet Steering (OPPS), for use in multi-subscriber environments, with the aim that subscribers having similar policy chain compositions should experience the same network performance. We develop and present algorithms with a proof-of-concept implementation using emulations performed with Mininet. We identify challenges and examine how OPPS could benefit from the Software Defined Data Center architecture to overcome them. Our results show that, given a fixed topology and different sets of policy chains containing the same middleboxes, the end-to-end delay and throughput performance of subscribers using similar policy chains remains approximately the same. We also show how OPPS can use a smaller number of middleboxes and yet achieve the same hop count as the ideal reference model described in previous work, without violating the subscribers' policy chains.
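Conceptually, steering a packet through a subscriber's policy chain can be sketched as below (hypothetical middlebox functions; OPPS itself installs SDN flow rules via a controller rather than calling Python functions in sequence):

```python
# Sketch of applying a subscriber's policy chain in order
# (toy middleboxes; real steering happens in switch flow tables).

def firewall(pkt):
    """Drop telnet traffic as a toy filtering policy."""
    if pkt.get("port") == 23:
        return None
    return pkt

def nat(pkt):
    """Rewrite the source address (hypothetical NAT behavior)."""
    return dict(pkt, src="192.0.2.1")

CHAINS = {
    "alice": [firewall, nat],
    "bob":   [firewall, nat],   # same composition -> same steering path
}

def steer(subscriber, pkt):
    """Apply the subscriber's policy chain in order; None means dropped."""
    for middlebox in CHAINS[subscriber]:
        pkt = middlebox(pkt)
        if pkt is None:
            return None
    return pkt

print(steer("alice", {"src": "10.0.0.5", "port": 80}))
```

Because `alice` and `bob` share the same chain composition, they traverse the same middlebox sequence, which is the equal-performance goal stated in the abstract.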
28

A New Look at Designing Electrical Construction Processes: A Case Study of the Cable Pulling and Termination Process on Data Center Construction Sites

January 2020 (has links)
abstract: At least 30 data centers either broke ground or hit the planning stages around the United States over the past two years. On such technically complex projects, Mechanical, Electrical, and Plumbing (MEP) systems make up a huge portion of the construction work, which makes the data center market very promising for MEP subcontractors in the coming years. However, specialized subcontractors such as electrical subcontractors are struggling to keep crews motivated. Because of the hard physical work involved, the construction industry is not appealing to young workers. According to The Center for Construction Research and Training, between 1985 and 2015 the percentage of workers aged 16 to 19 decreased by 67%, aged 20 to 24 by 49%, and aged 25 to 34 by 32%. Furthermore, the construction industry has been lagging behind other industries in combatting its decline in productivity. Electrical activities, especially cable pulling, are among the most physically unsafe, tedious, and labor-intensive electrical processes on data center projects. The motivation of this research is the need to take a closer look at how this process is being done and to find improvement opportunities. This thesis focuses on one potential restructuring of the cable pulling and termination process; the goal of this restructuring is optimization for automation. Through process mapping, this thesis presents a proposed cable pulling and termination process that utilizes automation to make use of the complementary strengths of humans and robots/machines. It also provides a methodology for process improvement that is applicable to the electrical scope of work as well as that of other construction trades. / Dissertation/Thesis / Masters Thesis Construction Management 2020
29

Energy Management System Modeling of DC Data Center with Hybrid Energy Sources Using Neural Network

Althomali, Khalid 01 February 2017 (has links)
As data centers continue to grow rapidly, engineers will face ever greater challenges in finding ways to minimize the cost of powering data centers while improving their reliability. The continuing growth of renewable energy sources such as photovoltaic (PV) systems presents an opportunity to reduce the long-term energy cost of data centers and to enhance reliability when used with utility AC power and energy storage. However, the inter-temporal and intermittent nature of solar energy makes proper coordination and management of these energy sources necessary. This thesis proposes an energy management system for a DC data center that uses a neural network to coordinate AC power, energy storage, and a PV system, providing reliable electrical power distribution to the data center. Software modeling of the DC data center was first developed for the proposed system, followed by the construction of a lab-scale model to simulate it. Five scenarios were tested on the hardware model, and the results demonstrate the effectiveness and accuracy of the neural network approach. The results further prove the feasibility of utilizing renewable energy sources and energy storage in DC data centers. Analysis and performance of the proposed system are discussed in this thesis, and future improvements for energy system reliability are also presented.
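A rule-based stand-in for the coordination task gives the flavor of what the controller must decide (the thesis trains a neural network for this; the priority order and all parameters below are assumptions):

```python
# Toy dispatch among PV, battery, and utility AC for a DC data center load
# (rule-based stand-in; the thesis's neural-network coordination is not
# reproduced here, and all numbers are invented).

def serve_load(load_kw, pv_kw, battery_kwh, dt_h=1.0, batt_max_kw=5.0):
    """Serve the load from PV first, then battery, then utility AC.
    Returns (grid_kw, remaining_battery_kwh)."""
    from_pv = min(load_kw, pv_kw)
    residual = load_kw - from_pv
    # Battery is limited by both its power rating and its stored energy.
    from_batt = min(residual, batt_max_kw, battery_kwh / dt_h)
    residual -= from_batt
    return residual, battery_kwh - from_batt * dt_h

grid_kw, soc_kwh = serve_load(load_kw=10.0, pv_kw=6.0, battery_kwh=3.0)
print(grid_kw, soc_kwh)  # utility AC covers what PV and battery cannot
```

The interesting part the neural network learns, and this sketch ignores, is when to charge the battery and when to hold it back, given forecasts of PV output and load.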
30

Design of Power-Efficient Optical Transceivers and Design of High-Linearity Wireless Wideband Receivers

Zhang, Yudong January 2021 (has links)
The combination of silicon photonics and advanced heterogeneous integration is promising for next-generation disaggregated data centers that demand large scale, high throughput, and low power. In this dissertation, we discuss the design and theory of power-efficient optical transceivers with System-in-Package (SiP) 2.5D integration. Combining prior art and proposed circuit techniques, a receiver chip and a transmitter chip including two 10 Gb/s data channels and one 2.5 GHz clocking channel are designed and implemented in 28 nm CMOS technology. An innovative transimpedance amplifier (TIA) and a single-ended to differential (S2D) converter are proposed and analyzed for a low-voltage, high-sensitivity receiver; a four-to-one serializer, programmable output drivers, AC coupling units, and custom pads are implemented in a low-power transmitter; an improved quadrature locked loop (QLL) is employed to generate accurate quadrature clocks. In addition, we present an analysis of the inverter-based shunt-feedback TIA that explicitly depicts the trade-off among sensitivity, data rate, and power consumption. Finally, research on CDR-based clocking schemes for optical links is also discussed. We review prior art and propose a power-efficient clocking scheme based on an injection-locked phase rotator. Next, we analyze injection-locked ring oscillators (ILROs), which have been widely used for quadrature clock generators (QCGs) in multi-lane optical or wireline transceivers due to their low power, low area, and technology scalability. The asymmetrical or partial injection locking from 2 phases to 4 phases results in imbalances in amplitude and phase. We propose a modified frequency-domain analysis that provides intuitive insight into the performance design trade-offs. The analysis is validated by comparing analytical predictions with simulations of an ILRO-based QCG in 28 nm CMOS technology. This dissertation also discusses the design of high-linearity wireless wideband receivers. An out-of-band (OB) IM3 cancellation technique is proposed and analyzed. By exploiting a baseband auxiliary path (AP) with a high-pass characteristic, the in-band (IB) desired signal and out-of-band interferers are split. OB third-order intermodulation products (IM3) are reconstructed in the AP and cancelled in the baseband (BB). A 0.5-2.5 GHz frequency-translational noise-cancelling (FTNC) receiver is implemented in 65 nm CMOS to demonstrate the proposed approach. It consumes 36 mW without cancellation at a 1 GHz LO frequency and 1.2 V supply, and it achieves 8.8 MHz baseband bandwidth, 40 dB gain, 3.3 dB NF, 5 dBm OB IIP3, and −6.5 dBm OB B1dB. After IM3 cancellation, the effective OB IIP3 increases to 32.5 dBm with an extra 34 mW for narrow-band interferers (two tones). For wideband interferers, 18.8 dB of cancellation is demonstrated over 10 MHz with two −15 dBm modulated interferers. The local oscillator (LO) leakage is −92 dBm and −88 dBm at 1 GHz and 2 GHz LO, respectively. In summary, this technique achieves both high OB linearity and good LO isolation.
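The TIA trade-off mentioned above can be sketched with the textbook first-order estimate for a shunt-feedback TIA (this is the standard approximation, not the dissertation's full analysis; the component values are assumed for illustration):

```python
# Textbook first-order shunt-feedback TIA bandwidth estimate (a sketch under
# simplifying assumptions, not the dissertation's analysis).

import math

def tia_bandwidth(rf_ohm, c_t_farad, a0):
    """Feedback lowers the TIA input resistance to roughly R_F/(1+A0), so
    the input pole set by the photodiode/pad capacitance C_T moves out by
    a factor of (1+A0): f_3dB ~ (1+A0) / (2*pi*R_F*C_T)."""
    return (1 + a0) / (2 * math.pi * rf_ohm * c_t_farad)

# Assumed example values: R_F = 5 kOhm, C_T = 150 fF, amplifier DC gain A0 = 10.
bw = tia_bandwidth(5e3, 150e-15, 10)
print(f"{bw / 1e9:.2f} GHz")
```

Raising R_F improves sensitivity (larger transimpedance, lower input-referred noise) but shrinks this bandwidth, while raising A0 recovers bandwidth at the cost of amplifier power, which is the three-way trade-off the abstract refers to.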
