21 |
New abstractions and mechanisms for virtualizing future many-core systems -- Kumar, Sanjay, 08 July 2008
Abstracting physical computing infrastructures into virtual ones is a longstanding goal. Efforts in the computing industry started with early work on virtual machines in IBM's VM370 operating system and architecture, continued with extensive developments in distributed systems in the context of grid computing, and now involve investments by key hardware and software vendors to efficiently virtualize common hardware platforms. Recent efforts in virtualization technology are driven by two facts: (i) technology push -- new hardware support for virtualization in multi- and many-core hardware platforms and in the interconnects and networks used to connect them, and (ii) technology pull -- the need to efficiently manage large-scale data centers used for utility computing and, extending from there, to manage more loosely coupled virtual execution environments like those used in cloud computing. Concerning (i), platform virtualization is proving to be an effective way to partition and then efficiently use the ever-increasing number of cores in many-core chips. Further, I/O virtualization enables I/O device sharing with increased device throughput, providing the I/O functionality required by the many virtual machines (VMs) sharing a single platform. Concerning (ii), through server consolidation and VM migration, for instance, virtualization increases the flexibility of modern enterprise systems and creates opportunities for improvements in operational efficiency, power consumption, and the ability to meet time-varying application needs.
This thesis contributes (i) new technologies that further increase system flexibility by addressing key problems of existing virtualization infrastructures, and (ii) lightweight, efficient management technologies and techniques, operating across the range from individual many-core platforms to data-center systems, that exploit the resulting increased flexibility to improve data-center operations, e.g., power management. Concerning (i), the thesis contributes, for large many-core systems, insights into how to better structure virtual machine monitors (VMMs) for more efficient utilization of cores, by implementing and evaluating the novel Sidecore approach, which permits VMMs to exploit the computational power of parallel cores to improve overall VMM and I/O performance. Further, I/O virtualization still lacks the ability to provide complete transparency between virtual and physical devices, limiting VM mobility and flexibility in accessing devices. In response, this thesis defines and implements the novel Netchannel abstraction, which provides complete location transparency between virtual and physical I/O devices, thereby decoupling device access from device location and enabling live VM migration and device hot-swapping. Concerning (ii), the vManage set of abstractions, mechanisms, and methods developed in this work is shown to substantially improve system manageability by providing a lightweight, system-level architecture for implementing and running the management applications required in data-center and cloud computing environments. vManage simplifies management by making it easier to coordinate the management actions taken by the many management applications and subsystems present in data-center and cloud computing systems. Experimental evaluations of the Sidecore approach to VMM structure, of Netchannel, and of vManage are conducted on representative platforms and server systems, with consequent improvements in flexibility, I/O performance, and management efficiency, including power management.
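As a rough, hedged illustration of the Sidecore idea (a core dedicated to servicing VMM and I/O requests that guests post through shared memory instead of taking synchronous exits into the VMM), the following Python sketch models guest vCPUs posting requests to a shared ring that a single dedicated worker drains. The queue-based structure and all names are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch only: models the Sidecore idea of dedicating one core to
# service VMM/I/O requests posted by guest vCPUs via shared memory, instead of
# each guest taking a costly synchronous exit into the VMM. Names are hypothetical.
import queue
import threading

request_ring = queue.Queue()          # stands in for a shared-memory request ring

def sidecore_worker():
    """Dedicated 'sidecore': continuously drains and services posted requests."""
    while True:
        req = request_ring.get()
        if req is None:               # shutdown sentinel
            break
        vcpu_id, op, payload = req
        # ... perform the privileged VMM/I/O work on behalf of the guest ...
        print(f"sidecore handled {op} #{payload} from vcpu {vcpu_id}")
        request_ring.task_done()

def guest_vcpu(vcpu_id, n_requests):
    """Guest vCPU: posts requests asynchronously and keeps computing."""
    for i in range(n_requests):
        request_ring.put((vcpu_id, "io_write", i))   # no synchronous 'VM exit'

worker = threading.Thread(target=sidecore_worker, daemon=True)
worker.start()
guests = [threading.Thread(target=guest_vcpu, args=(v, 3)) for v in range(4)]
for g in guests: g.start()
for g in guests: g.join()
request_ring.join()                    # wait until all posted requests are serviced
request_ring.put(None)                 # stop the worker
```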
|
22 |
Next generation state-machine replication protocols for data centers / Protocoles de réplication de machines à états de prochaine génération pour les centres de données -- Nehme, Mohamad Jaafar, 05 December 2017
Many uniform total order broadcast protocols have been designed in the last 30 years. They can be classified into two categories: those targeting low latency, and those targeting high throughput. Latency measures the time required to complete a single message broadcast without contention, whereas throughput measures the number of broadcasts that the processes can complete per time unit when there is contention. All the protocols designed so far assume that the underlying network is not shared by other running applications. This is a major concern given that, in modern data centers (aka Clouds), the networking infrastructure is shared by several applications. The consequence is that, in such environments, uniform total order broadcast protocols exhibit unstable behaviors. In this thesis, I designed and implemented a new protocol for uniform total order broadcast that optimizes performance when executed in multi-data-center environments, and compared it with several state-of-the-art algorithms. I present two contributions. The first contribution is MDC-Cast, a new total order broadcast protocol that optimizes the performance of distributed systems when executed in multi-data-center environments. MDC-Cast combines the benefits of IP multicast in cluster environments and TCP/IP unicast to obtain a hybrid algorithm that works effectively between data centers. The second contribution is an algorithm designed for debugging performance in black-box distributed systems. This algorithm has not yet been published, as it needs further testing for better generalization.
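The abstract describes MDC-Cast as combining IP multicast inside a data center with TCP/IP unicast between data centers. A minimal Python sketch of that hybrid dissemination pattern is given below; it is not the published protocol (ordering and uniformity logic are omitted), and the group address, ports, and relay list are assumptions.

```python
# Minimal sketch of the hybrid dissemination pattern attributed to MDC-Cast:
# IP multicast to nodes in the local data center, TCP unicast to one relay per
# remote data center. Ordering/uniformity logic of the real protocol is omitted.
import socket

LOCAL_MCAST_GROUP = ("239.1.1.1", 5007)                          # assumed group/port
REMOTE_DC_RELAYS = [("10.2.0.10", 6000), ("10.3.0.10", 6000)]    # assumed relays

def broadcast(payload: bytes) -> None:
    # 1) IP multicast within the local data center (one cheap send).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
        udp.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        udp.sendto(payload, LOCAL_MCAST_GROUP)
    # 2) TCP unicast to one relay per remote data center; each relay would
    #    re-multicast the message inside its own data center.
    for host, port in REMOTE_DC_RELAYS:
        with socket.create_connection((host, port), timeout=2) as tcp:
            tcp.sendall(payload)

if __name__ == "__main__":
    broadcast(b"state-machine command #42")
```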
|
23 |
Sustainable Cloud Computing -- January 2014
abstract: Energy consumption of data centers worldwide is growing rapidly, fueled by ever-increasing demand for Cloud computing applications ranging from social networking to e-commerce. Understandably, ensuring the energy efficiency and sustainability of Cloud data centers without compromising performance is important for both economic and environmental reasons. This dissertation develops a cyber-physical, multi-tier server and workload management architecture which operates at the local and the global (geo-distributed) data center level. We devise optimization frameworks for each tier to optimize the energy consumption, energy cost, and carbon footprint of the data centers. The proposed solutions are aware of the various energy management tradeoffs that manifest due to the cyber-physical interactions in data centers, while providing provable guarantees on the solutions' computational efficiency and energy/cost efficiency. The local data center level energy management takes into account the impact of server consolidation on the cooling energy, avoids the cooling-computing power tradeoff, and optimizes the total energy (computing and cooling energy) considering the data centers' technology trends (servers' power proportionality and cooling system power efficiency). The global data center level cost management explores the diversity of the data centers to minimize the utility cost while satisfying the carbon cap requirement of the Cloud and while dealing with the adversity of prediction errors on the data center parameters. Finally, the synergy of the local and the global data center energy and cost optimization is shown to help achieve carbon neutrality (net-zero) in a cost-efficient manner. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2014
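As a hedged illustration only (not the dissertation's actual formulation), the global-tier cost management sketched above can be read as a cost minimization over how the time-varying workload is split across geo-distributed sites, subject to capacity limits and a Cloud-wide carbon cap; all symbols below are introduced for illustration:

```latex
\begin{aligned}
\min_{\lambda_{i,t}}\; & \sum_{t}\sum_{i} p_{i,t}\, E_i(\lambda_{i,t})
  && \text{total utility cost, with } p_{i,t} \text{ the electricity price at site } i\\
\text{s.t.}\; & \sum_{i}\lambda_{i,t} = \Lambda_t, \qquad 0 \le \lambda_{i,t} \le C_i
  && \text{demand } \Lambda_t \text{ served within site capacities } C_i\\
& \sum_{t}\sum_{i} \gamma_i\, E_i(\lambda_{i,t}) \le B
  && \text{carbon cap } B \text{ with per-site carbon intensity } \gamma_i
\end{aligned}
```

Here E_i(.) stands for the combined computing and cooling energy drawn by site i under workload share lambda_{i,t}.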
|
24 |
Model Based Safety Analysis and Verification of Cyber-Physical Systems -- January 2012
abstract: Critical infrastructures in healthcare, power systems, and web services incorporate cyber-physical systems (CPSes), in which software-controlled computing systems interact with the physical environment through actuation and monitoring. Ensuring software safety in CPSes, to avoid hazards to property and human life as a result of uncontrolled interactions, is essential and challenging. The principal hurdle in this regard is the characterization of the context-driven interactions between software and the physical environment (cyber-physical interactions), which introduce multi-dimensional dynamics in space and time, complex non-linearities, and non-trivial aggregation of interactions in the case of networked operation. Traditionally, CPS software is tested for safety either through experimental trials, which can be expensive, incomplete, and hazardous, or through static analysis of code, which ignores the cyber-physical interactions. This thesis considers model based engineering, a paradigm widely used in different disciplines of engineering, for safety verification of CPS software and contributes to three fundamental phases: a) modeling, building abstractions or models that characterize cyber-physical interactions in a mathematical framework; b) analysis, reasoning about safety based on properties of the model; and c) synthesis, implementing models on standard testbeds for performing preliminary experimental trials. In this regard, CPS modeling techniques are proposed that can accurately capture the context-driven, spatio-temporal, aggregate cyber-physical interactions. Different levels of abstraction are considered, which result in high-level architectural models or more detailed formal behavioral models of CPSes. The outcomes include a well-defined architectural specification framework called CPS-DAS and a novel spatio-temporal formal model called Spatio-Temporal Hybrid Automata (STHA) for CPSes. Model analysis techniques are proposed for the CPS models, which can simulate the effects of dynamic context changes on non-linear spatio-temporal cyber-physical interactions and characterize aggregate effects. The outcomes include tractable algorithms for simulation analysis and for theoretically proving safety properties of CPS software. Lastly, a software synthesis technique is proposed that can automatically convert high-level architectural models of CPSes in the healthcare domain into implementations in high-level programming languages. The outcome is a tool called Health-Dev that can synthesize software implementations of CPS models in healthcare for experimental verification of safety properties. / Dissertation/Thesis / Ph.D. Computer Science 2012
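To make the modeling phase concrete, the following Python sketch shows a plain (non-spatial) hybrid automaton, modes with continuous flows and guarded discrete jumps, simulated with forward Euler. It is purely illustrative and far simpler than the spatio-temporal hybrid automata (STHA) the thesis defines; all names and the toy dynamics are assumptions.

```python
# Illustrative only: a plain hybrid automaton (modes with continuous flows,
# guarded discrete jumps) simulated with forward Euler. The thesis's STHA model
# additionally captures spatial aggregation, which this sketch does not attempt.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Mode:
    name: str
    flow: Callable[[float], float]                 # dx/dt = flow(x)

@dataclass
class Jump:
    source: str
    target: str
    guard: Callable[[float], bool]                 # when to switch modes
    reset: Callable[[float], float] = lambda x: x  # state update on the jump

@dataclass
class HybridAutomaton:
    modes: Dict[str, Mode]
    jumps: List[Jump]
    mode: str
    x: float

    def step(self, dt: float) -> None:
        self.x += self.modes[self.mode].flow(self.x) * dt   # continuous evolution
        for j in self.jumps:
            if j.source == self.mode and j.guard(self.x):   # discrete transition
                self.mode, self.x = j.target, j.reset(self.x)
                break

# Toy example: an infusion-pump-like actuator that stops delivering above a threshold.
ha = HybridAutomaton(
    modes={"infuse": Mode("infuse", flow=lambda x: 1.0),
           "hold":   Mode("hold",   flow=lambda x: -0.1)},
    jumps=[Jump("infuse", "hold", guard=lambda x: x >= 5.0),
           Jump("hold", "infuse", guard=lambda x: x <= 2.0)],
    mode="infuse", x=0.0)

for _ in range(200):
    ha.step(0.05)
print(ha.mode, round(ha.x, 2))
```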
|
25 |
One Pass Packet Steering (OPPS) for Multi-Subscriber Software Defined Networking Environments -- Chukwu, Julian, January 2017
In this thesis, we address the problem of service function chaining in a network. Problems of chaining services in a network (i.e., service function chaining) can be broadly categorised into middlebox placement and packet steering through middleboxes.
In this work, we present a packet steering approach, One Pass Packet Steering (OPPS), for use in multi-subscriber environments, with the aim that subscribers having similar policy chain compositions should experience the same network performance. We develop algorithms and present a proof-of-concept implementation, evaluated through emulations performed with Mininet. We identify challenges and examine how OPPS could benefit from the Software Defined Data Center architecture to overcome them.
Our results show that, given a fixed topology and different sets of policy chains containing the same middleboxes, the end-to-end delay and throughput performance of subscribers using similar policy chains remains approximately the same. Also, we show how OPPS can use fewer middleboxes and yet achieve the same hop count as a reference model described as ideal in previous work, without violating the subscribers' policy chains.
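The following Python fragment is not the OPPS algorithm itself; it is a minimal sketch of the underlying idea that subscribers whose policy chains are composed of the same middleboxes can be steered through a shared set of middlebox instances, so their paths (and hop counts) come out the same. The middlebox types, instances, and subscriber chains are assumptions.

```python
# Minimal illustration (not OPPS itself): steer each subscriber's packets through
# one shared instance per middlebox type, in the order its policy chain dictates,
# so subscribers with similar chains traverse the same instances and see similar delay.
from typing import Dict, List

# Assumed deployment: one running instance per middlebox type.
INSTANCES: Dict[str, str] = {
    "firewall": "fw-1",
    "ids": "ids-1",
    "nat": "nat-1",
    "proxy": "proxy-1",
}

POLICY_CHAINS: Dict[str, List[str]] = {      # subscriber -> ordered policy chain
    "alice": ["firewall", "ids", "nat"],
    "bob":   ["firewall", "nat", "ids"],     # same middleboxes, different order
    "carol": ["firewall", "proxy"],
}

def steering_path(subscriber: str) -> List[str]:
    """Resolve a subscriber's policy chain to concrete middlebox instances."""
    return [INSTANCES[mbox] for mbox in POLICY_CHAINS[subscriber]]

for sub in POLICY_CHAINS:
    path = steering_path(sub)
    print(f"{sub}: {' -> '.join(path)}  (hop count {len(path)})")
```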
|
26 |
A New Look at Designing Electrical Construction Processes: A Case Study of Cable Pulling and Termination Process on Data Center Construction Sites -- January 2020
abstract: At least 30 data centers either broke ground or hit the planning stages around the United States over the past two years. On such technically complex projects, Mechanical, Electrical and Plumbing (MEP) systems make up a huge portion of the construction work, which makes the data center market very promising for MEP subcontractors in the coming years. However, specialized subcontractors such as electrical subcontractors are struggling to keep crews motivated. Because of the hard work involved, the construction industry is not appealing to young workers. According to The Center for Construction Research and Training, from 1985 to 2015 the percentage of workers aged 16 to 19 decreased by 67%, the percentage aged 20 to 24 decreased by 49%, and the percentage aged 25 to 34 decreased by 32%. Furthermore, the construction industry has been lagging behind other industries in combating its decline in productivity. Electrical activities, especially cable pulling, are some of the most physically unsafe, tedious, and labor-intensive electrical processes on data center projects. The motivation of this research is the need to take a closer look at how this process is being done and to find improvement opportunities. This thesis focuses on one potential restructuring of the cable pulling and termination process; the goal of this restructuring is optimization for automation. Through process mapping, this thesis presents a proposed cable pulling and termination process that utilizes automation to make the best use of the abilities of humans and robots/machines. It also provides a methodology for process improvement that is applicable to the electrical scope of work as well as that of other construction trades. / Dissertation/Thesis / Masters Thesis Construction Management 2020
|
27 |
Energy Management System Modeling of DC Data Center with Hybrid Energy Sources Using Neural Network -- Althomali, Khalid, 01 February 2017
As data centers continue to grow rapidly, engineers will face an ever greater challenge in finding ways to minimize the cost of powering data centers while improving their reliability. The continuing growth of renewable energy sources such as photovoltaic (PV) systems presents an opportunity to reduce the long-term energy cost of data centers and to enhance reliability when used with utility AC power and energy storage. However, the intermittent and time-varying nature of solar energy makes proper coordination and management of these energy sources necessary.
This thesis proposes an energy management system for a DC data center that uses a neural network to coordinate AC power, energy storage, and a PV system so as to provide reliable electrical power distribution to the data center. A software model of the DC data center was first developed for the proposed system, followed by the construction of a lab-scale model to simulate the proposed system. Five scenarios were tested on the hardware model, and the results demonstrate the effectiveness and accuracy of the neural network approach. The results further prove the feasibility of utilizing renewable energy sources and energy storage in DC data centers. Analysis and performance of the proposed system will be discussed in this thesis, and future improvements for enhanced energy system reliability will also be presented.
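The abstract does not specify the network's structure, inputs, or training procedure; as a hedged sketch only, a coordinator of the kind described might map the measured system state (available PV power, load demand, battery state of charge) to dispatch shares for the three sources, as in the untrained toy model below.

```python
# Illustrative sketch only: a tiny feedforward network that maps the system state
# (available PV power, load demand, battery state of charge) to dispatch shares
# for PV, battery, and the AC utility. The thesis's actual model, inputs, and
# training procedure are not given in the abstract; the weights here are untrained.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)     # 3 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)     # 8 hidden -> 3 source shares

def dispatch(pv_kw: float, load_kw: float, soc: float) -> dict:
    x = np.array([pv_kw, load_kw, soc])
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    shares = np.exp(logits) / np.exp(logits).sum()     # softmax: shares sum to 1
    return {src: float(load_kw * s)
            for src, s in zip(("pv", "battery", "grid"), shares)}

# In practice the weights would be trained (e.g., against a rule-based or
# optimized dispatch) using data from the lab-scale hardware model.
print(dispatch(pv_kw=4.0, load_kw=6.0, soc=0.7))
```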
|
28 |
Design of Power-Efficient Optical Transceivers and Design of High-Linearity Wireless Wideband Receivers -- Zhang, Yudong, January 2021
The combination of silicon photonics and advanced heterogeneous integration is promising for next-generation disaggregated data centers that demand large scale, high throughput, and low power. In this dissertation, we discuss the design and theory of power-efficient optical transceivers with System-in-Package (SiP) 2.5D integration. Combining prior art and proposed circuit techniques, a receiver chip and a transmitter chip including two 10 Gb/s data channels and one 2.5 GHz clocking channel are designed and implemented in 28 nm CMOS technology.
An innovative transimpedance amplifier (TIA) and a single-ended to differential (S2D) converter are proposed and analyzed for a low-voltage high-sensitivity receiver; a four-to-one serializer, programmable output drivers, AC coupling units, and custom pads are implemented in a low-power transmitter; an improved quadrature locked loop (QLL) is employed to generate accurate quadrature clocks. In addition, we present an analysis of the inverter-based shunt-feedback TIA that explicitly depicts the trade-off among sensitivity, data rate, and power consumption. Furthermore, research on CDR-based clocking schemes for optical links is also discussed. We review prior art and propose a power-efficient clocking scheme based on an injection-locked phase rotator. Next, we analyze injection-locked ring oscillators (ILROs), which have been widely used as quadrature clock generators (QCGs) in multi-lane optical or wireline transceivers due to their low power, low area, and technology scalability. Asymmetric or partial injection locking from two phases to four phases results in imbalances in amplitude and phase. We propose a modified frequency-domain analysis to provide intuitive insight into the performance design trade-offs. The analysis is validated by comparing analytical predictions with simulations for an ILRO-based QCG in 28 nm CMOS technology.
This dissertation also discusses the design of high-linearity wireless wideband receivers. An out-of-band (OB) IM3 cancellation technique is proposed and analyzed. By exploiting a baseband auxiliary path (AP) with a high-pass characteristic, the in-band (IB) desired signal and out-of-band interferers are separated. OB third-order intermodulation products (IM3) are reconstructed in the AP and cancelled in the baseband (BB). A 0.5-2.5 GHz frequency-translational noise-cancelling (FTNC) receiver is implemented in 65 nm CMOS to demonstrate the proposed approach. It consumes 36 mW without cancellation at a 1 GHz LO frequency and a 1.2 V supply, and it achieves 8.8 MHz baseband bandwidth, 40 dB gain, 3.3 dB NF, 5 dBm OB IIP3, and −6.5 dBm OB B1dB. After IM3 cancellation, the effective OB IIP3 increases to 32.5 dBm, with an extra 34 mW, for narrow-band interferers (two tones). For wideband interferers, 18.8 dB of cancellation is demonstrated over 10 MHz with two −15 dBm modulated interferers. The local oscillator (LO) leakage is −92 dBm and −88 dBm at 1 GHz and 2 GHz LO, respectively. In summary, this technique achieves both high OB linearity and good LO isolation.
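The sensitivity/data-rate/power trade-off mentioned for the inverter-based shunt-feedback TIA can be related to the standard first-order design relations below. These are textbook single-pole approximations, not the dissertation's derivation, with A_0 the amplifier DC gain, R_F the feedback resistor, and C_in the total input capacitance:

```latex
Z_T(0) \approx \frac{A_0}{1+A_0}\,R_F \approx R_F,
\qquad
f_{-3\,\mathrm{dB}} \approx \frac{1+A_0}{2\pi R_F C_{\mathrm{in}}},
\qquad
\frac{\overline{di^2_{n,R_F}}}{df} = \frac{4kT}{R_F}
```

A larger R_F improves sensitivity (lower input-referred noise from the feedback resistor) but shrinks the bandwidth, and hence the achievable data rate, unless the gain A_0, and with it the amplifier's power consumption, is increased.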
|
29 |
Toward Next-generation Data Centers: Principles of Software-Defined “Hardware” Infrastructures and Resource Disaggregation -- Roozbeh, Amir, January 2019
The cloud is evolving due to additional demands introduced by new technological advancements and the wide movement toward digitalization. Therefore, next-generation data centers (DCs) and clouds are expected (and need) to become cheaper, more efficient, and capable of offering more predictable services. Aligned with this, we examine the concept of software-defined “hardware” infrastructures (SDHI) based on hardware resource disaggregation as one possible way of realizing next-generation DCs. We start with an overview of the functional architecture of a cloud based on SDHI. Following this, we discuss a series of use-cases and deployment scenarios enabled by SDHI and explore the role of each functional block of SDHI’s architecture, i.e., cloud infrastructure, cloud platforms, cloud execution environments, and applications. Next, we propose a framework to evaluate the impact of SDHI on the techno-economic efficiency of DCs, specifically focusing on application profiling, hardware dimensioning, and total cost of ownership (TCO). Our study shows that combining resource disaggregation and software-defined capabilities makes DCs less expensive and easier to expand; hence they can rapidly follow exponential demand growth. Additionally, we elaborate on the technologies behind SDHI, its challenges, and its potential future directions. Finally, to identify a suitable memory management scheme for SDHI and show its advantages, we focus on the management of the Last Level Cache (LLC) in currently available Intel processors. Aligned with this, we investigate how better management of the LLC can provide higher performance, more predictable response time, and improved isolation between threads. More specifically, we take advantage of the LLC’s non-uniform cache architecture (NUCA), in which the LLC is divided into “slices,” and access by a core to the slice closest to it is faster than access to other slices. Based upon this, we introduce a new memory management scheme, called slice-aware memory management, which carefully maps the allocated memory to LLC slices based on their access latency, rather than the de facto scheme that maps them uniformly. Many applications can benefit from our memory management scheme with relatively small changes. As an example, we show the potential benefits that Key-Value Store (KVS) applications gain by utilizing our memory management scheme. Moreover, we discuss how this scheme could be used to provide explicit CPU slicing, which is one of the expectations of SDHI and hardware resource disaggregation.
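As a back-of-the-envelope illustration of why slice-aware placement helps (not the thesis's mechanism, and with invented latency numbers), the Python sketch below compares the expected LLC hit latency seen by one core when its data are spread uniformly over all slices versus concentrated in the slices closest to it.

```python
# Illustrative back-of-the-envelope only: with NUCA, a core sees a different
# access latency to each LLC slice. The de facto address-hash mapping spreads a
# buffer's cache lines over all slices; a slice-aware mapping concentrates them
# in the slices closest to the using core. Latency numbers below are invented.
SLICE_LATENCY_CYCLES = [38, 40, 44, 48, 52, 56, 60, 62]   # from one core's viewpoint

def expected_latency_uniform(latencies):
    """Lines spread evenly over all slices (uniform mapping)."""
    return sum(latencies) / len(latencies)

def expected_latency_slice_aware(latencies, n_preferred=2):
    """Lines placed only in the n slices closest to the using core."""
    return sum(sorted(latencies)[:n_preferred]) / n_preferred

uniform = expected_latency_uniform(SLICE_LATENCY_CYCLES)
aware = expected_latency_slice_aware(SLICE_LATENCY_CYCLES)
print(f"uniform: {uniform:.1f} cycles, slice-aware: {aware:.1f} cycles "
      f"({100 * (uniform - aware) / uniform:.0f}% lower latency on LLC hits)")
```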
|
30 |
Development of Strategies in Finding the Optimal Cooling of Systems of Integrated Circuits -- Minter, Dion Len, 11 June 2004
The task of thermal management in electrical systems has never been simple and has only become more difficult in recent years as the power electronics industry pushes towards devices with higher power densities. At the Center for Power Electronic Systems (CPES), a new approach to power electronic design is being implemented with the Integrated Power Electronic Module (IPEM). It is believed that an IPEM-based design approach will significantly enhance the competitiveness of the U.S. electronics industry, revolutionize the power electronics industry, and overcome many of the technology limits in today's industry by driving down the cost of manufacturing and design turnaround time. But with increased component integration comes the increased risk of component failure due to overheating. This thesis addresses the issues associated with the thermal management of integrated power electronic devices.
Two studies are presented in this thesis. The focus of these studies is the thermal design of a DC-DC front-end power converter developed at CPES with an IPEM-based approach. The first study investigates how the system responds when the fan location and heat sink fin arrangement are varied, in order to optimize the effects of conduction and forced-convection heat transfer in cooling the system. The set-up of an experimental test is presented, and the results are compared to the thermal model. The second study presents an improved methodology for the thermal modeling of large-scale electrical systems and their many subsystems. A zoom-in/zoom-out approach is used to overcome the computational limitations associated with modeling large systems. The analysis performed in this work was completed using I-DEAS©, a three-dimensional finite element analysis (FEA) program that allows the thermal designer to simulate the effects of conduction and convection heat transfer in a forced-air cooling environment. / Master of Science
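Much simpler than the I-DEAS finite element models used in the thesis, but illustrative of the first-pass reasoning behind such cooling studies, the sketch below evaluates a series junction-to-ambient thermal resistance network; all component values are invented for illustration.

```python
# Far simpler than the finite element models used in the thesis, but the usual
# first-pass check behind such cooling studies: a series thermal-resistance
# network from junction to ambient. All numbers below are invented.
def junction_temp(p_loss_w, t_ambient_c, r_jc, r_cs, r_sa):
    """T_junction = T_ambient + P * (R_junction-case + R_case-sink + R_sink-ambient)."""
    return t_ambient_c + p_loss_w * (r_jc + r_cs + r_sa)

# Example: a 25 W device on a forced-air heat sink. Improving airflow (e.g., by
# relocating the fan or re-arranging the fins) mainly lowers R_sink-ambient.
for r_sa in (1.2, 0.8, 0.5):   # K/W, better cooling as airflow improves
    tj = junction_temp(p_loss_w=25.0, t_ambient_c=35.0, r_jc=0.6, r_cs=0.2, r_sa=r_sa)
    print(f"R_sa = {r_sa:.1f} K/W  ->  T_junction ~ {tj:.1f} C")
```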
|