About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Cost-Based Automatic Recovery Policy in Data Centers

Luo, Yi 19 May 2011 (has links)
Today's data centers either provide critical applications to organizations or host computing clouds used by huge Internet populations. Their size and complex structure make management difficult and operational costs high. The large number of servers, with their varied hardware and software components, fails frequently and demands continuous recovery work, which accounts for much of the operational cost. While there is significant research on automatic recovery, from automatic error detection to various recovery techniques, no automatic solution can currently determine the exact fault behind an error, and hence the preferred recovery action; some work has instead studied how to select a suitable recovery action without knowing the underlying fault. In this thesis we propose an estimated-total-cost model based on an analysis of each recovery action's cost and success probability. Recovery-action selection picks the action with minimal estimated total cost; we implement three policies that apply this model under different treatments of failed recovery attempts. The preferred policy reduces an action's estimated success probability whenever it fails to fix the error, and we study different reduction coefficients for this policy. To evaluate the various policies, we design and implement a simulation environment. Our simulation experiments demonstrate significant cost improvements over previous research based on simple heuristic models.
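The selection rule described in this abstract lends itself to a compact sketch. The following is a minimal illustration, assuming a hypothetical action set; the costs, success probabilities, fallback cost, and reduction coefficient are invented for illustration and are not values from the thesis.

```python
import random

# Sketch of minimal-estimated-total-cost recovery-action selection.
# All numbers are illustrative assumptions, not values from the thesis.

ACTIONS = {
    # name: (cost of attempting, estimated success probability)
    "restart_service": (1.0, 0.60),
    "reboot_server": (5.0, 0.85),
    "reimage_server": (30.0, 0.99),
}
FALLBACK_COST = 100.0  # assumed cost of escalating to manual repair
REDUCTION = 0.5        # coefficient applied to p after a failed attempt

def estimated_total_cost(cost: float, p_success: float) -> float:
    """Cost of the attempt plus the expected cost of falling back on failure."""
    return cost + (1.0 - p_success) * FALLBACK_COST

def try_action(action: str) -> bool:
    """Stub: a real system would execute the action and re-check the error."""
    return random.random() < ACTIONS[action][1]

def recover(max_attempts: int = 5) -> bool:
    # Track per-action success estimates that shrink on failure,
    # mirroring the preferred policy described in the abstract.
    p = {name: prob for name, (_, prob) in ACTIONS.items()}
    for _ in range(max_attempts):
        best = min(ACTIONS, key=lambda a: estimated_total_cost(ACTIONS[a][0], p[a]))
        if try_action(best):
            return True
        p[best] *= REDUCTION  # a failed attempt lowers its success estimate
    return False

print("recovered" if recover() else "escalated to manual repair")
```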
12

It Was Raining in the Data Center

Pipkin, Everest R. 05 May 2018 (has links)
Stemming from a 2011 incident inside a Facebook data facility in which hyper-cooled air formed a literal (if somewhat transient) rain cloud in the stacks, It was raining in the data center examines ideas of non-places and supermodernity applied to contemporary network infrastructure. It argues that the problem of the rain cloud is as much a problem of psychology as it is a problem of engineering. Although humidity management is a predictable snag for any data center, the cloud was a surprise: a self-inflicted side effect of a strategy of distance. The rain cloud was a result of the same rhetoric of ephemerality that makes it easy to imagine the inside of a data center to be both everywhere and nowhere. This conceit of internet data being placeless shares roots with Marc Augé's idea of non-places (airports, highways, malls), which are predicated on the qualities of excess and movement. Without long-term inhabitants, these places fail to tether themselves to their locations, instead existing as markers of everywhere. Such a premise allows the internet to exist as an other-space that is not conceptually beholden to the demands of energy and landscape. It also liberates the idea of 'the network' from a similar history of industry. However, the network is deeply rooted in place, as well as in industry and transit. Examining the prevalence of network overlap in American fiber-optic cabling, one can easily trace cable routes along major US freight rail lines and the US interstate highway system. The historical origin of this network technology lies in weaponization and defense, from highways as a nuclear-readiness response to ARPANET's Pentagon-based funding. Such linkage with the military continues today, with data centers likely to be situated near military installations, sharing similar needs: electricity, network connectivity, fair climate, space, and invisibility. We see the repetition of militarized tropes across data structures: fiber-optic network locations are kept secret; servers are housed in cold-war bunkers; data centers nest next to military black sites. Similarly, Augé reminds us that non-places are a particular target of terrorism, populated as they are with cars, trains, drugs, and planes that turn into weapons. When the network itself is under threat of weaponization, the effect is an ambient and ephemeral fear, a paranoia made of over-connection.
13

Virtualization of Data Centers: Study on Server Energy Consumption Performance

Padala, Praneel Reddy January 2018 (has links)
For various reasons, data centers have become ubiquitous in our society. Energy costs make up a significant portion of a data center's total lifetime cost, which matters financially to operators and raises growing concern about data centers' environmental impact. Power costs and energy efficiency are therefore major challenges. Of a data center's overall energy use, about 15% goes to its networking portion; the energy used by network infrastructure in data centers worldwide is estimated at 15.6 billion kWh and is expected to increase by around 50%. Because power costs and energy consumption play a major role throughout a data center's lifetime, driving up operators' financial costs and the usage of power resources, resource utilization has become a major issue in data centers. The main aim of this thesis is to find an efficient way to utilize resources and decrease operators' energy costs in data centers using virtualization. Virtualization deploys virtual servers on physical servers so that they share the same resources, which helps decrease a data center's energy consumption.
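To make the consolidation argument concrete, here is a back-of-the-envelope sketch using a common linear server power model; the wattages, fleet sizes, and utilization levels are assumptions for illustration, not measurements from this study.

```python
# Back-of-the-envelope consolidation estimate (illustrative numbers).
# An idle server still draws a large fraction of its peak power, so
# packing VMs onto fewer, busier hosts saves energy.

IDLE_W, PEAK_W = 120.0, 250.0   # assumed per-server power draw
HOURS_PER_YEAR = 8760

def server_power(utilization: float) -> float:
    """Common linear model: idle draw plus utilization-scaled dynamic power."""
    return IDLE_W + (PEAK_W - IDLE_W) * utilization

def annual_kwh(n_servers: int, utilization: float) -> float:
    return n_servers * server_power(utilization) * HOURS_PER_YEAR / 1000.0

# 100 physical servers at 10% load vs. the same work consolidated
# as VMs onto 20 hosts running at 50% load.
before = annual_kwh(100, 0.10)
after = annual_kwh(20, 0.50)
print(f"before: {before:,.0f} kWh/yr, after: {after:,.0f} kWh/yr "
      f"({100 * (1 - after / before):.0f}% saved)")
```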
14

Analysis of Total Cost of Ownership for Medium Scale Cloud Service Provider with emphasis on Technology and Security

Dagala, Wadzani Jabani January 2017 (has links)
Total cost of ownership (TCO) is a major factor to consider when deciding to deploy cloud computing: the cost of owning or running a data centre weighs heavily on the IT manager or business owner. This research work specifies the factors that make up the TCO for medium-scale service providers, with emphasis on technology and security, and analyzes cloud service providers' expenses and how to reduce the cost of ownership. A review of related articles from a wide range of sources was conducted, reading through abstracts and overviews to establish their relevance to the subject, and interviews were conducted with two medium-scale cloud service providers and one cloud user. An average calculation of the TCO was made, a proposed cost-reduction method was implemented, and a proposal was made on how users should decide which cloud services to deploy in terms of cost and security. We conclude that many articles focus their TCO calculations on the facility without emphasizing security; security accumulates a large amount of hidden cost, and this work identifies that hidden cost, makes an average calculation, and proposes a method for reducing the TCO.
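A toy version of the kind of itemized TCO calculation this thesis argues for, with security pulled out of the hidden-cost bucket, might look as follows; every figure and category name below is hypothetical.

```python
# Illustrative TCO breakdown for a medium-scale cloud provider (all
# figures hypothetical). Itemizing security costs, which often hide
# inside other categories, makes the true cost of ownership visible.

ANNUAL_COSTS = {               # USD/year, assumed values
    "hardware_amortization": 400_000,
    "power_and_cooling":     250_000,
    "staff":                 300_000,
    "software_licenses":     120_000,
    # "hidden" security costs, itemized explicitly:
    "security_hardware":      60_000,   # firewalls, HSMs
    "security_software":      45_000,   # IDS/IPS, SIEM licences
    "compliance_and_audit":   35_000,
    "incident_response":      25_000,
}

total = sum(ANNUAL_COSTS.values())
security = sum(v for k, v in ANNUAL_COSTS.items()
               if k.startswith(("security", "compliance", "incident")))
print(f"annual TCO: ${total:,}  (security share: {100 * security / total:.0f}%)")
```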
15

Development of Silicon Photonic Multi Chip Module Transceivers

Abrams, Nathan Casey January 2020 (has links)
The exponential growth of data generation, driven in part by the proliferation of applications such as high-definition streaming, artificial intelligence, and the internet of things, presents an impending bottleneck for electrical interconnects to fulfill data center bandwidth demands. Links now require bandwidths in excess of multiple Tbps while operating on the order of picojoules per bit, in addition to constraints on areal bandwidth density and pin I/O bandwidth density. Optical communication built on a silicon photonic platform offers a potential path to power-efficient, high-bandwidth, low-attenuation, small-footprint links, all while building on the mature CMOS ecosystem. The development of silicon photonic foundries supporting multi-project wafer runs, with associated process design kit components, supports a path towards widespread commercial production by increasing production volume while reducing fabrication and development costs. While silicon photonics can always be improved in terms of performance and yield, one of the central challenges is integrating the silicon photonic integrated circuits with the driving electronic integrated circuits and with data-generating compute nodes such as CPUs, FPGAs, and ASICs. Co-packaging the photonics with the electronics is crucial for the adoption of silicon photonics in data centers, as improper integration negates its potential benefits. The work in this dissertation centers on the development of silicon photonic multi-chip module transceivers to aid the deployment of silicon photonics within data centers. Section one focuses on silicon photonic integration and highlights multiple integrated transceiver prototypes. The central prototype features a photonic integrated circuit with bus waveguides carrying WDM microdisk modulators for the transmitter and WDM demuxes with drop ports to photodiodes for the receiver. This 2.5D-integrated prototype utilizes a thinned silicon interposer and TIA electronic integrated circuits; its architecture, integration, characterization, performance, and scalability are discussed. The development of this first prototype identified key design considerations for multi-chip module silicon photonic designs, which are addressed in this section. Finally, other multi-chip module silicon photonic prototypes are overviewed: a 2.5D-integrated transceiver with a different TIA electronic integrated circuit, a 3D-integrated receiver, an active interposer network-on-chip, and a 2.5D-integrated transceiver with custom electronic integrated circuits. Section two focuses on research that supports the development of silicon photonic transceivers. The thermal crosstalk between neighboring microdisk modulators is investigated as a function of modulator pitch; as modulators are placed at denser pitches to meet areal bandwidth density requirements in transceivers, this crosstalk becomes significant. Designs and results from several iterations of custom microring modulators are reported: custom microring modulators allow scaling up the number of channels in microring transceivers by enabling variable resonances, and they provide a platform for further innovation in bandwidth, free spectral range, and energy efficiency. Designs and results of higher-order modulation format modulators, both microring-based and Mach-Zehnder-based, are also discussed. Higher-order modulators offer a path towards scaling total transceiver throughput without increasing channel counts or component bandwidth. Together, the work in these two sections supports the development of silicon photonic transceivers to aid the adoption of silicon photonics in data-generating systems.
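As a rough illustration of why free spectral range (FSR) bounds channel scaling in ring-based WDM transceivers, the standard relation FSR = λ²/(n_g·L) can be evaluated directly; the group index, ring radius, and channel spacing below are assumed values, not the dissertation's device parameters.

```python
import math

# Textbook microring relation; device parameters below are assumptions.
LAMBDA = 1550e-9          # operating wavelength (m)
N_GROUP = 4.2             # assumed group index of a silicon ring waveguide
RADIUS = 10e-6            # assumed ring radius (m)
CHANNEL_SPACING = 0.8e-9  # assumed ~100 GHz grid near 1550 nm (~0.8 nm)

circumference = 2 * math.pi * RADIUS
fsr = LAMBDA**2 / (N_GROUP * circumference)  # FSR in wavelength terms
channels = int(fsr // CHANNEL_SPACING)       # channels that fit in one FSR

print(f"FSR = {fsr * 1e9:.2f} nm -> up to {channels} channels "
      f"at {CHANNEL_SPACING * 1e9:.1f} nm spacing")
```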
16

Energy-Efficient Power Management Architectures for Emerging Needs from the Internet of Things Devices to Data Centers

Kim, Dongkwun January 2022 (has links)
The Internet of Things (IoT) now permeates our daily lives, providing critical data for every decision. IoT architecture consists of multiple layers with unique functions and independent components, and each layer requires different power sources and power delivery schemes; therefore, different types of power management architectures are required for individual IoT components. Fortunately, advances in metal-oxide-semiconductor (MOS) technology have made it possible to implement a variety of high-performance power management architectures. These architectures should not only create the power rails required by IoT components but also serve additional functions depending on the application. The power management architecture of IoT devices needs to support sub-mW- or mW-scale power consumption, and it should be either fully integrated on chip or miniaturized with few passive components to minimize device size. Building-scale data centers, on the other hand, need various power conversion stages. There, power conversion from an intermediate DC bus to many points of load (PoL) requires DC-DC converters with high conversion ratios, and because each PoL draws an enormous amount of power, the power management architecture should withstand high currents and include protection circuitry to prevent damage. This thesis presents research on the design of power management architectures required by IoT devices and data centers. Chapter 2 presents design and circuit techniques for power management architectures in IoT devices, outlining a new methodology for co-designing an integrated switched-capacitor (SC) DC-DC converter and its load circuit in ultra-low-power IoT devices; this methodology enables an area-efficient, fully integrated IoT system-on-chip (SoC) while maintaining high power conversion efficiency (PCE). Chapter 3 presents a 10-output ultra-low-power single-inductor multiple-output (SIMO) DC-DC buck converter with integrated output capacitors for sub-mW IoT SoCs; featuring a continuous comparator-based output switch controller and a digital pulse-width modulation (PWM) controller for ultra-low feedback latency, this SIMO converter produces ten independent output voltages with high PCE. Lastly, Chapter 4 develops an integrated programmable gate timing control and gate driver chip for an active-clamp forward converter (ACFC) Power Block for data center applications; while the ACFC Power Block converts a 12-48 V intermediate DC bus voltage to a digital PoL voltage, the gate timing control and driver chip can optimize PCE and reduce the system form factor.
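The SC-converter co-design trade-off in Chapter 2 rests on a standard property of switched-capacitor converters: for an ideal N:1 step-down topology, efficiency is capped by the ratio of the actual output voltage to the ideal ratio voltage. A minimal sketch of that textbook bound follows, with made-up rail and load voltages.

```python
# Intrinsic efficiency bound of a switched-capacitor (SC) DC-DC converter
# (standard result, not specific to this thesis): for an ideal N:1
# step-down topology, any output below V_in/N is "dropped" resistively,
# so PCE <= V_out / (V_in / N).

def sc_efficiency_bound(v_in: float, v_out: float, ratio: int) -> float:
    v_ideal = v_in / ratio
    if v_out > v_ideal:
        raise ValueError("output above the ideal ratio voltage is unreachable")
    return v_out / v_ideal

# A 3:1 SC converter from a 1.8 V rail down to a 0.55 V sub-mW IoT load:
print(f"PCE bound: {sc_efficiency_bound(1.8, 0.55, 3):.1%}")
```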
17

Reliability Characterization and Performance Analysis of Solid State Drives in Data Centers

Liang, Shuwen (Computer science and engineering researcher) 12 1900 (has links)
NAND flash-based solid state drives (SSDs) have been widely adopted in data centers and high-performance computing (HPC) systems due to their better performance compared with hard disk drives. However, little is known about the reliability characteristics of SSDs in production systems, and existing studies of the statistical distributions of SSD failures in the field lack insight into SSDs' distinct characteristics. In this dissertation, I explore SSD-specific SMART (Self-Monitoring, Analysis, and Reporting Technology) attributes and conduct an in-depth analysis of SSD reliability in a production environment, focusing on unique error types and health dynamics. QLC SSDs deliver better performance in a cost-effective way; I study QLC SSDs in terms of their architecture and performance, apply thermal stress tests to them, and quantify their performance degradation processes. Various types of big data and machine learning workloads were executed on the SSDs under varying temperatures, and the SSD throughput and application performance are analyzed and characterized.
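Fleet-level SMART analysis of the kind described here typically reduces to grouping per-drive time series and comparing failed against healthy populations. Below is a sketch under an assumed log schema; the file name and column names are hypothetical, not the dissertation's dataset.

```python
import pandas as pd

# Sketch of fleet-level SMART analysis (hypothetical schema: one row per
# drive per day, with SSD-specific SMART attributes as columns).
logs = pd.read_csv("ssd_smart_daily.csv")

# Collapse each drive's history into its worst observed values.
per_drive = logs.groupby("drive_id").agg(
    failed=("failed", "max"),
    wear=("wear_leveling_count", "max"),
    realloc=("reallocated_sector_count", "max"),
    crc_errors=("crc_error_count", "max"),
)

# Compare attribute distributions between failed and healthy drives.
print(per_drive.groupby("failed")[["wear", "realloc", "crc_errors"]].median())
```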
18

Powering the Information Age: Metrics, Social Cost Optimization Strategies, and Indirect Effects Related to Data Center Energy Use

Horner, Nathaniel Charles 01 August 2016 (has links)
This dissertation contains three studies examining aspects of energy use by data centers and other information and communication technology (ICT) infrastructure necessary to support the electronic services that now form such a pervasive aspect of daily life. The energy consumption of ICT in general, and data centers in particular, has been of growing interest to both industry and the public, with continued calls for increased efficiency and greater focus on environmental impacts. The first study examines the metrics used to assess data center energy performance and finds that power usage effectiveness (PUE), the de facto industry standard, accounts for only one of four critical aspects of data center energy performance: PUE measures the overhead of the facility infrastructure but does not consider the efficiency of the IT equipment, its utilization, or the emissions profile of the power source. As a result, PUE corresponds poorly with energy and carbon efficiency, as demonstrated using a small set of empirical data center energy use measurements. The second study lays out a taxonomy of indirect energy impacts to help assess whether ICT's direct energy consumption is offset by its energy benefits, and concludes that ICT likely has a large potential net energy benefit, but that there is no consensus on the sign or magnitude of actual savings, which are largely dependent upon implementation details. The third study estimates the potential of dynamic load shifting in a content distribution network to reduce both the private costs and the emissions-related externalities associated with electricity consumption. Utilizing variable marginal retail prices based on wholesale electricity markets, and marginal damages estimated from emissions data, in a cost-minimization model, the analysis finds that load shifting can either reduce data center power bills by approximately 25%–33% or avoid 30%–40% of public damages, while a range of joint cost-minimization strategies enables simultaneous reduction of both private and public costs. The vast majority of these savings can be achieved even under existing bandwidth and network-distance constraints, although current industry trends towards virtualization, energy efficiency, and green power may make load shifting less appealing.
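PUE itself is a one-line formula: total facility energy divided by IT equipment energy. The toy comparison below, with invented numbers, illustrates the first study's point that a low PUE can coexist with poor carbon efficiency.

```python
# PUE = total facility energy / IT equipment energy (industry definition).
# All numbers below are invented for illustration.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh

def kgco2_per_unit_work(total_kwh: float, grid_kgco2_per_kwh: float,
                        useful_work_units: float) -> float:
    return total_kwh * grid_kgco2_per_kwh / useful_work_units

# Facility A: excellent PUE, but a coal-heavy grid and idle servers.
# Facility B: mediocre PUE, but a clean grid and well-utilized servers.
print(f"A: PUE {pue(1_100_000, 1_000_000):.2f}, "
      f"{kgco2_per_unit_work(1_100_000, 0.9, 2e6):.3f} kgCO2/unit")
print(f"B: PUE {pue(1_500_000, 1_000_000):.2f}, "
      f"{kgco2_per_unit_work(1_500_000, 0.2, 6e6):.3f} kgCO2/unit")
```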
19

Online Social Network Data Placement over Clouds

Jiao, Lei 10 July 2014 (has links)
No description available.
20

New abstractions and mechanisms for virtualizing future many-core systems

Kumar, Sanjay 08 July 2008 (has links)
Abstracting physical computing infrastructure into virtual infrastructure is a longstanding goal. Efforts in the computing industry started with early work on virtual machines in IBM's VM370 operating system and architecture, continued with extensive developments in distributed systems in the context of grid computing, and now involve investments by key hardware and software vendors to efficiently virtualize common hardware platforms. Recent efforts in virtualization technology are driven by two facts: (i) technology push -- new hardware support for virtualization in multi- and many-core platforms and in the interconnects and networks used to connect them, and (ii) technology pull -- the need to efficiently manage large-scale data centers used for utility computing and, extending from there, to manage more loosely coupled virtual execution environments like those used in cloud computing. Concerning (i), platform virtualization is proving to be an effective way to partition, and then efficiently use, the ever-increasing number of cores in many-core chips. Further, I/O virtualization enables I/O device sharing with increased device throughput, providing required I/O functionality to the many virtual machines (VMs) sharing a single platform. Concerning (ii), through server consolidation and VM migration, for instance, virtualization increases the flexibility of modern enterprise systems and creates opportunities for improvements in operational efficiency, power consumption, and the ability to meet time-varying application needs. This thesis contributes (i) new technologies that further increase system flexibility by addressing key problems of existing virtualization infrastructures, and (ii) ways to exploit the resulting flexibility to improve data-center operations, e.g., power management, by providing lightweight, efficient management technologies and techniques that operate across the range from individual many-core platforms to data-center systems. Concerning (i), the thesis contributes insights into how to better structure virtual machine monitors (VMMs) for large many-core systems to utilize cores more efficiently, implementing and evaluating the novel Sidecore approach, which permits VMMs to exploit the computational power of parallel cores to improve overall VMM and I/O performance. Further, because I/O virtualization still lacks complete transparency between virtual and physical devices, limiting VM mobility and flexibility in accessing devices, this thesis defines and implements the novel Netchannel abstraction, which provides complete location transparency between virtual and physical I/O devices, thereby decoupling device access from device location and enabling live VM migration and device hot-swapping. Concerning (ii), the vManage set of abstractions, mechanisms, and methods developed in this work is shown to substantially improve system manageability by providing a lightweight, system-level architecture for implementing and running the management applications required in data-center and cloud computing environments; vManage simplifies management by making it possible, and easier, to coordinate the management actions taken by the many management applications and subsystems present in such systems. Experimental evaluations of the Sidecore approach to VMM structure, of Netchannel, and of vManage are conducted on representative platforms and server systems, with consequent improvements in flexibility, I/O performance, and management efficiency, including power management.
