31

Energy Management System Modeling of DC Data Center with Hybrid Energy Sources Using Neural Network

Althomali, Khalid 01 February 2017 (has links)
As data centers continue to grow rapidly, engineers face an ever greater challenge in finding ways to minimize the cost of powering data centers while improving their reliability. The continuing growth of renewable energy sources such as photovoltaic (PV) systems presents an opportunity to reduce the long-term energy cost of data centers and to enhance reliability when they are used alongside utility AC power and energy storage. However, the intermittent, time-varying nature of solar energy makes proper coordination and management of these energy sources necessary. This thesis proposes an energy management system for a DC data center that uses a neural network to coordinate AC power, energy storage, and a PV system into a reliable electrical power distribution for the data center. Software modeling of the DC data center was first developed for the proposed system, followed by the construction of a lab-scale model to simulate it. Five scenarios were tested on the hardware model, and the results demonstrate the effectiveness and accuracy of the neural network approach. The results further prove the feasibility of utilizing renewable energy sources and energy storage in DC data centers. The analysis and performance of the proposed system are discussed in this thesis, and future improvements for energy system reliability are also presented.
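To make the coordination idea concrete, here is a minimal, hedged sketch of how a small feedforward network could map system state to a power-dispatch decision. The network size, the random (untrained) weights, the softmax dispatch rule, and the 50 kW load are purely illustrative and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: normalized server load, PV output, battery state of charge.
x = np.array([0.7, 0.4, 0.6])

# Illustrative 3-8-3 feedforward network with random (untrained) weights.
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

h = np.tanh(W1 @ x + b1)                  # hidden layer
raw = W2 @ h + b2                         # raw source scores: PV, battery, grid
shares = np.exp(raw) / np.exp(raw).sum()  # softmax -> fraction of load per source

load_kw = 50.0                            # hypothetical total DC load
dispatch = dict(zip(["pv", "battery", "grid"], shares * load_kw))
print({k: round(float(v), 2) for k, v in dispatch.items()})
```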
32

Design of Power-Efficient Optical Transceivers and Design of High-Linearity Wireless Wideband Receivers

Zhang, Yudong January 2021 (has links)
The combination of silicon photonics and advanced heterogeneous integration is promising for next-generation disaggregated data centers that demand large scale, high throughput, and low power. In this dissertation, we discuss the design and theory of power-efficient optical transceivers with System-in-Package (SiP) 2.5D integration. Combining prior art and proposed circuit techniques, a receiver chip and a transmitter chip comprising two 10 Gb/s data channels and one 2.5 GHz clocking channel are designed and implemented in 28 nm CMOS technology. An innovative transimpedance amplifier (TIA) and a single-ended to differential (S2D) converter are proposed and analyzed for a low-voltage, high-sensitivity receiver; a four-to-one serializer, programmable output drivers, AC coupling units, and custom pads are implemented in a low-power transmitter; and an improved quadrature locked loop (QLL) is employed to generate accurate quadrature clocks. In addition, we present an analysis of the inverter-based shunt-feedback TIA that explicitly depicts the trade-off among sensitivity, data rate, and power consumption. Finally, research on CDR-based clocking schemes for optical links is also discussed. We introduce prior art and propose a power-efficient clocking scheme based on an injection-locked phase rotator. Next, we analyze injection-locked ring oscillators (ILROs), which have been widely used for quadrature clock generators (QCGs) in multi-lane optical or wireline transceivers due to their low power, low area, and technology scalability. The asymmetrical or partial injection locking from two phases to four phases results in imbalances in amplitude and phase. We propose a modified frequency-domain analysis that provides intuitive insight into the performance design trade-offs. The analysis is validated by comparing analytical predictions with simulations for an ILRO-based QCG in 28 nm CMOS technology. This dissertation also discusses the design of high-linearity wireless wideband receivers. An out-of-band (OB) IM3 cancellation technique is proposed and analyzed. By exploiting a baseband auxiliary path (AP) with a high-pass characteristic, the in-band (IB) desired signal and out-of-band interferers are split. OB third-order intermodulation products (IM3) are reconstructed in the AP and cancelled in the baseband (BB). A 0.5-2.5 GHz frequency-translational noise-cancelling (FTNC) receiver is implemented in 65 nm CMOS to demonstrate the proposed approach. It consumes 36 mW without cancellation at a 1 GHz LO frequency and a 1.2 V supply, and it achieves 8.8 MHz baseband bandwidth, 40 dB gain, 3.3 dB NF, 5 dBm OB IIP3, and −6.5 dBm OB B1dB. After IM3 cancellation, the effective OB-IIP3 increases to 32.5 dBm with an extra 34 mW for narrow-band interferers (two tones). For wideband interferers, 18.8 dB of cancellation is demonstrated over 10 MHz with two −15 dBm modulated interferers. The local oscillator (LO) leakage is −92 dBm and −88 dBm at 1 GHz and 2 GHz LO respectively. In summary, this technique achieves both high OB linearity and good LO isolation.
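The stated trade-off among sensitivity, data rate, and power for a shunt-feedback TIA can be illustrated with standard first-order textbook relations (DC transimpedance ≈ R_F, bandwidth set by the closed-loop input pole, input-referred noise dominated by R_F's thermal noise). The sketch below uses those generic expressions with hypothetical component values; it is not the dissertation's own analysis or design numbers.

```python
import math

k_B, T = 1.38e-23, 300.0  # Boltzmann constant, temperature in kelvin

def shunt_feedback_tia(r_f, a0, c_in):
    """First-order estimates for an inverter-based shunt-feedback TIA.

    r_f  : feedback resistance in ohms
    a0   : low-frequency voltage gain of the inverter core
    c_in : total input capacitance in farads (photodiode + pad + amplifier)
    """
    z_t = r_f * a0 / (1 + a0)                      # DC transimpedance ~ R_F
    f_3db = (1 + a0) / (2 * math.pi * r_f * c_in)  # closed-loop input-pole bandwidth
    i_n = math.sqrt(4 * k_B * T / r_f)             # R_F thermal noise, A/sqrt(Hz)
    return z_t, f_3db, i_n

# Hypothetical design sweep: larger R_F raises gain and lowers noise but costs bandwidth.
for r_f in (500, 1_000, 2_000):
    z_t, f_3db, i_n = shunt_feedback_tia(r_f, a0=8.0, c_in=150e-15)
    print(f"R_F={r_f:5d} ohm  Z_T={z_t:7.1f} ohm  "
          f"f_3dB={f_3db/1e9:5.2f} GHz  i_n={i_n*1e12:5.2f} pA/sqrt(Hz)")
```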
33

Application of Data-driven Techniques for Thermal Management in Data Centers

Jiang, Kai January 2021 (has links)
This thesis mainly addresses the problems of thermal management in data centers (DCs) through data-driven techniques. Thermal management requires an accurate temperature prediction model of the facility, yet first-principles thermal modeling of DCs is difficult because of the complicated airflow and heat transfer. Therefore, we employ multiple data-driven techniques, including statistical methods and deep neural networks (DNNs), to represent the thermal dynamics. Based on such data-driven models, temperature estimation and control are then implemented to optimize thermal management in DCs. The contributions of this study are summarized in the following four aspects: 1) A data-driven model constructed from multiple linear autoregressive exogenous (ARX) models is adopted to describe the thermal behavior of DCs. On the basis of this data-driven model, an adaptive Kalman filter observer is proposed to estimate the temperature distribution in the DC. 2) Based on the data-driven model proposed in the first work, a data-driven fault-tolerant predictive controller that accounts for different actuator faults is developed to regulate the temperature in the DC. 3) To improve modeling accuracy, a deep input convex neural network (ICNN) is adopted for thermal modeling in DCs; it is also specifically designed for subsequent control design. In addition, the elastic weight consolidation (EWC) algorithm is employed to overcome catastrophic forgetting in continual learning. 4) A novel example-reweighting algorithm is utilized to enhance the robustness of the ICNN against noisy data and to avoid overfitting during training. Finally, all the proposed approaches are validated in real experiments or experimental-data-based simulations. / Dissertation / Doctor of Philosophy (PhD) / This thesis mainly investigates the application of data-driven techniques for thermal management in data centers. The implementation of thermal modeling, temperature estimation, and temperature control in data centers is the key contribution of this work. First, we design a data-driven statistical model to describe the complicated thermal dynamics of a data center. Then, based on the data-driven model, an efficient observer and controller are developed to optimize thermal management in data centers. Moreover, to improve nonlinear modeling performance in data centers, deep input convex neural networks, which offer good representation capability and control tractability, are adopted. This thesis also proposes two novel strategies to avoid the influence of catastrophic forgetting and noisy data, respectively, during training. Finally, all the proposed techniques are validated in real experiments or experimental-data-based simulations.
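A minimal sketch of contribution 1), assuming a scalar first-order ARX temperature model and a textbook (non-adaptive) Kalman filter; the coefficients, noise variances, and cooling-input rule are hypothetical placeholders rather than values identified from data-center measurements.

```python
import numpy as np

# Hypothetical first-order ARX model of one rack-inlet temperature:
#   T[k+1] = a*T[k] + (1 - a)*T_amb + b1*u_cool[k] + b2*p_it[k] + w[k]
a, b1, b2, T_amb = 0.92, -0.35, 0.10, 25.0   # illustrative, not identified from data
Q, R = 0.02, 0.25                            # process / measurement noise variances

rng = np.random.default_rng(1)
T_true, T_hat, P = 24.0, 22.0, 1.0           # true state, estimate, error covariance

for k in range(50):
    u_cool = 1.0 if T_hat > 25.0 else 0.5    # crude cooling input, for illustration only
    p_it = 0.8 + 0.2 * np.sin(0.2 * k)       # normalized IT load

    # Simulated plant and noisy sensor reading
    T_true = a * T_true + (1 - a) * T_amb + b1 * u_cool + b2 * p_it + rng.normal(0, Q**0.5)
    z = T_true + rng.normal(0, R**0.5)

    # Kalman filter: predict with the ARX model, then correct with the measurement
    T_pred = a * T_hat + (1 - a) * T_amb + b1 * u_cool + b2 * p_it
    P_pred = a * P * a + Q
    K = P_pred / (P_pred + R)
    T_hat = T_pred + K * (z - T_pred)
    P = (1 - K) * P_pred

print(f"final estimate {T_hat:.2f} C vs true {T_true:.2f} C")
```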
34

Data centers : The influence of big tech on urban planning in Sweden

Maas, Julie January 2022 (has links)
This thesis aimed to describe what data centers, and the planning for them, reveal about the power relations between big tech companies and Sweden's municipalities and national government. Data centers owned by large IT companies serve global interests but depend on and have an impact on local infrastructures, as demonstrated, for instance, by the large amount of energy they require. A Microsoft data center in Staffanstorp, located in Skåne, served as a case study. Based on various types of documents, the study analyzed what this hyperscale data center uncovers about the influence of big tech on urban planning in Sweden. For this, theoretical concepts such as cloud infrastructures, the hidden materiality of the cloud, and clouding have been used. The thesis explored the motivations behind choosing Staffanstorp as the site of a hyperscale data center. Sweden is an attractive data center location for big tech companies. The image corporations have of Sweden is an important contributing factor here, as it is not just the factual characteristics of a location that determine its attractiveness, but first and foremost how that location is perceived. The analysis therefore also highlights the promotional strategies that the government and the municipality of Staffanstorp have employed to attract data centers, in which Business Sweden appeared to have played a key role. Other significant factors that contribute to big tech's interest in Sweden include cheap renewable energy, a 98% electricity tax reduction, and a business-friendly environment. The processes behind the planning of Microsoft's data center in Staffanstorp have also been studied by examining developments in its implementation. Reflecting on the outcomes of Microsoft's data center by comparing these developments to plans and visions for Sweden and Staffanstorp shows that the promise of jobs associated with a data center location is paradoxical and that hyperscale data centers potentially endanger the energy supply. The research concludes that rather than corporations directly influencing Swedish planning, Sweden indirectly allows them to have a large influence.
35

Toward Next-generation Data Centers : Principles of Software-Defined “Hardware” Infrastructures and Resource Disaggregation

Roozbeh, Amir January 2019 (has links)
The cloud is evolving due to additional demands introduced by new technological advancements and the wide movement toward digitalization. Therefore, next-generation data centers (DCs) and clouds are expected (and need) to become cheaper, more efficient, and capable of offering more predictable services. Aligned with this, we examine the concept of software-defined “hardware” infrastructures (SDHI) based on hardware resource disaggregation as one possible way of realizing next-generation DCs. We start with an overview of the functional architecture of a cloud based on SDHI. Following this, we discuss a series of use-cases and deployment scenarios enabled by SDHI and explore the role of each functional block of SDHI’s architecture, i.e., cloud infrastructure, cloud platforms, cloud execution environments, and applications. Next, we propose a framework to evaluate the impact of SDHI on the techno-economic efficiency of DCs, focusing specifically on application profiling, hardware dimensioning, and total cost of ownership (TCO). Our study shows that combining resource disaggregation and software-defined capabilities makes DCs less expensive and easier to expand; hence they can rapidly follow exponential demand growth. Additionally, we elaborate on the technologies behind SDHI, its challenges, and its potential future directions. Finally, to identify a suitable memory management scheme for SDHI and show its advantages, we focus on the management of the Last Level Cache (LLC) in currently available Intel processors. Aligned with this, we investigate how better management of the LLC can provide higher performance, more predictable response time, and improved isolation between threads. More specifically, we take advantage of the LLC’s non-uniform cache architecture (NUCA), in which the LLC is divided into “slices,” where a core’s access to the slice closest to it is faster than its access to the other slices. Based upon this, we introduce a new memory management scheme, called slice-aware memory management, which carefully maps allocated memory to LLC slices based on their access latency, rather than the de facto scheme that maps them uniformly. Many applications can benefit from our memory management scheme with relatively small changes. As an example, we show the potential benefits that Key-Value Store (KVS) applications gain by utilizing our memory management scheme. Moreover, we discuss how this scheme could be used to provide explicit CPU slicing, which is one of the expectations of SDHI and hardware resource disaggregation.
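A toy illustration of the slice-aware idea, assuming hypothetical per-slice access latencies as seen from one core; the actual work measures these latencies on Intel hardware and maps allocations accordingly, rather than assuming them.

```python
# Compare the average LLC access latency of uniform address hashing across slices
# with a slice-aware placement that keeps the working set in the slices nearest
# the accessing core. All latency numbers are made up for illustration.
import random

random.seed(0)

N_SLICES = 8
# Hypothetical access latency (cycles) from core 0 to each LLC slice; real values
# depend on the ring/mesh topology and must be measured per processor.
latency_from_core0 = [20, 24, 28, 32, 36, 40, 44, 48]

def uniform_mapping(n_lines):
    # De facto behavior: addresses hash roughly uniformly across slices.
    return [random.randrange(N_SLICES) for _ in range(n_lines)]

def slice_aware_mapping(n_lines, preferred=(0, 1)):
    # Slice-aware allocation: place the working set in the slices nearest core 0.
    return [random.choice(preferred) for _ in range(n_lines)]

def avg_latency(mapping):
    return sum(latency_from_core0[s] for s in mapping) / len(mapping)

lines = 100_000
print("uniform    :", round(avg_latency(uniform_mapping(lines)), 1), "cycles")
print("slice-aware:", round(avg_latency(slice_aware_mapping(lines)), 1), "cycles")
```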
36

Development of Strategies in Finding the Optimal Cooling of Systems of Integrated Circuits

Minter, Dion Len 11 June 2004 (has links)
The task of thermal management in electrical systems has never been simple and has only become more difficult in recent years as the power electronics industry pushes towards devices with higher power densities. At the Center for Power Electronic Systems (CPES), a new approach to power electronic design is being implemented with the Integrated Power Electronic Module (IPEM). It is believed that an IPEM-based design approach will significantly enhance the competitiveness of the U.S. electronics industry, revolutionize the power electronics industry, and overcome many of the technology limits in today's industry by driving down the cost of manufacturing and design turnaround time. But with increased component integration comes an increased risk of component failure due to overheating. This thesis addresses the issues associated with the thermal management of integrated power electronic devices. Two studies are presented. The focus of these studies is the thermal design of a DC-DC front-end power converter developed at CPES with an IPEM-based approach. The first study investigates how the system responds when the fan location and heat sink fin arrangement are varied in order to optimize the effects of conduction and forced-convection heat transfer in cooling the system. The set-up of an experimental test is presented, and the results are compared to the thermal model. The second study presents an improved methodology for the thermal modeling of large-scale electrical systems and their many subsystems. A zoom-in/zoom-out approach is used to overcome the computational limitations associated with modeling large systems. The analysis performed in this work was completed using I-DEAS©, a three-dimensional finite element analysis (FEA) program that allows the thermal designer to simulate the effects of conduction and convection heat transfer in a forced-air cooling environment. / Master of Science
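As a rough complement to the FEA studies described above, a simple series thermal-resistance estimate shows why fan placement and the resulting convection coefficient matter so much; all values below are hypothetical and not taken from the thesis.

```python
# Back-of-the-envelope junction-temperature estimate for a power module cooled
# through a conduction path and a forced-air heat sink (series resistances).
def junction_temperature(t_ambient, power_w, r_cond, r_conv):
    """Series network: T_j = T_amb + P * (R_cond + R_conv)."""
    return t_ambient + power_w * (r_cond + r_conv)

def convection_resistance(h, area_m2):
    """R_conv = 1 / (h * A) for a heat sink with effective area A and coefficient h."""
    return 1.0 / (h * area_m2)

# Hypothetical module dissipating 40 W through a 0.5 K/W conduction path into a
# heat sink with 0.05 m^2 effective area; h rises as fan-driven air velocity rises.
for h in (10, 25, 50, 100):  # W/(m^2*K): natural convection up to strong forced air
    r_conv = convection_resistance(h, 0.05)
    t_j = junction_temperature(t_ambient=35.0, power_w=40.0, r_cond=0.5, r_conv=r_conv)
    print(f"h={h:3d} W/m^2K  R_conv={r_conv:5.2f} K/W  T_junction={t_j:6.1f} C")
```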
37

Towards more scalability and flexibility for distributed storage systems / Vers un meilleur passage à l'échelle et une plus grande flexibilité pour les systèmes de stockage distribué

Ruty, Guillaume 15 February 2019 (has links)
The exponentially growing demand for storage puts huge stress on traditional distributed storage systems. While storage devices' performance has caught up with that of network devices over the last decade in terms of order of magnitude, their capacity does not grow as fast as the rate of data growth, especially with the rise of cloud big data applications. Furthermore, the performance balance between storage, network, and compute devices has shifted, and the assumptions on which most distributed storage systems are founded no longer hold. This dissertation explains how several aspects of such storage systems can be modified and rethought to make more efficient use of the resources at their disposal. It presents an original architecture that uses a distributed layer of metadata to provide flexible and scalable object-level storage, then proposes a scheduling algorithm that improves how a generic storage system handles concurrent client requests by treating them more fairly. Finally, it describes how to improve legacy filesystem-level caching for erasure-code-based distributed storage systems, before presenting a few other contributions made in the context of short research projects.
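A minimal sketch of the fairness idea behind the proposed request scheduler, assuming simple per-client round-robin service; the dissertation's actual algorithm is more elaborate, so this only illustrates how separating client queues prevents one heavy client from starving the others.

```python
# Contrast a single FIFO queue with round-robin service over per-client queues.
from collections import deque

def fifo_order(requests):
    return list(requests)

def round_robin_order(requests):
    # Build one queue per client, then serve the queues in rotation.
    queues = {}
    for client, req_id in requests:
        queues.setdefault(client, deque()).append((client, req_id))
    order, queue_list = [], list(queues.values())
    while any(queue_list):
        for q in queue_list:
            if q:
                order.append(q.popleft())
    return order

# Client "A" floods the system with 6 requests before "B" and "C" each send one.
reqs = [("A", i) for i in range(6)] + [("B", 0), ("C", 0)]
print("FIFO       :", fifo_order(reqs))
print("round-robin:", round_robin_order(reqs))
```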
38

Advanced thermal management strategies for energy-efficient data centers

Somani, Ankit 02 January 2009 (has links)
A simplified computational fluid dynamics/heat transfer (CFD/HT) model of a unit cell of a data center with a hot-aisle/cold-aisle (HACA) layout is simulated. Inefficiencies arising from the mixing of hot air in the room with the cold inlet air, which leads to a loss of cooling potential, are identified. For existing facilities, an algorithm called Ambient Intelligence based Load Management (AILM) is developed that enhances the net data center heat dissipation capacity for a given energy consumption at the facilities end. It provides a scheme to determine how much computer load should be allocated, and where, based on the differential loss in cooling potential per unit increase in server workload. While the predicted gains are first validated numerically, experimental validation is conducted using server simulators. For new facilities, a novel data center layout is designed that uses a scalable-pod (S-Pod) based cabinet arrangement and air delivery. For the same floor space, the S-Pod and HACA facilities are simulated at different velocities, and the results are compared. An approach to incorporating heterogeneity in data centers, both for lower-heat-dissipation and liquid-cooled racks, is established. Various performance metrics for data centers are analyzed and sorted on the basis of their applicability. Finally, a roadmap is laid out for transforming existing facilities toward greater cognizance of Facilities/IT performance.
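A hedged sketch of the load-placement idea behind AILM: greedily assign workload to the racks whose cooling potential degrades least per unit of added load. The per-rack coefficients and capacities below are invented for illustration; in the thesis they come from the CFD/HT model and experiments.

```python
# Hypothetical greedy placement in the spirit of AILM: assign each unit of compute
# load to the rack with the smallest marginal loss of cooling potential.
cooling_loss_per_unit = {"rack_A": 0.8, "rack_B": 1.5, "rack_C": 0.6, "rack_D": 1.1}
capacity = {"rack_A": 10, "rack_B": 10, "rack_C": 8, "rack_D": 12}

def place_load(total_units):
    placement = {rack: 0 for rack in cooling_loss_per_unit}
    for _ in range(total_units):
        # Pick the rack with spare capacity and the smallest marginal cooling loss.
        candidates = [r for r in placement if placement[r] < capacity[r]]
        if not candidates:
            raise RuntimeError("no remaining rack capacity")
        best = min(candidates, key=lambda r: cooling_loss_per_unit[r])
        placement[best] += 1
    return placement

print(place_load(25))
```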
39

Improving Resource Management in Virtualized Data Centers using Application Performance Models

Kundu, Sajib 01 April 2013 (has links)
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated the modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
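A minimal sketch, using scikit-learn's SVR, of the kind of performance model and SLA-driven VM sizing the thesis describes; the training data, SLA target, and candidate configurations here are synthetic placeholders, whereas the thesis trains on measurements from virtualized applications.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Features: [CPU cap (%), memory (GB), I/O bandwidth share (%)]
X = rng.uniform([10, 1, 10], [100, 16, 100], size=(200, 3))
# Synthetic "throughput" with diminishing returns and noise.
y = (np.log1p(X[:, 0]) * 2.0 + np.log1p(X[:, 1]) * 1.5
     + np.log1p(X[:, 2]) + rng.normal(0, 0.2, 200))

# Train an SVM-based performance model on the first 150 samples.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:150], y[:150])

pred = model.predict(X[150:])
print("mean abs error:", np.mean(np.abs(pred - y[150:])).round(3))

# VM sizing: pick the smallest configuration predicted to meet an SLA target.
candidates = np.array([[25, 2, 20], [50, 4, 40], [75, 8, 60]], dtype=float)
target = 12.0  # hypothetical SLA throughput target
ok = candidates[model.predict(candidates) >= target]
print("smallest qualifying config:", ok[0] if len(ok) else "none")
```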
40

Energy Agile Cluster Communication

Mustafa, Muhammad Zain 18 March 2015 (has links)
Computing researchers have long focused on improving energy efficiency (the amount of computation per joule) under the implicit assumption that all energy is created equal. Energy, however, is not created equal: its cost and carbon footprint fluctuate over time due to a variety of factors. These fluctuations are expected to intensify as renewable penetration increases. Thus, in my work I introduce energy-agility, a design concept for a platform's ability to rapidly and efficiently adapt to such power fluctuations. I then introduce a representative application to assess energy-agility for the type of long-running, parallel, data-intensive tasks that are both common in data centers and most amenable to delays from variations in available power. Multiple variants of the application are implemented to illustrate the fundamental tradeoffs in designing energy-agile parallel applications. I find that with inactive power state transition latencies of up to 15 seconds, a design that regularly “blinks” servers outperforms one that minimizes transitions by only changing power states when power varies. While the latter approach has much lower transition overhead, it requires additional I/O, since servers are not always concurrently active. Unfortunately, I find that most server-class platforms today are not energy-agile: they have transition latencies beyond one minute, forcing them to minimize transitions and incur the additional I/O.
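A toy comparison, under made-up numbers, of the two designs the abstract contrasts: "blinking" servers on a duty cycle that tracks available power versus changing power states only when power changes. It deliberately ignores the extra I/O cost that the thesis finds penalizes the transition-minimizing design, so it only illustrates the transition-overhead side of the trade-off.

```python
TRANSITION_S = 15          # inactive-state transition latency cited in the abstract
EPOCH_S = 60               # length of one power epoch (hypothetical)
power_trace = [0.5, 1.0, 0.25, 0.75]   # fraction of the cluster that can be powered

def blinking_work(trace, n_servers=10):
    """Every epoch, all servers blink: each is active for a fraction equal to the
    available power, paying one transition per epoch."""
    work = 0.0
    for p in trace:
        active_time = max(EPOCH_S * p - TRANSITION_S, 0)
        work += n_servers * active_time
    return work

def minimal_transition_work(trace, n_servers=10):
    """Keep floor(p * n) servers on; only servers whose state changes pay the
    transition cost. Ignores the extra I/O this design needs because servers
    are not always concurrently active."""
    work, prev_on = 0.0, 0
    for p in trace:
        on = int(p * n_servers)
        changed = abs(on - prev_on)
        work += on * EPOCH_S - changed * TRANSITION_S
        prev_on = on
    return work

print("blinking            :", blinking_work(power_trace), "server-seconds")
print("minimal transitions :", minimal_transition_work(power_trace), "server-seconds")
```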
