31

Application of Data-driven Techniques for Thermal Management in Data Centers

Jiang, Kai January 2021
This thesis mainly addresses the problems of thermal management in data centers (DCs) through data-driven techniques. For thermal management, a temperature prediction model of the facility is very important, yet thermal modeling based on first principles is quite difficult in DCs due to the complicated airflow and heat transfer. Therefore, we employ multiple data-driven techniques, including statistical methods and deep neural networks (DNNs), to represent the thermal dynamics. Based on such data-driven models, temperature estimation and control are then implemented to optimize the thermal management of DCs. The contributions of this study are summarized in the following four aspects: 1) A data-driven model constructed from multiple linear autoregressive exogenous (ARX) models is adopted to describe the thermal behavior of DCs. On the basis of this model, an adaptive Kalman filter observer is proposed to estimate the temperature distribution in the DC. 2) Building on the data-driven model of the first contribution, a data-driven fault-tolerant predictive controller that accounts for different actuator faults is developed to regulate the temperature in the DC. 3) To improve modeling accuracy, a deep input convex neural network (ICNN) is adopted for thermal modeling in DCs and is specifically structured to support subsequent control design. In addition, elastic weight consolidation (EWC) is employed to overcome catastrophic forgetting in continual learning. 4) A novel example-reweighting algorithm is utilized to enhance the robustness of the ICNN against noisy data and to avoid overfitting during training. Finally, all the proposed approaches are validated in real experiments or simulations based on experimental data. / Dissertation / Doctor of Philosophy (PhD) / This thesis mainly investigates applications of data-driven techniques for thermal management in data centers. Its key contributions are the implementation of thermal modeling, temperature estimation, and temperature control in data centers. First, we design a data-driven statistical model to describe the complicated thermal dynamics of a data center. Based on this model, an observer and a controller are then developed to optimize the thermal management of data centers. Moreover, to improve nonlinear modeling performance, we adopt deep input convex neural networks, which combine good representation capability with tractability for control. The thesis also proposes two novel strategies to counter catastrophic forgetting and the influence of noisy data, respectively, during training. Finally, all the proposed techniques are validated in real experiments or simulations based on experimental data.
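For readers unfamiliar with ARX models, the sketch below shows how such a temperature model can be fit by ordinary least squares. It is a minimal illustration only: the lag orders, the choice of inputs (CRAC supply temperature and server power), and the plain least-squares fit are assumptions of this sketch, not the configuration used in the thesis.

```python
# Minimal sketch of a linear ARX temperature model; illustrative only.
import numpy as np

def fit_arx(temp, inputs, na=2, nb=2):
    """Fit T[k] = sum_i a_i*T[k-i] + sum_j b_j . u[k-j] by least squares.

    temp:   (N,) measured temperature at one sensor
    inputs: (N, m) exogenous inputs, e.g. CRAC supply temp and server power
    """
    N = len(temp)
    rows, targets = [], []
    lag = max(na, nb)
    for k in range(lag, N):
        past_t = temp[k - na:k][::-1]             # T[k-1] ... T[k-na]
        past_u = inputs[k - nb:k][::-1].ravel()   # u[k-1] ... u[k-nb]
        rows.append(np.concatenate([past_t, past_u]))
        targets.append(temp[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta

def predict_next(theta, recent_t, recent_u):
    """One-step-ahead prediction from the most recent lagged samples
    (recent_t and recent_u given in chronological order)."""
    phi = np.concatenate([recent_t[::-1], recent_u[::-1].ravel()])
    return float(phi @ theta)
```

An adaptive Kalman filter, as in the first contribution, would then use an identified model of this kind as the state-transition structure when estimating unmeasured temperatures.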
32

Data centers : The influence of big tech on urban planning in Sweden

Maas, Julie January 2022
This thesis aimed to describe what (planning for) data centers reveals about the power relations between big tech companies and Sweden's municipalities and national government. Data centers owned by large IT companies serve global interests but depend on and affect local infrastructures, as demonstrated, for instance, by the large amounts of energy they require. A Microsoft data center in Staffanstorp, located in Skåne, served as a case study. Based on various types of documents, the study analyzed what this hyperscale data center uncovers about the influence of big tech on urban planning in Sweden, drawing on theoretical concepts such as cloud infrastructures, the hidden materiality of the cloud, and clouding. The thesis explored the motivations behind choosing Staffanstorp as the site of a hyperscale data center. Sweden is an attractive data center location for big tech companies, and the image corporations have of Sweden is an important contributing factor: it is not just the factual characteristics of a location that determine its attractiveness, but first and foremost how that location is perceived. The analysis therefore also highlights the promotional strategies that the government and the municipality of Staffanstorp have employed to attract data centers, in which Business Sweden appeared to have played a key role. Other significant factors contributing to big tech's interest in Sweden include cheap renewable energy, a 98% electricity tax reduction, and a business-friendly environment. The processes behind the planning of Microsoft's data center in Staffanstorp were also studied by tracing developments in its implementation. Comparing these developments with the plans and visions for Sweden and Staffanstorp shows that the promise of jobs at a data center location is paradoxical and that hyperscale data centers potentially endanger the energy supply. The research concludes that rather than corporations directly influencing Swedish planning, Sweden indirectly allows them to have a large influence.
33

Toward Next-generation Data Centers : Principles of Software-Defined “Hardware” Infrastructures and Resource Disaggregation

Roozbeh, Amir January 2019
The cloud is evolving due to additional demands introduced by new technological advancements and the wide movement toward digitalization. Therefore, next-generation data centers (DCs) and clouds are expected (and need) to become cheaper, more efficient, and capable of offering more predictable services. Aligned with this, we examine the concept of software-defined “hardware” infrastructures (SDHI) based on hardware resource disaggregation as one possible way of realizing next-generation DCs. We start with an overview of the functional architecture of a cloud based on SDHI. Following this, we discuss a series of use-cases and deployment scenarios enabled by SDHI and explore the role of each functional block of SDHI’s architecture, i.e., cloud infrastructure, cloud platforms, cloud execution environments, and applications. Next, we propose a framework to evaluate the impact of SDHI on the techno-economic efficiency of DCs, focusing specifically on application profiling, hardware dimensioning, and total cost of ownership (TCO). Our study shows that combining resource disaggregation and software-defined capabilities makes DCs less expensive and easier to expand; hence, they can rapidly follow exponential demand growth. Additionally, we elaborate on the technologies behind SDHI, its challenges, and its potential future directions. Finally, to identify a suitable memory management scheme for SDHI and show its advantages, we focus on the management of the Last Level Cache (LLC) in currently available Intel processors. We investigate how better management of the LLC can provide higher performance, more predictable response times, and improved isolation between threads. More specifically, we take advantage of the LLC’s non-uniform cache architecture (NUCA), in which the LLC is divided into “slices”: a core’s access to the slice closest to it is faster than its access to other slices. Based on this, we introduce a new memory management scheme, called slice-aware memory management, which carefully maps allocated memory to LLC slices based on their access latency, rather than the de facto scheme, which maps memory uniformly across slices. Many applications can benefit from our memory management scheme with relatively small changes. As an example, we show the potential benefits that Key-Value Store (KVS) applications gain by utilizing our scheme. Moreover, we discuss how this scheme could be used to provide explicit CPU slicing, which is one of the expectations of SDHI and hardware resource disaggregation. / QC 20190415
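The slice-aware idea can be conveyed with a toy simulation contrasting the de facto uniform mapping with a mapping that keeps a core's data in its nearest slice. The ring topology and cycle counts below are invented for illustration; on real Intel processors the address-to-slice mapping is determined by an undocumented hash, so a practical implementation must measure slice latencies empirically.

```python
# Toy comparison of uniform vs. slice-aware LLC-slice placement, assuming
# a ring where latency grows with core-to-slice distance. All numbers here
# are hypothetical, not measurements from real hardware.
import random

N_SLICES = 8
BASE_CYCLES, PER_HOP = 20, 2   # hypothetical access-cost model

def latency(core, slc):
    hops = min(abs(core - slc), N_SLICES - abs(core - slc))  # ring distance
    return BASE_CYCLES + PER_HOP * hops

def avg_latency(core, mapping):
    return sum(latency(core, s) for s in mapping) / len(mapping)

random.seed(0)
core = 3
uniform = [random.randrange(N_SLICES) for _ in range(10_000)]  # de facto mapping
aware = [core] * 10_000   # slice-aware: keep data in the nearest slice
print(f"uniform    : {avg_latency(core, uniform):.1f} cycles")
print(f"slice-aware: {avg_latency(core, aware):.1f} cycles")
```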
34

Development of Strategies in Finding the Optimal Cooling of Systems of Integrated Circuits

Minter, Dion Len 11 June 2004
The task of thermal management in electrical systems has never been simple and has only become more difficult in recent years as the power electronics industry pushes toward devices with higher power densities. At the Center for Power Electronics Systems (CPES), a new approach to power electronics design is being implemented with the Integrated Power Electronics Module (IPEM). It is believed that an IPEM-based design approach will significantly enhance the competitiveness of the U.S. electronics industry, revolutionize the power electronics industry, and overcome many of the technology limits in today's industry by driving down manufacturing cost and design turnaround time. But with increased component integration comes an increased risk of component failure due to overheating. This thesis addresses the issues associated with the thermal management of integrated power electronic devices. Two studies are presented, both focused on the thermal design of a DC-DC front-end power converter developed at CPES with an IPEM-based approach. The first study investigates how the system responds when the fan location and heat sink fin arrangement are varied, in order to optimize the effects of conduction and forced-convection heat transfer in cooling the system. The set-up of an experimental test is presented, and the results are compared to the thermal model. The second study presents an improved methodology for the thermal modeling of large-scale electrical systems and their many subsystems. A zoom-in/zoom-out approach is used to overcome the computational limitations associated with modeling large systems. The analysis was completed using I-DEAS©, a three-dimensional finite element analysis (FEA) program that allows the thermal designer to simulate the effects of conduction and convection heat transfer in a forced-air cooling environment. / Master of Science
35

Towards more scalability and flexibility for distributed storage systems

Ruty, Guillaume 15 February 2019
The exponentially growing demand for storage puts huge stress on traditional distributed storage systems. While storage devices' performance has caught up with that of network devices over the last decade, their capacity does not grow as fast as the volume of data requiring storage, especially with the rise of cloud big data applications. Furthermore, the performance balance between storage, network, and compute devices has shifted, and the assumptions that form the foundation of most distributed storage systems are no longer true. This dissertation explains how several aspects of such storage systems can be modified and rethought to make more efficient use of the resources at their disposal. It presents an original architecture that uses a distributed metadata layer to provide flexible and scalable object-level storage, then proposes a scheduling algorithm that lets a generic storage system handle concurrent client requests more fairly. Finally, it describes how to improve legacy filesystem-level caching for erasure-code-based distributed storage systems, before presenting a few other contributions made in the context of short research projects.
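As a generic illustration of handling concurrent client requests more fairly (the dissertation's actual algorithm is not reproduced here), a per-client round-robin scheduler can be sketched as follows; the class, its names, and the queueing policy are assumptions of this sketch.

```python
# Generic fair-queueing sketch: each client gets its own queue and the
# scheduler serves queues round-robin, so one heavy client cannot starve
# the others. Not the dissertation's algorithm.
from collections import OrderedDict, deque

class FairScheduler:
    def __init__(self):
        self.queues = OrderedDict()   # client_id -> deque of requests

    def submit(self, client_id, request):
        self.queues.setdefault(client_id, deque()).append(request)

    def next_request(self):
        """Pop one request from the first non-empty queue, rotating each
        visited queue to the back so service alternates between clients."""
        for client_id in list(self.queues):
            q = self.queues[client_id]
            self.queues.move_to_end(client_id)
            if q:
                return client_id, q.popleft()
        return None

sched = FairScheduler()
for i in range(3):
    sched.submit("heavy", f"h{i}")
sched.submit("light", "l0")
print([sched.next_request() for _ in range(4)])  # interleaves the two clients
```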
36

Advanced thermal management strategies for energy-efficient data centers

Somani, Ankit 02 January 2009
A simplified computational fluid dynamics/heat transfer (CFD/HT) model of a unit cell of a data center with a hot-aisle/cold-aisle (HACA) layout is simulated. Inefficiencies caused by the mixing of hot room air with the cold inlet air, which leads to a loss of cooling potential, are identified. For existing facilities, an algorithm called Ambient Intelligence-based Load Management (AILM) is developed, which enhances the net data center heat dissipation capacity for a given energy consumption on the facilities side. It provides a scheme for determining how much computing load should be allocated, and where, based on the differential loss in cooling potential per unit increase in server workload. The predicted gains are first validated numerically and then experimentally using server simulators. For new facilities, a novel data center layout is designed that uses a scalable-pod (S-Pod) based cabinet arrangement and air delivery. For the same floor space, the S-Pod and HACA facilities are simulated at different velocities and the results are compared. An approach to incorporating heterogeneity in data centers, covering both racks with lower heat dissipation and liquid-cooled racks, is established. Various performance metrics for data centers are analyzed and sorted on the basis of their applicability. Finally, a roadmap is laid out for transforming existing facilities to a state of higher cognizance of facilities/IT performance.
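The core of AILM, allocating load where the marginal loss of cooling potential is smallest, can be sketched as a greedy procedure. The cost curves below are made-up placeholders standing in for the CFD-derived sensitivities such a scheme would use in practice.

```python
# Greedy sketch of the AILM allocation idea; cost functions are invented.
import heapq

def ailm_allocate(total_units, marginal_cost):
    """marginal_cost[i](load) -> cooling-potential loss of adding one more
    unit of work to server i at its current load. Returns per-server loads."""
    n = len(marginal_cost)
    load = [0] * n
    heap = [(marginal_cost[i](0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(total_units):
        cost, i = heapq.heappop(heap)      # cheapest next increment
        load[i] += 1
        heapq.heappush(heap, (marginal_cost[i](load[i]), i))
    return load

# Hypothetical example: servers deeper in the hot aisle (larger k) lose
# cooling potential faster, and each added unit makes the next one costlier.
costs = [lambda L, k=k: 1.0 + 0.1 * k + 0.05 * L for k in range(4)]
print(ailm_allocate(20, costs))
```

Each unit of work goes to the server whose next increment is cheapest in cooling terms, which is exactly the differential criterion the abstract describes.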
37

Improving Resource Management in Virtualized Data Centers using Application Performance Models

Kundu, Sajib 01 April 2013
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that help substantially reduce data center management complexity. We specifically addressed two crucial data center operations: first, precisely estimating the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment; second, systematically and efficiently allocating physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need, and service providers can offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients pay exactly for the performance they actually experience, while administrators can maximize total revenue by combining application performance models with SLAs. This thesis made the following contributions. First, we identified the resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, artificial neural networks and support vector machines, for accurately modeling the performance of virtualized applications, and we suggested and evaluated modeling optimizations necessary to improve prediction accuracy with these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue of a data center.
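For illustration, a support-vector-regression performance model in the spirit of this work might look like the sketch below; the features, synthetic data, and hyperparameters are placeholders, not the thesis's experimental setup.

```python
# Illustrative-only sketch: predict an application's response time from a
# VM's resource allocation using an SVM regressor on synthetic data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Hypothetical features: CPU cap (%), memory (GB), I/O bandwidth share (%).
X = rng.uniform([10, 1, 5], [100, 16, 100], size=(200, 3))
# Synthetic "response time" that degrades when any resource is scarce.
y = 50 / X[:, 0] + 8 / X[:, 1] + 30 / X[:, 2] + rng.normal(0, 0.01, 200)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
model.fit(X, y)
# Predicted response time for a candidate VM size, used e.g. for VM sizing.
print(model.predict([[50, 4, 50]]))
```

A model like this is what lets a provider charge for delivered performance rather than configured size: given an SLA target, one can search the feature space for the smallest allocation whose predicted performance meets it.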
38

Energy Agile Cluster Communication

Mustafa, Muhammad Zain 18 March 2015
Computing researchers have long focused on improving energy-efficiency, the amount of computation per joule, under the implicit assumption that all energy is created equal. Energy, however, is not created equal: its cost and carbon footprint fluctuate over time due to a variety of factors. These fluctuations are expected to intensify as renewable penetration increases. Thus, in my work I introduce energy-agility, a design concept for a platform's ability to rapidly and efficiently adapt to such power fluctuations. I then introduce a representative application to assess energy-agility for the type of long-running, parallel, data-intensive tasks that are both common in data centers and most amenable to delays from variations in available power. Multiple variants of the application are implemented to illustrate the fundamental tradeoffs in designing energy-agile parallel applications. I find that with inactive power state transition latencies of up to 15 seconds, a design that regularly "blinks" servers outperforms one that minimizes transitions by only changing power states when power varies. While the latter approach has much lower transition overhead, it requires additional I/O, since servers are not always concurrently active. Unfortunately, I find that most server-class platforms today are not energy-agile: they have transition latencies beyond one minute, forcing them to minimize transitions and incur additional I/O.
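A toy sketch of the "blinking" design: each interval, the schedule rotates which servers are active so every server gets a fair share of the fluctuating power budget. The power trace and per-server wattage are invented numbers, and a real blinking system must also account for the transition latencies the abstract measures.

```python
# Toy blinking schedule: rotate the active server set to match available
# power each interval. All numbers are hypothetical.
SERVER_WATTS = 200

def blink_schedule(power_trace, n_servers):
    """Yield the set of active servers for each interval."""
    start = 0
    for watts in power_trace:
        k = min(n_servers, int(watts // SERVER_WATTS))  # how many servers fit
        active = {(start + j) % n_servers for j in range(k)}
        start = (start + k) % n_servers                 # rotate next interval
        yield active

trace = [800, 400, 1000, 200]   # available power per interval (W)
for t, active in enumerate(blink_schedule(trace, n_servers=5)):
    print(f"interval {t}: servers {sorted(active)}")
```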
39

Steady State Analysis of Load Balancing Algorithms in the Heavy Traffic Regime

January 2019
This dissertation studies load balancing algorithms for many-server systems (with N servers) and focuses on their steady-state performance in the heavy-traffic regime. The framework of Stein's method and (iterative) state-space collapse (SSC) is used to analyze three load balancing systems: 1) load balancing in the Sub-Halfin-Whitt regime with exponential service times; 2) load balancing in the Beyond-Halfin-Whitt regime with exponential service times; 3) load balancing in the Sub-Halfin-Whitt regime with Coxian-2 service times. For the Sub-Halfin-Whitt regime, sufficient conditions are established such that any load balancing algorithm satisfying them has both asymptotically zero waiting time and zero waiting probability. Furthermore, the number of servers with more than one job is o(1); in other words, the system collapses to a one-dimensional space. The result is proven using Stein's method and state-space collapse, which are powerful mathematical tools for the steady-state analysis of load balancing algorithms. The second system is in an even "heavier" traffic regime, and an iterative refinement procedure is proposed to obtain the steady-state metrics; again, asymptotically zero delay and waiting are established for a set of load balancing algorithms. Unlike the first system, this one collapses to a two-dimensional state space rather than a one-dimensional one. The third system is more challenging because of the "non-monotonicity" introduced by Coxian-2 service times, and an iterative state-space collapse is proposed to tackle this challenge. For each of these three systems, a set of load balancing algorithms is established under which the probability that an incoming job is routed to an idle server approaches one asymptotically at steady state. This set includes join-the-shortest-queue (JSQ), idle-one-first (I1F), join-the-idle-queue (JIQ), and power-of-d-choices (Pod) with a carefully chosen d. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
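Of the algorithms listed, power-of-d-choices is the simplest to sketch; the d = 2 below is illustrative rather than the carefully chosen d the dissertation analyzes.

```python
# Minimal power-of-d-choices (Pod) dispatcher sketch: sample d servers
# uniformly at random and route the job to the shortest of their queues.
import random

def pod_route(queues, d=2, rng=random):
    """Return the index of the chosen server among d sampled candidates."""
    candidates = rng.sample(range(len(queues)), d)
    return min(candidates, key=lambda i: queues[i])

random.seed(42)
queues = [0] * 100
for _ in range(500):                 # route 500 arrivals (no departures here)
    queues[pod_route(queues)] += 1
print("max queue length:", max(queues))
```

Sampling only d queues keeps the dispatcher's per-job overhead constant in N, which is why Pod-style policies are attractive at the many-server scales this dissertation considers.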
40

Optical frequency comb generation using InP-based quantum-dash/quantum-well single-section mode-locked lasers

Panapakkam Venkatesan, Vivek 05 December 2016
The increasing demand for high-capacity, low-cost, highly compact, and energy-efficient optical transceivers for data center interconnects requires new technological solutions. On the transmitter side, optical frequency combs that generate a large number of phase-coherent optical carriers are attractive sources for next-generation data center interconnects; combined with wavelength-division multiplexing and advanced modulation formats, they can demonstrate unprecedented transmission capacities. In the framework of the European project BIG PIPES (Broadband Integrated and Green Photonic Interconnects for High-Performance Computing and Enterprise Systems), this thesis investigates the generation of optical frequency combs using single-section mode-locked lasers based on InAs/InP quantum-dash and InGaAsP/InP quantum-well semiconductor nanostructures. These novel light sources, based on new active-layer structures and cavity designs, are extensively analyzed to meet the requirements of the project. A comprehensive investigation of the amplitude and phase noise of these comb sources is performed with advanced measurement techniques to evaluate their suitability for high-data-rate transmission systems. Record multi-terabit-per-second per-chip capacities and reasonably low energy-per-bit consumption are demonstrated, making these sources well suited for next-generation data center interconnects.
