  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD), with metadata collected from universities around the world.
21

Cooling storage for 5G EDGE data center

Johansson, Jennifer January 2020 (has links)
Data centers require a great deal of energy because, in addition to the building itself, they contain servers, IT equipment, power equipment, and cooling equipment. Compressor-based cooling, the solution used in many data centers around the world today, accounts for around 40% of total energy consumption. A non-compressor-based alternative, used in some data centers and still in a research phase, is free cooling, in which outside air is used to cool the data center. Free cooling comprises two main technologies: airside free cooling and waterside free cooling. The purpose of this master thesis is to analyze two types of coil, one corrugated and one smooth, supplied by Bensby Rostfria, to investigate whether free cooling with one of these coils is feasible for a 5G EDGE data center in Luleå. The investigation targets the warmest day of summer because, according to weather data, Luleå is a candidate location where this type of cooling system could be of use. The project was carried out at RISE ICE Datacenter, where two identical systems were built side by side with two corrugated hoses of different diameters and two smooth tubes of different diameters. The measured variables were the ambient temperature in the data hall, the water temperature in both water tanks, the temperatures into and out of the system, and the mass flow of the air passing through the system. First, fan curves were produced to make it easier to choose which fan input voltages were of interest for further analysis; three points were then taken where the fan curve increased most steeply. The tests were performed by placing the corrugated hoses and smooth tubes in each of the water tanks and filling the tanks with cold water.
The coils were then to warm the water from 4.75 °C to 9.75 °C, since the temperature in the data center was around 15 °C. This temperature rise was chosen because free cooling is considered to require a temperature difference of at least 5 °C. Each test was repeated three times for a more reliable result. All data was collected in Zabbix and analyzed further in Grafana; after each test the files were exported from Grafana to Excel for compilation, and then taken into Matlab for further analysis. The first step was to check whether the three tests at the same input voltage gave similar results for the tank water temperature and the outlet temperature. Trendlines were then fitted to investigate the temperature difference across the system, the difference between the inlet temperature and the tank water temperature, the mass flow, and the cooling power. These trendlines were compared with each other as 2D plots of cooling power against the temperature difference between the inlet and the water, after which the two coils could be compared to see which gave the largest cooling power and would be most efficient to install in a future 5G data center module. The conclusion of this master thesis is that the corrugated hose gives a higher cooling power at higher outside temperature differences, but on the warmest summer day the smooth tube clearly gave the largest cooling power and therefore the best result. Hand calculations also gave the required pipe length and water tank volume for each coil. It was further shown that, for the warmest summer day, a water tank temperature of 24 °C is best, compared with 20 °C and 18 °C.
The required coil length and water tank volume for a tank temperature of 24 °C differ between the two coil types: the corrugated hose needs a length of 1.8 km and a water tank of 9.4 m3, while the smooth tube needs a length of 1.7 km and a water tank of 12 m3. As can be seen throughout this project, this type of cooling equipment is not the most efficient for the warmest summer day, but could easily be used during other seasons.
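The cooling power discussed above follows from the measured mass flow and the temperature difference across the system via the standard sensible-heat relation Q = ṁ·cp·ΔT. A minimal sketch of that calculation (the numbers below are illustrative, not the thesis's measured values):

```python
def cooling_power(mass_flow_kg_s: float, t_in_c: float, t_out_c: float,
                  cp_j_per_kg_k: float = 1005.0) -> float:
    """Sensible cooling power Q = m_dot * cp * (T_in - T_out), in watts.

    cp defaults to the specific heat of air (~1005 J/(kg*K));
    use ~4186 J/(kg*K) for water.
    """
    return mass_flow_kg_s * cp_j_per_kg_k * (t_in_c - t_out_c)

# Illustrative example: air at 15 C cooled to 10 C at 0.5 kg/s
q = cooling_power(0.5, 15.0, 10.0)
print(q)  # 2512.5 W
```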
22

Datascapes: Envisioning a New Kind of Data Center

Pfeiffer, Jessica 15 June 2020 (has links)
No description available.
23

The Contemporary Uncanny: An Architecture for Digital Postmortem

Garrison, John 28 June 2021 (has links)
No description available.
24

Data Center Conversion: The Adaptive Reuse of a Remote Textile Mill in Augusta, Georgia

King, Bradley January 2016 (has links)
No description available.
25

Exposing the Data Center

Sergejev, Ivan 29 January 2014 (has links)
Given the rapid growth in the importance of the Internet, data centers - the buildings that store information on the web - are quickly becoming the most critical infrastructural objects in the world. However, so far they have received very little, if any, architectural attention. This thesis proclaims data centers to be the 'churches' of the digital society and proposes a new type of publicly accessible data center. The thesis starts with a brief overview of the history of data centers and the Internet in general, leading to a manifesto for making data centers into public facilities with an architecture of their own. The paper then proposes a roadmap for the possible future development of the building type, with suggestions for placing future data centers in urban environments, incorporating public programs into the building program, and optimizing the inner workings of a typical data center. The final part of the work concentrates on a design for an exemplary new data center, buildable with currently available technologies. This thesis aims to: 1) change the public perception of the internet as a non-physical thing, and of data centers as purely functional infrastructural objects without any deeper cultural significance, and 2) propose a new architectural language for the type. / Master of Architecture
26

Resource Management in Virtualized Data Center

Rabbani, Md January 2014 (has links)
As businesses are increasingly relying on the cloud to host their services, cloud providers are striving to offer guaranteed and highly-available resources. To achieve this goal, recent proposals have advocated offering both computing and networking resources in the form of Virtual Data Centers (VDCs). However, to offer VDCs, cloud providers have to overcome several technical challenges. In this thesis, we focus on two key challenges: (1) the VDC embedding problem: how to efficiently allocate resources to VDCs such that energy costs and bandwidth consumption are minimized, and (2) the availability-aware VDC embedding and backup provisioning problem, which aims at allocating resources to VDCs with hard guarantees on their availability. The first part of this thesis is primarily concerned with the first challenge. The goal of the VDC embedding problem is to allocate resources to VDCs while minimizing the bandwidth usage in the data center and maximizing the cloud provider's revenue. Existing proposals have focused only on the placement of VMs and ignored the mapping of other types of resources such as switches. Hence, we propose a new VDC embedding solution that explicitly considers the embedding of virtual switches in addition to virtual machines and communication links. Simulations show that our solution results in a high acceptance rate of VDC requests, less bandwidth consumption in the data center network, and increased revenue for the cloud provider. In the second part of this thesis, we study the availability-aware VDC embedding and backup provisioning problem. The goal is to provision virtual backup nodes and links in order to achieve the desired availability for each VDC. Existing solutions addressing this challenge have overlooked the heterogeneity of data center equipment in terms of failure rates and availability.
To address this limitation, we propose a High-availability Virtual Infrastructure (Hi-VI) management framework that jointly allocates resources for VDCs and their backups while minimizing total energy costs. Hi-VI uses a novel technique to compute the availability of a VDC that considers both (1) the heterogeneity of the data center networking and computing equipment, and (2) the number of redundant virtual nodes and links provisioned as backups. Simulations demonstrate the effectiveness of our framework compared to heterogeneity-oblivious solutions in terms of revenue and the number of physical servers used to embed VDCs.
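The availability computation at the heart of this kind of framework can be illustrated with a small sketch. Under the simplifying (hypothetical) assumption that a virtual node is down only if its primary server and all of its provisioned backup servers fail independently, the achieved availability follows directly from the per-server availabilities, which is exactly where equipment heterogeneity enters:

```python
from math import prod

def node_availability(server_avails):
    """Availability of a virtual node hosted on a primary server plus
    backups: the node is down only if every hosting server is down,
    assuming independent failures (a simplifying assumption)."""
    return 1.0 - prod(1.0 - a for a in server_avails)

# Heterogeneous servers: a backup on a less reliable machine still helps.
print(round(node_availability([0.99]), 4))        # 0.99 (no backup)
print(round(node_availability([0.99, 0.95]), 4))  # 0.9995
```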
27

TI verde – o armazenamento de dados e a eficiência energética no data center de um banco brasileiro / Green IT – the data storage and the energy efficiency in a brazilian bank data center

Silva, Newton Rocha da 04 March 2015 (has links)
Green IT focuses on the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems efficiently and effectively, with minimal impact on the environment. Its major goal is to improve computing performance while reducing energy consumption and the carbon footprint. Green information technology is thus the practice of environmentally sustainable computing, and aims to minimize the negative impact of IT operations on the environment. At the same time, the exponential growth of digital data is a reality for most companies, making them increasingly dependent on IT to provide sufficient, real-time information to support the business. This growth trend causes changes in data center infrastructure, putting the focus on facility capacity due to the energy, space, and cooling demands of IT activities. In this scenario, this research analyzes whether the main data storage solutions, namely consolidation, virtualization, deduplication, and compression, together with solid-state (SSD or Flash) technologies, can contribute to the efficient use of energy in the organization's main data center. The topic was treated using a qualitative, exploratory research method based on a case study, with data collected through bibliographic and documentary research and interviews with key IT solution suppliers. The case study took place in the main data center of a large Brazilian bank. As a result, we found that energy efficiency is influenced by the technological solutions presented.
Environmental concern was evident and revealed a path shared between the partners and the organization studied. Keeping PUE (Power Usage Effectiveness), the energy-efficiency metric, at a level of excellence reflects the combined implementation of solutions, technologies, and best practices. We conclude that, in addition to reducing energy consumption, data storage solutions and technologies promote efficiency improvements in the data center, enabling more power density for the installation of new equipment. Therefore, in the face of growing demand for digital data, it is crucial that solutions, technologies, and strategies be chosen appropriately, not only for the criticality of the information but also for the efficient use of resources, contributing to a better understanding of the importance of IT and its consequences for the environment.
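The PUE metric mentioned above is simply total facility energy divided by IT equipment energy, with 1.0 as the ideal. A minimal illustrative calculation with made-up numbers:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    A PUE of 1.0 means every watt goes to IT; real facilities are higher,
    since cooling and power distribution add overhead."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 1500 MWh total facility use, 1000 MWh of it for IT
print(pue(1500.0, 1000.0))  # 1.5
```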
28

Performance Evaluation of Virtualization in Cloud Data Center

Zhuang, Hao January 2012 (has links)
Amazon Elastic Compute Cloud (EC2) has been adopted by a large number of small and medium enterprises (SMEs), e.g. foursquare, Monster World, and Netflix, to provide various kinds of services. Existing work in the literature has investigated the variation and unpredictability of cloud services and reported interesting observations about cloud offerings, but it has failed to reveal the underlying causes of the varied behavior of these services. In this thesis, we looked into the underlying scheduling mechanisms and hardware configurations of Amazon EC2 and investigated their impact on the performance of the virtual machine instances running on top of them. Specifically, several instances from the standard and high-CPU instance families are covered to shed light on the hardware upgrades and replacements in Amazon EC2; the large instance type from the standard family is then selected for a focused analysis. To better understand the various behaviors of the instances, a local cluster environment was set up, consisting of two Intel Xeon servers using different scheduling algorithms. Through a series of benchmark measurements, we made the following findings: (1) Amazon uses highly diversified hardware to provision different instances, resulting in significant performance variation, which can reach up to 30%. (2) Two different scheduling mechanisms were observed: one similar to the Simple Earliest Deadline First (SEDF) scheduler, while the other resembles the Credit scheduler in the Xen hypervisor. These two scheduling mechanisms also give rise to variations in performance. (3) By applying a simple "trial-and-failure" instance selection strategy, the cost saving is surprisingly significant: given a certain distribution of fast and slow instances, the achievable saving can reach 30%, which is attractive to SMEs using the Amazon EC2 platform.
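The "trial-and-failure" strategy in finding (3) can be sketched as: launch an instance, benchmark it, keep it only if it turns out to be fast, and otherwise terminate it and retry. A minimal simulation under assumed (hypothetical) parameters, where each launch independently yields a fast instance with probability `p_fast`:

```python
import random

def expected_trials(p_fast: float) -> float:
    """Expected number of launches before landing on a fast instance
    (geometric distribution)."""
    return 1.0 / p_fast

def simulate(p_fast: float, runs: int = 10_000, seed: int = 0) -> float:
    """Average number of launch-benchmark-discard rounds over many runs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        trials = 1
        while rng.random() >= p_fast:  # slow instance: terminate and retry
            trials += 1
        total += trials
    return total / runs

# With 40% of instances fast, 2.5 launches are needed on average.
print(expected_trials(0.4))  # 2.5
print(round(simulate(0.4), 1))
```

Whether retrying pays off then depends on the benchmarking cost per trial versus the long-run price/performance gap between fast and slow instances.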
29

A Novel Architecture, Topology, and Flow Control for Data Center Networks

Yuan, Tingqiu 23 February 2022 (has links)
With the advent of new applications such as Cloud Computing, Blockchain, Big Data, and Machine Learning, modern data center network (DCN) architecture has been evolving to meet numerous challenging requirements such as scalability, agility, energy efficiency, and high performance. Some of these new applications are expediting the convergence of high-performance computing and data centers. This convergence has prompted research into a single, converged data center architecture that unites computing, storage, and the interconnect network in a synthetic system designed to reduce the total cost of ownership and yield greater efficiency and productivity. The interconnect network is a critical aspect of data centers, as it sets performance bounds and determines most of the total cost of ownership. The design of an interconnect network consists of three factors: topology, routing, and congestion control, and this thesis aims to satisfy the above challenging requirements across all three. To address the challenges noted above, the communication patterns of emerging applications are investigated, and it is shown that dynamic and diverse traffic patterns (denoted *-cast), especially multi-cast, in-cast, broadcast (one-to-all), and all-to-all-cast, have a significant impact on the performance of emerging applications. Inspired by hypermesh topologies, this thesis presents a novel cost-efficient topology for large-scale Data Center Networks (DCNs), called HyperOXN. HyperOXN takes advantage of high-radix switch components leveraging state-of-the-art colorless wavelength division multiplexing technologies, effectively supports *-cast traffic, and at the same time meets the demands for high throughput, low latency, and lossless delivery. HyperOXN provides a non-blocking interconnect network with a relatively low overhead cost.
Through theoretical analysis, this thesis studies the topological properties of the proposed HyperOXN and compares it with other types of interconnect networks such as Fat-Tree, Flattened Butterfly, and Hypercube-like topologies. Passive optical cross-connection networks are used in the HyperOXN topology, enabling economical, power-efficient, and reliable communication within DCNs. It is shown that HyperOXN outperforms a comparable Fat-Tree topology in cost, throughput, power consumption, and cabling under a variety of workload conditions. A HyperOXN network provides multiple paths between a source and its destination to obtain high bandwidth and achieve fault tolerance. Inspired by the power-of-two-choices technique, a novel stochastic global congestion-aware load balancing algorithm is designed, which can be used to achieve near-optimal load balance amongst multiple shared paths. It also guarantees low latency for short-lived mouse flows and high throughput for long-lasting elephant flows. Furthermore, the stability of the flow-scheduling algorithm is formally proven. Experimental results show that the algorithm successfully eliminated the interactions of the elephant and mouse data center flows and ensured high network bandwidth utilization.
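The power-of-two-choices technique that inspired the flow-scheduling algorithm can be illustrated with a small sketch (not the thesis's actual algorithm): for each flow, sample two candidate paths at random and send the flow down the currently less-loaded one, which balances load far better than a single random choice:

```python
import random

def two_choices_assign(num_paths: int, num_flows: int, seed: int = 0):
    """Assign flows to paths: for each flow, pick two random candidate
    paths and place the flow on whichever currently carries less load."""
    rng = random.Random(seed)
    load = [0] * num_paths
    for _ in range(num_flows):
        a = rng.randrange(num_paths)
        b = rng.randrange(num_paths)
        best = a if load[a] <= load[b] else b
        load[best] += 1
    return load

load = two_choices_assign(num_paths=16, num_flows=1600)
# Imbalance stays tiny compared to assigning each flow to one random path.
print(max(load) - min(load))
```

The classic result is that sampling two candidates instead of one reduces the maximum overload from roughly logarithmic to doubly-logarithmic in the number of paths, which is why the idea scales well to large multipath fabrics.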
30

AI-assisted analysis of ICT-centre cooling : Using K-means clustering to identify cooling patterns in water-cooled ICT rooms

Wallin, Oliver, Jigsved, Johan January 2023 (has links)
Information and communications technology (ICT) is an important part of today's society, and around 60% of the world's population is connected to the internet. Processing and storing ICT data accounts for approximately 1% of global electricity demand. Locations that store ICT data produce a lot of heat that must be removed, and cooling systems account for up to 40% of the total energy used at ICT-centre locations. Investigating the efficiency of cooling in ICT-centres is important for making the whole ICT-centre more energy efficient and possibly saving operational costs. Unwanted operational behaviour in the cooling system can be analysed using unsupervised machine learning and clustering of data. The purpose of this thesis is to characterise cooling patterns, using K-means clustering, in two water-cooled ICT rooms located at Ericsson's facilities in Linköping, Sweden. This is done by answering the following research questions: RQ1. What is the cooling power per m2 delivered by the cooling equipment in the two different ICT rooms at Ericsson? RQ2. What operational patterns can be found using a suitable clustering algorithm to process and compare data for the LCPs in the two ICT rooms? RQ3. Based on the information from RQ1 and the patterns from RQ2, what undesired operational behaviours can be identified in the cooling system? K-means clustering is applied to time series data collected during 2022, including water and air temperatures, electric power and cooling power, and water flow in the system. The two rooms use Liquid Cooling Packages (LCPs), also known as in-row cooling units, and room 1 (R1) also includes computer room air handlers (CRAHs). K-means groups each observation with others that share characteristics, so that the clusters represent different operating scenarios. The elbow method was used to determine the number of clusters, yielding four clusters for R1 and three for room 2 (R2).
The results show that operational patterns differ between R1 and R2. The cooling power produced per m2 is 1.36 kW/m2 for R1 and 2.14 kW/m2 for R2; per m3 it is 0.39 kW/m3 for R1 and 0.61 kW/m3 for R2. Undesirable operational behaviours were identified through clustering and visual representation of the data. Some LCPs operate very differently even when sharing the same hot aisle; disturbances such as air flow and setpoints create these differences, with the result that some LCPs operate at high cooling power while others operate at low cooling power. The cluster with the highest cooling power is cluster 4 for R1 and cluster 3 for R2, and cluster 2 has the lowest cooling power in both rooms. The LCPs operating in cluster 2 had a water flow of mostly 0 l/min and were therefore not contributing to the cooling of the rooms. Lastly, the supplied electrical power and produced cooling power match in R1 but not in R2, implying either that heat leaves that room by means other than the cooling system or that there are faulty measurements; this could be investigated further. Water in R1 and R2 was also found, on occasion, to exit the room at a temperature below the ambient room temperature. It is concluded that the method works for identifying unwanted operational behaviours, knowledge that can be used to improve ICT operations. To summarize, undesired operational behaviours can be identified using the unsupervised machine learning technique K-means clustering.
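The clustering step described above can be sketched with a tiny one-dimensional K-means on hypothetical cooling-power readings (in practice a library implementation such as scikit-learn's KMeans would be used; this minimal pure-Python version just shows the assign/update loop that separates idle units from active ones):

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D K-means: alternate between assigning each point to its
    nearest centroid and recomputing centroids as cluster means."""
    # Spread the initial centroids across the sorted data.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical LCP cooling-power readings (kW): idle units near 0, active near 8
readings = [0.0, 0.2, 0.1, 7.9, 8.3, 8.1, 0.3, 7.7]
centroids, clusters = kmeans_1d(readings, k=2)
print(sorted(round(c, 2) for c in centroids))  # [0.15, 8.0]
```

The cluster whose centroid sits near zero corresponds to the "not contributing" behaviour identified for cluster 2 in the thesis: units whose measured output is effectively idle despite being installed in the room.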
