31. Data Corpus - An analysis and proposal for a rural data center. Winroth, Torbjörn, January 2022.
The idea of this project is to investigate the nature and the innate aesthetic expressions of a data center in order to draw a proposal for one in the rural town of Horndal. A data center is infrastructure, meaning it is essential to the society we live in. As a method, I have tried to expose and aestheticise the technical aspects of this type of building as a means of showing how it works. The project also addresses the fact that data is often conceptualised as something fleeting and soft while it in fact has severe material consequences, and that data centers are often placed in rural areas, offering some explanations and thoughts on why that is. There is a healthy debate about whether data centers are too heavy a load on the power grid, whether they employ as many people as they advertise, and how much Sweden's economy gains from their presence. This project is not meant to express any explicit political opinions; it is rather a morphological exploration of the program. I hope, however, that it provokes some thoughts on these questions in the reader.

32. Datascapes: Envisioning a New Kind of Data Center. Pfeiffer, Jessica, 15 June 2020.
No description available.

33. The Contemporary Uncanny: An Architecture for Digital Postmortem. Garrison, John, 28 June 2021.
No description available.

34. Data Center Conversion: The Adaptive Reuse of a Remote Textile Mill in Augusta, Georgia. King, Bradley, January 2016.
No description available.

35. LEMoNet: Low Energy Wireless Sensor Network Design for Data Center Monitoring. Li, Chenhe Jr, 08 1900.
Today’s data centers (DCs) consume up to 3% of the energy produced worldwide, much of which is wasted due to over-cooling and underutilization of IT equipment. This wastage in part stems from the lack of real-time visibility of fine-grained thermal distribution in DCs. Wireless sensing is an ideal candidate for DC monitoring as it is cost-effective, facility-friendly, and can be easily re-purposed. In this thesis, we develop LEMoNet, a novel low-energy wireless sensor network design for monitoring co-location DCs. It employs a two-tier network architecture and a multi-mode data exchange protocol to balance the trade-offs between low power consumption and high data reliability. We have evaluated the performance of LEMoNet by deploying custom-designed sensor and gateway nodes in a SHARCNET DC at A.N. Bourns Science Building. We show experimentally that LEMoNet achieves an average data yield of over 98%. Under normal operations with one temperature and one humidity reading every thirty seconds, the battery lifetime of LEMoNet sensor nodes is projected to be 14.9 years on a single lithium coin battery.
Thesis: Master of Science (MSc).
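
As a rough illustration of how such a multi-year lifetime projection can be reasoned about, the sketch below estimates battery life from an assumed duty cycle. The current draws, awake time, and coin-cell capacity are invented placeholder figures, not values from the thesis; with these assumptions the result simply lands in the same order of magnitude as the 14.9 years reported above.

```python
# Illustrative back-of-the-envelope battery-lifetime estimate for a duty-cycled
# sensor node reporting one temperature and one humidity reading every 30 s.
# All electrical figures below are assumptions for a generic low-power node and
# a large lithium coin cell; they are not taken from the thesis.

SLEEP_CURRENT_A = 3e-6          # assumed deep-sleep current (3 uA)
ACTIVE_CURRENT_A = 8e-3         # assumed current while sampling + transmitting (8 mA)
ACTIVE_TIME_S = 0.015           # assumed awake time per report (15 ms)
REPORT_PERIOD_S = 30.0          # one report every 30 seconds (from the abstract)
BATTERY_CAPACITY_MAH = 1000.0   # assumed coin-cell capacity (e.g. a CR2477-class cell)

def projected_lifetime_years() -> float:
    """Average current is the duty-cycle-weighted mix of active and sleep draw."""
    duty = ACTIVE_TIME_S / REPORT_PERIOD_S
    avg_current_a = duty * ACTIVE_CURRENT_A + (1 - duty) * SLEEP_CURRENT_A
    lifetime_hours = (BATTERY_CAPACITY_MAH / 1000.0) / avg_current_a
    return lifetime_hours / (24 * 365)

if __name__ == "__main__":
    print(f"Projected lifetime: {projected_lifetime_years():.1f} years")
```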

36. Rack-based Data Center Temperature Regulation Using Data-driven Model Predictive Control. Shi, Shizhu, January 2019.
With the rapid development of information technology, data centers are used in nearly every aspect of society, from industry and commerce to daily life. This work develops a model predictive control (MPC) scheme based on a data-driven model to regulate temperature in a class of single-rack data centers (DCs). An auto-regressive exogenous (ARX) model is identified for the DC system using partial least squares (PLS) to predict the behavior of the multi-input-single-output (MISO) thermal system. An MPC controller is then designed, based on the identified ARX model, to control the temperature inside the IT rack. Moreover, fuzzy c-means (FCM) clustering is applied to the measured data set, and PLS is used to identify multiple locally linear ARX models on the clustered data; these are combined with appropriate weights to capture the behavior of the highly nonlinear thermal system inside the IT rack. The effectiveness of the proposed method is illustrated through experiments on a single-rack DC and compared with proportional-integral (PI) control.
Thesis: Master of Applied Science (MASc).
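
The sketch below illustrates the general ARX-plus-PLS identification step described in the abstract: past outputs and inputs are stacked into a regressor matrix and a PLS regression maps them to the next output. The model orders, input names, synthetic data, and the use of scikit-learn's PLSRegression are assumptions for illustration, not details taken from the thesis.

```python
# Minimal sketch of identifying an ARX model for a MISO thermal system with PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def build_arx_regressors(y, u, na=2, nb=2):
    """Stack past outputs y[k-1..k-na] and past inputs u[k-1..k-nb] per row."""
    rows, targets = [], []
    for k in range(max(na, nb), len(y)):
        past_y = [y[k - i] for i in range(1, na + 1)]
        past_u = [u[k - i, j] for i in range(1, nb + 1) for j in range(u.shape[1])]
        rows.append(past_y + past_u)
        targets.append(y[k])
    return np.array(rows), np.array(targets)

# Synthetic example: rack inlet temperature driven by two assumed inputs
# (e.g. fan speed and IT load); the "true" dynamics here are a toy model.
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, size=(500, 2))
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.7 * y[k-1] - 0.1 * y[k-2] + 0.5 * u[k-1, 0] - 0.3 * u[k-1, 1] \
           + 0.01 * rng.standard_normal()

X, t = build_arx_regressors(y, u, na=2, nb=2)
model = PLSRegression(n_components=4).fit(X, t)   # PLS maps regressors to next output
one_step_pred = model.predict(X).ravel()
print("RMS one-step prediction error:", np.sqrt(np.mean((one_step_pred - t) ** 2)))
```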

37. Exposing the Data Center. Sergejev, Ivan, 29 January 2014.
Given the rapid growth in the importance of the Internet, data centers - the buildings that store information on the web - are quickly becoming the most critical infrastructural objects in the world. However, so far they have received very little, if any, architectural attention. This thesis proclaims data centers to be the 'churches' of the digital society and proposes a new type of publicly accessible data center.
The thesis starts with a brief overview of the history of data centers and the Internet in general, leading to a manifesto for making data centers into public facilities with an architecture of their own. It then proposes a roadmap for the possible future development of the building type, with suggestions for placing future data centers in urban environments, incorporating public programs into the building program, and optimizing the inner workings of a typical data center. The final part of the work concentrates on a design for an exemplary new data center, buildable with currently available technologies.
This thesis aims to:
1) change the public perception of the Internet as a non-physical thing, and of data centers as purely functional infrastructural objects without any deeper cultural significance, and
2) propose a new architectural language for the type.
Master of Architecture.

38. TI verde – o armazenamento de dados e a eficiência energética no data center de um banco brasileiro / Green IT – the data storage and the energy efficiency in a Brazilian bank data center. Silva, Newton Rocha da, 04 March 2015.
Green IT focuses on the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems efficiently and effectively, with minimal impact on the environment. Its main goals are to improve computing performance while reducing energy consumption and the carbon footprint. Green information technology is thus the practice of environmentally sustainable computing and aims to minimize the negative impact of IT operations on the environment. At the same time, the exponential growth of digital data is a reality for most companies, making them increasingly dependent on IT to provide sufficient, real-time information to support the business. This growth drives changes in data center infrastructure, putting the focus on facility capacity because of the energy, space, and cooling demanded by IT activities. In this scenario, this research analyzes whether the main data storage solutions, such as consolidation, virtualization, deduplication, and compression, together with solid-state technologies (SSD or flash systems), can contribute to the efficient use of energy in the organization's main data center. The research was qualitative and exploratory, based on a case study, with data collected through bibliographic and documentary research and interviews with key IT solution suppliers. The case study was the main data center of a large Brazilian bank. As a result, we found that energy efficiency responds to the technological solutions presented. Environmental concern was evident and showed a path shared between the partners and the organization studied. Maintaining PUE (Power Usage Effectiveness), the energy efficiency metric, at a level of excellence reflects the combined implementation of solutions, technologies, and best practices. We conclude that, in addition to reducing energy consumption, data storage solutions and technologies promote efficiency improvements in the data center, enabling more power density for the installation of new equipment. Therefore, in the face of growing demand for digital data, it is crucial that the choice of solutions, technologies, and strategies be appropriate not only to the criticality of the information but also to the efficient use of resources, contributing to a better understanding of the importance of IT and its consequences for the environment.
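
For reference, the PUE metric cited above is simply the ratio of total facility energy to the energy delivered to IT equipment; the snippet below shows the calculation with invented sample figures.

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# The sample figures are invented for illustration only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """A PUE of 1.0 is the ideal; lower values mean less non-IT overhead."""
    return total_facility_kwh / it_equipment_kwh

# Example: 10 GWh consumed by the whole facility, 6.5 GWh of it by IT equipment.
print(f"PUE = {pue(10_000_000, 6_500_000):.2f}")   # -> PUE = 1.54
```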

39. Performance Evaluation of Virtualization in Cloud Data Center. Zhuang, Hao, January 2012.
Amazon Elastic Compute Cloud (EC2) has been adopted by a large number of small and medium enterprises (SMEs), e.g. foursquare, Monster World, and Netflix, to provide various kinds of services. Existing work in the literature has investigated the variation and unpredictability of cloud services and reported interesting observations about cloud offerings, but it has not revealed the underlying causes of the varied behavior of these services. In this thesis, we look into the underlying scheduling mechanisms and hardware configurations of Amazon EC2 and investigate their impact on the performance of the virtual machine instances running on top of them. Specifically, several instance types from the standard and high-CPU instance families are covered to shed light on the hardware upgrades and replacements in Amazon EC2. The large instance type from the standard family is then selected for a focused analysis. To better understand the various behaviors of the instances, a local cluster environment consisting of two Intel Xeon servers is set up and run with different scheduling algorithms. Through a series of benchmark measurements, we observed the following: (1) Amazon uses highly diversified hardware to provision different instances, which results in significant performance variation that can reach up to 30%. (2) Two different scheduling mechanisms were observed; one resembles the Simple Earliest Deadline First (SEDF) scheduler, while the other resembles the Credit scheduler in the Xen hypervisor. These two scheduling mechanisms also give rise to variations in performance. (3) By applying a simple "trial-and-failure" instance selection strategy, the cost saving is surprisingly significant: given a certain distribution of fast and slow instances, the achievable cost saving can reach 30%, which is attractive to SMEs using the Amazon EC2 platform.
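
The "trial-and-failure" selection strategy in point (3) can be sketched as a small simulation: launch an instance, benchmark it, keep it if it lands on fast hardware, otherwise terminate and retry. The probabilities, slowdown factor, and billing model below are assumptions chosen only to illustrate the mechanism; the achievable saving depends entirely on the actual distribution of fast and slow instances and the performance gap between them.

```python
# Toy Monte Carlo comparison of running a fixed workload on the first instance
# obtained versus benchmarking and relaunching until a fast instance is found.
import random

P_FAST = 0.5          # assumed probability of getting a fast instance
SLOWDOWN = 1.30       # assumed: a slow instance takes 30% longer for the same job
JOB_HOURS_FAST = 100  # billed hours the workload needs on a fast instance
TRIAL_COST_HOURS = 1  # each discarded trial still costs one billed hour

def cost_without_selection() -> float:
    """Run on whatever instance we get first."""
    return JOB_HOURS_FAST if random.random() < P_FAST else JOB_HOURS_FAST * SLOWDOWN

def cost_with_selection(max_trials: int = 5) -> float:
    """Benchmark and relaunch until a fast instance is found (or give up)."""
    wasted = 0.0
    for _ in range(max_trials):
        if random.random() < P_FAST:
            return wasted + JOB_HOURS_FAST
        wasted += TRIAL_COST_HOURS
    return wasted + JOB_HOURS_FAST * SLOWDOWN   # settle for a slow instance

random.seed(1)
n = 10_000
baseline = sum(cost_without_selection() for _ in range(n)) / n
selected = sum(cost_with_selection() for _ in range(n)) / n
print(f"average billed hours: {baseline:.1f} vs {selected:.1f} "
      f"({(1 - selected / baseline) * 100:.1f}% saving)")
```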

40. A Novel Architecture, Topology, and Flow Control for Data Center Networks. Yuan, Tingqiu, 23 February 2022.
With the advent of new applications such as cloud computing, blockchain, big data, and machine learning, modern data center network (DCN) architecture has been evolving to meet numerous challenging requirements, including scalability, agility, energy efficiency, and high performance. Some of these new applications are expediting the convergence of high-performance computing and data centers. This convergence has prompted research into a single, converged data center architecture that unites computing, storage, and the interconnect network in one system designed to reduce the total cost of ownership and deliver greater efficiency and productivity. The interconnect network is a critical aspect of data centers, as it sets performance bounds and determines much of the total cost of ownership. Its design rests on three factors: topology, routing, and congestion control, and this thesis addresses all three to satisfy the above requirements.
To address the challenges noted above, the communication patterns of emerging applications are investigated, and it is shown that their dynamic and diverse traffic patterns (denoted *-cast), especially multicast, incast, broadcast (one-to-all), and all-to-all-cast, have a significant impact on application performance. Inspired by hypermesh topologies, this thesis presents a novel cost-efficient topology for large-scale data center networks (DCNs) called HyperOXN. HyperOXN takes advantage of high-radix switch components leveraging state-of-the-art colorless wavelength division multiplexing technologies, effectively supports *-cast traffic, and at the same time meets the demands for high throughput, low latency, and lossless delivery. HyperOXN provides a non-blocking interconnect network at a relatively low overhead cost. Through theoretical analysis, this thesis studies the topological properties of HyperOXN and compares it with other types of interconnect networks, such as Fat-Tree, Flattened Butterfly, and Hypercube-like topologies. Passive optical cross-connection networks are used in the HyperOXN topology, enabling economical, power-efficient, and reliable communication within DCNs. It is shown that HyperOXN outperforms a comparable Fat-Tree topology in cost, throughput, power consumption, and cabling under a variety of workload conditions.
A HyperOXN network provides multiple paths between a source and its destination to obtain high bandwidth and achieve fault tolerance. Inspired by the power-of-two-choices technique, a novel stochastic, global, congestion-aware load-balancing algorithm is designed to achieve near-optimal load balance amongst the multiple shared paths. It also guarantees low latency for short-lived mouse flows and high throughput for long-lasting elephant flows, and the stability of the flow-scheduling algorithm is formally proven. Experimental results show that the algorithm eliminates harmful interactions between elephant and mouse DC flows and ensures high network bandwidth utilization.
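
The power-of-two-choices idea that inspires the load-balancing algorithm can be sketched as follows: for each new flow, sample two candidate paths at random and place the flow on the less loaded one. This is a generic illustration of the named technique under assumed flow sizes and a simple byte-counting notion of congestion, not the thesis's full stochastic global congestion-aware algorithm.

```python
# Generic power-of-two-choices path selection over a multipath fabric.
import random
from collections import defaultdict

class TwoChoicesBalancer:
    def __init__(self, paths):
        self.paths = list(paths)
        self.load = defaultdict(float)          # path -> outstanding bytes

    def pick_path(self, flow_bytes: float):
        a, b = random.sample(self.paths, 2)     # two random candidate paths
        chosen = a if self.load[a] <= self.load[b] else b
        self.load[chosen] += flow_bytes
        return chosen

    def complete(self, path, flow_bytes: float):
        self.load[path] -= flow_bytes           # release capacity when a flow ends

balancer = TwoChoicesBalancer(paths=[f"path{i}" for i in range(8)])
random.seed(0)
for _ in range(1000):
    # assumed mix of short "mouse" flows (10 KB) and long "elephant" flows (10 MB)
    size = random.choice([10e3] * 9 + [10e6])
    balancer.pick_path(size)
print({p: round(v / 1e6, 1) for p, v in sorted(balancer.load.items())})  # load in MB
```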