41 |
AI-assisted analysis of ICT-centre cooling : Using K-means clustering to identify cooling patterns in water-cooled ICT rooms
Wallin, Oliver; Jigsved, Johan. January 2023 (has links)
Information and communications technology (ICT) is an important part of today's society, and around 60% of the world's population is connected to the internet. Processing and storing ICT data corresponds to approximately 1% of the global electricity demand. Locations that store ICT data produce a lot of heat that needs to be removed, and the cooling systems account for up to 40% of the total energy used in ICT-centre locations. Investigating the efficiency of the cooling in ICT-centres is important for making the whole ICT-centre more energy efficient, and possibly for reducing operational costs. Unwanted operational behaviour in the cooling system can be analysed using unsupervised machine learning and clustering of data. The purpose of this thesis is to characterise cooling patterns, using K-means clustering, in two water-cooled ICT rooms. The rooms are located at Ericsson's facilities in Linköping, Sweden. This is accomplished by answering the following research questions: RQ1. What is the cooling power per m2 delivered by the cooling equipment in the two different ICT rooms at Ericsson? RQ2. What operational patterns can be found using a suitable clustering algorithm to process and compare LCP data for the two ICT rooms? RQ3. Based on the information from RQ1 and the patterns from RQ2, what undesired operational behaviours can be identified in the cooling system? K-means clustering is applied to time-series data collected during 2022, which includes water and air temperatures, electric power and cooling power, and water flow in the system. The two rooms use Liquid Cooling Packages (LCPs), also known as in-row cooling units, and room 1 (R1) also includes computer room air handlers (CRAHs). K-means assigns each observation to a group that shares characteristics and represents a distinct operating scenario. The elbow method is used to determine the number of clusters: four for R1 and three for room 2 (R2). Results show that the operational patterns differ between R1 and R2. The cooling power produced per m2 is 1.36 kW/m2 for R1 and 2.14 kW/m2 for R2; per m3 it is 0.39 kW/m3 for R1 and 0.61 kW/m3 for R2. Undesirable operational behaviours were identified through clustering and visual representation of the data. Some LCPs operate very differently even when sharing the same hot aisle; disturbances such as air flow and setpoints create these differences, with the result that some LCPs operate at high cooling power while others operate at low cooling power. The clusters with the highest cooling power are cluster 4 for R1 and cluster 3 for R2, while cluster 2 has the lowest cooling power in both rooms. LCPs operating in cluster 2 had a water flow of mostly 0 l/min and were therefore not contributing to the cooling of the rooms. Lastly, the supplied electrical power and produced cooling power match in R1 but not in R2, implying either that heat leaves the room by means other than the cooling system or that there are faulty measurements; this could be investigated further. Water in R1 and R2 is occasionally found to exit the rooms at a temperature below the ambient room temperature. It is also concluded that the method is effective for identifying unwanted operational behaviours, knowledge that can be used to improve ICT operations. To summarise, undesired operational behaviours can be identified using the unsupervised machine learning technique K-means clustering.
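As a hedged illustration of the method described above, the sketch below applies K-means and the elbow method to synthetic LCP-style features. The feature set (water flow, cooling power, supply air temperature) and the three operating regimes are stand-ins; the thesis's actual 2022 Ericsson data is not public.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the thesis's time series: three operating regimes,
# each row = [water flow (l/min), cooling power (kW), supply air temp (C)].
rng = np.random.default_rng(0)
regimes = ([0.0, 0.5, 24.0], [12.0, 8.0, 21.0], [20.0, 15.0, 19.0])
X = np.vstack([rng.normal(loc, 0.5, size=(200, 3)) for loc in regimes])
X = StandardScaler().fit_transform(X)   # scale so no feature dominates

# Elbow method: inspect inertia (within-cluster sum of squares) versus k
# and pick the bend (four clusters for R1 and three for R2 in the thesis).
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```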
|
42 |
Optimización de data center móviles para accesibilidad y capacidades de procesamiento en lugares urbanos (Optimization of mobile data centers for accessibility and processing capabilities in urban locations)
Pezo Castañeda, Ronald Paul; De La Cruz Ninapaitan, Steve Jasson; Torres Rozas, Bruno Alexis. January 2015 (has links)
Currently, business sectors optimize their costs without losing productive efficiency by outsourcing the mobile data center service and acquiring this technology, since the deployed infrastructure is useful across all business domains.
This project concerns the implementation of a new containerized contingency data center, hereinafter CPD, in response to the need to improve business processes. The design and sizing of the different components meet the standards required in the market.
The prefabricated outdoor electric shelter enclosures themselves, and all the electrical equipment used in their protection, control, and supervision systems, are built in accordance with current ANSI, NEMA, ASTM, IEEE, ISA, and OSHA standards, and also carry approvals and quality certifications from laboratories such as UL, CSA, and SEC, or from equivalent production-control and quality-certification laboratories.
Companies currently use their own dedicated links to manage their information and the connections to their primary and contingency data centers. In many cases the communication between these links is underexploited, so communication resources need to be used more effectively; the following aspects should therefore be considered:
• Traffic modeling.
• Security.
• Link switching.
|
43 |
Assessment to support the planning of sustainable data centers with high availability
CALLOU, Gustavo Rau de Almeida. 12 November 2013 (has links)
The advent of services such as cloud computing, social networks and e-commerce has led to an increased demand for computer resources from data centers. Prominent issues for data center designers are sustainability, cost, and dependability, which are significantly affected by the redundant architectures required to support these services. Within this context, models are important tools for designers when attempting to quantify these issues before implementing the final architecture.
This thesis proposes a set of models for the integrated quantification of the sustainability impact, cost, and dependability of data center power and cooling infrastructures. This is achieved with the support of an evaluation environment composed of the ASTRO, Mercury and Optimization tools. The approach taken to perform the system dependability evaluation employs a hybrid modeling strategy which recognizes the advantages of both stochastic Petri nets and reliability block diagrams. Besides that, a model is proposed to verify that the energy flow does not exceed the maximum power capacity that each component can provide (considering electrical devices) or extract (assuming cooling equipment). Additionally, an optimization method is proposed for improving the results obtained by the reliability block diagram, stochastic Petri net and energy flow models through the automatic selection of appropriate devices from a list of candidate components. This list corresponds to a set of alternative components that may compose the data center architecture.
Several case studies are presented that analyze the environmental impact and dependability metrics as well as the operational energy cost of real-world data center power and cooling architectures.
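To make the dependability side concrete, here is a minimal sketch of the two ideas the thesis combines: reliability block diagram (RBD) algebra for series/parallel structures, plus a capacity check in the spirit of the energy flow model. The availability figures and the small power chain are invented for illustration, not results from the thesis.

```python
def series(*avail):
    """RBD series structure: the chain works only if every block works."""
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):
    """RBD parallel (redundant) structure: fails only if all blocks fail."""
    q = 1.0
    for a in avail:
        q *= (1.0 - a)
    return 1.0 - q

# Hypothetical power chain: utility feed -> 2N UPS -> PDU.
ups_2n = parallel(0.9995, 0.9995)
print(f"chain availability: {series(0.9999, ups_2n, 0.99995):.6f}")

# Energy flow check: load routed through a device must not exceed the
# power it can provide (electrical) or extract (cooling).
def check_flow(load_kw: float, capacity_kw: float) -> None:
    if load_kw > capacity_kw:
        raise ValueError("energy flow exceeds component capacity")
```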
|
44 |
Contribution à l'étude de nouvelles technologies de co-packaging et de co-design appliquées à la réalisation de modules photorécepteurs pour les systèmes de télécommunications de prochaine génération / Study of new co-packaging and co-design technologies applied to photoreceiver modules for next generation telecommunication systems
Angelini, Philippe. 29 June 2017 (has links)
This thesis falls within the scope of high-speed, short-reach optical communication, where the growing need for data transfer forces current architectures to evolve just as quickly. Access network and data center components and subsystems must follow this growth, especially on the photoreceiver side. High-speed communication at 40 Gb/s and beyond is limited by the current photoreceiver architecture, in which the interfacing of its two main functions (photodetection [PD] and amplification [TIA]) limits the maximum achievable bandwidth. To limit multi-wavelength parallelization of components, and thus deployment costs, photoreceiver bandwidth must be increased. Two approaches are proposed to optimize photoreceiver performance: a co-packaging approach, in which both main functions of the photoreceiver are treated as black boxes to which an external circuit is added to increase the bandwidth, and a co-design approach, in which a new transimpedance amplifier (TIA) is designed that directly integrates a pre-equalizing function matched to the photodiode, extending the photoreceiver's cutoff frequency.
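A rough, back-of-the-envelope view of the bandwidth limit mentioned above: the photodiode capacitance and the transimpedance of the front end form a low-pass pole at f_c = 1/(2*pi*R*C). The component values below are illustrative assumptions, not measurements from the thesis.

```python
import math

c_pd = 150e-15                      # assumed photodiode + interconnect capacitance
for r_t in (1000.0, 500.0, 250.0):  # candidate transimpedance values (ohms)
    f_c = 1.0 / (2.0 * math.pi * r_t * c_pd)
    print(f"R = {r_t:6.0f} ohm -> f_c = {f_c / 1e9:4.1f} GHz")
```

Lowering R raises the pole but costs gain and noise, which is why the thesis instead pushes the cutoff with external circuits (co-packaging) or a pre-equalizing TIA (co-design).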
|
45 |
Applying fuel cells to data centers for power and cogeneration
Carlson, Amy L. January 1900 (has links)
Master of Science / Department of Architectural Engineering and Construction Science / Fred Hasler / Data center space and power densities are increasing as today's society becomes more dependent on computer systems for processing and storing data. Most existing data centers were designed with a power density between 40 and 70 watts per square foot (W/SF), while new facilities require up to 200 W/SF. Because increased power loads, and consequently cooling loads, cannot be met in existing facilities, new data centers need to be built. Building new data centers gives owners the opportunity to explore more energy-efficient options in order to reduce costs. Fuel cells are one such option, as opposed to the typical electric grid connection with a UPS and generator for backup power.
Fuel cells are able to supply primary power, with backup power provided by generators and/or the electric grid. Secondary power could also be supplied to servers from rack-mounted fuel cells. Another application that can benefit from fuel cells is the HVAC system. Steam or high-temperature water generated by the fuel cell can serve absorption chillers in a combined heat and power (CHP) system. By using the waste heat in a CHP system, the efficiency of a fuel cell system can reach up to 90%; supplying power alone, a fuel cell is between 35 and 60% efficient. Data centers are an ideal candidate for a CHP application since they have constant power and cooling loads.
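A hedged energy-balance sketch of the CHP figures quoted above (the fuel input, the exact electrical efficiency, and the absorption chiller COP of about 0.7 are illustrative assumptions, not numbers from the report):

```python
fuel_in_kw   = 1000.0   # assumed fuel input
eta_elec     = 0.45     # electrical efficiency, within the quoted 35-60% range
eta_combined = 0.90     # combined efficiency with waste-heat recovery

electric_kw = fuel_in_kw * eta_elec                   # power to the data center
heat_kw     = fuel_in_kw * (eta_combined - eta_elec)  # recoverable waste heat
cooling_kw  = heat_kw * 0.7   # single-effect absorption chiller, COP ~0.7

print(electric_kw, heat_kw, cooling_kw)   # 450.0 450.0 315.0 (kW)
```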
Fuel cells are a relatively new technology to be applied to commercial buildings. They offer a number of advantages, such as low emissions, quiet operation, and high reliability. The drawbacks of a fuel cell system include high initial cost, limited lifetime of the fuel cell stacks, and a relatively unknown failure mode. Advances in engineering and materials used, as well as higher production levels, need to occur for prices to decrease. However, there are several incentive programs that can decrease the initial investment.
With a prediction that nearly 75% of all 10-year-old data centers will need to be replaced, it is recommended that electrical and HVAC design engineers become knowledgeable about fuel cells and how they can be applied to these high-demand facilities.
|
46 |
Systems and applications for persistent memory
Dulloor, Subramanya R. 07 January 2016
Performance-hungry data center applications demand increasingly higher performance from their storage in addition to larger capacity memory at lower cost. While the existing storage technologies (e.g., HDD and flash-based SSD) are limited in their performance, the most prevalent memory technology (DRAM) is unable to address the capacity and cost requirements of these applications. Emerging byte-addressable, non-volatile memory technologies (such as PCM and RRAM) offer performance within an order of magnitude of DRAM, prompting their inclusion in the processor memory subsystem. Such load/store accessible non-volatile or persistent memory (referred to as NVM or PM) introduces an interesting new tier that bridges the performance gap between DRAM and storage, and serves the role of fast storage or slower memory. However, PM has several implications on system design, both hardware and software: (i) the hardware caching mechanisms, while necessary for acceptable performance, complicate the ordering and durability of stores to PM, (ii) the high performance of PM (compared to NAND) and the fact that it is byte-addressable necessitate rethinking of the system software to manage PM and the interfaces to expose PM to the applications, and (iii) the future memory-based applications that will likely employ systems coupling PM with DRAM (for cost and capacity reasons) must be extremely conscious of the performance characteristics of PM and the challenges of using fast vs. slow memory in ways that best meet their performance demands.
The key contribution of our research is a set of technologies that addresses these challenges in a bottom-up fashion. Since the real hardware is not yet available, we first implement a hardware emulator that can faithfully emulate the relative performance characteristics of DRAM and PM in a system with separate DRAM and emulated PM regions. We use this emulator to perform all of our evaluations. Next we explore system software support to enable low-overhead PM access by new and legacy applications. Towards this end, we implement PMFS, an optimized light-weight POSIX file system that exploits PM's byte-addressability to avoid overheads of block-oriented storage and enable direct PM access by applications (with memory-mapped I/O). To provide strong consistency guarantees, PMFS requires only a simple hardware primitive that provides software enforceable guarantees of durability and ordering of stores to PM. We demonstrate that PMFS achieves significant (up to an order of magnitude) gains over traditional file systems (such as ext4) on a RAMDISK-like PM block device.
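The ordering-and-durability problem PMFS addresses can be sketched with ordinary memory-mapped I/O: update data, force it durable, and only then set a commit flag. This is a conceptual sketch only (hypothetical path; PMFS is a kernel file system with no Python API, and its real primitive orders CPU cache flushes rather than msync calls):

```python
import mmap
import os

fd = os.open("/mnt/pmfs/record.dat", os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, mmap.PAGESIZE)
buf = mmap.mmap(fd, mmap.PAGESIZE)   # direct, byte-addressable access

buf[0:5] = b"VALUE"                  # 1. write the data in place
buf.flush(0, mmap.PAGESIZE)          # 2. make it durable (msync) first...
buf[5:6] = b"\x01"                   # 3. ...then set the commit flag
buf.flush(0, mmap.PAGESIZE)          # a crash never sees the flag without data
os.close(fd)
```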
Finally, we address the problem of designing memory-based applications for systems with both DRAM and PM by extending our system software to manage both the tiers. We demonstrate for several representative large in-memory applications that it is possible to use a small amount of fast DRAM and large amounts of slower PM without a proportional impact to an application's performance, provided the placement of data structures is done in a careful fashion. To simplify the application programming, we implement a set of libraries and automatic tools (called X-Mem) that enables programmers to achieve optimal data placement with minimal effort on their part. Finally, we demonstrate the potentially large benefits of application-driven memory tiering with X-Mem across a range of applications.
|
47 |
Integrated Approach to Dynamic and Distributed Cloud Data Center Management
de Carvalho, Tiago Filipe Rodrigues. 01 December 2016
Management solutions for current and future Infrastructure-as-a-Service (IaaS) Data Centers (DCs) face complex challenges. First, DCs are now very large infrastructures holding hundreds of thousands if not millions of servers and applications. Second, DCs are highly heterogeneous. DC infrastructures consist of servers and network devices with different capabilities from various vendors and different generations. Cloud applications are owned by different tenants and have different characteristics and requirements. Third, most DC elements are highly dynamic. Applications can change over time. During their lifetime, their logical architectures evolve and change according to workload and resource requirements. Failures and bursty resource demand can lead to unstable states affecting a large number of services. Global and centralized approaches limit scalability and are not suitable for large dynamic DC environments with multiple tenants with different application requirements. We propose a novel, fully distributed and dynamic management paradigm for highly diverse and volatile DC environments. We develop LAMA, a novel framework for managing large-scale cloud infrastructures based on a multi-agent system (MAS). Provider agents collaborate to advertise and manage available resources, while app agents provide integrated and customized application management. Distributing management tasks allows LAMA to scale naturally, and the integrated approach improves its efficiency. The proximity to the application and knowledge of the DC environment allow agents to quickly react to changes in performance and to pre-plan for potential failures. We implement and deploy LAMA in a testbed server cluster. We demonstrate how LAMA improves the scalability of management tasks such as provisioning and monitoring. We evaluate LAMA against state-of-the-art open-source frameworks. LAMA enables customized dynamic management strategies for multi-tier applications. These strategies can be configured to respond to failures and workload changes within the limits of the desired SLA for each application.
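A toy sketch of the provider-agent/app-agent split described above (class names, the bidding exchange, and the capacity model are all invented for illustration; LAMA's actual protocol is more involved):

```python
from dataclasses import dataclass

@dataclass
class ProviderAgent:
    """Advertises one server's spare capacity."""
    host: str
    free_cores: int

    def bid(self, cores: int) -> bool:
        # Offer capacity only if this server can host the request.
        return self.free_cores >= cores

class AppAgent:
    """Per-application agent: scales its app without a central controller."""
    def __init__(self, app: str, providers: list):
        self.app, self.providers = app, providers

    def scale_out(self, cores: int) -> str:
        for p in self.providers:          # query nearby providers first
            if p.bid(cores):
                p.free_cores -= cores
                return f"{self.app}: replica placed on {p.host}"
        return f"{self.app}: no capacity, flag potential SLA violation"

providers = [ProviderAgent("s1", 2), ProviderAgent("s2", 8)]
print(AppAgent("web-tier", providers).scale_out(4))   # -> placed on s2
```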
|
48 |
Multipath approaches to avoiding TCP Incast
Song, Lin. 01 May 2017
TCP was conceived to ensure reliable node-to-node communication in moderate-bandwidth, moderate-latency WANs. As it is now a mature Internet standard, it is the default connection-oriented protocol in networks built from commodity components, including Internet data centers. Data centers, however, rely on high-bandwidth, low-latency networks for communication. Moreover, their communication patterns, especially those generated by distributed applications such as MapReduce, often take the form of synchronous multi-node-to-node bursts. Under the right conditions, the network switch buffer overflow losses induced by these bursts confuse TCP's feedback mechanisms to the point that TCP throughput collapses. This collapse, termed TCP Incast, results in gross underutilization of link capacities, significantly degrading application performance.
Conventional approaches to mitigating Incast have focused on single-path solutions, for instance, adjusting TCP's receive windows and timers, modifying the protocol itself, or adopting explicit congestion notifications. This thesis explores complementary multi-path approaches to avoiding Incast's onset. The principal idea is to use the regularity and high connectivity of typical data center networks, such as the increasingly popular fat-tree topology, to better distribute multi-node-to-node bursts across the available paths, thereby avoiding the switch buffer overflows that induce TCP Incast.
The thesis's main contributions are: (1) development of new oblivious, multi-path, routing schemes for fat-tree networks, (2) derivation of relations between the schemes and Incast's onset, and (3) investigation of a novel "front-back" approach to minimizing the packet reordering introduced by multipath routing. Formal analyses are focused on relating schemes' worst-case loading of certain network resources - expressed as oblivious performance ratios (OPRs) - to Incast's onset. Potential benefits are assessed through ns-3 simulations on fat-trees under a variety of communication patterns. Results indicate that over a variety of experimental conditions, the proposed schemes reduce the incidence of TCP Incast compared to standard routing schemes.
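A minimal sketch of an oblivious multipath choice in a fat-tree, assuming the chosen core switch uniquely determines the up/down path between two edge switches. The hashing scheme here is a generic ECMP-style illustration, not one of the thesis's specific schemes:

```python
import hashlib

def core_for_flow(src: str, dst: str, flow_id: int, num_cores: int) -> int:
    """Pick a core switch from the flow identity alone: oblivious routing
    needs no traffic measurements, so a synchronized many-to-one burst is
    spread across distinct paths instead of piling onto one switch buffer."""
    key = f"{src}:{dst}:{flow_id}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % num_cores

# Ten senders bursting to one aggregator land on several different cores:
for sender in range(10):
    print(sender, core_for_flow(f"h{sender}", "h0", 1, num_cores=4))
```

Spreading helps on the shared upstream hops; the final hop into the receiver remains common to all senders, which is why the thesis pairs the routing schemes with analysis of when Incast still sets in.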
|
49 |
Desarrollo de un software de monitoreo y predicción en tiempo real de incidencias ambientales para data center de telecomunicaciones (Development of software for real-time monitoring and prediction of environmental incidents in telecommunications data centers)
Gallegos Sánchez, Carlos Alberto; Huachin Herrera, Carlos Edson. 26 February 2018
During the last two decades, equipment with technological advances that have improved telecommunication services has appeared. This equipment runs 24 hours a day inside a data center, and the failure of a device would mean the outage of a service, with economic losses. One of the main causes of incidents is environmental variables (temperature, humidity, or dew point). This project concerns a real-time failure-prediction application, capable of predicting failures and providing reaction time before an incident.
The implemented software has two modes: online and simulation. The online mode is an application that extracts information from the environmental sensors; it was deployed at Bitel Telecom, and the company expressed its approval with a letter of recommendation. The simulation mode, in turn, runs against a web server so that results can be uploaded to the internet and monitored from a mobile device or a PC. The simulation mode has two sub-modes, random and manual, in which the algorithm's parameters can be modified according to the company's requirements. After several tests, it was concluded that failure prediction will help avoid incidents and generate economic savings.
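The abstract does not describe the prediction algorithm itself, but a minimal trend-extrapolation sketch conveys the idea of turning sensor readings into reaction time. The threshold and sampling interval below are assumed values:

```python
import numpy as np

def minutes_to_threshold(temps, threshold_c=30.0, sample_min=1.0):
    """Fit a linear trend to recent readings and extrapolate the time
    until an alarm threshold is crossed."""
    t = np.arange(len(temps)) * sample_min
    slope, intercept = np.polyfit(t, temps, 1)
    if slope <= 0:
        return float("inf")        # stable or cooling: no incident predicted
    return (threshold_c - (intercept + slope * t[-1])) / slope

readings = [24.1, 24.4, 24.9, 25.3, 25.8, 26.2]   # one sample per minute
print(f"predicted crossing in {minutes_to_threshold(readings):.1f} min")
```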
|
50 |
Optics and virtualization as data center network infrastructure
January 2012 (has links)
The emerging cloud services have motivated a fresh look at the design of data center network infrastructure in multiple layers. To transfer the huge amounts of data generated by many data-intensive applications, the data center network has to be fast, scalable, and power-efficient. To support flexible and efficient sharing in cloud services, service providers deploy a virtualization layer as part of the data center infrastructure.
This thesis explores the design and performance analysis of data center network infrastructure in both the physical network and the virtualization layer. On the physical network design front, we present a hybrid packet/circuit-switched network architecture which uses circuit-switched optics to augment traditional packet-switched Ethernet in modern data centers. We show that this technique has substantial potential to improve bisection bandwidth and application performance in a cost-effective manner. To push the adoption of optical circuits in real cloud data centers, we further explore and address the circuit control issues in shared data center environments. On the virtualization layer, we present an analytical study of the network performance of virtualized data centers. Using Amazon EC2 as an experiment platform, we quantify the impact of virtualization on network performance in a commercial cloud. Our findings provide valuable insights both for cloud users moving legacy applications into the cloud and for service providers improving the virtualization infrastructure to support better cloud services.
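As a sketch of the hybrid idea, a controller can steer the heaviest rack-to-rack demands onto optical circuits and leave the rest on the packet network. The greedy selection below is a simplification for illustration; circuit scheduling in real hybrid designs is closer to a constrained matching problem:

```python
def pick_circuits(traffic, max_circuits):
    """Greedily grant optical circuits to the largest demands, at most one
    circuit per source port and per destination port."""
    used_src, used_dst, circuits = set(), set(), []
    for (src, dst), volume in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if len(circuits) == max_circuits:
            break
        if src not in used_src and dst not in used_dst:
            circuits.append((src, dst, volume))
            used_src.add(src)
            used_dst.add(dst)
    return circuits

demand = {("r1", "r4"): 9.3, ("r2", "r4"): 0.4,
          ("r1", "r3"): 5.1, ("r2", "r3"): 7.8}   # hypothetical demand matrix
print(pick_circuits(demand, max_circuits=2))      # -> heaviest disjoint pairs
```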
|