21

Transient & steady-state thermodynamic modeling of modular data centers

Khalid, Rehan 27 May 2016 (has links)
The data center industry currently focuses on initiatives to reduce its enormous energy consumption and minimize its adverse environmental impact. Modular data centers provide considerable operational flexibility in that they are mobile and are manufactured using standard containers. This thesis develops steady-state energy and exergy destruction models for modular data centers using four different cooling approaches: direct expansion cooling, direct and indirect evaporative cooling, and free air cooling. Furthermore, the transient thermal response of these data centers to dynamic loads, such as server load varying with changing user demand over the cloud, and to changes in outside weather conditions, is studied. The effect of server thermal mass is also accounted for in developing the transient regime. Changes in data center performance are reported through the Power Usage Effectiveness (PUE) metric and through changes in the exergy destruction in the individual hot and cold aisles. The core simulation software used for this work is EnergyPlus, an open-source tool from the U.S. Department of Energy. EnergyPlus is also used as the simulation engine within the in-house software package Data Center EnergyPlus (DCE+).
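The exergy-destruction bookkeeping described above can be illustrated with a minimal sketch. This is not the thesis's model: it only shows the textbook flow-exergy balance for adiabatic mixing of hot-aisle recirculation air with cold supply air (the kind of irreversibility charged to the aisles); all figures and function names are assumptions.

```python
import math

CP_AIR = 1005.0  # J/(kg*K), specific heat of air at constant pressure

def flow_exergy(T, T0, cp=CP_AIR):
    """Specific flow exergy (thermal part only) of an air stream,
    relative to dead-state temperature T0. Temperatures in kelvin."""
    return cp * ((T - T0) - T0 * math.log(T / T0))

def mixing_exergy_destruction(m_hot, T_hot, m_cold, T_cold, T0):
    """Exergy destruction rate (W) for adiabatic mixing of a hot-aisle
    recirculation stream with cold supply air: exergy in minus exergy out."""
    T_mix = (m_hot * T_hot + m_cold * T_cold) / (m_hot + m_cold)
    ex_in = m_hot * flow_exergy(T_hot, T0) + m_cold * flow_exergy(T_cold, T0)
    ex_out = (m_hot + m_cold) * flow_exergy(T_mix, T0)
    return ex_in - ex_out

# Assumed example: 2 kg/s of 40 degC hot-aisle air leaking into
# 10 kg/s of 18 degC supply air, dead state at 25 degC.
destroyed = mixing_exergy_destruction(2.0, 313.15, 10.0, 291.15, T0=298.15)
```

The destruction term is always positive for mixing at unequal temperatures, which is why aisle containment shows up directly as an exergy saving in models like these.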
22

The role of integrated photonics in datacenter networks

Glick, Madeleine 28 January 2017 (has links)
Datacenter networks are not only growing larger; with new applications increasing east-west traffic and the introduction of the spine-leaf architecture, there is an urgent need for high-bandwidth, low-cost, energy-efficient interconnects. This paper will discuss the role integrated photonics can have in achieving datacenter requirements. We will review the state of the art and then focus on advances in optical switch fabrics and systems. The optical switch is of particular interest from the integration point of view. Current commercial MEMS and LCOS solutions are relatively large, with relatively slow reconfiguration times that limit their use in packet-based datacenter networks. This has driven the research and development of more highly integrated silicon photonic switch fabrics, including micro-ring, Mach-Zehnder, and MEMS device designs, each with its own energy, bandwidth, and scalability challenges and trade-offs. Micro rings show promise for their small footprint; however, they require an energy-efficient means of maintaining wavelength and thermal control. Latency requirements have traditionally been less stringent in datacenter networks compared to high-performance computing applications; however, with the increasing number of servers communicating within applications and the growing size of the warehouse datacenter, latency is becoming more critical. Although the transparent optical switch fabric itself adds minimal latency, we must also take account of any additional latency of the optically switched architecture. Proposed optically switched architectures will be reviewed.
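The thermal-control burden on micro-ring switches mentioned above can be made concrete with a first-order estimate. This is an illustrative back-of-the-envelope calculation, not from the paper; the thermo-optic coefficient and group index below are typical textbook values for silicon wire waveguides.

```python
# Why micro-ring switches need active thermal control: the resonance of
# a silicon ring drifts with temperature via the thermo-optic effect.

DN_DT_SI = 1.86e-4   # 1/K, thermo-optic coefficient of silicon (typical value)
N_GROUP = 4.2        # group index of a silicon wire waveguide (assumed)

def ring_resonance_shift_nm(wavelength_nm, delta_T):
    """First-order resonance wavelength shift (nm) for a temperature
    change delta_T (K): d(lambda) = lambda * (dn/dT) * delta_T / n_g."""
    return wavelength_nm * DN_DT_SI * delta_T / N_GROUP

# Near 1550 nm, a 1 K swing moves the resonance by roughly 70 pm,
# comparable to a high-Q ring's linewidth, hence closed-loop tuning.
shift = ring_resonance_shift_nm(1550.0, 1.0)
```

A drift on the order of the resonance linewidth per kelvin is exactly the coupling between energy efficiency and wavelength control that the abstract identifies as a micro-ring trade-off.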
23

Design of a Power over Ethernet Network with Category 6A Cabling for Data Center Application

López Córdova, Luis Alfredo January 2008 (has links)
This thesis describes the design of a Power over Ethernet network over a physical 10 Gigabit Ethernet infrastructure using Category 6A cabling, demonstrating the convergence of these two technologies, achieving maximum performance, and complying with all international and national standards applicable to a data center.
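The power budget behind such a design can be sketched with a DC loss calculation. The figures below are the standard IEEE 802.3af worst-case values, not numbers from the thesis; Category 6A channels have lower DC loop resistance than the worst case assumed here, which is part of the motivation for pairing PoE with 6A cabling.

```python
# DC power budget for Power over Ethernet (IEEE 802.3af, two-pair
# delivery): the gap between sourced and delivered power is I^2 * R
# loss in the cable channel.

PSE_POWER_W = 15.4          # power sourced by the switch port (802.3af)
MAX_CURRENT_A = 0.350       # maximum continuous current per 802.3af
LOOP_RESISTANCE_OHM = 20.0  # worst-case DC loop resistance of a 100 m channel

cable_loss = MAX_CURRENT_A ** 2 * LOOP_RESISTANCE_OHM  # I^2 * R
pd_power = PSE_POWER_W - cable_loss                    # power left at the device
```

With the worst-case channel, 2.45 W of the 15.4 W sourced is lost in the cable, leaving the familiar 12.95 W powered-device budget; a lower-resistance Category 6A channel shrinks that loss and the associated heating in cable bundles.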
24

Free cooling in data centers : Experimental test of direct airside economization with direct evaporative cooling

Liikamaa, Rickard January 2019 (has links)
Data centers are the backbone of the expanding Information and Communication Technology (ICT) sector. They house the Information Technology (IT) equipment that provides computational power for, e.g., cloud computing and internet services. Data centers consume massive amounts of electricity, estimated at 1% of global demand. Not all of this power is used directly by the IT equipment: to keep operating conditions in the desired range, 21-61% of the electricity is used by the cooling solution. This is mainly due to the extensive use of vapor-compression refrigeration systems (VCRS), which provide a dependable cooling solution that works independently of climate conditions. To avoid VCRS, the concept of free cooling has been applied in data centers; this can be done in many ways, but the main idea is to introduce a natural cooling source without compromising the operating environment. Previous studies have shown that direct airside economization, i.e., using outdoor air directly in the data center, has the potential to reduce the energy demand of the cooling solution. This is, however, directly dependent on outdoor conditions; by combining direct airside economization with direct evaporative cooling and recirculation of hot air from the IT equipment, the cooling solution can handle a wider range of weather conditions while still keeping the operating environment within the desired range. Simulations of similar cooling solutions have been done by Endo et al. and Ham et al. with promising results, but no study of an experimental setup has been published. To test how direct airside economization with direct evaporative cooling performs and to find its characteristics, an experimental setup was constructed: coolers with direct airside economization and direct evaporative cooling were installed in a data center module at the RISE SICS North data center ICE.
The setup consisted of 12 racks of OCP Winterfell servers in a hot- and cold-aisle configuration with containment; ducts in the ceiling connected the hot aisle to the coolers and made recirculation of hot air possible. A test schedule was developed to test the cooling solution in two of its four operating modes, where the IT load and setpoint temperatures were adjusted in predefined steps. The IT equipment consumed between 60 and 100 kW and the facility power varied between 1.5 and 7 kW, which results in a power usage effectiveness (PUE) between 1.02 and 1.08, very low values compared to traditional VCRS systems. Running the coolers in evaporative cooling mode gave a consistently lower PUE than ventilation mode, and the supply air temperature drop was up to 10 °C in that mode. The water consumption, and the corresponding water usage effectiveness (WUE), was not measured or calculated due to limitations of the test rig that made long tests unstable. Direct airside economization with direct evaporative cooling is not the cooling solution for all data centers in all climates, but where the right conditions are present it is a simple cooling solution that, without VCRS or heat exchangers (HEX), shows impressive PUE figures. Due to the physical limitations of the system it cannot handle high temperature and/or humidity levels; under such conditions the data center must either be shut down, operated outside desirable conditions, or complemented with a separate cooling system. To find the limits of this system, supply air alteration and removal of exhaust air need to be implemented. Due to the natural limitations of evaporative cooling combined with the ASHRAE guidelines, the technology needs further research to determine which climate conditions it can handle. The water consumption, which according to previous studies can be substantial, also needs further study.
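The PUE figures reported above follow from a one-line calculation. This is an illustrative sketch, not the thesis's analysis code; pairing the power extremes this way brackets the measured 1.02-1.08 range (peak cooling overhead evidently did not coincide with minimum IT load, so the upper bracket overshoots slightly).

```python
# Power Usage Effectiveness: total facility power divided by IT power.

def pue(it_power_kw, facility_overhead_kw):
    """PUE = (IT power + facility overhead) / IT power."""
    return (it_power_kw + facility_overhead_kw) / it_power_kw

best = pue(100.0, 1.5)   # high IT load, low cooling overhead -> ~1.02
worst = pue(60.0, 7.0)   # low IT load, high cooling overhead -> ~1.12
```

Because the overhead is nearly fixed while the IT load varies, PUE for this kind of free-cooled module improves as the servers are loaded harder.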
25

Rural Datascapes: A Data Farm Network for Rural North Dakota

Hieb, Sara 05 September 2012 (has links)
This thesis attempts to render architectural agency and aesthetics within the typological discussion of the data center in the rural American landscape. The disciplinary question of the role of architecture and aesthetics in data center design is related to earlier examples of factories and warehouses during modernity. The data center alters the traditional representative role of architecture: data centers are massive, horizontal buildings that are only conceivable from an aerial perspective, driven by logistics and efficiency. This thesis engages these issues by focusing on the point at which the architectural and programmatic problems of the data center converge: the building form and envelope. It treats the building envelope as an expanded surface that considers not only logistical and environmental issues, but also the social and political architectural questions related to the identity of the data center in the rural landscape.
26

An analysis of U.S. Air Force pilot separation decisions

Gültekin, Zeki; Canpolat, Ömer January 2010 (has links) (PDF)
Thesis (M.S. in Management)--Naval Postgraduate School, March 2010. / Thesis Advisor(s): Mehay, Stephen L. ; Hudgens, Bryan. "March 2010." Description based on title screen as viewed on April 26, 2010. Author(s) subject terms: Attrition, U.S. Air Force Pilots, Commissioning Source, Logit Model, Retention. Includes bibliographical references (p. 51-52). Also available in print.
27

Validation of Computational Fluid Dynamics Based Data Center Cyber-Physical Models

January 2012 (has links)
abstract: Energy-efficient design and management of data centers has seen considerable interest in recent years owing to its potential to reduce overall energy consumption and thereby the associated costs. It is therefore of utmost importance that new methods for improved physical design of data centers, resource management schemes for efficient workload distribution, and sustainable operation for improving energy efficiency be developed and tested before implementation in an actual data center. The BlueTool project provides such a state-of-the-art platform, both software and hardware, to design and analyze the energy efficiency of data centers. The software platform, GDCSim, uses a cyber-physical approach to study the physical behavior of the data center in response to management decisions by taking into account the heat recirculation patterns in the data center room. Such an approach yields the best possible energy savings owing to the characterization of cyber-physical interactions and the ability of the resource manager to make decisions based on the physical behavior of the data center. GDCSim mainly uses two Computational Fluid Dynamics (CFD) based cyber-physical models, the Heat Recirculation Matrix (HRM) and the Transient Heat Distribution Model (THDM), for thermal predictions under different management schemes. They are generated using a model generator, BlueSim. To ensure the accuracy of thermal predictions using GDCSim, the models (HRM and THDM) and the model generator (BlueSim) need to be validated experimentally. For this purpose, the hardware platform of the BlueTool project, BlueCenter, a mini data center, can be used. As part of this thesis, the HRM and THDM were generated using BlueSim and experimentally validated using BlueCenter. An average error of 4.08% was observed for BlueSim, 5.84% for HRM, and 4.24% for THDM. 
Further, a high initial error was observed for the transient thermal prediction, due to the inability of BlueSim to account for the heat retained by server components. / Dissertation/Thesis / M.S. Mechanical Engineering 2012
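The heat-recirculation-matrix idea can be sketched in a few lines. This is a hedged illustration of the general HRM formulation, not necessarily the exact one validated in the thesis: a CFD run is abstracted into a matrix D whose entry D[i][j] gives the inlet temperature rise at server i per watt dissipated by server j, after which inlet temperatures for any power distribution follow from a single matrix-vector product. The coefficients below are invented toy values.

```python
# Steady-state HRM-style prediction: T_in[i] = T_supply + sum_j D[i][j] * P[j]

def inlet_temperatures(T_supply, D, power_w):
    """Per-server inlet temperatures from supply temperature, a
    recirculation matrix D (K/W), and a server power vector (W)."""
    return [T_supply + sum(d_ij * p for d_ij, p in zip(row, power_w))
            for row in D]

# Toy 3-server example with assumed recirculation coefficients.
D = [
    [1e-3, 5e-4, 1e-4],
    [5e-4, 1e-3, 5e-4],
    [1e-4, 5e-4, 1e-3],
]
P = [300.0, 300.0, 300.0]              # W per server
T_in = inlet_temperatures(18.0, D, P)  # supply air at 18 degC
```

Replacing a full CFD run with this one product is what lets a resource manager like GDCSim evaluate many candidate workload placements quickly.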
28

Investigação de aspectos verdes na implantação de um data Center na área industrial de Suape-PE

Warlet Reis, Ivan 31 January 2009 (has links)
Made available in DSpace on 2014-06-12T15:56:09Z (GMT). No. of bitstreams: 2 arquivo2741_1.pdf: 2448903 bytes, checksum: 599fb23d0e11967dbc261ad018b22c3b (MD5) license.txt: 1748 bytes, checksum: 8a4605be74aa9ea9d79846c1fba20a33 (MD5) Previous issue date: 2009 / Quando o conceito de responsabilidade social começou a se popularizar, nos anos 90, muitos enxergavam a adoção dessas práticas apenas como uma ferramenta para promover uma imagem positiva das corporações na sociedade. Com o amadurecimento dessa política, veio a percepção dos empresários de que a utilização de meios de produção socialmente corretos também contribuía para elevar o desempenho dos negócios. Exemplos dessa tendência podem ser notados em empresas de, praticamente, todos os segmentos. No caso da informática, surgiu a Green Information Technology ou Tecnologia da Informação Verde, que chamaremos de TI Verde. Resumidamente, essa abordagem traz uma série de práticas para tornar o uso da computação mais sustentável e menos nocivo ao meio ambiente. Sabe-se que antes o desempenho e a disponibilidade eram as questões primordiais, hoje o ideal é encontrar um meio termo entre performance e eficiência no consumo. Nos Centros de Processamento de Dados (Data Centers), esse equilíbrio é um ponto crucial, dada a magnitude dos gastos necessários para o seu funcionamento. A proposta deste trabalho é apresentar uma visão das melhores práticas para a construção de um DC verde no complexo industrial de SUAPE. Com essa proposta em mente, independente de qual for a intenção dos gestores (redução de gastos ou real preocupação ambiental), o resultado da adoção das práticas verdes de TI aqui descritas parece caminhar sempre para a redução na emissão de poluentes e do consumo de recursos naturais
29

Host and Network Optimizations for Performance Enhancement and Energy Efficiency in Data Center Networks

Jin, Hao 07 November 2012 (has links)
Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers become more energy efficient with various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding padding bits in SAR. Second, since certain resource demands of VMs are bursty and stochastic in nature, to satisfy both deterministic and stochastic demands in VM placement we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow. 
Finally, while DCNs are typically provisioned with full bisection bandwidth, DCN traffic exhibits fluctuating patterns. We therefore propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem to a routing problem and employs depth-first and best-fit search to find efficient paths for flows.
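The "equivalent deterministic value" step in M3SBP can be illustrated with a standard construction. This is an assumption for illustration, not necessarily M3SBP's exact formula: a bursty demand is reduced to the deterministic value that is exceeded only with a chosen overflow probability, under a normal approximation.

```python
# Equivalent deterministic demand for a stochastic resource demand:
# provision mean + z * stddev so that overflow happens with probability
# at most overflow_prob (normal approximation).

from statistics import NormalDist

def equivalent_demand(mean, stddev, overflow_prob=0.05):
    """Deterministic demand exceeded only with probability overflow_prob,
    assuming the stochastic demand is normally distributed."""
    z = NormalDist().inv_cdf(1.0 - overflow_prob)
    return mean + z * stddev

# A VM averaging 20% CPU with stddev 8%: provisioning about a third of
# a core keeps the per-interval overflow chance near 5%.
d = equivalent_demand(0.20, 0.08, overflow_prob=0.05)
```

Once every stochastic demand is collapsed to such a value, the placement reduces to a multidimensional bin-packing problem over deterministic vectors, which is the setting in which M3SBP then maximizes the minimum per-server utilization.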
30

Cooling storage for 5G EDGE data center

Johansson, Jennifer January 2020 (has links)
Data centers require a lot of energy because, besides the building itself, they contain servers, cooling equipment, IT equipment, and power equipment. Compressor-based cooling, the solution used in many data centers around the world today, accounts for around 40% of the total energy consumption. A non-compressor-based solution used in some data centers, and still in a research phase, is free cooling. Free cooling means that outside air is utilized to cool down the data center; there are two main free-cooling technologies: airside free cooling and waterside free cooling. The purpose of this master's thesis is to analyze two types of coils, one corrugated and one smooth, supplied by Bensby Rostfria, to investigate whether it is possible to use free cooling with one of these coils in a 5G EDGE data center in Luleå. The investigation was done for the warmest day in summer, because, according to weather data, Luleå is a candidate location where this type of cooling system could be of use. The project was carried out at the RISE ICE Datacenter, where two identical systems were built next to each other with two corrugated hoses of different diameters and two smooth tubes of different diameters. The variables measured were the ambient temperature in the data hall, the water temperature in both water tanks, the temperature out of the system, the temperature into the system, and the mass flow of the air passing through the system. First, fan curves were produced to select which fan input voltages were of interest for further analysis; after that, three points were taken where the fan curve increased the most. The tests were done by placing the corrugated hoses and smooth tubes in each of the water tanks and filling the tanks with cold water. 
The coils were then to warm the water from 4.75 °C to 9.75 °C, since the temperature in the data center was around 15 °C. This particular temperature rise was chosen because free cooling is considered to require a temperature difference of at least 5 °C. The tests were repeated three times to obtain a more reliable result. All data was collected in Zabbix and analyzed further in Grafana. After each test, the files were exported from Grafana to Excel for compilation, and then to Matlab for further analysis. The first thing analyzed was whether the three tests with the same input voltage gave similar results for the water temperature in the tank and the temperature out of the system. Thereafter, trend lines were built to investigate the temperature difference in and out of the system, the temperature difference between the inlet and the water temperature in the tank, the mass flow, and the cooling power. These trend lines were then compared to each other in 2D plots of cooling power versus the temperature difference between the inlet and the water. The two coils could then be compared to see which gave the larger cooling power and was most efficient to install in a future 5G data center module. The conclusion of this master's thesis is that the corrugated hose gives a higher cooling power at higher outdoor temperature differences, but during the warmest summer day it was distinctly the smooth tube that gave the largest cooling power and therefore the best result. Hand calculations also gave, for each coil type, the length of pipe necessary to cool down the 5G module and the required water tank volume. It was also shown that for the warmest summer day, a water tank temperature of 24 °C is best, compared to 20 °C and 18 °C. 
The length of coil needed to cool down the data center with a water tank temperature of 24 °C, and the required tank volume, differ between the two types of coils: the corrugated hose requires a length of 1.8 km and a water tank of 9.4 m3, while the smooth tube requires a length of 1.7 km and a water tank volume of 12 m3. As can be seen throughout this project, this type of cooling equipment is not the most efficient for the warmest summer day, but it could easily be used in other seasons.
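The cooling power compared between the two coils follows from an energy balance on the air stream. This is an illustrative sketch, not the thesis's Matlab analysis, and the flow and temperature figures below are assumed values.

```python
# Cooling power of an air-to-water coil from an air-side energy balance:
# Q = m_dot * cp * (T_in - T_out).

CP_AIR = 1005.0  # J/(kg*K), specific heat of air at constant pressure

def cooling_power_kw(mass_flow_kg_s, T_in_c, T_out_c):
    """Heat removed from the air stream, in kW."""
    return mass_flow_kg_s * CP_AIR * (T_in_c - T_out_c) / 1000.0

# Assumed example: 1.2 kg/s of 15 degC hall air cooled to 9 degC by the
# submerged coil removes about 7.2 kW.
q = cooling_power_kw(1.2, 15.0, 9.0)
```

Plotting this quantity against the inlet-to-water temperature difference, as the thesis does with its trend lines, is what allows the corrugated and smooth coils to be compared on equal terms.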
