1

Cost Optimization of Modular Data Centers

Nayak, Suchitra January 2018
During the past two decades, the increasing demand for digital telecommunications, data storage and data processing, coupled with simultaneous advances in computer and electronic technology, has resulted in a dramatic growth rate in the data center (DC) industry. It has been estimated that almost 2% of total US energy consumption and 1.5% of the world's total power consumption goes to DCs. With fossil fuels and the earth's natural energy sources depleting every day, greater efforts have to be made to save energy and improve efficiency. As yet, most DCs are highly inefficient in their energy usage. A significant part of this inherent inefficiency comes from the poor design and rudimentary operation of current DCs. Thus, there is an urgent need to optimize the power consumption of DCs. This has led to the advent of modular DCs, newer scalable DC architectures that reduce cost and increase efficiency by eliminating overdesign and allowing for scalable growth. This concept has been particularly appealing for small businesses that find it difficult to commit to setting up a traditional DC with a huge upfront capital investment. However, adoption and implementation are still limited by the lack of a systematic approach for quickly identifying a modular DC design. Considering the many different choices for subcomponents, such as cooling systems, enclosures and power systems, this is a non-trivial exercise, especially given the complex multiphysics interactions among components that drive system efficiency. No research is available on designing such DCs. Therefore, engineers and designers usually rely on experience to avoid lengthy, elaborate engineering analysis, particularly during the conception stages of a DC deployment project. Here, we develop a design tool that not only optimizes the design of modular DCs but also makes the design process much faster than manual design by engineers. One of the major problems in designing modular DCs is finding the optimum placement of the cooling units so as to keep temperatures within ASHRAE guidelines (the recommended safe temperature threshold). In addition to the optimum selection and placement of the cooling units and their auxiliary components, the tool also produces an optimum design for the power connections to the cooling units and IT racks, with redundancy. A bill of materials and key performance indicators (KPIs) for those designs are also generated by the tool. Overall, this tool in the hands of bidders or sales representatives can significantly increase their chance of winning a project. / Thesis / Master of Applied Science (MASc)
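As a rough illustration of the constrained selection such a tool performs (all names, costs, and the temperature model below are hypothetical stand-ins; the thesis's multiphysics analysis is not reproduced here), one can enumerate candidate cooling-unit placements, keep those whose predicted rack-inlet temperature stays within the ASHRAE recommended envelope (upper bound 27 °C), and pick the cheapest:

```python
# Illustrative sketch only: brute-force selection of a cooling-unit
# placement that keeps the predicted rack-inlet temperature within the
# ASHRAE recommended upper bound. predict_max_inlet_temp() is a toy
# stand-in for the thesis tool's multiphysics analysis.
from dataclasses import dataclass
from itertools import combinations

ASHRAE_MAX_INLET_C = 27.0  # upper bound of the ASHRAE recommended envelope

@dataclass(frozen=True)
class Placement:
    slots: tuple   # which module slots hold cooling units
    capex: float   # assumed purchase + installation cost

def predict_max_inlet_temp(placement: Placement, it_load_kw: float) -> float:
    """Toy surrogate: each unit removes a fixed share of the IT heat load."""
    cooling_kw = 30.0 * len(placement.slots)   # assume 30 kW per cooling unit
    unmet_kw = max(it_load_kw - cooling_kw, 0.0)
    return 22.0 + 0.5 * unmet_kw               # toy linear temperature rise

def cheapest_feasible(candidates, it_load_kw):
    feasible = [p for p in candidates
                if predict_max_inlet_temp(p, it_load_kw) <= ASHRAE_MAX_INLET_C]
    return min(feasible, key=lambda p: p.capex, default=None)

# Example: choose 2 of 4 module slots for cooling units.
candidates = [Placement(slots=c, capex=40000 + 1000 * sum(c))
              for c in combinations(range(4), 2)]
print(cheapest_feasible(candidates, it_load_kw=55.0))
```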
2

Modelling and Evaluation of Distributed Airflow Control in Data Centers

Lindberg, Therese January 2015
In this work, a proposed method for reducing the energy consumption of a data center's cooling system is modelled and evaluated. Different approaches to distributed airflow control are introduced, in which different amounts of airflow can be supplied to different parts of the data center (instead of an even airflow distribution). Two kinds of distributed airflow control are compared to a traditional approach without airflow control, the difference between the two control approaches being the type of server rack used: either traditional racks or a new kind of rack with vertically placed servers. A model capable of describing the power consumption of the data center cooling system for these different approaches to airflow control was constructed. Based on the model, MATLAB simulations of three different server workload scenarios were then carried out. It was found that introducing distributed airflow control reduced the power consumption in all scenarios and that the control approach with the new kind of rack yielded the largest reduction. In that case the power consumption of the cooling system could be reduced to 60% - 69% of the initial consumption, depending on the workload scenario. Also examined were the effects on the data center of different parameters and process variables (quantities held fixed with the help of feedback loops), as well as optimal set-point values.
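A minimal sketch of why distributed airflow control saves fan power, assuming the common fan affinity law (fan power grows roughly with the cube of airflow); the thesis's MATLAB model is more detailed and is not reproduced here:

```python
# Illustrative sketch, not the thesis model: if fan power scales with the
# cube of airflow (fan affinity law), then supplying each zone only the
# airflow it needs beats supplying every zone the peak demand.

def fan_power(flows, c=1.0):
    """Total fan power for per-zone airflows, assuming P = c * q**3 per zone."""
    return sum(c * q**3 for q in flows)

demand = [0.4, 0.6, 1.0, 0.5]  # per-zone airflow demand (normalized)

uniform = fan_power([max(demand)] * len(demand))  # even distribution at peak
distributed = fan_power(demand)                   # airflow matched per zone

print(f"distributed/uniform fan power: {distributed / uniform:.2f}")  # ~0.35
```

The cubic exponent means even modest over-supply is expensive, which is the intuition behind the reductions reported above; the exact ratio depends on the workload distribution.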
3

Reducing Peak Power Consumption in Data Centers

Green, George Michael January 2013
No description available.
4

An OpenFlow Approach to Fault Handling in the Hypercube Topology with Forwarding Table Compaction

LIMA, D. S. A. 16 May 2016
In server-centric data centers, servers participate not only in data processing but also in forwarding network traffic. One possible topology for interconnecting the servers in a data center is the hypercube. In it, packet forwarding is normally based on XOR operations, which compute the neighbor closest to the destination very efficiently. However, while this simplicity contributes to higher throughput and lower latency, forwarding only works if the hypercube is complete, that is, in the absence of node or link failures. The use of SDN and the OpenFlow protocol can be an alternative for guaranteeing traffic forwarding in these situations. However, adopting forwarding tables in a hypercube topology carries a high cost related to the large number of entries in those tables, which grows exponentially. In this context, this work presents an OpenFlow-based proposal for handling failures in hypercubes, including a reduction in the number of forwarding table entries, with a logarithmic compaction rate proportional to the number of dimensions of the hypercube.
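The XOR-based forwarding the abstract describes can be sketched as follows (a minimal illustration of classic hypercube routing, not the dissertation's OpenFlow rule set): node IDs are d-bit integers, and each hop corrects one bit in which the current node and the destination differ.

```python
# Minimal sketch of classic XOR-based hypercube forwarding (not the
# dissertation's OpenFlow rule set): node IDs are d-bit integers, and each
# hop corrects one bit in which the current node and the destination differ.

def next_hop(current: int, dest: int) -> int:
    """Return the neighbor one bit closer to dest (or current on arrival)."""
    diff = current ^ dest        # bits still to be corrected
    if diff == 0:
        return current           # already at the destination
    lowest = diff & -diff        # pick the lowest differing bit
    return current ^ lowest      # the neighbor differing in exactly that bit

# Route from node 000 to node 110 in a 3-dimensional hypercube.
node, dest = 0b000, 0b110
path = [node]
while node != dest:
    node = next_hop(node, dest)
    path.append(node)
print([format(n, "03b") for n in path])  # ['000', '010', '110']
```

Rules keyed per dimension rather than per destination need on the order of d entries to cover 2^d nodes, which is consistent with the logarithmic compaction rate the abstract describes.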
5

Urban Data Center: An Architectural Celebration of Data

Talarico, Gui 23 June 2011
Throughout the last century, the popularization of the automobile and the development of roads and highways changed the way we live and how cities develop. Bridges, aqueducts, and power plants had comparable impact in the past. I consider each of these examples to be "icons" of the infrastructures that we humans build to improve our living environments and to fulfill our urge to become better. Fast forward to now. The last decades showed us the development of new sophisticated networks that connect people and continents. Communication grids, satellite communication, high-speed fiber optics and many other technologies have made possible the existence of the ultimate human network - the internet. A network created by us to satisfy our needs to connect, to share, to socialize and communicate over distances never before imagined. The data center is the icon of this network. Through modern digitalization methods, text, sounds, images, and knowledge can be converted into zeros and ones and distributed almost instantly to all corners of the world. The data center is the centerpiece in the storage, processing, and distribution of this data. The Urban Data Center hopes to bring this icon closer to its creators and users. Let us celebrate its existence and shed some light on the inner workings of the world's largest network. Let the users who inhabit this critical network come inside it and understand where it lives. This thesis explores the expressive potential of networks and data through the design of a data center in Washington, DC. / Master of Architecture
6

Reconnection: Establishing a Link Between Physical And Virtual Space

Miller, Daniel 27 October 2014
No description available.
7

Resource Management in Virtualized Data Center

Rabbani, Md January 2014
As businesses increasingly rely on the cloud to host their services, cloud providers are striving to offer guaranteed and highly-available resources. To achieve this goal, recent proposals have advocated offering both computing and networking resources in the form of Virtual Data Centers (VDCs). However, to offer VDCs, cloud providers have to overcome several technical challenges. In this thesis, we focus on two key challenges: (1) the VDC embedding problem: how to efficiently allocate resources to VDCs such that energy costs and bandwidth consumption are minimized, and (2) the availability-aware VDC embedding and backup provisioning problem, which aims at allocating resources to VDCs with hard guarantees on their availability. The first part of this thesis is primarily concerned with the first challenge. The goal of the VDC embedding problem is to allocate resources to VDCs while minimizing bandwidth usage in the data center and maximizing the cloud provider's revenue. Existing proposals have focused only on the placement of VMs and ignored the mapping of other types of resources such as switches. Hence, we propose a new VDC embedding solution that explicitly considers the embedding of virtual switches in addition to virtual machines and communication links. Simulations show that our solution results in a high acceptance rate of VDC requests, lower bandwidth consumption in the data center network, and increased revenue for the cloud provider. In the second part of this thesis, we study the availability-aware VDC embedding and backup provisioning problem. The goal is to provision virtual backup nodes and links in order to achieve the desired availability for each VDC. Existing solutions addressing this challenge have overlooked the heterogeneity of data center equipment in terms of failure rates and availability. To address this limitation, we propose a High-availability Virtual Infrastructure (Hi-VI) management framework that jointly allocates resources for VDCs and their backups while minimizing total energy costs. Hi-VI uses a novel technique to compute the availability of a VDC that considers both (1) the heterogeneity of the data center networking and computing equipment, and (2) the number of redundant virtual nodes and links provisioned as backups. Simulations demonstrate the effectiveness of our framework compared to heterogeneity-oblivious solutions in terms of revenue and the number of physical servers used to embed VDCs.
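As a rough illustration of availability-aware provisioning (an assumption-laden sketch, not Hi-VI's actual availability computation, which the abstract does not detail), the availability of a virtual node with heterogeneous redundant backups can be estimated as the probability that at least one placement is up:

```python
# Rough illustration, not Hi-VI's actual model: the availability of a
# virtual node provisioned with heterogeneous backups, estimated as the
# probability that at least one placement is up, assuming independent
# failures across the equipment involved.
from math import prod

def node_availability(placement_availabilities):
    """P(at least one placement up) = 1 - P(all placements down)."""
    return 1.0 - prod(1.0 - a for a in placement_availabilities)

# A primary on a server with 99% availability plus one backup at 97%:
print(f"{node_availability([0.99, 0.97]):.5f}")  # 0.99970
```

Independence is itself an assumption; correlated failures on shared switches or racks would lower this figure, which is one reason the equipment heterogeneity the abstract highlights matters for placement.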
8

A NEW HOME FOR WHITE SANDS MISSILE RANGE TELEMETRY DATA IN THE NEW MILLENNIUM

Newton, Henry L., Bones, Gary L. October 1999
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The White Sands Telemetry Data Center (TDC) is moving to a new home. The TDC, along with various range functions, is moving to the new J. W. Cox Range Control Center (CRCC). The CRCC is under construction and will replace the present control center. Construction of the new CRCC and the resulting move were prompted by the presence of asbestos in the present Range Control Center (RCC). The CRCC construction will be completed in September 1999, at which time the communications backbone will be installed. (The estimated time to complete the installation is nine months.) In early 2000, White Sands will begin the transition of the TDC and other commodity functions to the CRCC. The transition must not interrupt normal support to range customers and will result in the consolidation of all range control functions. The new CRCC was designed to meet current and future mission requirements and will contain the latest in backbone network design and functionality for the range customer. The CRCC is the single point of control for all missions conducted on the 3,700-square-mile range. The Telemetry Data Center will be moved into the new CRCC in two parts. This will allow us to run parallel operations with the old RCC until the CRCC is proven reliable, and to minimize overall downtime. Associated telemetry fiber-optic, microwave communications and field data relay sites will be upgraded and moved at the same time. Since the TDC is so tightly dependent upon data input from both fiber-optic and microwave communications, a cohesive move is critical to the overall success of the transition. This paper also provides an overview of the CRCC design, the commodity transition, and lessons learned.
9

Clouded space: Internet physicality

Izumo, Naoki 01 May 2017
On Friday, October 21st, 2016, there was a large-scale hack of an Internet domain hosting provider that took several websites, including Netflix, Amazon, Reddit, and Twitter, offline. Dyn, a cloud-based Internet Performance Management company, announced at 9:20 AM ET that it had resolved an attack that began at 7 AM ET that day. However, another attack happened at 11:52 AM ET. The attacks raised concern among the public and directed our attention towards Internet security. This also revealed the precariousness of Internet infrastructure. The infrastructure being used today is opaque, unregulated, and incontestable. Municipally provided public utilities are built without any transparency; thus, we do not expect failure from those systems. For instance, the Flint, Michigan water crisis raised issues of water infrastructure. Not only did the crisis spark talks about the corrosion of pipes, but also larger societal issues. Flint, a poor, largely African American community, became a victim of environmental racism, a type of discrimination where communities of color or low-income residents are forced to live in environmentally dangerous areas. In order for myself and the larger public to understand this opaque system, we need to understand the infrastructure and how it works. With regard to Internet infrastructure, I focus on data centers, where backup servers, batteries and generators are built into the architectural landscape in case of failure. There is a commonly held thought that overshadows the possibility of imminent technological failure: it cannot happen. This sort of thinking influences other modes of our daily lives: individuals build concrete bomb shelters underground for the apocalypse and stock food, but do not prepare for data breakdown. The consciousness of loss is further perpetuated by technology and its life expectancy. Clouded Space: Internet Physicality attempts to explore the unexceptional infrastructure of the Internet and how it exists right beneath our feet. That in itself is not very cloud-like. The work questions the integrity of our infrastructure as much as environmental issues, highlighting the questionable relationship we have with data and our inclination to back up data to protect ourselves from failure. This is a relatively new topic and the challenges are not well understood. There seem to be cracks in the foundation, and though they are not yet obvious, they appear to be widening.
10

A Research of Customer Potential Demand and Satisfaction of Internet Data Center

Liou, Jen-Jui 30 July 2002
Network technologies are considered critically important, because modern enterprises face the trend of e-commerce and make wide use of Internet communications. Internet Data Centers (IDCs) provide services to satisfy these customer requirements; hence, IDCs have become a new market worldwide. Because the market in Taiwan is so small, enterprises struggle to make a profit, and some even lose business. To address this problem, we study three frameworks of fundamental IDC services: infrastructure, monitoring, and value-added services. The goal is to identify the services important to customer demand by comparing customer satisfaction before and after renting. We also compare the differences between fundamental factors and service factors, the correlation between the integrated service and the service factors, and the differences between satisfaction and expectations before and after renting. The purpose of our survey is to provide feasible suggestions to IDC managers so that they can not only survive but also profit. The results are as follows: 1. The important services for customer demand include (a) power, air conditioning, bandwidth and fire suppression systems under the infrastructure service; (b) anti-virus protection and DoS attack prevention under the monitoring service; (c) storage backup, remote backup and firewall protection under the value-added service. 2. The services with high demands for quality of service (QoS) include (a) power, air conditioning, fire suppression systems, security access control systems and TV security monitoring under the infrastructure service; (b) storage backup, remote backup and firewall protection under the value-added service. 3. The sources of dissatisfaction with quality of service include (a) security access control systems, TV security monitoring, temperature & humidity control, and web hosting under the infrastructure service; (b) anti-virus protection and DoS attack prevention under the monitoring service; (c) the cache service under the value-added service. 4. We compare the differences between expected QoS and satisfaction; the results are shown in Figure 5.1.2, Figure 5.1.3, and Figure 5.1.4 respectively. 5. Comparing expected service and satisfaction before and after renting, four items show a significant difference: (a) air conditioning and UPS under infrastructure; (b) application system hosting and DNS under the value-added service. In contrast, six items show no significant difference, including (a) security access control and TV security monitoring under infrastructure; (b) server monitoring under the monitoring service, along with contracts for various QoS levels, consulting, systems integration, total solutions, remote backup and the whole service. 6. Comparing the correlation between the whole service and the service factors, six items show high correlation, including (a) equipment security under the service demand factor; (b) equipment security, equipment service and monitoring service under the expected QoS factor; (c) equipment security and integrated service under the QoS satisfaction factor.
