1 |
Reducing Peak Power Consumption in Data Centers. Green, George Michael. January 2013 (has links)
No description available.
|
2 |
Dissertation title: An OpenFlow Approach to Fault Handling in the Hypercube Topology with Forwarding-Table Compaction. LIMA, D. S. A. 16 May 2016 (has links)
Previous issue date: 2016-05-16 / In server-centric data centers, the servers take part not only in data processing but also in forwarding network traffic. One possible topology for interconnecting the servers of a data center is the hypercube, in which packet forwarding is typically based on XOR operations that compute, very efficiently, which neighbor is closest to the destination.
However, while this simplicity increases throughput and reduces latency, forwarding only works when the hypercube is complete, that is, in the absence of node or link failures.
SDN and the OpenFlow protocol can be an alternative for guaranteeing traffic forwarding in those situations. However, adopting forwarding tables in a
hypercube topology carries a high cost related to the large number of table entries, which grows exponentially. In this context, this work presents an OpenFlow-based proposal for fault handling in hypercubes that includes reducing the number of forwarding-table entries, with a logarithmic compaction ratio proportional to the number of hypercube dimensions.
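The XOR-based forwarding described above can be sketched as follows; this is a minimal illustration of classic hypercube routing, not the dissertation's OpenFlow implementation:

```python
def next_hop(current: int, dest: int) -> int:
    """Classic XOR-based hypercube routing: flip the lowest-order
    bit in which the current node's ID and the destination's differ."""
    diff = current ^ dest
    if diff == 0:
        return current        # already at the destination
    lowest = diff & -diff     # isolate the lowest set bit
    return current ^ lowest   # neighbor across that dimension

# Route from node 0b000 to node 0b101 in a 3-dimensional hypercube.
path = [0b000]
while path[-1] != 0b101:
    path.append(next_hop(path[-1], 0b101))
print(path)  # [0, 1, 5]
```

The route length equals the Hamming distance between the two node IDs, which is why a single missing node or link breaks the scheme: the computed neighbor may simply not exist.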
|
3 |
Reconnection: Establishing a Link Between Physical and Virtual Space. Miller, Daniel. 27 October 2014 (has links)
No description available.
|
4 |
A NEW HOME FOR WHITE SANDS MISSILE RANGE TELEMETRY DATA IN THE NEW MILLENNIUM. Newton, Henry L.; Bones, Gary L. October 1999 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The White Sands Telemetry Data Center (TDC) is moving to a new home. The TDC,
along with various range functions, is moving to the new J. W. Cox Range Control
Center (CRCC). The CRCC is under construction and will replace the present control
center. Construction of the new CRCC and the resulting move was prompted by the
presence of asbestos in the present Range Control Center (RCC).
The CRCC construction will be completed in September 1999, at which time the
communications backbone will be installed. (The installation is estimated to take
nine months.) In early 2000, White Sands will begin the transition of the TDC and other
commodity functions to the CRCC. The transition must not interrupt normal support to
range customers and will result in the consolidation of all range control functions.
The new CRCC was designed to meet current and future mission requirements and will
contain the latest in backbone network design and functionality for the range customer.
The CRCC is the single point of control for all missions conducted on the 3700 square
mile range.
The Telemetry Data Center will be moved into the new CRCC in two parts. This will
allow us to run parallel operations with the old RCC until the CRCC is proven reliable,
and it will minimize overall downtime. Associated telemetry fiber-optic, microwave
communications, and field data relay sites will be upgraded and moved at the same time.
Since the TDC depends so tightly on data from both its fiber-optic and microwave
communications inputs, a cohesive move is critical to the overall success of
the transition.
This paper also provides an overview of the CRCC design, commodity transition, and
lessons learned.
|
5 |
Clouded space: Internet physicality. Izumo, Naoki. 01 May 2017 (links)
On Friday, October 21st, 2016, a large-scale attack on an Internet domain hosting provider took several websites, including Netflix, Amazon, Reddit, and Twitter, offline. Dyn, a cloud-based Internet Performance Management company, announced at 9:20 AM ET that it had resolved an attack that began at 7 AM ET that day. However, another attack happened at 11:52 AM ET. The attacks raised concern among the public and directed our attention toward Internet security. They also revealed the precariousness of Internet infrastructure. The infrastructure in use today is opaque, unregulated, and incontestable. Municipally provided public utilities are built without any transparency; thus, we do not expect failure from those systems. For instance, the Flint, Michigan water crisis raised issues of water infrastructure. Not only did the crisis spark talks about the corrosion of pipes, but also about larger societal issues. Flint, a poor, largely African American community, became a victim of environmental racism, a type of discrimination in which communities of color or low-income residents are forced to live in environmentally dangerous areas. For the larger public, and for myself, to see through this opaque system, we need to understand the infrastructure and how it works.
With regard to Internet infrastructure, I focus on data centers, where backup servers, batteries, and generators are built into the architectural landscape in case of failure. There is a commonly held thought that overshadows the possibility of imminent technological failure: it cannot happen. This sort of thinking influences other modes of our daily lives: individuals build concrete bomb shelters underground for the apocalypse and stock food, but do not prepare for data breakdown. The consciousness of loss is further perpetuated by technology and its life expectancy.
Clouded Space: Internet Physicality attempts to explore the unexceptional infrastructure of the Internet and how it exists right beneath our feet. That in itself is not very cloud-like. The work questions the integrity of our infrastructure as much as environmental issues, highlighting the questionable relationship we have with data and our inclination to back up data to protect ourselves from failure. This is a relatively new topic, and the challenges are not well understood. There seem to be cracks in the foundation, and though they are not yet obvious, they appear to be widening.
|
6 |
A Research of Customer Potential Demand and Satisfaction of Internet Data Center. Liou, Jen-Jui. 30 July 2002 (links)
Network technologies are considered significantly important because modern enterprises face the trend of e-commerce and make wide use of the conveniences of Internet communications. An Internet Data Center (IDC) provides services to satisfy customers' requirements.
Hence, IDCs have become a new market worldwide. Because the market in Taiwan is so small, however, enterprises cannot turn a profit, and some even lose business.
To alleviate this problem, we study three categories of fundamental IDC services: infrastructure, monitoring, and value-added services. The goal is to find the services important to customer demand by comparing customer satisfaction before and after leasing.
We also compare the differences between fundamental factors and service factors, the correlation between the integrated service and the service factors, and the differences between satisfaction and expectation before and after leasing. The purpose of our survey is to provide feasible suggestions to IDC managers so that they can not only survive but also profit. The results are as follows:
1. The important services of customer demand include:
(a) power, air conditioning, bandwidth, and the fire suppression system, under the infrastructure service.
(b) anti-virus protection and DoS attack prevention, under the monitoring service.
(c) storage backup, remote backup, and firewall protection, under the value-added service.
2. The high demands for quality of service (QoS) include:
(a) power, air conditioning, the fire suppression system, the security access control system, and TV security monitoring, under the infrastructure service.
(b) storage backup, remote backup, and firewall protection, under the value-added service.
3. The dissatisfactions with quality of service include:
(a) the security access control system, TV security monitoring, temperature and humidity control, and web hosting, under the infrastructure service.
(b) anti-virus protection and DoS attack prevention, under the monitoring service.
(c) the cache service, under the value-added service.
4. We compare the differences between expected QoS and satisfaction. The results are shown in Figure 5.1.2, Figure 5.1.3, and Figure 5.1.4, respectively.
5. We compare the differences between the expected service and satisfaction before and after leasing. The results show that four items have a "significant difference", including
(a) air conditioning and UPS, under the infrastructure.
(b) application system hosting and DNS, under the value-added service.
In contrast, six items show no "significant difference", including
(a) security access control and TV security monitoring, under the infrastructure.
(b) server monitoring under the monitoring service, the contracts of various QoS, consultancy, systems integration, total solution, remote backup, and the whole service.
6. We compare the correlation between the whole service and the service factors. The results show that six items have a "high correlation", including
(a) equipment security, under the factor of service demand.
(b) equipment security, equipment service, and monitoring service, under the expected factor for QoS.
(c) equipment security and integrated service, under the satisfaction factor for QoS.
|
7 |
Modeling the influence of geographic variables on snowfall in Pennsylvania from 1950-2007. Pier, Heather L. January 2009 (links)
Thesis (M.S.)--University of Delaware, 2009. / Principal faculty advisor: Daniel J. Leathers, Dept. of Geography. Includes bibliographical references.
|
8 |
Dependability evaluation of data center infrastructures considering the effects of temperature variation. Souza, Rafael Roque de. 30 August 2013 (links)
Previous issue date: 2013-08-30 / FACEPE / Data centers are constantly growing in order to meet the demands
of new technologies such as cloud computing and e-commerce. In such paradigms,
periods of downtime can lead to financial losses of millions of dollars and
permanently damage a company's reputation. Several factors affect the availability
of IT systems in a data center, among them variations in ambient temperature.
This work proposes models that capture the effect of temperature variation
on data center infrastructures. In addition to these models, a methodology is
proposed to support the construction and evaluation of different scenarios.
The methodology enables analysis through several intermediate models that
help determine the effect of temperature variation on the availability of the
data center's IT infrastructures.
In this approach, the evaluation is carried out with stochastic Petri net models,
the Arrhenius model, an energy model, and reliability block diagrams.
Finally, three real case studies, as well as examples, are presented
to show the applicability of this work.
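The Arrhenius model mentioned above relates component failure behavior to operating temperature. A minimal sketch of how such a model is typically applied to scale a mean time to failure (MTTF) between two room temperatures; the 0.7 eV activation energy and the MTTF figures are illustrative assumptions, not the dissertation's parameters:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, in eV/K

def arrhenius_af(t_base_c: float, t_new_c: float, ea_ev: float = 0.7) -> float:
    """Arrhenius acceleration factor between two operating temperatures.
    ea_ev is the activation energy in eV; 0.7 eV is a common assumption
    for electronics, used here only for illustration."""
    t_base = t_base_c + 273.15  # Celsius -> Kelvin
    t_new = t_new_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_base - 1.0 / t_new))

def scaled_mttf(mttf_base_h: float, t_base_c: float, t_new_c: float) -> float:
    """Scale an MTTF measured at t_base_c to a hotter room at t_new_c:
    failures accelerate, so the MTTF shrinks by the acceleration factor."""
    return mttf_base_h / arrhenius_af(t_base_c, t_new_c)
```

Scaled MTTFs like these would then feed the stochastic Petri net and reliability-block-diagram models as temperature-dependent failure parameters.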
|
9 |
TRIIIAD: An Architecture for Autonomic Orchestration of Server-Centric Data Center Networks. VASSOLER, G. L. 22 May 2015 (links)
Previous issue date: 2015-05-22 / This thesis presents two contributions to server-centric
data center networks. The first, called Twin Datacenter Interconnection Topology, focuses on
topological aspects and demonstrates how the use of Twin Graphs can potentially reduce
cost while ensuring high scalability, fault tolerance, resilience, and performance. The second,
called TRIIIAD (TRIple-Layered Intelligent and Integrated Architecture for Datacenters),
focuses on the coupling between cloud orchestration and network control. TRIIIAD consists
of three horizontal layers and a vertical plane for control, management, and
orchestration. The top layer represents the data center's cloud. The middle layer
provides a lightweight, efficient mechanism for data routing and forwarding. The
bottom layer works as a distributed optical switch. Finally, the vertical plane
aligns the operation of the three layers while keeping them agnostic to one another. This plane is
enabled by an augmented SDN controller, integrated into the orchestration dynamics
so as to keep network information consistent with the decisions made at the
virtualization layer.
|
10 |
Automated Monitoring for Data Center Infrastructure. Jafarizadeh, Mehdi. January 2021 (links)
Environmental monitoring using wireless sensors plays a key role in detecting hotspots or over-cooling conditions in a data center (DC). Despite a myriad of Data Center Wireless Sensor Network (DCWSN) solutions in literature, their adoption in DCs is scarce due to four challenges: low reliability, short battery lifetime, lack of adaptability, and labour intensive deployment. The main objective of this research is to address these challenges in our specifically designed hierarchical DCWSN, called Low Energy Monitoring Network (LEMoNet).
LEMoNet is a two-tier protocol that features Bluetooth Low Energy (BLE) for sensor communication in its first tier. It leverages multi-gateway packet reception in its second tier to mitigate the unreliability of BLE. The protocol has been experimentally validated in a small DC and evaluated by simulations in a midsize DC. However, since the main application of DCWSNs is in colocation and large DCs, an affordable and fast approach is still required to assess LEMoNet at large scale. As our first contribution, we develop an analytical model to characterize its scalability and energy efficiency in a given network topology. The accuracy
of the model is validated through extensive event-driven simulations. Evaluation results show that LEMoNet can achieve high reliability in a network of 4800 nodes at a duty cycle of 15 s.
To achieve network adaptability, we introduce and design SoftBLE, a Software-Defined Networking (SDN) based framework that provides controllability to the network. It takes advantage of advanced control knobs recently made available in BLE protocol stacks. SoftBLE is complemented by two orchestration algorithms that optimize gateway and sensor parameters based on run-time measurements. Evaluation results from both an experimental testbed and a large-scale simulation study show that with SoftBLE, sensors consume 70% less power in data collection than with baseline approaches, while achieving a Packet Reception Rate (PRR) of no less than 99.9%.
One of the main steps of DCWSN commissioning is sensor localization, which is labour-intensive if driven manually. To streamline the process, we devise a novel approach to automated sensor mapping. Since Radio Frequency (RF) alone is not a reliable data source for sensor localization in harsh, multi-path-rich environments such as DCs, we investigate non-RF alternatives. Thermal Piloting is a classification model that correlates temperature sensor measurements with the expected thermal values at their locations. It achieves an average localization error of 0.64 meters in a modular DC testbed. The idea is further improved by a multimodal approach that incorporates pairwise Received Signal Strength (RSS) measurements of RF signals. The problem is formulated as Weighted Graph Matching
(WGM) between an analytical graph and an experimental graph. A parallel algorithm is proposed to find heuristic solutions to this NP-hard problem and is 30% more accurate than the baselines. The evaluation in a modular DC testbed shows that the localization errors using multi-modality are less than one-third of those using thermal data alone. / Thesis / Candidate in Philosophy
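The weighted-graph-matching formulation can be illustrated with a toy brute-force matcher: find the permutation mapping measured sensors to candidate locations that best reconciles the two graphs' pairwise edge weights. The edge weights below are invented for illustration, and the exhaustive search stands in for the thesis's parallel heuristic, which is needed because the problem is NP-hard:

```python
from itertools import permutations

def matching_cost(w_analytical, w_measured, perm):
    """Cost of mapping sensor i (measured graph) to location perm[i]
    (analytical graph): total mismatch of pairwise edge weights."""
    n = len(perm)
    return sum(
        abs(w_analytical[perm[i]][perm[j]] - w_measured[i][j])
        for i in range(n) for j in range(i + 1, n)
    )

def best_mapping(w_analytical, w_measured):
    """Exhaustive WGM, feasible only for a handful of sensors."""
    n = len(w_measured)
    return min(permutations(range(n)),
               key=lambda p: matching_cost(w_analytical, w_measured, p))

# Toy example: pairwise dissimilarities among three candidate locations...
wa = [[0, 2, 5],
      [2, 0, 3],
      [5, 3, 0]]
# ...and among three deployed sensors; here the measured graph is the
# analytical one relabeled by (sensor 0 -> loc 1, 1 -> loc 2, 2 -> loc 0).
wm = [[0, 3, 2],
      [3, 0, 5],
      [2, 5, 0]]
print(best_mapping(wa, wm))  # (1, 2, 0)
```

In the multimodal approach the edge weights would combine thermal and pairwise RSS dissimilarities, which is what makes the matching robust to multi-path effects.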
|