41

Power Provisioning for Diverse Datacenter Workloads

Li, Jing 26 September 2011 (has links)
No description available.
42

Improving TCP Data Transportation for Internet of Things

Khan, Jamal Ahmad 31 August 2018 (has links)
Internet of Things (IoT) is the idea that every device around us is connected, and that these devices continually collect and communicate data for large-scale analysis in order to enable better end-user experience, resource utilization, and device performance. Data is therefore central to the concept of IoT, and the amount being collected is growing at an unprecedented rate. Current networking systems and hardware are not fully equipped to handle an influx of data at this scale, which is a serious problem because it can lead to erroneous interpretation of the data, resulting in low resource utilization and a bad end-user experience, defeating the purpose of IoT. This thesis aims at improving data transportation for IoT. In IoT systems, devices are connected to one or more cloud services over the internet via an access link. The cloud processes the data sent by the devices and sends back appropriate instructions. Hence, the performance of the two ends of the network, i.e., the access networks and the datacenter network, directly impacts the performance of IoT. The first portion of our research targets improvement of the access networks through better access link (router) design. Among the important design aspects of routers is the size of their output buffer queue. Selecting an appropriate size for this buffer is crucial because it impacts two key metrics of an IoT system: 1) access link utilization and 2) latency. We have developed a probabilistic model to calculate the size of the output buffer that ensures high link utilization and low latency for packets, eliminating limiting assumptions of prior art that do not hold true for IoT. Our results show that for TCP-only traffic, the buffer size calculated by state-of-the-art schemes results in at least 60% higher queuing delay compared to our scheme, while achieving almost similar access link utilization, loss rate, and goodput. For UDP-only traffic, our scheme achieves at least 91% link utilization with very low queuing delays and an aggregate goodput that is approximately 90% of link capacity. Finally, for mixed-traffic scenarios, our scheme achieves higher link utilization than in the TCP-only and UDP-only scenarios, as well as low delays, low loss rates, and an aggregate goodput that is approximately 94% of link capacity. The second portion of the thesis focuses on datacenter networks, where the applications that control IoT devices reside. The performance of these applications is affected by the choice of TCP variant used for data communication between virtual machines (VMs). However, cloud users have little to no knowledge about the network between their VMs and hence lack a systematic method to select a TCP variant. We have focused on characterizing TCP Cubic, Reno, Vegas, and DCTCP from the perspective of cloud tenants while treating the network as a black box. We have conducted experiments at the transport layer and the application layer. Our transport-layer experiments show that TCP Vegas outperforms the other variants in terms of throughput, RTT, and stability. Application-layer experiments show that Vegas has the worst response time while all other variants perform similarly. The results also show that different inter-request delay distributions have no effect on throughput, RTT, or response time. / Master of Science / Internet of Things (IoT) is the idea that every electronic device around us, like watches, thermostats, and even refrigerators, is connected to one another, and that these devices continually collect and communicate data. This data is analyzed at a large scale in order to enable better user experience and improve the utilization and performance of the devices. Data is therefore central to the concept of IoT, and because of the unprecedented increase in the number of connected devices, the amount being collected is growing at an unprecedented rate. Current computer networks, over which the data is transported, are not fully equipped to handle an influx of data at this scale. This is a serious problem because it can lead to erroneous analysis of the data, resulting in low device utilization and a bad user experience, defeating the purpose of IoT. This thesis aims at improving data transportation for IoT by improving different components of computer networks. In IoT systems, devices are connected to cloud computing services over the internet through a router, which acts as a gateway to send data to and receive data from the cloud services. The cloud services act as the brain of IoT, i.e., they process the data sent by the devices and send back appropriate instructions for the devices to perform. Hence, the performance of the two ends of the network, i.e., routers in the access networks and cloud services in the datacenter network, directly impacts the performance of IoT. The first portion of our research targets the design of routers. Among the important design aspects of routers is the size of their output buffer queue, which holds the data packets to be sent out. We have developed a novel probabilistic model to calculate the size of the output buffer that ensures that link utilization stays high and the latency of the IoT devices stays low, ensuring good performance. Results show that our scheme outperforms state-of-the-art schemes for TCP-only traffic and shows very favorable results for UDP-only and mixed-traffic scenarios. The second portion of the thesis focuses on improving application service performance in datacenter networks. Applications that control IoT devices reside in the cloud, and their performance is directly affected by the protocol chosen to send data between different machines. However, cloud users have almost no knowledge about the configuration of the network between the machines allotted to them in the cloud. Hence, they lack a systematic method to select a protocol variant that is suitable for their application. We have focused on characterizing different protocols, TCP Cubic, Reno, Vegas, and DCTCP, from the perspective of cloud tenants while treating the network as a black box (unknown). We provide in-depth analysis and insights into their throughput and latency behaviors, which should help cloud tenants make a more informed choice of TCP congestion control.
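The prior-art buffer-sizing rules that such a scheme is typically compared against are the bandwidth-delay product and the "small buffers" rule of Appenzeller et al. A minimal sketch of those two baselines (not the thesis's probabilistic model, whose details are not given in the abstract):

```python
import math

def bdp_buffer_bytes(link_bps, rtt_s):
    """Classic rule of thumb: buffer = bandwidth-delay product (in bytes)."""
    return int(link_bps * rtt_s / 8)

def small_buffer_bytes(link_bps, rtt_s, n_flows):
    """Appenzeller et al.: BDP / sqrt(n) suffices for n long-lived TCP flows."""
    return int(link_bps * rtt_s / 8 / math.sqrt(n_flows))

# Illustrative numbers: a 100 Mbit/s access link, 100 ms RTT, 50 concurrent flows
print(bdp_buffer_bytes(100e6, 0.1))        # 1250000 bytes
print(small_buffer_bytes(100e6, 0.1, 50))  # considerably smaller
```

The gap between the two outputs is exactly the queuing-delay trade-off the abstract discusses: a larger buffer protects utilization but inflates latency for the packets waiting in it.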
43

Indicadores de desempenho e dimensionamento de recursos humanos em um centro de operações de redes / Performance metrics and sizing plan of Human Resources in the Network Operational Center

Oliveira, Andrey Guedes 14 February 2008 (has links)
This work aims to measure operational indicators for service requests and problem support handled by a Network Operations Center (NOC), to establish attendance metrics aimed at contract maintenance and service improvement, and to recommend the use of computational simulation in the operational planning of IT service providers. Queueing theory, eTOM (enhanced Telecom Operations Map), and ITIL (Information Technology Infrastructure Library) were used to analyze the behavior of the NOC. Real historical data were used from the NOC of a datacenter belonging to a telecommunications service provider. The analysis showed that queueing theory and the simulations require a maximum service value parameter when applied to the historical data of an operational team. Simulation with this parameter made it possible to map the capacity of a datacenter team of eight analysts within acceptable values for fulfilling Service Level Agreements: 92% average precision for problem resolution, 85% for equipment alarm tickets, and 89% utilization in service requests. Further simulations projected the behavior of the team in two new scenarios: ten and twelve analysts. Comparing real and simulated data with attendance projections enables an operational plan adjusted to new business models based on convergent networks, together with the main attendance indicators and information relevant to business decision-making.
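The standard queueing-theory tool for sizing a service team such as this NOC is the Erlang-C formula for an M/M/c queue. A hedged sketch with invented arrival and service rates (the thesis's actual data and model parameters are not given in the abstract):

```python
import math

def erlang_c(servers, offered_load):
    """Probability that an arriving ticket must wait (M/M/c queue).
    offered_load = arrival_rate / service_rate; must be < servers for stability."""
    a, c = offered_load, servers
    top = (a ** c / math.factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom

def service_level(servers, arrival_rate, service_rate, target_wait):
    """Fraction of tickets whose wait is below target_wait time units."""
    pw = erlang_c(servers, arrival_rate / service_rate)
    return 1 - pw * math.exp(-(servers * service_rate - arrival_rate) * target_wait)

# Hypothetical NOC: 20 tickets/hour, each analyst resolves 3/hour,
# target: begin handling within 15 minutes (0.25 h). Scenarios: 8, 10, 12 analysts.
for n in (8, 10, 12):
    print(n, round(service_level(n, 20.0, 3.0, 0.25), 3))
```

Running the three staffing scenarios side by side mirrors the thesis's comparison of eight-, ten-, and twelve-analyst teams: the service level rises with each added analyst, and the marginal gain shrinks.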
44

Molntjänsternas miljöpåverkan: vad säger media och vad säger forskningen? / The environmental impact of cloud services: what do the media say and what does the research say?

Charalambous, Elena, Widell, Louise January 2019 (has links)
Human activities contribute to the climate change taking place today. At the same time, the cloud keeps growing, so it is important that cloud services are as sustainable as possible. The media's portrayal of a topic can shape public opinion about it, which in turn shapes the market concerned. This thesis therefore examines the environmental impact of cloud services and how the media represent that impact, in order to compare the two and determine whether the media present the environmental impact of cloud services in line with the research. A document analysis of scientific articles and media articles was carried out with the help of sentiment analysis. The results show that the media, like the research, talk mostly about emissions and energy consumption, but in-depth detail is missing. The media also speak positively about cloud services, claiming that their use reduces emissions; the research does not conclude that cloud services in themselves reduce emissions. According to the research, cloud services and data centers are ecologically unsustainable today. Energy efficiency improvements are needed in cloud services to reduce the ecological footprint of data centers, but they can lead to increased demand, which leads to the construction of more data centers and long-term lock-in to "dirty" energy sources. On the other hand, cloud services can assist other industries' environmental work. With the help of agenda-setting theory and "IT's impact on the greenhouse effect", the conclusion is drawn that, with regard to climate impact, the media's attitude toward cloud services is as positive as it is negative. However, this does not fully represent what the research says about the topic. It is important that the media provide correct information, since they influence public opinion and perception of the environmental impact of cloud services, so that individuals can form a well-founded opinion. There is much room for improvement in this area.
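The sentiment analysis the thesis applies to media articles can be illustrated with the simplest lexicon-based approach: sum per-word polarity weights over a document. The mini-lexicon and article below are invented for illustration; the thesis's actual method and corpus are not specified in the abstract.

```python
# Hypothetical mini-lexicon; a real study would use a curated one (e.g. AFINN).
LEXICON = {
    "sustainable": 2, "efficient": 2, "renewable": 3, "green": 1,
    "emissions": -1, "unsustainable": -3, "dirty": -2, "wasteful": -2,
}

def sentiment_score(text):
    """Sum lexicon weights over the tokens of a document."""
    tokens = text.lower().split()
    return sum(LEXICON.get(t.strip('.,!?"'), 0) for t in tokens)

def polarity(score):
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

article = ("Cloud providers promise green, efficient and sustainable datacenters, "
           "but critics point to dirty energy and rising emissions.")
s = sentiment_score(article)
print(s, polarity(s))  # 2 positive
```

Averaging such scores separately over media articles and scientific articles gives the kind of positive-versus-negative comparison the thesis draws its conclusion from.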
45

BUILDING FAST, SCALABLE, LOW-COST, AND SAFE RDMA SYSTEMS IN DATACENTERS

Shin-yeh Tsai (7027667) 16 October 2019 (has links)
Remote Direct Memory Access, or RDMA, is a technology that allows one computer server to directly access the memory of another server without involving its CPU. Compared with traditional network technologies, RDMA offers several benefits, including low latency, high throughput, and low CPU utilization. These features are especially attractive to datacenters, and because of this, datacenters have started to adopt RDMA at production scale in recent years.

However, RDMA was designed for confined, single-tenant, High-Performance Computing (HPC) environments. Many of its design choices do not fit datacenters well, and it cannot be readily used by datacenter applications. To use RDMA, current datacenter applications have to build customized software stacks and fine-tune their performance. In addition, RDMA offers limited scalability and does not have good support for resource sharing or protection across different applications.

This dissertation sets out to seek solutions that solve the issues of RDMA in a systematic way and make it more suitable for a wide range of datacenter applications.

Our first task is to make RDMA more scalable, easier to use, and better at supporting safe resource sharing in datacenters. For this purpose, we propose to add an indirection layer on top of native RDMA that virtualizes its low-level abstraction into a high-level one. This indirection layer safely manages RDMA resources for different datacenter applications and also provides a means for better scalability.

After making RDMA more suitable for datacenter environments, our next task is to build applications that can exploit all the benefits of (our improved) RDMA. We designed a set of systems that store data in remote persistent memory and let client machines access these data through purely one-sided RDMA communication. These systems lower monetary and energy costs compared to traditional datacenter data stores (because no processor is needed at the remote persistent memory), while achieving good performance and reliability.

Our final task focuses on a completely different and so far largely overlooked aspect: the security implications of RDMA. We discovered several key vulnerabilities in the one-sided communication pattern and in RDMA hardware. We exploited one of them to create a novel set of remote side-channel attacks, which we are able to launch on a widely used RDMA system with real RDMA hardware.

This dissertation is one of the initial efforts to make RDMA more suitable for datacenter environments in terms of scalability, usability, cost, and security. We hope that the systems we built, as well as the lessons we learned, can be helpful to future networking and systems researchers and practitioners.
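The indirection idea described above can be sketched in a few lines: applications address remote data through high-level keys, and the layer, not the application, holds the raw (node, offset, length) mappings, so it can enforce protection and remap regions freely. All names below are invented for illustration; this is not the dissertation's actual API, and plain dictionaries stand in for real RDMA-registered memory.

```python
# Hypothetical sketch of an indirection layer over one-sided RDMA access.
class RdmaIndirection:
    def __init__(self):
        self._table = {}    # key -> (node, offset, length)
        self._memory = {}   # (node, offset) -> bytes, stand-in for remote memory

    def register(self, key, node, offset, data):
        """Expose a remote region under a high-level key instead of a raw address."""
        self._table[key] = (node, offset, len(data))
        self._memory[(node, offset)] = data

    def read(self, key):
        """One-sided read by key: the caller never sees raw addresses, so the
        layer can check permissions here and migrate regions transparently."""
        node, offset, length = self._table[key]
        return self._memory[(node, offset)][:length]

layer = RdmaIndirection()
layer.register("user:42", node=3, offset=0x1000, data=b"profile-bytes")
print(layer.read("user:42"))  # b'profile-bytes'
```

The point of the extra lookup is exactly the trade-off the abstract names: a small indirection cost buys safe sharing and scalability that raw RDMA verbs do not provide.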
46

Energy-efficient and performance-aware virtual machine management for cloud data centers

Takouna, Ibrahim January 2014 (has links)
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high temperatures inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of the memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate the proposed techniques, we use simulation and real workload traces of web and HPC applications, and we compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following. A proactive resource provisioning technique based on robust optimization that increases the hosts' availability for hosting new VMs while minimizing idle energy consumption; additionally, this technique mitigates undesirable changes in the power state of the hosts, enhancing host reliability by avoiding failures during power state changes, and it exploits a range-based prediction algorithm to implement robust optimization under uncertain demand. An adaptive range-based prediction for workloads with high short-term fluctuations; the range prediction is implemented in two variants, standard deviation and median absolute deviation, and the range is adjusted based on an adaptive confidence window to cope with workload fluctuations. A robust VM consolidation for efficient energy and performance management that achieves an equilibrium in the energy-performance trade-off; our technique reduces the number of VM migrations compared to recently proposed techniques, which also reduces the energy consumed by the network infrastructure, and it reduces SLA violations and the number of power state changes. A generic model of a data center network to simulate communication delay and its impact on VM performance as well as network energy consumption, together with a generic model of a server's memory bus, including latency and energy consumption models for different memory frequencies, which allows simulating memory delay and its influence on VM performance and memory energy consumption. A communication-aware and energy-efficient consolidation for parallel applications that enables dynamic discovery of communication patterns and reschedules VMs via migration based on the discovered patterns; a novel dynamic pattern discovery technique is implemented, based on signal processing of the VMs' network utilization rather than information from the hosts' virtual switches or initiation by the VMs, and the results show that our approach reduces the network's average utilization, achieves energy savings by reducing the number of active switches, and provides better VM performance compared to CPU-based placement. A memory-aware VM consolidation for independent VMs that exploits the diversity of the VMs' memory access to balance memory-bus utilization across hosts; the proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their memory-bus utilization using VM migration to improve the performance of the overall system, and Dynamic Voltage and Frequency Scaling (DVFS) of the memory is combined with MLB to achieve better energy savings.
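The two variants of the range-based prediction named above (standard deviation versus median absolute deviation) can be sketched directly. The data and the width factor `k` below are invented; the dissertation's adaptive confidence window is omitted for brevity.

```python
import statistics

def predict_range(history, window, k=2.0, robust=False):
    """Range-based forecast: the next value is expected in [center - k*spread,
    center + k*spread]. robust=False uses mean/standard deviation;
    robust=True uses median/median-absolute-deviation (MAD)."""
    recent = history[-window:]
    if robust:
        center = statistics.median(recent)
        spread = statistics.median(abs(x - center) for x in recent)
    else:
        center = statistics.fmean(recent)
        spread = statistics.stdev(recent)
    return center - k * spread, center + k * spread

cpu_demand = [30, 32, 31, 29, 90, 30, 31, 33, 30, 32]   # one transient spike
lo, hi = predict_range(cpu_demand, window=10, robust=False)
lo_r, hi_r = predict_range(cpu_demand, window=10, robust=True)
print(round(lo, 1), round(hi, 1))      # std-based range, inflated by the spike
print(round(lo_r, 1), round(hi_r, 1))  # MAD-based range, robust to the spike
```

The comparison shows why the MAD variant exists: a single workload spike blows up the standard-deviation range, causing over-provisioning, while the MAD range stays tight.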
47

Conception et optimisation d'Alimentations Sans Interruption / Design and optimization of Uninterruptible Power Supplies

Ibrahim, Mahmoud 13 July 2016 (has links)
The design of Uninterruptible Power Supplies (UPS) has been successively improved in recent years to achieve efficiency levels of around 95% while minimizing footprint. The massive use of power electronics in these systems has led designers to focus their efforts on increasing both efficiency and power density. The constant developments in power electronics offer the designer many options, among them multi-level and/or interleaved power topologies to reduce the size of passive components, new semiconductor technologies with the introduction of wide-bandgap devices, and advances in the materials used in passive components. The choice among these options is a compromise made to achieve the predefined objectives, particularly when other constraints appear that limit the space of possible solutions, including thermal aspects, technological limitations, and EMI constraints. This work proposes a multi-objective optimization methodology for the design of the complete converter with all of its constraints. It provides a quick tool to compare the different possibilities for an optimal design and to quantify the improvement each solution brings to the converter. To this end, the different topological and technological choices were treated by developing multi-physics models that accept discrete input variables, so that the optimized converters naturally meet industrial requirements framed by the catalogs of specific suppliers. We first established the different energy constraints imposed on the UPS by its environment. Solutions suited to its design were identified through a state-of-the-art survey of research in power electronics. Generic models of the power structures, as well as discrete multi-physics models of the components, were then developed on the basis of analytical approaches that ensure a good compromise between accuracy and speed of calculation. Finally, a multi-objective, multi-constraint optimization methodology was applied to the set of solutions to quantify the performance achieved by each of them. Experimental work was essential to validate the models and the optimal solutions. Based on the optimization results, a 4.2 kW/L PFC converter was built and its performance was validated.
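At the heart of such a multi-objective comparison is Pareto filtering: a candidate design survives only if no other candidate beats it on every objective at once. A minimal sketch over invented candidates, maximizing efficiency and power density (the thesis's real models evaluate many more objectives and constraints):

```python
def pareto_front(designs):
    """Keep designs not dominated on (eff, density): a design is dominated if
    another is at least as good on both objectives and strictly better on one."""
    front = []
    for d in designs:
        dominated = any(
            o["eff"] >= d["eff"] and o["density"] >= d["density"]
            and (o["eff"] > d["eff"] or o["density"] > d["density"])
            for o in designs
        )
        if not dominated:
            front.append(d)
    return front

# Hypothetical candidate converter designs (efficiency, kW/L)
candidates = [
    {"name": "3-level Si",      "eff": 0.965, "density": 3.0},
    {"name": "interleaved GaN", "eff": 0.960, "density": 4.2},
    {"name": "2-level Si",      "eff": 0.945, "density": 2.1},  # dominated
]
for d in pareto_front(candidates):
    print(d["name"])
```

The surviving designs form the trade-off curve the designer chooses from: one candidate wins on efficiency, another on power density, and the dominated option is discarded outright.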
48

Design of Modularized Data Center with a Wooden Construction / Design av modulariserade datacenter med en träkonstruktion

Gille, Marika January 2017 (has links)
The purpose of this thesis is to investigate the possibility of building a modular data center in wood. The goal is to investigate how to build data centers using building-system modules, making it easier to build more flexible data centers and to expand the business later on. Investigations were conducted to identify the advantages and disadvantages of using wood in a modularized data center structure, including an analysis of the effect of moisture on the material and of whether wood offers any advantages beyond environmental benefits as a building material. A literature study was conducted to examine what research already exists and how those studies apply to this thesis. Although the ICT sector is a rapidly growing industry, little research has been published on how to build a data center: most published information concerns electricity and cooling, not the dimensions of the building or how materials are affected by the special climate in a data center. To complement the limited research, interviews were conducted and site visits were made. Interviews were conducted with Hydro66, RISE SICS North, Sunet and Swedish Modules, whilst site visits were made at Hydro66, RISE SICS North, Sunet and Facebook. As a result of these studies, limitations were identified with regard to maximum and minimum measurements for the building system and service spaces in a data center. These limitations were used as input when designing a construction proposal using the stated building systems and a design proposal for a data center. During the study, access was granted to measurements of temperature and humidity for the incoming and outgoing air of the Hydro66 data center. These measurements were analyzed together with the literature on HVAC systems and the effects of climate on wood, for example with regard to strength and stability. This analysis showed that more data needs to be collected during the winter, and that further analysis is required before conclusions can be drawn as to whether the indoor climate of a data center affects the wooden structure. A design proposal for a data center was produced based on the information gathered in the literature and empirical studies, and was designed to show how that information could be implemented. The results have increased the understanding of how to build data center buildings in wood and of how this type of building could be made more flexible toward future changes through modularization.
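One simple check that temperature and humidity measurements like the Hydro66 data enable is a condensation-risk screen for the wooden structure, using the Magnus dew-point approximation. This is an illustrative sketch with invented numbers, not the thesis's actual analysis:

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Magnus approximation of the dew point (valid roughly 0-60 degrees C)."""
    a, b = 17.27, 237.7
    gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return b * gamma / (a - gamma)

def condensation_risk(air_temp_c, rel_humidity_pct, surface_temp_c):
    """Condensation occurs when a surface is colder than the air's dew point."""
    return surface_temp_c <= dew_point_c(air_temp_c, rel_humidity_pct)

# Hypothetical case: warm exhaust air (35 C, 40% RH) meeting a cold
# wooden surface in winter
print(round(dew_point_c(35.0, 40.0), 1))
print(condensation_risk(35.0, 40.0, surface_temp_c=5.0))
```

Applied over a winter's worth of sensor data, such a screen would flag exactly the periods where moisture could accumulate in the wood, which is why the thesis calls for more winter measurements before drawing conclusions.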
49

Survey of VMware NSX Virtualized Network Platform : Utvärdering av VMware NSX Virtualized Network Platform

Gran, Mikael, Karlsson, Claes January 2017 (has links)
Atea Eskilstuna needed a platform that simplifies and reduces the number of configurations required when implementing customer environments. The purpose of this thesis was to survey the VMware NSX networking platform and compare it to traditional networking solutions. Virtualization is an important part of data center operations today: it optimizes the use of both hardware resources and costs. Virtualization has primarily focused on servers and clients, while the evolution of the network has been overlooked, and as a result some problems have arisen in traditional data centers regarding traffic flows, security and management. Traditional data centers have previously been optimized for traffic flowing into or out of the data center, which has led to firewalls and security policies being placed at the data center edge. In modern data centers, however, traffic between devices inside the data center has increased, and these internal flows also need to be secured. Securing them can be accomplished through internal policies on the network devices, but implementing such policies does not scale, since the number of configuration points grows with the network. These problems can be handled with VMware NSX, which virtualizes network devices and centralizes administration. NSX provides a distributed firewall whose policies can be applied, from a central management platform, directly to groups of virtual machines and virtual routers. This approach increases security inside the data center and decreases implementation time compared to traditional data centers.
This thesis focuses on how NSX works in contrast to physical network devices and on how it handles issues such as hairpinning, security and automation. For this purpose, a lab environment was built in Ravello's cloud service with several virtual machines, and a literature study was carried out. The lab environment was used to set up different customers using virtual network components and virtual machines, and served as a reference for how NSX, its functions and its components are implemented. The literature study focuses on what is possible in NSX and on the pros and cons of NSX compared to traditional data center solutions. The results show that the only drawback identified with NSX was its license costs.
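The scaling argument above, per-device policies versus a single central configuration point, can be made concrete with a back-of-the-envelope count. The numbers below are illustrative only, not measurements from the thesis:

```python
def config_points_traditional(devices: int, policies: int) -> int:
    # Every internal security policy must be configured, and kept in
    # sync, on every network device that enforces it.
    return devices * policies

def config_points_centralized(policies: int) -> int:
    # A central manager (the NSX model) is the single configuration
    # point: each policy is defined once and distributed to the hosts.
    return policies

POLICIES = 30  # hypothetical number of internal security policies
for devices in (5, 20, 100):
    trad = config_points_traditional(devices, POLICIES)
    cent = config_points_centralized(POLICIES)
    print(f"{devices:>3} devices: traditional={trad:>5}, centralized={cent}")
```

The traditional count grows linearly with the number of devices while the centralized count stays constant, which is the scalability gap the thesis attributes to the distributed firewall.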
50

HOTOM: a SDN based network virtualization for datacenters

SILVA, Lucas do Rego Barros Brasilino da 27 July 2015 (has links)
Although datacenter server hosts have embraced virtualization, the network core itself has not. A virtual network (VN) is an instance (slice) of network resources, such as links and nodes, built on top of a physical network. Virtual networking is of paramount importance for multi-tenant datacenters, since it makes management easier. However, VLANs continue to be used today, driving virtualized datacenters into scalability constraints: a VLAN isolates layer-2 (L2) address spaces and indexes them with a 12-bit value, which imposes a hard limit of only 4,096 VNs. Modern cloud-computing datacenters are required to deliver IaaS and must go beyond these scalability restrictions. Even modern tunneling schemes, such as STT, come at the price of overhead, because frames are encapsulated by higher-layer protocols (UDP, IP). In addition, current virtualized datacenters demand specialized layer-3 switching hardware, increasing the datacenter's CAPEX, and require substantial computing resources to precompute virtual-link state. Recently, Software-Defined Networking (SDN) has appeared as a potential solution for fulfilling those needs by enabling network programmability. SDN decouples the network control from the data plane, placing the former in a central controller that exposes an API for developers and vendors.
As a consequence, controllers have a unified view of the network and are able to execute custom network applications, reaching unprecedented flexibility and manageability. OpenFlow is currently the most prominent SDN technology. Even with SDN, many questions remain unanswered, for instance: how can a network be made scalable and dynamic while preserving legacy core devices? If datacenter operators can preserve their previous investments, they will adopt SDN more readily. This dissertation presents HotOM (HotOatMeal), a new virtualized datacenter network approach that, by leveraging SDN, overcomes the traditional scalability constraints and enables network programmability while still using legacy network devices, thereby preserving CAPEX. The logic of HotOM was implemented in the Python programming language as a component of the POX OpenFlow controller.
HotOM was deployed and evaluated on a real testbed. Analyses covered throughput, RTT, CPU time usage and scalability, and the results were compared against a plain VLAN Ethernet network. In addition, isolation between tenants was validated and the protocol overhead was studied. HotOM was confirmed to scale up to 16.8M tenants while incurring 47%, 44% and 41% less overhead than STT, VXLAN and NVGRE, respectively. Finally, a qualitative analysis comparing HotOM with state-of-the-art datacenter virtual network (DCVN) proposals was carried out, showing that HotOM consolidates advantages across many functional features: it fulfills almost all evaluated characteristics, more than any other technology presented.
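The scalability figures in the abstract follow directly from identifier widths: a 12-bit VLAN ID yields 4,096 networks, while a 24-bit tenant identifier (the width also used by VXLAN and NVGRE, and consistent with the 16.8M tenants reported for HotOM) yields about 16.8 million. A quick sketch; the per-scheme header sizes shown are the commonly cited IPv4 figures, not measurements from the dissertation:

```python
def vn_capacity(id_bits: int) -> int:
    """Number of distinct virtual networks addressable with an id_bits-wide tag."""
    return 2 ** id_bits

print(vn_capacity(12))  # 12-bit VLAN ID  -> 4096
print(vn_capacity(24))  # 24-bit VN ID    -> 16777216 (~16.8M)

# Commonly cited per-packet encapsulation overhead on the wire (bytes):
# outer Ethernet + outer IPv4 + transport header + scheme header.
overhead = {
    "VXLAN": 14 + 20 + 8 + 8,  # outer Eth + IPv4 + UDP + VXLAN header = 50
    "NVGRE": 14 + 20 + 8,      # outer Eth + IPv4 + GRE/NVGRE header   = 42
}
for scheme, size in sorted(overhead.items()):
    print(f"{scheme}: {size} bytes per packet")
```

This is the trade-off the overhead study quantifies: tunneling schemes buy a 24-bit VN space by wrapping every frame in extra headers, whereas HotOM's claimed advantage is reaching the same tenant scale with a lighter encapsulation.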
