61

OneSwitch Data Center Architecture

Sehery, Wile Ali 13 April 2018 (has links)
In the last two decades, data center networks have evolved to become a key element in improving levels of productivity and competitiveness for different types of organizations. Traditionally, data center networks have been constructed with three layers of switches: edge, aggregation, and core. Although this Three-Tier architecture has worked well in the past, it poses a number of challenges for current and future data centers. Data centers today have evolved to support dynamic resources, such as virtual machines and storage volumes, from any physical location within the data center. This has led to highly volatile and unpredictable traffic patterns. In addition, the emergence of "Big Data" applications that exchange large volumes of information has created large persistent flows that need to coexist with other traffic flows. The Three-Tier architecture and current routing schemes are no longer sufficient for achieving high bandwidth utilization. Data center networks should be built in a way that adequately supports virtualization and cloud computing technologies, and should provide services such as simplified provisioning, workload mobility, dynamic routing and load balancing, and equidistant bandwidth and latency. As data center networks have evolved, the Three-Tier architecture has proven to be a challenge not only in terms of complexity and cost, but also in that it falls short of supporting many new data center applications. In this work we propose OneSwitch: a switch architecture for the data center. OneSwitch is backward compatible with current Ethernet standards and uses an OpenFlow central controller, a Location Database, a DHCP Server, and a Routing Service to build an Ethernet fabric that appears as one switch to end devices. This allows the data center to use switches in scale-out topologies to support hosts in a plug-and-play manner, as well as to provide much-needed services such as dynamic load balancing, intelligent routing, seamless mobility, and equidistant bandwidth and latency. / Ph.D.
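
The abstract describes the control logic only at a high level. The sketch below, a hypothetical illustration rather than the dissertation's actual code, shows how a OneSwitch-style controller could combine a Location Database with a Routing Service when handling an unknown flow; AttachmentPoint, LocationDatabase, and routing.shortest_path are all assumed names.

```python
# Hypothetical packet-in logic for a OneSwitch-style controller. The
# Location Database maps each host MAC to its attachment point; the
# Routing Service turns two attachment points into per-hop flow rules.
# All names here are illustrative, not taken from the dissertation.

from dataclasses import dataclass

@dataclass(frozen=True)
class AttachmentPoint:
    switch_id: str   # edge switch the host is plugged into
    port: int        # port on that switch

class LocationDatabase:
    """MAC address -> current attachment point, learned from DHCP/ARP."""
    def __init__(self):
        self._loc = {}

    def learn(self, mac, ap):
        self._loc[mac] = ap          # a host move simply overwrites the entry

    def lookup(self, mac):
        return self._loc.get(mac)

def handle_packet_in(db, routing, src_mac, dst_mac, ingress):
    """On an unknown flow: learn the source, then route to the destination."""
    db.learn(src_mac, ingress)
    dst = db.lookup(dst_mac)
    if dst is None:
        return ["flood_to_edge_ports"]       # destination not yet seen
    # routing.shortest_path is assumed to return (switch, out_port) hops;
    # pushing one flow rule per hop makes the whole fabric behave like a
    # single switch between the two hosts.
    return routing.shortest_path(ingress, dst)
```
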
62

Avaliação de desempenho de plataformas de virtualização de redes. / Performance evaluation of network virtualization platforms.

Leopoldo Alexandre Freitas Mauricio 27 August 2013 (has links)
The aim of this work is to evaluate the performance of virtual routing environments built on x86 machines and network devices existing on the Internet today. Among the most widely used virtualization platforms, we want to identify which best meets the requirements of a virtual routing environment, allowing the core of production networks to be programmed. The Xen and KVM virtualization platforms were installed on modern, large-capacity x86 servers and compared for efficiency, flexibility, and isolation between networks, which are the requirements for good performance of a virtual network. The test results show that, despite being a full virtualization platform, KVM has better performance than Xen in forwarding and routing packets when VIRTIO is used. Furthermore, only Xen had isolation problems between virtual networks. We also evaluate the effect of the NUMA architecture, very common in modern x86 servers, on the performance of VMs when large amounts of memory and many processor cores are allocated to them. The results show that network Input/Output (I/O) performance can be compromised if the amounts of virtual memory and CPU allocated to the VM do not respect the size of the hardware's NUMA nodes. Finally, we study OpenFlow. It allows networks to be sliced across routers, switches, and x86 machines so that virtual routing environments with different forwarding logic can be created. We found that, when installed with Xen and KVM, it enables the migration of virtual networks among different physical nodes without interruptions in the data streams, and makes it possible to increase the performance of packet forwarding in the created virtual networks. Thus, it was possible to program the core of the network to implement alternatives to the IP protocol.
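
The NUMA finding suggests a concrete sizing check: a VM's vCPUs and memory should fit inside a single NUMA node. Below is a minimal sketch of such a check, assuming the standard Linux sysfs layout under /sys/devices/system/node; in practice the pinning itself would be done through the hypervisor (for example, libvirt's numatune element), which this sketch does not attempt.

```python
# Minimal NUMA-fit check: does some single node have enough CPUs and
# memory to host the whole VM? Assumes the Linux sysfs layout.

import glob
import re

def _count_cpulist(text):
    """Count CPUs in a sysfs cpulist string such as '0-7,16-23'."""
    total = 0
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            total += int(hi) - int(lo) + 1
        elif part:
            total += 1
    return total

def numa_nodes():
    """Yield (node_id, cpu_count, mem_total_kib) for each NUMA node."""
    for path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        node_id = int(re.search(r"node(\d+)$", path).group(1))
        with open(f"{path}/cpulist") as f:
            cpus = _count_cpulist(f.read())
        with open(f"{path}/meminfo") as f:       # "Node 0 MemTotal: ... kB"
            mem = int(re.search(r"MemTotal:\s+(\d+) kB", f.read()).group(1))
        yield node_id, cpus, mem

def vm_fits_one_node(vcpus, mem_kib):
    """True if a single NUMA node can host the whole VM."""
    return any(vcpus <= c and mem_kib <= m for _, c, m in numa_nodes())
```
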
63

Uma proposta de redirecionamento de fluxos de rede usando OpenFlow para migração de aplicações entre nuvens / A proposal for network flow redirection using OpenFlow for the migration of applications between clouds

Moda, Carlos Spinetti 27 February 2014 (has links)
During the last decade, the advent of large-scale processing and the need for rapid modification of computational structures have increased the popularity of Cloud Computing, particularly the Infrastructure as a Service (IaaS) model. Several companies have invested in infrastructure to become providers of this kind of service, whether for the general public or only to supply their own business needs. This has increased the number of virtualized datacenters across the world and created a growing interest in interoperability between different providers. However, due to the lack of technology standardization and to limitations in current network architectures, this interoperability is still an open issue. Based on this, this research project presents an OpenFlow-based network flow redirection architecture to support service continuity during the migration of applications between different IaaS providers. The tests performed show the applicability of the proposed architecture in a real network environment, controlling only the network edges and without installing any specialized hardware.
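
As an illustration of the edge-only redirection idea, the sketch below builds the pair of OpenFlow-style flow entries an edge switch at the source cloud could use after an application has moved: rewrite packets still addressed to the old IP toward the new site, and rewrite replies back so the client sees a stable address. The match/action field names follow OpenFlow conventions, but the rule structure is illustrative, not the dissertation's actual API.

```python
# Illustrative redirection rules for the source cloud's edge switch
# after an application migrates from old_ip (cloud A) to new_ip (cloud B).
# uplink_port faces cloud B; client_port faces the original clients.

def redirection_rules(old_ip, new_ip, uplink_port, client_port):
    """Return the two flow entries to install at the source edge."""
    inbound = {   # traffic still arriving for the old address
        "match":   {"eth_type": 0x0800, "ipv4_dst": old_ip},
        "actions": [("set_field", "ipv4_dst", new_ip),  # rewrite destination
                    ("output", uplink_port)],           # forward toward cloud B
        "priority": 100,
    }
    outbound = {  # replies coming back from the new location
        "match":   {"eth_type": 0x0800, "ipv4_src": new_ip},
        "actions": [("set_field", "ipv4_src", old_ip),  # hide the move
                    ("output", client_port)],
        "priority": 100,
    }
    return [inbound, outbound]
```
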
65

Detecção online de agregações hierárquicas bidimensionais de fluxos em redes definidas por software / Online detection of bidimensional hierarchical heavy hitters in software-defined networks

Cruz, Mário Augusto da 16 December 2014 (has links)
Software-Defined Networking represents a new paradigm that eases network operation, monitoring, and management by decoupling the control plane from the data plane. However, in this new context, some classic solutions in the network monitoring field need to be revisited, as there are new constraints but also new opportunities. In the monitoring context, one commonly used strategy, mainly in high-capacity networks, is tracking the most frequent items, also known as heavy hitters. One approach to monitoring the most frequent items consists of detecting hierarchical heavy hitters, which enables efficient real-time monitoring. In this work, we propose and evaluate a new monitoring solution capable of online detection of hierarchical heavy hitters, using the characteristics of software-defined networks, in particular the OpenFlow protocol. Our proposal combines flexible accounting of flow rules from OpenFlow switches with the inspection of traffic samples by a dedicated device. We evaluated our proposal in simulated and emulated environments, using packet traces both generated artificially and collected from real networks. The results show that our proposal achieves satisfactory accuracy and low convergence time compared to a previous solution for OpenFlow networks, in addition to identifying heavy hitters in two dimensions.
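
To make the detection idea concrete, here is an illustrative one-dimensional version of hierarchical heavy hitter detection: flow byte counts are rolled up the IPv4 prefix tree, and a prefix is reported when its count, minus the counts of heavy hitters already found below it, exceeds a threshold. The dissertation works online and in two dimensions (source and destination); this sketch shows only the core aggregation step.

```python
# Naive offline hierarchical heavy hitter (HHH) detection over source
# IPv4 prefixes. One dimension only; illustrative, not the thesis's code.

import ipaddress
from collections import defaultdict

def hhh_1d(flow_bytes, threshold, min_prefix=8):
    """flow_bytes: {'10.0.1.2': nbytes, ...} keyed by source address."""
    counts = defaultdict(int)
    for addr, nbytes in flow_bytes.items():
        ip = ipaddress.ip_address(addr)
        for plen in range(32, min_prefix - 1, -1):   # roll up the prefix tree
            net = ipaddress.ip_network(f"{ip}/{plen}", strict=False)
            counts[net] += nbytes

    heavy, discounted = [], defaultdict(int)
    # Visit longer prefixes first so descendants get discounted from ancestors.
    for net in sorted(counts, key=lambda n: -n.prefixlen):
        residual = counts[net] - discounted[net]
        if residual >= threshold:
            heavy.append((net, residual))
            for plen in range(net.prefixlen - 1, min_prefix - 1, -1):
                parent = net.supernet(new_prefix=plen)
                discounted[parent] += residual       # don't count twice above
    return heavy
```
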
66

Multi-operator greedy routing based on open routers / Routeurs ouverts avec routage glouton dans un contexte multi-opérateurs

Venmani, Daniel Philip 26 February 2014 (has links)
Revolutionary mobile technologies, such as 3G high-speed packet access (HSPA+) and LTE, have significantly increased mobile data rates over the radio link. While most of the world looks at this revolution as a blessing to day-to-day life, a little-known fact is that these improvements over the radio access link demand tremendous improvements in bandwidth on the backhaul network. Today's Internet Service Providers (ISPs) and Mobile Network Operators (MNOs) are heavily impacted as a result of this surge in smartphone usage. The operational costs (OPEX) associated with traditional backhaul methods are rising faster than the revenue generated by the new data services. Building a mobile backhaul network is very different from building a commercial data network. A mobile backhaul network requires (i) QoS-based traffic handling with strict requirements on delay and jitter, and (ii) high availability/reliability. While most ISPs and MNOs have promised the advantages of redundancy and resilience to guarantee high availability, there is still the specter of failure in today's networks. The underlying observation is that ISPs and MNOs remain exposed to rapid fluctuations and/or unpredicted breakdowns in traffic; it goes without saying that even the largest operators can be affected. But what if these operators could put in place designs and mechanisms to improve network survivability and avoid such occurrences? What if mobile network operators could come up with low-cost backhaul solutions while ensuring the required availability and reliability in their networks? With this problem statement in hand, this dissertation has the following scope: (i) to provide low-cost backhaul solutions, the motivation being to build networks without over-provisioning and to bring in new resources (link capacity/bandwidth) when unexpected traffic surges or network failures occur, particularly to ensure premium services; and (ii) to provide uninterrupted communications even under network failure conditions, but without redundancy. Here, a slightly greater emphasis is placed on tackling 'last-mile' link failures. The goal of this dissertation is therefore to propose, design, and model novel network architectures that improve network survivability and capacity while eliminating network-wide redundancy, within the context of mobile backhaul networks. Motivated by this, we study the problem of how to share the available resources of a backhaul network among competing operators with whom a Service Level Agreement (SLA) has been concluded. We present a systematic study of our proposed solutions, focusing on a variety of empirical resource-sharing heuristics and optimization frameworks. With this background, our work extends towards a novel fault restoration framework that cost-effectively provides protection and restoration for the operators, giving them a parameterized objective function to choose desired paths based on the traffic patterns of their end customers. We then illustrate the survivability of backhaul networks with a reduced amount of physical redundancy, by effectively managing geographically distributed backhaul network equipment belonging to different MNOs using logically centralized but physically distributed controllers, while meeting strict constraints on network availability and reliability.
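
As a taste of the empirical heuristics mentioned above, the sketch below implements one classic candidate: weighted max-min sharing of a link's capacity among operators, with SLA-derived weights. Both the weights and the function signature are illustrative assumptions, not the thesis's formulation.

```python
# Weighted max-min (water-filling) allocation of a shared backhaul link.
# demands/weights: {operator: value}; weights are assumed positive and
# derived from each operator's SLA. Illustrative only.

def weighted_max_min(capacity, demands, weights):
    """Return {operator: allocated_bandwidth}."""
    alloc = {op: 0.0 for op in demands}
    active = set(demands)
    remaining = float(capacity)
    while active and remaining > 1e-9:
        total_w = sum(weights[op] for op in active)
        share = remaining / total_w              # capacity per unit of weight
        satisfied = set()
        for op in active:
            if alloc[op] + share * weights[op] >= demands[op]:
                remaining -= demands[op] - alloc[op]   # cap at the demand
                alloc[op] = demands[op]
                satisfied.add(op)
        if not satisfied:                        # nobody capped: split the rest
            for op in active:
                alloc[op] += share * weights[op]
            remaining = 0.0
        active -= satisfied
    return alloc
```
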
67

Le chemin vers les architectures futures des services mobiles : du Follow Me Cloud (FMC) au Follow Me edge Cloud (FMeC) / The Path towards Future Mobile Service Architectures : from Follow Me Cloud (FMC) to Follow Me edge Cloud (FMeC)

Aissioui, Abdelkader 22 December 2017 (has links)
This Ph.D. thesis deals with future delivery architectures for mobile cloud-based services, through network infrastructures evolving from Mobile Cloud Computing (MCC) to Mobile Edge Computing (MEC). We mainly focus on the Follow Me Cloud (FMC) concept, a new service delivery strategy for improved user experience and efficient resource utilization: it enables cloud-based services to follow their mobile users as they move across access network technologies, delivering the cloud service via the optimal service point inside the cloud infrastructure. Several contributions are proposed in this thesis and evaluated through both theoretical analysis and simulation. First, we propose an alternative FMC architecture that (i) opens the FMC design to non-3GPP mobile network access technologies, (ii) provides interoperability among different PMIPv6 domains, permitting mobile nodes (MNs) to roam across PMIPv6 domains with seamless IP mobility and service session continuity, and (iii) offers a tunnel-free architecture in MN roaming situations, avoiding the additional overhead associated with tunneling in mobility management. The proposed scheme leverages SDN/OpenFlow technology and the PMIPv6 mobility management protocol, integrating them within a single framework that realizes the FMC vision. Second, to address scalability and resiliency concerns in centralized SDN/OpenFlow control plane architectures, we introduce a new design for an elastic distributed SDN controller, tailored for MCC and notably for FMC management systems. The control plane is distributed over a two-level hierarchical architecture: a first level with a single global SDN controller, and a second level with several local SDN controllers. We present the building blocks of this control plane framework and the computation of the system Key Performance Indicator (KPI), and we set the key design objective of keeping the system KPI value within a predefined threshold window. We show how this goal is achieved by elastically adapting the number and locations of the local SDN controllers, deploying them as Virtual Network Function (VNF) instances in the cloud thanks to NFV technology. Third, we introduce the FMeC concept, combining MEC and FMC architectures to sustain the requirements of 5G automotive systems. We first define the key elements of FMeC, which bring FMC technology to the edge of mobile networks. We then present an automated-driving use case that projects our FMeC solution onto the integration of automotive and telco infrastructures, toward the future 5G automotive vision. Focusing on V2I/N communication types, we introduce our FMeC architecture, based on SDN/OpenFlow technologies and MEC infrastructure entities whose resources are pooled to provide federated edge clouds. Finally, we present our mobility-aware framework for edge-cloud service placement, based on a set of basic algorithms that achieve the ultra-short-latency QoS requirements of automated driving within the 5G network.
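
The elasticity mechanism in the second contribution reduces to a simple control loop: measure the system KPI and scale the set of local controllers to keep it inside the threshold window. The sketch below illustrates that loop; measure_kpi, spawn_local_controller, and retire_local_controller are assumed hooks into the NFV platform, and the window values are placeholders.

```python
# Illustrative elasticity loop for an elastic distributed SDN controller:
# keep the system KPI (e.g., average flow-setup latency) inside a
# predefined threshold window by scaling local controller VNF instances.

import time

KPI_LOW, KPI_HIGH = 5.0, 20.0      # placeholder window, in milliseconds
MIN_LOCAL, MAX_LOCAL = 1, 16       # placeholder bounds on local controllers

def elasticity_loop(measure_kpi, spawn_local_controller,
                    retire_local_controller, n_local=2, period_s=10):
    while True:
        kpi = measure_kpi()                      # global controller's view
        if kpi > KPI_HIGH and n_local < MAX_LOCAL:
            spawn_local_controller()             # overloaded: scale out
            n_local += 1
        elif kpi < KPI_LOW and n_local > MIN_LOCAL:
            retire_local_controller()            # underused: scale in
            n_local -= 1
        time.sleep(period_s)                     # re-evaluate periodically
```
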
68

Mitigation of inter-domain Policy Violations at Internet eXchange Points

Raheem, Muhammad January 2019 (has links)
Economic incentives and the need to efficiently deliver Internet have led to the growth of Internet eXchange Points (IXPs), i.e., the interconnection networks through which a multitude of possibly competing network entities connect to each other with the goal of exchanging traffic. At IXPs, the exchange of traffic between two or more member networks is dictated by the Border Gateway Protocol (BGP), i.e., the inter-domain routing protocol used by network operators to exchange reachability information about IP prefix destinations. There is a common “honest-closed-world” assumption at IXPs that two IXP members exchange data traffic only if they have exchanged the corresponding reachability information via BGP. This state of affairs severely hinders security, as any IXP member can send traffic to another member without having received a route from that member. Filtering traffic according to BGP routes would solve the problem. However, while IXP members can install filters, the number of filtering rules required at a large IXP can easily exceed the capacity of the network devices. In addition, the IXP itself cannot filter this type of traffic, as the BGP routes exchanged between two members are not visible to the IXP. In this thesis, we evaluated the design space between reactive and proactive approaches for guaranteeing consistency between the BGP control plane and the data plane. In a reactive approach, an IXP member operator monitors, collects, and analyzes the incoming traffic to detect whether any illegitimate traffic exists, whereas in a proactive approach an operator configures its network devices to filter any illegitimate traffic without the need to perform any monitoring. We focused on proactive approaches because of the increased security of the IXP network and the inherently simplified network management. We designed and implemented a solution to this problem by leveraging the emerging Software Defined Networking (SDN) paradigm, which enables the programmability of the forwarding tables by separating the control and data planes. Our approach only installs rules in the data plane that allow legitimate traffic to be forwarded, dropping everything else. As hardware switches have high performance but little memory space, we decided to also make use of software switches. A “heavy-hitter” module detects the forwarding rules carrying most of the traffic and installs them into the hardware switch. The remaining forwarding rules are installed into the software switches. We evaluated the prototype in an emulated testbed using the Mininet virtual network environment. We analyzed the security of our system with the help of static verification tests, which confirmed compliance with the security policies. The results reveal that with even just 10% of the rules installed in the hardware switch, the hardware switch directly filters 95% of the traffic volume under non-uniform, Internet-like traffic distribution workloads. We also evaluated the latency and throughput overheads of the system, though the results are limited by the accuracy of the emulated environment. The scalability experiments show that, with 10K forwarding rules, the system takes around 40 seconds to install and update the data plane. This is due to the inherent slowness of the emulated environment and the limitations of the POX controller, which is written in Python.
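
The heavy-hitter split described above can be summarized in a few lines: rank the legitimate forwarding rules by observed traffic volume and give the hottest ones to the hardware switch. The following sketch is an assumed simplification; the rule format and the 10% budget mirror the abstract's experiment, not the thesis's exact mechanism.

```python
# Split BGP-derived forwarding rules between a small hardware TCAM and
# larger software switches, based on observed per-rule traffic volume.

def split_rules(rules_with_volume, hw_capacity_fraction=0.10):
    """rules_with_volume: [(rule, bytes_seen), ...] -> (hw_rules, sw_rules)."""
    ranked = sorted(rules_with_volume, key=lambda rv: rv[1], reverse=True)
    cutoff = max(1, int(len(ranked) * hw_capacity_fraction))
    hw_rules = [rule for rule, _ in ranked[:cutoff]]   # hot rules: hardware
    sw_rules = [rule for rule, _ in ranked[cutoff:]]   # long tail: software
    return hw_rules, sw_rules
```
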
69

Impact of using cloud-based SDN controllers on the network performance

Henriksson, Johannes, Magnusson, Alexander January 2019 (has links)
Software-Defined Networking (SDN) is a network architecture that differs from traditional network planes. SDN has three layers: infrastructure, controller, and application. The goal of SDN is to simplify the management of larger networks by centralizing control in the controller layer instead of in the infrastructure. Given the known advantages of SDN networks and the flexibility of cloud computing, we are interested in whether the combination of SDN and cloud services affects network performance, and in what effect the cloud provider's physical location has on network performance. These points are important as SDN becomes more popular in enterprise networks, and centralizing branch networks into one cloud-based SDN controller seems like a logical next step for SDN. These questions were formulated through a literature study and answered with an experimental method. The experiments consist of two network topologies: a locally hosted SDN controller (baseline) and a cloud-hosted SDN controller. The topologies used Zodiac FX switches and Linux hosts. The following metrics were measured: throughput, latency, jitter, packet loss, and time to add new hosts. The conclusion is that SDN as a cloud service is possible and does not significantly affect network performance. One limitation of this thesis was the hardware, which resulted in large fluctuations in throughput and packet loss.
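
As a rough illustration of the kind of measurement behind the latency comparison, the sketch below times TCP round trips to a controller's OpenFlow port (6653). The host address and sampling parameters are placeholders; the thesis's actual experiments used the Zodiac FX testbed and measured more metrics than this.

```python
# Approximate controller reachability latency by timing TCP connection
# setup to the OpenFlow port. Host is a placeholder (documentation range).

import socket
import statistics
import time

def controller_rtt(host="203.0.113.10", port=6653, samples=20):
    """Return (mean_ms, stdev_ms) of TCP connect times to the controller."""
    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass                                  # connect time ~ one RTT
        rtts.append((time.perf_counter() - t0) * 1000)
        time.sleep(0.1)                           # space the probes out
    return statistics.mean(rtts), statistics.stdev(rtts)
```
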
70

Survivor: estratégias de posicionamento de controladores orientadas à sobrevivência em redes definidas por software / Survivor: enhanced controller placement strategies for improving SDN survivability

Müller, Lucas Fernando January 2014 (has links)
The SDN paradigm simplifies network management by concentrating all control tasks in a single entity, the controller. In this mode of operation, forwarding devices can only operate correctly while connected to a logically centralized controller. Within this context, recent literature has identified fundamental issues, such as device isolation due to disruptions in the network and controller overload, and has proposed controller placement strategies to tackle them. However, current proposals have crucial limitations: (i) device-controller connectivity is modeled using single paths, yet in practice multiple concurrent connections may occur; (ii) peaks in the arrival of new flows are only handled on demand, assuming that the network itself can sustain high request rates; and (iii) failover mechanisms require predefined information which, in turn, is not optimized. This dissertation presents Survivor, a novel controller placement approach for WANs that addresses these challenges. The approach explicitly considers three aspects in the network design process: connectivity, capacity, and recovery. Moreover, these aspects are planned for two distinct states of the network: pre- and post-disruption. In other words, the network is configured optimally both for normal operation and for operation after disruption events. To this end, the approach is divided into two steps. The first defines the positioning of the controller instances, and the second specifies a list of backup controllers for each device on the network. Furthermore, two strategies based on Survivor are developed. The first, implemented with Integer Linear Programming, guarantees an optimal solution at a high computational cost. The second, implemented using heuristics, provides sub-optimal solutions at a much lower computational cost. Comparisons with the state of the art show that the Survivor approach provides significant gains in network survivability (identified by the lowest probability of connectivity loss) and in the converged network state through smarter recovery mechanisms.
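
The first Survivor step maps naturally onto a facility-location ILP. The sketch below, written with the PuLP library, is a hedged reconstruction of the placement core only (choose k controller sites minimizing total device-to-controller path cost); the dissertation's real formulation also encodes multi-path connectivity, capacity, and backup lists, which are omitted here.

```python
# Facility-location-style controller placement with PuLP: open exactly k
# controller sites and assign every device to one open site, minimizing
# the total device-to-controller path cost. Illustrative reconstruction.

from pulp import (LpProblem, LpVariable, LpMinimize, lpSum,
                  LpBinary, PULP_CBC_CMD)

def place_controllers(sites, devices, dist, k):
    """dist[(d, s)]: path cost from device d to candidate site s."""
    prob = LpProblem("controller_placement", LpMinimize)
    open_ = LpVariable.dicts("open", sites, cat=LpBinary)
    assign = LpVariable.dicts(
        "assign", [(d, s) for d in devices for s in sites], cat=LpBinary)

    # Objective: total assignment cost across all devices.
    prob += lpSum(dist[d, s] * assign[d, s] for d in devices for s in sites)
    prob += lpSum(open_[s] for s in sites) == k           # exactly k controllers
    for d in devices:
        prob += lpSum(assign[d, s] for s in sites) == 1   # one controller each
        for s in sites:
            prob += assign[d, s] <= open_[s]              # only to open sites

    prob.solve(PULP_CBC_CMD(msg=False))
    return [s for s in sites if open_[s].value() >= 0.5]  # chosen sites
```
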
