11

Engineering Content-Centric Future Internet Applications

Perkaz, Alain January 2018 (has links)
The Internet as we know it today has evolved continuously since its creation, radically changing the means of communication and the ways in which commerce operates globally. From the World Wide Web to two-way video calls, it has reshaped how people communicate and how societies function. The Internet was first conceived as a network that would enable communication between multiple trusted and known hosts, but it has evolved considerably over time. Owing to the widespread adoption of Internet-connected devices (phones, personal computers, tablets, and so on), the initial device homogeneity has given way to an extremely heterogeneous environment in which many different devices consume and publish resources, also referred to as services. As the number of connected devices and resources increases, it becomes critical to build systems that enable the autonomic publication, consumption, and retrieval of those resources. As the inherent complexity of such systems continues to grow, it is essential to set boundaries on their achievable capabilities. Traditional approaches to network-based computing are no longer sufficient, and new reference approaches are needed. In this context the term Future Internet (FI) emerges: a worldwide execution environment connecting large sets of heterogeneous and autonomic devices and resources. In such environments, systems leverage service annotations to fulfil emerging goals and dynamically organise resources based on interests. Although research has been conducted on these topics, work remains active in the following areas: extensible machine-readable annotation of services, dynamic service discovery, architectural approaches for decentralised systems, and interest-focused dynamic service organisation. These concepts are explained in the next section, as they contextualise the problem statement and research questions presented later.
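As a rough illustration of the service-annotation and interest-based discovery ideas named in this abstract (not taken from the thesis), the following Python sketch registers services with machine-readable annotations and resolves an expressed interest against them; all names, types, and tags are hypothetical.

```python
# Illustrative sketch (hypothetical schema, not from the thesis): machine-readable
# service annotations and a naive interest-based lookup, the kind of dynamic
# discovery the abstract refers to.

services = [
    {"name": "thermo-01", "type": "sensor/temperature", "tags": {"indoor", "celsius"}},
    {"name": "cam-kitchen", "type": "sensor/video", "tags": {"indoor", "h264"}},
    {"name": "thermo-02", "type": "sensor/temperature", "tags": {"outdoor", "celsius"}},
]

def discover(interest_type, required_tags=frozenset()):
    """Return services whose annotations satisfy an expressed interest."""
    return [s["name"] for s in services
            if s["type"] == interest_type and required_tags <= s["tags"]]

print(discover("sensor/temperature", {"indoor"}))   # ['thermo-01']
```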
12

Uma abordagem baseada em aspectos topológicos para expansão de redes físicas no contexto de virtualização de redes / An approach based on topological factors for the expansion of physical infrastructure in the context of network virtualization

Luizelli, Marcelo Caggiani January 2014 (has links)
Network virtualization is a mechanism that allows multiple virtual networks to coexist on top of a single physical substrate. One research challenge addressed recently in the literature is the efficient mapping of virtual resources onto physical infrastructures. Although this challenge has received considerable attention, state-of-the-art approaches generally exhibit a high rejection rate, i.e., the ratio between the number of denied virtual network requests and the total number of requests is considerably high. In this dissertation, we characterize the relationship between the quality of virtual network mappings and the topological structure of the underlying substrates. Exact solutions of an online embedding model are evaluated under different classes of network topologies. Building on this understanding of the topological factors that directly influence the virtual network embedding process, we propose a strategy for planning the expansion of infrastructure provider networks so as to consistently reduce the rejection rate of virtual network requests and make better use of idle resources. The results show that most rejections occur in situations where a significant amount of resources is still available, but a few saturated devices and links, depending on the connectivity features of the physical substrate, prevent the acceptance of new requests. Moreover, the results obtained with the proposed strategy show that strengthening key parts of the infrastructure leads to a much more satisfactory occupation: an expansion of 10% to 20% of the infrastructure resources yields a sustained increase of up to 30% in the number of accepted virtual networks and of up to 45% in resource usage compared with the original network.
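To make the notions of embedding and rejection rate concrete, here is a toy Python sketch (not the model used in the dissertation): a greedy mapper places virtual nodes and links onto a small hypothetical substrate and reports the fraction of requests that a few saturated resources force it to reject. All capacities and demands are invented for illustration.

```python
# Illustrative sketch: greedy virtual network embedding over a toy substrate,
# tracking the rejection rate the abstract discusses. Values are hypothetical.

from itertools import permutations

substrate_cpu = {"A": 10, "B": 10, "C": 4}                  # node CPU capacities
substrate_bw = {("A", "B"): 8, ("B", "C"): 3, ("A", "C"): 3}  # link bandwidth capacities

def bw(u, v):
    return substrate_bw.get((u, v)) or substrate_bw.get((v, u), 0)

def try_embed(vnodes, vlinks):
    """Accept the first node assignment whose CPU and single-hop bandwidth
    demands fit, then subtract the allocated resources from the substrate."""
    for assign in permutations(substrate_cpu, len(vnodes)):
        mapping = dict(zip(vnodes, assign))
        fits_cpu = all(substrate_cpu[mapping[v]] >= c for v, c in vnodes.items())
        fits_bw = all(bw(mapping[a], mapping[b]) >= d for (a, b), d in vlinks.items())
        if fits_cpu and fits_bw:
            for v, c in vnodes.items():
                substrate_cpu[mapping[v]] -= c
            for (a, b), d in vlinks.items():
                key = (mapping[a], mapping[b])
                key = key if key in substrate_bw else (key[1], key[0])
                substrate_bw[key] -= d
            return True
    return False

# Eight identical requests: two virtual nodes (CPU 2 each) joined by a 2-unit link.
requests = [({"x": 2, "y": 2}, {("x", "y"): 2}) for _ in range(8)]
accepted = sum(try_embed(*req) for req in requests)
print(f"accepted {accepted}/{len(requests)}, rejection rate {1 - accepted / len(requests):.2f}")
```

Running this toy, the last requests are rejected even though CPU remains on some nodes, because the few links connecting them are already saturated — the same connectivity effect the dissertation identifies and targets with its expansion strategy.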
13

Named Data Networking in Local Area Networks

Shi, Junxiao January 2017 (has links)
Named Data Networking (NDN) is a new Internet architecture that changes the network semantics from packet delivery to content retrieval and promises benefits in areas such as content distribution, security, mobility support, and application development. While the basic NDN architecture applies to any network environment, local area networks (LANs) are of particular interest because of their prevalence on the Internet and the relatively low barrier to deployment. In this dissertation, I design NDN protocols and implement NDN software to make NDN communication in LANs robust and efficient. My contributions include: (a) a forwarding behavior specification required on every NDN node; (b) a secure and efficient self-learning strategy for switched Ethernet, which discovers available content via occasional flooding, so that the network can operate without manual configuration and does not require a routing protocol or a centralized controller; (c) NDN-NIC, a network interface card that performs name-based packet filtering to reduce the CPU overhead and power consumption of the main system during broadcast communication on shared media; and (d) the NDN Link Protocol (NDNLP), which allows the forwarding plane to add hop-by-hop headers and provides fragmentation and reassembly so that large NDN packets can be sent directly over Ethernet with its limited MTU.
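The self-learning strategy in contribution (b) can be pictured with a small Python sketch; this is an assumed simplification for illustration, not the dissertation's protocol or the NFD codebase. On a FIB miss the Interest is flooded on all other faces, and the returning Data teaches the forwarder which face leads to the producer's prefix.

```python
# Illustrative sketch (assumed names): a toy NDN forwarder whose strategy floods
# an Interest when the FIB has no match and learns a FIB entry from the face on
# which the matching Data returns.

class SelfLearningForwarder:
    def __init__(self, faces):
        self.faces = faces            # face id -> callable that "sends" a packet
        self.fib = {}                 # name prefix (tuple of components) -> face id
        self.pit = {}                 # pending Interest name -> incoming face id

    def _longest_prefix_face(self, name):
        components = tuple(name.strip("/").split("/"))
        for i in range(len(components), 0, -1):
            face = self.fib.get(components[:i])
            if face is not None:
                return face
        return None

    def on_interest(self, name, in_face):
        self.pit[name] = in_face
        out = self._longest_prefix_face(name)
        targets = [out] if out is not None else [f for f in self.faces if f != in_face]
        for f in targets:             # unicast on FIB hit, flood on miss
            self.faces[f](("interest", name))

    def on_data(self, name, from_face):
        prefix = tuple(name.strip("/").split("/"))[:-1]   # learn the producer prefix
        self.fib[prefix] = from_face                      # self-learning step
        consumer = self.pit.pop(name, None)
        if consumer is not None:
            self.faces[consumer](("data", name))

# Hypothetical usage: two faces that simply record what they are asked to send.
log = {1: [], 2: []}
fwd = SelfLearningForwarder({1: log[1].append, 2: log[2].append})
fwd.on_interest("/video/clip1", in_face=1)   # FIB miss -> flooded to face 2
fwd.on_data("/video/clip1", from_face=2)     # Data satisfies PIT, FIB learns /video
fwd.on_interest("/video/clip2", in_face=1)   # FIB hit -> unicast to face 2
print(fwd.fib, log[1], log[2])
```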
15

Virtual network provisioning framework for the future Internet / Architecture d'allocation de réseaux virtuels pour l'Internet du futur

Louati, Inès 26 April 2010 (has links)
Recent advances in computer and network virtualisation, combined with the emergence of new actors and business models, have motivated much research and development of new approaches to face the challenges of future Internet architectures. This thesis was motivated by these advances and by the need for efficient algorithms and frameworks to allocate virtual resources and create on-demand virtual networks over shared physical infrastructures. The objective of the thesis has consequently been to conceive and develop provisioning algorithms and methods to set up and maintain virtual networks according to user needs and networking conditions. The work assumes the existence of virtual network providers acting as brokers that request virtual resources, on behalf of users, from multiple infrastructure providers. The research objective is to explore how virtual resources, offered as a service by infrastructure providers, can be allocated while optimising the use of substrate resources and reducing the cost for providers. The thesis starts with an analysis and comparison of several virtual network provisioning approaches and algorithms proposed in the literature. The provisioning phases, including resource matching, embedding, and binding, are defined and explored. The scenario where multiple infrastructure providers are involved in virtual network provisioning is addressed, and a mathematical model of the VN provisioning problem is formulated. The second part of the thesis provides the design, implementation, and evaluation of exact and heuristic matching algorithms to search for, find, and match virtual network requests with available substrate resources. Conceptual clustering techniques are used to facilitate the finding and matching of virtual resources in the initial provisioning phases. Exact and heuristic algorithms are also proposed and evaluated to efficiently split virtual network requests over multiple infrastructure providers while reducing the matching cost; the request splitting problem is solved using both max-flow/min-cut algorithms and linear programming techniques. The third part of the thesis presents the design, implementation, and evaluation of exact and heuristic embedding algorithms that simultaneously assign virtual nodes and links to substrate resources. A distributed embedding algorithm, relying on a multi-agent approach, is developed for large-scale networks and provides load distribution and improved scalability. An exact embedding algorithm, formulated as a mixed-integer program, is also proposed and evaluated to ensure optimal node and link mapping while reducing cost and increasing the acceptance ratio of requests. Finally, the thesis presents the design and development of adaptive provisioning frameworks and algorithms to maintain virtual networks subject to dynamic changes in service demands and in physical infrastructures. Adaptive matching and embedding algorithms are designed, developed, and evaluated to repair resource failures and dynamically optimise substrate network utilisation.
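For intuition about the request-splitting objective, the toy Python sketch below (hypothetical costs, not the thesis' algorithms) enumerates all assignments of virtual nodes to two infrastructure providers and minimises hosting cost plus the bandwidth that must cross provider boundaries; the thesis optimises this kind of objective with max-flow/min-cut and linear programming rather than enumeration.

```python
# Illustrative sketch (hypothetical data): splitting one virtual network request
# between two infrastructure providers, minimising node hosting cost plus the
# cost of virtual links that cross the provider boundary.

from itertools import product

vnodes = ["a", "b", "c", "d"]
vlinks = {("a", "b"): 3, ("b", "c"): 1, ("c", "d"): 4, ("a", "d"): 1}   # bandwidth demands
hosting_cost = {"P1": {"a": 1, "b": 1, "c": 5, "d": 5},                 # per-provider node prices
                "P2": {"a": 5, "b": 5, "c": 1, "d": 1}}
inter_provider_unit_cost = 1    # price per bandwidth unit crossing provider boundaries

def split_cost(assignment):
    node_cost = sum(hosting_cost[p][v] for v, p in assignment.items())
    cut_cost = sum(inter_provider_unit_cost * demand
                   for (u, v), demand in vlinks.items()
                   if assignment[u] != assignment[v])
    return node_cost + cut_cost

best = min((dict(zip(vnodes, choice))
            for choice in product(hosting_cost, repeat=len(vnodes))),
           key=split_cost)
print(best, "cost =", split_cost(best))   # a, b stay on P1; c, d go to P2
```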
16

Security Issues in Network Virtualization for the Future Internet

Natarajan, Sriram 01 September 2012 (has links)
Network virtualization promises to play a dominant role in shaping the future Internet by overcoming the Internet ossification problem. Since a single protocol stack cannot accommodate the requirements of diverse application scenarios and network paradigms, it is evident that multiple networks should co-exist on the same network infrastructure. Network virtualization supports this by hosting multiple, diverse protocol suites on a shared network infrastructure. Each hosted virtual network instance can dynamically instantiate a custom set of protocols and functionalities on the resources (e.g., link bandwidth, CPU, memory) allocated from the network substrate. As this technology matures, it is important to consider the security issues and develop efficient defense mechanisms against potential vulnerabilities in the network architecture. The architectural separation of network entities (i.e., network infrastructures, hosted virtual networks, and end-users) introduces a set of attacks that are to some extent different from those observed in the current Internet. Each entity is driven by different objectives, so it cannot be assumed that they always cooperate to ensure that all aspects of the network operate correctly and securely. Instead, the network entities may behave in a non-cooperative or malicious way to gain benefits. This work proposes a set of defense mechanisms that addresses the following challenges: 1) How can the network virtualization architecture ensure anonymity and user privacy (i.e., confidential packet forwarding functionality) when virtual networks are hosted on third-party network infrastructures? 2) Given the flexibility in customizing virtual networks and the need for intrinsic security guarantees, can a virtual network instance effectively prevent unauthorized network access by curbing attack traffic close to its source and ensuring that only authorized traffic is transmitted? To address these challenges, this dissertation proposes multiple defense mechanisms. In a typical virtualized network, the network infrastructure and the virtual network are managed by different administrative entities that may not trust each other, raising the concern that an honest-but-curious infrastructure provider may snoop on traffic sent by the hosted virtual networks. In such a scenario, the virtual network might hesitate to disclose operational information (e.g., source and destination addresses of network traffic, routing information, etc.) to the infrastructure provider. However, the network infrastructure does need sufficient information to perform packet forwarding. We present Encrypted IP (EncrIP), a protocol for encrypting IP addresses that hides information about the virtual network while still allowing packet forwarding with the longest-prefix-matching techniques implemented in commodity routers. Using probabilistic encryption, EncrIP prevents an observer from identifying which traffic belongs to the same source-destination pair. Our evaluation results show that EncrIP requires only a few MB of memory on the gateways where traffic enters and leaves the network infrastructure. In our prototype implementation of EncrIP on GENI, which uses the standard IP header, the success probability of a statistical inference attack that tries to identify packets belonging to the same session is less than 0.001%. Therefore, we believe EncrIP is a practical solution for protecting privacy in virtualized networks.
While virtualizing the infrastructure components introduces flexibility by allowing the protocol stack to be reprogrammed, it does not directly solve the security issues encountered in the current Internet. On the contrary, the architecture increases the chance of additive vulnerabilities, enlarging the attack space that can be exploited to launch attacks. It is therefore important to consider a virtual network instance that ensures only authorized traffic is transmitted and that attack traffic is squelched as close to its source as possible. Network virtualization provides an opportunity to host a network that can guarantee such high levels of security, thereby protecting both the end systems and the network infrastructure components (i.e., routers, switches, etc.). In this work, we introduce a virtual network instance based on a capabilities-based network, which represents a fundamental shift in the security design of network architectures. Instead of permitting the transmission of packets from any source to any destination, routers deny forwarding by default; for a successful transmission, packets need to positively identify themselves and their permissions to each router on the forwarding path. The proposed capabilities-based system uses packet credentials based on Bloom filters. This high-performance design of capabilities makes it feasible to verify traffic on every router in the network, so that most attack traffic can be contained within a single hop. Our experimental evaluation confirms that less than one percent of attack traffic passes the first hop, and the performance overhead can be as low as 6% for large file transfers. Next, to identify packet-forwarding misbehavior in network virtualization, a controller-based misbehavior detection system is discussed as part of future work. Overall, this dissertation introduces novel security mechanisms that can be instantiated as inherent security features in the network architecture of the future Internet. The technical challenges addressed in this dissertation involve solving problems from computer networking, network security, protocol design principles, probability and random processes, and algorithms.
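The capabilities idea can be sketched in a few lines of Python; the parameters and credential format below are assumptions for illustration, not the dissertation's actual design. A Bloom filter issued for a flow encodes each authorized (router, flow) pair, and every router forwards only packets whose credential proves membership — deny-by-default forwarding.

```python
# Illustrative sketch (assumed parameters): a capability encoded as a Bloom filter
# over the routers on the authorized path. Routers deny forwarding by default and
# only forward packets whose credential contains (router id, flow id).

import hashlib

class BloomCapability:
    def __init__(self, size_bits=256, hashes=4):
        self.size, self.hashes, self.bits = size_bits, hashes, 0

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def issue_capability(path, flow_id):
    cap = BloomCapability()
    for router in path:
        cap.add((router, flow_id))      # authorize this flow on this router
    return cap

def router_forward(router, flow_id, cap):
    return (router, flow_id) in cap     # deny by default; forward only on proof

cap = issue_capability(["r1", "r2", "r3"], flow_id="flow-42")
print([router_forward(r, "flow-42", cap) for r in ["r1", "r2", "r3"]])  # [True, True, True]
print(router_forward("r9", "flow-42", cap))   # almost surely False (small false-positive rate)
```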
17

Etude et mise en place d’une plateforme d’adaptation multiservice embarquée pour la gestion de flux multimédia à différents niveaux logiciels et matériels / Conception and implementation of a hardware-accelerated video adaptation platform in a home network context

Aubry, Willy 19 December 2012 (has links)
On the one hand, technological advances have led to the expansion of the handheld-device market; thanks to this expansion, people are more and more connected and more and more data is exchanged over the Internet. On the other hand, this huge amount of data imposes drastic constraints in order to achieve sufficient quality, and the Internet is now showing its limits in assuring such quality. To address today's limitations, a next-generation Internet is envisioned. This new network takes into account the nature of the content (video, audio, ...) and the context (network state, terminal capabilities, ...) to better manage its own resources. To this end, video manipulation is one of the key concepts highlighted in this emerging context: video content is more and more consumed and at the same time requires more and more resources. Adapting videos to the network state (reducing the bitrate to match the available bandwidth) or to the terminal capabilities (screen size, supported codecs, ...) appears mandatory and is foreseen to take place in real time in networking devices such as home gateways. However, video adaptation is a resource-intensive task and must be implemented using hardware accelerators to meet the desired low-cost and real-time constraints. In this thesis, content- and context-awareness is first analyzed so that it can be taken into account at the network side. Secondly, a generic low-cost video adaptation system is proposed and compared to existing solutions as a trade-off between system complexity and quality. Hardware design is then tackled as the system is implemented on an FPGA-based architecture. Finally, the system is used to evaluate the indirect effects of video adaptation: reducing video characteristics lowers energy consumption at the terminal side, thereby improving the experience for end users.
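A minimal Python sketch of the adaptation decision described above (hypothetical encoding ladder and thresholds; the thesis implements the adaptation itself on an FPGA) picks the best variant whose codec the terminal supports and whose bitrate fits the measured bandwidth.

```python
# Illustrative sketch (hypothetical profiles): choose the highest-quality variant
# that the terminal can decode and display and that fits within a safety fraction
# of the currently measured bandwidth.

from dataclasses import dataclass

@dataclass
class Variant:
    codec: str
    width: int
    height: int
    bitrate_kbps: int

LADDER = [  # assumed encoding ladder, best first
    Variant("hevc", 1920, 1080, 4500),
    Variant("h264", 1920, 1080, 6000),
    Variant("h264", 1280, 720, 3000),
    Variant("h264", 854, 480, 1200),
]

def select_variant(supported_codecs, available_kbps, screen_width, headroom=0.8):
    """Return the first ladder entry the terminal supports that fits the
    available bandwidth, falling back to the lowest profile otherwise."""
    for v in LADDER:
        if (v.codec in supported_codecs
                and v.width <= screen_width
                and v.bitrate_kbps <= headroom * available_kbps):
            return v
    return LADDER[-1]

print(select_variant({"h264"}, available_kbps=4000, screen_width=1280))
# -> Variant(codec='h264', width=1280, height=720, bitrate_kbps=3000)
```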
18

Modelo de título para a próxima geração de Internet. / Title model for the next generation Internet.

Pereira, João Henrique de Souza 14 February 2012 (has links)
The TCP/IP architecture has become the standard for computer networks, and its use continues to expand. Although its key protocols were specified about 30 years ago, they remain the most widely used at layers 3 and 4, owing to a high-quality design that resulted in great efficiency and flexibility of use. The layered organisation of TCP/IP, specified by the IETF, was consolidated in the 1980s, when the OSI reference model from ISO also became a standard. Subsequently, TCP/IP established itself, in practice, as the protocol standard for computer networks, seeing far wider use than the protocol suites of ISO and other standardisation bodies, since the Internet became the largest interconnected open system and few networks use the OSI protocols, even though many open systems are based on its layered reference model. There is extensive research and discussion on alternatives for the evolution of this architecture, and within this area of study this thesis defines the Title Model for the next-generation Internet. The model allows the unification of the different types of addresses in use, with the potential to reduce the complexity of computer networks and to better support their communication needs. Based on an ontological view of computer networks and of the communication needs involved in their use, this work contributes to the technological evolution of network communication by better meeting those needs through a semantic approximation between the upper and lower layers. The possibility of contributing to this area of science motivates this research, which also presents the ontology of the Title Model to formalise semantic comprehension and support for the needs of communicating entities, as well as their possible changes according to context.
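One way to picture the unification of address types that the Title Model proposes is the small Python sketch below; the data and resolution order are invented for illustration, since the thesis specifies the model as an ontology rather than as code.

```python
# Illustrative sketch (hypothetical structures): a single "title" acting as the
# unifying identifier that resolves to whatever lower-layer addresses an entity
# currently has, hiding the address families from the caller.

entities = {
    "title:alice-video-service": {
        "ipv4": "192.0.2.10",
        "ipv6": "2001:db8::10",
        "mac": "02:00:00:aa:bb:cc",
        "uri": "https://example.org/alice/video",
    },
    "title:sensor-17": {
        "mac": "02:00:00:11:22:33",
    },
}

def resolve(title, preferred=("ipv6", "ipv4", "uri", "mac")):
    """Return the best available locator for a title."""
    addresses = entities.get(title, {})
    for kind in preferred:
        if kind in addresses:
            return kind, addresses[kind]
    raise LookupError(f"no locator registered for {title}")

print(resolve("title:alice-video-service"))   # ('ipv6', '2001:db8::10')
print(resolve("title:sensor-17"))             # ('mac', '02:00:00:11:22:33')
```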
19

QoS-RRC: um mecanismo com orquestração de sobre-provisionamento de recursos e balanceamento de carga para roteamento orientado a QoS na internet do futuro / A mechanism with orchestration of the overprovisioning of resources and load balancing for QoS-oriented routing in future internet

Freitas, Leandro Alexandre 18 February 2011 (has links)
The Future Internet concepts and designs of the 4WARD project concern a clean-slate architecture with various networking innovations, including a new connectivity paradigm called the Generic Path (GP). In the GP architecture, several facilities are designed to efficiently support complex value-added applications and services with assured Quality of Service (QoS). GPs abstract the underlying network heterogeneity, and any entity, regardless of its scope (technology, location, or architectural layer), communicates with any other in a uniform way via a common interface. To that end, cooperation with network-layer provisioning mechanisms is required in order to map data paths that meet session-demanded resources (QoS requirements such as minimum bandwidth and maximum delay/loss) onto appropriate GPs. In contrast to what is supported today, robust and scalable QoS-provisioning facilities are strongly required for efficient GP allocation. This dissertation therefore introduces QoS-Routing and Resource Control (QoS-RRC), a set of GP-compliant facilities that copes with the above requirements. QoS-RRC complements the GP architecture with QoS-oriented routing, aided by load balancing, to select paths that meet session demands while preserving residual bandwidth to improve the user experience. For scalability, QoS-RRC follows an overprovisioning-centric approach, which requires little state storage and few network operations. An initial performance evaluation of QoS-RRC was carried out in Network Simulator v.2 (NS-2), demonstrating drastic improvements in flow delay and bandwidth use over a relevant state-of-the-art solution. Moreover, the impact of QoS-RRC on user experience, compared with current IP QoS and routing standards, was evaluated by analysing the main objective and subjective Quality of Experience (QoE) metrics, namely Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Video Quality Metric (VQM), and Mean Opinion Score (MOS).
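The path-selection rule described above can be illustrated with a short Python sketch (hypothetical topology and numbers, not the QoS-RRC implementation): only paths meeting the session's bandwidth and delay demands are kept, and ties are broken in favour of the path that preserves the most residual bandwidth, which spreads load across the substrate.

```python
# Illustrative sketch (hypothetical precomputed paths): QoS-oriented path
# selection with a load-balancing tie-break on residual bandwidth.

candidate_paths = [
    {"hops": ["A", "B", "D"], "residual": [40, 25], "delay": [5, 7]},    # Mbps per link, ms per link
    {"hops": ["A", "C", "D"], "residual": [90, 60], "delay": [9, 12]},
    {"hops": ["A", "E", "D"], "residual": [15, 80], "delay": [3, 4]},
]

def select_path(demand_mbps, max_delay_ms):
    feasible = [p for p in candidate_paths
                if min(p["residual"]) >= demand_mbps
                and sum(p["delay"]) <= max_delay_ms]
    if not feasible:
        return None                                  # request rejected
    # load balancing: prefer the path with the largest bottleneck after allocation
    return max(feasible, key=lambda p: min(p["residual"]) - demand_mbps)

path = select_path(demand_mbps=20, max_delay_ms=25)
print(path["hops"] if path else "rejected")          # ['A', 'C', 'D']
```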
