1 |
MicroCuckoo Hash Engine for High-Speed IP Lookup
Tata, Nikhitha, 23 June 2017
Internet data traffic is tripling every two years, driven by exponential growth in the number of routers. Routers implement packet classification by determining the flow of each packet through rule-checking mechanisms applied to the packet headers. However, the memory components these rules rely on, such as TCAMs, are expensive and power hungry. As a result, current hardware IP lookup algorithms achieve multi-gigabit speeds yet suffer from substantial memory overhead. To overcome this limitation, we propose a packet classification methodology based on a MicroCuckoo hash technique for routing packets. This approach reduces memory requirements significantly by completely eliminating the need for TCAM cells. Cuckoo hashing achieves very high-speed, hardware-accelerated table lookups and is economical compared to TCAMs. The proposed IP lookup algorithm is implemented as a simulation-based hardware/software model, which is developed, tested, and synthesized using the Vivado HLS tool. / Master of Science
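The constant-time, two-probe lookup that makes cuckoo hashing attractive for hardware-accelerated IP lookup can be sketched as follows (an illustrative Python sketch; the thesis's MicroCuckoo variant, table sizes, and hash functions are not reproduced here):

```python
# Minimal cuckoo hash table: two tables, two hash functions. A lookup
# probes at most two buckets regardless of occupancy; inserts may evict
# ("kick") an occupant to its alternate table.
import hashlib

class CuckooHash:
    def __init__(self, size=16, max_kicks=32):
        self.size = size
        self.max_kicks = max_kicks
        self.tables = [[None] * size, [None] * size]

    def _index(self, key, which):
        digest = hashlib.sha256(f"{which}:{key}".encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.size

    def lookup(self, key):
        # At most two probes, independent of table occupancy.
        for t in (0, 1):
            slot = self.tables[t][self._index(key, t)]
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    def insert(self, key, value):
        entry = (key, value)
        for _ in range(self.max_kicks):
            for t in (0, 1):
                i = self._index(entry[0], t)
                if self.tables[t][i] is None:
                    self.tables[t][i] = entry
                    return True
                # Evict the occupant and retry it in the other table.
                entry, self.tables[t][i] = self.tables[t][i], entry
        return False  # insertion failed: table needs rehashing/growth
```

The bounded, deterministic probe count (here two memory reads) is what allows a fixed-latency hardware pipeline, in contrast to chained hashing whose worst case grows with load.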
|
2 |
A Coalitional Game Analysis for Selfish Packet-Forwarding Networks
Yu, Cih-Sian, 21 October 2010
In wireless packet-forwarding networks, nodes are inherently selfish and seek to maximize their own utilities. Selfish users are unwilling to forward each other's packets, which degrades network performance severely. To solve this packet-forwarding problem, we propose a novel coalitional game approach based on the orthogonal decode-and-forward (ODF) relaying scheme that encourages selfish users to cooperate. In the game-theoretic analysis, we study the properties and stability of the coalitions thoroughly. Furthermore, we prove that cohesive behavior emerges in this game in terms of outage probability. Simulation results show that the proposed ODF coalitional game effectively enforces cooperation and that forming cooperative groups is always beneficial for all users.
|
3 |
Security Issues in Network Virtualization for the Future Internet
Natarajan, Sriram, 01 September 2012
Network virtualization promises to play a dominant role in shaping the future Internet by overcoming the Internet ossification problem. Since a single protocol stack cannot accommodate the requirements of diverse application scenarios and network paradigms, it is evident that multiple networks should co-exist on the same network infrastructure. Network virtualization supports this by hosting multiple, diverse protocol suites on a shared network infrastructure. Each hosted virtual network instance can dynamically instantiate a custom set of protocols and functionalities on the resources (e.g., link bandwidth, CPU, memory) allocated from the network substrate. As this technology matures, it is important to consider the security issues and develop efficient defense mechanisms against potential vulnerabilities in the network architecture.
The architectural separation of network entities (i.e., network infrastructures, hosted virtual networks, and end-users) introduces a set of attacks that differ, to some extent, from those observed in the current Internet. Each entity is driven by different objectives, so it cannot be assumed that they always cooperate to ensure that all aspects of the network operate correctly and securely. Instead, network entities may behave non-cooperatively or maliciously to gain benefits. This work proposes a set of defense mechanisms that address the following challenges: 1) How can the network virtualization architecture ensure anonymity and user privacy (i.e., confidential packet-forwarding functionality) when virtual networks are hosted on third-party network infrastructures? 2) Given the flexibility to customize virtual networks and the need for intrinsic security guarantees, can a virtual network instance effectively prevent unauthorized network access by curbing attack traffic close to its source and ensuring that only authorized traffic is transmitted?
To address the above challenges, this dissertation proposes multiple defense mechanisms. In a typical virtualized network, the network infrastructure and the virtual network are managed by different administrative entities that may not trust each other, raising the concern that an honest-but-curious infrastructure provider may snoop on traffic sent by the hosted virtual networks. In such a scenario, the virtual network might hesitate to disclose operational information (e.g., source and destination addresses of network traffic, routing information, etc.) to the infrastructure provider. However, the network infrastructure does need sufficient information to perform packet forwarding. We present Encrypted IP (EncrIP), a protocol for encrypting IP addresses that hides information about the virtual network while still allowing packet forwarding with the longest-prefix-matching techniques implemented in commodity routers. Using probabilistic encryption, EncrIP prevents an observer from identifying which traffic belongs to the same source-destination pair. Our evaluation results show that EncrIP requires only a few MB of memory on the gateways where traffic enters and leaves the network infrastructure. In our prototype implementation of EncrIP on GENI, which uses the standard IP header, the success probability of a statistical inference attack identifying packets that belong to the same session is less than 0.001%. Therefore, we believe EncrIP presents a practical solution for protecting privacy in virtualized networks.
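The general idea behind EncrIP, keeping the routing prefix usable for longest-prefix matching while probabilistically hiding the rest of the address, might be sketched like this. This is a toy construction under stated assumptions (a shared gateway key, a fixed 16-bit cleartext prefix, a fresh per-packet nonce), not the actual EncrIP protocol:

```python
# Prefix-preserving, probabilistic address masking (illustrative only).
# The /16 routing prefix stays in the clear so commodity longest-prefix
# matching still works; the host bits are XOR-masked with a keyed PRF of
# a random nonce, so packets of one flow look unrelated to an observer.
import hmac, hashlib, os, ipaddress

KEY = b"shared-gateway-key"   # assumption: ingress/egress gateways share a key
PREFIX_BITS = 16              # assumption: prefix length left in clear

def _mask(nonce: bytes) -> int:
    digest = hmac.new(KEY, nonce, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") & ((1 << (32 - PREFIX_BITS)) - 1)

def encrypt(addr: str):
    ip = int(ipaddress.IPv4Address(addr))
    nonce = os.urandom(8)                       # fresh per packet
    host_mask = (1 << (32 - PREFIX_BITS)) - 1
    enc = (ip & ~host_mask) | ((ip ^ _mask(nonce)) & host_mask)
    return str(ipaddress.IPv4Address(enc)), nonce

def decrypt(enc_addr: str, nonce: bytes) -> str:
    ip = int(ipaddress.IPv4Address(enc_addr))
    host_mask = (1 << (32 - PREFIX_BITS)) - 1
    dec = (ip & ~host_mask) | ((ip ^ _mask(nonce)) & host_mask)
    return str(ipaddress.IPv4Address(dec))
```

Because the mask depends on a random nonce, two encryptions of the same address generally differ, which is the "probabilistic" property the abstract refers to.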
While virtualizing the infrastructure components introduces flexibility through a reprogrammable protocol stack, it does not directly solve the security issues encountered in the current Internet. On the contrary, the architecture increases the chance of additive vulnerabilities, enlarging the attack space available for exploits. It is therefore important to consider a virtual network instance that ensures only authorized traffic is transmitted and that attack traffic is squelched as close to its source as possible. Network virtualization provides an opportunity to host a network that guarantees such high levels of security, protecting both the end systems and the network infrastructure components (i.e., routers, switches, etc.). In this work, we introduce a virtual network instance using a capabilities-based network, which represents a fundamental shift in the security design of network architectures. Instead of permitting the transmission of packets from any source to any destination, routers deny forwarding by default. For a successful transmission, packets need to positively identify themselves and their permissions to each router in the forwarding path. The proposed capabilities-based system uses packet credentials based on Bloom filters. This high-performance design of capabilities makes it feasible to verify traffic at every router in the network and to contain most attack traffic within a single hop. Our experimental evaluation confirms that less than one percent of attack traffic passes the first hop and that the performance overhead can be as low as 6% for large file transfers.
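A Bloom-filter packet credential of the kind described above might look like the following sketch, assuming (hypothetically) that each router on the authorized path inserts a per-router token into a fixed-size filter carried in the packet header, and that each forwarding router checks its own token and drops the packet on a miss. The parameters are illustrative, not taken from the dissertation:

```python
# Toy Bloom-filter credential: the filter is a 256-bit integer carried
# with the packet; membership of a router token is tested with k=4
# hash-derived bit positions. False positives are possible but rare.
import hashlib

FILTER_BITS = 256
NUM_HASHES = 4

def _bits(token: bytes):
    # Derive NUM_HASHES bit positions from the token.
    for i in range(NUM_HASHES):
        h = hashlib.sha256(bytes([i]) + token).digest()
        yield int.from_bytes(h[:4], "big") % FILTER_BITS

def add_credential(bloom: int, token: bytes) -> int:
    # Called once per authorized router when the capability is issued.
    for b in _bits(token):
        bloom |= 1 << b
    return bloom

def check_credential(bloom: int, token: bytes) -> bool:
    # Called by each router on the forwarding path; a miss means drop.
    return all(bloom & (1 << b) for b in _bits(token))
```

A fixed-width filter keeps the per-packet header overhead constant regardless of path length, which is what makes per-hop verification cheap enough to run at every router.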
Next, to identify packet forwarding misbehaviors in network virtualization, a controller-based misbehavior detection system is discussed as part of future work. Overall, this dissertation introduces novel security mechanisms that can be instantiated as inherent security features in the network architecture of the future Internet. The technical challenges in this dissertation involve solving problems from computer networking, network security, protocol design, probability and random processes, and algorithms.
|
4 |
A Kiosk-Based Packet Forwarding Strategy in Vehicular Delay Tolerant Networks
陳維偵 (Chen, Wei Chen), Unknown Date
In Delay Tolerant Networks (DTNs), no end-to-end path exists because of intermittent connectivity and high node mobility; messages are therefore stored at network nodes for a period of time and conveyed hop by hop toward the destination. Current DTN routing protocols fall into three categories: opportunistic, prediction-based, and scheduling protocols. However, these protocols have shortcomings and are not specifically designed for urban areas.

Based on the characteristics of urban areas, we propose a kiosk-based packet forwarding strategy for vehicular delay tolerant networks, built around four node types: cars, buses, bus stops, and bus transfer stations. We establish data transmission rules for each kind of node contact; specifically, Car-to-Car, Car-to-Bus, Bus-to-Bus stop, Bus-to-Bus transfer station, Bus transfer station-to-Bus, Bus stop-to-Car, and Car-to-Destination contacts each have their own forwarding judgments and restrictions.

The simulation results demonstrate that the proposed packet forwarding strategy reduces delivery delay and improves the delivery success rate. In particular, with limited per-node buffer sizes and little overhead, our strategy shows the most prominent performance.
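The per-contact rules can be pictured as a dispatch table keyed by the (sender, receiver) node types. The rule names below are illustrative placeholders, not the thesis's exact judgments and buffer restrictions:

```python
# Sketch of contact-rule dispatch for a kiosk-based VDTN strategy:
# each ordered pair of node types maps to a forwarding decision, and
# unlisted contacts default to holding the message (store-and-forward).
FORWARD_RULES = {
    ("car", "car"): "forward_if_closer_to_destination",
    ("car", "bus"): "forward",                 # buses follow fixed routes
    ("bus", "bus_stop"): "offload",            # kiosk stores and waits
    ("bus", "transfer_station"): "offload",
    ("transfer_station", "bus"): "forward_if_route_matches",
    ("bus_stop", "car"): "forward_if_closer_to_destination",
    ("car", "destination"): "deliver",
}

def decide(sender: str, receiver: str) -> str:
    """Return the forwarding decision for a contact between two nodes."""
    return FORWARD_RULES.get((sender, receiver), "hold")
```

Encoding the strategy as a table makes each contact type's judgment explicit and easy to extend when new node types (or new restrictions) are added.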
|
5 |
Efficient Cache Organization for Application Specific and General Purpose Processors
Rajan, Kaushik, 05 1900
The performance gap between processor and memory continues to remain a major performance bottleneck in both application specific and general purpose processors. This thesis strives to ease the above bottleneck by exploiting the characteristics of the application domain to improve the cache organization for two distinct processor architectures:
(1) application specific processors for packet forwarding, (2) general purpose processors.
Packet forwarding algorithms make use of a trie data structure to determine the forwarding route. We observe that the locality characteristics of the nodes at various levels of such a trie are different. Nodes that are closer to the root node, especially those that are immediate children of the root node (level-one nodes), exhibit higher temporal locality than nodes lower down the trie. Based on this observation we propose a novel Heterogeneously Segmented Cache Architecture (HSCA) that uses separate caches for level-one and lower-level nodes, each with carefully chosen sizes. We also propose a new replacement policy to enhance the performance of HSCA. Performance evaluation indicates that HSCA results in up to 32% reduction in average memory access time over a unified cache that shares the same cache space among all levels of the trie. HSCA also outperforms a previously proposed results cache.
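The split between a level-one cache and a lower-level cache might be sketched as follows (illustrative Python with LRU as a placeholder policy; the thesis's carefully chosen sizes and its enhanced replacement policy are not reproduced here):

```python
# HSCA sketch: level-one trie nodes (immediate children of the root) get
# their own small cache, all deeper nodes share a second cache, so the
# highly re-used level-one nodes are never evicted by lower-level traffic.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)          # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

class HSCA:
    def __init__(self, l1_size=64, lower_size=512):
        self.level_one = LRUCache(l1_size)   # children of the root node
        self.lower = LRUCache(lower_size)    # all deeper trie nodes

    def lookup(self, node_id, level):
        cache = self.level_one if level == 1 else self.lower
        return cache.get(node_id)

    def fill(self, node_id, level, node):
        (self.level_one if level == 1 else self.lower).put(node_id, node)
```

The key design choice is the partition by trie level: because level-one nodes have much higher temporal locality, isolating them prevents the long tail of lower-level nodes from polluting their cache space.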
The use of a large root branching factor in a forwarding trie forcefully introduces a large number of nodes at level one. Among these, only nodes that cover prefixes from the routing table are useful, while the rest are superfluous. We find that as many as 75% of the level-one nodes are superfluous. This leads to a skewed distribution of useful nodes among the cache sets of the level-one nodes cache. We propose a novel two-level mapping framework that achieves a better node-to-cache-set mapping and hence incurs fewer conflict misses. Two-level mapping first aggregates nodes into Initial Partitions (IPs) using lower-order bits and then remaps them from IPs into Refined Partitions (RPs), which form sets, based on some higher-order bits. It provides flexibility in placement by allowing each IP to choose a different remap function. We propose three schemes conforming to the framework. A speedup in average memory access time of as much as 16% is gained over HSCA.
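A minimal sketch of the two-level mapping from address bits to a set index, with a per-IP choice of remap function, follows. The bit positions and the shape of the remap choice are illustrative assumptions, not the three schemes proposed in the thesis:

```python
# Two-level mapping sketch: low-order bits select the Initial Partition
# (IP); each IP then applies its own remap function -- here, simply a
# per-IP choice of which higher-order bits to use -- to form the Refined
# Partition (RP), i.e. the final cache set index.
def two_level_set_index(addr: int, ip_bits: int, rp_bits: int,
                        remap_choice: dict) -> int:
    ip = addr & ((1 << ip_bits) - 1)               # initial partition
    shift = ip_bits + remap_choice.get(ip, 0)      # per-IP remap function
    high = (addr >> shift) & ((1 << rp_bits) - 1)  # chosen high-order bits
    return (ip << rp_bits) | high                  # refined partition = set
```

Letting each IP pick a different remap function is what gives the framework its placement flexibility: a skewed IP can spread its nodes over different sets than its neighbors, reducing conflict misses.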
In general purpose processor architectures, the design objectives of caches at various levels of the hierarchy differ. To ensure low access latencies, L1 caches are small and have low associativities, making them more susceptible to conflict misses. The extent of conflict misses incurred is governed by the placement function and the memory access patterns exhibited by the program. We propose a mechanism to learn the access characteristics of a program at runtime by analyzing its repetitive phases. We then make use of the two-level mapping framework to dynamically adapt the placement function. Further, we incorporate two-level mapping into the cache organization without increasing the cache access latency. Performance evaluation reveals that the proposed adaptive placement mechanism eliminates 32–36% of misses on average over a range of cache sizes.
To prevent expensive off-chip accesses, L2 caches are larger and have higher associativities. Hence, the replacement policy plays a significant role in determining L2 cache performance. Further, as the inherent temporal locality in memory accesses is filtered out by the L1 cache, an L2 cache using the widely prevalent LRU replacement policy incurs significantly higher misses than the optimal replacement policy (OPT). We propose to bridge this gap through a novel replacement strategy that mimics the replacement decisions of OPT. The L2 cache is logically divided into two components, a Shepherd Cache (SC) with a simple FIFO replacement and a Main Cache (MC) with an emulation of optimal replacement. The SC plays the dual role of caching lines and shepherding the replacement decisions close to optimal for MC. Our proposed organization can cover 40% of the gap between LRU and OPT, resulting in 7% overall speedup.
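The Shepherd Cache organization can be caricatured as follows: new lines enter a small FIFO, and the Main Cache victim is chosen only when a line graduates, after reuse has been observed in the meantime. The bookkeeping below (staleness-based victim choice) is a toy stand-in, not the thesis's OPT-emulation mechanism:

```python
# Toy Shepherd Cache: a small FIFO SC fronts the Main Cache (MC). The MC
# replacement decision is deferred until a line graduates from SC, by
# which time some reuse behavior has been observed -- the core idea that
# lets the organization approximate optimal (OPT) replacement.
from collections import deque

class ShepherdCache:
    def __init__(self, sc_size=2, mc_size=4):
        self.sc_size, self.mc_size = sc_size, mc_size
        self.sc = deque()        # FIFO Shepherd Cache (line ids)
        self.mc = {}             # Main Cache: line -> last access time
        self.clock = 0

    def access(self, line):
        """Return True on a hit in SC or MC, False on a miss."""
        self.clock += 1
        if line in self.mc:
            self.mc[line] = self.clock
            return True
        if line in self.sc:
            return True
        # Miss: every new line first enters the Shepherd Cache.
        self.sc.append(line)
        if len(self.sc) > self.sc_size:
            self._graduate(self.sc.popleft())
        return False

    def _graduate(self, line):
        # Deferred replacement: only now is an MC victim chosen -- here
        # simply the line with the stalest recorded reuse, a crude proxy
        # for "next use furthest in the future".
        if len(self.mc) >= self.mc_size:
            victim = min(self.mc, key=self.mc.get)
            del self.mc[victim]
        self.mc[line] = self.clock
```

The point of the sketch is the timing, not the victim heuristic: by the time a line leaves the FIFO, the cache has watched several more accesses and can make a better-informed replacement decision than it could at miss time.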
|
6 |
Cooperative Forwarding in Wireless Mesh Networks (Kooperatives Forwarding in drahtlosen Maschennetzen)
Zubow, Anatolij, 16 July 2009
This work introduces practical protocols for spontaneous wireless multi-hop mesh networks that treat the wireless system as a whole and thus account for the peculiarities of the wireless medium, such as fading, interference, and strong signal attenuation due to distance or obstacles. Interference is a main cause of packet loss; throughput and latency can be improved by using several non-interfering channels simultaneously. Sensor and community networks use inexpensive, energy-saving hardware, so additional antennas or radios are not an option. On the other hand, future wireless networks will exhibit node densities 100 times higher than today's networks. By exploiting a resource inherently present in the system, the users themselves (multi-user diversity), virtual multi-antenna and multi-radio systems can be built through cooperation. The large distances between nodes yield high spatial diversity and also minimize negative effects such as interference between neighboring channels. Algorithms are introduced for both the medium-access and routing layers. Since no special physical layer is required, IEEE 802.11 can be used, including today's commodity IEEE 802.11 hardware, which only allows channel switching times in the millisecond range. The two protocols introduced here suit environments with high or low interference from foreign WiFi networks. In terms of throughput they surpass modern protocols, such as DSR based on IEEE 802.11 with the ETX metric, by a large factor; moreover, latency is small and the TCP/IP protocol can be used unchanged.
|
7 |
CoreLB: A Proposal for In-Network Load Balancing with OpenFlow and SNMP (CoreLB: uma proposta de balanceamento de carga na rede com Openflow e SNMP)
Dossa, Clebio Gavioli, 18 August 2016
Currently, many services distribute load across several computing nodes, directing connections according to some balancing strategy. Usually a dedicated appliance performs the balancing, which can be both a point of failure and expensive. Software-defined networking (SDN) is changing network-management paradigms by absorbing specialized services, automating processes, and adding intelligence to static rules, with a wide variety of implementation options. Load balancing is one such specialized service that can benefit from SDN concepts, avoiding the static definitions and processes common in current load-balancing models. The protocols that support SDN allow alternative, efficient solutions to this problem. This work therefore proposes a methodology for balancing load across the distinct servers of a pool, with the redirection of traffic performed by the network itself. The solution is called Core-based Load Balance (CoreLB), because the specialized load-balancing service is carried out by the network core, where packet forwarding is natively performed. The methodology uses the SNMP protocol to analyze server resources and assess the load on each computing node, together with network-consumption statistics obtained through the OpenFlow protocol. The work evaluated load balancing for Web services; combining network statistics with server load for balancing decisions proved to be an efficient methodology, yielding user response times on average 19% better than an unbalanced scenario and around 9% better than the other load-balancing strategies evaluated, as well as a more even distribution of resource consumption across the servers.
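Assuming the controller has already polled each pool member's CPU load via SNMP and its per-port byte counters via OpenFlow, the CoreLB-style selection step might combine the two into a single score like this (the weighting is a made-up placeholder, not the thesis's actual formula):

```python
# Pick the least-loaded server from pre-polled statistics. CPU load is
# assumed normalized to 0..1 (from SNMP); network throughput is normalized
# against the busiest server (from OpenFlow byte counters).
def pick_server(stats, cpu_weight=0.6, net_weight=0.4):
    """stats: {server_name: {"cpu": float in 0..1, "bytes_per_s": float}}"""
    max_bps = max(s["bytes_per_s"] for s in stats.values()) or 1.0

    def score(name):
        s = stats[name]
        return cpu_weight * s["cpu"] + net_weight * s["bytes_per_s"] / max_bps

    return min(stats, key=score)  # lowest combined load wins
```

In a full deployment the returned server would be installed as the flow's destination via an OpenFlow flow-mod rule; here only the decision step is shown.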
|