1 |
Filtering engine model for VIMNet. Mahajan, Harshad S. January 2010.
No description available.
|
2 |
A New Addressing and Forwarding Architecture for the Internet. Guo, Cong. January 2011.
The current Internet routing and addressing architecture faces a serious scalability problem: the default-free zone (DFZ) routing table grows at an increasing and potentially alarming rate. The Internet architecture uses a single namespace, the IP address, to express two functions of a network entity: its identifier and its locator. This overloading of semantics leads to the scalability problem as a consequence of multihoming, traffic engineering, and non-aggregatable address allocations. The current Internet architecture also lacks inherent support for emerging features such as mobility.
This thesis presents a simple addressing and forwarding architecture (SAFA) for the Internet. SAFA separates the locator namespace from the ID namespace so that locators can follow the hierarchies in the Internet topology and be aggregated. Locators are allocated dynamically and automatically, and their hierarchical format gives end systems more control over route selection. A straightforward forwarding scheme is designed on top of the hierarchical addressing scheme, and the meshed part of the Internet topology is integrated into the forwarding procedure through a special forwarding table. With a rendezvous service that maps IDs to locators, SAFA also provides scalable support for mobility, multihoming, and traffic engineering. Our work also includes an Internet topology study and a prototype implementation of the architecture. The evaluation results suggest that SAFA would be feasible to deploy in the current Internet.
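As a rough sketch of the ID/locator split described above (the class names and the flat dictionary store are illustrative assumptions, not the thesis's design):

```python
# Sketch of SAFA-style ID/locator separation (illustrative only).
# IDs are flat names; locators are hierarchical, so a shared prefix
# identifies a provider domain and can be aggregated in routing tables.

class RendezvousService:
    """Maps stable endpoint IDs to their current hierarchical locators."""

    def __init__(self):
        self._table = {}  # ID -> list of locators (multihomed hosts have several)

    def register(self, endpoint_id, locator):
        self._table.setdefault(endpoint_id, []).append(locator)

    def resolve(self, endpoint_id):
        return self._table.get(endpoint_id, [])


def covered_by(aggregate, locator):
    """A locator such as '1.3.2.7' is covered by provider aggregate '1.3'."""
    return locator == aggregate or locator.startswith(aggregate + ".")


rs = RendezvousService()
rs.register("host-A", "1.3.2.7")     # locator obtained via provider 1.3
rs.register("host-A", "2.8.4.1")     # second locator: host-A is multihomed
print(rs.resolve("host-A"))          # ['1.3.2.7', '2.8.4.1']
print(covered_by("1.3", "1.3.2.7"))  # True: routable via the aggregate
```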
|
3 |
Performance Evaluation of Concurrent Multipath Transmission: Measurements and Analysis. Tedla, Sukesh Kumar. January 2015.
Context: Data transmission mechanisms for multi-homed networks have gained importance in the past few years because of their potential. Concurrent multipath transmission (CMT) pools multiple paths together and uses all available network interfaces for transmission. It allows transport mechanisms to work independently of the underlying technology, which resembles the concept of Transport Virtualization (TV); TV therefore plays a vital role in the development of Future Internet Architectures (FIA). Leading commercial platforms such as iOS and Android have implemented such mechanisms in their devices, and Multipath TCP and CMT-SCTP are protocols under development that support this feature. Implementing and evaluating CMT in real time is complex because of challenges such as path binding, out-of-order packet delivery, packet reordering, and end-to-end delay. Objectives: The main objective of this thesis is to identify the possibilities of implementing CMT in real time using multiple access technologies, and to evaluate transmission performance through measurements and analysis under different scenarios. Methods: The development of the CMT scenario follows a spiral methodology, where each spiral addresses a different objective; the sub-stages of a spiral are implementation, observation, decision, and modification. An in-depth literature study was performed beforehand to identify the possibilities of implementing CMT in real time. Results: Throughput is only slightly affected by varying the total number of TCP connections in a transmission, whereas across the different cases it is significantly affected by varying the number of efficient paths. Conclusion: The experiments show that CMT can be implemented in real time using off-the-shelf components. Based on the results, it can be concluded that throughput is affected by the number of paths, while the total number of TCP connections during a transmission has little impact on throughput.
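As a rough illustration of the striping idea evaluated in this thesis, the sketch below opens one TCP connection per available interface and round-robins application data across them. The addresses, port, and chunk size are placeholders, and real CMT protocols (e.g., MPTCP or CMT-SCTP) additionally handle sequencing and reordering, which is omitted here.

```python
# Illustrative CMT sketch: stripe application data round-robin over one
# TCP connection per path. Binding each socket to a different local
# interface address selects the path (assuming per-interface routing is
# in place). Receiver-side reordering, which real CMT must handle, is omitted.
import socket
from itertools import cycle

def open_paths(endpoints):
    """endpoints: list of (local_interface_ip, remote_ip, remote_port)."""
    conns = []
    for local_ip, remote_ip, port in endpoints:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((local_ip, 0))            # pin the connection to one interface
        s.connect((remote_ip, port))
        conns.append(s)
    return conns

def send_striped(conns, data, chunk=1400):
    """Round-robin the payload across paths, one MTU-sized chunk at a time."""
    paths = cycle(conns)
    for i in range(0, len(data), chunk):
        next(paths).sendall(data[i:i + chunk])

# Example (placeholder addresses): two interfaces toward the same server.
# conns = open_paths([("192.0.2.10", "198.51.100.5", 5000),
#                     ("203.0.113.7", "198.51.100.5", 5000)])
# send_striped(conns, b"x" * 100_000)
```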
|
4 |
Design of a Scalable Path Service for the Internet. Ascigil, Mehmet O. 01 January 2015.
Despite the world-changing success of the Internet, shortcomings in its routing and forwarding system have become increasingly apparent. One symptom is an escalating tension between users and providers over the control of routing and forwarding of packets: providers understandably want to control use of their infrastructure, and users understandably want paths with sufficient quality-of-service (QoS) to improve the performance of their applications. As a result, users resort to various “hacks” such as sending traffic through intermediate end-systems, and the providers fight back with mechanisms to inspect and block such traffic.
To enable users and providers to jointly control routing and forwarding policies, recent research has considered various architectural approaches in which provider-level route determination occurs separately from forwarding. With this separation, provider-level path computation and selection can be provided as a centralized service: users (or their applications) send path queries to a path service to obtain provider-level paths that meet their application-specific QoS requirements. At the same time, providers can control the use of their infrastructure by dictating how packets are forwarded across their network. The separation of routing and forwarding offers many advantages, but also brings a number of challenges, such as scalability. In particular, the path service must respond to path queries in a timely manner and periodically collect topology information containing load-dependent (i.e., performance) routing information.
We present a new design for a path service that makes use of expensive pre-computations, parallel on-demand computations on performance information, and caching of recently computed paths to achieve scalability. We demonstrate that, using commodity hardware with a modest amount of resources, the path service can respond to path queries with acceptable latency under a realistic workload. The service can scale to arbitrarily large topologies through parallelism. Finally, we describe how to utilize the path service in the current Internet with existing Internet applications.
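A minimal sketch of the caching-plus-on-demand-computation idea, assuming a toy provider-level graph and using Dijkstra's algorithm as a stand-in path computation (the dissertation's actual service and data are more elaborate):

```python
# Sketch of the scalability recipe described above: keep the static
# topology in memory, compute paths on demand, and cache recent answers
# so repeated queries avoid recomputation.
import heapq
from functools import lru_cache

TOPOLOGY = {  # provider-level graph: node -> {neighbor: latency_ms}
    "A": {"B": 10, "C": 40},
    "B": {"A": 10, "C": 15},
    "C": {"A": 40, "B": 15},
}

@lru_cache(maxsize=4096)           # caching of recently computed paths
def query_path(src, dst):
    """Dijkstra over the provider graph; returns (latency, path)."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in TOPOLOGY[node].items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(query_path("A", "C"))  # (25, ['A', 'B', 'C']); a repeat hits the cache
```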
|
5 |
A Network Path Advising Service. Wu, Xiongqi. 01 January 2015.
A common feature of emerging future Internet architectures is the ability for applications to select the path, or paths, their packets take between a source and destination. Unlike the current Internet, where routing protocols find a single (best) path between a source and destination, future Internet routing protocols will present applications with a set of paths and allow them to select the most appropriate one. Although this lets applications be actively involved in selecting the paths their packets travel, the huge number of potential paths, and the need to know the current network conditions on each of them, make it virtually impossible for applications to select the best set of paths, or even just the best path.
To tackle this problem, we introduce a new Network Path Advising Service (NPAS) that helps future applications choose network paths. Given a set of possible paths, NPAS helps applications select appropriate ones based on both recent path measurements and end-to-end feedback collected from other applications. We describe the NPAS service abstraction, its API calls, and a distributed architecture that achieves scalability by determining the most important things to monitor based on actual usage. By analyzing existing traffic patterns, we demonstrate that it is feasible for NPAS to monitor only a few nodes and links and still offer advice about the most important paths, those used by a high percentage of traffic. Finally, we describe a prototype implementation of the NPAS components as well as a simulation model used to evaluate the NPAS architecture.
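A hypothetical sketch of such an advising interface; the method names and the 70/30 weighting of measurements versus feedback are assumptions for illustration, not the dissertation's actual API:

```python
# Hypothetical NPAS-style abstraction: rank candidate paths by blending
# recent measurements with end-to-end application feedback. Paths the
# service does not monitor (the long tail of rarely used ones) rank last.

class NPAS:
    def __init__(self):
        self.measured = {}   # path (tuple of hops) -> measured latency in ms
        self.feedback = {}   # path -> mean application-reported score, 0..1

    def report_measurement(self, path, latency_ms):
        self.measured[tuple(path)] = latency_ms

    def report_feedback(self, path, score):
        self.feedback[tuple(path)] = score

    def advise(self, candidate_paths):
        """Return the candidates best-first."""
        def rank(path):
            p = tuple(path)
            meas = self.measured.get(p)
            if meas is None:          # unmonitored path: no basis for advice
                return float("inf")
            fb = self.feedback.get(p, 0.5)
            return 0.7 * meas + 0.3 * (1.0 - fb) * 100
        return sorted(candidate_paths, key=rank)

npas = NPAS()
npas.report_measurement(["a", "b"], 20)
npas.report_feedback(["a", "b"], 0.9)
print(npas.advise([["a", "b"], ["a", "c"]]))  # [['a', 'b'], ['a', 'c']]
```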
|
6 |
Conception et évaluation des systèmes logiciels de classifications de paquets haute-performance / Design and evaluation of high-performance software-based packet classification systems. He, Peng. 02 May 2015.
Packet classification consists of matching packet headers against a set of pre-defined rules and performing the action(s) associated with the matched rule(s). As a key technology in the data plane of network devices, packet classification is widely deployed in many network applications and services, such as firewalling, load balancing, and VPNs, and has been extensively studied over the past two decades. Traditional packet classification methods are usually based on dedicated hardware; with the development of data-center networking, software-defined networking, and application-aware networking, packet classification on multi-/many-core processor platforms has become a new research interest. This dissertation studies packet classification in three aspects: algorithm design frameworks, rule-set feature analysis, and algorithm implementation and optimization. We review multiple proposed algorithms and present a decision-tree-based algorithm design framework that decomposes existing packet classification algorithms into combinations of different types of "meta-methods", revealing the connections between the algorithms. Based on this framework, we combine meta-methods from different algorithms and propose two new algorithms, HyperSplit-op and HiCuts-op. The experimental results show that HiCuts-op achieves 2~20x less memory and 10% fewer memory accesses than HiCuts, while HyperSplit-op achieves 2~200x less memory and 10%~30% fewer memory accesses than HyperSplit. We also explore the connections between rule-set features and algorithm performance: the "coverage uniformity" of a rule set has a significant impact on classification speed, and the size of the "orthogonal structure" rules usually determines an algorithm's memory footprint. Based on these two observations, we propose a memory consumption model and a quantified measure of coverage uniformity, and use them to build a new multi-decision-tree algorithm, SmartSplit, and an algorithm policy framework, AutoPC. Compared to the EffiCuts algorithm, SmartSplit achieves around a 2.9x speedup and up to a 10x memory size reduction; for a given rule set, AutoPC automatically recommends a suitable algorithm, running on average 3.8 times faster than using a single algorithm on all rule sets. We further analyze the connection between prefix length and update overhead in IP lookup algorithms: long prefixes result in more memory accesses with the Tree Bitmap algorithm, while short prefixes result in large update overhead with DIR-24-8. Combining the two algorithms, we propose a hybrid algorithm, SplitLookup, that performs two orders of magnitude fewer memory accesses when updating short prefixes while keeping lookup speed close to that of DIR-24-8. Finally, we implement and optimize multiple algorithms on multi-/many-core platforms: DIR-24-8 and Tree Bitmap for IP lookup, and HyperCuts/HiCuts and their variants (Adaptive Binary Cuttings, EffiCuts, HiCuts-op, and HyperSplit-op) for multi-dimensional packet classification, with several optimization techniques for each. SplitLookup achieves up to 40 Gbps throughput on the TILEPro64 many-core processor, and HiCuts-op and HyperSplit-op achieve 10 to 20 Gbps on a single core of an Intel processor. Overall, our study reveals the connections between algorithmic techniques and rule-set features, providing insight for new algorithm design and guidelines for efficient implementation.
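For context on the DIR-24-8 trade-off discussed above, here is a minimal sketch of the classic scheme (simplified; the entry encoding is an assumption): lookups index a table by the top 24 bits of the address, and installing a short prefix must fill every slot it covers, which is why short-prefix updates are expensive.

```python
# Minimal DIR-24-8 sketch. TBL24 is indexed by the top 24 bits of an IPv4
# address; entries either hold a next hop directly (prefixes <= /24) or
# redirect into TBLLONG, indexed by a per-entry base plus the low 8 bits.

TBL24 = {}    # top-24-bit value -> ("hop", next_hop) or ("long", base)
TBLLONG = {}  # base * 256 + low-8-bit value -> next_hop

def insert_short(prefix24, plen, next_hop):
    """Install a prefix with length <= 24 (prefix24 = its top 24 bits).
    Filling 2**(24 - plen) slots is exactly the short-prefix update
    overhead the abstract refers to."""
    span = 1 << (24 - plen)
    base = prefix24 & ~(span - 1)
    for i in range(span):
        TBL24[base + i] = ("hop", next_hop)

def lookup(ip):
    """One memory access for short prefixes, two for long ones."""
    kind, val = TBL24.get(ip >> 8, ("hop", None))
    if kind == "long":
        return TBLLONG.get(val * 256 + (ip & 0xFF))
    return val

insert_short(0x0A0000, 8, "eth0")  # 10.0.0.0/8 -> eth0 fills 65536 slots
print(lookup(0x0A010203))          # 10.1.2.3 -> 'eth0'
```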
|
7 |
Quality-Impact Assessment of Software Products and Services in a Future Internet Platform. Fotrousi, Farnaz. January 2015.
The idea of a Future Internet platform is to deliver reusable, common functionalities that facilitate building a wide range of software products and services. The Future Internet platform introduced by the Future Internet Public Private Partnership (FI-PPP) project makes these common functionalities available through so-called Enablers, which can be integrated into software products and services at lower cost and complexity than development from scratch. Assessing the quality of software products and services within the platform, and gaining insight into whether that quality fulfills users' expectations, is challenging because quality propagates through heterogeneous composite software built on Enablers and infrastructure developed by third parties. The practical problem is how to assess the quality of such composite software and the impact of that quality on users' Quality of Experience (QoE). The research objective is to study an analytics-driven Quality-Impact approach, identifying how software quality analytics, together with their impact on users' QoE, can be used to assess software products and services in a Future Internet platform. The research comprises one systematic mapping study, two solution proposals, and one empirical study. The systematic mapping study produces a map giving an overview of the analytics important for managing a software ecosystem. The thesis also proposes a holistic software-human analytics solution for a Future Internet platform, with a Quality-Impact inquiry approach, exemplified in a real practice, at its core. As an early validation of the proposals, a mixed qualitative-quantitative empirical study was conducted to design a tool for eliciting user feedback, studying the effect of the instrumented feedback tool on the QoE of a software product. The findings of this licentiate thesis show that satisfaction, performance, and freedom-from-risk analytics are important groups of analytics for assessing software products and services. The proposed holistic solution builds on these results by describing how to measure the analytics and how to assess them in practice using a composition model during the lifecycle of products and services in a Future Internet platform. At its core, the Quality-Impact assessment approach can elicit relationships between software quality and the impact of that quality on stakeholders. Moreover, the early validation of the Quality-Impact approach identified suitable characteristics for a feedback tool; we found that the disturbance caused by feedback tools has a negligible impact on the perceived QoE of software products. The Quality-Impact approach helps acquire insight into the success of software products and services, contributing to the health and sustainability of the platform, and was adopted as part of the validation of the FI-PPP project. Future work will address the validation of the Quality-Impact approach in the FI-PPP or other real practices.
|
8 |
Uma abordagem baseada em aspectos topológicos para expansão de redes físicas no contexto de virtualização de redes / An approach based on topological factors for the expansion of physical infrastructure in the context of network virtualization. Luizelli, Marcelo Caggiani. January 2014.
Network virtualization is a mechanism that allows multiple virtual networks to coexist on top of a single physical substrate. One research challenge recently addressed in the literature is the efficient mapping of virtual resources onto physical infrastructures. Although this challenge has received considerable attention, state-of-the-art approaches generally exhibit a high rejection rate, i.e., the ratio between denied virtual network requests and the total number of requests is considerably high. In this thesis, we characterize the relationship between the quality of virtual network mappings and the topological structure of the underlying substrates, evaluating exact solutions of an online embedding model under different classes of network topologies. From this understanding of the topological factors that directly influence the virtual network embedding process, we propose a strategy for expanding the physical infrastructure that suggests adjustments leading to higher virtual network acceptance and, in consequence, improved physical resource utilization. The results demonstrate that most rejections occur in situations where a significant amount of resources is still available but a few saturated devices and links, depending on the connectivity features of the substrate, hinder the acceptance of new requests. Moreover, with the proposed strategy, expanding 10% to 20% of the infrastructure resources leads to a sustained increase of up to 30% in the number of accepted virtual networks and of up to 45% in resource usage compared to the original network.
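As a toy illustration of why embedding requests get rejected even when aggregate capacity remains (link mapping and the thesis's exact online model are omitted; the greedy heuristic here is an assumption for illustration, not the thesis's method):

```python
# Greedy virtual network embedding sketch: map each virtual node to the
# physical node with the most spare CPU; reject the request when no node
# can satisfy a demand. A few saturated nodes can force rejections even
# while total spare capacity across the substrate remains large.

def embed(virtual_nodes, physical_cpu):
    """virtual_nodes: {vnode: cpu_demand}; physical_cpu: {pnode: spare cpu}.
    Returns a node mapping, or None if the request must be rejected."""
    mapping, spare = {}, dict(physical_cpu)
    for vnode, demand in sorted(virtual_nodes.items(),
                                key=lambda kv: -kv[1]):  # largest demand first
        best = max(spare, key=spare.get)
        if spare[best] < demand:
            return None                  # rejection: substrate saturated
        mapping[vnode] = best
        spare[best] -= demand
    return mapping

substrate = {"p1": 50, "p2": 30, "p3": 10}
print(embed({"v1": 40, "v2": 25}, substrate))  # {'v1': 'p1', 'v2': 'p2'}
print(embed({"v1": 60}, substrate))            # None, counted as a rejection
```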
|
9 |
Enabling future internet research: the FEDERICA case. Szegedi, Peter; Riera, Jordi Ferrer; Garcia-Espin, Joan Antoni; Hidell, Markus; Sjödin, Peter; Söderman, Pehr; Ruffini, Marco; O’Mahony, Donal; Bianco, Andrea; Giraudo, Luca; Ponce de Leon, Miguel; Power, Gemma; Cervelló-Pastor, Cristina; López, Víctor; Naegele-Jackson, Susanne. January 2011.
The Internet is undoubtedly the most influential technical invention of the 20th century, one that affects and constantly changes all aspects of our day-to-day lives. Although it is hard to predict its long-term consequences, the potential future of the Internet definitely relies on future Internet research. Prior to every development and deployment project, an extensive and comprehensive research study must be performed in order to design, model, analyze, and evaluate all impacts of the new initiative on the existing environment. Given the ever-growing size of the Internet and the increasing complexity of novel Internet-based applications and services, the evaluation and validation of new ideas cannot be effectively carried out over local test beds and small experimental networks. The gap between small-scale pilots in academic and research test beds and real-size validations and actual deployments in production networks can be bridged by using virtual infrastructures. FEDERICA is one such facility: based on virtualization capabilities in both network and computing resources, it creates custom-made virtual environments and makes them available to future Internet researchers. This article provides a comprehensive overview of state-of-the-art research projects that have used the virtual infrastructure slices of FEDERICA to validate their research concepts, even when they are disruptive to the test bed’s infrastructure, and to obtain results in realistic network environments.
|