  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Evaluation of Network-Layer Security Technologies for Cloud Platforms / Utvärdering av säkerhetsteknologier för nätverksskiktet i molnplattformar

Duarte Coscia, Bruno Marcel January 2020 (has links)
With the emergence of cloud-native applications, the need to secure networks and services creates new requirements concerning automation, manageability, and scalability across data centers. Several solutions have been developed to overcome the limitations of the conventional and well-established IPsec suite as a secure tunneling solution. One strategy to meet these new requirements has been the design of software-based overlay networks. In this thesis, we assess the deployment of a traditional IPsec VPN solution against a new secure overlay mesh network called Nebula. We conduct a case study by provisioning an experimental system to evaluate Nebula in four key areas: reliability, security, manageability, and performance. We discuss the strengths of Nebula and its limitations for securing inter-service communication in distributed cloud applications. In terms of reliability, the thesis shows that Nebula falls short of meeting its own goal of host-to-host connectivity when attempting to traverse specific firewalls and NATs. With respect to security, Nebula provides certificate-based authentication and uses modern, fast cryptographic algorithms and protocols from the Noise framework. Regarding manageability, Nebula is a modern solution with a loosely coupled design that allows scalability with cloud-ready features and easier deployment than IPsec. Finally, Nebula's performance clearly shows the overhead of being a user-space software application. However, that overhead can be considered acceptable in certain server-to-server microservice interactions and is a fair trade-off for Nebula's ease of management in comparison to IPsec.
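The performance comparison described in this abstract ultimately comes down to measuring the per-packet overhead a user-space tunnel adds over the direct path. A minimal round-trip measurement harness, using only the Python standard library and a loopback echo server as a hypothetical stand-in for the remote endpoint (in the thesis setting, one endpoint would sit behind the IPsec or Nebula tunnel instead), might look like:

```python
import socket
import statistics
import threading
import time

def udp_echo_server(sock):
    """Echo datagrams back to the sender until the socket is closed."""
    while True:
        try:
            data, addr = sock.recvfrom(2048)
        except OSError:
            return  # socket closed, stop serving
        sock.sendto(data, addr)

def measure_rtt(host, port, samples=50):
    """Return the median UDP round-trip time in seconds to an echo endpoint."""
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        for i in range(samples):
            payload = str(i).encode()
            start = time.perf_counter()
            s.sendto(payload, (host, port))
            s.recvfrom(2048)
            rtts.append(time.perf_counter() - start)
    return statistics.median(rtts)

# Demonstration against a local echo server; comparing this figure for a
# direct address vs. a tunnel-interface address exposes the tunnel overhead.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

direct = measure_rtt("127.0.0.1", port)
print(f"median RTT: {direct * 1e6:.0f} us")
server.close()
```

Running the same measurement against a peer's overlay address and its underlay address gives the kind of latency-overhead data point the thesis's performance evaluation relies on.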
22

Metaheuristic based peer rewiring for semantic overlay networks / Métaheuristique pour la configuration dynamique de réseaux pair-à-pair dans le context des réseaux logiques sémantiques

Yang, Yulian 28 March 2014 (has links)
A Peer-to-Peer (P2P) platform is considered for collaborative Information Retrieval (IR). Each peer hosts a collection of text documents related to its owner's interests. Without a global indexing mechanism, peers index their documents locally and collaborate to answer queries. A decentralized protocol is designed that enables the peers to forward queries from the initiator to the peers holding relevant documents. Semantic Overlay Networks (SONs), in which peers with semantically similar resources are clustered, are the state-of-the-art solution: IR is performed efficiently by forwarding queries to the relevant peer clusters in an informed way. SONs are built and maintained mainly via peer rewiring. Specifically, each peer periodically sends walkers into its neighborhood. The walkers travel along peer connections, aiming to discover peers more similar to their initiator than its current neighbors. The P2P network thus gradually evolves from a random overlay network into a SON. 
Random and greedy walks can be applied individually, or integrated into peer rewiring as a fixed strategy throughout network evolution. However, the evolving network topology affects their performance: when peers are randomly connected with each other, a random walk explores similar peers better than a greedy walk, but as peer clusters gradually emerge, a walker discovers more similar peers by following a greedy strategy. This thesis proposes an evolving walking strategy based on Simulated Annealing (SA), which shifts from a random walk to a greedy walk as the network evolves. According to the simulation results, the SA-based strategy outperforms current approaches, both in the efficiency of building a SON and in the effectiveness of the subsequent IR. This thesis contains several advancements with respect to the state of the art in this field. First, we identify a generic peer-rewiring pattern and formalize it as a three-step procedure. Our technique provides a consistent framework for peer rewiring while leaving users and designers enough flexibility to specify its properties. Secondly, we formalize SON construction as a combinatorial optimization problem, with peer rewiring as its decentralized local-search solution. Based on this model, we propose a novel SA-based approach to peer rewiring. Our approach is validated via an extensive experimental study of the effect of network rewiring on (1) SON building and (2) IR in SONs.
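The evolving walking strategy can be sketched as a temperature-controlled mix of random and greedy neighbor selection. The similarity function, cooling schedule, and acceptance rule below are illustrative assumptions rather than the thesis's exact formulation:

```python
import math
import random

def rewire_step(peer, neighbors, similarity, temperature):
    """One peer-rewiring step: pick the walker's next hop.

    At high temperature this behaves like a random walk; as the overlay
    evolves and the temperature drops, it increasingly prefers the most
    similar neighbor (greedy walk). Simulated-annealing acceptance: a
    less similar candidate is accepted with probability exp(-delta/T).
    """
    greedy = max(neighbors, key=lambda n: similarity(peer, n))
    candidate = random.choice(neighbors)
    delta = similarity(peer, greedy) - similarity(peer, candidate)
    if delta <= 0 or random.random() < math.exp(-delta / max(temperature, 1e-9)):
        return candidate  # exploratory (random-walk-like) move
    return greedy         # exploitative (greedy-walk) move

def cooling(step, t0=1.0, alpha=0.95):
    """Geometric cooling schedule: T_k = t0 * alpha**k."""
    return t0 * alpha ** step

# Toy demonstration: peers are points on a line, similarity is negative distance.
sim = lambda a, b: -abs(a - b)
peer, neighbors = 0.0, [0.1, 0.5, 2.0, 5.0]
for k in (0, 50, 200):
    next_hop = rewire_step(peer, neighbors, sim, cooling(k))
    print(f"T={cooling(k):.4f} -> next hop {next_hop}")
```

As the temperature approaches zero, the acceptance probability for dissimilar candidates vanishes and the step always returns the most similar neighbor, which matches the random-to-greedy evolution the abstract describes.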
23

Distributed cross-layer scalable multimedia services over next generation convergent networks : architectures and performances

Le, Tien Anh 15 June 2012 (has links) (PDF)
Multimedia services are the killer applications on next-generation convergent networks. Video content is the most resource-consuming part of a multimedia flow. Video transmission, video multicast and video conferencing are the most popular types of video communication, with increasing levels of difficulty. Four main parts of distributed cross-layer scalable multimedia services over next-generation convergent networks are considered in this research work, from both the architecture and performance points of view. Firstly, we evaluate the performance of scalable multimedia transmissions over an overlay network. For that, we evaluate the performance of scalable video end-to-end transmissions with EvalSVC, which is capable of evaluating the end-to-end transmission of SVC bit-streams. The output results are both objective and subjective metrics of the video transmission. Through interfaces with real networks and an overlay simulation platform, the transmission performance of different types of SVC scalability and of AVC bit-streams on a bottleneck and an overlay network is evaluated. This evaluation is new because it is conducted on the end-to-end transmission of SVC content and not on the coding performance. The multicast mechanism for multimedia content over an overlay network is then studied in the following part of this PhD thesis. Secondly, we tackle the problems of distributed cross-layer scalable multimedia multicast over next-generation convergent networks. For that, we propose a new application-network cross-layer multi-variable cost function for application-layer multicast of multimedia delivery over convergent networks. It optimizes the variable requirements and available resources from both the application and the network layers, and can dynamically update the available resources required for reaching a particular node on the ALM's media distribution tree. 
Mathematical derivation and theoretical analysis are provided for the newly proposed cost function so that it can be applied in more general cases and different contexts. An evaluation platform is constructed of an overlay network built over a convergent underlay network comprising a simulated Internet topology and a real 4G mobile WiMAX IEEE 802.16e wireless network. While multicast is the one-to-many mechanism for distributing multimedia content, the many-to-many mechanism is studied in more depth in the next part of the thesis through a new architecture for video conferencing services. Thirdly, we study distributed cross-layer scalable video conferencing services over the overlay network. For that, an enriched human-perception-based distributed architecture for scalable video conferencing services is proposed, with theoretical models and performance analysis. Rich theoretical models of the three different architectures (the proposed perception-based distributed architecture, the conventional centralized architecture, and the perception-based centralized architecture) are constructed using queueing theory to reflect the traffic generated, transmitted and processed at the perception-based distributed leaders, the perception-based centralized top leader, and the centralized server. The performance of these three architectures is considered in four different aspects. While the distributed architecture is better than the centralized architecture for a scalable multimedia conferencing service, it brings many problems to users who use a wireless network to participate in the conferencing service. A dedicated solution for mobile users is therefore developed in the next part of the thesis. Lastly, distributed cross-layer scalable video conferencing services over the next-generation convergent network are enabled. For that, IMS-based distributed multimedia conferencing services for Next Generation Convergent Networks are proposed. [...]
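The abstract does not spell out the application-network cross-layer cost function itself. As an illustrative sketch only, the function below assumes a weighted combination of a network-layer bandwidth-load term, a delay term, and an application-layer feasibility penalty; the field names and weights are hypothetical, not the thesis's calibrated values:

```python
def alm_link_cost(avail_bw, delay, app_demand_bw, weights=(0.5, 0.3, 0.2)):
    """Illustrative cross-layer cost of attaching under a node on an ALM
    media distribution tree: combine the application-layer demand with
    network-layer residual bandwidth and delay. Weights are assumptions."""
    w_bw, w_delay, w_app = weights
    bw_term = app_demand_bw / max(avail_bw, 1e-9)  # load relative to capacity
    infeasible = 1.0 if avail_bw < app_demand_bw else 0.0
    return w_bw * bw_term + w_delay * delay + w_app * infeasible

def pick_parent(candidates, app_demand_bw):
    """Attach a joining node under the candidate parent with the lowest cost."""
    return min(candidates, key=lambda c: alm_link_cost(c["bw"], c["delay"],
                                                       app_demand_bw))

# Hypothetical candidate parents: bandwidth in Mbps, delay in seconds.
candidates = [
    {"name": "A", "bw": 8.0, "delay": 0.020},
    {"name": "B", "bw": 2.0, "delay": 0.005},
]
best = pick_parent(candidates, app_demand_bw=4.0)
print("attach under", best["name"])  # B is faster but cannot carry the stream
```

The point of the sketch is the dynamic-update property the abstract mentions: because the cost is recomputed from current `avail_bw` and `delay`, the tree can re-evaluate attachment decisions as resources change.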
24

Distributed cross-layer scalable multimedia services over next generation convergent networks : architectures and performances / Approche cross-layer pour services multimedia évolutifs distribués sur la prochaine génération de réseaux convergents : architectures et performances

Le, Tien Anh 15 June 2012 (has links)
Multi-party multimedia conferencing is the most complex type of communication, but also one of the most heavily used services on the Internet, and a killer application for 4G networks. This research focuses on three main parts of the conferencing service: the media distribution architecture, the video coding, and the integration of the service into 4G wireless infrastructures. We propose a new application-layer multicast algorithm using a distributed service architecture. The proposed algorithm takes the limits of human perception during a video conference into account in order to minimize traffic that is not necessary for the communication session. Rich theoretical models of the proposed perception-based distributed architecture, the conventional centralized architecture, and a perception-based centralized architecture are built using queueing theory to reflect the traffic generated, transmitted and processed at the distributed peers, the leaders, and the centralized server. The performance of the architectures is analysed in terms of total waiting time, end-to-end delay, and the service rate required for the total throughput. These results give the reader a global view of the proposal's performance in a fair comparison with conventional methods. To build the media distribution tree for the distributed architecture, a new application-aware multi-variable cost function is proposed. It takes the applications' varying requirements into account and dynamically updates the available resources needed to reach a particular node on the ALM distribution tree. Scalable video coding is used as the main multi-layer codec in the conference. 
To evaluate the performance of the newly proposed multi-variable cost function in a dynamic application over an advanced wireless network environment, Scalable Video Coding (SVC) transmissions over an ALM overlay network built on a real 4G/WiMAX underlay were used. We developed EvalSVC and use it as the main platform to evaluate the proposed cost function. As a common problem, a distributed architecture requires the peers to contribute part of their bandwidth and computing capacity in order to maintain the mutual interconnection of the overlay. This requirement becomes a serious problem for mobile users and for the wireless infrastructure, as the radio resources of such networks are extremely expensive; it is one of the reasons why distributed architectures have not been widely applied in next-generation (4G) networks. It is also the main reason why multimedia services such as video conferencing have had to rely on an expensive centralized architecture built on costly Media Resource Function Controllers (MRFC) via the IP Multimedia Subsystem (IMS). This research work proposes a new distributed architecture that uses the intelligence and extra capacity currently available in LTE and WiMAX base stations to reduce the bit rates each peer has to contribute in order to maintain the overlay network. This reduction saves precious radio resources and enables a distributed architecture to provide video conferencing services over 4G networks, with all the advantages of a distributed architecture, such as flexibility, scalability, small delays and lower cost. 
Moreover, this can be implemented with minimal modification of the standardized IMS platform and 4G infrastructures, saving operators and service providers excessive investment. [...]
25

Virtuální prostředí přístupu k uzlům v PlanetLab / Virtual Access to Nodes in PlanetLab

Fic, Jiří January 2008 (has links)
PlanetLab, as a distributed-systems testbed, offers a unique opportunity for developing and testing new applications for the future Internet. This work presents a design and a solution to the problem of giving a larger group of students access to PlanetLab, e.g. for solving their coursework. The designed system empowers its administrator to create and control virtual user accounts that allow all users to connect to selected nodes in PlanetLab.
26

Performance Characteristics of the Interplanetary Overlay Network in 10 Gbps Networks

Huff, John D. 01 June 2021 (has links)
No description available.
27

An Efficient and Secure Overlay Network for General Peer-to-Peer Systems

Wang, Honghao 22 April 2008 (has links)
No description available.
28

Contribution to the cross-layer optimization of intra-cluster communication mechanisms in personal networks (Contribución a la optimización intercapa de los mecanismos de comunicación intra-cluster en redes personales)

Sánchez González, Luis 13 March 2009 (has links)
In the future, computation will be human-centred: it will enter the human world, handling our goals and needs and helping us to do more by doing less. Next-generation wireless systems should provide the user with access to a broad range of services in a transparent way, independently of user location, by making the technology invisible and embedded in the natural surroundings. New systems will boost our productivity. 
They will help us automate repetitive human tasks, control a wide range of physical devices in our environment, find the information we need (when we need it, without obliging us to examine thousands of search-engine hits), and enable us to work together with other people across space and time. The achievement of this paradigm led to the identification of a set of optimizations in intra-cluster communications that were needed to fully support it. Firstly, heterogeneity will be a fundamental characteristic of next-generation wireless communications, since more and more personal devices are equipped with multiple network access technologies so that the user can reach the different services that the different operational environments provide. However, Next Generation Networks (NGN) will comprise such a diverse set of possibilities that users cannot be expected to take technical decisions on their own. It is necessary to provide mechanisms that intelligently select the optimal available access network based on context information such as user preferences, power consumption, link quality, etc. Finally, users need to trust the system that supports their personal communications. Within a personal network the most confidential information might be exchanged, and the users need to be sure that it will never be disclosed. If the system fails in these features, NGN in general, and PNs in particular, will never happen. This Thesis has contributed the mechanisms that tackle the abovementioned challenges. The design and specification of a convergence framework, the so-called Universal Convergence Layer (UCL), was the first topic addressed. This framework manages all the network access interfaces with which a device is equipped so that upper layers can use them transparently, as if the node were equipped with a single access technology. On the other hand, the UCL enables the cross-layer optimization paradigm. 
Its privileged location within the protocol stack allows the UCL to support both bottom-up and top-down information flows. In this sense, two different solutions based on cross-layer optimization have been proposed to enhance the performance and energy efficiency of the system. The first deals with selecting, at run time, the most appropriate wireless interface to use in order to improve system performance. The second leverages the striping concept in order to exploit all the available network interfaces. Finally, the UCL also plays a key role in security as an enabler of link-layer security mechanisms that ensure data confidentiality and integrity, authenticity, and non-repudiation. The techniques implemented for node authentication, combined with traffic encryption in ad-hoc networks, have been thoroughly assessed and have demonstrated their appropriateness. The biggest advance with respect to the state of the art comes from enabling users to have easy, affordable and seamless control of their devices over heterogeneous communication networks, without requiring deep technical knowledge. Users are empowered to communicate efficiently and securely with their selected interaction groups, no matter what kind of access is available for them to use, making optimal use of the resources at hand without having to manage them themselves.
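The UCL's first cross-layer mechanism, context-based selection of the access network, can be illustrated with a small sketch. The scoring fields (link quality, power draw, bandwidth) and the weighting scheme below are assumptions for illustration, not the thesis's actual decision logic:

```python
def select_interface(interfaces, prefs):
    """Illustrative UCL-style network selection: score each available
    access technology on link quality, energy cost and bandwidth, and
    hand the best one to the upper layers. Fields and weights are
    hypothetical; a real UCL would draw them from live link metrics."""
    def score(iface):
        return (prefs.get("quality", 1.0) * iface["link_quality"]
                - prefs.get("power", 1.0) * iface["power_draw"]
                + prefs.get("bandwidth", 1.0) * iface["bandwidth"] / 100.0)
    return max(interfaces, key=score)

# Hypothetical interfaces on one personal device.
interfaces = [
    {"name": "wlan0", "link_quality": 0.9, "power_draw": 0.8, "bandwidth": 54.0},
    {"name": "bt0",   "link_quality": 0.7, "power_draw": 0.2, "bandwidth": 3.0},
]

# A power-sensitive user profile can flip the decision toward the
# low-energy radio, which is the kind of context-driven trade-off
# (quality of service vs. energy efficiency) the thesis evaluates.
print(select_interface(interfaces, {"power": 0.5})["name"])  # → wlan0
print(select_interface(interfaces, {"power": 3.0})["name"])  # → bt0
```

The second mechanism, striping, would instead return several interfaces and split traffic across them; the same per-interface scores could serve as striping weights.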
