71

Avaliação de desempenho de variantes dos Protocolos DCCP e TCP em cenários representativos / Performance evaluation of DCCP and TCP protocol variants in representative scenarios

Doria, Priscila Lôbo Gonçalves 15 May 2012 (has links)
The Datagram Congestion Control Protocol (DCCP) is a prominent transport protocol that has attracted the attention of the scientific community for its rapid progress and good results. The main novelty of DCCP is its performance-first design, as in UDP, combined with congestion control capabilities, as in TCP. The literature about DCCP is still scarce and needs to be complemented to gather enough scientific elements to properly support new research. In this context, this work joins the efforts of the scientific community to analyze, measure, compare and characterize DCCP in relevant scenarios that cover many real-world situations. Three open questions were preliminarily identified in the literature: how DCCP behaves (i) when competing for the same link bandwidth with other transport protocols; (ii) against highly relevant ones (e.g., Compound TCP, CUBIC); and (iii) when competing for the same link bandwidth with Compound TCP and CUBIC under multimedia applications (e.g., VoIP). In this work, computational simulations are used to compare the performance of two DCCP variants (DCCP CCID2 and DCCP CCID3) with three highly representative TCP variants (Compound TCP, CUBIC and TCP SACK) in real-world scenarios, including concurrent use of the same link by the protocols, link errors, and assorted bandwidths, latencies and traffic patterns. The simulation results show that, under contention, in most scenarios DCCP CCID2 achieved higher throughput than Compound TCP or TCP SACK. Throughout the simulations there was a tendency for DCCP CCID3 to have lower throughput than the other chosen protocols. However, the results also showed that DCCP CCID3 achieved significantly better throughput in the presence of link errors and at higher values of latency and bandwidth, eventually outperforming Compound TCP and TCP SACK. Finally, there was a tendency for CUBIC's throughput to predominate, which can be explained by its aggressive (i.e., non-linear) algorithm for returning the transmission window to its value prior to the discard event. However, CUBIC presented the highest packet drop and the lowest delivery rate. / The Datagram Congestion Control Protocol (DCCP) is a prominent transport protocol that has been attracting the attention of the scientific community for its rapid advances and good results. The main innovation of DCCP is the prioritization of performance, as in UDP, but with the ability to perform congestion control, as in TCP. However, the literature on DCCP is still scarce and needs to be complemented to provide sufficient scientific elements for new research. In this context, this work joins the efforts of the scientific community to analyze, measure, compare and characterize DCCP in representative scenarios that incorporate diverse usage situations. Three target questions, still open in the literature, were then identified: how does DCCP behave (i) when competing for the same link with other transport protocols; (ii) against relevant transport protocols (e.g., Compound TCP, CUBIC); and (iii) when competing for the same link with Compound TCP and CUBIC, using multimedia applications (e.g., VoIP)?
In this work, computational simulations are used to compare two DCCP variants (CCID2 and CCID3) with three TCP variants (Compound TCP, CUBIC and TCP SACK) in scenarios reproducing real-world situations, including concurrent use of the link by the protocols, the presence of transmission errors on the link, and variation of bandwidth, latency, and traffic pattern and distribution. The simulation results indicate that, under contention, in most scenarios DCCP CCID2 achieved higher throughput than Compound TCP, DCCP CCID3 and TCP SACK. Throughout the simulations, DCCP CCID3 tended to have lower throughput than the other chosen protocols. However, the results also indicated that DCCP CCID3 achieved significantly better performance in the presence of transmission errors and with higher values of latency and bandwidth, even exceeding the throughput of DCCP CCID2 and TCP SACK. Finally, a tendency toward predominance of the CUBIC protocol in terms of throughput was observed, which can be explained by its aggressive (i.e., non-linear) algorithm for returning the transmission window to its value prior to discard events. However, CUBIC presented the highest packet loss and the lowest delivery rate.
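As an aside on the CUBIC behaviour described above, the following minimal Python sketch (not from the thesis; the constants C = 0.4 and beta = 0.7 follow RFC 8312, while W_max and the RTT are assumed example values) contrasts CUBIC's cubic window-growth function with a Reno-style linear recovery after a loss event:

```python
# Illustrative sketch (not from the thesis): compares how CUBIC's cubic window
# function recovers toward the pre-loss window W_max versus a Reno-style linear
# increase of one segment per RTT. Constants follow RFC 8312 (C = 0.4, beta = 0.7).

def cubic_window(t, w_max, c=0.4, beta=0.7):
    """CUBIC congestion window (in segments) t seconds after a loss event."""
    k = ((w_max * (1.0 - beta)) / c) ** (1.0 / 3.0)   # time needed to return to w_max
    return c * (t - k) ** 3 + w_max

def reno_window(t, w_max, beta=0.5, rtt=0.1):
    """Reno-style recovery: halve the window, then grow one segment per RTT."""
    return w_max * beta + t / rtt

if __name__ == "__main__":
    w_max = 100.0  # congestion window (segments) just before the loss -- assumed example
    for t in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]:
        print(f"t={t:4.1f}s  cubic={cubic_window(t, w_max):7.1f}  "
              f"reno={reno_window(t, w_max):7.1f}")
```

The cubic term brings the window back to the pre-loss value far faster than a linear increase, which is consistent with the throughput predominance the abstract reports for CUBIC.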
72

OPNET simulation of voice over MPLS considering Traffic Engineering

Radhakrishna, Deekonda, Keerthipramukh, Jannu January 2010 (has links)
Multiprotocol Label Switching (MPLS) is an emerging technology which ensures the reliable delivery of Internet services with high transmission speed and lower delays. The key feature of MPLS is its Traffic Engineering (TE), which is used to manage networks effectively for efficient utilization of network resources. The lower network delay, efficient forwarding mechanism, scalability and predictable performance of the services provided by MPLS technology make it more suitable for implementing real-time applications such as voice and video. In this thesis, the performance of a Voice over Internet Protocol (VoIP) application is compared between an MPLS network and a conventional Internet Protocol (IP) network. OPNET Modeler 14.5 is used to simulate both networks, and the comparison is made based on performance metrics such as voice jitter, voice packet end-to-end delay, voice delay variation, and voice packets sent and received. The simulation results are analyzed and show that the MPLS-based solution provides better performance in implementing the VoIP application. In this thesis, by using the voice packet end-to-end delay metric, an approach is made to estimate the minimum number of VoIP calls that can be maintained, in MPLS and conventional IP networks, with acceptable quality. This approach can help network operators or designers to determine the number of VoIP calls that can be maintained for a given network by imitating the real network in the OPNET simulator.
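For a rough sense of the call-capacity question raised above, a simple bandwidth-based estimate can be sketched as follows (illustrative only, not the thesis's delay-based OPNET method; the codec and header sizes are standard G.711/RTP/UDP/IP values and the link rate is an assumed example):

```python
# Illustrative sketch (not the thesis's delay-based method): a first-order
# bandwidth check of how many G.711 VoIP calls fit on a link. Codec and
# header sizes are standard values; the link rate is an assumed example.

def g711_ip_rate_bps(payload_bytes=160, pps=50, rtp=12, udp=8, ip=20):
    """Per-call one-way IP-layer rate for G.711 with 20 ms packetization."""
    packet_bytes = payload_bytes + rtp + udp + ip
    return packet_bytes * 8 * pps

def max_calls(link_bps, utilization=0.75):
    """Calls that fit if VoIP may use `utilization` of the link capacity."""
    return int(link_bps * utilization // g711_ip_rate_bps())

if __name__ == "__main__":
    print("per-call rate:", g711_ip_rate_bps() / 1000, "kbit/s")   # 80.0
    print("calls on a 10 Mbit/s link:", max_calls(10_000_000))     # 93
```

A delay-based estimate, as used in the thesis, additionally checks that the end-to-end delay of each added call stays within the acceptable-quality budget.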
73

Software download over DoIP in Android

Lingfors, Anders January 2015 (has links)
The Android operating system, originally intended for smartphone devices, is now finding its way into cars and other vehicles. While the Android system already implements support for system updates, it is not suitable for use in the automotive domain. It is not compatible with modern automotive standards for diagnostic communication such as ISO 14229: Unified Diagnostic Services (UDS). This means that new tools, procedures and software would be needed to allow an Android device to be updated by a service technician in a repair shop or in the field. A better approach would be to add support for automotive diagnostic communication in Android. This way, the tools and supporting infrastructure that already exist can still be used. We have developed a solution for diagnostic communication on Android that is both modular and compatible with existing automotive standards. By using the standard ISO 13400: Diagnostic communication over Internet Protocol (DoIP), this solution enables both updating the system software on the Android device itself, as well as diagnostic communication with the ECUs on the vehicle's internal CAN network. Thus, an existing diagnostic port based on a slower communication protocol such as CAN or J1587 could theoretically be replaced completely by the Android device's Ethernet port. Finally, we have evaluated the performance of our implementation under various settings and conditions. These include varying the maximum size of a diagnostic message, different network settings, downloading software over a Wi-Fi link, and downloading data to multiple devices simultaneously. / The Android operating system, originally intended for smartphone devices, is now also found in cars and other types of vehicles. Although the Android system already implements support for system updates, it is not suitable for use in the automotive industry. It is not compatible with modern automotive standards for diagnostic communication, such as ISO 14229: Unified Diagnostic Services (UDS). This means that new tools, procedures and software would be required to allow an Android device to be updated by a service technician in a workshop or in the field. A better approach would be to add support for diagnostic communication in Android. In this way, already existing tools and supporting infrastructure could continue to be used. We have developed a solution for diagnostic communication in Android that is both modular and compatible with existing automotive standards. By using ISO 13400: Diagnostic communication over Internet Protocol (DoIP), this solution enables both updating of the system software on the Android device itself and diagnostic communication with the other control units on the vehicle's internal CAN network. Thus, an existing diagnostic port based on a slower communication protocol such as CAN or J1587 could theoretically be replaced entirely by the Android device's Ethernet port. Finally, we have tested the performance of our implementation under varying settings and conditions. These include, among other things, varying the maximum size of a diagnostic message, different network settings, updating the software over a Wi-Fi link, and updating the software on several devices simultaneously.
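To make the DoIP framing concrete, the following Python sketch (not code from the thesis, whose implementation targets Android) packs an ISO 13400 diagnostic-message frame around a UDS request; the generic header layout and the 0x8001 payload type follow the standard, while the addresses and UDS bytes are made-up examples:

```python
# Illustrative sketch (not code from the thesis): packing a DoIP (ISO 13400)
# "diagnostic message" frame carrying a UDS request. The header layout
# (version, inverse version, payload type, payload length) follows ISO 13400-2;
# the logical addresses and UDS bytes are hypothetical examples.
import struct

DOIP_VERSION = 0x02
DIAG_MESSAGE = 0x8001  # payload type: diagnostic message

def doip_diagnostic_message(source_addr, target_addr, uds_payload):
    """Build a DoIP diagnostic-message frame around a raw UDS payload."""
    payload = struct.pack(">HH", source_addr, target_addr) + uds_payload
    header = struct.pack(">BBHI",
                         DOIP_VERSION,
                         DOIP_VERSION ^ 0xFF,   # inverse protocol version
                         DIAG_MESSAGE,
                         len(payload))
    return header + payload

if __name__ == "__main__":
    # UDS DiagnosticSessionControl (0x10), programming session (0x02) -- example only.
    frame = doip_diagnostic_message(0x0E00, 0x1D01, bytes([0x10, 0x02]))
    print(frame.hex())
```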
74

Pokročilý systém umožňující zálohování počítačových dat / Advanced system for computer data back-up

Sobek, Jiří January 2012 (has links)
This master thesis is mainly focused on backup systems and describes each backup technique in detail. The next main point of this thesis is explaining the functionality of IPv4 and FTP, which are closely related to the topic. The outcome is a backup application written in the Java language, which is capable of backing up files to an FTP server or to a local/network storage area. The backup application also allows settings for automatic backup and for restoring files from the storage area. Finally, measurements were made that point out the advantages and disadvantages of the transfer media and practically demonstrate the logic of creating the backup system. The goal was the creation of a multiplatform backup application.
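The core backup-over-FTP idea can be sketched briefly; the thesis's application is written in Java, so the Python fragment below is only an illustration of the concept, with hypothetical host, credentials and paths:

```python
# Minimal concept sketch (the thesis's application is written in Java; this is a
# Python illustration only): archive a directory and store it on an FTP server.
# Host, credentials and paths are hypothetical.
import ftplib
import shutil
from datetime import datetime

def backup_to_ftp(source_dir, host, user, password, remote_dir="backups"):
    # Create a timestamped zip archive of the source directory.
    archive = shutil.make_archive(
        f"backup-{datetime.now():%Y%m%d-%H%M%S}", "zip", source_dir)
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        with open(archive, "rb") as fh:
            ftp.storbinary(f"STOR {archive}", fh)   # binary upload of the archive
    return archive

if __name__ == "__main__":
    backup_to_ftp("/home/user/documents", "ftp.example.com", "user", "secret")
```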
75

Evoluční návrh hašovacích funkcí / Evolutionary Design of Hash Functions

Kidoň, Marek January 2016 (has links)
Hash tables are fast associative array implementations which have become part of the modern world of information technology and, thanks to their simplicity, have become very popular among computer programmers. The choice of a proper hash function is very important. An improperly selected hash function can result in poor hash table performance and poor performance of its application. Currently there are many exceptional implementations of general hash functions. Such functions are not constrained to a concrete set of inputs; they perform well on any input. On the other hand, if we know the input domain we can design a specific hash function for the desired application, thus reaching better levels of performance compared to a general hash function. However, hash function design is not trivial. There are no rules, standards, guides or automated tools that would help us with such a task. In the case of manual design, the hash function author has to rely on his/her knowledge, experience, inventiveness and intuition. For such complicated tasks it is sometimes advantageous to choose a different path and use techniques such as evolutionary algorithms. Natural computing is an approach to solving certain problems that is inspired by the process of species reproduction as described by Charles Darwin. In this thesis we design hash functions for the domain of IP addresses, which serve as a unique network device interface identifier in Internet Protocol networks. The chosen subset of natural computing is genetic programming, a very specific technique that is an adequate approach to our problem thanks to its properties. Evolutionarily designed hash functions offer good properties: they outperform state-of-the-art generic, human-created hash functions in terms of speed and collision resistance.
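A minimal sketch of the fitness-evaluation step that such an evolutionary approach needs is shown below (illustrative only; the hand-written candidate stands in for an evolved expression, and the keys are uniformly random 32-bit values rather than real IP traffic):

```python
# Illustrative sketch (simplified; the thesis evolves hash functions with genetic
# programming): the fitness-evaluation step only -- score a candidate IPv4 hash
# by counting collisions when hashing random addresses into a fixed-size table.
# The candidate function below is a hand-written example, not an evolved one.
import random
from collections import Counter

def candidate_hash(ip, table_bits=16):
    """Example multiply-xor-shift hash of a 32-bit IPv4 address."""
    h = (ip * 0x9E3779B1) & 0xFFFFFFFF    # multiplicative mixing step
    h ^= h >> 15
    return h & ((1 << table_bits) - 1)

def fitness(hash_fn, n_keys=50_000, table_bits=16, seed=1):
    """Lower is better: number of keys that land in an already-used bucket."""
    rng = random.Random(seed)
    buckets = Counter(hash_fn(rng.getrandbits(32), table_bits)
                      for _ in range(n_keys))
    return sum(count - 1 for count in buckets.values())

if __name__ == "__main__":
    print("collisions:", fitness(candidate_hash))
```

In a genetic-programming loop, this score (possibly combined with a speed measure) would rank candidate expression trees before selection and variation.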
76

Enabling Pro-active Problem Management by Predictive Modelling : Data mining and statistical analysis of past problems to enable pro-active actions for Internet Protocol Television

Dahlin, Anton January 2017 (has links)
Service providers strive to guarantee a certain level of quality of their services, to stay relevant, to keep their customers satisfied and to avoid customer churn. IPTV is such a service. To be able to guarantee quality of service and uptime, a good problem management system is vital. A problem management system is a system for handling the occurrences and solutions of all problems and errors. Its primary goal is to detect, prevent and solve errors and incidents, but also to prevent recurring errors and to minimize the timeline of existing problems by finding their root causes. On first contact with a problem, human operators classify it. A problem with problem management is that it is reactive and problems keep happening. To be able to become pro-active, problems need to be predicted before they occur. This thesis evaluates past problems to enable pro-active actions, through statistics and actionable data mining, using performance data from the IPTV service and set-top boxes that is not utilized today by the company this thesis is carried out for. The results show that pro-active actions are enabled by adding supervision of the performance data and a packet error ratio. The results also show that some large problems get classified too late. By utilizing machine learning and predictive modelling (logistic regression and an artificial neural network), future disturbance incidents can be predicted. Using disturbance data gave model accuracies of 80% and 80%, with Matthews correlation coefficients of 0.50 and 0.56, respectively. Given information on affected active service consumers, if provided, customer impact could be assessed more easily. The larger part of this project went to business understanding, data understanding, data preparation and analysis. By utilizing the previously unutilized performance data, the problem management system can become more proactive, with the addition of supervision of the IPTV performance data and predictive modelling. / Service providers strive to be able to guarantee a certain level of quality for the services they offer, in order to stay relevant, keep their customers satisfied and avoid customer churn. IPTV is such a service. To be able to guarantee high service quality and high uptime, a good problem management system is important. A problem management system is a system that handles the occurrence of, and solutions to, all problems and faults. Its primary goal is to detect, prevent and resolve incidents and faults, but also to prevent recurring faults and to minimize the timeline of active faults by finding their root cause. On first contact with a problem, human operators classify it. One problem with problem management systems is that they are reactive and problems keep occurring. To be able to become proactive, problems must be predictable before they occur. This thesis evaluates and analyses past problems and incidents to enable proactivity, through statistics and actionable data mining, using previously unused performance data from the IPTV service and set-top boxes. Results show that proactive actions are enabled by adding monitoring of performance data and a packet error ratio. Results also show that large problems are classified too late. By using machine learning and predictive models, logistic regression and artificial neural networks, upcoming incidents can be predicted.
Using disturbance data gave model accuracies of 80% and 80%, with Matthews correlation coefficients of 0.50 and 0.56, respectively. Information on the number of affected active customers, if implemented, helps to assess customer impact. The larger part of this project was spent on business understanding, data understanding, data preparation and analysis. By using the performance data, the problem management system can become proactive, with the addition of monitoring of IPTV service and set-top box data and predictive modelling.
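As an illustration of the modelling and the two metrics quoted in the abstract (accuracy and the Matthews correlation coefficient), the following sketch fits a logistic-regression classifier; it is not the thesis's pipeline, and the IPTV performance data is replaced by a synthetic stand-in:

```python
# Illustrative sketch only: a logistic-regression disturbance predictor reporting
# accuracy and the Matthews correlation coefficient. The data is synthetic;
# the thesis uses real IPTV/set-top-box performance data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

# Stand-in for set-top-box performance features and a "disturbance" label.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy:", round(accuracy_score(y_test, pred), 2))
print("MCC:     ", round(matthews_corrcoef(y_test, pred), 2))
```

MCC is reported alongside accuracy because, with imbalanced disturbance labels, accuracy alone can look high even for a model that rarely predicts the rare class.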
77

Using Link Layer Information to Enhance Mobile IP Handover Mechanism. An investigation into the design, analysis and performance evaluation of the enhanced Mobile IP handover mechanism using link layer information schemes in the IP environment.

Alnas, Mohamed J.R. January 2010 (has links)
Mobile computing is becoming increasingly important, due to the rise in the number of portable computers and the desire to have continuous network connectivity to the Internet, irrespective of the physical location of the node. We have also seen a steady growth of the market for wireless communication devices. Such devices can only have the effect of increasing the options for making connections to the global Internet. The Internet infrastructure is built on top of a collection of protocols called the TCP/IP protocol suite. Transmission Control Protocol (TCP) and Internet Protocol (IP) are the core protocols in this suite. There are currently two standards: one to support the current IPv4 and one for the upcoming IPv6 [1]. IP requires the location of any node connected to the Internet to be uniquely identified by an assigned IP address. This raises one of the most important issues in mobility because, when a node moves to another physical location, it has to change its IP address. However, the higher-level protocols require the IP address of a node to be fixed for identifying connections. The Mobile Internet Protocol (Mobile IP) is an extension to the Internet Protocol proposed by the Internet Engineering Task Force (IETF) that addresses this issue. It enables mobile devices to stay connected to the Internet regardless of their locations, without changing their IP addresses and, therefore, an ongoing IP session will not be interrupted [2, 3, 4]. More precisely, Mobile IP is a standard protocol that builds on the Internet Protocol by making mobility transparent to applications and higher-level protocols like TCP. However, before Mobile IP can be broadly deployed, there are still several technical barriers, such as long handover periods and packet loss that have to be overcome, in addition to other technical obstacles, including handover performance, security issues and routing efficiency [7]. This study presents an investigation into developing new handover mechanisms based on link layer information in Mobile IP and fast handover in Mobile IPv6 environments. The main goal of the developed mechanisms is to improve the overall IP mobility performance by reducing packet loss, minimizing signalling overheads and reducing the handover processing time. These models include the development of a cross-layer handover scheme using link layer information and Mobile Node (MN) location information to improve the performance of the communication system by reducing transmission delay, packet loss and registration signalling overheads. Finally, the new schemes are developed, tested and validated through a set of experiments to demonstrate the relative merits and capabilities of these schemes.
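A crude way to see where link-layer information helps is the usual additive decomposition of Mobile IP handover latency; the sketch below uses assumed example values, not measurements from this study:

```python
# Illustrative sketch (numbers are assumed examples, not results from the thesis):
# Mobile IP handover latency is often decomposed into link-layer handover,
# movement detection, address configuration and registration. A link-layer
# trigger lets movement detection start before the old link is lost, which is
# the kind of saving the proposed cross-layer schemes target.

def handover_latency_ms(l2=50, movement_detection=1000, coa_config=500,
                        registration_rtt=200, l2_trigger=False):
    """Rough additive latency model; a trigger removes the detection wait."""
    if l2_trigger:
        movement_detection = 0          # anticipated via link-layer information
    return l2 + movement_detection + coa_config + registration_rtt

if __name__ == "__main__":
    print("standard Mobile IP :", handover_latency_ms(), "ms")
    print("with L2 trigger    :", handover_latency_ms(l2_trigger=True), "ms")
```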
78

HASH STAMP MARKING SCHEME FOR PACKET TRACEBACK

NEIMAN, ADAM M. January 2005 (has links)
No description available.
79

Homing-Architekturen für Multi-Layer Netze: Netzkosten-Optimierung und Leistungsbewertung / Homing Architectures in Multi-Layer Networks: Cost Optimization and Performance Analysis

Palkopoulou, Eleni 21 December 2012 (has links) (PDF)
Cross-layer control of multi-layer networks enables the realization of advanced network architectures as well as novel concepts for increasing resilience. The subject of this work is a new, resource-saving concept for compensating core router failures in IP networks. Core router failures disconnect the access routers attached to them from the network. Access routers are therefore usually connected to two or more different core routers (dual homing), which however requires a doubling of the access capacity in the IP network. In the new scheme, dual homing with shared backup router resources (DH-SBRR), each access router is connected on the one hand to a core router of the IP network and on the other hand to a network element of the underlying transport layer. In this way, backup router resources, which can be kept at arbitrary locations in the IP network, can be switched via the transport network into the place of a failed core router. This switchover is controlled by a cross-layer control plane spanning the transport and IP networks, for example based on GMPLS. Since current state (e.g., routing tables) must also be transferred to the backup router resources when router resources are switched over, the new scheme also includes concepts for router virtualization. To compare and evaluate the performance of the new DH-SBRR scheme, this work contrasts different access router homing variants with respect to network cost, network availability, recovery time and network energy consumption. IP over WDM and IP over OTN (ODU) are considered as multi-layer network scenarios. To determine the minimum network cost, a generic multi-layer network optimization model has been developed that can be applied to different homing architectures. In addition to the optimization model for minimizing network cost, a model variant for minimizing energy consumption is presented. Heuristic solution methods are needed to reduce the computation time for solving the optimization problems and thus to allow larger network scenarios to be studied; a new solution heuristic, specifically tailored to the multi-layer optimization problems, has therefore been developed in this work. The network cost optimization shows that the use of DH-SBRR can realize significant cost savings compared to conventional homing architectures. Changes in traffic load, in the cost of IP network elements, or in the network topology have no significant influence on this result. Besides the cost and energy saving potential, the effects on network availability and recovery time have also been investigated. Lower bounds on the end-to-end availability can be given for the different homing architectures. To determine the recovery time when DH-SBRR is used, a dedicated analytical model has been developed and evaluated. With it, the DH-SBRR scheme can be parameterized to meet given recovery time requirements (as demanded, for example, for certain services).
/ The emergence of multi-layer networking capabilities opens the path for the development of advanced network architectures and resilience concepts. In this dissertation we propose a novel resource-efficient homing scheme: dual homing with shared backup router resources. The proposed scheme realizes shared router-level redundancy, enabled by the emergence of control plane architectures such as generalized multi-protocol label switching. Additionally, virtualization schemes complement the proposed architecture. Different homing architectures are examined and compared under the prism of cost, availability, recovery time and energy efficiency. Multiple network layers are considered in Internet protocol over wavelength division multiplexing as well as Internet protocol over optical data unit settings - leading to the development of multi-layer optimization techniques. A generic multi-layer network design mathematical model, which can be applied to different homing architecture considerations, is developed. The optimization objective can be adapted to either minimizing the cost for network equipment or the power consumption of the network. In order to address potential issues with regard to computational complexity, we develop a novel heuristic approach specifically targeting the proposed architecture. It is shown that significant cost savings can be achieved - even under extreme changes in the traffic demand volume, in the cost for different types of network equipment, as well as in the network topology characteristics. In order to evaluate occurring tradeoffs in terms of performance, we study the effects on availability and recovery time. We proceed to derive lower bounds on end-to-end availability for the different homing architectures. Additionally, an analytical recovery time model is developed and evaluated. We investigate how service-imposed maximum outage requirements have a direct effect on the setting of the proposed architecture.
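The availability comparison between homing variants can be illustrated with the textbook series/parallel formulas (a sketch with assumed availability figures, not the dissertation's lower-bound derivation):

```python
# Illustrative sketch (textbook series/parallel availability only; the dissertation
# derives proper lower bounds per architecture): comparing single homing with dual
# homing of an access router to two core routers. Availability figures are assumed.

def series(*avail):
    """All components must be up."""
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(a1, a2):
    """Up if at least one of two independent paths is up."""
    return 1.0 - (1.0 - a1) * (1.0 - a2)

if __name__ == "__main__":
    access_link, core_router = 0.9995, 0.9999
    single = series(access_link, core_router)
    dual = parallel(series(access_link, core_router),
                    series(access_link, core_router))
    print(f"single homing: {single:.6f}")
    print(f"dual homing  : {dual:.6f}")
```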
80

An Extension Of Multi Layer IPSec For Supporting Dynamic QoS And Security Requirements

Kundu, Arnab 02 1900 (has links) (PDF)
Governments, military, corporations, financial institutions and others exchange a great deal of confidential information using the Internet these days. Protecting such confidential information and ensuring its integrity and origin authenticity are of paramount importance. There exist protocols and solutions at different layers of the TCP/IP protocol stack to address these security requirements. Application-level encryption, viz. PGP for secure mail transfer, TLS-based secure TCP communication, and IPSec for providing IP layer security, are among these security solutions. Due to its scalability, the wide acceptance of the IP protocol, and its application-independent character, the IPSec protocol has become a standard for providing Internet security. IPSec provides two protocols, namely the Authentication Header (AH) and the Encapsulating Security Payload (ESP). Each protocol can operate in two modes, viz. transport and tunnel mode. The AH provides data origin authentication, connectionless integrity and anti-replay protection. The ESP provides all the security functionalities of AH along with confidentiality. The IPSec protocols provide end-to-end security for an entire IP datagram or the upper layer protocols of the IP payload, depending on the mode of operation. However, this end-to-end model of security restricts performance enhancement and security related operations of intermediate networking and security devices, as they cannot access or modify transport and upper layer headers, and original IP headers in the case of tunnel mode. These intermediate devices include routers providing Quality of Service (QoS), TCP Performance Enhancement Proxies (PEP), application level proxy devices and packet filtering firewalls. The interoperability problem between IPSec and intermediate devices has been addressed in the literature. Transport friendly ESP (TF-ESP), Transport Layer Security (TLS), splitting of a single IPSec tunnel into multiple tunnels, and Multi Layer IPSec (ML-IPSec) are a few of the proposed solutions. The ML-IPSec protocol solves this interoperability problem without violating the end-to-end security for the data or exposing some important header fields, unlike the other solutions. The ML-IPSec uses a multilayer protection model in place of the single end-to-end model. Unlike IPSec, where the scope of encryption and authentication applies to the entire IP datagram, this scheme divides the IP datagram into zones. It applies different protection schemes to different zones. When ML-IPSec protects a traffic stream from its source to its destination, it first partitions the IP datagram into zones and applies zone-specific cryptographic protections. During the flow of the ML-IPSec protected datagram through an authorized intermediate gateway, certain type I zones of the datagram may be decrypted and re-encrypted, but the other zones will remain untouched. When the datagram reaches its destination, the ML-IPSec will reconstruct the entire datagram. The ML-IPSec protocol, however, suffers from the problem of static configuration of zones and zone-specific cryptographic parameters before the commencement of the communication. Static configuration requires a priori knowledge of the routing infrastructure and manual configuration of all intermediate nodes. While this may not be an issue in a geo-stationary satellite environment using TCP-PEP, it could pose problems in a mobile or distributed environment, where many stations may be in concurrent use.
The ML-IPSec endpoints may not be trusted by all intermediate nodes in a mobile environment for manual configuration without any prior arrangement providing the mutual trust. The static zone boundary of the protocol forces one to ignore the presence of TCP/IP datagrams with variable header lengths (in the case of TCP or IP headers with OPTION fields). Thus ML-IPSec will not function correctly if the endpoints change the use of IP or TCP options, especially in the case of tunnel mode. The zone mapping proposed in ML-IPSec is static in nature. This forces one to configure the zone mapping before the commencement of the communication. It restricts the protocol from dynamically changing the zone mapping to provide access to intermediate nodes without terminating the existing ML-IPSec communication. The ML-IPSec endpoints can, of course, configure the zone mapping with the maximum number of zones. This will lead to unnecessary overheads that increase with the number of zones. Again, static zone mapping could pose problems in a mobile or distributed environment, where communication paths may change. Our extension to the ML-IPSec protocol, called Dynamic Multi Layer IPSec (DML-IPSec), proposes a multi-layer variant with the capabilities of dynamic zone configuration and sharing of cryptographic parameters between IPSec endpoints and intermediate nodes. It also accommodates IP datagrams with variable length headers. The DML-IPSec protocol redefines some of the IPSec and ML-IPSec fundamentals. It proposes significant modifications to the datagram processing stage of ML-IPSec and proposes a new key sharing protocol to provide the above-mentioned capabilities. The DML-IPSec supports the AH and ESP protocols of conventional IPSec, with some modifications required for providing separate cryptographic protection to different zones of an IP datagram. This extended protocol defines a zone as a set of non-overlapping and contiguous partitions of an IP datagram, unlike the case of ML-IPSec where a zone may consist of non-contiguous portions. Every zone is provided with cryptographic protection independent of other zones. The DML-IPSec categorizes zones into two separate types depending on the accessibility requirements at the intermediate nodes. The first type of zone, called a type I zone, is defined on the headers of the IP datagram and is required for examination and modification by intermediate nodes. One type I zone may span a single header or a series of contiguous headers of an IP datagram. The second type of zone, called the type II zone, is meant for the payload portion and is kept secure between the endpoints of the IPSec communication. The single type II zone starts immediately after the last type I zone and spans till the end of the IP datagram. If no intermediate processing is required during the entire IPSec session, the single type II zone may cover the whole IP datagram; otherwise the single type II zone follows one or more type I zones of the IP datagram. The DML-IPSec protocol uses a mapping from the octets of the IP datagram to different zones, called a zone map, for partitioning an IP datagram into zones. The zone map contains logical boundaries for the zones, unlike the physical byte-specific boundaries of ML-IPSec. The physical boundaries are derived on-the-fly, using either the implicit header lengths or the explicit header length fields of the protocol headers. This property of the DML-IPSec zones enables it to accommodate datagrams with variable header lengths.
Another important feature of a DML-IPSec zone is that the zone maps need not remain constant throughout the entire lifespan of the IPSec communication. The key sharing protocol may modify any existing zone map for providing service to some intermediate node. The DML-IPSec also redefines the Security Association (SA), a relationship between two endpoints of an IPSec communication that describes how the entities will use security services to communicate securely. In the case of DML-IPSec, several intermediate nodes may participate in defining these security protections for the IP datagrams. Moreover, the scope of one particular set of security protection is valid on a single zone only. So a single SA is defined for each zone of an IP datagram. Finally, all these individual zonal SAs are combined to represent the security relationship of the entire IP datagram. The intermediate nodes can have the cryptographic information of the relevant type I zones. The cryptographic information related to the type II zone is, however, hidden from any intermediate node. The key sharing protocol is responsible for selectively sharing this zone information with the intermediate nodes. The DML-IPSec protocol has two basic components. The first one is for the processing of datagrams at the endpoints as well as intermediate nodes. The second component is the key sharing protocol. The endpoints of a DML-IPSec communication involve two types of processing. The first one, called Outbound processing, is responsible for generating a DML-IPSec datagram from an IP datagram. It first derives the zone boundaries using the zone map and individual header field lengths. After this partitioning of the IP datagram, zone-wise encryption is applied (in the case of ESP). Finally, zone-specific authentication trailers are calculated and appended after each zone. The other one, Inbound processing, is responsible for generating the original IP datagram from a DML-IPSec datagram. The first step in the inbound processing, the derivation of zone boundaries, is significantly different from that of outbound processing, as the length fields of zones remain encrypted. After receiving a DML-IPSec datagram, the receiver starts decrypting type I zones till it decrypts the header length field of the header/s. This is followed by zone-wise authentication verification and zone-wise decryption. The intermediate nodes process an incoming DML-IPSec datagram depending on the presence of the security parameters for that particular DML-IPSec communication. In the absence of the security parameters, the key sharing protocol gets executed; otherwise, all the incoming DML-IPSec datagrams get partially decrypted according to the security association and zone mapping at the inbound processing module. After the inbound processing, the partially decrypted IP datagram traverses the networking stack of the intermediate node. Before the IP datagram leaves the intermediate node, it is processed by the outbound module to reconstruct the DML-IPSec datagram. The key sharing protocol for sharing zone related cryptographic information among the intermediate nodes is the other important component of the DML-IPSec protocol. This component is responsible for dynamically enabling intermediate nodes to access zonal information as required for performing specific services relating to quality or security.
Whenever a DML-IPSec datagram traverses an intermediate node that requires access to some of the type I zones, the inbound security database is searched for cryptographic parameters. If no entry is present in the database, the key sharing protocol is invoked. The very first step in this protocol is a header inaccessible message from the intermediate node to the source of the DML-IPSec datagram. The intermediate node also mentions the protocol headers that it requires to access in the body portion of this message. This first phase of the protocol, called the Zone reorganization phase, is responsible for deciding the zone mapping to provide access to intermediate nodes. If the current zone map cannot serve the header request, the DML-IPSec endpoint reorganizes the existing zone map in this phase. The next phase of the protocol, called the Authentication phase, is responsible for verifying the identity of the intermediate node to the source of the DML-IPSec session. Upon successful authentication, the third phase, called the Shared secret establishment phase, commences. This phase is responsible for the establishment of a temporary shared secret between the source and intermediate nodes. This shared secret is to be used as the key for encrypting the actual message transfer of the DML-IPSec security parameters in the next phase of the protocol. The final phase of the protocol, called the Security parameter sharing phase, is solely responsible for the actual transfer of the security parameters from the source to the intermediate nodes. This phase is also responsible for updating the security and policy databases of the intermediate nodes. The successful execution of the four phases of the key sharing protocol enables the DML-IPSec protocol to dynamically modify the zone map for providing access to some header portions for intermediate nodes and also to share the necessary cryptographic parameters required for accessing the relevant type I zones, without disturbing an existing DML-IPSec communication. We have implemented DML-IPSec for the ESP protocol according to the definition of zones, along with the key sharing algorithm. RHEL version 4 and Linux kernel version 2.6.23.14 were used for the implementation. We implemented the multi-layer IPSec functionalities inside the native Linux implementation of the IPSec protocol. The SA structure was updated to hold the necessary SA information for multiple zones instead of the single SA of normal IPSec. The zone mapping for different zones was implemented along with the kernel implementation of the SA. The inbound and outbound processing modules of the IPSec endpoints were re-implemented to incorporate multi-layer IPSec capability. We also implemented the necessary modules for providing partial IPSec processing capabilities at the intermediate nodes. The key sharing protocol consists of some user space utilities and corresponding kernel space components. We use the ICMP protocol for the communications required for the execution of the protocol. At the kernel level, a pseudo character device driver was implemented to update the kernel space data structures, and the necessary modifications were made to the relevant kernel space functions. User space utilities and a corresponding kernel space interface were provided for updating the security databases. As DML-IPSec ESP uses the same Security Policy mechanism as IPSec ESP, existing utilities (viz. setkey) are used for updating the security policy. However, the configuration of the SA is significantly different, as it depends on the DML-IPSec zones.
The DML-IPSec ESP implementation uses the existing utilities (setkey and racoon) for configuration of the sole type II zone. The type I zones are configured using the DML-IPSec application. The key sharing protocol also uses this application to reorganize the zone mapping and zone-wise cryptographic parameters. The above feature enables one to use the default IPSec mechanism for the configuration of the sole type II zone. For experimental validation of DML-IPSec, we used a testbed in which an ESP tunnel is configured between the two gateways GW1 and GW2. IN acts as an intermediate node and is installed with several intermediate applications. Clients C11 and C21 are connected to GW1 and GW2 respectively. We carried out detailed experiments for validating our solution w.r.t. the firewalling service. We used stateful packet filtering using iptables along with the string match extension at IN. First, we configured the firewall to allow only FTP communication (using the port information of the TCP header and the IP addresses of the inner IP header) between C11 and C21. In the second experiment, we configured the firewall to allow only a Web connection between C11 and C21 using the Web address of C11 (using the HTTP header, the port information of the TCP header and the IP addresses of the inner IP header). In both experiments, we initiated the FTP and Web sessions before the execution of the key sharing protocol. The sessions could not be established, as access to the upper layer headers was denied. After the execution of the key sharing protocol, the sessions could be established, showing the availability of the protocol headers to the iptables firewall at IN following the successful key sharing. We use the record route option of the ping program to validate the claim of handling datagrams with variable header lengths. This option of the ping program records the IP addresses of all the nodes traversed during a round trip path in the IP OPTION field. As we used ESP in tunnel mode between GW1 and GW2, the IP addresses would be recorded inside the encrypted inner IP header. We executed ping between C11 and C21 and observed the record route output. Before the execution of the key sharing protocol, the IP addresses of IN were absent from the record route output. After the successful execution of the key sharing protocol, the IP addresses for IN were present in the record route output. The DML-IPSec protocol introduces some processing overhead and also increases the datagram size as compared to IPSec and ML-IPSec. It increases the datagram size compared to standard IPSec; however, this increase in IP datagram size is present in the case of ML-IPSec as well. The increase in IP datagram length depends on the number of zones: as the number of zones increases, this overhead also increases. We obtained experimental results on the processing delay introduced by DML-IPSec processing. For this purpose, we executed the ping program from C11 to C21 in the testbed setup for the following cases: 1. ML-IPSec with one type I and one type II zone, and 2. DML-IPSec with one type I and one type II zone. We observe around a 10% increase in RTT for DML-IPSec with two dynamic zones over that of ML-IPSec with two static zones. This overhead is due to the on-the-fly derivation of the zone length and related processing. The above experiment analyzes the processing delay at the endpoints without intermediate processing. We also analyzed the effect of intermediate processing due to the dynamic zones of DML-IPSec. We used the iptables firewall in the above-mentioned experiment.
The RTT value for DML-IPSec with dynamic zones increases by less than 10% over that of ML-IPSec with static zones. To summarize our work, we have proposed an extension to the multilayer IPSec protocol, called Dynamic Multilayer IPSec (DML-IPSec). It is capable of dynamic modification of zones and sharing of cryptographic parameters between endpoints and intermediate nodes using a key sharing protocol. The DML-IPSec also accommodates datagrams with variable header lengths. The above-mentioned features enable any intermediate node to dynamically access the required header portions of any DML-IPSec protected datagram. Consequently, they make DML-IPSec suited for providing IPSec over mobile and distributed networks. We also provide a complete implementation of the ESP protocol and provide experimental validation of our work. We find that our work provides dynamic support for QoS and security services without any significant extra overhead compared to that of ML-IPSec. The thesis begins with an introduction to communication security requirements in TCP/IP networks. Chapter 2 provides an overview of communication security protocols at different layers. It also describes the details of the IPSec protocol suite. Chapter 3 provides a study of the interoperability issues between IPSec and intermediate devices and discusses different solutions. Our proposed extension to the ML-IPSec protocol, called Dynamic ML-IPSec (DML-IPSec), is presented in Chapter 4. The design and implementation details of DML-IPSec in the Linux environment are presented in Chapter 5. It also provides experimental validation of the protocol. In Chapter 6, we summarize the research work, highlight the contributions of the work and discuss the directions for further research.
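As a concept-level illustration of per-zone protection (not the DML-IPSec implementation, which is realized inside the Linux kernel IPSec stack), the following Python sketch splits a datagram into type I header zones and a single type II payload zone and appends a per-zone authentication trailer; the zone boundaries and keys are arbitrary examples:

```python
# Concept sketch only (not the DML-IPSec kernel implementation): partition a
# datagram into type I zones (headers an intermediate node may need) and a
# single type II zone (payload), then give every zone its own key and
# authentication trailer. Zone boundaries and keys here are arbitrary examples.
import hmac
import hashlib

def protect(datagram: bytes, zone_ends: list, keys: list):
    """Split at the given offsets and append a per-zone HMAC-SHA256 trailer."""
    assert len(zone_ends) + 1 == len(keys), "one key per zone"
    bounds = [0] + zone_ends + [len(datagram)]
    out = b""
    for i, key in enumerate(keys):
        zone = datagram[bounds[i]:bounds[i + 1]]
        out += zone + hmac.new(key, zone, hashlib.sha256).digest()
    return out

if __name__ == "__main__":
    datagram = bytes(20) + bytes(20) + b"application payload"   # IP + TCP + data
    protected = protect(datagram,
                        zone_ends=[20, 40],                     # two type I zones
                        keys=[b"zone1-key", b"zone2-key", b"type2-key"])
    print(len(datagram), "->", len(protected), "bytes")
```

In the actual protocol, an intermediate node would be given only the keys for the type I zones it needs, while the type II zone keys stay with the endpoints.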
