61 |
An investigation into MAC layer frame clustering for wireless LAN intrusion detection. Zhou, Wenzhe. January 2006.
The proliferation of wireless networks has made security a major concern in the design and operation of these networks. The most popular wireless local area networks (WLANs) are those conforming to the IEEE 802.11 WLAN standards. However, research has shown that many vulnerabilities exist in the wireless MAC layer of these networks that provide opportunities for malicious hackers. Identification of attacks occurring inside WLANs is therefore critical to their future development. This thesis aims at developing a novel MAC frame clustering scheme to solve this problem. The approach is based on the observation that when active events occur in wireless networks, for example, scanning, joining, and attacking, the management traffic pattern in the MAC layer will be impacted. By analyzing these impacts, MAC layer attacks can be observed and recognized. The methodology involved in this research is machine learning, and a major contribution of the work is the classification of attack patterns through observation of management traffic clusters. The work first clusters the MAC management frames into groups which represent corresponding events. For each specific cluster, or event, there are unique patterns. Through recognizing the patterns of a cluster, attacking clusters can be classified into known categories. The thesis proposes the above theory and applies it to a MAC layer Intrusion Detection System (IDS) for 802.11 Wireless LANs. This is the first time that a MAC layer IDS has been based on this technique. The IDS consists of six functions: a Traffic Filtration Function (TFF), a Management Traffic Clustering Function (MTCF), an Information Filtration Function (IFF), a First Level Classification (FLC), a Cluster Information Management Function (CIMF) and a Second Level Classification (SLC). The TFF filters the MAC layer management frames, and certain information from the filtered frames is stored in the IFF. The MTCF then clusters the rest of the management frames. A novel clustering algorithm based on a sliding window approach is developed for the MTCF. A two-level classification structure is designed to recognize the cluster types. This two-level structure ensures that the IDS is able to detect attacks with unknown patterns and helps decrease system false alarms. The FLC decides whether a cluster represents an abnormal event based on the cluster content value (CCV) and beacon information. When there is an abnormal event, the SLC is then executed in order to determine the category of the attack according to known patterns. The work has analyzed a variety of 802.11 WLAN MAC layer active attacks and selected thirteen features for classifying the clusters. A Support Vector Machine is used as the classification approach. Data captured from a real network test bed were used to test the IDS, and the results show high detection accuracy. The work presented in this thesis is applied to 802.11 WLANs; however, the underlying principles can be applied to other wireless networks.
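As a rough illustration of the two-stage pipeline described above, the sketch below groups management frames into clusters with a sliding time window and classifies each cluster's feature vector with an SVM (via scikit-learn). The frame format, the three toy features, the gap threshold and the training data are all illustrative assumptions; the thesis's actual thirteen features and MTCF algorithm are not reproduced here.

```python
from sklearn import svm

def cluster_frames(frames, gap=1.0):
    """frames: (timestamp, subtype) tuples sorted by time. A frame joins
    the current cluster while it arrives within `gap` seconds of the
    cluster's latest frame (the sliding window); otherwise a new cluster,
    representing a new event, is started."""
    clusters, current = [], []
    for ts, subtype in frames:
        if current and ts - current[-1][0] > gap:
            clusters.append(current)
            current = []
        current.append((ts, subtype))
    if current:
        clusters.append(current)
    return clusters

def features(cluster):
    """Toy 3-feature vector: frame count, duration, probe-request ratio."""
    times = [ts for ts, _ in cluster]
    probes = sum(1 for _, st in cluster if st == "probe_req")
    return [len(cluster), times[-1] - times[0], probes / len(cluster)]

# Second-level classification: an SVM trained on labelled event clusters
# (hypothetical training vectors and labels).
X_train = [[3, 0.5, 0.3], [40, 0.2, 0.9], [200, 1.0, 0.0]]
y_train = ["join", "probe_flood", "deauth_flood"]
clf = svm.SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

frames = [(0.0, "probe_req"), (0.1, "probe_req"), (0.15, "probe_resp")]
frames += [(5.0 + i * 0.005, "deauth") for i in range(200)]  # a burst
for c in cluster_frames(frames):
    print(len(c), "frames ->", clf.predict([features(c)])[0])
```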
62 |
A cooperative system for enhancing mobile access. Rocha Sa e Moura, Jose Andre. January 2011.
This thesis is set in the context of a Next Generation Network environment formed by two or more distinct wireless access technologies covering a public area. These technologies are administered by different mobile operators, and an end-user terminal can connect to any access technology through the corresponding wireless interface of that multi-interface terminal. In this emergent network environment, congestion will occur very frequently due to a significant increase in the amount of data traffic crossing the network infrastructure. Consequently, the operators must adopt a deployment strategy to cope with high levels of data demand, using the available network resources, without compromising the users' connection quality. The work described in this thesis proposes a distributed brokerage service in the heterogeneous network infrastructure that provides a management solution using a cooperative strategy among the mobile operators and allowing the terminals to make well-informed decisions about their connections. In this way, a closed management loop between the brokerage service in the network and agents at the mobile terminals counteracts any abnormal traffic load. The brokerage service periodically evaluates each technology, combines both wireless and backhaul status into quality metrics, and disseminates these to the client terminals. Depending on the management policies of the brokerage service, the quality metrics and Service Level Agreements (SLAs), diverse classes of service can be supported across the distinct access technologies. The proposed distributed management algorithm proves to be technically robust and stable, performing very satisfactorily in diverse scenarios and supporting different management policies, network loads, user mobility patterns and levels of backhaul provisioning. In addition, the impact of deploying the brokerage service in a real scenario with a dynamic business model was studied. This study concludes that an operator who adopts a deployment strategy of enhancing its network infrastructure can only be rewarded for its initial investment when the brokerage service is in operation. In this way, the market operation becomes fairer.
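A minimal sketch of the closed management loop: the broker periodically folds wireless and backhaul status into a per-technology quality metric and disseminates it, and each terminal agent picks the best technology among its reachable interfaces. The weighting scheme and the 0-to-1 normalisation are assumptions, not the thesis's actual policy.

```python
def quality(wireless_load, backhaul_util, w_air=0.6, w_bh=0.4):
    """Higher is better; loads/utilisations are normalised to [0, 1]."""
    return w_air * (1.0 - wireless_load) + w_bh * (1.0 - backhaul_util)

# Broker side: periodic evaluation of each access technology
# (hypothetical (wireless_load, backhaul_utilisation) readings).
status = {"wlan": (0.8, 0.3), "lte": (0.4, 0.7), "wimax": (0.5, 0.2)}
metrics = {tech: quality(air, bh) for tech, (air, bh) in status.items()}

# Terminal side: choose the best technology among reachable interfaces.
reachable = {"wlan", "wimax"}
best = max(reachable, key=lambda t: metrics[t])
print(metrics, "-> terminal connects via", best)
```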
63 |
Opportunistic data collection in people-centric sensor networks. Lodge, Tom. January 2012.
Delay Tolerant Networking (DTN) is an approach to networking that supports routing in the absence of a contemporaneous end-to-end path between a source and its destination. The concept came out of research into Interplanetary Networks, which support communication between planets by anticipating and utilising the orbital alignment of links along a path. DTN has since been generalised to address other challenging environments including battlefield networks, third-world infrastructure provision and wildlife monitoring. This thesis considers the use of a new networking paradigm for data collection, a specialisation of Delay Tolerant Networking known as Pocket Switched Networks (PSN). Pocket Switched Networks use short-hop networking technologies, opportunistic interactions between human-carried devices and human mobility to forward data, hop by hop, to its destination. Studies have shown that, in theory, PSN can provide cheap, infrastructure-free, best-effort, high-latency support for applications. However, there are several problems faced by researchers when it comes to designing and evaluating PSN protocols. First, there is a lack of compelling mainstream PSN (and even DTN) applications, which makes it hard to evaluate PSN protocols within a realistic or widely accepted context. Second, it is hard to 'join up' theory with practice; for example, there are few examples of results from practical deployments being reapplied within a theoretical context. Third, there are many inter-related dependencies that come from both the reliance on opportunistic contacts (i.e. human behaviour) and the use of commodity devices. The literature rarely accounts for the impact of workload, architecture, protocol, pragmatic and technical issues upon end performance. This thesis tackles the challenge of studying Pocket Switched Networks in light of these problems. The thesis responds to the challenge by presenting, using and reflecting upon a structured approach to the design and evaluation of PSN protocols. The approach consists of an application bootstrapping phase, a trace-based simulation phase and a deployment stage, each of which iteratively informs the others. The thesis demonstrates the use of this structured approach in the design and evaluation of a novel anycast PSN protocol, HEATSINK, which utilises a metric of prior encounters with data 'sinks' in its routing decisions. When compared against optimal results in simulation, using Bluetooth traces collected over nine months from a hundred devices, it performs well and outperforms protocols that do not leverage the repeating patterns of human movement. More generally the results, in both simulation and deployment, show that the contact distributions from a sparse social graph are sufficient to support routing and that stable paths can be created using a metric that leverages the structure inherent in human mobility. Our results confirm that the use of intermediate nodes as carriers in sparse networks provides a performance improvement over direct-contact schemes. In order to support the simulation and deployment stages of the structured approach, we present a set of design principles that emerge from the practical constraints of supporting deployment and the need to reflect these constraints in increasingly faithful simulations. We describe the implementation of a set of tools and libraries that embody these principles and which are used to support the structured approach.
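A minimal sketch of the encounter-metric idea attributed to the protocol above: each node keeps a decaying count of prior encounters with data sinks and, on an opportunistic contact, forwards its bundles only to peers with a higher metric. The decay factor and the greedy hand-over rule are assumptions rather than the protocol's actual design.

```python
class Node:
    def __init__(self, name, decay=0.98):
        self.name, self.decay = name, decay
        self.metric = 0.0          # decayed count of sink encounters
        self.bundles = []

    def tick(self):                # periodic ageing of the metric
        self.metric *= self.decay

    def meet_sink(self):           # direct encounter with a data sink
        self.metric += 1.0
        self.bundles.clear()       # carried bundles are delivered here

    def meet_peer(self, peer):     # opportunistic contact: greedy hand-over
        if peer.metric > self.metric:
            peer.bundles.extend(self.bundles)
            self.bundles.clear()

a, b = Node("a"), Node("b")
a.bundles = ["sensor-reading-1"]
b.meet_sink(); b.meet_sink()       # b regularly encounters a sink
a.meet_peer(b)                     # a forwards to the better carrier
print(b.bundles)                   # -> ['sensor-reading-1']
```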
In the final part of the thesis we discuss how the technical and environmental interdependencies, as well as the design of the supporting middleware, impact upon results. With reference to our results we show how the evaluation of PSN protocol performance must consider more than the tradeoffs between delivery ratio, latency and overhead. We reflect upon our use of the structured approach and how effective it is in tackling the problems that we highlight. We also argue that the performance results presented in current studies of PSN protocols and PSN networks will often be over-optimistic given the constraints that emerge from deployment.
64 |
An aggregation-level framework for the virtual prototyping of ubiquitous computing and multimedia technologies. Martin, Nicholas John. January 2009.
The principle of Ubiquitous Computing has expanded the possibilities for new types of application. By combining multiple devices and services, users are presented with functionality in more natural and convenient ways. However, its composite nature places an increased strain on the development process. Furthermore, with ever-increasing hardware capabilities comes the expectation and potential for more complex functionality. Without the correct mechanism, developers will be unable to fulfil the requirement for usable, appropriate functionality.
65 |
Design and analysis of a new distributed IP router framework. Rodríguez Durón, Francisco A. January 2009.
Recently, we have seen routers' life expectancy dramatically shortened as a result of Internet Service Providers' (ISPs) and enterprise companies' equipment replacement patterns: new, high-performance routers are usually introduced in the core of the network, pushing existing routers towards the network edge, with routers already at the edge often being decommissioned before the end of their hardware life cycle. Furthermore, ISPs' customers' interconnection demands have become more difficult to meet, considering that they may change over time and that ISPs need to provide not only an interconnection link but a whole network solution. To solve these problems, this thesis explores alternative ways to utilise routers which can further extend their serviceable life cycle while providing a flexible platform that can help ISPs meet customers' interconnection demands. We investigate new methods that could allow us to decouple routers' logical routing and forwarding functionalities from the hardware that implements them. In contrast with the traditional and well-known techniques for achieving the latter, we employ an automatic configuration management approach that dynamically modifies the configurations of a set of routers, which can further exploit their usefulness by using them in conjunction. This approach relies on the flexibility that routers offer through their configuration interface, hence including most routers regardless of type and make. Our proposed approach is based on a router management architecture, namely RoMa, aimed at hosting multiple and various routers through a set of interconnected routers. This architecture considers hosting routers not only on a single hardware chassis but across multiple chassis. Demonstrating the feasibility of building a logical router using multiple hardware routers represents the core of this thesis. For this, we have designed a logical entity called the distributed IP router. We focus on two key issues for building this type of logical router. First, supporting the intra-domain routing function in these logical entities. Second, provisioning such logical entities with aggregated links. Our findings suggest that such logical routers are feasible and can provide new, highly desirable features in comparison with standard routers, as is the case with aggregated links.
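A minimal sketch of the configuration-management idea: a logical distributed IP router is realised by emitting coordinated configuration snippets for several physical chassis that share one routing process. The Cisco-style template and the function names are purely illustrative; RoMa's actual interfaces and commands are not reproduced here.

```python
def distributed_router_config(logical_name, chassis):
    """chassis: list of dicts with 'host' and 'customer_nets' keys.
    Returns one coordinated config snippet per physical chassis, so that
    together the boxes behave as a single intra-domain logical router."""
    configs = {}
    for i, box in enumerate(chassis):
        lines = [f"hostname {logical_name}-member{i}",
                 "router ospf 1"]                 # one shared routing process
        for net in box["customer_nets"]:
            lines.append(f" network {net} area 0")
        configs[box["host"]] = "\n".join(lines)
    return configs

cfg = distributed_router_config("lr1", [
    {"host": "edge-a", "customer_nets": ["10.0.1.0 0.0.0.255"]},
    {"host": "edge-b", "customer_nets": ["10.0.2.0 0.0.0.255"]},
])
for host, text in cfg.items():
    print(f"--- push to {host} ---\n{text}")
```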
66 |
Low complexity capacity-approaching codes for data transmission. Nelson, Christopher J. January 2010.
This thesis analyzes the design of low complexity capacity-approaching codes suitable for data transmission. The research documented in this thesis describes new and novel design methods for three well-known error control coding techniques, Turbo codes, LDPC block codes and LDPC convolutional codes, which are suitable for implementation in a number of modern digital communication systems. Firstly, we present Partial Unit Memory (PUM) based Turbo codes, a variant of Turbo codes which encompasses the advantages of both block and convolutional codes. The design methods of PUM Turbo codes are presented, and Bit Error Rate (BER) simulations and Extrinsic Information Transfer (EXIT) chart analysis illustrate their performance. Partial Unit Memory codes are a class of low complexity, non-binary convolutional codes and have been shown to outperform equivalent convolutional codes. We present the EXIT charts of parallel concatenated PUM codes and PUM Woven Turbo Codes and analyse them to assess their performance compared with standard Turbo code designs. The resulting Extrinsic Information Transfer charts indicate that the proposed PUM-based codes have higher mutual information during iterative decoding than the equivalent Recursive Systematic Convolutional Turbo codes (RSC-TC) for the same Eb/No, i.e. the output of the decoders provides a better approximation of the decoded bits. The EXIT chart analysis is supported by BER plots, which confirm the behaviour predicted by the EXIT charts. We show that the concatenated PUM codes outperform the well-known Turbo codes in the waterfall region, with comparable performance in the error-floor region. In the second section we present Low Density Generator Matrix (LDGM) codes, a variant of LDPC codes with low complexity encoding and decoding techniques. We present results of three construction methods and describe how LDGM codes can be modified to improve the error-floor region. We describe the design of random, structured and semi-random, semi-structured codes and how, by replacing the identity matrix with a staircase matrix, LDGM codes can show significant improvements in the error-floor region. Furthermore, we analyse the performance of serially concatenated LDGM codes and how they can benefit when the modified LDGM codes are used in either the outer code or the inner code. The results indicate that concatenated LDGM codes incorporating LDGM staircase codes in the inner code show improvements in error-floor performance while maintaining near-capacity performance, whereas LDGM staircase codes used as the outer codes show no significant improvements in the waterfall or error-floor regions compared to a concatenated scheme that employs an LDGM identity outer code. Finally, we propose a new design of LDPC convolutional code, which we term time-invariant Low Density Parity Check Unit Memory (LDPC-UM) codes. The performance of LDPC block and LDPC-UM codes is compared; in each case, the LDPC-UM codes' performance is at least as good as that of the LDPC block codes from which they are derived. LDPC-UM codes are the convolutional counterparts of LDPC block codes. Here, we describe techniques for the design of low complexity time-invariant LDPC-UM codes by unwrapping the Tanner graph of algebraically constructed quasi-cyclic LDPC codes.
The Tanner graph is then used to describe a pipelined, message-passing iterative decoder for LDPC-UM codes and standard LDPC convolutional codes that outputs decoding results continuously.
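A small sketch of the LDGM encoding variants discussed above, assuming a parity-check matrix of the form H = [P | I] (identity variant) or H = [P | S] with S a staircase (dual-diagonal) matrix; the staircase turns encoding into a running accumulation of parities, which is what improves the error floor. The toy sparse matrix P and its density are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 8, 4                                  # message bits, parity bits
P = (rng.random((m, k)) < 0.3).astype(int)   # sparse parity-forming part

def encode_identity(u):
    """H = [P | I]: each parity checks a few message bits directly,
    so p = P u (mod 2) and encoding is low complexity."""
    return np.concatenate([u, P @ u % 2])

def encode_staircase(u):
    """H = [P | S], S dual-diagonal: p_i = p_{i-1} + (P u)_i (mod 2),
    i.e. the parities are accumulated through a running XOR."""
    s = P @ u % 2
    p = np.cumsum(s) % 2
    return np.concatenate([u, p])

u = rng.integers(0, 2, k)
print(encode_identity(u), encode_staircase(u))
```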
67 |
On the security of key exchange protocols. Williams, Stephen C. January 2011.
This thesis is primarily concerned with the security of key exchange protocols. Specifically, we consider composability properties for such protocols within the traditional game-based framework. Our composition results are distinguished from virtually all existing work in that we rely neither directly nor indirectly on the simulation paradigm. In addition we provide a formal analysis of the widely deployed SSH protocol's key exchange mechanism. As a first step, we show composability properties for key exchange protocols secure in the prevalent model of Bellare and Rogaway. Roughly speaking, we show these may be composed with arbitrary two-party protocols that require symmetrically distributed keys. Here, we use session identifiers derived by the protocol to define notions of partner sessions. This leads to an interesting technical requirement, namely, that it should be possible to determine which sessions are partnered given only the publicly available information. Next, we propose a new security definition for key exchange protocols. The definition offers two important benefits. It is weaker than the more established ones and thus allows for the analysis of a larger class of protocols. Furthermore, security in the sense that we define enjoys rather general composability properties. In essence, we show that a key exchange protocol can be securely composed with some other protocol, provided two main requirements hold. First, the security of the protocol can be reduced to that of some primitive, no matter how the keys for the primitive are distributed. Secondly, no adversary can break the primitive when keys for the primitive are obtained from executions of the key exchange protocol. Proving that the two conditions are satisfied, and then applying our generic theorem, should be simpler than performing a monolithic analysis of the composed protocol. Finally, we provide a security analysis of the key exchange stage of the SSH protocol. Our proof is modular, and exploits the design of SSH. First, a shared secret key is obtained via a Diffie-Hellman key exchange. Next, a transform is applied to obtain the application keys used by later stages of SSH. We define models, following well-established paradigms, that clarify the security provided by each type of key. We show that although the shared secret key exchanged by SSH is not indistinguishable, the transformation then applied yields indistinguishable application keys.
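For concreteness, the SSH transform referred to above can be sketched as follows (per RFC 4253 section 7.2, assuming a key exchange method whose hash is SHA-256): each application key is the exchange hash function applied to the shared secret, the exchange hash, a distinct letter per key type, and the session identifier. Key extension for longer keys is omitted, and the toy values are placeholders.

```python
import hashlib

def derive_key(K_mpint, H, letter, session_id, length):
    """key = HASH(K || H || letter || session_id), truncated to `length`."""
    digest = hashlib.sha256(K_mpint + H + letter + session_id).digest()
    return digest[:length]

K = b"\x00\x00\x00\x01\x2a"                      # toy shared secret (mpint-encoded)
H = hashlib.sha256(b"exchange data").digest()    # toy exchange hash
session_id = H                                   # first exchange hash is the session id
iv_c2s  = derive_key(K, H, b"A", session_id, 16)   # client-to-server IV
key_c2s = derive_key(K, H, b"C", session_id, 16)   # client-to-server cipher key
print(iv_c2s.hex(), key_c2s.hex())
```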
68 |
Adaptive broadcast schemes in mobile ad hoc networks. Liarokapis, Dimitrios. January 2013.
The broadcast operation is perhaps one of the most fundamental services, utilized frequently by other communication mechanisms in MANETs. It is the key element for exchanging control packets to support services such as those provided by management and routing protocols. The dynamic nature of such network topologies and the limited resources available introduce a wide range of challenges when trying to design and implement a broadcast scheme that functions adequately in MANETs. Simple Flooding (FL) is a basic approach to broadcasting without global information, in which a broadcast packet is forwarded exactly once by every node in the network. In FL, the broadcast packet is guaranteed to be received by every node in the network, given that there is no packet loss caused by collision and no high-speed movement of nodes during the broadcast process. However, due to the broadcast nature of this environment, redundant transmissions in FL may cause the broadcast storm problem, in which redundant packets cause contention and collisions. Over the past years many studies have been conducted to develop broadcast mechanisms that alleviate the effects of FL. The focus of the early works was on schemes where the mobile nodes make the rebroadcast decision based on fixed, preconfigured thresholds. The most common thresholds relate to the distance between sender and receiver (Distance Based scheme - DB), the number of duplicate packets received (Counter Based scheme - CB), and a fixed probability for rebroadcast (Probability Based scheme - PB); these three decision rules are sketched below. Despite the fact that these schemes have been shown to considerably improve the overall performance of the network, they have been found to depend highly on the combination of threshold selected, traffic load and node population. The degree of dependency is such that in certain network topologies FL performs better than these schemes.
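A minimal sketch of the three fixed-threshold rebroadcast decisions; the threshold values here are illustrative, and the point of the thesis is precisely that no single fixed setting suits all traffic loads and topologies.

```python
import random

def probability_based(p=0.6):
    """PB: rebroadcast with a fixed probability p."""
    return random.random() < p

def counter_based(duplicates_heard, c_threshold=3):
    """CB: rebroadcast only if fewer than c_threshold duplicate copies
    were heard during the random assessment delay."""
    return duplicates_heard < c_threshold

def distance_based(sender_distance, d_threshold=40.0):
    """DB: rebroadcast only if the sender is farther than d_threshold,
    since a nearby sender's transmission adds little extra coverage."""
    return sender_distance > d_threshold

print(probability_based(), counter_based(2), distance_based(55.0))
```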
69 |
A middleware architecture for pervasive applications development. Ibrahim, Abdelgadir Elgailey. January 2008.
A pervasive environment is a networked environment characterised by device or user mobility, limited device resources, and intermittent connectivity. There is a lack of suitable infrastructures and system architectures that facilitate pervasive application development. The shared space is a popular communication model in pervasive environments, but there is no general understanding of the properties a shared space implementation should exhibit in order to facilitate P2P interactions between mobile nodes in pervasive environments.
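A minimal sketch of the shared-space model mentioned above: peers communicate by writing tuples into a shared space and reading or removing them by pattern, which decouples mobile nodes in time and space. This is a generic tuple-space toy, not the thesis's middleware API.

```python
class SharedSpace:
    def __init__(self):
        self.tuples = []

    def write(self, tup):           # publish a tuple into the space
        self.tuples.append(tup)

    def read(self, pattern):        # non-destructive; None matches anything
        return next((t for t in self.tuples if self._match(t, pattern)), None)

    def take(self, pattern):        # destructive read
        t = self.read(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

    @staticmethod
    def _match(tup, pattern):
        return len(tup) == len(pattern) and all(
            p is None or p == v for p, v in zip(pattern, tup))

space = SharedSpace()
space.write(("temperature", "room-3", 21.5))
print(space.take(("temperature", "room-3", None)))
```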
70 |
Differential evolution algorithms for network optimization. Farah, Abdulkadir. January 2013.
Many real-world optimization problems are difficult to solve; when tackling such problems it is therefore wise to employ efficient optimization algorithms which are capable of handling the problem complexities and of finding optimal or near-optimal solutions within a reasonable time and without using excessive computational resources. The objective of this research is to develop Differential Evolution (DE) algorithms with improved performance, capable of solving difficult and challenging constrained and unconstrained global optimization problems, and to extend the application of these algorithms to real-world optimization problems, particularly wireless broadband network placement and deployment problems. The adaptation of DE control parameters has also been investigated, and a novel method using Mann iteration and tournament scoring is proposed to improve the performance of the algorithm. A novel constraint handling technique called the neighborhood constraint handling (NCR) method has also been proposed. A set of experiments is conducted to comprehensively test the performance of the proposed DE algorithms for global optimization. Numerical results for well-known global optimization test problems demonstrate the performance of the proposed methods. In addition, a novel wireless network test point (TP) reduction algorithm (TPR) is presented. The TPR algorithm and the proposed DE algorithms have been applied to solving the optimal network placement problem. In order to exploit the value of flexibility, a novel value optimization problem formulation is presented, integrating the state-of-the-art approaches of cash flow (CF) analysis and real option analysis (ROA) for network deployment and utilizing the proposed DE algorithms to obtain the optimal roll-out sequence that maximizes the value of the wireless network deployment. Numerical experimentation, based on a case study of optimal network placement and deployment for a wireless broadband access network, has been conducted to confirm the efficiency of these algorithms.
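A minimal sketch of canonical DE/rand/1/bin, the baseline the proposed variants build on; the Mann-iteration parameter adaptation and the NCR constraint handling are not reproduced here, F and CR are fixed, and a sphere function stands in for a network placement objective.

```python
import random

def de(fitness, dim, bounds, pop_size=20, F=0.5, CR=0.9, gens=200):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: three distinct individuals other than the target
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)   # guarantees one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if random.random() < CR or j == j_rand
                     else pop[i][j]          # binomial crossover
                     for j in range(dim)]
            trial = [min(max(x, lo), hi) for x in trial]   # clamp to bounds
            if fitness(trial) <= fitness(pop[i]):          # greedy selection
                pop[i] = trial
    return min(pop, key=fitness)

best = de(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))
print([round(v, 3) for v in best])
```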